« on: September 19, 2012, 05:53:01 pm »
What I'm trying to understand is this: say I want to define a show as seq1, seq2, and seq3, which correspond to the audio files 1.mp3, 2.mp3, and 3.mp3.
I'm writing a converter application to take my sequences and put them into conductor format. The output .seq file will contain the data for seq1, then some silence (the assumption seems to be 4 seconds), then seq2, more silence, and finally seq3. When the schedule is made, I assume outputFile.seq will be matched up with 1.mp3, 2.mp3, and 3.mp3, which is fine.
Is the 4 seconds of silence hardcoded into the conductor, or is there some way for the conductor to know when it should start playing the next mp3? And if it's hardcoded, does this still work okay as the number of songs in a show gets large (i.e., do we begin to see sync issues between the music and the lights once a show reaches 20, 30, or more songs)?
I could sort this out by trial and error, but I didn't know if anyone had more insight into the specifics.
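To make the drift worry concrete, here's a quick back-of-the-envelope sketch. Everything in it is assumed for illustration, not taken from the conductor: the 4-second gap, the 3:00 song lengths, and the 50 ms per-song mismatch between each mp3 and its sequence data.

```python
# Hypothetical sketch: where each sequence starts in the .seq file if the
# inter-song gap is hardcoded, and how a small per-song length mismatch
# accumulates. All numbers below are assumptions, not conductor behavior.

GAP = 4.0  # assumed hardcoded silence between sequences, in seconds

def start_times(song_lengths, gap=GAP):
    """Return the offset (seconds) at which each item begins."""
    times = []
    t = 0.0
    for length in song_lengths:
        times.append(t)
        t += length + gap
    return times

# Suppose each mp3 actually runs 50 ms longer than its sequence data.
seq_lengths = [180.0] * 30    # sequence data, 3:00 per song (assumed)
mp3_lengths = [180.05] * 30   # mp3s run 50 ms long (assumed)

seq_starts = start_times(seq_lengths)
mp3_starts = start_times(mp3_lengths)

# By song 30 the accumulated error is 29 * 0.05 s:
drift = mp3_starts[-1] - seq_starts[-1]
print(f"drift at song 30: {drift:.2f} s")  # → drift at song 30: 1.45 s
```

If the conductor re-syncs at each mp3 boundary, per-song error never accumulates; if it just plays one long timeline against a fixed gap, even a small mismatch compounds, which is exactly the 20-to-30-song scenario I'm asking about.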