Using Timecode in Live Music - The Production Academy
Normally we don't think about timecode too much in live sound; in live music it's mostly used to synchronize lights and video. When implemented properly, it’s really effective at getting these production elements to operate consistently in time with the music, and it’s an essential part of many large-scale productions.
But even though it’s not very common in live music, it’s also possible to have timecode trigger events on our audio consoles. Usually this would be used to recall settings in a scene/snapshot. On pro consoles this could be configured to do almost anything - change gains, move faders, unmute channels, change EQ, insert a plug-in, or adjust whatever parameters that console allows.
So, what is timecode? Standard SMPTE timecode is a string of four numbers that represents a specific frame in video or audio. It’s displayed as 00:00:00:00 (hours:minutes:seconds:frames). As time progresses, the numbers advance frame by frame, like a clock.
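To make the frame math concrete, here’s a minimal sketch in Python that converts between a timecode string and an absolute frame count. The 30 fps rate is just an assumption for the example; drop-frame timecode (29.97 fps) needs extra correction that this doesn’t handle.

    FPS = 30  # assumed frame rate for this example

    def timecode_to_frames(tc, fps=FPS):
        # "HH:MM:SS:FF" -> absolute frame count from zero
        hours, minutes, seconds, frames = (int(x) for x in tc.split(":"))
        return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

    def frames_to_timecode(total, fps=FPS):
        # absolute frame count -> "HH:MM:SS:FF"
        frames = total % fps
        seconds = (total // fps) % 60
        minutes = (total // (fps * 60)) % 60
        hours = total // (fps * 3600)
        return "%02d:%02d:%02d:%02d" % (hours, minutes, seconds, frames)

    print(timecode_to_frames("00:03:15:12"))  # 5862
    print(frames_to_timecode(5862))           # 00:03:15:12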
In music (and video/TV) production we use linear timecode (LTC), which is SMPTE timecode that’s transmitted as an audio signal. We can record this signal alongside the music in our playback system so that each song has unique timecode. That way any given frame number corresponds to one very specific point in a song.
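One common way to keep every song’s timecode unique - an assumption on my part here, not something stated above - is to start each song on its own hour of the timecode clock, so no two songs ever share a frame number. A tiny sketch, with a made-up set list:

    FPS = 30  # assumed non-drop frame rate

    # Hypothetical set list: each song gets its own hour of timecode
    SONG_START_HOURS = {"Opener": 1, "Ballad": 2, "Closer": 3}

    def song_position_to_timecode(song, seconds_into_song):
        total_frames = int(round(seconds_into_song * FPS))
        frames = total_frames % FPS
        secs = (total_frames // FPS) % 60
        mins = total_frames // (FPS * 60)
        return "%02d:%02d:%02d:%02d" % (SONG_START_HOURS[song], mins, secs, frames)

    print(song_position_to_timecode("Ballad", 95.5))  # 02:01:35:15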
In multitrack playback setups, LTC is recorded on a separate track in the playback software (Pro Tools, Ableton Live, Digital Performer, etc.). It then gets its own output on the interface, which feeds the lighting or audio consoles (FYI - it’s standard to just use normal XLR cables for these connections). This lets the playback system trigger actions on the consoles, which keeps everything tightly in sync with the music.
Even though it's not something many audio engineers do, I've implemented timecode in some of my shows with some great results. I’ll set the console to recall a snapshot at a specific point in the music, which triggers something like unmuting FX or boosting a fader for a guitar solo. Of course, this only works when I'm developing a show file on a specific console for a band that I'm on tour with, mixing the same show every night.
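Conceptually, the console is just comparing incoming timecode against a list of event times and recalling the matching snapshot. Here’s a rough sketch of that logic; the cue times, labels, and the recall_snapshot() stub are invented for illustration and don’t correspond to any real console’s API:

    # Hypothetical cue list: timecode position -> snapshot to recall
    cues = [
        ("00:01:12:00", "Verse 2 - unmute vocal delay"),
        ("00:02:45:15", "Guitar solo - push solo fader"),
        ("00:03:30:00", "Outro - restore base mix"),
    ]

    fired = set()

    def recall_snapshot(name):
        print("Recalling snapshot:", name)  # stand-in for the real console action

    def on_timecode(current_tc):
        # Zero-padded "HH:MM:SS:FF" strings compare correctly as text,
        # so a cue fires once its time has been reached or passed.
        for tc, name in cues:
            if tc <= current_tc and tc not in fired:
                fired.add(tc)
                recall_snapshot(name)

    on_timecode("00:02:45:20")  # fires the first two cues

Firing on “reached or passed” rather than exact equality means a momentary dropout in the timecode feed doesn’t skip a cue.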
Usually the first reaction I get from other engineers when I tell them that I do this is: “That's crazy. I wouldn't want my faders moving on their own!” And I totally get that. I mean, really good engineers are comfortable walking up to any console and getting a solid mix going for almost any band. They don’t want anything happening that’s out of their direct control.
But when I have 60+ inputs, with a bunch of things changing during the song, taking advantage of the technology and programming the console ahead of time lets me not worry about some of the minor fader moves. That way I can keep my attention on the big picture and concentrate on the overall sound of the show.