Audio signal flow
'''Audio signal flow''' is the path an [[sound|audio]] signal takes from source to output, including all the processing involved in generating audible sound from electronic impulses or recorded media.<ref>{{cite book | title = Pro Tools 6 for Macintosh and Windows | author= Steven Roback | edition = 2nd | publisher = Peachpit Press | year = 2004 | isbn = 978-0-321-21315-0 | page = 303 | url = http://books.google.com/books?id=6kcD7mPdaXwC&pg=PT319&dq=%22audio+signal+flow%22&lr=&num=20&as_brr=3&ei=Fe47S-TAFYrSkwTjy5DMAQ&cd=2#v=onepage&q=%22audio%20signal%20flow%22&f=false }}</ref>
 
== Analog recording ==
An [[Mixing console|analog console]], also known as a mixing board, is a device for routing the multitude of audio signals present in a recording into various outputs. These boards allow the audio signal to be controlled, split, filtered and otherwise adjusted internally and by other devices in the electrical environment. Analog mixers are usually the central piece of equipment in a [[recording studio]] or live sound venue. Recording artists using analog consoles recorded to tape decks. Two factors that shaped the character and fidelity of the recorded audio were the width of the tape and the speed at which it was played back.<ref>http://arts.ucsc.edu/ems/music/equipment/analog_recorders/Analog_Recorders.html#basics</ref>
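One way to see why tape speed matters is the recorded wavelength: the length of tape that one cycle of a tone occupies is the tape speed divided by the tone's frequency, and longer wavelengths are easier for the playback head to resolve. A rough Python sketch (the tape speeds and the 15 kHz test tone are illustrative, not from the source above):

<syntaxhighlight lang="python">
# Recorded wavelength on tape: wavelength = tape_speed / frequency.
# Longer wavelengths are easier to reproduce, which is why higher
# tape speeds improve high-frequency response.

def recorded_wavelength_inches(tape_speed_ips: float, frequency_hz: float) -> float:
    """Length of tape (in inches) that one cycle of a tone occupies
    when the tape moves at tape_speed_ips inches per second."""
    return tape_speed_ips / frequency_hz

for speed in (7.5, 15.0, 30.0):  # common professional tape speeds, inches/second
    wl = recorded_wavelength_inches(speed, 15_000)  # a 15 kHz tone
    print(f"{speed:>4} ips -> {wl * 1000:.1f} thousandths of an inch per cycle")
</syntaxhighlight>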
 
== Digital recording ==
Digital audio recording is a more recent innovation in the [[music industry]] that has greatly expanded the ability to manipulate audio after it is recorded. In [[digital recording]], the audio signal is converted into digital information that a computer can process. Computers running digital audio workstations (DAWs) are then used to edit the digitized audio and play it back as audible sound.<ref name="Alten, Stanley R 2008">Alten, Stanley R. ''Audio in Media'', 8th Edition. Wadsworth CENGAGE Learning, 2008.</ref>
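The amount of digital information involved can be estimated from the sampling parameters. A back-of-the-envelope Python sketch, assuming CD-quality settings (44.1 kHz sample rate, 16-bit samples, two channels; these values are illustrative):

<syntaxhighlight lang="python">
# Uncompressed PCM data rate: sample_rate * bit_depth * channels.
# The values below assume CD-quality audio (44.1 kHz, 16-bit, stereo).

sample_rate = 44_100   # samples per second
bit_depth   = 16       # bits per sample
channels    = 2        # stereo

bits_per_second  = sample_rate * bit_depth * channels
bytes_per_minute = bits_per_second / 8 * 60

print(f"{bits_per_second:,} bits/s")                   # 1,411,200 bits/s
print(f"{bytes_per_minute / 1e6:.1f} MB per minute")   # about 10.6 MB/min
</syntaxhighlight>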
 
== Signal flow example ==
The exact series of elements in a signal flow will vary from system to system. The following example depicts a typical signal flow for recording a vocalist in a recording studio.

The [[Signal chain (signal processing chain)|signal flow chain]] begins with a [[microphone]] line, which carries the sound directly to the mixing board. Microphones work as transducers, converting acoustic sound into an [[electrical current]].<ref>http://www.leeds.ac.uk/music/studio/teaching/audio/Mics/mics.htm</ref> Speakers are also transducers, as they convert the electrical signal back into audible sound. Microphone lines apply no effects to the audio; they provide the most basic, clean sound.
 
The first element in the signal flow is the vocalist, who produces the signal. This signal propagates acoustically to the microphone according to the [[inverse-square law]], where it is converted by the transducer into an electrical signal. Other objects in the acoustical environment may also produce sound, such as HVAC systems, computer fans, traffic, elevators and plumbing. These noise sources are also picked up by the microphone, so it is important to optimize the acoustical signal-to-noise ratio at the microphone. This can be accomplished by reducing the amplitude of unwanted noise (for example, turning off the HVAC system while recording), or by taking advantage of the inverse-square law: moving the microphone closer to the signal source and farther from any noise sources increases the signal-to-noise ratio, as illustrated below.
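A rough Python sketch of the inverse-square arithmetic (the distances chosen for the vocalist and the noise source are purely illustrative):

<syntaxhighlight lang="python">
import math

def level_change_db(d_ref: float, d_new: float) -> float:
    """Change in sound pressure level when a microphone moves from
    d_ref to d_new metres from a point source (inverse-square law)."""
    return 20 * math.log10(d_ref / d_new)

# Vocalist at 0.6 m, a noise source (e.g. an HVAC vent) at 3 m.
# Halving the distance to the vocalist gains about +6 dB of signal,
# while the noise level barely changes, improving the S/N ratio.
print(f"signal gain: {level_change_db(0.6, 0.3):+.1f} dB")   # +6.0 dB
print(f"noise gain:  {level_change_db(3.0, 2.7):+.1f} dB")   # +0.9 dB
</syntaxhighlight>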
 
After the microphone, the signal passes down a cable to the microphone preamplifier, which amplifies the microphone signal to line level. This is important because a line-level signal is necessary to drive the input circuitry of any further processing equipment down the chain, which will generally not be able to accept the extremely low-voltage signal produced by a typical microphone.
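The amount of amplification involved can be expressed in decibels. A minimal Python sketch, assuming an illustrative 2 mV microphone output and the +4 dBu (about 1.23 V RMS) professional line level:

<syntaxhighlight lang="python">
import math

def gain_db(v_out: float, v_in: float) -> float:
    """Voltage gain in decibels."""
    return 20 * math.log10(v_out / v_in)

mic_level_v  = 0.002  # ~2 mV, a typical dynamic microphone output (illustrative)
line_level_v = 1.228  # +4 dBu professional line level, in volts RMS

print(f"preamp gain needed: {gain_db(line_level_v, mic_level_v):.0f} dB")  # ~56 dB
</syntaxhighlight>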
Each channel on the console also has basic controls. An on/off switch gives the engineer the option to activate or bypass a given function. The [[Fader (audio engineering)|fader]] controls only that track's volume, yet it is essential to the entire mix. Faders are calibrated in [[decibel]]s, and it is critical to understand a track's decibel level before recording the signal: an excessively loud signal can blow the speakers and seriously damage the recording equipment. The conversion between fader markings and linear amplitude is sketched below.
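A minimal sketch of the decibel arithmetic behind fader markings (the positions shown are generic, not tied to any particular console):

<syntaxhighlight lang="python">
def fader_gain(db: float) -> float:
    """Convert a fader position in dB to a linear amplitude multiplier."""
    return 10 ** (db / 20)

for db in (0, -6, -20, -60):
    print(f"{db:+4d} dB -> x{fader_gain(db):.3f}")
# 0 dB leaves the signal untouched, -6 dB roughly halves its
# amplitude, and -60 dB reduces it to a thousandth.
</syntaxhighlight>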
 
For the purposes of this example, the output of the microphone preamplifier is then sent to an EQ, where the timbre of the sound may be manipulated for artistic or technical purposes. Examples of artistic purposes include making the singer sound "brighter," "darker," "more forward," "less nasal," etc. Examples of technical purposes include reducing unwanted low-frequency rumble from HVAC systems, compensating for high-frequency loss caused by distant microphone placement, etc.
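As a sketch of the technical use, the rumble reduction mentioned above can be modelled as a high-pass filter. A minimal example using SciPy, with an assumed 48 kHz sample rate and an 80 Hz cutoff chosen only for illustration:

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

fs = 48_000  # sample rate in Hz (assumed)

# Second-order high-pass at 80 Hz: a common starting point for
# removing HVAC rumble from a vocal track without thinning the voice.
sos = signal.butter(2, 80, btype="highpass", fs=fs, output="sos")

# Apply it to one second of placeholder "vocal" audio.
vocal = np.random.randn(fs)          # stand-in for a recorded vocal
filtered = signal.sosfilt(sos, vocal)
</syntaxhighlight>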
The [[auxiliary send]] provides a place for [[Plug-in (computing)|plug-ins]] to be activated. Plug-ins allow the [[recording engineer]] to apply a special effect to the audio signal: many engineers use [[reverb]] or [[delay (audio effect)|delay]] to create a distinctive effect on a singer's voice, or apply heavy [[distortion]] to the lead guitarist's riff. The auxiliary send is another part of mixing that enhances the audio's character, but it is not required. It is also important for the audio engineer to consider the amount of processing power required by the host [[CPU]] to manipulate the audio. It is more efficient to run time-based plug-ins such as reverb and delay on an auxiliary send: used as an insert on the channel strip, the effect must process that channel's entire signal, whereas an auxiliary send routes only a split of the signal to the effect.

When listening to music, it is often possible to pick out where each instrument sits in the stereo field. The pan knob allows engineers to "place" the instruments and give the music a sense of space. Human hearing relies on [[binaural localization]] and can distinguish sounds arriving from the left and right.<ref name="Alten, Stanley R 2008"/> The sound engineer's main goal in using the pan knob is to create a sonic soundscape of instruments; this creates clarity and transparency within the mix, allowing each individual performer to be heard. The aim is to paint a picture for the listener and make the music more appealing. Engineers often pan instruments to match where they would be heard if the performance were live. A minimal constant-power panning sketch follows.
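One common approach is a constant-power pan law; the mapping of knob position to an angle below is one of several conventions, chosen for illustration:

<syntaxhighlight lang="python">
import math

def constant_power_pan(mono_sample: float, pan: float) -> tuple[float, float]:
    """Pan a mono sample into stereo. pan ranges from -1.0 (hard left)
    to +1.0 (hard right); total power stays constant across the arc."""
    angle = (pan + 1) * math.pi / 4      # map [-1, 1] onto [0, pi/2]
    left  = mono_sample * math.cos(angle)
    right = mono_sample * math.sin(angle)
    return left, right

print(constant_power_pan(1.0, 0.0))   # centre: ~0.707 in each channel
print(constant_power_pan(1.0, -1.0))  # hard left: (1.0, 0.0)
</syntaxhighlight>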
 
The output of the EQ will then be sent to a compressor, which is a device that manipulates the dynamic range of a signal for either artistic or technical reasons.
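A minimal sketch of a static compression curve in Python; real compressors also have attack and release behaviour, omitted here, and the threshold and ratio below are illustrative:

<syntaxhighlight lang="python">
def compress_db(level_db: float, threshold_db: float = -18.0,
                ratio: float = 4.0) -> float:
    """Static compressor curve: levels above the threshold are scaled
    down by the ratio; levels below it pass through unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A peak 12 dB over the threshold comes out only 3 dB over it,
# reducing the track's dynamic range.
print(compress_db(-6.0))   # -18 + 12/4 = -15.0 dB
print(compress_db(-30.0))  # below threshold: unchanged, -30.0 dB
</syntaxhighlight>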
After all these settings have been adjusted to preference, there are a few final steps. To ease the job of adjusting levels, sub-groups may be assigned: a group of microphone lines is linked together so that a single knob controls them all. Sub-group assignments are very helpful for an instrument like the drums. A drum set often has a microphone on nearly every drum, making it difficult to adjust each one individually. By assigning the drums to a group, the engineer can move the output volume of, say, six lines with one knob, as in the sketch below.
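A minimal sketch of the idea in Python (the channel names and levels are illustrative):

<syntaxhighlight lang="python">
# A drum subgroup: several channel signals are summed into one bus
# whose single gain control scales them all together.

def mix_subgroup(channels: dict[str, float], group_gain: float) -> float:
    """Sum per-channel sample values and apply one shared group gain."""
    return group_gain * sum(channels.values())

drums = {"kick": 0.40, "snare": 0.30, "hat": 0.10,
         "tom1": 0.08, "tom2": 0.08, "overhead": 0.20}

# Turning one knob (group_gain) adjusts the whole kit at once.
print(mix_subgroup(drums, group_gain=0.8))
</syntaxhighlight>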
 
The output of the compressor is then sent to a mix buss, where the signal will be combined ("mixed") with other signals, such as other singers or musical instruments.
 
The mixed signal is then sent to an analog-to-digital converter, which converts the signal to a digital format, allowing the signal to be sent to a digital recording device, such as a computer.
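A minimal sketch of the quantization step an analog-to-digital converter performs, here to 16-bit samples at 44.1 kHz using NumPy (the 440 Hz test tone is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def quantize_16bit(x: np.ndarray) -> np.ndarray:
    """Quantize samples in [-1.0, 1.0] to 16-bit signed integers,
    as a 16-bit analog-to-digital converter would."""
    x = np.clip(x, -1.0, 1.0)
    return np.round(x * 32767).astype(np.int16)

fs = 44_100                                   # samples per second
t = np.arange(fs) / fs                        # one second of sample times
analog = 0.5 * np.sin(2 * np.pi * 440 * t)    # a 440 Hz "analog" tone
digital = quantize_16bit(analog)              # what the recorder stores
print(digital[:5])
</syntaxhighlight>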
 
When all audio has passed through each of these steps, the master mix is in place.
 
==See also==
*[[Echo (phenomenon)]]
*[[Multi-path propagation]]
*[[Reverb]]