Master the 2-Channel Art First

Everyone is talking about multichannel sound. I have no doubt that well-engineered multichannel recordings will produce a more natural soundfield than we’ve been able to achieve in our 2-channel recordings, but it amazes me how few engineers really know how to take advantage of good ol’ fashioned 2-channel stereo. I’ve been making “naturalistic” 2-channel recordings for many years, and there are others working in the pop field who produce 2-channel (pop, jazz, even rock) recordings with beautiful depth and space. I’m rather disappointed in the sound of 2-channel recordings made by simple “pan-potted mono”, the typical sound of a rock mix. But it doesn’t have to sound that way, if you study the works of the masters.
I wonder if the recording engineers who are disappointed in 2-channel recording may simply be using the wrong techniques. Pan-potted mono techniques, coupled with artificial reverberation, tend to produce a vague, undefined image, and I can understand why many engineers complain about how difficult it is to get definition working in only two channels. They say that when they move to multichannel mixing (e.g., 5.1) they have a much easier time of it. Granted, though I suggest that they first study how to make a good 2-channel mixdown with depth, space, clarity, and definition. It’s possible if you know the tricks. Most of those tricks involve the use of the Haas effect, phase delays, more natural reverbs, and unmasking techniques. If engineers don’t study the art of creating good 2-channel recordings, then when we move to 5.1 we will ultimately end up with more humdrum mixes, more “pan-potted mono”, only with more speakers. This article describes techniques that will help you with both 2-channel and multichannel recordings. Furthermore, well-engineered 2-channel recordings contain encoded ambience information which can be extracted to multichannel, and it pays to learn about these techniques.
The Perception of Depth
At first thought, it may seem that depth in a recording is achieved by increasing the ratio of reverberant to direct sound. But it is a much more involved process. Our binaural hearing apparatus is largely responsible for the perception of depth. Yet recording engineers were concerned with achieving depth even in the days of monophonic sound. In the monophonic days, many halls for orchestral recording were deader than those of today. Why do monophonic recording and dead rooms seem to go well together? The answer involves two principles that work hand in hand: 1) the masking principle and 2) the Haas effect.
The Masking Principle
The masking principle says that a louder sound will tend to cover (mask) a softer sound, especially if the two sounds lie in the same frequency range. If these two sounds happen to be the direct sound from a musical instrument and the reverberation from that instrument, then the initial reverberation can appear to be covered by the direct sound. When the direct sound ceases, the reverberant hangover is finally perceived.
In concert halls, our two ears sense reverberation as coming diffusely from all around us, and the direct sound as having a distinct single location. Thus, in halls, the masking effect is somewhat reduced by the ears’ ability to sense direction.
In monophonic recording, the reverberation is reproduced from the same source speaker as the direct sound, and so we may perceive the room as deader than it really is, because of directional masking. Furthermore, if we choose a recording hall that is very live, then the reverberation will tend to intrude on our perception of the direct sound, since both will be reproduced from the same location–the single speaker. So there is a limit to how much reverberation can be used in mono.
This is one explanation for the incompatibility of many stereophonic recordings with monophonic reproduction. The larger amount of reverberation tolerable in stereo becomes less acceptable in mono due to directional masking. As we extend our recording techniques to 2-channel (and eventually multichannel) we can overcome directional masking by spreading reverberation spatially away from the direct source, achieving both a clear (intelligible) and warm recording at the same time.
The Haas Effect
The Haas effect can be used to overcome directional masking. Haas says that, in general, echoes occurring within approximately 40ms of the direct sound become fused with the direct sound. We say that the echo becomes “one” with the direct sound, and only a loudness enhancement occurs.
A very important corollary to the Haas effect says that fusion (and loudness enhancement) will occur even if the closely-timed echo comes from a different direction than the original source. However, the brain will continue to recognize (binaurally) the location of the original sound as the proper direction of the source. The Haas effect allows nearby echoes (up to approximately 40ms delay, typically 30ms) to enhance an original sound without confusing its directionality. We can take advantage of the Haas effect to naturally and effectively convert an existing 2-channel recording to a 4-channel or surround medium. When remixing, place a discrete delay in the surround speakers to enhance and extract the original ambience from a previously recorded source! No artificial reverberator is needed if there is sufficient reverberation in the original source. Here’s how it works:
Because of the Haas effect, the ear fuses the delayed sound with the original, and still perceives the direct sound as coming from the front speakers. But this does not apply to ambience: ambience will be spread, diffused between the location of the original sound and the delay (in the surround speakers). Thus, the Haas effect only fuses correlated material; uncorrelated material (such as natural reverberation) is extracted, enhanced, and spread directionally. Dolby Laboratories calls this effect “the magic surround,” for they discovered that natural reverberation was extracted to the rear speakers when a delay was applied to them. Dolby also uses an L minus R matrix to further enhance the separation. The wider the bandwidth of the surround system and the more diffuse its character, the more effective the psychoacoustic extraction of ambience to the surround speakers.
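The delay-to-surrounds idea can be sketched in a few lines of numpy. This is a minimal illustration, not Dolby’s actual matrix; the function name `surround_feed`, the 25ms delay, and the 0.7 gain are my assumptions, chosen to sit inside the ~40ms fusion window:

```python
import numpy as np

def surround_feed(left, right, sr=48000, delay_ms=25.0, gain=0.7):
    """Derive surround feeds by delaying the front channels.

    Per the Haas effect, correlated direct sound stays fused up front,
    while uncorrelated reverberation in the source spreads toward the
    delayed rear channels.
    """
    d = int(sr * delay_ms / 1000)                      # delay in samples
    ls = gain * np.concatenate([np.zeros(d), left])    # left surround feed
    rs = gain * np.concatenate([np.zeros(d), right])   # right surround feed
    return ls, rs
```

An L-minus-R matrix ahead of the delay, as Dolby uses, would further decorrelate the surround feed from the front channels.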
There’s more to Haas than this simple explanation. To become proficient in using Haas in mixing, study the original papers on the various fusion effects at different delay and amplitude ratios.
Haas’ Relationship to Natural Environments
We may say that the shorter echoes which occur in a natural environment (from nearby walls and floor) are correlated with the original sound, as they have a direct relationship to it. The longer reverberation is uncorrelated; it is what we call the ambience of a room. Most dead recording studios have little or no ambient field, and the deadest studios have only a few perceptible early reflections to support and enhance the original sound.
In a good stereo recording, the early correlated room reflections are captured with their correct placement; they support the original sound, help us locate the sound source as to distance, and do not interfere with left-right orientation. The later uncorrelated reflections, which we call reverberation, naturally contribute to the perception of distance, but because they are uncorrelated with the original source, the reverberation does not help us locate the original source in space. This fact explains why the multitrack mixing engineer discovers that adding artificial reverberation to a dry, single-miked instrument may deteriorate the sense of location of that instrument. If the recording engineer instead uses stereophonic miking techniques and a livelier room, capturing early reflections on two tracks of the multitrack, the remix engineer will need less artificial reverberation, and what little he adds can be done convincingly.
Using Frequency Response to Simulate Depth
Another contributor to the sense of distance in a natural acoustic environment is the absorption qualities of air. As the distance from a sound source increases, the apparent high frequency response is reduced. This provides another tool which the recording engineer can use to simulate distance, as our ears have been trained to associate distance with high-frequency rolloff. An interesting experiment is to alter a treble control while playing back a good orchestral recording. Notice how the apparent front-to-back depth of the orchestra changes considerably as you manipulate the high frequencies.
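The treble-control experiment can be approximated in code. A one-pole lowpass is a crude, hypothetical stand-in for air absorption; the function name and the cutoff are my assumptions, not a calibrated model of the atmosphere:

```python
import numpy as np

def distance_rolloff(x, sr=48000, cutoff_hz=4000.0):
    """Crudely simulate distance by rolling off highs with a one-pole lowpass.

    Recurrence: y[n] = y[n-1] + a * (x[n] - y[n-1]),
    with a from the standard one-pole coefficient design.
    """
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.empty(len(x))
    acc = 0.0
    for n, s in enumerate(x):
        acc += a * (s - acc)   # smooth toward the input: highs are attenuated
        y[n] = acc
    return y
```

Lowering `cutoff_hz` pushes the simulated source further away; raising it brings the source forward, just as the treble control does in the listening experiment.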
Recording Techniques to Achieve Front-to-Back Depth

Minimalist Techniques
Balancing the Orchestra
A musical group is shown in a hall cross section. Various microphone positions are indicated by letters A-F.
Microphones at position A are located very close to the front of the orchestra. As a result, the ratio of A’s distance from the back of the orchestra to its distance from the front is very large. Consequently, the front of the orchestra will be much louder in comparison to the rear, and front-to-back balance will be exaggerated. However, there is much to be said in favor of mike position A, since the conductor usually stands there, and he purposely places the softer instruments (strings) in the front and the louder (brass and percussion) in the back, somewhat compensating for the level discrepancy due to location. Also, the radiation characteristics of the horns of trumpets and trombones help them to overcome distance. These instruments frequently sound closer than other instruments located at the same physical distance because the focus of the horn increases the direct-to-reflected ratio. Notice that orchestral brass often seem much closer than the percussion, though they are placed at similar distances. You should take these factors into account when arranging an ensemble for recording. Clearly, we also perceive depth by the larger ratio of reflected to direct sound for the back instruments.
The further back we move in the hall, the smaller the ratio of back-to-front distance, and the front instruments have less advantage over the rear. At position B, the brass and percussion are only twice the distance from the mikes as the strings. According to theory, this makes the back of the orchestra 6 dB down compared to the front, but the difference is much less than 6 dB in a reverberant hall, because level changes less with distance in a reverberant field.
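The theoretical 6 dB figure follows from the free-field inverse-square law: the level drop is 20 times the base-10 logarithm of the distance ratio. A tiny sketch (the function name is mine):

```python
import math

def level_drop_db(distance_ratio):
    """Free-field (inverse-square) level drop, in dB, for a source at
    `distance_ratio` times the reference distance."""
    return 20.0 * math.log10(distance_ratio)
```

`level_drop_db(2.0)` gives about 6.02 dB, the textbook figure for a doubling of distance; in a real reverberant hall the measured drop will be less.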
For example, in position C, the microphones are beyond the critical distance–the point where direct and reverberant sound are equal. If the front of the orchestra seems too loud at B, position C will not solve the problem; it will have similar front-back balance but be more buried in reverberation.
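Critical distance can be estimated with the common statistical-acoustics approximation Dc ≈ 0.057 · √(Q·V/RT60), which assumes a diffuse reverberant field; the function name and the example numbers below are illustrative assumptions:

```python
import math

def critical_distance_m(volume_m3, rt60_s, Q=1.0):
    """Approximate critical distance (where direct level equals
    reverberant level): Dc ~ 0.057 * sqrt(Q * V / RT60).

    Q is the source directivity factor (1.0 for omnidirectional).
    """
    return 0.057 * math.sqrt(Q * volume_m3 / rt60_s)
```

For a hypothetical 10,000 m³ hall with a 2-second RT60, this gives a critical distance of roughly 4 m, which illustrates why orchestral mike positions slide past it so quickly as you move back.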
Using Microphone Height To Control Depth And Reverberation
Changing the microphone’s height allows us to alter the front-to-back perspective independently of reverberation. Position D has no front-to-back depth, since the mikes are directly over the center of the orchestra. Position E is the same distance from the orchestra as A, but being much higher, the relative back-to-front ratio is much less. At E we may find the ideal depth perspective and a good level balance between the front and rear instruments. If even less front-to-back depth is desired, then F may be the solution, although with more overall reverberation and at a greater distance. Or we can try a position higher than E, with less reverb than F.
Directivity of Musical Instruments
Frequently, the higher up we move, the more high frequencies we perceive, especially from the strings. This is because the high frequencies of many instruments (particularly violins and violas) radiate upward rather than forward. The high-frequency factor adds more complexity to the problem, since it has been noted that treble response affects the apparent distance of a source. Note that when the mike moves past the critical distance in the hall, we may not hear significant changes in high frequency response when height is changed.
The recording engineer should be aware of how all the above factors affect the depth picture so he can make an intelligent decision on the mike position to try next. The difference between a B+ recording and an A+ recording can be a matter of inches. Hopefully you will recognize the right position when you’ve found it.
Beyond Minimalist Recording
The engineer/producer often desires additional warmth, ambience, or distance after finding the mike position that achieves the perfect instrumental balance. In this case, moving the mikes back into the reverberant field cannot be the solution. Another call for increased ambience is when the hall is a bit dry. In either case, trucking the entire ensemble to another hall may be tempting, but is not always the most practical solution.
The minimalist approach is to change the microphone pattern(s) to something less directional (e.g., omni or figure-8). But this can get complex, as each pattern demands its own spacing and angle. Simplistically speaking, at a constant distance, changing the microphone pattern affects the direct-to-reverberant ratio.
Perhaps the easiest solution is to add ambience mikes. If you know the principles of acoustic phase cancellation, adding more mikes is theoretically a sin. However, acoustic phase cancellation does not occur when the extra mikes are placed purely in the reverberant field, for the reverberant field is uncorrelated with the direct sound. The problem, of course, is knowing when the mikes are deep enough in the reverberant field. Proper application of the 3-to-1 rule will minimize acoustic phase cancellation. So will careful listening. The ambience mikes should be back far enough in the hall, and the hall must be sufficiently reverberant, so that when these mikes are mixed into the program, no deterioration in the direct frequency response is heard, just an added warmth and increased reverberation. Sometimes halls are so dry that there is distinct, correlated sound even at the back, and ambience mikes would cause a comb filter effect.
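The 3-to-1 rule reduces to a one-line check, sketched here with hypothetical names. As noted elsewhere in this article, it is a general rule that ignores instrument levels and room acoustics, so it supplements careful listening rather than replacing it:

```python
def three_to_one_ok(source_to_mike, mike_to_mike):
    """True when the mike-to-mike spacing is at least three times the
    source-to-nearest-mike distance (distances in any one consistent unit)."""
    return mike_to_mike >= 3.0 * source_to_mike
```

For example, a mike one meter from its source wants every other mike feeding the same channel at least three meters away.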
Assuming the added ambience consists of uncorrelated reverberation, shouldn’t an artificial reverberation chamber theoretically accomplish similar results to those obtained with ambience microphones? The answer is a qualified yes, assuming the artificial reverberation chamber sounds very good and consonant with the sound of the original recording hall.
What happens to the depth and distance picture of the orchestra as the ambience is added? In general, the front-to-back depth of the orchestra remains the same or increases minimally, but the apparent overall distance increases as more reverberation is mixed in. The change in depth may not be linear for the whole orchestra since the instruments with more dominant high frequencies may seem to remain closer even with added reverberation.
The Influence of Hall Characteristics on Recorded Front-to-Back Depth in Live Halls
In general, the more reverberant the hall, the further back the rear of the orchestra will seem, given a fixed microphone distance. In one problem hall the reverberation is much greater in the upper bass frequency region, particularly around 150 to 300 Hz.
A string quartet usually places the cello in the back. Since that instrument is very rich in the upper bass region, in this problem hall the cello always sounds further away from the mikes than the second violin, which is located to its right. Strangely enough, a concertgoer in this hall does not notice the extra sonic distance because his strong visual sense locates the cello easily and does not allow him to notice an incongruity. When he closes his eyes, however, the astute listener notices that, yes, the cello sounds further back than it looks!
It is therefore rather difficult to get a proper depth picture with a pair of microphones in this problem hall. Depth seems to increase almost exponentially when low frequency instruments are placed only a few feet away. It is especially difficult to record a piano quintet in this hall because the low end of the piano excites the room and seems hard to locate spatially. The problem is aggravated when the piano is on half-stick, cutting down the high frequency definition of the instrument.
The miking solution I choose for this problem is a compromise: close mike the piano, and mix this in, panned to the same position as the piano’s virtual image arriving from the main mike pair. I can only add a small portion of this close mike before the apparent level of the piano rises above the balance a listener would hear in the hall. The close mike helps solidify the image and locate the piano. It gives the listener a little more direct sound on which to focus.
Very Dead Rooms
Can minimalist techniques work in a dead studio? Not very well. My observation is that simple miking has no advantage over multiple miking in a dead room. I once recorded a horn overdub in a dead room, with six tracks of close mikes and two for a more distant stereo pair. In this dead room there were no significant differences between the sound of this “minimalist” pair and the six mono close-up mikes! The close mikes were, of course, carefully equalized, leveled, and panned from left to right. This was a surprising discovery, and it points out the importance of good hall acoustics on a musical sound. In other words, when there are no significant early reflections, you might as well choose multiple miking, with its attendant post-production balance advantages.
Miking Techniques and the Depth Picture
The various simple miking techniques reveal depth to greater or lesser degree. Microphone patterns which have out of phase lobes (e.g., hypercardioid and figure-8) can produce an uncanny holographic quality when used in properly angled pairs. Even tightly-spaced (coincident) figure-8’s can give as much of a depth picture as spaced omnis. But coincident miking reduces time ambiguity between left and right channels, and sometimes we seek that very ambiguity. Thus, there is no single ideal minimalist technique for good depth, and you should become familiar with the relative effects on depth caused by changing mike spacing, patterns, and angles. For example, with any given mike pattern, the farther apart the microphones of a pair, the wider the stereo image of the ensemble. Instruments near the sides tend to pull more left or right. Center instruments tend to get wider and more diffuse in their image picture, harder to locate or focus spatially.
The technical reasons for this are tied to the Haas effect for delays under approximately 5ms vs. significantly longer delays. With very short delays between two spatially separated sources, the image location becomes ambiguous. A listener can experiment with this effect by mistuning the azimuth on an analog two-track machine and playing a mono tape over a well-focused stereo speaker system. When the azimuth is correct, the center image will be tight and defined. When the azimuth is mistuned, the center image will get wider and acoustically out of focus. Similar problems can (and do) occur with the mike-to-mike time delays always present in spaced-pair techniques.
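The azimuth experiment amounts to a short interchannel delay, which can be reproduced digitally. A hypothetical numpy sketch (the name `widen_center` and the 0.5ms default are illustrative); feeding both outputs to a stereo system blurs the center image the same way a mistuned azimuth does:

```python
import numpy as np

def widen_center(mono, sr=48000, delay_ms=0.5):
    """Build a stereo pair with a sub-5ms delay on one channel.

    Delays this short do not shift the image to a distinct location;
    they widen and defocus the center, per the Haas-region behavior
    described above.
    """
    d = int(sr * delay_ms / 1000)                 # delay in samples
    left = np.concatenate([mono, np.zeros(d)])    # undelayed channel, padded
    right = np.concatenate([np.zeros(d), mono])   # delayed channel
    return left, right
```

The same arithmetic explains spaced-pair ambiguity: a 34 cm path difference between two mikes is about 1 ms of interchannel delay, squarely in this blurring region.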
The Front-to-back Picture with Spaced Microphones
I have found that when spaced mike pairs are used, the depth picture also appears to increase, especially in the center. For example, the front line of a chorus will no longer seem straight. Instead, it appears to be on an arc bowing away from the listener in the middle. If soloists are placed at the left and right sides of this chorus instead of in the middle, a rather pleasant and workable artificial depth effect will occur. Therefore, do not rule out the use of spaced-pair techniques. Adding a third omnidirectional mike in the center of two other omnis can stabilize the center image, though it proportionally reduces center depth.
Multiple Miking Techniques
I have described how multiple close mikes destroy the depth picture; in general I stand behind that statement. But soloists do exist in orchestras, and for many reasons they are not always positioned in front of the group. When looking for a natural depth picture, try to move the soloists closer instead of adding additional mikes, which can cause acoustic phase cancellation. But when the soloist cannot be moved, plays too softly, or when hall acoustics make him sound too far back, then a close mike or mikes (known as spot mikes) must be added. When the close solo mikes are a properly placed stereo pair and the hall is not too dead, the depth image will seem more natural than one obtained with a single solo mike.
Apply the 3-to-1 rule. Also, listen closely for frequency response problems when the close mike is mixed in. As noted, the live hall is more forgiving. The close mike (not surprisingly) will appear to bring the solo instrument closer to the listener. If this practice is not overdone, the effect is not a problem, as long as musical balance is maintained and the close mike levels are not changed during the performance. We’ve all heard recordings made with this disconcerting practice. Trumpets on roller skates?
Delay Mixing
At first thought, adding a delay to the close mike seems attractive. While this delay will synchronize the close mike’s direct sound with the same instrument’s direct sound arriving at the front mikes, a single delay line cannot effectively simulate the multiple early room reflections surrounding the soloist. Those early reflections arrive at the distant mikes and contribute to direction and depth; they do not arrive at the close mike with significant amplitude compared to the direct sound entering it. Therefore, while delay mixing may help, it is not a panacea.
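The synchronizing delay itself is easy to compute: it is the extra path length to the main pair divided by the speed of sound (about 343 m/s at room temperature). A sketch with hypothetical names and example distances:

```python
def spot_mike_delay_ms(main_distance_m, spot_distance_m, c=343.0):
    """Delay (in ms) to add to a close (spot) mike so its direct sound
    lines up with the same instrument's arrival at the main pair.

    c is the speed of sound in m/s (~343 at 20 degrees C).
    """
    return 1000.0 * (main_distance_m - spot_distance_m) / c
```

A soloist 10 m from the main pair and 0.5 m from the spot mike calls for roughly 27.7 ms of delay; but as noted above, this one number cannot stand in for the cloud of early reflections the main pair also hears.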
Influence of the Control Room Environment on Perceived Depth
At this point, many engineers may say, “I’ve never noticed depth in my control room!” The widespread practice of placing near-field monitors on the meter bridges of consoles kills almost all sense of depth. Comb-filtering and sympathetic vibrations from nearby surfaces destroy the perception of delicate time and spatial cues. The recent advent of smaller virtual control surfaces has helped reduce the size of consoles, but seek advice from an expert acoustician if you want to appreciate and manipulate depth in your recordings. We should all do this before we expand to multi-channel, for we still have a lot to learn about taking advantage of the hidden depth in 2-channel recordings.
Musical Examples to Check Out
Check out the CD Honor Roll for examples of fantastic recordings. Standard multitrack music recording techniques make it difficult for engineers to achieve depth in their recordings. Mixdown tricks with reverb and delay may help, but good engineers realize that the best trick is no trick: record as much as you can by using stereo pairs in a live room. Here are some examples of audiophile records I’ve recorded for Chesky Records that purposely take advantage of depth and space, both foreground and background:

Sara K., Hobo, Chesky JD155. Check out the percussion on track 3, Brick House.

Johnny Frigo, Debut of a Legend, Chesky JD119. Check out the sound of the drums and the sax on track 9, I Love Paris.

Ana Caram, The Other Side of Jobim, Chesky JD73. Check out the percussion, cello, and sax on Correnteza.

Carlos Heredia, Gypsy Flamenco, Chesky WO126. Play it loud! And listen to track 1 for the sound of the background singers and handclaps.

Phil Woods, Astor and Elis, Chesky JD146, for the natural-sounding combination of intimacy and depth of the jazz ensemble.
Technological Impediments to Capturing Recorded Depth
Depth is the first thing to suffer when low-resolution technology is used. Here is a list of some of the technical practices that, when misused or accumulated, can contribute to a boringly flat, depthless recorded picture: multitrack and multimike techniques, small/dead recording studios, low-resolution recording media, amplitude compression, improper use of dithering, cumulative digital processing, and low-resolution digital processing (e.g., using single-precision as opposed to double- or higher-precision equalizers). When recording, mixing, and mastering, use the best miking techniques, room acoustics, and highest-resolution technology, and you’ll resurrect the missing depth in your recordings.
Thanks to:
My assistant, David Holzmann, for transcribing my original 1981 article, which I have herein revised and updated.
Lou Burroughs, whose 1974 book Microphones: Design and Application, now out of print, is still one of the prime references on this subject and covers the topic of acoustic phase cancellation. Burroughs invented the 3-to-1 rule, expressed simply: when a sound source is picked up by one microphone and is also “leaking” into another microphone mixed to the same channel, make sure the distance between the microphones is at least three times the distance of the first mike from the source. This general rule does not account for the intensities of all instruments or all room acoustics, so you should listen critically when your microphone distances decrease. And remember, this applies to mixing to one channel; non-coincident stereo microphone techniques can break the 3-to-1 rule, and if so, be sure to check the sound in mono for phase cancellation.
E. Roerback Madsen, whose article “Extraction of Ambiance Information from Ordinary Recordings” can be found in the October 1970 issue of the Journal of the Audio Engineering Society. It covers the Haas effect and its corollary.
Don Davis, who first defined “critical distance” and many other acoustic terms.