Comb Filtering : Please engineer / smarty-pants ... learn me sumtin

Discussion in 'General Audio Discussion' started by biscuithead, Nov 10, 2017.

  1. twiiii

    twiiii Addicted Member

    west Texas
Ok guys, how do you correct the comb filtering between your two ears? Stereo headphones? No, your brain has done that for you already as you reached childhood. It's like learning perfect pitch. Isn't it amazing all the things our sensory systems and our brain do for us without us being aware?

    Recording is a very simple process as long as each mic remains discrete. It's when the sound comes out of the speakers that weird things begin to happen. It's why there's a real fascination with full-range single-driver speaker systems.

    Remember, as much as we try making the process of bringing hi-fidelity into the home an exercise in science, for now it is an ART.
    Bill Ferris likes this.



  2. mfrench

    mfrench AK Subscriber Subscriber

    Over by Rainbow
    Absolutely so.

    This response can get really long and geeky. I’ve given the long detailed answer to this here:
    I’ve taken the time to repair the photobucket damage in this thread, ^. So, I link there, as the example I have here at AK is likely full of holes from PB.
    I go into deep detail there.
    NOTE: I've just now realized that a forum software "upgrade" wiped this thread out, ^, again, after the photobucket trashing repair effort. So, I now get to try to sort this out all over again, maybe.
    This thread, ^, has now been fixed, once again. It took a couple of hours of editing to fix it, but it's worth it, I hope.

    To my immediate reply to your question,..
    In two channel stereo recording we have a left mic and a right mic (speaking in generalities).
    The physics of sound, in regard to two-channel stereo, holds that anything those two mics hear as a common, shared impulse will get cancelled to center.
    This is a highly useful, positive thing, when you know how to work within that parameter.
    In that image above, the small center lobe is a bucket for phase cancelled bits, shared commonalities. Those bits, in combination give you a clear center image.
    It gets really complex, and you're rightfully confused. It took me 10 years of extremely critical listening to my own recorded efforts to really be able to discern tiny variations in offset axis angles, random phase-related issues, natural attenuation, and timing being either near-coincident, or coincident.
    What is shown above is a coincident array, meaning: the sound impulse arrives at the point of the array at the exact same time, and at the exact same amplitude - EXACT (key concept).
    Anything of any variation is filed left or right, depending on signal impulse direction. All shared commonalities go to center.
    This center image, given the ensemble above arranged in a U-shape, will give me the players at the rear of the U in the center, as a centered ghost image floating between your speakers. The only way that happens is shared commonalities dropping that image to center.
    The problem with coincident timing is that we are comb-filter-processing machines, and our brains make perfect sense of timing-related offsets.

    We are binaural blockheads. Our heads are broad, and our ears are offset from each other by somewhat less than 180º.

    That presents timing and amplitude differences that are the essence of comb filtering. But our brains sort that out just fine, unless you push beyond the parameters of the brain's processing of those offsets; then it just sounds weird, or, shit starts to move around weirdly.

    We have near-coincident arrays that try to mimic the human head in width, and an attempt to recreate the offset angles of the human ears.
    This is similar to what a near-coincident array looks like: \_____/
    The mics in this array are head-width apart (17cm to 20cm spacing), and offset 45º each, for what is called a 90º combined off-axis angle.
    The space between the mics replicates the head width, and the offset axis creates the amplitude difference, but does not affect frequency response.
    Near-coincident patterns introduce timing offsets that, for all intents and purposes, are what comb filtering also introduces. So it's a delicate balance around phase, phase cancellations, combing, and whether it is being a positive effect, or going completely to crap and being a raging negative.
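    To put numbers on that timing offset, here is a small sketch (my own illustration, not from the thread: 17 cm mic spacing, a source 30º off axis, and 343 m/s for the speed of sound) of the inter-mic delay in a near-coincident array, and the first comb notch you would get if the two channels were summed to mono:

```python
import math

C = 343.0  # speed of sound in m/s (roughly, at room temperature)

def intermic_delay(spacing_m, source_angle_deg):
    """Far-field path-length difference between two spaced mics,
    expressed as an arrival-time delay in seconds."""
    path_diff = spacing_m * math.sin(math.radians(source_angle_deg))
    return path_diff / C

def first_notch_hz(delay_s):
    """First cancellation frequency if the two delayed copies are summed."""
    return 1.0 / (2.0 * delay_s)

delay = intermic_delay(0.17, 30)        # 17 cm spacing, source 30 degrees off axis
print(f"delay: {delay * 1000:.3f} ms")  # ~0.248 ms
print(f"first notch if summed: {first_notch_hz(delay):.0f} Hz")  # ~2018 Hz
```

    In normal stereo playback the two channels stay separate, so that notch only appears if the channels get mixed; that is the delicate balance described above.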
    Last edited: Nov 11, 2017
  3. evsentry3

    evsentry3 Well-Known Member

    East Tennessee
    Comb filtering can and does happen with mics too. It's not exclusive to playback.

    Comb filtering, plain and simple, is the same sound coming from two different locations, so it arrives at any given point with slightly different arrival times, and when summed, cancellation and reinforcement occur. It's like sound bounce (and frequency-dependent cancellation) off your listening-room walls, on steroids.

    When it's speakers, the arrival-time difference from the slightly different distances to you is the problem. If it's set up just right, the lobe may be desirable, as with MTM speakers. Even with those, if you are at the wrong angle it becomes undesirable, as at some relative distance the arrival times create a null. That's created by the times being very close, but different enough to partly cancel.

    In recording, a great example that you may have heard is the typical TV news set. The weather guy gets done with the weather and starts the hand-off to the other news people, and you hear the sound go slightly "in the can" sounding. That's because the second (and maybe more) mics are opened, and now the weatherperson's voice is being picked up not only by his mic but by the other mics at different distances. Then when it's mixed at the audio board, the arrival times vary enough that you get comb filtering, causing severe notches in frequency response. Better stations work hard to minimize it, but on smaller stations it's often worse and more noticeable. Part of why it calls itself out is that if the weatherman is still moving back to the main set, his distance is changing relative to the other mics, so the comb filtering is changing in frequency and is more noticeable.
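    That moving-weatherman effect can be sketched numerically. The distances below are made-up illustration values; the point is that every notch frequency shifts as the talent walks, which is what makes the combing sweep audibly:

```python
C = 343.0  # speed of sound, m/s

def notch_freqs(d_near_m, d_far_m, f_max=2000):
    """Cancellation frequencies when two mics at different distances
    from the same voice are mixed: odd multiples of half the inverse delay."""
    delay = (d_far_m - d_near_m) / C
    freqs = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay)
        if f > f_max:
            return freqs
        freqs.append(round(f))
        k += 1

# Weatherman 0.3 m from his own mic, 2.3 m from the anchor's open mic:
print(notch_freqs(0.3, 2.3))   # first notch near 86 Hz, spaced ~171 Hz apart
# Two steps closer to the anchor desk (1.3 m away), every notch moves up:
print(notch_freqs(0.3, 1.3))   # first notch near 172 Hz
```

    Mixing the two mics puts dips at those frequencies; because they slide around as the talent moves, the ear picks up on the change.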

    Bill Ferris likes this.
  4. WaynerN

    WaynerN AK Subscriber Subscriber

    My vinyl room features 2 side speakers, the JBL S109 Aquarius IV speakers and switchable mains, going from either some JBL Studio 230, Dynaco A25XL (modded) or a pair of AR4Xs. The room has basically turned into a walk in pair of headphones. While I might not be in near-field, I certainly am in mid-field. One of my power amps has a volume control on it, so I can balance the 4 speakers and it seems the position I have found for the Aquarius speakers works with any of the mains that I want to listen to.

    You can walk around the room and there are no ill effects from "comb" filtering. I also have the room arranged with seating for 2, and both positions get great stereo.

    I also have run out of room for speakers, so I have speakers mounted on the sides for a stereo pair. I use these for casual FM listening (Empire Cavalier 2000, Klipsch KG4 and Paradigm Studio 40v.3). It's kind of like binaural listening when speakers are at the sides. I really like listening to FM in this way. Whatever it is, it does eliminate cross-talk effects.

    ev13wt and Lavane like this.
  5. freQ(*)Oddio

    freQ(*)Oddio Super Member

    Not hearing comb filtering on a system, even when it can be measured as being present, has too many variables to be concerned about in most systems. It is a personal situation.
    Last edited: Nov 11, 2017
    I LIKE MUSIC likes this.
  6. mfrench

    mfrench AK Subscriber Subscriber

    Over by Rainbow
    A status update, in case anyone was interested. I've fixed this thread, ^
    biscuithead likes this.



  7. Joe Dawson

    Joe Dawson Active Member

    I agree, Freq. I read a Rane article years ago saying that if a resonance or dip is less than 1/3 octave wide, regardless of amplitude, it will hardly, if at all, be perceived. However, as the bandwidth of the resonance widens beyond 1/3 octave, or when a frequency-response falloff covers several octaves, it only requires a very, very small change in amplitude to be audible.

    To test this, I simply move my head from side to side, or even across the room, and perceive no sonic difference except the position of the players. Yet just slightly alter the high-frequency response and I can easily perceive it.

    From what I have read, comb-filtering problems are lobes less than 1/3rd octave wide. I am running a single full-range driver with a crossover to a 12" woofer at ~160 Hz, some 24" apart. I do not perceive any problems. YMMV with other types of speakers.

    keep on truckin
    Last edited: Nov 14, 2017
    freQ(*)Oddio likes this.
  8. Hobie1dog

    Hobie1dog Super Member

    Now I know I definitely need to come over to your house to hear that.

    I LIKE MUSIC Super Member

    I find how this works to be very interesting. I started way back when I was knee high to a phase shift induced comb filter. :eek::D

    As a youngster with some stuff: basic oscilloscopes, audio generators, a microphone and so on...

    I discovered that if I put a sine wave through a single speaker and used a pair of microphones, each driving its own oscilloscope (that is correct, no dual-trace scope for a young kid back then), and varied the distance of one microphone relative to the other, I could see the trace move on one of the scopes.

    This led me to study the physics and math that defined what was involved.

    Now it is much easier.


    I understand that measurements do not necessarily define sound quality, but measurements can establish base lines.

    Note the delay finder in the upper right corner. It will tell me the location differential of two microphones or speakers and the phase (time) shift involved. It is a far cry from my original experiments back in the day of coal fired test equipment...

    Understanding the physics and math involved helps to understand the various microphone patterns as described by mfrench. The physics is the physics.

    Looking at the upper center display, one can see the original sine wave on top and, on the lower display, the result of the combination of the two microphones going from maximum to minimum as the relative distance between the two microphones is varied.

    If broadband noise is used, one can watch the comb filtering on the lower center display.

    Comb filtering is a fact of life, its impact on sound quality will vary.

    Threads about finding the sweet spot for the location of speakers are not uncommon. Comb filtering (or the reduction of) plays its part.

    It is interesting that the same physics and math that describes the interaction of a pair of microphones describes the interaction of the elements of an antenna or multiple antennas.

    The magic of designing the implementation of microphones or antennas is based on understanding the physics and math and the results thereof, along with empirical experimentation and/or the research and experience of others, to select what is needed for the desired outcome.
    Remember it is all about the :music::music::music: and each individuals perception of the sound quality.
    bhunter, Bill Ferris and biscuithead like this.

    I LIKE MUSIC Super Member

    Remember that in basic terms the end result of comb filtering is a change in frequency response.

    Comb filtering will change the spectral energy content (frequency response) as shown in my previous posts. The impact on the sound quality can be a little like applying an equalizer to the audio. The human auditory system does not ignore the impact of this.
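    That "equalizer" view can be made concrete: summing a signal with a copy of itself delayed by tau seconds has the magnitude response 2·|cos(pi·f·tau)|, which is the comb. A small sketch (the 1 ms delay is an arbitrary illustration value, not from any post here):

```python
import cmath, math

def comb_gain_db(f_hz, delay_s):
    """Gain of y(t) = x(t) + x(t - delay) at frequency f_hz,
    relative to x alone."""
    h = 1 + cmath.exp(-2j * math.pi * f_hz * delay_s)
    mag = abs(h)
    return 20 * math.log10(mag) if mag > 1e-12 else -math.inf

tau = 0.001  # 1 ms delay: notches at 500 Hz, 1500 Hz...; peaks at 0, 1 kHz, 2 kHz...
for f in (250, 500, 1000, 1500):
    print(f"{f:5d} Hz: {comb_gain_db(f, tau):+.1f} dB")
```

    The result is exactly a fixed "EQ curve" of alternating +6 dB peaks and deep notches, spaced by the inverse of the delay.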

    It is true that there is a specific forward transfer function as applied to HRTF (head related transfer function) as interpreted by the auditory system. Science tells us this is not the same thing.
    ev13wt and biscuithead like this.
  11. WaynerN

    WaynerN AK Subscriber Subscriber

    I've done lots of live recordings through the years, and I can tell you that with a pair of headphones on, you can hear microphones vary in input and response just by moving them (even a tiny bit). So your reference point using a sine wave has a reference problem: the microphone itself. Speakers and microphones vary with their relationships to boundaries and objects, a point all of us have learned through the years as audiophiles trying to find the ideal speaker positions.
    ev13wt likes this.




    I LIKE MUSIC Super Member

    I used sine waves as a simple example. The science is the science. If you believe the science is incorrect, tell me why.

    What I am describing is part of your experience.

    So are you saying that the video examples that I posted are not correct? If that is the case how are they not correct?

    Are you saying that the software that I referenced is incorrect?

    I do not believe you understand the basics of my microphone example. It shows the change in the phase relationship between two microphones based on their differential distance from the source. It is this simple. This is basic science.

    Again, comb filtering is but one part, but to be clear, my example shows in very simple terms the change in phase relationship (timing) of the signals from two microphones at differing distances from a sound source.

    Are you saying that this is false? If so how so? Are you saying that no phase shift occurs? If so why not?

    Of course one can hear a difference in sound quality as microphones are moved. Again, comb filtering is just one part of it.

    Because it is just one part of it, does not mean that it is incorrect. If you believe it is incorrect, show me the science, the math and physics of why it is incorrect.

    My example using a sine wave makes it easy to see what happens. Any one with a dual trace oscilloscope can replicate my example. It is not rocket science...

    My use of a sine wave is not incorrect.

    Are you saying that the software in my example will not measure what it claims that it can measure, that is the phase relationship of two signals supplied by two microphones with differing distances to the sound source?

    Differential phase shift is the basis of comb filtering, is it not? Phase shift can happen because of distance differentials, can it not?

    Again, using a sine wave is just an easy way to see a phase differential. You appear to disagree with this. Tell me your basis for this disagreement?

    Again, to be clear, I never said that comb filtering is the only thing that has an impact on sound quality.

    BTW, I did not bring up the topic of microphones, mfrench did IIRC, possibly in response to me pointing out that comb filtering and beat notes are not the same thing. But the basic physics involved in comb filtering is the same for microphones and speakers and as I mentioned, even antennas.

    Again, just because what I posted is just one part, it does not mean that it is not correct.

    Remember the topic is comb filtering. All of my posts speak to the technical nature of comb filtering, its science and mechanics.
    Last edited: Nov 12, 2017
  13. WaynerN

    WaynerN AK Subscriber Subscriber

    I am not saying that the science is bad, but your implementation is not in a controlled environment, such as an anechoic chamber, where walls, ceilings and floors (and other types of boundaries) can be eliminated as being part of the measurement(s). We know that the comb effect occurs much like speakers can be out of phase with each other, a + and a - wave, for example, as opposed to 2 waves in phase +, +.

    In my vinyl room, I have experienced this myself with the mains and the Aquarius side speakers (post 45). Moving one Aquarius will change room nodes, which will affect the bass output of the system, or alter the sound stage. With only my ears as tools, I have found the location that, as a compromise, delivers the best bass foundation and sound-stage.

    I believe that anyone who wants to run 4 speakers in a 2-channel configuration simply cannot let their paradigms get in the way of experimenting with non-traditional speaker arrangements, trying only the usual setups such as each speaker in a corner, or stacked speakers.

    My vinyl room can be described as a happy accident. I never intended it to become the room that it is. Front speakers against the wall, halfway up, is certainly far from ideal. Speakers on side walls, some at different heights, is another, all caused by the room and the way it was laid out. But through trial and error (and some observations), it has become a very remarkable and very musical sound room for 2. Recorded sound stage is huge and deep, bass notes are fast and dynamic. Lessons I have learned are that perhaps the mains should always be elevated higher than the ear at seating level, that surround or support speakers are best off to the sides of the listening position, and that levels between the mains and the sides must be carefully adjusted to maintain the sound-stage and its position.

    I LIKE MUSIC Super Member

    Sorry that is not phase shift, that is polarity. Polarity and phase shift are not the same thing.

    To be very clear, and it is very basic electricity 101, changing the polarity of a signal does not result in a phase shift. If you do not understand this, you may have difficulty understanding the science of my posts.

    Equating polarity and phase seems to indicate that you lack a basic understanding of what I am discussing.

    You are really missing the basic science. Looking at relative phase shift does not require anything special.

    It is true that comprehensive measurements require special conditions, but I never said anything about comprehensive measurements, just phase shift.

    Again, to be clear, even with the issues that you bring up, my example is still correct. Change the relative distance of the microphones to the source, music or a sine wave and there will be a phase shift (timing differential) between the signals from the microphones.

    If you believe this is not so, show me the science, math and physics.

    You have not addressed any of my technical questions.
    Last edited: Nov 12, 2017

    I LIKE MUSIC Super Member

    Here is an example of what you would see if you did the experiment that I describe.


    The display on the oscilloscope shows two sine waves with a relative phase shift. This is what you would see if the two sine waves were coming from a pair of microphones located at slightly different distances from a speaker playing a sine wave.

    The greater the relative difference of the microphones' distance to the speaker playing the sine wave, the greater the difference in the position of the sine waves (phase shift) as seen on the oscilloscope.

    Again this is basic electronics 101.
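    The scope reading can be predicted on paper: at a single frequency, the relative phase is just the distance differential expressed as a fraction of a wavelength, times 360º. A small sketch (343 m/s assumed for the speed of sound):

```python
C = 343.0  # speed of sound, m/s

def phase_shift_deg(freq_hz, dist_diff_m):
    """Relative phase between two mics whose distances to the
    source differ by dist_diff_m, at a single frequency."""
    wavelength = C / freq_hz
    return 360.0 * (dist_diff_m / wavelength)

# At 1 kHz the wavelength is ~34.3 cm, so a quarter of that (~8.6 cm)
# of extra path puts the two scope traces 90 degrees apart:
print(phase_shift_deg(1000, 0.343 / 4))   # -> 90.0
```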

    I hope this helps.

    Edit, sorry my picture did not post.
  16. mfrench

    mfrench AK Subscriber Subscriber

    Over by Rainbow
    Help me understand the scope image. I know nothing about those.
    Is the filtering occurring at the points where the waveforms cross?




    I LIKE MUSIC Super Member

    No that is not the cause of comb filtering.

    The first picture shows two equal-amplitude, equal-frequency sine waves with a phase shift of 90 degrees. When added together at a single microphone, the result would be a single sine wave with an amplitude equal to 1.4 times the amplitude of the individual sine waves, if the angle of incidence and distance to the source are the same. The math and theory of the addition of sine waves are available on the net for those so interested.

    The second picture shows two sine waves with a 180 degree phase shift.

    And please note that the sine waves do not start at the same time.

    This is an important point in the difference between phase shift and simple polarity reversal as referenced by a previous poster. They are not the same.

    When two sine waves of equal amplitude and frequency with a phase shift of 180 degrees are added together the result is cancellation of both sine waves.
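    Both cases above are easy to check numerically: the sum of two unit sines follows the 2·cos(phase/2) rule, so a 90 degree offset peaks at about 1.414, and a 180 degree offset cancels completely. A small sketch:

```python
import math

def peak_of_sum(phase_deg, samples=20000):
    """Peak amplitude of sin(x) + sin(x + phase) over one cycle,
    found by brute-force sampling."""
    phi = math.radians(phase_deg)
    return max(abs(math.sin(x) + math.sin(x + phi))
               for x in (2 * math.pi * n / samples for n in range(samples)))

print(round(peak_of_sum(0), 3))     # 2.0   (full reinforcement)
print(round(peak_of_sum(90), 3))    # 1.414 (the 1.4x case above)
print(round(peak_of_sum(180), 3))   # 0.0   (full cancellation)
```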


    It is this cancellation that causes comb filtering when applied to a broad band signal, such as the drums in one of my earlier video examples. The phase shift was generated using software, but the end result is the same if the phase shift is generated with microphones, speakers, the gear involved and even antennas and RF circuits.

    To address a point raised by WaynerN, the use of sine waves is a convenient method to show visually what happens. When applied to a broadband signal, such as music, the result is comb filtering.

    Sound waves are a vector quantity, that is they have both amplitude and direction information. The signal from a single microphone is a scalar quantity, it contains only amplitude information, although the frequency response can be angle of incidence dependent.

    In a stereo recording, the placement and orientation of a pair of microphones or multiple microphones may be an attempt to recreate the vector nature of the original acoustic waveform.

    Again, to WaynerN's point, I understand that it is but one part of many parts that influence sound quality and comb filtering can be severe or minor. Its overall impact on sound quality will vary.
    Last edited: Nov 12, 2017
  18. WaynerN

    WaynerN AK Subscriber Subscriber

    I did not say the word "shift". When things are out of phase (one on the crest, one on the trough, as an example), their energy cancels each other out (that is why I thought of using the + and - signs). They are not shifted in time as your illustration shows.

    Phase shifting (time delay) happens in speakers all the time due to the crossover network. That means that, depending on design, signals near a crossover point between two different drivers (like a woofer and a midrange) will have some energy cancellations due to the time lag. The lagged signal is being produced at the same time as the un-lagged signal, and the sum of the two is less than if neither were lagged. This is also the reason you can have a comb effect with just 2 speakers (depending on the reproduced material, location relative to boundaries, and fun things like that). And that is also a reason for mono-recording purists not to use 2 speakers, as that would offer an opportunity for comb effect on their mono recordings; better to listen with just one speaker.

    I sometimes think that (good) crossover designers have a plateful, because it's not merely designing crossovers to separate drivers; it's also dealing with the effects of phase shifting and with the comb effect between adjacent drivers mounted in the same cabinet.
    Last edited: Nov 12, 2017
    Bill Ferris likes this.

    I LIKE MUSIC Super Member

    Thank you.

    To be correct, when speaking of phase, one must specify the phase angle or amount of time delay, otherwise it is meaningless. One cannot just say "out of phase."

    And it only applies to signals of the same frequency. If signals of different frequencies are involved it is called group delay. This would be the case in a speaker crossover and speaker placement.

    This is why I used single frequency sine wave when discussing phase differential.

    If you do the math for the addition of waveforms, you will see that the sum can be greater (see my example above for a 90 degree phase shift, which causes an increase of 1.4 times, or about 3 dB) depending on the specific amount of phase differential.

    For those that want to run the math...


    The picture below shows the result of the addition of two equal frequency wave forms with a phase differential. Note that the result (the red trace) is greater. It is not always a matter of the resulting wave form being less in amplitude.


    The resulting waveform is different when the frequencies are not the same. Note that the amplitude of the resulting waveform is greater.


    It is interesting that on AK and other forums, there is not a lot of discussion of phase and group delay from source through the speakers.

    In my other profession, RF engineering, phase and group delay integrity is part of the discussion. One example is fixed position, steerable pattern antennas as shown below.


    Note how the direction of the antenna pattern is changed by applying various amounts of phase shift to the individual antenna elements.
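    For the simplest case, two elements a half wavelength apart (a hypothetical array, not any specific antenna), the steering is easy to compute: the main lobe lands where the applied phase shift cancels the geometric path difference. A sketch:

```python
import math

def steer_angle_deg(phase_shift_deg, spacing_wavelengths=0.5):
    """Beam direction (measured from broadside) of a two-element array
    when one element is fed phase_shift_deg behind the other."""
    # Main lobe where applied phase equals the geometric phase:
    # 2*pi*(d/lambda)*sin(theta) = phase  ->  sin(theta) = phase / (2*pi*d/lambda)
    sin_theta = math.radians(phase_shift_deg) / (2 * math.pi * spacing_wavelengths)
    return math.degrees(math.asin(sin_theta))

print(round(steer_angle_deg(0), 1))    # 0.0  -> broadside, no steering
print(round(steer_angle_deg(90), 1))   # 30.0 -> a quarter-cycle shift steers 30 degrees
```

    It is the same 2·cos(phase/2) addition as the microphone case, just evaluated direction by direction.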
    bhunter likes this.
  20. mfrench

    mfrench AK Subscriber Subscriber

    Over by Rainbow
    feels wobbly,... grasps for something to hang onto,... I guess a faceplant will have to do. Bam.
    passes out.

    I'll check back in later.
    biscuithead likes this.

Share This Page