Comb Filtering : Please engineer / smarty-pants ... learn me sumtin

OK,... Still wobbly.

To step back a bit in the conversation,... a question.
How far is the physical front-to-rearward offset between the mics used in the comb filtering example that you're referring to?

I ask this, as I've had a couple of nice successes lately in having a pair of omnis up at the stage lip, outward from the conductor's spot by 6'+ or so, and a pair of directional mics set back from them by another 6' or so. So there is quite a bit of distance in depth between the mics, outward from the sound impulse.
Technically, that's a fair amount of depth between the track sources, and technically it should cause issues by what I've read here. And by the book of Confusing Audio Physics, it should be rife with problems. But the pairing yielded a nice combination of direct sound and a fullness of ambience. Everything was well placed in playback imagery, with no audible negative phasing artifacts.
So, when I hear of this wild comb filtering with two different mics in speaker playback, what I'm left perplexed by is: how far an offset would it take to cause such a wild amount of phasiness? I was shocked that I got the two stereo recordings to play together so nicely. But they mated up like old friends.

This shows the arrangement and alludes to the depth. Binaural head, and second stereo pair about 5 to 6 ft out behind the head:
DSCN3525.jpg
You can see the deeper set of mics, ^^, just above the walker handles.

I have another instance where I had four omnis within a span of 2': a pair of baffled omnis, and a pair of 2'-spaced omnis, same stand, same 90º perpendicular array across the stage lip. I was expecting a complete meltdown to two-channel mono due to the omnis being so close to each other. Example below:

DSCN2721.jpg
I say, if you're going to block the end of the center aisle, do so colorfully.

Audio is weird. Fascinating, but weird.
 
I find how this works to be very interesting. I started way back when I was knee-high to a phase-shift-induced comb filter. :eek::D

As a youngster with some stuff: basic oscilloscopes, audio generators, a microphone and so on...

I discovered that if I put a sine wave through a single speaker, used a pair of microphones, each driving its own oscilloscope (that is correct, no dual-trace scope for a young kid back then), and varied the distance of one microphone relative to the other, I could see the trace move on one of the scopes.

This led me to study the physics and math that defined what was involved.

Now it is much easier.

View attachment 1044019



I understand that measurements do not necessarily define sound quality, but measurements can establish base lines.

Note the delay finder in the upper right corner. It will tell me the location differential of two microphones or speakers and the phase (time) shift involved. It is a far cry from my original experiments back in the day of coal-fired test equipment...

Understanding the physics and math involved helps to understand the various microphone patterns as described by mfrench. The physics is the physics.

Looking at the upper center display, one can see the original sine wave on top and, on the lower display, the result of the combination of the two microphones going from maximum to minimum as the relative distance between the two microphones is varied.

If broadband noise is used, one can watch the comb filtering on the lower center display.
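A quick way to see that comb shape without an analyzer is to compute the response of a signal summed with a delayed copy of itself. A minimal sketch in Python (my own illustration, not the software in the screenshot; numpy assumed):

```python
import numpy as np

# A signal plus a delayed copy of itself: y(t) = x(t) + x(t - tau).
# Magnitude response: |H(f)| = |1 + exp(-j*2*pi*f*tau)| = 2*|cos(pi*f*tau)|.
tau = 0.5e-3                       # 0.5 ms delay between the two paths
freqs = np.arange(100, 5001, 100)  # Hz
mags = np.abs(1 + np.exp(-2j * np.pi * freqs * tau))

for f, m in zip(freqs, mags):
    bar = '#' * int(10 * m)        # crude text plot; 20 chars = full addition
    print(f'{int(f):5d} Hz {bar}')
```

The printout shows nulls at 1000, 3000 and 5000 Hz with peaks in between, the same comb discussed later in the thread.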

Comb filtering is a fact of life; its impact on sound quality will vary.

Threads about finding the sweet spot for the location of speakers are not uncommon. Comb filtering (or the reduction thereof) plays its part.

It is interesting that the same physics and math that describes the interaction of a pair of microphones describes the interaction of the elements of an antenna or multiple antennas.

The magic of designing an implementation of microphones or antennas is based on understanding the physics and math and the results thereof, along with empirical experimentation and/or the research and experience of others, to select what is needed for the desired outcome.
Remember, it is all about the :music::music::music: and each individual's perception of the sound quality.

Yes @I LIKE MUSIC, in my case it was 4 microphones on mic stands facing a clock radio (poor man's cheap acoustical source), feeding a mixer connected to headphones in 1972, to help understand the undesirable "phasing" (comb filter) effects I was experiencing while trying to record + FOH mix my childhood friend's rock band..

No scope in my equipment inventory at that time, just 17~18-year-old driven and learning ears..
It would be 2~3 yrs. before books I bought ("Sound System Engineering", "The Audio Cyclopedia", etc.) explained the effects I had encountered, and why, helping me understand how I might mitigate or hopefully avoid them by careful study of the audio/acoustical situation in the future, whether dealing in live or playback music systems..
And to be a better sound engineer..

In my case, ears' hearing/learning/understanding first, then study/theory/knowledge, and then test equipment (RTA, TEF, FFT), with eventual productive application results..

Thanks for stirring my brain from those interesting years past ..

Kind regards, OKB
 
How far is the physical front-to-rearward offset between the mics used in the comb filtering example that you're referring to?

To be clear, in the video you can see that he used software for the time differential.


Okay, one would simply use the speed of sound in feet per millisecond. He starts with a 1 millisecond time differential between the tracks.

This would equal about 1.12 feet or about 13.44 inches.

The period of a 1000 Hz sine wave is, oddly enough, 1 millisecond. So if the distance differential is 6.72 inches (a 0.5 millisecond time differential), the phase shift is 180 degrees. It is that simple.
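That arithmetic is easy to script. A small sketch (my own, assuming the rounded 1.12 ft/ms speed-of-sound figure used above):

```python
# Speed of sound rounded to 1.12 ft per millisecond (the exact value
# varies with temperature; this thread uses the round figure throughout).
SPEED_FT_PER_MS = 1.12

def delay_to_inches(delay_ms):
    """Distance differential for a given time differential."""
    return delay_ms * SPEED_FT_PER_MS * 12

def first_notch_hz(delay_ms):
    """Frequency whose half-period equals the delay (180 degree shift)."""
    return 1000.0 / (2 * delay_ms)

print(delay_to_inches(1.0))  # ~13.44 inches for 1 ms
print(delay_to_inches(0.5))  # ~6.72 inches for 0.5 ms
print(first_notch_hz(0.5))   # 1000.0 Hz, the first cancellation
```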

You can duplicate his experiment.

Remember that in his demonstration, the relative levels of the two tracks remained the same. In a real world situation the level from the farthest microphone is likely to be less.

When you adjust the levels (close in microphones versus farther away microphones) do you adjust for what sounds good to you or do you adjust so that the absolute levels are equal?

Is it standard practice in a recording session to set all levels to be absolutely equal?

If you run the math for the addition of sinusoidal waveforms you will see the impact of differential levels.
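For instance, a quick phasor sketch (hypothetical Python, numpy assumed) of how a level mismatch between the two tracks shallows out the cancellation:

```python
import numpy as np

def summed_magnitude(r, phase_deg):
    """Magnitude of 1 + r*exp(j*phase): two sinusoids with level ratio r."""
    return abs(1 + r * np.exp(1j * np.radians(phase_deg)))

# At 180 degrees, equal levels (r = 1) cancel completely; any level
# difference leaves a residue, so the notch gets shallower.
for r in (0.9, 0.7, 0.5, 0.25):
    depth = 20 * np.log10(summed_magnitude(r, 180) / summed_magnitude(r, 0))
    print(f'level ratio {r}: notch depth {depth:5.1f} dB')
```

A far microphone at half the level of the near one (ratio 0.5) leaves notches only about 9.5 dB deep instead of total cancellation.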

There are any number of threads on AK about speaker placement. It is not uncommon to read that a person changed the relative position of a pair of speakers and went from poor imaging, sound stage and depth to great imaging, sound stage and depth. There are a lot of variables and every situation is likely to be different.

Remember I never said that phase/timing differentials and comb filtering are the end all and be all of sound quality.

This is a wonderful compositional study of phase cancellations and comb filtering.

Again, remember that the two video examples that I posted were to show the difference between comb filtering and beat notes. I was specific about the difference. I wanted to make sure that those that listened to the music that you posted, knew the difference between the very audible beat notes and comb filtering.

It can be difficult to recognize comb filtering unless one knows the native sound quality. However, the beat notes in your example are quite prominent. Again, my examples were to point out the difference (beat notes and comb filtering) and that they are not the same. I was specific about that.

WaynerN posted that my science was not correct because it involved the use of a microphone, so I replied to his post with examples.

And the rest, as they say, is history.
 
Not that my posts are long...

Mike, here is the math to calculate the nulls in comb filtering in the context of this thread.

Not exactly rocket science. We can convert t into distance because we know the speed of sound, as I showed above.

So, for example, at a frequency of 1000 Hz the period is 1 millisecond (0.001 seconds). At a distance differential of 6.72 inches (a time differential of 0.5 milliseconds) the notch will be at 1000 Hz.

Again, take my example of a 1000 Hz frequency and a distance differential of 6.72 inches (a phase shift of 180 degrees).

What happens is that the delayed signal is late by a different number of cycles at each frequency.

1 kHz: 0.5 millisecond delay × 1 cycle/millisecond = 0.5 cycle.
2 kHz: 0.5 millisecond delay × 2 cycles/millisecond = 1.0 cycle.
3 kHz: 0.5 millisecond delay × 3 cycles/millisecond = 1.5 cycles.

The 1000 Hz frequency cancels, as we would expect. The 2000 Hz wave does NOT cancel: because the delay is one full cycle, the effect is constructive addition (after the first 0.5 milliseconds has passed, of course).

Where things get interesting is at 3000 Hz, because that is where another cancellation occurs. At 3000 Hz the 0.5 millisecond delay amounts to one and a half cycles: a full 360 degrees of phase shift, which puts the wave back in phase, plus an additional 180 degrees, for a total phase shift of 540 degrees. In relative terms it is that additional 180 degrees that causes the cancellation.

Okay, let's do the math.

Let F_i be the frequency of the respective notch, where i = 0, 1, 2, ... is the number of the notch and t is the time differential:

F_i = (2i + 1) / (2t), for all integers i ≥ 0.

Remember that t = 0.5 milliseconds for a distance of 6.72 inches.

So...

For the first notch (i = 0) it is 1 divided by 2t, which is 1 divided by 1 millisecond (0.001 seconds), and that equals 1000 Hz.

For the second notch (i = 1) it is (2 × 1 + 1) divided by 2t, which is 3 divided by 1 millisecond, and that equals 3000 Hz, and so on. Imagine that.

Remember that the greater the distance (time differential), the lower the frequency of the first notch, and the closer together the rest of the notches will be. This will have a mitigating effect on the impact of the comb filtering.

An example of this.

A distance differential of about 11.2 feet, a time differential of 10 milliseconds in round numbers.

The first notch would be at 50 Hz and the notches would occur at 100 Hz intervals: 150 Hz, 250 Hz, 350 Hz, 450 Hz, 550 Hz and so on.
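A few lines of Python (my own sketch, not from the attachment) make the notch series easy to check for any delay:

```python
def notch_frequencies(delay_ms, count=5):
    """Null frequencies of a comb filter with time differential t:
    F_i = (2*i + 1) / (2*t), for integers i >= 0."""
    t = delay_ms / 1000.0  # seconds
    return [(2 * i + 1) / (2 * t) for i in range(count)]

print(notch_frequencies(0.5))   # 0.5 ms (6.72 in): 1000, 3000, 5000, 7000, 9000 Hz
print(notch_frequencies(10.0))  # 10 ms (~11.2 ft): 50, 150, 250, 350, 450 Hz
```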

Mike, taking into account all of the variables that may be present, the science supports your experience.
 

Attachments: upload_2017-11-12_22-23-38.png
The day I have to start using my slide-rule to figure out where my speakers should go is the day I get out of audio. Room acoustics are waaaaaay more complicated, and computer programs are not going to account for many variables. I do think it was a great thread to start, but in the end, the reality is that I don't need no stinkin' math to set up my speakers. I like making good hunches, and then letting my past experiences guide me to the good spots for speaker placement (and quantities). If it sounds good, the bass isn't bloated, and you have a nice sound stage, what more do you want?
 
^^^^^I guess some folks take their rig's setup a bit more seriously than others. I've found that every little thing matters in getting a rig to sound right, and I've used a laser pointer and level in the past to align my speakers. Stacking and extra speakers stored in my main rig's room are also off limits. Once you reach a certain point where you're really happy with your electronics and speakers, there's not much else to do except tweak that gear, room and cabling here and there for the best possible sound.
Thank you @I LIKE MUSIC for taking the time to post the graphs and detailed descriptions even if a lot of it is over my head:bowdown:
 
So your reference point using a sine wave has a reference problem, that being the microphone.

I am not saying that the science is bad, but your implementation is not in a controlled environment, such as an anechoic chamber, where walls, ceilings and floors (and other types of boundaries) can be eliminated as being part of the measurement(s).

I find it interesting that a person can assume that they have a full understanding of what a person has done when they have no first hand information.

In detail this is what I did.

The room was quiet.

A single 8 inch speaker sitting on my desk, being driven with a 1 kHz sine wave.

Two microphones connected to the oscilloscopes. Both used a common horizontal sweep for timing integrity.

Microphone signals amplified and sent to the oscilloscopes.

Spacing from the speaker to the microphones was just about an inch.

The displayed sine waves were clean and free of artifacts. Even at a young age (I was still in grade school) I understood the need to reduce outside influences as much as possible, hence the very close spacing of the microphones to the speaker. Whatever impact there might have been from outside influences, it did not affect the results.

I was able to see the relative phase differential (timing) change as I moved one of the microphones farther away from the speaker. The impact of any outside influences on my results was literally zero.

Anyone here can do the same experiment and get the same results, no anechoic chamber needed.

"as being part of the measurement(s)"

BTW, I did not measure anything, I compared two signals in a relative manner. I did not claim any measured results. It is like picking up a sack of groceries in each hand and determining that one sack is heavier than the other.

Again, I mean no disrespect.

Thank you @I LIKE MUSIC for taking the time to post the graphs and detailed descriptions even if a lot of it is over my head:bowdown:

Thank you for the compliment.

I will be the first to admit that my slight OCDness (okay, huge OCDness) has a slight, well okay, large impact on my posts.

I appreciate the patience afforded to me.
 
Since my methodology has been questioned, and some possibly legitimate concerns raised (although I believed my methodology to be okay), I decided to repeat the experiment after more decades than I care to remember.

Test set up.

Normal room. Desk, recliner, regular furniture and curtains, audio gear.

2 Microphones

1 speaker, 1000 Hz tone and music.

Software and sound card.

First test: measuring the distance differential of the microphones (the difference in each microphone's distance to the speaker, using the software).

Distance differentials using 1000 Hz tone.

First distance.

upload_2017-11-14_1-23-20.png

And

Second distance.

upload_2017-11-14_1-26-14.png

These numbers likely have more resolution (number of digits to the right of the decimal point) than the overall accuracy, but correspond to measurements taken with a tape measure. These numbers can be converted to relative phase shift using the speed of sound.

Next test is with music, using Cool Edit Pro 2.

The group delay (difference in time of arrival of the music to each microphone) can be seen.

Again, this is just a relative indication. I put Cool Edit in the record mode then started the music.

The difference in the time of arrival (group delay, because it is not a single frequency test) can be seen.

REAL WORLD TIMING.PNG
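The software's delay finder isn't identified here, but the underlying technique is ordinary cross-correlation: find the lag at which the two tracks line up best. A hypothetical numpy sketch of the principle (the function and test values are my own, not the tool in the screenshots):

```python
import numpy as np

def estimate_delay(mic_a, mic_b, sample_rate):
    """Arrival-time difference between two tracks: the lag that
    maximizes their cross-correlation (positive = mic_b arrives later)."""
    corr = np.correlate(mic_b, mic_a, mode='full')
    lag = np.argmax(corr) - (len(mic_a) - 1)  # lag in samples
    return lag / sample_rate                  # seconds

# Toy check: the same noise burst, with mic B delayed 48 samples
# (1 ms at a 48 kHz sample rate).
rng = np.random.default_rng(0)
x = rng.standard_normal(4800)
a, b = x, np.concatenate([np.zeros(48), x[:-48]])
dt_ms = estimate_delay(a, b, 48000) * 1000
print(f'{dt_ms:.2f} ms, ~{dt_ms * 1.12:.2f} ft')  # ~1.00 ms, ~1.12 ft
```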

To the original question, this can represent a situation where the distance from the speakers to the listening position is not equal due to speaker placement or the use of multiple speakers.

In addition to comb filtering, this can have an impact on (depending on what descriptors are used) clarity, smearing, imaging and so on. It is not my intent to put a specific label on this. For some, this and/or comb filtering may be quite noticeable; for others, not so much.

These things, along with room acoustics and speaker parameters may have an impact on sound quality when the location of your speakers is changed or multiple speaker systems are used at the same time.

It is not my intent to say that it is wrong to just set up your system and listen to it or possibly spend some time experimenting with speaker location, among other things. It is merely to show some of the science involved. I make no claims on the overall impact on sound quality.

This was done in a normal, non anechoic room.

So, to each his own when it comes to our hobby. I happen to be ever so very slightly :eek::D interested in the technical side of this hobby. Again, to each his own, because it is all about the :music::music::music:.
 
Get over it and yourself. You aren't helping the OP nor any other noobs. You have taken things waaaaaaaay over the top to the point that I don't care anymore.
 
Get over it and yourself. You aren't helping the OP nor any other noobs. You have taken things waaaaaaaay over the top to the point that I don't care anymore.
The OP's thread title is "Please engineer / smarty-pants ... learn me sumtin", and he's provided no indication that he wants "engineer / smarty-pants" responses only if they don't include any science or math. Therefore, it looks like @I LIKE MUSIC isn't "waaaaaaaay over the top" at all, but is answering the OP's post in a precise, helpful, educational and accurate fashion.

Also, the OP and "other noobs" aren't the only readers here. I like what @I LIKE MUSIC wrote, and I imagine others do too. If you prefer a more seat-of-the-pants approach to math, that's fine, but it's no reason to deprecate what @I LIKE MUSIC wrote. Indeed, it's a good thing we all like different things or we'd all be fighting over the same thing.
 
Dave, thank you for your kind words.

I apologize if my posts upset or offend, that is not my intent. I try to respond the best that I can to those that post questions and respond to me.
 
Dave, thank you for your kind words.

I apologize if my posts upset or offend, that is not my intent. I try to respond the best that I can to those that post questions and respond to me.
No, none of your posts were offensive or upsetting. Some people just don't like to be owned in such a public manner.
 
WaynerN questioned my methodology, saying it was flawed.

I replied to this. It was not my intent to own anyone. I apologize for any ill will that was perceived.
 
There are many kinds of information in this world. Some is usable; other information has no practical use. I doubt there is one person that has read this thread that will be able to take advantage of the information that @I LIKE MUSIC has offered. There are no practical applications of the information (or lack of use for the information) that can be used to better one's system. That, in itself, is the real problem that I have with it. While it might be fun to examine a microcosm of some physical event, it really doesn't deal with all kinds of other, real-world conditions that exist in the listening room. While I can appreciate @I LIKE MUSIC's mathematical skills applied to describe comb filtering, the results can't be applied to individual installations in any practical way, leaving the listener with the only tools and techniques he/she knows how to use: their ears and an approach of logical conclusions.
 
“Understanding is a lot like sex; it’s got a practical purpose, but that’s not why people do it normally.”
--Frank Oppenheimer (brother of J. Robert Oppenheimer)
 
I think you can very much use the results in the real world. I had (well, still have) a fairly basic understanding of how to set up a system, and I know that doing certain things seems to change the sound in one way or the other. The more information I have telling me why, the more informed my decisions will be.
Yes, there are many interactions between the source and your ears, but if you go after each in a systematic way you can hopefully get the results you're looking for. If you just throw up your hands and say "it's all too complicated, why even bother?" then what's the point of moving past an all-in-one with some Thruster speakers?

Aside from that, the OP said he wanted information and he's being given that.
 