Digital Technology Collection

Another great article debunking the BS of the audiophile magazines, written by a production engineer.

http://productionadvice.co.uk/no-stair-steps-in-digital-audio/

Cpt debunks the article. Below is another article; notice the "stairsteps". Of course, it takes time for a sample to occur; sampling is not instantaneous. And 16 bits gives only 65,536 possible values, so every sample is an approximation. Besides that, there are the tolerances of the parts and circuitry to consider.

http://www.blazeaudio.com/howto/bg-digital.html
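
To put rough numbers on the 16-bit point, here is a quick sketch using the standard quantization formulas (my own illustration, not from the linked article):

[CODE]
# Back-of-envelope 16-bit quantization numbers (standard textbook
# formulas; this sketch is illustrative, not from the linked article).
N = 16
levels = 2 ** N              # 65,536 discrete sample values
snr_db = 6.02 * N + 1.76     # ideal SNR for a full-scale sine, in dB
print(f"{levels} levels, ideal SNR ~ {snr_db:.1f} dB")  # 65536 levels, ~98.1 dB
[/CODE]

So those 65,536 values still buy roughly 98 dB of theoretical dynamic range, before part tolerances and circuit noise eat into it.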

Also interesting: an old Orinda "Digital Master Disc" concept LP that sounds horrible, with digititis. Interesting that an LP has enough resolution to reveal the shortcomings of digital.

Then there is the quality of the analog section of the chip itself to consider, and the quality of the parts surrounding the chip. Is 50 cents' worth of parts really capable of high sonic quality?

keep on truckin

joe
 
Cpt debunks the article

I offered a critique of the simplistic presentation in the video. My intention was not to 'debunk' the basic concept.

A DAC chip's raw output has sample steps. These steps contain high-frequency signals (a Fourier analysis shows spectral images weighted by a sinc envelope); they are the imaging (alias) artefacts of the zero-order hold. They are removed by the reconstruction filter. The output of this filter is smooth, as the high-frequency content creating the steps has been removed.

Using an upsampling filter in the DAC allows the reconstruction filter to be more effective at removing the high frequencies that form the steps, or to be gentler on the in-band audio.

A DAC followed by a reconstruction filter does not have a stepped output.
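
A quick way to see cpt's point in code (a minimal sketch with assumed rates and filter, not anything from his post): build the zero-order-hold staircase from coarse samples of a sine, then low-pass filter it, and the steps disappear.

[CODE]
# Minimal ZOH + reconstruction-filter sketch (assumed rates/filter; an
# illustration only). Requires numpy and scipy.
import numpy as np
from scipy import signal

fs_hi = 192_000               # dense rate standing in for "analog" time
fs = 8_000                    # coarse sample rate, so steps are obvious
f0 = 440.0                    # test tone (Hz)
t = np.arange(fs_hi) / fs_hi  # one second of "analog" time

coarse = np.sin(2 * np.pi * f0 * np.arange(fs) / fs)
zoh = np.repeat(coarse, fs_hi // fs)     # hold each sample: the staircase

# Reconstruction filter: low-pass below fs/2 strips the step images.
sos = signal.butter(8, 0.45 * fs, fs=fs_hi, output="sos")
smooth = signal.sosfiltfilt(sos, zoh)

# The zero-order hold delays the signal by half a coarse sample.
ideal = np.sin(2 * np.pi * f0 * (t - 0.5 / fs))
mid = slice(fs_hi // 10, -fs_hi // 10)   # ignore filter edge effects
print("staircase error:", np.max(np.abs(zoh[mid] - ideal[mid])))
print("filtered  error:", np.max(np.abs(smooth[mid] - ideal[mid])))
[/CODE]

In this sketch the filtered output lands within about half a percent of the ideal sine (the residue is mostly the zero-order hold's sinc droop, which real DACs compensate for), while the raw staircase misses by more than an order of magnitude more.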
 
I offered a critique of the simplistic presentation in the video. My intention was not to 'debunk' the basic concept.

I did not mean to say you debunked the entire concept, just the video's claim of no steps. Theory vs. reality. My apologies, cpt. I was thinking of your previous comments concerning steps: a DAC chip is therefore followed by a Nyquist reconstruction filter, to remove the steps.

keep on truckin
joe
 
Your reply demonstrates a lack of understanding of the science stated in my post. As such, I see no reasoned argument in your defense.

Continue on, sir.

keep on arguing doesn't make any sense, right??
joe
Yes, arguing doesn't make any sense; valuable information is all we are looking for, so it's better to share information that is actually useful to us.
 
So I know this subject elicits responses from one extreme to the other, but if we're going to have a Digital Technology Thread, I don't think we could call it complete without throwing MQA into the mix.

This is the Stereophile Q&A around MQA

J. Robert Stuart came out of the Meridian Team, so I tend to grant him a certain level of empirical knowledge. Meridian has been on the forefront of Digital Design for some time.

However, even though I am convinced this is not ‘snake oil’ and actually is an advancement in the digital capture/reproduction process, the issues I see are these;
  • It requires a decoder to process the MQA encoding. This means the DAC Manufacturers have to license it, which is maybe not a bad thing because someone has to pay for the technical development (think Dolby Labs). Also the publishers have to license their content as well for the encoding, so they too help to pay for technical development. But of course this all gets passed onto the consumer. It is not open source, but instead proprietary. It reminds me of Sony Betamax... it was better but VHS stole the show (not that it matters anymore)!
  • What I don’t like is the potential ‘lock in’ of the technology to the file via watermarking. This specifically prevents ‘sharing’ of music. That was my question about Tidal, if you have a HiDef Subscription and download MQA files for your personal use, do you maintain ownership? If you cancel your subscription, do the files become crippled because there is no active account? If you share the files are they registered to a specific device and can’t be played through someone else’s MQA DAC?
Too many unanswered questions about how this technology will be implemented for me to consider it a step forward!

But I do believe it provides a truer, 'clearer' representation of the original recording. I've heard it A/B'ed on a friend's system and I do believe it works! At least it sounded cleaner and more balanced than a standard CD, HDCD or Vinyl LP of the same content.
 
And Chicks, thanks for that video of Monty going over Digital Processing. That was real informative.

Personally, I'm a 'keep your eyes wide open' kinda guy, so I'm not ready to say that bit depth & sampling rate are now beyond human hearing and are a moot point. There's just too much going on in the psycho-acoustic realm that we don't fully understand to say it doesn't have an impact. Obviously some folks feel it does.

Anyone see the recent Nova episode called "The Brain: Deception Perception"? It's amazing how we create our own realities from the senses we have and we all process a little differently. Reminds me of a friend of mine who had a bumper sticker that said: "Don't believe everything you think"! ;)
 
Good stuff Cpt, thanks for sharing! I found section 1.3 to be interesting around the subject of different types of Jitter (introduction of time based phase shift); Transmitter, Line Induced, Interference Induced and Sampling. Makes perfect sense to me and goes back to some of the points we discussed around the differences between a 'Streamer' (feeding digital content from the Cloud) and a 'Media Server' (local content hosting).

What I found really insightful is that he presents all of this in the context of a 'Transport' (CD, DVD, Blu-ray Player). I can see how the Laser reader and an integrated DAC can be 'synchronized' a lot more precisely when everything is in the same platform. What got me thinking is how does this get resolved when you separate the DAC from the Digital Playback (aka; a Server with USB connection to a DAC)? USB-InSynch has been in place for some time, but how accurate is it really in the digital audio domain? I guess that depends on a number of factors.

Meinberg Global has a really good overview of this dilemma. Their point is that the Time needs to be managed as closely to the interface as possible. That's, I guess, why a dedicated Sound or Video Card makes sense, or better yet a USB Clock! I also think this is the area where MQA is trying to drive things straight to the DAC (yeah proprietary, I get it). They embed the timing into the stream and only expand it (fully synchronized) at the DAC (MQA enabled).

Great piece, thanks again!
 
Oh, and looks like The Lenbrook Group has bought out MQA. They seem to be a quality organization, so looking forward to what they do with MQA. I could see some future NAD products with MQA capabilities. ;)
 
What got me thinking is how does this get resolved when you separate the DAC from the Digital Playback (aka; a Server with USB connection to a DAC)? USB-InSynch has been in place for some time, but how accurate is it really in the digital audio domain? I guess that depends on a number of factors.
Can't be done with any source-clocked system, which is why I think SPDIF is a flawed backwater...

But a frame-based protocol, where the destination pulls frames of data from the source, into a FIFO, works perfectly. I've used it many times in RF, audio & video systems. The FIFO acts as a causation barrier between the timing domains of source and destination; the source clocking is completely isolated from the destination clocking (barring electrical noise coupling). In an integrated CD player, the FIFO is provided by the CIRC recovery FIFO, and the transport can be slaved to the DAC clock using FIFO content monitoring.
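
Here is a toy simulation of that idea (my own sketch, not cpt's actual design): the DAC drains the FIFO on its own clock, and the fill level is fed back to throttle the source, so the source ends up slaved to the destination clock.

[CODE]
# Toy model of FIFO-based clock-domain decoupling (illustrative sketch).
# The DAC consumes on its clock; simple proportional feedback on the
# FIFO fill level trims the source rate so it tracks the DAC clock.
from collections import deque

fifo = deque()
TARGET = 512                   # desired fill level (samples)
GAIN = 0.05                    # proportional feedback gain

src_rate = 44_100.0            # source's nominal sample clock (Hz)
dac_rate = 44_100.0 * 1.0001   # DAC clock runs 100 ppm fast

src_acc = dac_acc = 0.0
for _ in range(1_000_000):     # simulate one second in 1 us ticks
    # Feedback: nudge the source toward holding the FIFO at TARGET.
    adj = 1.0 + GAIN * (TARGET - len(fifo)) / TARGET
    src_acc += src_rate * adj / 1e6
    while src_acc >= 1.0:
        fifo.append(0)         # produce one (dummy) sample
        src_acc -= 1.0
    dac_acc += dac_rate / 1e6
    while dac_acc >= 1.0 and fifo:
        fifo.popleft()         # DAC consumes on its own clock
        dac_acc -= 1.0

print("FIFO fill after 1 s:", len(fifo))  # settles near TARGET despite the offset
[/CODE]

In an integrated player the same feedback would slave the transport servo rather than a software rate, but the principle (monitor the fill, trim the producer) is identical.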
 
Their point is that the Time needs to be managed as closely to the interface as possible.
The clock needs to be as close to the DAC as possible. And the interface removed from the equation by use of a FIFO, as above. I have banged on about that many, many times...
 
I'd rather see it die on its arse... But maybe that's a discussion for another thread...
I think the technological premise around Time Domain synch-ing has merit, but I'm not a fan of 'proprietary constructs' either. The fact that it has to be licensed by producers and the DAC manufacturers is what I think has dragged down adoption and led to MQA's bankruptcy. Maybe Lenbrook will figure out a way to make the licensing work for everyone.
 
This is a bit dated, but it is a really good overview of how 'Jitter' can be introduced to a digital stream.
 
And this is one of the most succinct explanations of jitter you'll ever read, courtesy of Lycan (Jeff?) in DIYMobileAudio:

The hardware we need to "sample" the original audio waveform vs. time, starts with something called a "clock". It samples the audio waveform at a specific rate ... historically, 44.1kHz (which is slightly more than required by the Nyquist Theorem, to capture 20kHz audio) but more recently 48kHz, 96kHz even 192kHz. We have a variety of hardware "clocks" available to us in electrical engineering, the most accurate one being a crystal oscillator.
It must be immediately pointed out that we actually need two clocks for digital audio capture & reproduction. The first clock controls the analog-to-digital converter (ADC) that samples or "captures" the original signal vs. time (at the studio end), and a second clock is needed to "playback" those samples at the precisely-correct time points, by controlling the digital-to-analog converter (at the consumer end).
It wouldn't be too far off-the-mark to suggest that the whole issue of jitter arises because these two clocks are NOT ... in fact, can NEVER be ... the same physical device. There will always be some difference between these two clocks, just as there will always be some difference between any two wristwatches :(
The basic Nyquist Sampling Theorem assumes that the clocks for capture & reproduction are perfectly accurate, and identical. The study of jitter is really the recognition that, in practice, this is never the case.
It also points out the timing dilemma that Bob Stuart (and team) at Meridian were (are?) trying to solve with MQA. Basically, wrapping the ADC & DAC timing together in the data payload and 'unfolding' at playback. The origami thang.
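
To put rough numbers on Lycan's two-clock problem, the standard textbook bound (not from his post) says RMS clock jitter t_j caps the SNR of a full-scale sine at frequency f at about -20*log10(2*pi*f*t_j):

[CODE]
# Jitter-limited SNR for a full-scale sine (standard textbook bound).
import math

def jitter_snr_db(f_hz: float, t_j_s: float) -> float:
    """Best-case SNR in dB given RMS clock jitter t_j_s at tone f_hz."""
    return -20.0 * math.log10(2.0 * math.pi * f_hz * t_j_s)

for t_j in (1e-9, 100e-12, 10e-12):   # 1 ns, 100 ps, 10 ps RMS
    print(f"{t_j * 1e12:5.0f} ps -> {jitter_snr_db(20_000.0, t_j):5.1f} dB at 20 kHz")
[/CODE]

Roughly 100 ps of RMS jitter is what it takes to preserve full 16-bit (~98 dB) performance out to 20 kHz, which is why clock quality at the DAC matters so much.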
 
It also points out the timing dilemma that Bob Stuart (and team) at Meridian were (are?) trying to solve with MQA. Basically, wrapping the ADC & DAC timing together in the data payload and 'unfolding' at playback.
I don't believe MQA has anything to do with jitter correction. They may waffle on about it as spurious justification, but no amount of digital manipulation can overcome the problem of jitter in the DAC clock.

Nor can it compensate for ADC clock jitter. To address that in any way, you would first need to know the sampling timing error, which means measuring the sampling clock (by sampling or mixing it) against some other, better clock...
 