Effects of bit depth on quantization error

Noah Santer

New Member
Hello.

I am a DAC designer, and I was long ago swayed by the argument that bit depths higher than 16 provide no benefit, since the extra dynamic range doesn't amount to anything useful (assuming the PCM data is mixed to occupy the full range). But during recent simulations of a sigma-delta 1-bit DAC, I graphed the results for 24 bits on a whim and was shocked to find a significant improvement in several metrics.

The most obvious was that the noise floor dropped about 48dB (!), which I can talk myself into dismissing as both a) irrelevant, since it was already absurdly low, and b) an artifact of simulation unlikely to show up in real life, where electrical (mostly supply) noise is already the dominant factor. However, the usual harmonic spikes -- which before were the only form of noise above the 16-bit/96dB limit -- were also reduced, albeit to a lesser extent.
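To put numbers on that: each bit is worth about 6.02 dB of quantization SNR, so 8 extra bits predicts almost exactly the 48 dB drop described above. Here is a minimal sketch of that sanity check, assuming an ideal undithered quantizer in isolation (no sigma-delta loop, so a real modulator will behave differently):

Code:
import numpy as np

fs = 48000                # sample rate in Hz (arbitrary for this check)
n = 1 << 16
t = np.arange(n) / fs
x = 0.9 * np.sin(2 * np.pi * 997 * t)   # near-full-scale sine

def quantize(x, bits):
    # ideal midtread quantizer over [-1, 1)
    scale = 2.0 ** (bits - 1)
    return np.round(x * scale) / scale

for bits in (16, 24):
    err = quantize(x, bits) - x
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits}-bit: quantization SNR ~ {snr:.1f} dB")
# the two results differ by ~48.2 dB, i.e. 8 bits * 6.02 dB/bit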

Is it possible that the effects of quantization amplified by the DAC are ameliorated by the increased bit depth? Or is this just a quirk of ideal conditions that won't have any impact on a real-world system? I'm going to do A/B testing in the next week or so, when I get a chance, and hopefully some Klippel analysis as well, but until then I thought I'd ask here.

I attached an example spectrum.

Full disclosure: I am in the employ of Harman International (although not as a DAC designer).

This is my first time posting, so I'm eager to hear what people have to say!
 

Attachments

  • 16b vs 24b.png
As I'm sure you know, the real question is what the true linearity of the converter is. If your simulation assumes +/-1/2LSB INL and DNL at both 16 and 24 bits, it makes total sense that it would yield a 48dB improvement. BUT, can you get your hands on a 24-bit converter that is +/-1/2LSB linear? To make clearer to me what you're asking: are you feeding the same 16-bit data to both the 16- and 24-bit DACs, or 16-bit data to the 16-bit DAC and 24-bit data to the 24-bit DAC?

To reap the benefits of more bits, and the better linearity of a better DAC chip, it is all the more important to have a good infrastructure of quiet power and superior component layout. This is where art exceeds science.

But let me back up, since "DAC" is a vague term: when you say you're a DAC designer, does this mean you design the DAC chips themselves, or the boxes consumers refer to as DACs (which involve a lot beyond the DAC chip)?
 
Attachment: 24b vs 16b (2^-14 stimulus).png
To clarify, I design DACs at the chip level, although this prototype is part of a larger experiment into FPGA-based DACs. I'm feeding 24-bit data to the 24-bit DAC and 16-bit data to the 16-bit one. Like I said, this is simulation only at this point, and the real-world linearity of the DAC is most certainly NOT that good at amplitudes in the 1-2 LSB range. I think you are right: that is where the argument breaks down. That sort of noise floor difference is only meaningful with the right stimulus. I attached a graph of the performance with the same waveform at 2^-12, and it tells a much different story.

At least now I can go back to only considering 16b DACs...
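For anyone following along, the small-signal case is easy to reproduce. A quick sketch, again assuming an ideal quantizer and a stimulus around 2 LSBs of the 16-bit scale (2^-14 of full scale): a 16-bit quantizer only has a handful of codes spanning the waveform, while 24 bits still has about a thousand.

Code:
import numpy as np

t = np.arange(1 << 14) / 48000
x = 2.0 ** -14 * np.sin(2 * np.pi * 997 * t)   # ~2 LSBs at 16 bits

for bits in (16, 24):
    scale = 2.0 ** (bits - 1)
    codes = np.unique(np.round(x * scale))
    print(f"{bits}-bit: {len(codes)} distinct codes used")
# 16-bit: 5 codes (a crude staircase, heavy harmonics)
# 24-bit: ~1025 codes (still a recognizable sine)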
 
The real analog output stages of DACs are limited by SNR, but mostly by THD. Looking at the THD+N values in the datasheets of various flagship chips, you will notice that they seldom exceed the 20-21 bit level of actual performance.
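For reference, the usual way to translate a datasheet THD+N figure into "bits of actual performance" is the ENOB formula, ENOB = (SINAD - 1.76) / 6.02. The figures below are illustrative, not taken from any particular chip:

Code:
def enob(sinad_db):
    # effective number of bits from SINAD in dB
    return (sinad_db - 1.76) / 6.02

for sinad in (110, 120, 124):   # |THD+N| in dB, hypothetical values
    print(f"THD+N -{sinad} dB -> ENOB ~ {enob(sinad):.1f} bits")
# -120 dB is only ~19.6 bits; a true 20-bit level needs about -122 dB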
 
That was really the reason I started looking into FPGA designs, which are capable of much higher switching speeds than are commonly used. Initial results showed very promising harmonic performance but disappointing linearity when reproducing small-amplitude signals. Worst-case conditions produce significant, and easily audible, distortion even on FPGAs that can reach switching speeds of many hundreds of MHz, with performance levels (and thus price points) totally unrealistic for a consumer product. Some of this is mitigated by a trilevel (1.5-bit) design, but at the expense of increased harmonic distortion.

I'm obviously not privy to the details of how something like the Chord Mojo works, and I don't have one to test, but I suspect their quoted 17 ten-thousandths of a percent THD @ 3V is very much dependent on that "@ 3V" part.
 
On a related note, I'm currently contemplating releasing the schematics, HDL, and such for my designs under a GPL license. I'm very happy with the performance so far, although there's plenty more to do to make it polished. This might warrant another thread -- do you think there'd be interest in this sort of thing from the hi-fi/DIY community?
 
Since it is a delta-sigma approach, the noise-shaping modulators will be of utmost importance.
Read the ESS white paper about their Sabre DAC -- and try to read between the lines too :)
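To illustrate what the noise shaping buys you, here is a textbook first-order, 1-bit modulator in error-feedback form. This is just the generic structure for intuition, not ESS's (or anyone's) actual modulator; real designs use higher-order loops:

Code:
import numpy as np

def sigma_delta_1st(x):
    # first-order error-feedback modulator: quantize, feed error back
    acc = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        v = s + acc
        y[i] = 1.0 if v >= 0 else -1.0
        acc = v - y[i]          # accumulated quantization error
    return y

fs = 64 * 48000                 # 64x oversampled, hypothetical rate
t = np.arange(1 << 16) / fs
y = sigma_delta_1st(0.5 * np.sin(2 * np.pi * 1000 * t))
# an FFT of y shows the quantization noise rising ~20 dB/decade above
# the audio band -- in-band performance lives or dies on the modulator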
 
You may find that raw "much higher switching speeds" is not the magic key to success here. I'd suggest that it is Edge Placement Accuracy (EPA) that limits the ultimate linearity of these screaming FPGAs. Super-fast FPGAs are designed for lots of calculations (or states in a state machine) per unit time, but not necessarily for uniformity and consistency of edges, which is what the output actually needs. Perhaps you could use the fast FPGA for all the calculations, then hand off to an external device where a lot of attention has been paid to ensuring very high EPA.

Regarding the level dependence of the distortion: yes, that's true. The quantization level is constant, so at high amplitudes it contributes a very small percentage; as the level decreases, that constant error becomes proportionally more important, and the THD numbers climb.
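The level dependence is easy to see in numbers: with a fixed absolute residual, THD+N as a percentage scales inversely with signal level. The residual below is hypothetical:

Code:
floor_rms = 10e-6                     # hypothetical fixed residual, V RMS
for v_rms in (3.0, 0.3, 0.03):
    print(f"{v_rms:5.2f} V: THD+N = {100 * floor_rms / v_rms:.4f} %")
# the same residual looks 100x better at 3 V than at 30 mV, which is
# why an "@ 3V" qualifier matters so much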
 
It seems that this (EPA and related factors like clock jitter) is the root cause of the performance lost in switching from Xilinx to Lattice FPGAs -- mostly the loss of the dedicated I/O interfaces, designed for jitter-sensitive applications like DDR, that are present on the Xilinx chips but not the Lattice ones. What kind of numbers are common for "good" EPA? I'm seeing jitter figures down in the sub-500-picosecond range, but that's a significant portion of the clock cycle at 192MHz.
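Back-of-envelope on those numbers: 500 ps against a 192 MHz clock is nearly a tenth of the period, and the standard jitter-limited SNR bound for sampling a sine, SNR = -20*log10(2*pi*f_sig*t_j), is not encouraging either. (That bound assumes a conventionally sampled sine with white jitter; a 1-bit sigma-delta output stage differs in detail, so treat this as a rough gauge only.)

Code:
import math

t_j = 500e-12                  # RMS jitter, seconds
f_clk = 192e6
print(f"jitter / clock period = {t_j * f_clk * 100:.1f} %")   # ~9.6 %

for f_sig in (1e3, 20e3):      # signal frequencies in Hz
    snr = -20 * math.log10(2 * math.pi * f_sig * t_j)
    print(f"{f_sig / 1e3:.0f} kHz sine: jitter SNR limit ~ {snr:.0f} dB")
# ~110 dB at 1 kHz but only ~84 dB at 20 kHz -- short of 16-bit (98 dB)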
 
It has been many, many years since I've used Lattice FPGAs; we switched to Xilinx perhaps 20 or more years ago. It's also been many years since I've used Xilinx, since I retired in 2010.

The one thing I liked about the Xilinx parts, and it is probably still true of their current offerings, is that the I/O power is separate from the logic power. This way you can treat the logic power as digital and allow it to be dirty, but treat the I/O power as analog and keep it as clean as possible: things like pi-filters right at the chip, as opposed to simple bypass caps on the power supply.

Still, a clean supply only lessens the EPA uncertainty that comes from the supply. A larger contributor to poor EPA is the logic itself, since edge uniformity is not really a concern for logic performance: variability in logic path lengths depending on the equations used, asymmetrical L-H and H-L propagation delays, and other whatnot all make for shaky EPA numbers. So, if you can, reclock your output data (outside of the FPGA) a half-cycle later than it comes out; this could help immensely.

What are "good" EPA numbers? Any uncertainty detracts from your theoretical performance, so "good" depends on your final target. Just shoot for doing as well as humanly possible, and that should suffice. If the final analog performance still isn't good enough, you'll have to kick in superhuman heroics to pull it off. Good luck; it can be done.
 
Sometimes, though... you just can't do whatever you want; the laws of the Universe are usually immovable by our wishes.
I can flap my arms with superhuman heroics and still won't fly.
 
I'm clearly several degrees short of superhuman: while migrating my source code I noticed an off-by-one error in the HDL... 2^17 is not the same as 2^16, for the record.

A large amount of retesting ensued, with very positive results. It looks like the I/O buffers in Xilinx's chips are actually plenty good at edge placement, and results on a spectrum analyzer put the test board on par with many more expensive DACs. So I'm feeling confident enough to put in a fabrication order (did I mention this test board is half breadboarded, a medium not exactly known for its signal integrity?).

The other takeaway is that switching to TMDS outputs instead of differential LVCMOS simplifies the required filtering and is generally better -- something I've never heard proposed before, at least.
 