From first-order incremental ΣΔ converters to controlled-oscillator-based converters, many ADC architectures are based on the continuous-time integration of the input signal. However, the accuracy of such converters cannot be properly estimated without establishing the impact of noise: noise is integrated along with the signal, resulting in a random error added to the measured value. Since drift phenomena may prevent simulations and practical measurements from guaranteeing the long-term reliability of these converters, a theoretical tool is required. This paper presents a method to compute the standard deviation of the noise-generated error in continuous-time integrator-based ADCs, under the assumption that a previous measurement is used to calibrate the system. Besides reflecting a realistic use case, this assumption resolves a theoretical issue that would otherwise make the problem ill-posed. The theory is developed, the equations are solved for pure white noise, pure flicker noise, and low-pass filtered white noise, and the implementation issues raised by the resulting formulas are addressed.
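As a minimal illustration of the effect the abstract describes (not the paper's own derivation), the following Monte Carlo sketch shows that integrating white noise over a conversion window produces a random error whose standard deviation grows with the window length: in the discrete analogue, summing N independent samples of standard deviation sigma yields an error standard deviation of sigma·√N. The sample count, noise level, and trial count below are arbitrary assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0      # per-sample white-noise standard deviation (assumed)
N = 1000         # samples per integration window (assumed)
trials = 20000   # Monte Carlo repetitions

# Each trial integrates (sums) N iid white-noise samples,
# mimicking the noise contribution accumulated by a
# continuous-time integrator over one conversion.
errors = rng.normal(0.0, sigma, size=(trials, N)).sum(axis=1)

empirical = errors.std()
theoretical = sigma * np.sqrt(N)  # sqrt growth of integrated white noise
```

The √N (equivalently √T in continuous time) growth is specific to white noise; as the abstract notes, flicker noise and low-pass filtered white noise behave differently, which is why the paper treats the three cases separately.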