Self-calibrating data processors and methods for calibrating same
First Claim
1. A method of calibrating the output of a data processor, comprising the steps of:
- applying first and second analog input reference signals I1 and I2 from an accurate external reference to generate respective first and second digital output values O1 and O2;
- comparing O1 and O2 with respective ideal digital output values O1t and O2t to determine initial gain and offset errors A1 and B1, respectively, where A1 = (O1 - O2)/(O1t - O2t), where B1 = O2 - O2t*A1, and where I1 and I2 are selected so that their corresponding ideal digital output values O1t and O2t are relatively close to the minimum and maximum output values of an analog-to-digital converter circuit within the data processor;
- storing A1 and B1 in an internal memory storage device;
- applying an input signal Iin to the data processor to generate a non-calibrated output signal Opre-comp; and
- compensating Opre-comp to output a calibrated signal Ocomp that accounts for the initial gain and offset errors A1 and B1, where Ocomp = (Opre-comp - B1)/A1.
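The claimed steps can be sketched in a few lines. The reference readings and ADC range below are hypothetical values, and the offset is taken as B1 = O2 - O2t*A1, the form that makes the claimed compensation Ocomp = (Opre-comp - B1)/A1 map the reference readings exactly back to their ideal outputs:

```python
# Hypothetical raw ADC readings for the two reference inputs I1 and I2
# (variable names follow the claim's notation).
O1, O2 = 4012.0, 83.0

# Ideal digital outputs, chosen near the ADC's maximum and minimum codes
# (a 12-bit converter is assumed here for illustration).
O1t, O2t = 4095.0, 0.0

# Initial gain and offset errors.
A1 = (O1 - O2) / (O1t - O2t)   # gain error
B1 = O2 - O2t * A1             # offset error (consistent with the
                               # compensation formula below)

def compensate(O_pre):
    """Correct a raw, pre-compensation output using the stored A1 and B1."""
    return (O_pre - B1) / A1

# Sanity check: compensating the reference readings recovers the ideals.
assert abs(compensate(O1) - O1t) < 1e-9
assert abs(compensate(O2) - O2t) < 1e-9
```

In practice A1 and B1 would be written to the internal memory storage device at calibration time and recalled before each conversion is compensated.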
Abstract
Highly accurate, self-calibrating data processors and methods for calibrating the same use internal analog references with negligible time and temperature drifts. A first input reference signal set generated by any accurate, precision analog reference is applied to a data processor. The corresponding output response is compared to the theoretical ideal output response to determine the data processor's initial gain and offset errors. This information can be stored in non-volatile memory, recalled, and used to compensate for the data processor's initial gain and offset errors during actual use of the data processor. Subsequent errors due to time and temperature drifting can be determined by comparing the output responses to a second input reference signal set which is generated by the internal analog reference. The subsequent errors can be combined with the initial errors to compensate for system errors within the data processor.
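The abstract does not spell out how the initial and subsequent (drift) errors are combined. One plausible composition, sketched below with hypothetical values, models the drift as a second linear error (A2, B2) acting on the already-erroneous output, so the combined correction uses gain A1*A2 and offset A2*B1 + B2:

```python
# Assumed error model (not stated in the abstract):
#   O_pre = A2 * (A1 * O_ideal + B1) + B2
# i.e. drift (A2, B2) distorts the output that already carries the
# initial errors (A1, B1).

A1, B1 = 0.96, 83.0    # initial errors from the external-reference calibration
A2, B2 = 1.01, -2.0    # drift errors from a later internal-reference check

# Combined gain and offset for a single compensation step.
A = A1 * A2
B = A2 * B1 + B2

def compensate(O_pre):
    """Remove both initial and drift errors in one linear correction."""
    return (O_pre - B) / A

# Round-trip check: distort an ideal code, then compensate it.
O_ideal = 2048.0
O_pre = A2 * (A1 * O_ideal + B1) + B2
assert abs(compensate(O_pre) - O_ideal) < 1e-9
```

Under this model only A and B need to be refreshed when the internal reference is re-read, so drift compensation costs no extra arithmetic at conversion time.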
39 Citations
2 Claims
Specification