Sensor Terminology

Publish Date: Sep 23, 2013 | 5 Ratings | 4.60 out of 5

Overview

This tutorial is part of the National Instruments Measurement Fundamentals series. Each tutorial in this series teaches a specific topic of common measurement applications by explaining
the theory and giving practical examples. This tutorial will cover sensors and the terminology associated with them. For the complete list of tutorials, return to the NI Measurement Fundamentals Main page.
Table of Contents

1. Sensitivity
2. Range
3. Precision
4. Resolution
5. Accuracy
6. Offset
7. Linearity
8. Hysteresis
9. Response Time
10. Dynamic Linearity

1. Sensitivity

The sensitivity of a sensor is defined as the slope of the output characteristic curve (ΔY/ΔX in Figure 1) or, more generally, the minimum input of physical parameter that will create a detectable output change. In some sensors, the sensitivity is defined as the input parameter change required to produce a standardized output change. In others, it is defined as an output voltage change for a given change in input parameter. For example, a typical blood pressure transducer may have a sensitivity rating of 10 mV/V/mm Hg; that is, there will be a 10 mV output voltage for each volt of excitation potential and each mm Hg of applied pressure.

Sensitivity Error

The sensitivity error (shown as a dotted curve in Figure 1) is a departure from the ideal slope of the characteristic curve. For example, the pressure transducer discussed above may have an actual sensitivity of 7.8 mV/V/mm Hg instead of 10 mV/V/mm Hg.

2. Range

The range of the sensor is the maximum and minimum values of applied parameter that can be measured. For example, a given pressure sensor may have a range of -400 to +400 mm Hg. Alternatively, the positive and negative ranges often are unequal. For example, a certain medical blood pressure transducer is specified to have a minimum (vacuum) limit of -50 mm Hg (Ymin in Figure 1) and a maximum (p
over only the part of their range where they provide predictable performance and often enhanced linearity.

Zero

When making a measurement it is necessary to start at a known datum, and it is often convenient to adjust the output of the instrument
to zero at the datum. It is, therefore, a value ascribed to some defined point in the measured range.

Zero Drift

The signal level may vary from its set zero value while the sensor operates. This introduces an error into the measurement equal to the amount of variation, or drift as it is usually termed. Zero drift may result from changes of temperature, electronics stabilizing, or aging of the transducer or electronic components.

Sensitivity

Sensitivity of a sensor is defined as the change in output of the sensor per unit change in the parameter being measured. The factor may be constant over the range of the sensor (linear), or it may vary (nonlinear).

Resolution

Resolution is defined as the smallest change that can be detected by a sensor.

Response

The time taken by a sensor to approach its true output when subjected to a step input is sometimes referred to as its response time. It is more usual, however, to quote a sensor as having a flat response between specified limits of frequency. This is known as the frequency response, and it indicates that if the sensor is subjected to a sinusoidally oscillating input of constant amplitude, the output will faithfully reproduce a signal proportional to the input.

Linearity

The most convenient sensor to use is one with a linear transfer function, that is, an output that is directly proportional to the input over its entire range, so that the slope of a graph of output versus input describes a straight line.

Hysteresis

Hysteresis refers to the characteristic that a transducer has in being unable to repeat faithfully, in the opposite direction of operation, the data that have been recorded in one direction (Figure 2).

Calibration

If a meaningful measurement is to be made, it is necessary to measure the output of a sensor in response to an accurately known input.
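The sensitivity and sensitivity-error definitions above reduce to a one-line calculation. As a minimal sketch, using the blood-pressure-transducer figures quoted earlier (rated 10 mV/V/mm Hg, measured 7.8 mV/V/mm Hg):

```python
# Sensitivity error: percentage departure of the measured slope from the
# ideal slope of the characteristic curve. The two values below are the
# blood-pressure transducer figures quoted in the text.
ideal_sensitivity = 10.0    # mV/V/mm Hg (rated)
actual_sensitivity = 7.8    # mV/V/mm Hg (measured)

sensitivity_error = (actual_sensitivity - ideal_sensitivity) / ideal_sensitivity * 100
print(f"Sensitivity error: {sensitivity_error:+.1f}%")  # prints "Sensitivity error: -22.0%"
```

A sensor quoted this way would typically be corrected in software by scaling readings with the measured, not the rated, sensitivity.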
This process is known as calibration, and the devices that produce the input are described as calibration standards.

Span (Input)

The dynamic range of stimuli which may be converted by a sensor is called a span or an input full scale (FS). It represents the highest possible input value that can be applied to the sensor without causing unacceptably large inaccuracy (shown in Figure 3).

Full Scale Output

Full scale output (FSO) is the algebraic difference between the electrical output signals measured with the maximum input stimulus and the lowest input stimulus applied.
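The FSO and hysteresis definitions lend themselves to a short numeric sketch. The calibration readings below are invented for illustration; in practice they would come from a calibration run against a standard:

```python
# Illustrative calibration sweep: output recorded while the measurand
# increases (up) and then decreases (down). Values are invented.
output_up   = [0.00, 2.40, 4.90, 7.45, 10.00]  # V, increasing input
output_down = [0.00, 2.55, 5.10, 7.55, 10.00]  # V, decreasing input

# Full scale output: algebraic difference between the outputs at the
# maximum and the lowest input stimulus.
fso = output_up[-1] - output_up[0]

# Hysteresis: worst disagreement between the two sweep directions,
# commonly quoted as a percentage of FSO.
hysteresis = max(abs(u - d) for u, d in zip(output_up, output_down))
print(f"FSO = {fso:.2f} V, hysteresis = {hysteresis / fso * 100:.1f}% of FSO")
```

Here the two sweeps disagree most at mid-range (0.20 V), giving a hysteresis of 2.0% of FSO.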
From the graph plotted of voltage output against increase in measurand, which usually appears as a curve, a straight line is drawn from the zero point to the full scale output point. Usually the point which deviates most from this simple straight line will be used to specify the 'linearity' of the transducer. This is quoted as a percentage of the normal full scale output of the transducer.

Best Fit Straight Line Method of Definition

In practice, the relationship between the measurand and the output of most transducers is not perfectly linear, and it is therefore necessary to find a way of using these devices to achieve the most accurate overall results. This can be achieved by constructing what is known as the "best fit straight line" (BFSL) through the calibration points in such a way that the maximum deviation of the curve from the line is minimised, as indicated in the figure below.

This diagram simply illustrates how, instead of drawing a straight line between the origin and the end point at "B", we can construct a line which effectively halves the maximum deviation that would appear at point "A" and shares it with the end point at "B". Thus the deviation from the BFSL, and hence the linearity error, is effectively halved. However, it should be appreciated that this is simply a means of interpreting the results, and it relies entirely on the system, including the measuring equipment, being set up to take advantage of this technique.

To take advantage of this method in the case illustrated, it is necessary to set up the system so that when the transducer is at full scale 'y' at point 'B' the indicated output is set to a value of 'x'. This effectively halves the error indicated at 'A', which would now deviate from the BFSL by the amount 'c' - 'd' rather than 'c' - 'e' if we had simply constructed an 'end point straight line'.
It can be clearly seen that this enables us to effectively halve the errors in the system and thus record better results over the whole measuring range. Note, however, that although it is quite easy to construct a BFSL on a graph of transducer output against measurand, it is not so easy to do when faced with a simple set of figures. In practice we need a better, quicker method.
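The halving construction described above can be sketched numerically. The calibration points below are made up for illustration, and the sketch follows the article's offset-only construction (the end-point slope is kept and the line is shifted by half the worst deviation); a general BFSL fit could also adjust the slope:

```python
# Compare the end-point straight line with the article's BFSL construction.
# Calibration points are invented: a slightly bowed output curve.
measurand = [0.0, 25.0, 50.0, 75.0, 100.0]   # applied input
output    = [0.0, 2.8, 5.4, 7.8, 10.0]       # V, measured output

# End-point straight line: from the origin to the full-scale point.
slope_ep = output[-1] / measurand[-1]
dev_ep = [y - slope_ep * x for x, y in zip(measurand, output)]
max_dev_ep = max(abs(d) for d in dev_ep)

# BFSL per the article: offset the end-point line by half the worst
# deviation, sharing the error between point 'A' and end point 'B'.
offset = max(dev_ep, key=abs) / 2
dev_bfsl = [d - offset for d in dev_ep]
max_dev_bfsl = max(abs(d) for d in dev_bfsl)

fso = output[-1] - output[0]
print(f"End-point linearity error: {max_dev_ep / fso * 100:.1f}% of FSO")
print(f"BFSL linearity error:      {max_dev_bfsl / fso * 100:.1f}% of FSO")
```

For this curve the end-point method quotes 4.0% of FSO while the shifted line quotes 2.0%, reproducing the halving the article describes.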