Accuracy and repeatability are the two terms most frequently used to describe industrial sensor performance. This article explores accuracy and repeatability: how they are defined, what causes poor performance, and how to minimize their effect on your application.

What is meant by the accuracy of a sensor?

A sensor’s accuracy is the difference between the real and measured values of the process variable. For example, if the water temperature inside a storage tank is 40 °C but the temperature sensor measures 41 °C, the 1 °C difference between the measured and real values is known as the measurement error (2.5 % of the true value in this case). Within sensor datasheets, accuracy is normally stated as a percentage of the full-scale measurement range. If, for example, the sensor’s maximum operating temperature is 100 °C, an error of 1 °C corresponds to an accuracy of ±1 % of full scale (abbreviated F.S.). An important question now arises: how do we know the real temperature (or the real value of any process variable)? The answer depends on the type of process variable. In the case of temperature, pure water at atmospheric pressure freezes at 0 °C and boils at 100 °C, providing two known temperatures at which accuracy can be determined.
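
To make the datasheet convention concrete, the short Python sketch below computes the same 1 °C error both as a percentage of the reading and as a percentage of full scale. The function names and the 100 °C full-scale value are illustrative only, not taken from any particular datasheet.

    def error_percent_of_full_scale(true_value, measured_value, full_scale):
        """Measurement error expressed as a percentage of the full-scale range."""
        return 100.0 * (measured_value - true_value) / full_scale

    def error_percent_of_reading(true_value, measured_value):
        """Measurement error expressed as a percentage of the true value."""
        return 100.0 * (measured_value - true_value) / true_value

    print(error_percent_of_full_scale(40.0, 41.0, 100.0))  # 1.0 -> ±1 % F.S.
    print(error_percent_of_reading(40.0, 41.0))            # 2.5 -> 2.5 % of the reading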

What is meant by the repeatability of a sensor?

A sensor’s repeatability is the spread of consecutively measured values of the process variable under constant conditions. A sensor that outputs values with minimal deviation is said to have high repeatability. For example, if we measure the water temperature inside a storage tank 20 times and obtain a mean temperature of 41 °C with a spread between 40.8 °C and 41.2 °C, then the repeatability is about ±0.5 %. Clearly, if we were to make 100 measurements instead of 20, we would likely obtain an even larger spread and therefore a lower repeatability. To remove this dependence on sample size, repeatability is defined in terms of the standard deviation of the measured values from their mean, which makes the figure independent of the number of measurements taken.
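
As a minimal illustration of this definition, the Python sketch below computes the mean, standard deviation and a repeatability figure for a set of made-up readings, assuming a 0-100 °C full-scale range; the numbers are invented to match the example above.

    import statistics

    readings = [41.0, 40.9, 41.1, 40.8, 41.2, 41.0, 40.9, 41.1, 41.0, 41.0]  # °C, made up
    full_scale = 100.0  # assumed 0-100 °C measurement range

    mean = statistics.mean(readings)
    sigma = statistics.stdev(readings)  # sample standard deviation

    print(f"mean reading  : {mean:.2f} °C")
    print(f"std deviation : {sigma:.3f} °C")
    print(f"repeatability : ±{100.0 * sigma / full_scale:.2f} % F.S.")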

Accuracy vs. Repeatability

A lack of repeatability is one component of measurement error, so a sensor’s accuracy cannot be better than its repeatability (though some definitions of accuracy include only systematic errors). The reverse does not hold: it is possible (but uncommon) for a sensor to have very high repeatability yet very low accuracy.

[Illustration: High accuracy and high repeatability of sensor measurement]
[Illustration: Low accuracy and high repeatability of sensor measurement]
[Illustration: High accuracy and low repeatability of sensor measurement*]
[Illustration: Low accuracy and low repeatability of sensor measurement]

*Accuracy cannot be better than repeatability. However, if a lack of repeatability is the major contributor to the measurement error, high accuracy can still be obtained by averaging multiple sensor readings.
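
The sketch below illustrates the averaging effect with simulated readings: an unbiased sensor whose only error is a random ±0.2 °C scatter (an assumed figure) becomes considerably more repeatable, and therefore more accurate, once 20 readings are averaged, because the random error shrinks by roughly 1/√N.

    import random
    import statistics

    TRUE_VALUE = 41.0  # hypothetical true water temperature, °C
    SIGMA = 0.2        # assumed random error of a single reading, °C

    def read_sensor():
        """Simulate one noisy but unbiased sensor reading."""
        return random.gauss(TRUE_VALUE, SIGMA)

    single_readings = [read_sensor() for _ in range(1000)]
    averaged_readings = [statistics.mean(read_sensor() for _ in range(20))
                         for _ in range(1000)]

    print(f"spread of single readings    : ±{statistics.stdev(single_readings):.3f} °C")
    print(f"spread of 20-sample averages : ±{statistics.stdev(averaged_readings):.3f} °C")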

What causes low accuracy and repeatability?

  • Linearity error: Sensors output a voltage, from which the value of the process variable is calculated. To reduce complexity, it is normally assumed that the relationship between the sensor output and the process variable is linear. However, the sensor output is never perfectly linear, so the linearity assumption introduces measurement error. A thermocouple temperature sensor is a good example of a sensor with severe linearity error. High-end thermocouple readers use non-linear equations to map output voltage to temperature so that the linearity error is reduced (a simple software correction is sketched after this list).
  • Hysteresis error: Hysteresis is a disparity in the sensor output due to a directional dependence. For example, if a temperature sensor has hysteresis error, then its output voltage at a water temperature of 50 °C will differ slightly depending on whether the water was cooled to 50 °C or heated to 50 °C. Hysteresis is often experienced in pressure and force measuring devices because objects under mechanical deformation (e.g. a pressure-sensing diaphragm) exhibit hysteresis. The illustration below depicts an exaggerated representation of sensor hysteresis.
[Graph illustrating the phenomenon of sensor hysteresis error]
  • Long-term drift: Long-term drift is measurement error that develops over long periods of time (often specified over periods of 12 months). Causes of long-term drift include the ageing of materials, corrosion, mechanical wear and damage to the sensor. The effect of a small amount of long-term drift can be significantly reduced, but not entirely compensated for, by re-calibration.
  • Zero error: The zero error is the reading obtained when the true value of the process variable is zero and the sensor output should therefore be zero. The zero error is particularly noticeable in pressure and force sensing applications. A typical cause of zero error is permanent deformation of a pressure-sensing diaphragm, so that the sensor reports a force or load when none is present. The effect of zero error can be significantly reduced, but not entirely compensated for, by re-calibration.
  • Temperature error: Temperature error is caused by the sensitivity of the sensor’s output voltage (of non-temperature sensors) to changes in temperature. For example, strain gauges are commonly used to measure mechanical force because stretching them causes their electrical resistance to increase. However, their electrical resistance is also a function of temperature. The effect of temperature can be so significant that many sensors employ temperature compensation techniques to reduce the magnitude of the temperature error.
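
The Python sketch below shows, in simplified form, how linearity and zero errors can be corrected in software. The sensitivity, polynomial coefficients and zero offset are invented for illustration; real sensors such as thermocouples come with their own reference tables or calibration polynomials.

    def linear_conversion(voltage_mV, sensitivity_mV_per_degC=0.041):
        """Naive conversion that assumes a perfectly linear sensor."""
        return voltage_mV / sensitivity_mV_per_degC

    def polynomial_conversion(voltage_mV, coeffs=(0.0, 25.08, -0.15, 0.01)):
        """Conversion using a calibration polynomial t = c0 + c1*v + c2*v**2 + ..."""
        return sum(c * voltage_mV ** i for i, c in enumerate(coeffs))

    def zero_corrected(reading, zero_offset):
        """Subtract the reading obtained when the true value is known to be zero."""
        return reading - zero_offset

    raw_mV = 2.0  # hypothetical raw thermocouple output
    print(f"linear assumption : {linear_conversion(raw_mV):5.1f} °C")
    print(f"polynomial fit    : {polynomial_conversion(raw_mV):5.1f} °C")
    print(f"zero-corrected    : {zero_corrected(polynomial_conversion(raw_mV), 0.4):5.1f} °C")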

How can sensor accuracy be maximized?

  • Purchase the correct type of sensor: It sounds obvious, but an important aspect of maximizing accuracy is purchasing the correct type of sensor. A good example can be observed with volume flow rate meters. Oval gear flow meters are generally more accurate than turbine flow meters; however, their accuracy decreases at low liquid viscosities. As a result, turbine flow meters can achieve higher accuracies than oval gear flow meters when measuring the flow rate of low-viscosity liquids.
  • Choose a sensor with a measurement range suitable for your application: Select a sensor with a maximum operating range 30-50% larger than the operating range of your system. For example, measuring a pressure of 25 bar with a sensor capable of measuring 500 bar will lead to poor accuracy due to the low sensitivity of the sensor (see the sketch at the end of this list).
  • Do not operate the sensor outside of its compensated temperature range: Many sensors include temperature compensation to reduce their temperature error. Temperature compensation is normally only effective over part of the sensor’s total temperature operating range. Accuracy can be maximized by ensuring that the sensor remains within its temperature compensated range.
  • Perform re-calibration: Long term drift and zero errors increase over time. Their effect on accuracy can be reduced through periodic re-calibration of the sensor.
  • Minimize electrical noise: Excessive electrical noise leads to low repeatability. Entire books have been written on the topic of reducing electrical noise. Important practices include using shielded cables, minimizing cable length (particularly for cables carrying unamplified signals) and keeping cables and sensitive electronics away from sources of electrical noise (e.g. AC motors and relay switches).
  • Minimize mechanical vibrations: Many sensors are sensitive to mechanical vibrations (particularly piezoelectric sensing elements). Mount the sensor at a location where vibrations are at a minimum. If possible, mount the sensor through a vibration damping material such as rubber.
  • Do not misuse the sensor: Misusing the sensor, e.g. using it outside of its maximum operating conditions or failing to place a filter at the inlet of a flow meter, is a sure way to decrease the sensor’s accuracy!
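
As a final illustration of the range-selection point above, the sketch below assumes a family of pressure sensors specified at ±0.5 % F.S. (an assumed figure) and shows how the worst-case error in bar, and its size relative to a 25 bar reading, grows with the chosen full-scale range.

    SPEC_PERCENT_FS = 0.5     # assumed accuracy specification, ± % of full scale
    MEASURED_PRESSURE = 25.0  # bar, from the example above

    for full_scale in (40.0, 100.0, 500.0):  # candidate sensor ranges, bar
        worst_case_error = SPEC_PERCENT_FS / 100.0 * full_scale
        relative_error = 100.0 * worst_case_error / MEASURED_PRESSURE
        print(f"{full_scale:5.0f} bar sensor -> ±{worst_case_error:5.2f} bar "
              f"(±{relative_error:4.1f} % of a 25 bar reading)")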