Chapter 48. Measurements and Error – Electrical Technology, Vol2: Machines and Measurements, 1/e


Measurements and Error


In this chapter you will learn about:

  • The difference between analog and digital instruments
  • The need for measurements
  • Definitions of terms such as accuracy, precision, sensitivity, resolution, etc.
  • Factors affecting accuracy
  • The significance of noise and signal-to-noise ratio
  • The practical application of probes and their importance
  • The general form of a measurement system
  • Intelligent instruments and why they are called intelligent

Measurements and Error


Whether equipment is to be designed, installed, put into operation, or repaired, certain electrical quantities need to be measured. The basic electrical quantities are voltage drop, electric current, and resistance. Separate meters can be used to measure each of these quantities: voltmeters for voltage drop, ammeters for current, and ohmmeters for resistance (see Figure 48.1). This is, however, inconvenient, impracticable, and uneconomical. Meters designed to measure all of these quantities are known as multimeters (Figure 48.2). Multimeters can be either analog or digital.

Figure 48.1 Different Types of Meters Most Commonly Encountered (Panel Meters)

Figure 48.2 Multimeters can be either Analog (a) or Digital (b)

Except for electrostatic meters, which work on the principle of electrostatics, all meters measure the amount of current flowing through them.

Measuring instruments are devices that enable one to examine physical events that are not apparent to our senses. They convert one kind of stimulus, to which we are normally insensitive, into another that we can detect with our eyes or ears; for example, a neon tester converts invisible current into light. A neon tester indicates whether an electric circuit is live, but it does no more than give a visible signal (see Figure 48.3). It does not convert the signal into a numerical value.

Figure 48.3 A Neon Tester Gives a Simple Visible Signal

We are not in a position to answer even the simplest technical question about anything without facts and figures. ‘When you can measure what you are speaking about and express it in numbers, you know something about it’, said Lord Kelvin; without measurement, we cannot really say what we are speaking about.

Measurement generally involves using an instrument as a physical means of determining a quantity or variable. The instrument serves as an extension of human faculties and in many cases enables a person to determine the value of an unknown quantity that unaided human faculties cannot measure. An instrument may therefore be defined as a device for determining the value or magnitude of a quantity or variable.

Measuring instruments measure the behaviour of free electrons. This behaviour is determined by the nature of the circuit or component in which the electrons are present; under normal conditions they act in a predictable way, and under abnormal conditions they act differently. By suitable measurements, we can find out exactly what is happening.


Measurement work uses a number of terms, which are as follows.

Instrument: A device for determining the value or magnitude of a quantity or variable.

Accuracy: The closeness with which an instrument reading approaches the actual value of the variable being measured; a qualitative assessment of freedom from error.

Precision: A measure of the reproducibility of measurements, i.e., for a given fixed value of a variable, precision is a measure of the degree to which successive measurements differ from one another; the degree of agreement among repeated measurements of the same object or event.

Sensitivity: The ratio of the output signal or response of the instrument to a change of input or measured variables; the extent of a response to an input stimulus.

Resolution: The smallest change in a measured value to which the instrument will respond.

Error: Deviation from the true value of the measured variable; the difference between a measured value or condition and the true, specified, or theoretically correct value or condition.
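
The distinction between accuracy and precision above can be illustrated numerically. A minimal sketch, using hypothetical repeated readings of a source whose true value is assumed known:

```python
# Illustrative sketch (values are hypothetical, not from the text):
# repeated readings of a known 100.0 V source.
from statistics import mean, stdev

TRUE_VALUE = 100.0                               # assumed known true value, volts
readings = [101.9, 102.1, 102.0, 101.8, 102.2]   # hypothetical repeated readings

avg = mean(readings)
systematic_error = avg - TRUE_VALUE    # constant offset: poor accuracy
spread = stdev(readings)               # small scatter: good precision

print(f"mean reading     : {avg:.2f} V")
print(f"systematic error : {systematic_error:+.2f} V")
print(f"spread (std dev) : {spread:.2f} V")
```

Here the readings agree closely with one another (high precision) yet all sit about 2 V above the true value (low accuracy), the situation described under systematic error later in the chapter.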


All test equipment must use a portion of the energy of the electrons in the circuit. If this energy is abundant, a substantial sample can be taken with negligible effect. But when the energy is small, as is usually the case, only a small sample can be taken without disturbing the operation of the circuit and thus obtaining a false reading. This disturbance is called loading the circuit. As an example, in Figure 48.4, a voltmeter with a d.c. resistance of 10 kΩ is used to measure the voltage across a 10 kΩ load. Before connecting the voltmeter, the voltage between A and B was 40 V (4 mA × 10 kΩ). However, after the voltmeter is connected, the resistance between A and B is only 5 kΩ. This allows the current in the circuit to increase to 5 mA, and the potential drop between A and B changes to 25 V (5 mA × 5 kΩ), which is a serious error.

Figure 48.4 Loading Effect of a Meter Measurement

Obviously, we should have used a meter with less loading effect. A meter with a d.c. resistance of 10 MΩ would cause negligible circuit disturbance and give an accurate measurement, because it would change the potential drop between A and B by only about 0.04 V. In a circuit where the electron flow is small, a more sensitive instrument should be used; the choice of test instrument has a lot to do with the accuracy of the measurement.
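
The loading calculation above can be sketched as follows. The 100 V source and 15 kΩ series resistance are inferred from the 4 mA and 40 V figures quoted in the text; they are not stated explicitly for Figure 48.4.

```python
# Sketch of the loading effect of Figure 48.4 (source and series resistance
# are inferred assumptions, chosen to reproduce the 40 V / 25 V figures).
def v_measured(v_src, r_series, r_load, r_meter=None):
    """Voltage across the load, with an optional meter resistance in parallel."""
    if r_meter is None:
        r_ab = r_load
    else:
        r_ab = (r_load * r_meter) / (r_load + r_meter)   # parallel combination
    return v_src * r_ab / (r_series + r_ab)              # voltage divider

V_SRC, R_SERIES, R_LOAD = 100.0, 15e3, 10e3

print(v_measured(V_SRC, R_SERIES, R_LOAD))          # 40.0 V, no meter connected
print(v_measured(V_SRC, R_SERIES, R_LOAD, 10e3))    # 25.0 V with a 10 kΩ meter
print(v_measured(V_SRC, R_SERIES, R_LOAD, 10e6))    # close to 40 V with a 10 MΩ meter
```

The 10 MΩ meter leaves the circuit essentially undisturbed, which is why the text recommends it.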

Friction in the movement of a meter may cause the pointer to stop at a different place on the dial, although it is measuring the same quantity each time. This is an example of random error. It is not limited to sticky meters but occurs to some extent in all test equipment. Since precision of measurement means repeatability of readings, it can be achieved only by reducing random error as much as possible.

Imbalance in the meter movement might result in a constant offset of all readings, so that they will have precision (repeatability), but not accuracy. This kind of inaccuracy is called systematic error. Careful design and calibration are required to narrow the gap between measured values and true values caused by random and systematic errors.

But even when all errors have been reduced to a minimum, there will still remain some difference between real and indicated values. This difference is expressed as a percentage. For example, a meter may be said to have an accuracy of ±2 per cent of full scale. This means that its readings will always be within 2 per cent of the maximum value that can be indicated on the scale. If the full-scale value is 100 V, then no reading anywhere on the scale will vary more than 2 V from the true value. All instruments have some error, usually expressed as a percentage.
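
A short sketch of what a ±2 per cent of full scale specification implies. A consequence worth noting (it follows from the definition, though the text does not state it) is that the error as a fraction of the reading grows toward the bottom of the scale:

```python
# Sketch: worst-case error of a meter specified as ±2 % of full scale.
def worst_case_error(full_scale, accuracy_pct):
    """Worst-case absolute error, constant anywhere on the scale."""
    return full_scale * accuracy_pct / 100.0

FS = 100.0                                # full-scale value, volts
err = worst_case_error(FS, 2.0)           # ±2 V anywhere on the scale
for reading in (100.0, 50.0, 10.0):
    rel = 100.0 * err / reading           # error as a percentage of the reading
    print(f"reading {reading:5.1f} V -> ±{err:.0f} V = ±{rel:.0f} % of reading")
```

This is why readings taken near the bottom of a range are best avoided: a 10 V reading on a 100 V range may be in error by ±20 per cent of the reading.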


Most test instruments can operate well outside the range of temperatures in which people work, but under normal conditions they will be used at about +20 °C (68 °F). However, laboratory test equipment, especially calibration standards, requires a more closely controlled environment, because even very small temperature variations affect the accuracy of measurements. It is not advisable to operate any test equipment at higher-than-normal temperatures.

For general use, a relative humidity not exceeding 90–95 per cent is satisfactory. Above this level, serious leakage can occur because of excessive dampness. Laboratory equipment has to be operated within narrower limits.

Barometric pressure is seldom a problem at ground level, but may become so at higher altitudes. Most test equipment will operate satisfactorily up to about 4500 m (15,000 ft).

Test equipment should be able to withstand normal handling and transportation, but should not be subjected to rough treatment.

Adequate shielding and grounding should be provided for all test equipment. Generally, the cabinet or dust cover of an instrument is metal, and it satisfactorily excludes unwanted interference, but only if it is well grounded. The third pin of the power plug gives an electrical ground, necessary for safety reasons, but this is by no means a perfect ground, as there are often considerable lengths of conduit and wire between the wall outlet and the real ground. On account of the resistance along this path, especially in a dry climate, considerable unwanted voltage can be present on the chassis or case.

A real ground should be a copper rod or tube driven deep into moist earth, or an equivalent, connected by thick copper cable along the shortest possible route to the chassis or cabinet. Domestic water pipes usually work well enough if the run of pipe is not too long, but in a factory they often run for long distances and offer considerable resistance.

Much of the pickup will be in the form of hum: a low audio frequency at the power-line frequency or a harmonic thereof, introduced into signal paths by induction, leakage, or insufficient filtering.

Any electrical disturbance that causes undesirable responses in electronic equipment is called interference. This could be undesired signals, stray currents from electrical apparatus or other causes such as static from atmospheric disturbances.

A special case is noise: unwanted energy, usually of random character, present in any transmission channel or device, whatever its cause. It may originate with the electrons themselves in the circuit under test, since the movement of each electron is a tiny current that may be amplified enormously in a power amplifier; the accumulated movements of millions of electrons create noise.

The unwanted interference competes with the wanted signal. The smaller the signal, the greater the problem, for in that case the signal will be drowned in noise. The ratio of the magnitude of the signal to that of the noise is the signal-to-noise ratio. This can be improved in many cases by a combination of interference reduction and avoidance of unnecessary signal attenuation.
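
The signal-to-noise ratio is commonly quoted in decibels. A minimal sketch, with assumed r.m.s. voltage values:

```python
# Sketch: signal-to-noise ratio in decibels from r.m.s. voltages.
# The voltage values below are assumptions for illustration only.
import math

def snr_db(signal_rms, noise_rms):
    """SNR in dB; 20 log10 of the voltage ratio (a power ratio of the squares)."""
    return 20.0 * math.log10(signal_rms / noise_rms)

print(snr_db(1.0, 0.001))    # 60 dB: a clean, easily recovered signal
print(snr_db(0.002, 0.001))  # about 6 dB: the signal is nearly drowned in noise
```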

Loss of signal strength can be caused by impedance mismatches. These occur when the output impedance of the circuit under test is not the same as the impedance of the connecting cable or the input impedance of the test instrument. An impedance mismatch may also introduce distortion and phase shift. Of course, this does not apply to d.c. connections, and it is much worse at higher frequencies than at lower ones. Dissimilar impedances should be connected by using a matching pad (see Figure 48.5).

Figure 48.5 Matching Pads
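
The severity of a mismatch can be quantified by the reflection coefficient, a standard transmission-line result not given in the text; the impedance values below are assumptions for illustration:

```python
# Sketch: reflection coefficient for a resistive mismatch between a line of
# characteristic impedance z0 and a termination z_load (standard formula).
def reflection_coefficient(z_load, z0):
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(50.0, 50.0))   # 0.0: matched, nothing reflected
print(reflection_coefficient(75.0, 50.0))   # 0.2: part of the signal reflects back
```

A matching pad brings the two impedances to the same value so the coefficient approaches zero, at the cost of some deliberate attenuation.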

A probe can avoid loading and mismatches. Probes frequently attenuate the signal by fixed factors, such as ×10 or ×100, meaning that the signal is reduced by those factors. Such reduction is necessary when signal voltages are higher than the test equipment can withstand; it also increases the input impedance of some instruments and prevents loading. Many probes contain adjustments by which they can be matched exactly to the instruments with which they are used. Cathode ray oscilloscope (CRO) probes are illustrated in Figure 48.6.

Figure 48.6 Probes (a) 10:1 Divide Probe (b) Equivalent Circuit of Probe Connected to Oscilloscope (c) Modified Probe Circuit with Trimmer Capacitor at the Scope End
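
The 10:1 probe of Figure 48.6 can be sketched numerically. The 9 MΩ probe resistance and 1 MΩ oscilloscope input resistance below are the conventional values for such probes, assumed here rather than taken from the figure:

```python
# Sketch of a conventional x10 passive probe: a 9 MΩ resistor in the probe
# tip forms a divider with the oscilloscope's 1 MΩ input (assumed values).
R_PROBE = 9e6     # series resistance in the probe tip
R_SCOPE = 1e6     # typical oscilloscope input resistance

attenuation = (R_PROBE + R_SCOPE) / R_SCOPE   # divider ratio: the "x10"
input_impedance = R_PROBE + R_SCOPE           # impedance seen by the circuit

print(f"attenuation     : x{attenuation:.0f}")
print(f"input impedance : {input_impedance / 1e6:.0f} MΩ")
```

This shows both benefits the text mentions at once: the signal is divided by 10, and the circuit under test sees 10 MΩ instead of 1 MΩ, reducing loading. The trimmer capacitor of Figure 48.6(c) performs the matching adjustment for a.c. signals.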


In general, measurement systems can be considered to have three basic constituent elements.

  1. The sensing element, frequently called the transducer, is the element that produces a signal related to the quantity being measured. Such elements gather information about the thing being measured and change it into a form that enables the rest of the measurement system to give a value to it.
  2. The signal converter takes the signal from the sensing element and converts it into a condition suitable for the display part of a measurement system, or for use in a control system. The signal converter can be composed of three subelements: a signal conditioner, which converts the signal from the sensing element into a physical form suitable for the display; a signal processor, which improves the quality of the signal, e.g., amplifies it; and a signal transmitter, which conveys the signal some distance to the display.
  3. The display element takes the information from the signal converter and presents it in a form that enables the observer to recognize it, e.g., a pointer moving across a scale.

The general form of a measurement system is thus a transducer connected to a signal converter, which in turn is connected to a display element. It can be represented by a block diagram of the form as shown in Figure 48.7.

Figure 48.7 The General Form of a Measurement System

48.5.1   System Transfer Function

For steady-state conditions, the transfer function G of a system is the ratio of the output θo to the input θi:

Transfer function G = θo/θi

A measurement system, however, can be made up of a transducer, a signal conditioner, and a display, as shown in Figure 48.8. Each of these elements has its own transfer function. Thus, for the transducer, with an input of θi and an output of θ1 to the signal conditioner, the transfer function G1 is

G1 = θ1/θi

Figure 48.8 Transfer Functions for a Measurement System

The signal conditioner transfer function G2 has an input of θ1 and an output of θ2. Thus

G2 = θ2/θ1

The display transfer function G3 has an input of θ2 and an output of θo:

G3 = θo/θ2

The transfer function of the measurement system as a whole can therefore be written as

G = G1 × G2 × G3 = (θ1/θi)(θ2/θ1)(θo/θ2) = θo/θi

The transfer function of the system is equal to the transfer function of the transducer multiplied by the transfer function of the signal conditioner multiplied by the transfer function of the display. If the system contains more elements, then, provided the output signal from one element is the sole input to the next, the transfer function of the system is the product of the transfer functions of all its elements.
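
The product rule can be sketched with illustrative element values; the figures below (a temperature-measurement chain) are assumptions chosen only to make the arithmetic concrete:

```python
# Sketch: overall transfer function as the product of element transfer
# functions, for an assumed temperature-measurement chain.
G1 = 0.02   # transducer: 0.02 V output per degree C (assumed)
G2 = 50.0   # signal conditioner: amplifier voltage gain (assumed)
G3 = 10.0   # display: 10 scale divisions per volt (assumed)

G = G1 * G2 * G3          # system transfer function, divisions per degree C
theta_i = 2.5             # input, degrees C
theta_o = G * theta_i     # displayed output, scale divisions

print("system transfer function G =", G)
print("display reading for", theta_i, "degrees C:", theta_o, "divisions")
```

Note that the rule holds only because the output of each element is the sole input to the next, as the text states.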

48.5.2  Intelligent Instruments

The term intelligent, when applied to measurement systems, means that a microprocessor or a computer is included in the system. With a dumb instrument, the system only gives the measure of a quantity, and an observer has to process and interpret the displayed data. With an intelligent instrument, the measurement is made, further processing occurs, and the data are interpreted. Intelligent instruments can make decisions based on measurements made earlier, carry out calculations on data, manipulate information, and initiate action based on the results obtained.


Calibration is the process of putting marks on a display, or of checking a measuring system against a standard, with the transducer in a defined environment.

The basic standards from which all others derive are the primary standards. These are defined by international agreement and are maintained by national establishments. There are seven such primary standards (mass, length, time, current, temperature, luminous intensity, and amount of substance, the mole) and two supplementary standards (plane angle and solid angle). Primary standards are used to define national standards, not only in the primary quantities but also in other quantities that can be derived from them. These national standards are, in turn, used to define reference standards, which can be used by national bodies for the calibration of standards held in calibration centres. These centres then use their standards to carry out calibration in industry. In a company, such calibration standards might be used to check the calibration of instrumentation in day-to-day use.

Table 48.1 lists some currently used quantities and their relationship with the primary standards.


Table 48.1 Derived Units

Quantity                   Unit Name                     Unit in Terms of Primary Units
Acceleration               meter per second squared      m s–2
Angular acceleration       radian per second squared     rad s–2
Angular velocity           radian per second             rad s–1
Area                       square meter                  m2
Density                    kilogram per cubic meter      kg m–3
Electric charge            coulomb                       A s
Electric field strength    volt per meter                m kg A–1 s–3
Electric potential         volt                          m2 kg A–1 s–3
Magnetic field strength    ampere per meter              A m–1
Magnetic flux              weber                         m2 kg A–1 s–2
Magnetic flux density      tesla                         kg A–1 s–2
Specific heat capacity     joule per kilogram kelvin     m2 s–2 K–1
Velocity                   meter per second              m s–1
Thermal conductivity       watt per meter kelvin         m kg K–1 s–3
Volume                     cubic meter                   m3
  1. Except for electrostatic meters, which work on the principle of electrostatics, all meters measure the amount of current flowing through them.
  2. Measuring instruments are devices that enable one to examine physical events that are not apparent to our senses.
  3. Measuring instruments serve as an extension of human faculties.
  4. Measuring instruments measure the behaviour of free electrons.
  5. Accuracy is the closeness with which an instrument reading approaches the actual value of the variable being measured.
  6. Precision is a measure of the reproducibility of measurements.
  7. Resolution is the smallest change in a measured value to which the instrument will respond.
  8. Error is the difference between a measured value or condition and the theoretically correct value or condition.
  9. The circuit conditions should not be changed by the introduction of an instrument into the circuit; the disturbance caused is known as loading.
  10. Precision can only be maintained by reducing random errors as much as possible.
  11. Probes can avoid loading and impedance mismatches.
  12. A measurement system can be considered to have three basic constituent elements: a transducer, a signal converter, and a display.
  13. The transfer function of a system is the product of the transfer functions of its elements.
  14. With an intelligent system, the measurement is made, then further processing occurs and the data is interpreted.
  15. Calibration standards are used for calibration of instruments in day-to-day use.
  1. Which instruments have uniform scales?
    1. Moving coil
    2. Moving iron
    3. Hot wire
    4. Electrostatic
    5. Thermocouple
  2. Which instruments give the magnitude of the quantity to be measured directly?
    1. Absolute
    2. Secondary
    3. Indicating
    4. Recording
    5. Integrating
  3. The link between the electrical phenomenon and the mechanical responses is the
    1. Deflecting torque
    2. Meter movement
    3. Restoring torque
    4. Damping torque
  1. (a)
  2. (b)
  3. (b)
  1. What is the need for measuring instruments?
  2. Explain the significance of the following terms:
    1. Accuracy
    2. Precision
    3. Sensitivity
    4. Resolution
  3. Draw the block diagram of a measurement system and explain each block.