C for CALIBRATION
06. 03. 2024
In the past, the word had a meaning in metrology related to setting a measuring instrument rather than determining its condition. At the end of the 1990s, the definition was still about adjustment: restoring the measuring instrument to a state as close as possible to the one it had when new. As intended by the manufacturer. As prescribed in the specification of the measuring instrument.
Some of this was also due to the fact that under the previous regime (we were part of the Eastern Bloc, Yugoslavia) we had a system that did not know calibration, only (legal) verification. That is a legal inspection, where the process is only completed if the instrument is within tolerances, so the instrument often has to be adjusted along the way. The word calibration is also used for cameras, sensors in vehicles and many other things, and in those cases it is still understood as adjustment. For some time now, however, the definition of calibration in metrology has been different. Calibration is a set of operations that establishes, under specified conditions, the relationship between the values indicated by a measuring instrument or measuring system, or represented by a material measure or reference material, and the corresponding values realised by standards. Complicated? Not at all. In metrology, the word is no longer related to setting; one could say that calibration is the determination of the actual state of the instrument. Determining the indication error (how much the gauge is lying, if that makes it easier for you).
The word calibration appears relatively late, only sometime in the 19th century. It is derived from the root word calibre, which appears in 15th-century France and means ‘degree of importance’. The meaning of calibre is well known from military technology, and the word calibration (to calibrate) was first used in connection with measuring the range of a projectile. One proposed ancient origin is the Arabic “qalib”, the word for the mould used to make bullets. However, it is more likely that it derives from the medieval Latin “qua libra”, meaning “of what weight”.
The basis of calibration is the comparison between the reading of the measuring instrument and the reference, which comes in various forms: a physical standard (etalon), such as a weight or a gauge block, or a reference material, such as a gas mixture or a pH solution. When the reference (a standard with a known value) and the measuring instrument are compared, an indication error, or bias, is obtained. Together with the measurement uncertainty, this error is a key result of calibration, because it tells the user of the measuring instrument how good the instrument is. And when he uses it, he knows exactly what to expect and how well his instrument is measuring. All the information important for a proper understanding of the calibration result is recorded in the calibration certificate. The calibration result is the actual state of the instrument. It is therefore important that the calibration process does not directly involve adjustment. If the calibration shows that the measuring instrument deviates too much, the user (the owner of the instrument) decides what to do. Calibration is thus a decision of a user who wants to know exactly what kind of instrument he has. Is it suitable for his measurement process? Does it meet the requirements and expectations he has for that process? In principle, the calibration laboratory or its service department can also carry out an adjustment, if this is necessary and possible. It is difficult to adjust a weight made from a single piece of steel; grinding it down is simply not reasonable, hardly technologically feasible and even less economically justified. In any case, the calibration laboratory must ensure that the work related to adjustment (repair) and calibration is kept separate. Normally this means separating the departments and staff carrying out one activity from the other.
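To make the comparison concrete, here is a minimal sketch in Python of how an indication error (bias) could be derived from repeated readings of an instrument against a reference standard. The reference value and the readings are invented purely for illustration, not taken from any real calibration.

```python
# A minimal sketch of the comparison at the heart of calibration: readings of
# the instrument under test are compared with the known value of a reference
# standard, giving the indication error (bias). All numbers are invented.
from statistics import mean, stdev

reference_value = 100.000  # known value of the standard, e.g. a 100 g weight [g]
readings = [100.012, 100.010, 100.013, 100.011, 100.012]  # instrument indications [g]

indication = mean(readings)
bias = indication - reference_value  # the indication error found by calibration
repeatability = stdev(readings)      # spread of the repeated readings

print(f"mean indication : {indication:.3f} g")
print(f"indication error: {bias:+.3f} g")
print(f"repeatability   : {repeatability:.4f} g")
# The calibration certificate states this error together with its measurement
# uncertainty, which also accounts for the uncertainty of the standard itself.
```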
Calibration laboratories worldwide most commonly work to the international standard ISO/IEC 17025, which describes the “General requirements for the competence of testing and calibration laboratories”. It was first published in 1999, with a first revision in 2005 and the latest in 2017. The history of standardisation in the field of calibration laboratories is a little older still, dating back to 1978, when ISO issued ISO Guide 25, while the European CEN issued the first standard of this kind, EN 45001, in 1989. LOTRIČ Metrology received its first accreditation as a calibration laboratory in 1999, against EN 45001. The very next assessment was already carried out according to the new ISO/IEC 17025 standard, which is still in force today. ISO/IEC 17025 in fact brought a major unification of the rules for calibration laboratories. Today, with a few exceptions, all countries use this standard as their starting point. That is the basic aim of standardisation: uniformity, comparability and the avoidance of duplicated procedures.
In practice, calibration looks quite straightforward. Just as there is the ISO/IEC 17025 standard for the operation of a laboratory, there are standards for the performance of calibration that describe the method and other rules to follow. Standards do not exist for all types of measuring instruments. Where there have been large differences between laboratories in the past, various international organisations have decided to draw up a standard (a set of rules). Again we can speak of a standardisation of procedures: calibration should be performed comparably anywhere in the world, by all laboratories. However, this is not quite the case. As a rule, standards bring more uniformity, but differences in performance remain. How so, you may ask? Well, even a painter who paints your flat or a mechanic who repairs your car is not entirely comparable to another painter or mechanic. Each of them has their own reasons for being different; otherwise we would have no progress. Standardisation does indeed bring uniformity, but at its core it also holds development back. It is often rigid, yet on balance it has more advantages than disadvantages, which is why it is developing rapidly and bringing order to technical conformity assessment. Since a user will use a product in roughly the same way anywhere in the world, it seems logical that the product should also be tested in the same way everywhere in the world. In practice, even that is not entirely true. The EU has had common rules here since the introduction of the so-called New Approach, using so-called harmonised (unified) standards. This means that a product tested and approved in one EU Member State meets the requirements of the EU as a whole, i.e. of all Member States. In calibration we use harmonised standards in much the same sense, although they are not actually called that; the logic is very similar.
To avoid making it look like it is really that simple, here is how a pipette calibration is done. For those of you who don’t know what a pipette is: it is a volumetric device for taking up small quantities of liquid. It works by moving a piston in a cylinder to create a vacuum which “pulls” the liquid into the tip of the pipette. Small quantities in this case means values from 0.1 microlitre [μl] to a few millilitres [ml]. A litre contains 1 000 000 microlitres and 1 000 millilitres, just as a reminder for those of you who have forgotten the unit prefixes (and so you don’t have to torture yourself with the conversion). Calibration is carried out using laboratory water of known cleanliness and conductivity. Why are cleanliness and conductivity important? For accuracy, and to obtain reproducible and reliable results. If we used ordinary water (tap water, as we like to call it), it would contain a lot of dissolved air and other particles that are not otherwise dangerous, but are undesirable for calibration.

Using a thermometer with a scale interval of 0.01 degrees Celsius [°C] (“scale interval” is the technical term for the resolution displayed by a measuring instrument, in this case the thermometer), the temperature of the water is determined accurately. From the temperature and the known cleanliness we then calculate the density of the water, which is the first part of the basic physical relationship for calculating volume from mass and density (volume = mass / density). The second part is obtained by pipetting the water (practically, transferring it from the storage vessel) onto a weighing instrument, where the mass is determined with sufficiently high accuracy. For the smallest volumes, down to a few tens of microlitres [μl], we use scales with a scale interval of 1 or even 0.1 microgram [μg]. 1 kilogram [kg] contains 1,000,000,000 micrograms [μg]. Does it still seem simple? Maybe to those of us who deal with it every day. To make the calibration representative, the pipette is tested at 3 volumes, spread roughly evenly over its operating range, with 10 repetitions at each volume. Why do we repeat the measurements? Because the average value is the best approximation of the true value, and repetition is also the best way to describe the overall performance of the pipette.
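For readers who like to see the arithmetic, here is a minimal Python sketch of the gravimetric calculation described above. It assumes the Tanaka (2001) formulation for the density of pure water and uses invented mass and temperature readings; a real calibration (for example according to ISO 8655) would also apply an air-buoyancy correction and evaporation control, which are left out here.

```python
# Simplified gravimetric pipette check: the pipetted water is weighed, its
# density is taken from the measured temperature, and the delivered volume
# follows from V = m / rho. All measurement values below are invented.
from statistics import mean, stdev

def water_density_kg_m3(t_celsius: float) -> float:
    """Density of air-free pure water, Tanaka et al. (2001) formulation."""
    a1, a2, a3, a4, a5 = -3.983035, 301.797, 522528.9, 69.34881, 999.974950
    return a5 * (1 - ((t_celsius + a1) ** 2 * (t_celsius + a2)) / (a3 * (t_celsius + a4)))

def delivered_volume_ul(mass_mg: float, t_celsius: float) -> float:
    """Convert a weighed mass of water [mg] into a delivered volume [µl]."""
    rho_mg_per_ul = water_density_kg_m3(t_celsius) / 1000.0  # kg/m3 -> mg/µl
    return mass_mg / rho_mg_per_ul

# Ten repetitions at one nominal setting (100 µl) with water at 21.3 °C.
masses_mg = [99.62, 99.58, 99.65, 99.60, 99.63, 99.59, 99.61, 99.64, 99.57, 99.62]
volumes = [delivered_volume_ul(m, 21.3) for m in masses_mg]

print(f"mean delivered volume: {mean(volumes):.2f} µl")  # compare with the 100 µl setting
print(f"repeatability        : {stdev(volumes):.3f} µl")  # random error of the pipette
```

In a full calibration this calculation is simply repeated at the other two test volumes, and the differences from the nominal settings become the systematic errors reported on the certificate.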
Everything has to be under control during every calibration. Every single element used in the calibration process has to be checked; we need to know its influence well. I have already spoken about the water and the weighing instrument. Water, like any liquid, evaporates, so we have to prevent, or at least minimise, evaporation during calibration. It would be a little illogical to take 1 microlitre of water, put it on the scale, and let some of it evaporate in the meantime. This is not negligible; after all, 1 microlitre is only about 1/500 of an average raindrop. In the end, we have to determine the result to within a few tens of nanolitres [nl], whereas evaporation can take away as much as 2 nanolitres per second per square millimetre of surface area [nl/s/mm²]. Far too much to be allowed. That is why we use a so-called evaporation trap: basically, a chamber above the cup on the scale is filled with water, saturating the air with water vapour and thereby reducing evaporation.
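As a rough illustration of the numbers involved, the following sketch multiplies the evaporation rate quoted above by an assumed exposed water surface and weighing time; the area and the time are assumptions for illustration, not values from the text.

```python
# Back-of-the-envelope check of why evaporation matters, using the figure from
# the text: up to ~2 nl of water can evaporate per second per mm² of surface.
evaporation_rate_nl_per_s_mm2 = 2.0  # upper figure quoted in the text
exposed_area_mm2 = 20.0              # assumed open water surface in the weighing vessel
weighing_time_s = 10.0               # assumed time between dispensing and reading

loss_nl = evaporation_rate_nl_per_s_mm2 * exposed_area_mm2 * weighing_time_s
print(f"potential evaporation loss: {loss_nl:.0f} nl")  # 400 nl with these assumptions

# Against a 1 µl (1000 nl) dispensed volume that would be a 40 % error,
# which is why the vapour-saturated evaporation trap is used.
```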
Pipettes are handled by trained operators, both in use and in calibration. Incorrect handling of a pipette can cause errors of several tens of percent. That is probably not what we want when our blood is analysed in a medical laboratory, or when our tissue is analysed. That is why it is important to maintain and calibrate pipettes regularly. This keeps our quality infrastructure up and running and comparable with the rest of the world, and allows for proper analysis, research, development and so on.
Primož
Next time, 20 March 2024: HUMAN (človek in Slovenian)