5. Quality assurance

General guidelines for the quality assurance work within EMEP were given in the EMEP Quality Assurance Plan (EMEP/CCC-Report 1/88). While there have been considerable improvements in the quality assurance work within EMEP in recent years, there is still a need for improvement. The EMEP/WMO workshop in Passau on the accuracy of measurements (EMEP/CCC-Report 2/94) gave a series of recommendations aimed at improved quality assurance. These recommendations were accepted by the EMEP Steering Body in 1994 and will form the basis for the QA programme within EMEP. Important steps in this programme are:

It was also agreed to continue the exchange of views and information with the WMO, since the WMO GAW network shares a number of stations and measured parameters with EMEP. Since then, further discussions have taken place between EMEP and WMO/GAW, and there is a strong desire to harmonize and coordinate efforts in order not to duplicate activities.

The implementation of the recommendations above will be a gradual process, starting with the establishment of responsible National QA managers.

Guidelines for the QA work are given in the following sections.

5.1  Job description for EMEP’s National Quality Assurance Manager

The overall goal of the quality assurance activities is to provide data which meet the EMEP Data Quality Objectives (Section 5.2).

The EMEP quality management and quality system, which will build further on the Quality Assurance Plan for EMEP and this Manual for Sampling and Chemical Analysis, will in general follow the guidelines in the ISO 9004 standards and the guidance given in EN 45001, ISO/IEC Guide 25, and the WELAC Guidance Document No. WGD 2 or its updated version EAL-G4.

The quality assurance activities will therefore follow generally accepted standards and recommendations for good measurement practice. The quality system will, when fully implemented, ensure the targeted data quality.

The concept of quality system implementation requires that the National Quality Assurance Manager (NQAM) has the authority and full support needed at the national level.

The NQAM is then responsible for the implementation of the EMEP quality system within his/her own country and for its supervision.

The responsibilities include among other duties:

and in particular

The NQAM shall have direct access to the highest level of management at which decisions are taken on measurement policy and on resources, and will work in close co-operation with the EMEP Quality Assurance Manager.

5.2  EMEP Data Quality Objectives (DQO)

5.2.1  DQO for the acidifying and eutrophying compounds

 

Component     Accuracy

SO4²⁻         0.032 mg S/l      (1 µmol/l)
NO3⁻          0.014 mg N/l      (1 µmol/l)
NH4⁺          0.028 mg N/l      (2 µmol/l)
Cl⁻           0.107 mg Cl/l     (3 µmol/l)
Ca²⁺          0.012 mg Ca/l     (0.3 µmol/l)
K⁺            0.012 mg K/l      (0.3 µmol/l)
Mg²⁺          0.007 mg Mg/l     (0.3 µmol/l)
Na⁺           0.007 mg Na/l     (0.3 µmol/l)


The targets for the wet analysis of components extracted from air filters are the same as for precipitation. For SO2, the limit given above for sulphate is valid for the medium volume method with impregnated filter. For NO2, determined as NO2⁻ in solution, the accuracy for the lowest concentrations is 0.01 mg N/l.
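For a given component, the mg/l limit corresponds directly to the µmol/l value through the molar mass of the reported element. A minimal Python sketch of this conversion is given below; it is purely illustrative and not part of the Manual's procedures, and the function and dictionary names are chosen only for the example.

# Illustrative conversion between the two unit columns of the DQO table in
# Section 5.2.1: micromol/l of the ion to mg of the reported element per litre.
MOLAR_MASS_G_PER_MOL = {"S": 32.06, "N": 14.01, "Cl": 35.45,
                        "Ca": 40.08, "K": 39.10, "Mg": 24.31, "Na": 22.99}

def umol_to_mg_per_l(umol_per_l, element):
    """Convert a concentration in micromol/l to mg of the element per litre."""
    return umol_per_l * MOLAR_MASS_G_PER_MOL[element] / 1000.0

print(round(umol_to_mg_per_l(1.0, "S"), 3))    # 0.032 mg S/l (sulphate)
print(round(umol_to_mg_per_l(2.0, "N"), 3))    # 0.028 mg N/l (ammonium)
print(round(umol_to_mg_per_l(0.3, "Mg"), 3))   # 0.007 mg Mg/l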

The aim for data completeness refers to the definition currently used by the CCC. This definition will, however, be harmonised with the WMO GAW definition and modified accordingly.


5.2.2  DQO for heavy metals

Pb:   15% if c > 1 µg Pb/l,      25% if c < 1 µg Pb/l
Cd:   15% if c > 0.5 µg Cd/l,    25% if c < 0.5 µg Cd/l
Cr:   15% if c > 1 µg Cr/l,      25% if c < 1 µg Cr/l
Ni:   15% if c > 1 µg Ni/l,      25% if c < 1 µg Ni/l
Cu:   15% if c > 2 µg Cu/l,      25% if c < 2 µg Cu/l
Zn:   15% if c > 10 µg Zn/l,     25% if c < 10 µg Zn/l
As:   15% if c > 1 µg As/l,      25% if c < 1 µg As/l
Hg:   15% if c > 0.01 µg Hg/l,   25% if c < 0.01 µg Hg/l

 

5.3  Quality Assurance Plan

The quality assurance plan was first discussed at EMEP's workshop in Freiburg, Germany in 1986, and later distributed as a separate CCC report (EMEP/CCC-Report 1/88). The objectives of the quality assurance are to ensure that the data accuracy satisfies the DQOs and to document the sites, the measurements, and the quality of the collected measurement data. It consists of the following elements:

 

5.4  Measurement sites

The siting criteria are given in Section 2. Precautions to be undertaken with respect to the individual component are also described in the sampling part in Section 3.


5.4.1  Information about a monitoring site

Information about the EMEP site surroundings was presented in EMEP/CCC-Report 1/81. Since then a large number of new stations have been established.

Rather comprehensive forms for sites and site surroundings including distances to emission sources have been filled in and returned to the CCC. The key information collected is stored in the EMEP data base. The forms for this type of information will be revised as a part of the QA activity and the new forms will be simpler and less time-consuming to fill in.

The site information is available from CCC’s homepage, http://www.nilu.no/projects/ccc/network.html.

 

5.5  Field and laboratory operations

5.5.1  Common guidelines for field and laboratory activities

When relevant for the measurements taking place in participating countries, the guidelines from the CCC should be translated before being passed on to the stations or laboratories.

The staff at the measurement sites and in the laboratories should have copies of the instructions for their work, their responsibilities and their delegated authority at hand. They should be familiar with these documents. The documents should be updated when needed.

The staff should be properly instructed before being assigned to the work, and should be given refresher courses at regular intervals, e.g. in combination with the national audits.

National resources should be sufficient to give the staff, both in the field and in the laboratory, the equipment and accessories, including spare parts and traceable standards, needed to perform their work in accordance with the EMEP quality assurance plan and recommendations.

Routines for handling, maintenance, and calibration of instruments and samplers at regular intervals, should be established, be at hand at the site and in the laboratories, and should be followed as intended.

Corrective routines should be established in order to maintain high data completeness, and a stock of the most frequently used spare parts should be kept at the sites and in the laboratories.

Calibrations, maintenance etc. should be recorded in field journals and in laboratory journals. There should be one journal at hand next to each instrument.

It is strongly recommended that laboratories should apply for accreditation for compliance with EN 45001 or similar standards.

Any changes in instrumentation should be reported to the CCC.

5.5.1.1  Audits

Performance audits should be carried out by representatives of the technical staff from the institution operating the site once each year to see that the field operations work as intended. System audits should be carried out by the EMEP QA Manager in cooperation with the National QA Managers at regular intervals.

A detailed check-list to be filled in during these inspections should be worked out, and the WMO GAW check-list (WMO, 1994) may be used during audits of the wet deposition part of the measurements. The filled-in forms should be assessed by a scientist to ensure that the station operates as intended. The auditors should bring copies of the filled-in forms from the last visit when performing a site inspection. Corrective action should be taken immediately when necessary.

The system audits should:

An audit plan and guidelines for the audit should be worked out for this purpose.


5.5.2  Field operations

5.5.2.1  Instrumentation

Procurement of instruments or materials is the process of obtaining instruments and materials for field use, i.e. the instructions for contracting, purchasing, testing etc. The procurement procedures for instruments and materials should involve several quality assurance steps; their complexity and number will usually depend on how important the instrument or material is for the field operations.

Procurement has been treated by the US EPA (US EPA, 1976), and procedures have, for example, been worked out for the Canadian Air and Precipitation Monitoring Network (CAPMoN) (see Vet and Onlock, 1983).

Each participant should have a preventive maintenance plan which covers all instruments used in the national network. The plan should list all instruments, the maintenance procedures for each instrument, and the preventive maintenance time schedule. The plan should further contain a list of replacement parts which may be needed, and a stock of tubes and other spare parts which can easily be changed at the site should be kept there in order to reduce instrument downtime and to obtain high data completeness.

The preventive maintenance should be carried out by the technical staff from the institution responsible for the site, or by the manufacturer of the instrument. Journals should be at hand for each instrument, and records should be made of the preventive maintenance. Inspection for leaks in the tubes and connections should be a part of the daily sample exchange procedure. Low pressure readings in air sampling equipment may indicate leaks, and tubes appearing to be unclean should be replaced with new tubes.

A calibration plan and calibration procedure covering the various instruments at the site must exist at all sites. For gaseous and aerosol components, accurate volume readings are most important for the accuracy of the resulting measurements, and the volume meters may need frequent calibration. The accuracy of an air volume meter should be better than 5%. The results from the calibrations should also be kept in the journals. The need for calibration will normally be specified by the manufacturer. As a general rule, calibration at least twice a year is desirable and should under no circumstances be carried out less frequently than once a year. The institution responsible for the measurements may modify the calibration procedures or frequencies as more experience is gained with the instrument.

Written instructions for maintenance and calibration must be available at the site, and the operator should be familiar with the contents.

5.5.2.2  Changing of samples at the site

Detailed procedures for changing samples for the recommended methods are part of Section 2 of this Manual. Meter readings and other data of importance should be written into the field journal at the site, and copies of this information entered into the field reporting forms. The field reporting forms should follow the exposed samples and field blanks to the laboratory.

5.5.2.3  Sample storage and transportation

It is recommended to ship a one-week supply from the laboratory to the site, and vice versa, once every week. There should be one blank sample every week.

Samples should be kept in a refrigerator. Once every week the field operator should fetch the seven exposed samples from the refrigerator as well as the one unexposed field blank, and put the filter packs in the transportation box together with the site reporting form covering the past week. Field reporting forms should always be put in a separate plastic bag in case of accidental leaks from precipitation samples which may be contained in the same transportation box. In order to keep precipitation samples chilled during transportation, the boxes should be insulated and ice packs ("blue ice") should accompany the samples in the transportation boxes.

The samples should be kept in a refrigerator in the laboratory until the analysis is completed. The storage before the chemical analysis should in general be short. Aliquots of the samples should be stored for re-analysis until the quality checks of the data carried out at the responsible institutions are finished (e.g. three months).

Biological material, i.e. insects, leaves etc., and dust in precipitation samples will change the sample quality during storage and have an effect on the concentrations of hydronium ions, ammonium ions and other ionic species in the sample. In order to detect any possible changes in the precipitation samples, pH or conductivity may be measured at the field site and compared with the results obtained after arrival in the laboratory. Samples which contain visible contamination should be filtered in the laboratory as soon as possible.

5.5.2.4  Field blanks

A field blank sample is a sample which has been prepared, handled, and analysed as a normal sample in every way, except that it has not intentionally been exposed, and therefore should not contain the substance to be determined. Weekly field blank samples should be used in order to check for possible sample contamination or sampling errors. Field blanks should be reported regularly to the CCC. Detection limits for the measurements are calculated from the field blanks. A procedure for calculation of detection limits is given in Section 5.7.

Field blanks may be unexposed filterpacks, absorption solutions, containers for precipitation etc. which are returned unexposed to the laboratory from the site and analysed. The blank samples should be handled and stored like normal samples and for the normal time periods.

Some precipitation collecting systems make use of reusable equipment which is cleaned in the field with deionized water every day when the sample has been collected. Errors may then easily be introduced. In such systems it is particularly important to make use of field blanks by pouring a known amount of deionized water into the sampler after cleaning, immediately taking it out of the sampler, and handling and transporting it to the laboratory exactly like a normal precipitation sample.

It is also recommended to investigate the influence of dust and gases on the precipitation sample. This may be done on days when no precipitation has occurred during the preceding 24 hours, at the time when the sample would otherwise have been collected (7-9 a.m. local time), by adding a known amount of deionized water to the collector. This field blank should then be handled, stored and transported as mentioned above.

5.5.2.5  Comparison of different field instruments

Different methods for sampling of air constituents and different collectors for rain and snow are used in the EMEP network today. The efficiency and performance of the various precipitation collectors depend upon the type of precipitation (rain, snow, etc.), wind speed, temperature, and a number of intrinsic factors related to the construction and design of the collectors.

In contrast to a precipitation collector, a single air sampler can collect only some of the components in the EMEP measurement programme, so more than one sampler has to be used.

The consequence of the large number of different samplers for gases, aerosols, and precipitation is that comparisons with a reference sampler are necessary in order to assess the differences in the results, i.e. the between-network biases. Three large-scale field comparisons have been carried out for samplers of gaseous components and aerosols, and a deeper understanding of the differences and their causes has been gained. Nevertheless, experience shows that quantitative relations are not easily obtained from these large experiments due to sampler problems and failures, and consequently too short data periods.

Comparisons should cover longer periods, preferably two years, in order to catch different meteorological conditions. Since EMEP has daily measurements, only a smaller (random) selection of the samples needs to be analysed in order to obtain a reasonable basis for a quantitative estimate. The comparisons should be performed with a reference sampler and a national sampler at one site in each country. Results from these types of field intercomparisons can be found at http://www.nilu.no/projects/ccc/.

The problem with comparability also arises when changing from one type of air or precipitation sampler to another, within a participating country. The two collectors should therefore be run in parallel in the same way as briefly described above.

The recommended method for the calculations, described in Section 5.6.1, is taken from the North American comparisons described by Sirois and Vet (1994).

5.5.2.6  Precision of field instruments and measurement systems

Two identical samplers or collectors should be run in parallel over some period in order to assess the precision of the data. As above, a two-year period of comparisons is recommended. Section 5.6.2 describes the calculations.


5.5.3  Laboratory operations

The chemical analysis of the samples should, as far as possible, not be divided between several institutions within one participating country, in order to eliminate at least within-country inconsistencies.

The normal analytical laboratory procedures involve a series of precautions which have to be followed during the work in order to produce data with the required accuracy and precision. The precautions which seem to be specific to the recommended methods have been formulated in Section 4, Chemical analysis. More general aspects have been given in this Section in order to prevent unnecessary repetitions. Standard operating procedures should always be applied.

5.5.3.1  Chemical analysis

Calibration should be carried out at the beginning and at the end of a series of samples, which should not exceed 50 samples, and at the end of the day at the latest. The average of the calibrations before and after a sample series should be applied.

In order to quantify the precision and accuracy and detection limit in the laboratory:

 

5.6  Determination of accuracy

Assessment of the accuracy of a chemical analysis in the laboratory is possible through internal checks against known concentrations and through the annual laboratory comparison exercises organized by the CCC (Hanssen and Skjelmoen, 1995). It is, however, in principle not possible to assess the accuracy of air concentration measurements carried out at a site when accuracy is defined as the deviation from the true, and unknown, concentration. Even the comparability of the data is a severe problem in a widespread monitoring network involving a large number of different sampling methods and laboratories. It is, however, possible to determine the systematic errors (bias) relative to a reference measurement system and also to determine the precision of the measurements. The bias relative to a standard system and the precision together determine the uncertainty of the measurements and will, when assessed throughout the network and used together with the routine data, give a comparable data set.

The basis for the assessment is parallel sampling, either by one reference method and one national measurement system giving the (relative) bias, or by running two identical national measurement systems giving the precision.

The samples should cover all seasons, and the experiment should preferably extend over two years in order to represent, to some extent, different measurement conditions. For an evaluation of the results, however, only a selection of the samples needs to be analysed, and one or two samples every week selected at random may give a sufficient number of samples for an annual average. By selecting samples at random, possible systematic effects on the results from source differences during weekends compared to working days will be reduced. It will also reduce the autocorrelation in the data, which simplifies some types of statistics. The bias and random errors in the measurements must be expected to depend upon several factors, and the analysis of the data may necessitate a stratification of the material and more than one estimate of the bias difference or precision, e.g. different results for each season. An inspection of the blanks, including visualization in charts, is strongly recommended before starting the calculations. For Canadian precipitation data, Sirois and Vet (1994) concluded that precipitation depth, precipitation type, concentrations, and location, as well as season and year, all influenced the precision. In such cases a larger number of samples than indicated above may be necessary.


5.6.1  Determination of systematic errors

The basis for the assessment of the systematic errors (bias) relative to a reference analytical chemical method or a reference measurement system, e.g. the between-network bias, is the parallel sampling between two systems.

The importance of standard operating procedures, which enable reproducible results, should be emphasized once more; without them, the effort of parallel sampling is clearly wasted.

Following Sirois and Vet (1994) the overall difference between two measurement systems can be described by the average or median of the differences, the variability in the differences through the modified median absolute difference estimator (M.MAD), and the coefficient of variation (CoV).

A simple model is applied for the measurements:

         Xn,i = Ti + Bn,i + en,i

         Xr,i = Ti + Br,i + er,i

Xn,i and Xr,i are the concentrations obtained with the local or national measurement system (n) and with the reference system (r), respectively, in sample (day) i. Ti is the true and unknown concentration of the component examined, which is independent of the measurement system applied. Bn,i and Br,i are the possible biases in the two systems in sample i, and en,i and er,i contain the random errors in the data which are reflected in the precision. The random errors are assumed to have mean values equal to zero, while the mean values of the biases in general are different from zero.

The difference between the two measurements on a specific day i gives:

         Di = Xn,i - Xr,i = (Bn,i - Br,i) + (en,i - er,i)

and the average difference between the systematic errors for a year, or in a stratum, e.g. during the winter season, can be calculated. Assuming an average over a sufficient number of samples, the averages of the random errors will approximate zero,

         D̄ = (1/N) Σ Di ≈ B̄n - B̄r

and the average of the differences, D̄, between the systematic errors can be assessed.

The arithmetic average is often replaced by the median of Di because the statistical distribution of the data frequently deviates from a normal distribution, and the median is not influenced by a few extremely large or small measurements.

When di = Di - median(Dj), the definitions of M.MAD and CoV are

         M.MAD = median(|di|) / 0.6745

         CoV = 100 × M.MAD / median(C̄i)

where C̄i = (Xn,i + Xr,i)/2 is the average of the two corresponding measurements.

The calculations should not include measurements which are considered to be extreme. Such results indicate a measurement problem which needs to be solved.

The experiment has to be repeated for all countries taking part in the network, using the same standard measurement system as reference. Assuming that bias differences between sites within a country can be disregarded, a correction of annual averages of the routine data, or of averages within possible strata as indicated above, can be carried out.

It is necessary to complement the calculations on the parallel measurements with charts such as scatter plots and often also to include other statistical methods to further investigate the differences which may occur.
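As an illustration of the calculations in this section, a minimal Python sketch is given below. It is not part of the Manual's procedures; numpy and the function and variable names are assumptions made for the example, and the input is assumed to be matched daily values from the national and the reference system.

import numpy as np

def bias_statistics(x_nat, x_ref):
    """Median difference, M.MAD of the differences and CoV for paired daily
    concentrations from a national and a reference measurement system."""
    x_nat = np.asarray(x_nat, dtype=float)
    x_ref = np.asarray(x_ref, dtype=float)
    d = x_nat - x_ref                                 # daily differences Di
    median_d = np.median(d)                           # estimate of the bias difference
    mmad = np.median(np.abs(d - median_d)) / 0.6745   # variability of the differences
    cov = 100.0 * mmad / np.median((x_nat + x_ref) / 2.0)
    return median_d, mmad, cov

Strata, e.g. seasons, can be handled by applying such a function separately to each subset of the paired data; extreme values should be inspected and excluded beforehand, as noted above.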


5.6.2  Determination of precision

The precision of the total measurement is more useful for a data user as a measure of the random errors than is the laboratory precision. The basis for an estimation of the measurement precision is parallel sampling with two identical measurement devices following identical sampling and analytical procedures.

Several measures of precision may be used, e.g. the modified median absolute difference (M.MAD) which is used in the preceding section (Vet and McNaughton, 1994; Sirois and Vet, 1994) and which we will use. This is an estimator of the spread in the data which becomes equivalent to the standard deviation for normal distributions. In the latter case about 68 per cent of the data will be within one standard deviation from the average. The M.MAD is as in the preceding section based on the median of the differences between the corresponding measurements (i.e. usually daily results) which will be insensitive to the presence of a few extreme values.

The equations are similar to the ones in the preceding section. The statistical model for the measurements is given by

         X1,i = Ti + Bi + e1,i

         X2,i = Ti + Bi + e2,i

i is the sample number, and X1,i and X2,i are the concentrations obtained with the two sampling systems. The true value on day i is Ti, and the bias, assumed to be identical for the two measurement systems, is Bi. The random errors are contained in e1,i and e2,i, which have mean values of zero. The precision is then described by the spread in ei. Assuming that ei from each of the two samplers are drawn from the same distribution:

         Var(X1,i - X2,i) = 2 Var(ei)    or

         di = (1/√2) (X1,i - X2,i)

The factor in front of the parentheses is included because the errors ei in the two measurements are assumed drawn from identical distributions.

         M.MAD = median(|di - median(dj)|) / 0.6745

The factor 1/0.6745 has been included to make the M.MAD equal the standard deviation for normal distributions.

The coefficient of variation is defined as

         CoV = 100 × M.MAD / median(C̄i)

where C̄i is the average of the two corresponding (usually daily) results,

         C̄i = (X1,i + X2,i)/2 .


5.6.3  Calculation example for precision

The example below is from a series of parallel measurements of aldehydes/ketones carried out during the winter of 1994–1995 at the Birkenes site (NO 1) in Norway. The methods for sampling and analysis are described elsewhere in this Manual, and the data are the concentrations of acetone (propanone). Volatile organic compounds are sampled twice weekly in EMEP, usually on Tuesdays and Thursdays.

Tables 5.6.1 and 5.6.2 present the resulting precision expressed by the modified median absolute deviation (M.MAD) and the coefficient of variation (CoV), making use of the formulas in the preceding section with a spreadsheet as a basis for the calculations. The "Median (H)" in the rightmost column of Table 5.6.2 gives the M.MAD when divided by 0.6745, and the CoV is obtained by dividing the M.MAD by the "Median (C)" (the median of the averages) and multiplying by 100 in order to express the result in per cent.
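A short Python sketch mirroring the spreadsheet columns of Table 5.6.2 is given below; it is illustrative only, and numpy as well as the function name are assumptions made for the example.

import numpy as np

def precision_mmad(s1, s2):
    """Precision (M.MAD) and coefficient of variation from two identical
    samplers run in parallel, following the columns of Table 5.6.2."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    d = (s1 - s2) / np.sqrt(2.0)                      # column E = D/sqrt(2)
    g = d - np.median(d)                              # column G = E - F, with F = Median(E)
    mmad = np.median(np.abs(g)) / 0.6745              # Median(H) / 0.6745
    cov = 100.0 * mmad / np.median((s1 + s2) / 2.0)   # relative to Median(C)
    return mmad, cov

Applied to the 38 sample pairs in Table 5.6.2, such a calculation reproduces M.MAD ≈ 0.042 µg/m3 and CoV ≈ 4.5 per cent, as given in Table 5.6.1.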

 

Table 5.6.1:  Precision of acetone measurements expressed by the modified median absolute deviation M.MAD, and the coefficient of variation CoV.

M.MAD (µg/m3)     CoV (per cent)
0.042             4.5

 

Table 5.6.2: Calculation of precision. The two leftmost columns contain the 8-hour averages of acetone from two parallel measurements.

S1 = Birkenes 1   S2 = Birkenes 2   C = Average   D = S1-S2   E = D/√2   G = E-F   H = |G|

  1.57    2.49    2.030    -0.92    -0.6505    -0.6293    0.6293
  1.37    1.42    1.395    -0.05    -0.0354    -0.0141    0.0141
  2.27    2.41    2.340    -0.14    -0.0990    -0.0778    0.0778
  2.16    2.23    2.195    -0.07    -0.0495    -0.0283    0.0283
  1.48    1.52    1.500    -0.04    -0.0283    -0.0071    0.0071
  4.09    4.22    4.155    -0.13    -0.0919    -0.0707    0.0707
  0.93    0.89    0.910     0.04     0.0283     0.0495    0.0495
  1.21    1.24    1.225    -0.03    -0.0212     0.0000    0.0000
  1.41    1.45    1.430    -0.04    -0.0283    -0.0071    0.0071
  3.54    2.46    3.000     1.08     0.7637     0.7849    0.7849
  1.80    1.94    1.870    -0.14    -0.0990    -0.0778    0.0778
  2.31    2.21    2.260     0.10     0.0707     0.0919    0.0919
  1.39    1.42    1.405    -0.03    -0.0212     0.0000    0.0000
  1.36    1.45    1.405    -0.09    -0.0636    -0.0424    0.0424
  0.81    0.90    0.855    -0.09    -0.0636    -0.0424    0.0424
  0.93    0.97    0.950    -0.04    -0.0283    -0.0071    0.0071
  0.69    0.76    0.725    -0.07    -0.0495    -0.0283    0.0283
  0.78    0.84    0.810    -0.06    -0.0424    -0.0212    0.0212
  0.57    0.56    0.565     0.01     0.0071     0.0283    0.0283
  0.78    0.83    0.805    -0.05    -0.0354    -0.0141    0.0141
  0.86    0.96    0.910    -0.10    -0.0707    -0.0495    0.0495
  0.63    0.74    0.685    -0.11    -0.0778    -0.0566    0.0566
  0.66    0.63    0.645     0.03     0.0212     0.0424    0.0424
  0.56    0.56    0.560     0.00     0.0000     0.0212    0.0212
  0.60    0.65    0.625    -0.05    -0.0354    -0.0141    0.0141
  1.01    1.00    1.005     0.01     0.0071     0.0283    0.0283
  0.54    0.55    0.545    -0.01    -0.0071     0.0141    0.0141
  0.63    0.63    0.630     0.00     0.0000     0.0212    0.0212
  0.75    0.73    0.740     0.02     0.0141     0.0354    0.0354
  1.00    0.95    0.975     0.05     0.0354     0.0566    0.0566
  0.55    0.51    0.530     0.04     0.0283     0.0495    0.0495
  0.41    0.44    0.425    -0.03    -0.0212     0.0000    0.0000
  0.42    0.44    0.430    -0.02    -0.0141     0.0071    0.0071
  0.62    0.62    0.620     0.00     0.0000     0.0212    0.0212
  0.87    0.93    0.900    -0.06    -0.0424    -0.0212    0.0212
  0.95    0.95    0.950     0.00     0.0000     0.0212    0.0212
  1.14    0.94    1.040     0.20     0.1414     0.1626    0.1626
  1.53    1.54    1.535    -0.01    -0.0071     0.0141    0.0141

Median (C) = 0.9300
F = Median (E) = -0.0212
Median (H) = 0.0283

The temporal variation of the two parallels is given in Figure 5.6.1, and Figure 5.6.2 contains a scatterplot of the results.

 


 

Figure 5.6.1: Temporal variation of acetone during the winter 1994–1995 at Birkenes (NO 1), measured in two parallels. Units in µg/m3.

 

The correspondence is generally very good in the Figure above except for the results from sample pair 10 where a mistake has been made with one of the parallels.

 


 

Figure 5.6.2: Scatterplot of the two parallel measurements of acetone at Birkenes (NO 1) during the winter 1994–1995. Units in µg/m3.


5.7  Calculation of detection limit

Different definitions of detection limits can be found in the literature, and in the preliminary version of this manual a statistical method after Currie (1968), Wilson (1973), and Kirchmer (1983) was described. One common definition of the detection limit is important because it will greatly ease the use of the data and also simplify the data documentation. As a result of discussions, and a desire to harmonize with WMO GAW, a different method was selected in the end. The method below and the method described by Currie (1968) and others are both based on normally distributed data, and the numerical difference in the resulting detection limits comes from the different factors by which the standard deviation is multiplied. The method described by Currie (1968) will in our case give a detection limit about fifty per cent higher than the one defined below.

In order to make a detection limit relevant to a complete measurement process, it must be calculated from field blank samples.

It should be emphasized that when concentrations fall below the detection limit, the calculated concentrations should still be reported when possible, and not given as "less than the detection limit". A data user should normally be able to take such data into account, and at the same time be aware of their limitations.


5.7.1  Basic assumption

The reported EMEP data are assumed to be the differences between measurements made on normal, exposed samples and blanks, e.g. field blank samples. A field blank sample is defined as a sample which has been prepared, handled, transported, and analysed as a normal sample in every way, except that it has not intentionally been exposed, and therefore should not contain the substance to be measured.

The blank values should be aggregated to averages before being used to correct measurement results. A possible seasonal variation of the blank samples needs to be investigated, and if a variation is present, the blank samples should be aggregated as seasonal or half-yearly averages, or better medians, rather than as annual averages before being used in corrections.

Unexpectedly high blank values point to a measurement problem which has to be identified and solved. Such blank values shall not be used for correction of measurements or calculation of detection limits. The related measurement results must be flagged as less accurate than normal. As an alternative to a complete rejection of the outliers, a "Winsorization" procedure is recommended.

It is assumed that the distribution of the blanks does not deviate too much from a normal distribution.


5.7.2  Statistical considerations

5.7.2.1  Data distribution

It is well known that air pollution data have skew distributions, usually closer to lognormal than to normal distributions. It was assumed above that the data have approximate normal distributions. This is a frequently made assumption when detection limits are discussed and simple statistics based on normal distributions give generally reasonable results even if the distribution is not normal in a strict sense.

The example presented in Figure 5.7.1 is based on field blanks of sulphur dioxide on impregnated filters from the Birkenes site in Norway in 1994. The distribution looks bimodal due to a pile-up of blanks in the low-concentration end, around and partly below the detection limit of the analytical method applied (ion chromatography). This distribution is, however, accepted as a sample from a normal distribution when tested with Kolmogorov-Smirnov statistics. This only illustrates that assumptions about normal distributions of the blanks may be reasonable, although not generally valid.



Figure 5.7.1: Frequency of field blanks for SO2 at Birkenes in 1994.
Unit: µg S/KOH impregnated filter.

5.7.2.2  Detection limit

The detection limit is taken to be three times the standard deviation of the blank results. The probability of having a blank of this size is less than 0.5 per cent.

The detection limit can be calculated:

          Ld  =  3.0 × Sb

where the standard deviation of the blanks is defined as

         Sb = √( Σ (Ci - C̄)² / (N - 1) )

N is the number of field blanks, Ci is the concentration of the relevant substance in the ith field blank, C̄ is the field blank average after elimination of "extreme" blank values, and M is the median value of the blanks.
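As an illustration only (the function name and the use of the Python statistics module are assumptions for this sketch, not part of the Manual), the detection limit can be computed from a list of field blank values as follows:

import statistics

def detection_limit(blanks):
    """Detection limit as three times the standard deviation of the field
    blanks (Ld = 3.0 x Sb); 'extreme' blanks should be removed or Winsorized
    (Section 5.7.2.3) before this function is applied."""
    sb = statistics.stdev(blanks)   # sample standard deviation, N - 1 in the denominator
    return 3.0 * sb

Dividing the result by the sampled air volume, e.g. 24 m3 per day as in Section 5.7.3, converts the detection limit from an amount per filter to an air concentration.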

           

5.7.2.3  Winsorization procedure

The following procedure may be followed to “Winsorize” outliers, e.g. see Gilbert (1987). The outliers may be identified by inspection and experience rather than by statistical procedures.

As an example, the occurrence of 2 extremely high blank values is assumed. These values are then replaced by the largest value among the remaining blanks and, to keep the procedure symmetric, the 2 smallest values are replaced by the smallest value among the remaining blanks, before the standard deviation is recalculated.

The Winsorized standard deviation, Sw, is

         Sw = Sb (n - 1) / (v - 1)

where n is the number of blanks and Sb is the standard deviation of the new data set after the replacements described above. The number of data not replaced is v = n - 2k, with k outliers (k is 2 in the example above).
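A possible implementation of this procedure, following the symmetric replacement described by Gilbert (1987), is sketched below. The function name, the default k = 2 and the use of the Python statistics module are assumptions made for the example.

import statistics

def winsorized_std(blanks, k=2):
    """Winsorized standard deviation: the k largest and k smallest values are
    replaced by their nearest remaining neighbours, and the standard deviation
    of the modified data set is rescaled by (n - 1)/(v - 1)."""
    x = sorted(blanks)
    n = len(x)                        # requires n > 2k + 1
    x[:k] = [x[k]] * k                # replace the k smallest values
    x[n - k:] = [x[n - k - 1]] * k    # replace the k largest values
    sb = statistics.stdev(x)          # standard deviation of the Winsorized data
    v = n - 2 * k                     # number of data not replaced
    return sb * (n - 1) / (v - 1)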


5.7.3  Calculation example for air samples

Figure 5.7.1 presents the field blank results for the sulphur dioxide measurements at Birkenes (NO1) in 1994. The unit is µg S/filter; the typical air volume is 24 m3, a normal air volume with the type of equipment (NILU EK air sampler) used at Norwegian sites. A one-week supply of filterpacks is sent to the site every week and returned and analysed after one week. Figure 5.7.2 shows the variation of the concentrations of sulphur dioxide in the field blanks through 1994. It is recommended to perform a separate calculation for each quarter.

 

 

 

 

 

 

 

Figure 5.7.2: SO2 field blanks from Birkenes in 1994.


The results obtained with the data presented in Figure 5.7.2 are given in Table 5.7.1, based on 24 m3 air/day.


Table 5.7.1: Blank results and detection limits for SO2 at Birkenes in 1994.

 

            jan-mar               apr-jun               jul-sep               oct-dec
            µg S/filter  µg S/m3  µg S/filter  µg S/m3  µg S/filter  µg S/m3  µg S/filter  µg S/m3

Average     0.149        <0.01    0.418        0.02     0.574        0.02     0.544        0.02
Sb          0.129        <0.01    0.258        0.01     0.222        0.01     0.166        <0.01
M           0.071        <0.01    0.364        0.02     0.609        0.03     0.521        0.02
LD                       0.02                  0.03                  0.03                  0.02


5.8  Training of personnel

Training courses may be organized by the CCC in cooperation with other institutions.


5.8.1  Training of station personnel

Proper training and instruction of site operators is of great importance for the data quality, and all new operators should receive their instructions directly from the scientist responsible for the performance of the station. The training and instruction should take place at the actual measuring station, if necessary after some basic instruction at the laboratory. The operator's responsibilities at the site must correspond with his/her technical qualifications, and the operation of complicated sampling equipment may require technical education.


5.8.2  Training of laboratory personnel

Laboratory personnel should be properly trained in sample handling and analytical work before they are allowed to carry out the routine analyses. Before being assigned on a routine basis to new instruments or methods, they should preferably work on split samples in order to ensure that the requirements for precision and accuracy are met.

 

5.9  References

CEN (1989) General criteria for the operation of testing laboratories. Brussels (EN 45001).

EMEP (1995) The status of monitoring within EMEP: Quality of measurements and data completeness. Monitoring strategy. Kjeller, Norwegian Institute for Air Research (EMEP/CCC-Note 3/95).

EURACHEM/WELAC Chemistry Working Group Secretariat (1993) Accreditation for chemical laboratories. Guidance on the interpretation of the EN 45000 series of standards and ISO/IEC Guide 25. Teddington, United Kingdom (WELAC WGD 2/EURACHEM GD 1).

Gilbert, R.O. (1987) Statistical methods for environmental pollution monitoring. New York, Van Nostrand Reinhold.

Hjellbrekke, A.G., Lövblad, G., Sjöberg, K., Schaug, J. and Skjelmoen, J.E. (1995) Data Report 1993. Part 1: Annual summaries. Kjeller, Norwegian Institute for Air Research (EMEP/CCC-Report 7/95).

ISO (1990) General requirements for the competence of calibration and testing laboratories. Geneva (ISO/IEC Guide 25).

ISO (1994) Quality management and quality assurance standards. Part 1: Guidelines for selection and use. Geneva (ISO 9000-1).

ISO (1994) Quality management and quality assurance vocabulary. Geneva (ISO 8402).

ISO (1994) Quality management and quality system elements. Part 1: Guidelines. Geneva (ISO 9004-1).

ISO (1991) Quality management and quality system elements - Part 2: Guidelines for services. Geneva (ISO 9004-2).

ISO (1993) Quality management and quality system elements - Part 4: Guidelines for quality improvement. Geneva (ISO 9004-4).

Hanssen, J.E. and Skjelmoen, J.E. (1995) The fourteenth intercomparison of analytical methods within EMEP. Kjeller, Norwegian Institute for Air Research (EMEP/CCC-Report 3/95).

Schaug, J. (1988) Quality assurance plan for EMEP. Lillestrøm, Norwegian Institute for Air Research (EMEP/CCC-Report 1/88).

Sirois, A. and Vet, R.J. (1994) Estimation of the precision of precipitation chemistry measurements in the Canadian air and precipitation monitoring network (CAPMoN). In: EMEP Workshop on the accuracy of measurements. Passau, 1993. Edited by T. Berg and J. Schaug. Kjeller, Norwegian Institute for Air Research (EMEP/CCC-Report 2/94). pp. 67-85.

Sirois, A. and Vet, R.J. (1994) The comparability of precipitation chemistry measurements between the Canadian air and precipitation monitoring network (CAPMoN) and three other North American networks. In: EMEP Workshop on the accuracy of measurements. Passau, 1993. Edited by T. Berg and J. Schaug. Kjeller, Norwegian Institute for Air Research (EMEP/CCC-Report 2/94). pp. 88-114.

Vet, R.J. and McNaughton, D. (1994) The precision, comparability and uncertainty of air and precipitation chemistry measurements made during the Canada-United States Eulerian Model Evaluation Field Study (EMEFS). In: EMEP Workshop on the accuracy of measurements. Passau, 1993. Edited by T. Berg and J. Schaug. Kjeller, Norwegian Institute for Air Research (EMEP/CCC-Report 2/94). pp. 115-134.

WMO (1992) Report of the WMO meeting of experts on the quality assurance plan for the Global Atmosphere Watch. Garmisch-Partenkirchen, Germany, 26–30 March 1992. Geneva (WMO/GAW No. 80).

WMO (1994) Report of the workshop on precipitation chemistry laboratory techniques. Hradec Kralove, Czech Republic, 18–21 October 1994. Edited by V. Mohnen, J. Santroch and R. Vet. Geneva (WMO/GAW No. 102).


Last revision: November 2001