Methods

Supported Devices

EC-PeT supports up to four different devices taking measurements at high frequency (referenced as fast data):

  • A three-dimensional sonic anemometer

  • A separate fast thermometer

  • A fast hygrometer

  • A fast CO₂ sensor

In addition, EC-PeT supports up to three conventional sensors that can provide potentially more stable reference values for the mean values (referenced as slow data):

  • A thermometer

  • A barometer

  • A hygrometer

Each sensor is optional, except for the anemometer. If multiple measurements are taken by the same device (for example: humidity and CO₂), they are configured as if they were taken by individual devices.

Quality Control and Quality Assessment (QC/QA)

EC-PeT uses two approaches to assess the quality of the resulting fluxes:

Error Estimation Approach

As EC-PACK does [Van Dijk et al., 2004], EC-PeT calculates fluxes and their errors, similar to [Lenschow et al., 1994], for every requested interval. This quantifies the uncertainty of each result, enabling the user to select usable values according to their own needs or to propagate the error in further calculations.

Quality Flag Approach

A different concept, described by [Foken and Wichura, 1996] and [Vickers and Mahrt, 1997], is to assign a quality index to each interval. Both approaches examine the data of each interval using a series of tests, whose results are then combined into a final verdict. Whereas Foken & Wichura use a quality scale based on personal experience, Vickers & Mahrt use a simpler concept of flagging records.

EC-PeT adopts the simpler concept of Vickers & Mahrt, using only three flag values:

  • 0 = good

  • 1 = suspicious

  • 2 = bad

Quality Control Tests

The tests implemented fall into two categories:

  1. Preprocessor tests - operate on individual measured input variables

  2. Postprocessor tests - operate on mean values, fluxes and other turbulence characteristics calculated by EC-PACK

Preprocessor Tests

Preprocessor Quality Control Tests

| Nr | Code | Test | Description |
|----|------|------|-------------|
| 1 | spk | Standard deviation spike | Detects and removes spikes based on the standard deviation over a moving window, applied over multiple passes [Vickers and Mahrt, 1997] |
| 2 | res | Amplitude resolution | Detects poor instrument or data resolution by detecting gaps in the value distribution [Vickers and Mahrt, 1997] |
| 3 | drp | Dropouts | Detects “stuck” instruments by looking for repeated values [Vickers and Mahrt, 1997] |
| 4 | lim | Absolute limits | Detects and removes unrealistic or physically impossible values [Vickers and Mahrt, 1997] |
| 5 | mom | Higher moments | Rejects records with skewness or kurtosis out of range [Vickers and Mahrt, 1997] |
| 6 | dis | Discontinuities | Rejects records containing discontinuities, detected by a Haar transform, or containing coherent changes [Vickers and Mahrt, 1997] |
| 7 | nst | Nonstationarity | Rejects records with insufficient stationarity of the horizontal wind [Vickers and Mahrt, 1997] |
| 8 | lag | Lag correlation | Rejects records where the maximum correlation between anemometer and gas analyzer measurements occurs at non-zero lag [Vickers and Mahrt, 1997] |
| 9 | chr | Change rate spike | Detects and removes spikes where the variable changes from one value to the next by more than a threshold value [Kolle and Rebmann, 2007] |
| 10 | mad | MAD spike | Detects and removes spikes using the median absolute deviation |
| 11 | fws | Flux stationarity | Rejects records where the stationarity of covariances is poor [Foken and Wichura, 1996] |
| 12 | cot | Covariance trend relation | Rejects records where detrending of input variables changes covariances too much |
| 13 | bet | Beta distribution | Rejects records where input variables violate the assumption of a Gaussian distribution |
| 14 | vst | Variance stationarity | Rejects records where variances are nonstationary |
| 15 | ftu | Fraction of turbulence | Rejects records where turbulence is intermittent |
| 16 | srv | Surviving values | Rejects records where too few data survive despiking and limit checks |

Postprocessor Tests

Postprocessor Quality Control Tests

| Nr | Code | Test | Description |
|----|------|------|-------------|
| 1 | mnw | Mean w | Rejects records where the mean vertical wind after applying wind field correction exceeds limits |
| 2 | itc | Integral turbulence characteristics | Rejects records if the difference between measured and calculated integral characteristics is too large [Foken and Wichura, 1996] |
| 3 | exs | Excluded sector | Rejects records if the mean horizontal wind comes from one or more sectors defined as bad |
| 4 | fkm | Footprint fraction | Rejects records where insufficient flux originates from the area of interest |
| 5 | exe | Excess error | Rejects records where flux uncertainties exceed acceptable thresholds |

Detailed Test Descriptions

MAD Spike Detection

Median and median absolute deviation (MAD) provide estimators for a quantity and its dispersion that are much more robust against outliers than mean and standard deviation. Hence, several authors [Lee and Kassam, 1985] suggest detecting outliers using the record MAD.

Since MAD is robust against extreme outliers at the fringes of the value distribution, MAD does not change if outliers are removed. Thus, spikes can be removed in a single pass, in contrast to the standard deviation method spk, which iterates as long as needed. Although this method has been found useful for finding out-of-range flux values [Papale et al., 2006], its usefulness for quality control of raw data still has to be proven.

This test calculates median and MAD of a variable over the full record. Then all values that differ from the median by more than madth times MAD are removed as spikes. A record is hard flagged if a fraction greater than madcr of all values is removed.
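As a minimal sketch of this test, the MAD criterion can be written in a few lines; `madth` and `madcr` are the parameters named above, but their default values here are purely illustrative:

```python
import numpy as np

def mad_despike(x, madth=7.0, madcr=0.01):
    """Single-pass MAD despiking: values farther than madth * MAD
    from the record median are removed (set to NaN); the record is
    hard flagged if more than the fraction madcr was removed."""
    x = np.asarray(x, dtype=float)
    med = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - med))
    spikes = np.abs(x - med) > madth * mad
    hard_flag = bool(spikes.mean() > madcr)
    return np.where(spikes, np.nan, x), hard_flag

# A smooth record with one obvious outlier at the end
record = np.concatenate([np.sin(np.linspace(0.0, 6.0, 200)), [50.0]])
clean, flagged = mad_despike(record)
```

Because the median and MAD are computed once over the whole record, no second pass is needed after the spikes are removed.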

Covariance Trend Relation

There is ongoing controversy over whether linear detrending of raw data before calculation of covariances is useful or even permissible. This test is designed to exclude records that could be questioned by either party.

This test is only applied to scalar quantities. It calculates covariance between vertical wind and the quantity once with detrending and once without. If the relative difference of both results differs by more than cotlimit the record is rejected.
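A minimal sketch of this comparison (the `cotlimit` name follows the text; its default value here is illustrative):

```python
import numpy as np

def cot_test(w, s, cotlimit=0.2):
    """Reject (True) a record whose w-scalar covariance changes by
    more than the relative fraction cotlimit when the inputs are
    linearly detrended first."""
    def detrend(x):
        t = np.arange(x.size)
        slope, offset = np.polyfit(t, x, 1)
        return x - (slope * t + offset)

    cov_raw = np.cov(w, s)[0, 1]
    cov_det = np.cov(detrend(w), detrend(s))[0, 1]
    return abs(cov_raw - cov_det) / abs(cov_raw) > cotlimit

rng = np.random.default_rng(42)
w = rng.standard_normal(1000)
s = 0.5 * w + 0.1 * rng.standard_normal(1000)
trend = np.linspace(0.0, 5.0, 1000)
ok = cot_test(w, s)                   # stationary record
bad = cot_test(w + trend, s + trend)  # strong common trend inflates the raw covariance
```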

Beta Distribution Test

This test was suggested by [Graf et al., 2010], who showed that the skewness-kurtosis relation of their data can be described by the beta distribution in more than 90% of all cases. Accordingly, a record is hard flagged if its point in the skewness-kurtosis plane lies more than a given tolerance off the curve describing the beta distribution. EC-Frame contains a slightly improved version supplied by A. Graf (personal communication).

The test fits parameters \(p\) and \(q\) to estimate the probability density function (PDF) of the beta distribution defined by:

\[p_{\mathrm{df}}(x) = \frac{\Gamma(p+q)}{\Gamma(p)\,\Gamma(q)} \, \frac{(x-x_{\mathrm{min}})^{p-1} \, (x_{\mathrm{max}}-x)^{q-1}}{(x_{\mathrm{max}}-x_{\mathrm{min}})^{p+q-1}}\]

Variance Stationarity

Since sudden changes of the mean are not exclusively bound to instrument errors but also occur in natural flows [kek_map00], the discontinuities test by [Vickers and Mahrt, 1997] may be too restrictive. In particular, if such changes are associated with flux contributions at low wavenumbers [Mahrt, 2010], it might prevent proper energy balance closure. However, if turbulence becomes intermittent, the time fraction of turbulent and non-turbulent flow may significantly influence flux calculation [Drüe and Heinemann, 2007].

This test rejects records if the variance of a variable exhibits strong sudden changes. Similar to the discontinuities test dis, the variance for both halves of a 1000-point window moving in 500-point steps is calculated. A record is flagged if both variances differ by more than a factor vstlim anywhere in the record.
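The moving-window comparison can be sketched as follows; the window and step sizes follow the text, while the `vstlim` default is illustrative:

```python
import numpy as np

def vst_test(x, vstlim=3.0, window=1000, step=500):
    """Flag (True) a record whose variance changes suddenly: the
    variances of the two halves of a moving window are compared and
    the record is flagged if they differ by more than a factor
    vstlim anywhere in the record."""
    half = window // 2
    for start in range(0, x.size - window + 1, step):
        v1 = np.var(x[start:start + half])
        v2 = np.var(x[start + half:start + window])
        lo, hi = min(v1, v2), max(v1, v2)
        if lo > 0.0 and hi / lo > vstlim:
            return True
    return False

rng = np.random.default_rng(1)
steady = rng.standard_normal(3000)                  # stationary variance
jump = np.concatenate([rng.standard_normal(1500),
                       5.0 * rng.standard_normal(1500)])  # variance jumps by 25x
```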

Fraction of Turbulence

Although intermittency does not change flux calculation in principle, it leads to erroneous results in spectral corrections [Massman and Lee, 2002], and it can render the lag correlation test or covariance maximization useless due to an indistinct autocorrelation.

This test calculates covariances for windows of length Lf seconds and sorts the values by descending magnitude. The number of subrecords that cumulatively account for 90% of the full-record covariance, divided by the total number of subrecords, gives the turbulent fraction. This value is scaled by 1/0.9 and limited to 1.0 so that it spans the range from zero to unity.
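A sketch of this calculation; for simplicity it uses a fixed number of subrecords in place of the window length `Lf` in seconds:

```python
import numpy as np

def turbulent_fraction(w, c, n_sub=30):
    """Sort subrecord covariances by descending magnitude, find how
    many are needed to accumulate 90% of the full-record covariance,
    and scale the resulting fraction by 1/0.9 (capped at 1)."""
    n = w.size // n_sub * n_sub
    wf = w[:n].reshape(n_sub, -1)
    cf = c[:n].reshape(n_sub, -1)
    covs = ((wf - wf.mean(axis=1, keepdims=True)) *
            (cf - cf.mean(axis=1, keepdims=True))).mean(axis=1)
    total = np.mean((w[:n] - w[:n].mean()) * (c[:n] - c[:n].mean()))
    # Each subrecord contributes covs_i / n_sub to the record covariance
    contrib = covs[np.argsort(-np.abs(covs))] / n_sub
    acc = np.cumsum(contrib)
    target = 0.9 * total
    k = n_sub if acc[-1] < target else int(np.argmax(acc >= target)) + 1
    return min(1.0, (k / n_sub) / 0.9)

rng = np.random.default_rng(0)
w = rng.standard_normal(3000)
c_cont = 0.5 * w + 0.5 * rng.standard_normal(3000)  # turbulence throughout
c_int = 0.5 * rng.standard_normal(3000)
c_int[:600] += 2.0 * w[:600]                        # flux only in the first fifth
frac_cont = turbulent_fraction(w, c_cont)
frac_int = turbulent_fraction(w, c_int)
```

For continuous turbulence the fraction approaches unity, while an intermittent record, where only a few subrecords carry the flux, yields a small value.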

Surviving Values

This test flags records if, after deleting values flagged by the instruments, deleting values outside the absolute limits, and removing spikes, less than a predefined fraction minsurv1/minsurv2 of values is left. Two values may be supplied that serve as thresholds for a soft (1) or hard (2) flag, respectively.

Excluded Sector

At least since the Kansas experiment, flow distortion by the structures supporting the turbulence measurements has been an issue [Wyngaard, 1988]. Especially if sensors are in the lee of a measurement tower, a range of wind directions is associated with significantly altered readings [Dabberdt, 1968]. A Campbell CSAT3, for example, cannot by design measure correctly when the flow comes from a 20° sector behind its support [Friebel et al., 2009].

To remove such records, where measurements are surely disturbed by nearby obstacles, this test allows definition of one or more excluded sector(s). A hard flag (2) is raised if the mean wind direction of a record falls into one of these sectors.
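A minimal sector check might look as follows; the sector list is illustrative and EC-PeT's actual configuration format may differ:

```python
def in_excluded_sector(wind_dir, sectors):
    """Return True (hard flag) if the mean wind direction falls into
    any excluded sector.  Sectors are (from, to) pairs in degrees,
    taken clockwise from `from` to `to`, so sectors crossing north
    (e.g. (350, 10)) are handled as well."""
    wd = wind_dir % 360.0
    for lo, hi in sectors:
        lo, hi = lo % 360.0, hi % 360.0
        if lo <= hi:
            if lo <= wd <= hi:
                return True
        elif wd >= lo or wd <= hi:
            return True
    return False

# Illustrative sectors: behind a tower boom, plus one crossing north
sectors = [(170.0, 190.0), (350.0, 10.0)]
behind_tower = in_excluded_sector(180.0, sectors)
north_gap = in_excluded_sector(5.0, sectors)
clear = in_excluded_sector(90.0, sectors)
```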

Footprint Fraction

In order to ensure sufficient fetch for the measurements, the matching of flux footprint and the area of interest is assessed using an analytical footprint model. For EC-PeT, the model by [Kormann and Meixner, 2001] was chosen, since its usability for this purpose has been proven by several authors, e.g. [Neftel et al., 2008].

In the literature, two variants of defining the area of interest are found. One group defines a grid of variables describing land use, roughness length, and so on, e.g. [Göckede et al., 2004]. The other group uses polygons to define (fairly) homogeneous source areas, e.g. [Neftel et al., 2008]. For simplicity, EC-Frame allows only the definition of one single polygon that represents the area of interest (AOI). For each record, the polygon is rotated so that the mean flow comes from the positive \(x\) direction of the integration plane.

To ensure adequate accuracy while keeping the computational load reasonable, the model is integrated over a cone centered on the \(x\) axis. The width of this cone is chosen to cover 99.9% (\(3\sigma\)) of the flux. Inside this cone, the footprint is integrated over all grid cells whose center is inside the AOI polygon. The integration is stopped at the distance \(x\) where more than 99% of the footprint has been covered, or where \(x\) exceeds the maximum extension of the AOI polygon in the \(x\) direction, whichever occurs first. The grid spacing is chosen as \(0.125\) times the distance xmax of the maximum of the footprint function \(\phi\).

A record is hard flagged if the flux fraction stemming from the AOI is below the specified threshold.

Excess Error

A simple method to accept or reject values whose uncertainty is known is to remove values with “a large error”. However, if values vary strongly in magnitude, neither an absolute nor a relative error can serve as the sole criterion.

To allow for the strongly varying magnitude of fluxes, EC-PeT defines the error threshold for a variable \(\zeta\) as:

\[u_{\mathrm{max}}(\zeta) = \sqrt{u_{\mathrm{min}}^2 + (\zeta \cdot u_{\mathrm{fac}})^2}\]

where \(u_{\mathrm{min}}\) is the absolute error threshold (at \(\zeta=0\)) and \(u_{\mathrm{fac}}\) the relative error threshold (approached for large \(\zeta\)).

A record is hard flagged (2) if the flux error \(d\zeta\) calculated by EC-PACK exceeds \(u_{max}(\zeta)\).
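The threshold and flag decision translate directly into code; `umin` and `ufac` correspond to \(u_{\mathrm{min}}\) and \(u_{\mathrm{fac}}\), and the example values are illustrative:

```python
import math

def excess_error_flag(zeta, dzeta, umin, ufac):
    """Hard-flag (2) a value zeta whose error dzeta exceeds
    u_max = sqrt(umin^2 + (zeta * ufac)^2); otherwise flag 0."""
    u_max = math.hypot(umin, zeta * ufac)
    return 2 if abs(dzeta) > u_max else 0

# Illustrative thresholds: 5 units absolute, 20% relative
flag_small = excess_error_flag(2.0, 4.0, umin=5.0, ufac=0.2)     # near-zero flux, modest error
flag_large = excess_error_flag(100.0, 30.0, umin=5.0, ufac=0.2)  # error far above 20%
```

The quadratic combination lets the absolute threshold dominate near zero flux while the relative threshold takes over for large fluxes.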

Planar Fit Method

The planar fit method [Wilczak et al., 2001] is used for coordinate rotation to align the coordinate system with the mean flow. This method assumes that the setup has a stationary misalignment and estimates this misalignment from the collection of run-mean velocity vectors.

The planar fit method can be extended to effectively reproduce the results of the triple-tilt-correction, with the advantage that the angles of different runs and of different setups are comparable.

Although the geometric conditions around the sensor setup are almost stationary, they may change gradually (e.g. grass growth) or suddenly (e.g. harvesting crops, moving instruments during maintenance). To account for this, the fit is usually performed at intermediate intervals, such as one week or one fortnight, or between maintenance visits, rather than over the entire measurement period.
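A least-squares sketch of the planar fit: the plane \(\bar w = b_0 + b_1 \bar u + b_2 \bar v\) is fitted to the collection of run-mean velocity vectors, and the tilt follows from the plane's unit normal (angle conventions vary between implementations, so only the normal is returned here):

```python
import numpy as np

def planar_fit(u_mean, v_mean, w_mean):
    """Fit the plane w = b0 + b1*u + b2*v through the cloud of
    run-mean velocity vectors (Wilczak et al., 2001) and return the
    plane offset b0 and the unit normal of the fitted plane, from
    which the tilt angles follow."""
    A = np.column_stack([np.ones_like(u_mean), u_mean, v_mean])
    (b0, b1, b2), *_ = np.linalg.lstsq(A, w_mean, rcond=None)
    normal = np.array([-b1, -b2, 1.0])
    return b0, normal / np.linalg.norm(normal)

# Synthetic run means lying on a known, slightly tilted plane
rng = np.random.default_rng(0)
u = rng.uniform(1.0, 5.0, 50)
v = rng.uniform(-2.0, 2.0, 50)
w = 0.02 + 0.05 * u - 0.03 * v + 0.001 * rng.standard_normal(50)
b0, normal = planar_fit(u, v, w)
expected = np.array([-0.05, 0.03, 1.0])
expected /= np.linalg.norm(expected)
```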

Flux Calculation Recipe

Following [Van Dijk et al., 2004], the sequence of working steps can be summarized as follows:

  1. Sonic anemometer corrections: Wind speed and temperature measurements are corrected by removing known bias values (not yet covered by the calibration). Sonic temperature estimates are corrected for side-wind effects (in case this is not already done by the anemometer itself, as e.g. in the Campbell CSAT3).

    In the second iteration of this recipe, tilt corrections are applied to the raw data as part of the calibration. In both calibration iterations, all velocity vectors are corrected for flow-distortion.

  2. Provisional mean quantities: Calculated for each averaging interval.

  3. Hygrometer drift correction: Slow measurements of a reference hygrometer are used to assess whether the optical hygrometer suffers from calibration drift.

  4. Variance and covariance calculation: For each run, mean and (co-)variances of the calibrated quantities are estimated.

  5. Trend correction: Variances and covariances are corrected for linear additive trends (optional).

  6. Tilt correction: The effects of misalignment of the setup on mean quantities and (co-)variances are corrected using either:

    • pre-defined angles: fixed angles set in the configuration (not recommended).

    • double rotation: Yaw, pitch and roll are calculated separately for each averaging interval. It is assumed that the mean velocity per run cannot have a vertical component (i.e. \(\bar w = 0\)) and that lateral velocity correlations must vanish (i.e. \(\overline{v'w'} = 0\)).

    • planar fit: application of the angles determined from the planar fit.

  7. Second iteration: All previous steps (except the first two) are repeated, but now the tilt-corrections are carried out on the raw data. Tolerance estimates are generated for both mean quantities and all (co-)variances. To correctly perform tilt-corrections one would have had to record all possible third- and fourth-order correlations. Application of a second iteration eliminates the necessity for tilting of tolerances of covariances, because the tilting is now performed on the raw data.

  8. Humidity corrections: All mean values and (co-)variances which involve the sonic temperature are corrected for humidity effects. This correction is not applied to the raw data in the calibration process, because in practice the hygrometer may drop out for certain samples or even during (short) periods. Skipping the bad samples of the hygrometer will still permit for the reliable estimation of mean humidity and of covariances with humidity. Therefore humidity corrections which rely on these estimates can still be used, whereas individual samples can no longer be corrected.

  9. Temperature calibration (only if the thermocouple temperature Tcoup is available): After correction of the sonic temperature for humidity, the mean sonic temperature \(\bar T_s\) is compared with the mean thermocouple temperature \(\bar T_c\) (if available) and adjusted if necessary. A small error in the estimate of the acoustic path length can easily lead to a systematic error in the sonic temperature of several Kelvin. Possible errors in the estimate of the acoustic path length are eliminated by mapping \(\bar T_s\) onto \(\bar T_c\). This is done by multiplying all factors \(\bar T_s\) in the (co-)variances by \(\bar T_c / \bar T_s\).

  10. Oxygen correction: All (co-)variances involving humidity are corrected for the oxygen sensitivity of the optical hygrometer. The temperature estimates used in the oxygen correction procedure were corrected for humidity in a previous step. This indicates that the relations for estimating temperature and humidity are coupled and should therefore, in principle, be solved simultaneously. [Van Dijk et al., 2004] assume that a decoupled approach, which is first order in the errors involved, provides sufficiently accurate estimates when these errors are sufficiently small (corrections on corrections are considered second-order effects and consequently neglected).

  11. Frequency-response correction: The Moore/Horst model presented in [Van Dijk et al., 2004], following [Moore, 1986] and [Horst, 2000], is used to correct (co-)variances for all types of frequency-response errors. Half of the absolute correction made to each covariance is quadratically added to the tolerance estimate already made for that covariance.

  12. WPL correction: The Webb-velocity according to the relation presented in [Webb et al., 1980] is added to the direct estimate for the mean vertical velocity.

  13. Surface fluxes are estimated from the mean values and (co-)variances at measurement height \(z\) for scalars (H₂O, CO₂), for surface friction and for sensible heat.

  14. Tolerance levels are estimated for the surface fluxes. Since both the scalar flux \(F(\rho_\xi)\) and the surface friction \(\tau\) contain a term dependent on the mean vertical velocity \(\bar w\), one should take care in incorporating the tolerance of \(\bar w\) into the tolerance levels of the surface fluxes. Even when the mean vertical velocity is rotated out with a tilt correction, a statistical error in \(\bar w\) remains.

Alternative Stochastic Error Calculation

As an alternative to step 14, the stochastic error of eddy-covariance fluxes can be calculated using the algorithm of [Finkelstein and Sims, 2001]. The calculation of the stochastic error of eddy-covariance fluxes from raw data depends on the number of independent observations, which is not identical to the absolute number of observations \(n\) when time series are auto- and cross-correlated.

The integral time scale measures how long the variables remain correlated and therefore not independent, and it is often used to determine the number of independent observations. However, it is not always estimated reliably [Lenschow et al., 1994].

The Finkelstein-Sims method provides an alternative approach that accounts for the autocorrelation structure of the time series more robustly than traditional methods based on integral time scales.
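A direct sketch of the Finkelstein-Sims estimator, summing products of auto- and cross-covariance functions up to a maximum lag `m` (the lag limit here is illustrative):

```python
import numpy as np

def fs_flux_error(w, c, m=200):
    """One-sigma stochastic error of the w-c covariance following
    Finkelstein and Sims (2001):
    var(F) = (1/n) * sum_{h=-m..m} [g_ww(h) g_cc(h) + g_wc(h) g_cw(h)]
    where g is the (cross-)covariance function at lag h."""
    w = w - w.mean()
    c = c - c.mean()
    n = w.size

    def gamma(x, y, h):
        if h < 0:
            x, y, h = y, x, -h
        return np.dot(x[:n - h], y[h:]) / n

    var_f = sum(gamma(w, w, h) * gamma(c, c, h) +
                gamma(w, c, h) * gamma(c, w, h)
                for h in range(-m, m + 1)) / n
    return float(np.sqrt(max(var_f, 0.0)))

# For uncorrelated white noise the error approaches sqrt(var_w * var_c / n),
# i.e. about 0.014 for n = 5000 unit-variance series
rng = np.random.default_rng(3)
w = rng.standard_normal(5000)
c = rng.standard_normal(5000)
err = fs_flux_error(w, c, m=50)
```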

Iterative Correction Process

Steps 10-12 (Oxygen correction, Frequency-response correction, and WPL correction) are performed iteratively until a stable amount of correction is reached, as suggested by [Mauder et al., 2007]. This iterative approach is necessary because these corrections are interdependent:

  • The WPL correction for CO₂ flux depends on both sensible and latent heat fluxes

  • The Schotanus correction for sonic temperature depends on humidity measurements

  • The frequency response corrections affect covariances that are used in other corrections

The iteration continues until the relative change in corrections between successive iterations falls below a specified threshold (typically 0.1% for fluxes). [Mauder et al., 2007] found that:

  • The CO₂ flux shows the largest impact from iteration due to the interdependence of its WPL correction with both sensible and latent heat fluxes

  • The sensible heat flux shows the second largest impact (-1.5%) due to the strong interdependence of its Schotanus correction on the latent heat flux

  • The latent heat flux shows smaller impact (-0.3%) compared to CO₂ flux, because its WPL correction dependence on sensible heat flux is relatively smaller

  • The iteration increased the mean energy balance residual by 2.3%

Typically, convergence is achieved within 3-5 iterations for most atmospheric conditions.
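The iteration described above can be sketched generically; the callable, the dict layout and the toy coupling coefficients are illustrative, not EC-PeT's actual interface:

```python
def iterate_corrections(apply_corrections, fluxes0, rel_tol=1e-3, max_iter=20):
    """Repeat interdependent corrections until every flux changes by
    less than rel_tol (0.1%) between successive iterations."""
    fluxes = dict(fluxes0)
    for _ in range(max_iter):
        new = apply_corrections(fluxes)
        converged = all(
            abs(new[k] - fluxes[k]) <= rel_tol * max(abs(new[k]), 1e-12)
            for k in new)
        fluxes = new
        if converged:
            break
    return fluxes

# Toy corrections with couplings shaped like those described above
# (WPL of the CO2 flux depends on H and LE, Schotanus of H on LE);
# all numbers are invented for illustration only.
def toy_corrections(f):
    return {
        "LE": 100.0 - 0.003 * f["H"],
        "H": 50.0 - 0.02 * f["LE"],
        "Fc": -10.0 + 0.001 * (f["H"] + f["LE"]),
    }

result = iterate_corrections(toy_corrections, {"LE": 100.0, "H": 50.0, "Fc": -10.0})
```

Because the couplings are weak, the fixed point is reached after only a few iterations, consistent with the 3-5 iterations quoted above.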

Device-Specific Corrections

Frequency Response Correction

Due to lack of information on the cospectra of \(u\) and \(v\) with scalars and of different scalars, the frequency response correction is only applied to variances and to covariances of scalars with the vertical velocity \(w\). Currently no information is used on the exact arrangement of the sonic transducers.

The correction accounts for:

  • Path averaging effects: Spatial averaging over the sonic path length

  • Sensor response time: Finite response time of gas analyzers

  • Data acquisition effects: Anti-aliasing filters and sampling frequency

  • Tube attenuation: For closed-path gas analyzers

Thermo Couple

If no name for a reference temperature variable is given in the main configuration file (i.e. Tref_var is empty), it is assumed that the variable Tcouple_var contains the temperature itself (in degrees Celsius).

If a reference temperature is given, it is assumed that the variable Tcouple_var contains a voltage, which is the voltage between the actual sensor and the reference junction. The temperature is then determined as:

\[T = f(\mathrm{voltage} + f^{-1}(\mathrm{reference T}))\]

where \(f\) is the calibration function given in the calibration file, and \(f^{-1}\) is the inverse of the calibration function. It is assumed that the calibration function yields a temperature in degrees Celsius. The reference temperature is also supposed to be in degrees Celsius.

This procedure is in accordance with what is done in the P14 instruction of the Campbell dataloggers.
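The relation \(T = f(\mathrm{voltage} + f^{-1}(\mathrm{reference\ T}))\) translates directly into code; the linear calibration below is a toy stand-in for the polynomial defined in the calibration file:

```python
def thermocouple_temperature(voltage, t_ref, cal, inv_cal):
    """T = f(V + f^-1(T_ref)): convert the reference-junction
    temperature back to an equivalent voltage, add the measured
    thermocouple voltage, and apply the calibration function f."""
    return cal(voltage + inv_cal(t_ref))

# Toy linear calibration: 25 degC per mV (illustrative sensitivity,
# standing in for the real calibration polynomial)
cal = lambda v: 25.0 * v
inv_cal = lambda t: t / 25.0

# 0.2 mV measured against a reference junction at 20 degC
t = thermocouple_temperature(0.2, 20.0, cal, inv_cal)
```

With a linear calibration this reduces to adding the reference temperature, but the same formula remains valid for nonlinear calibration polynomials.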

Krypton Hygrometer

The calibration of the Krypton hygrometer as given in the manual of the instrument is not really compatible with the way in which calibrations are treated in the present program. According to the Krypton manual the calibration is:

\[\ln(V) = \ln(V_0) - \rho\, x\, K\]

where \(V\) is the output voltage, \(V_0\) is the voltage in absence of water vapour, \(x\) is the path length and \(K\) is the effective extinction coefficient. In order to convert this calibration to a (log) polynomial, the calibration curve has to be fitted to such a model. This implies that when the path length is changed, a new calibration curve needs to be fitted (no recalibration of the instrument is needed, of course). To cope with the ‘wet’ and ‘dry’ calibrations suggested by Campbell, a second-order log-polynomial can be used.

LiCor 7500 IR Hygrometer / CO₂ Sensor

This sensor offers a variety of output choices. EC-PeT expects densities in kg/m³. You can either do the ppm-to-density conversion in the datalogger program or in the calibration file of EC-PeT (or even through the gain/offset mechanism of NetCDF, but this is less transparent).

Sonic Temperature

If no thermocouple is present, the sonic temperature is used for the calculations where a temperature is needed. But in order to derive specific humidity from either Krypton or Lyman-alpha data, one needs a temperature. On the other hand one part of the ‘Schotanus correction’ [Schotanus et al., 1983] for the sonic temperature depends on specific humidity. Therefore, a temporary temperature is constructed with both parts of the Schotanus correction applied (where the humidity part is done iteratively: find the correct temperature and specific humidity for the given absolute humidity and sonic temperature). The resulting temperature is used to determine the specific humidity.

The actual humidity part of the Schotanus correction is applied only to the averaged data, not to the raw data.
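The iterative humidity part described above can be sketched as follows, using the ideal gas law for the moist-air density; the constants are standard values and the function is an illustration, not EC-PeT's implementation:

```python
def sonic_to_true_temperature(t_s, rho_v, p=101325.0, n_iter=10):
    """Iterate T = T_s / (1 + 0.51 q): the specific humidity q itself
    depends on T via the moist-air density, so T and q are solved
    together.  t_s in Kelvin, rho_v in kg/m^3, p in Pa."""
    R_D = 287.05   # specific gas constant of dry air, J/(kg K)
    R_V = 461.5    # specific gas constant of water vapour, J/(kg K)
    t = t_s
    for _ in range(n_iter):
        e = rho_v * R_V * t                  # water vapour pressure
        rho = (p - e) / (R_D * t) + rho_v    # moist-air density
        q = rho_v / rho                      # specific humidity
        t = t_s / (1.0 + 0.51 * q)
    return t, q

# Sonic temperature 300 K with 10 g/m^3 of water vapour
t_true, q = sonic_to_true_temperature(300.0, 0.01)
```

On return, the pair (t_true, q) is self-consistent: re-applying the correction with the converged humidity reproduces the sonic temperature.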

Wind Tunnel Calibrated Sonic

At KNMI sonic anemometers are sometimes calibrated in a wind tunnel to determine the directional dependence of the signal. For this purpose a special instrument type, QQType = 6, has been introduced. For this type six extra calibration factors have to be defined in the calibration file (add six lines at the end for QQExt6 to QQExt11).

The method is based on a wind tunnel calibration of the sonic. The real velocity components can be derived from the measured components and the real azimuth and elevation angle. But the latter are not known and have to be determined iteratively from the measured components.

The relationship between the real components and the measured components is:

\begin{align} U_{\text{real}} &= \frac{U_{\text{meas}}} {U_{C1} \cdot (1 - 0.5 \cdot ((Azi + \frac{Elev}{0.5236} \cdot U_{C2}) \cdot (1 - U_{C3} \cdot \left|\frac{Elev}{0.5236}\right|))^2)} \\ V_{\text{real}} &= V_{\text{meas}} \cdot (1 - V_{C1} \cdot \left|\frac{Elev}{0.5236}\right|) \\ W_{\text{real}} &= \frac{W_{\text{meas}}} {W_{C1} \cdot (1 - 0.5 \cdot (Azi \cdot W_{C2})^2)} \\ Azi &= \arctan(V/U) \\ Elev &= \arctan(W/\sqrt{U^2 + V^2}) \end{align}

where \(U_{C1}\), \(U_{C2}\), \(U_{C3}\), \(V_{C1}\), \(W_{C1}\), \(W_{C2}\) are fitting coefficients (corresponding to configuration values QQExt6, QQExt7, QQExt8, QQExt9, QQExt10 and QQExt11, respectively).

An azimuth angle of zero is supposed to refer to wind from the optimal direction (i.e. the ‘open’ side of the sonic). Samples with an absolute azimuth angle of more than 40 degrees are rejected.

Generic Sonic

In an attempt to make the software more flexible, a generic sonic has been defined, for which the most important coefficients can be set through calibration parameters. To this end, five extra calibration coefficients are defined: QQExt5, QQExt6, QQExt7, QQExt8, QQExt9.

Side Wind Correction

One of the instrument dependent corrections is the side-wind correction for the sonic temperature:

\[T_s = T + (\mathrm{humidity\ correction}) + A u^2 + B v^2 + C w^2\]

For a symmetric configuration, \(A\) and \(B\) are equal to \(\frac{1}{2}(1 + \cos^2\alpha)\) and \(C = \sin^2\alpha\), where \(\alpha\) is the angle of the sonic path relative to the vertical. For a sonic with orthogonal axes, one of which is vertical, \(A\), \(B\) and \(C\) are equal to \(2/3\).

Note

For a CSAT these factors should be set to zero, since the correction is done internally in the sonic.

Coordinate Definition

For the coordinate definition, two coefficients are defined:

  • QQExt8: defines the handedness of the coordinate system (\(1\) for right handedness like the CSAT, and \(-1\) for left handedness, like the METEK USA-1)

  • QQExt9: extra rotation to be done (in degrees); this appeared to be needed for the Kaijo Denkis

Physical Corrections

Webb-Pearman-Leuning (WPL) Correction

The WPL correction accounts for density fluctuations due to temperature and humidity variations that affect open-path gas analyzer measurements. The correction is applied to both CO₂ and H₂O fluxes.

For CO₂ flux:

\[F_c = \overline{w'\rho_c'} + \mu \frac{\rho_c}{\rho_d} \overline{w'\rho_v'} + (1 + \mu \sigma) \frac{\rho_c}{\bar T} \overline{w'T'}\]

For H₂O flux:

\[E = (1 + \mu \sigma) \left( \overline{w'\rho_v'} + \frac{\rho_v}{\bar T} \overline{w'T'} \right)\]

where:

  • \(\mu\) is the ratio of molecular weights of dry air to water vapor (\(\approx 1.61\))

  • \(\sigma = \rho_v / \rho_d\) is the ratio of water vapor density to dry air density

  • \(\rho_c\), \(\rho_v\) and \(\rho_d\) are the densities of CO₂, water vapor and dry air, respectively

  • \(\bar T\) is the mean air temperature

Schotanus Correction

The Schotanus correction accounts for humidity effects on sonic temperature measurements:

\[T = T_s / (1 + 0.51 \cdot q)\]

where \(T_s\) is the sonic temperature, \(T\) is the actual temperature, and \(q\) is the specific humidity.

High-Frequency Spectral Corrections

Spectral corrections account for high-frequency flux losses due to:

  • Sensor separation: Lateral and vertical separation between sensors

  • Path averaging: Spatial averaging over the sonic path

  • Sensor response time: Finite response time of gas analyzers

  • Tube effects: For closed-path systems, attenuation in sampling tubes

The corrections are typically applied as multiplication factors derived from theoretical or empirical cospectral models.

Uncertainty Estimation

EC-PeT provides comprehensive uncertainty estimates for all calculated fluxes following the approach by [Lenschow et al., 1994]. The uncertainty estimation includes:

Random Sampling Error

The random sampling error due to finite averaging time is estimated as:

\[\sigma_F^2 = \frac{2 \Delta t}{T} \int_0^{\infty} S_F(n) dn\]

where:

  • \(\Delta t\) is the sampling interval

  • \(T\) is the averaging time

  • \(S_F(n)\) is the flux cospectrum

  • \(n\) is the frequency

Systematic Errors

Systematic errors from various sources are estimated and propagated:

  • Calibration uncertainties: From sensor calibration procedures

  • Correction uncertainties: From physical correction procedures

  • Processing uncertainties: From data processing algorithms

The total uncertainty is computed by combining random and systematic components:

\[u_{total}^2 = u_{random}^2 + u_{systematic}^2\]

Quality Flag Integration

The final quality assessment integrates results from all quality control tests according to the flag rules defined in the configuration. The integration follows these principles:

  1. Test independence: Each test evaluates a specific aspect of data quality

  2. Hierarchical flagging: Tests can raise soft (1) or hard (2) flags

  3. Rule-based combination: User-defined rules determine how individual test results combine

  4. Flux interdependence: Quality flags for different fluxes can be linked

The resulting quality flags provide a standardized assessment of flux reliability, enabling:

  • Automated data screening: Systematic rejection of poor-quality data

  • Uncertainty-aware analysis: Weighting of data by quality

  • Quality reporting: Standardized quality metrics for data users

  • Method comparison: Consistent quality assessment across different processing approaches
