
More Than You Ever Wanted to Know About Calibrations, Part 6 (Continued) – Even More On Calibration Spacing

8 Sep 2023

In the previous half of this post, I covered a lot of ground on calibration spacing, using some simple models to compare the calibration used in EPA Method 1633, which places six of its seven points in the lower fifth of the calibration range, against a calibration with points spaced equally across the entire range. I showed that for equal-weighted calibrations, stacking points at the low end gives better results, but that once the calibrations are weighted, the exact placement of the points doesn’t seem to matter much. Those models assumed that every calibration point had the same ±10% relative error in response, and I teased at the end that if that assumption isn’t true the results may change, so let’s dig into this.

Most methods have routine calibration checks that generally fall toward the middle of the calibration range and have recovery requirements around ±10–30%. Some methods also require routine low-level calibration checks, and those often have wider windows, commonly ±50%. From this and anecdotal experience, I feel it’s safe to assume that there is usually more relative variation at the low end of most calibrations. With that in mind, I’ve adjusted the relative error for each calibration point as shown in Table 1.

| Method cal point | Relative error | Equal-spaced cal point | Relative error |
|------------------|----------------|------------------------|----------------|
| 0.2  | 50% | 0.2  | 50% |
| 0.5  | 40% | 10   | 20% |
| 1.25 | 30% | 20   | 20% |
| 2.5  | 30% | 30   | 20% |
| 5    | 20% | 40   | 10% |
| 12.5 | 20% | 50   | 10% |
| 62.5 | 10% | 62.5 | 10% |

Table 1 – Relative error amounts for calibration points in the 1633 method and equal-spaced calibrations.
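To make the setup concrete, here is a minimal sketch of how an error-ramped calibration like Table 1 could be simulated. The unit response factor, the uniform error distribution, and the seed are illustrative assumptions; the post doesn’t specify its actual simulation details.

```python
import numpy as np

rng = np.random.default_rng(42)

# Concentrations and ramped relative errors for the method calibration (Table 1)
method_conc = np.array([0.2, 0.5, 1.25, 2.5, 5, 12.5, 62.5])
method_err = np.array([0.50, 0.40, 0.30, 0.30, 0.20, 0.20, 0.10])

def simulate_responses(conc, rel_err, rng):
    """Perturb an assumed unit response factor by a uniform random error
    within +/- rel_err at each calibration point."""
    noise = rng.uniform(-rel_err, rel_err)  # element-wise bounds per point
    return conc * (1.0 + noise)

responses = simulate_responses(method_conc, method_err, rng)
```

Swapping in the equal-spaced concentrations and their error column from Table 1 generates the other calibration the same way.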

From there, I repeated the process outlined in the previous blog post, randomly generating five calibrations. The equal-weighted calibration curves are shown below in Figure 1.

Figure 1 – Equal-weighted curves for the 1633 method (top) and equal-spaced (bottom) calibrations.

From this we see results similar to the last blog post: the method calibration curves converge at the y-intercept, with a standard deviation of 0.2846, while the equal-spaced calibration curves do not converge and have a y-intercept standard deviation over three times higher, at 1.0562. Table 2 shows the % error at each calibration point for each of the calibrations.

% Error, equal-weighted fit, method calibration:

| Cal point | Cal 1 | Cal 2 | Cal 3 | Cal 4 | Cal 5 |
|-----------|-------|-------|-------|-------|-------|
| 0.2   | 95.8  | 47.3  | 17.8  | 243.0 | 88.2  |
| 0.5   | 0.6   | 12.7  | 33.1  | 114.9 | 34.7  |
| 1.25  | 2.7   | 6.1   | 9.3   | 19.7  | 31.1  |
| 2.5   | 2.1   | 35.4  | 19.4  | 29.3  | 2.5   |
| 5     | 12.2  | 0.6   | 12.6  | 6.5   | 1.6   |
| 12.5  | 4.6   | 7.8   | 8.1   | 17.0  | 7.4   |
| 62.5  | 0.1   | 0.3   | 0.3   | 0.7   | 0.3   |
| Total | 118.0 | 110.1 | 100.6 | 431.1 | 165.9 |

Average total: 185.1

% Error, equal-weighted fit, equal spacing:

| Cal point | Cal 1 | Cal 2 | Cal 3 | Cal 4 | Cal 5 |
|-----------|-------|-------|-------|-------|-------|
| 0.2   | 157.0 | 716.1 | 682.6 | 526.3 | 84.6  |
| 10    | 1.4   | 1.6   | 31.1  | 8.4   | 20.3  |
| 20    | 8.8   | 14.0  | 10.7  | 18.3  | 9.0   |
| 30    | 9.1   | 7.2   | 12.9  | 11.0  | 0.9   |
| 40    | 3.1   | 3.7   | 7.3   | 5.0   | 4.6   |
| 50    | 4.7   | 2.4   | 5.4   | 2.7   | 1.2   |
| 62.5  | 0.5   | 2.8   | 2.8   | 2.9   | 2.8   |
| Total | 184.5 | 747.8 | 752.8 | 574.5 | 123.5 |

Average total: 476.6

Table 2 - % Error for 1633 method and equal spaced calibrations using linear, equal weighted fits.
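The equal-weighted fits and back-calculated errors behind numbers like these can be sketched roughly as below. The random-response model and seed are illustrative assumptions, not the post’s actual code, so the specific values will differ.

```python
import numpy as np

rng = np.random.default_rng(7)
conc = np.array([0.2, 0.5, 1.25, 2.5, 5, 12.5, 62.5])
rel_err = np.array([0.50, 0.40, 0.30, 0.30, 0.20, 0.20, 0.10])

intercepts, total_errors = [], []
for _ in range(5):
    resp = conc * (1.0 + rng.uniform(-rel_err, rel_err))
    slope, intercept = np.polyfit(conc, resp, 1)  # equal-weighted linear fit
    intercepts.append(intercept)
    calc = (resp - intercept) / slope             # back-calculated concentrations
    total_errors.append(np.sum(100 * np.abs(calc - conc) / conc))

intercept_sd = np.std(intercepts, ddof=1)  # spread of the y-intercepts
mean_total_error = np.mean(total_errors)   # average total % error
```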

Again, we see results similar to the last blog post using equal variation. The higher variability in the y-intercept of the equal-spaced calibrations leads to increased error at the low end, which causes a much higher average total error. The low points may carry more error, but the equal-weighted fit gives them less influence on the curve, so there still need to be more points at the low end to accurately define the y-intercept and reduce low-end error. For equal-weighted curves, stacking your calibration points at the low end gives better results, even if those points have more relative error.

Of course, this wouldn’t be complete without evaluating the weighted curves as well, which are shown below in Figure 2.

Figure 2 – Weighted linear curves for 1633 method (top) and equal spaced (bottom) calibrations.

Visually, the curves again look very similar to what we saw in the previous blog post, with all of them converging at the y-intercept. Again, we would expect less error at the low end due to this, and the error data in Table 3 seems to verify this.

% Error, 1/x weighting, method calibration:

| Cal point | Cal 1 | Cal 2 | Cal 3 | Cal 4 | Cal 5 |
|-----------|-------|-------|-------|-------|-------|
| 0.2   | 44.1 | 12.6 | 18.0  | 4.6  | 2.4  |
| 0.5   | 20.4 | 26.0 | 33.0  | 22.7 | 0.6  |
| 1.25  | 10.1 | 11.1 | 9.2   | 15.7 | 17.6 |
| 2.5   | 5.4  | 33.0 | 19.5  | 14.5 | 8.2  |
| 5     | 13.3 | 1.4  | 12.7  | 12.7 | 0.6  |
| 12.5  | 4.5  | 7.7  | 8.1   | 17.6 | 7.3  |
| 62.5  | 0.6  | 0.7  | 0.2   | 4.1  | 1.5  |
| Total | 98.4 | 92.5 | 100.7 | 91.9 | 38.2 |

Average total: 84.3

% Error, 1/x weighting, equal spacing:

| Cal point | Cal 1 | Cal 2 | Cal 3 | Cal 4 | Cal 5 |
|-----------|-------|-------|-------|-------|-------|
| 0.2   | 2.1  | 15.9 | 1.9  | 8.2  | 8.3  |
| 10    | 3.5  | 8.2  | 20.9 | 1.4  | 19.0 |
| 20    | 8.3  | 17.2 | 12.5 | 19.4 | 9.3  |
| 30    | 9.2  | 7.4  | 12.4 | 10.6 | 0.8  |
| 40    | 2.8  | 2.7  | 5.9  | 4.0  | 4.4  |
| 50    | 5.1  | 0.6  | 6.9  | 1.3  | 0.9  |
| 62.5  | 0.0  | 5.5  | 4.9  | 4.5  | 3.1  |
| Total | 31.1 | 57.6 | 65.4 | 49.4 | 45.9 |

Average total: 49.9

% Error, 1/x² weighting, method calibration:

| Cal point | Cal 1 | Cal 2 | Cal 3 | Cal 4 | Cal 5 |
|-----------|-------|-------|-------|-------|-------|
| 0.2   | 10.6 | 9.7  | 12.7 | 6.9  | 1.8  |
| 0.5   | 24.6 | 26.4 | 33.5 | 21.2 | 0.5  |
| 1.25  | 5.9  | 10.7 | 8.5  | 15.0 | 17.6 |
| 2.5   | 1.4  | 33.3 | 20.2 | 17.6 | 8.4  |
| 5     | 4.2  | 0.7  | 13.8 | 10.0 | 0.7  |
| 12.5  | 12.9 | 6.8  | 6.4  | 14.7 | 7.5  |
| 62.5  | 9.8  | 1.6  | 1.8  | 7.9  | 1.3  |
| Total | 69.5 | 89.1 | 97.0 | 93.3 | 37.7 |

Average total: 77.3

% Error, 1/x² weighting, equal spacing:

| Cal point | Cal 1 | Cal 2 | Cal 3 | Cal 4 | Cal 5 |
|-----------|-------|-------|-------|-------|-------|
| 0.2   | 0.0  | 0.2  | 0.2  | 0.0  | 0.2  |
| 10    | 3.2  | 6.0  | 20.7 | 2.6  | 18.0 |
| 20    | 8.7  | 15.0 | 12.8 | 17.8 | 10.8 |
| 30    | 8.9  | 10.4 | 12.7 | 11.8 | 0.5  |
| 40    | 2.5  | 0.0  | 6.2  | 2.6  | 5.9  |
| 50    | 5.5  | 2.1  | 6.6  | 0.1  | 2.3  |
| 62.5  | 0.4  | 8.4  | 4.6  | 5.8  | 1.8  |
| Total | 29.2 | 42.1 | 63.9 | 40.7 | 39.5 |

Average total: 43.1

Table 3 - % Error for 1633 method and equal spaced calibrations using weighted linear fits.
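A note on implementing these weighted fits with NumPy: np.polyfit’s w argument multiplies the residuals before they are squared, so the chromatographic 1/x weighting (each squared residual scaled by 1/x) corresponds to w = 1/sqrt(x), and 1/x² weighting to w = 1/x. A sketch with placeholder, perfectly proportional responses:

```python
import numpy as np

conc = np.array([0.2, 0.5, 1.25, 2.5, 5, 12.5, 62.5])
resp = 1.02 * conc  # placeholder responses for illustration

# np.polyfit minimizes sum((w * residual)**2), hence the square roots:
coef_1x = np.polyfit(conc, resp, 1, w=1 / np.sqrt(conc))  # 1/x weighting
coef_1x2 = np.polyfit(conc, resp, 1, w=1 / conc)          # 1/x^2 weighting
```

With exactly proportional data, both fits recover the slope and a near-zero intercept; the weighting only matters once the points carry error.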

While we do see the expected overall reduction in error, this is the point where we start to see some differences from the equal-variation models. Previously, the method and equal-spaced calibrations had very similar total errors for the 1/x and 1/x² fits, but now the equal-spaced calibrations have almost half the total error of the method calibration. By weighting the curves we’ve allowed the equal-spaced curve to accurately define the y-intercept, but we’ve also given the low points more influence, which allows their increased error to rear its ugly head.

At this point the results matched my expectations, and I was ready to call the equal spaced calibration the clear winner. I was looking over the data later though and realized something. The method calibration had higher total error not necessarily because it was less accurate, but because it was measuring more low-accuracy points. The deck had been stacked in favor of the equal spaced calibration from the beginning. To do an apples-to-apples comparison I calculated the recovery of the method calibration responses using the equal spaced calibration and vice versa and found the average total error for each calibration type, which is shown in Table 4.

Overall average totals:

| Weighting | Method calibration | Equal-spaced calibration |
|-----------|--------------------|--------------------------|
| 1/x  | 153.6% | 153.5% |
| 1/x² | 177.6% | 147.4% |

Table 4 – Average total error for weighted equal spaced and method calibrations.
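The cross-evaluation described above amounts to back-calculating one data set’s points through the other data set’s fitted curve. A minimal sketch, with made-up proportional responses standing in for the simulated data:

```python
import numpy as np

def percent_errors(conc, resp, slope, intercept):
    """Back-calculate concentrations through a linear curve, return % errors."""
    calc = (resp - intercept) / slope
    return 100 * np.abs(calc - conc) / conc

# Hypothetical stand-ins for the simulated responses
method_conc = np.array([0.2, 0.5, 1.25, 2.5, 5, 12.5, 62.5])
equal_conc = np.array([0.2, 10, 20, 30, 40, 50, 62.5])
method_resp = 0.98 * method_conc
equal_resp = 1.03 * equal_conc

# Fit the equal-spaced data (1/x weighting), then score the method
# calibration's points against that curve; the reverse works the same way.
slope_e, int_e = np.polyfit(equal_conc, equal_resp, 1, w=1 / np.sqrt(equal_conc))
cross_errors = percent_errors(method_conc, method_resp, slope_e, int_e)
total_cross_error = cross_errors.sum()
```

Scoring both point sets against both curves is what removes the "measuring more low-accuracy points" bias from the comparison.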

When using the same data set, the 1/x weighting results are almost identical, and while the 1/x² results are slightly better with equal spacing, the difference isn’t big enough for me to state that equal spacing is superior to the method calibration.

I did the same analysis for quadratic fits as well. In the interests of brevity, I’ll spare everyone the tables of raw data and just show the overall summary below in Table 5.

Constant variation, % error:

| Calibration | Equal | 1/x | 1/x² |
|--------------|--------|--------|--------|
| Method       | 121.2% | 76.4%  | 96.5%  |
| Equal spaced | 984.3% | 70.2%  | 134.2% |

Ramped variation, % error:

| Calibration | Equal | 1/x | 1/x² |
|--------------|--------|--------|--------|
| Method       | 311.0% | 176.7% | 453.1% |
| Equal spaced | 682.8% | 186.2% | 397.9% |

Table 5 – Total error for quadratic fits
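Extending the fits to quadratics is a one-line change in degree; here is a sketch with a hypothetical, mildly curved response:

```python
import numpy as np

conc = np.array([0.2, 0.5, 1.25, 2.5, 5, 12.5, 62.5])
resp = conc - 0.002 * conc**2  # hypothetical response with mild curvature

# Second-degree fits with the same three weighting schemes as the linear case
coef_equal = np.polyfit(conc, resp, 2)                    # equal weighting
coef_1x = np.polyfit(conc, resp, 2, w=1 / np.sqrt(conc))  # 1/x weighting
coef_1x2 = np.polyfit(conc, resp, 2, w=1 / conc)          # 1/x^2 weighting
```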

Once again, we see that for equal weighted calibrations it’s better to stack calibration points at the low end as in the method calibration, but once the calibrations are weighted there’s no clear benefit to either calibration spacing.

So, the final guidance I can give on calibration levels is this: if your method restricts you to equal-weighted curves, whether they are linear or quadratic, always stack points on the low end of the curve. If weighting is allowed, I would suggest a more equal spacing, not because it improves the overall accuracy of the curve, but because you can calculate the residual error of the points and get an idea of how your calibration may be biased across the curve. Depending on the pipettes, syringes, and standards available to you, certain concentrations may be easier to make, so don’t get too hung up on having exactly equal spacing. A calibration that’s easy to make will probably have fewer errors during preparation and be more accurate in the long run.

To distill all this down to a final takeaway, I would suggest the following guidelines for calibrations.

  1. Determine your linear range.
  2. Decide on your calibration range (i.e., are you using the entire linear range, or targeting a concentration you expect samples to be in).
  3. If you’re restricted to equal weighted calibrations, keep most of your calibration points near the low end of the curve.
  4. If you’re allowed weighted calibrations, choose a more equal spacing to be able to evaluate the curve across the entire span.
  5. Tweak the exact concentrations of the calibration points to simplify the calibration prep. If you want a calibration point at 5 ppb and it requires 12 µL, but you only have a 10 µL syringe, change the cal point to 4.17 ppb and use 10 µL.
  6. Once you’ve run the calibration, use %RSE or residual error to evaluate it. Avoid using r² unless required.
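The %RSE check in step 6 can be computed directly from the back-calculated concentrations. This sketch uses the commonly cited definition (as in EPA Method 8000D), where p is the number of curve-fit parameters (2 for linear, 3 for quadratic); the example concentrations are hypothetical.

```python
import numpy as np

def percent_rse(conc, calc, n_params):
    """%RSE = 100 * sqrt(sum(((calc - conc) / conc)**2) / (n - p))."""
    rel_resid = (calc - conc) / conc
    return 100 * np.sqrt(np.sum(rel_resid**2) / (len(conc) - n_params))

conc = np.array([0.2, 0.5, 1.25, 2.5, 5, 12.5, 62.5])
# Hypothetical back-calculated concentrations from a linear fit
calc = conc * np.array([1.05, 0.95, 1.02, 0.98, 1.01, 0.99, 1.00])
print(percent_rse(conc, calc, n_params=2))  # ~3.46 for this example
```

Unlike r², the relative residuals in %RSE penalize low-end bias just as heavily as high-end bias, which is why it’s the better evaluation tool.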

Step 5 brings us to the next step in our calibration journey. Once you determine your calibration points, you then must figure out the appropriate dilutions to make them. That will be the topic of the next blog, which I’ll hand off to Colton Myers to handle.

View all of the posts in the "More Than You Ever Wanted to Know About Calibrations" series.