Fig. 17.1: Temperature anomaly for Newcastle Nobbys Signal Station since 1860. Also shown are the standard deviation of the data (σ) and the predicted 95% (±2σ) and 99.7% (±3σ) confidence levels.
i) The noise spectrum
The temperature anomaly for the Newcastle station is shown in Fig. 17.1 above. The monthly reference temperature (MRT) for each calendar month was calculated as the mean for that month over the period 1961-1990, and the appropriate one of these 12 values was subtracted from each raw monthly reading to yield the anomalies shown in Fig. 17.1. The dataset has a standard deviation (σ) of 0.907 °C, which is indicated by the red lines on the graph. Also shown are the ±2σ (in green) and ±3σ boundaries (in orange).
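For anyone who wants to reproduce this step, the calculation is straightforward. Below is a minimal sketch in Python, assuming the raw station data has been loaded into a pandas Series (here called raw) indexed by a monthly DatetimeIndex; the function and variable names are purely illustrative.

```python
import pandas as pd

def monthly_anomalies(raw: pd.Series,
                      ref_start: str = "1961-01-01",
                      ref_end: str = "1990-12-31") -> pd.Series:
    """Subtract each calendar month's 1961-1990 mean (the MRT) from every reading."""
    ref = raw.loc[ref_start:ref_end]               # readings in the reference period
    mrt = ref.groupby(ref.index.month).mean()      # the 12 monthly reference temperatures
    return raw - raw.index.month.map(mrt).to_numpy()   # anomaly = raw - MRT(month)
```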
Fig. 17.2: The predicted 68.27% (±σ), 95% (±2σ) and 99.7% (±3σ) confidence levels for data with a Gaussian probability density function and a standard deviation of σ.
If the data in Fig. 17.1 has a Gaussian frequency distribution, you would expect to see a distribution of the data similar to that illustrated in Fig. 17.2 above, with 68.27% of data located within ±σ of the mean, 95.45% within ±2σ, and 99.73% within ±3σ. The data in Fig. 17.1 appears to be distributed roughly along these lines. In Fig. 17.3 below I have converted the anomaly data in Fig. 17.1 to a frequency distribution (with a certain amount of smoothing added in order to reduce the discrete data to a continuum) and then compared this distribution (in blue) to the expected Gaussian distribution with the same standard deviation (σ) of 0.907 °C (red curve). As can be readily seen, the degree of agreement is good. This suggests that this anomaly data, and probably most other anomaly data, does indeed have a probability distribution that is Gaussian.
Fig. 17.3: The calculated probability distribution of the temperature anomaly data in Fig. 17.1 (blue curve) compared with a Gaussian distribution with the same standard deviation of 0.907 °C (red curve).
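The comparison in Fig. 17.3 is easy to reproduce. The sketch below, assuming anoms is the anomaly series from the previous snippet, plots a normalised histogram of the anomalies against a Gaussian with the same mean and standard deviation; the 0.25 °C bin width is an arbitrary choice that provides a modest amount of smoothing.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def plot_distribution(anoms, bin_width=0.25):
    x = anoms.dropna().to_numpy()
    sigma = x.std(ddof=1)                            # ~0.907 degC for this station
    bins = np.arange(x.min(), x.max() + bin_width, bin_width)
    plt.hist(x, bins=bins, density=True, alpha=0.5, label="anomaly data")
    grid = np.linspace(x.min(), x.max(), 400)
    plt.plot(grid, norm.pdf(grid, loc=x.mean(), scale=sigma), "r-",
             label=f"Gaussian, sigma = {sigma:.3f} degC")
    plt.xlabel("Temperature anomaly (degC)")
    plt.ylabel("Probability density")
    plt.legend()
    plt.show()
```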
The real benefit of knowing the nature of the probability density function is that it allows us to calculate the likelihood that an extreme temperature change is the result of a random process, or, if not, whether it may instead be anthropogenic in origin. However, the data in Fig. 17.1 is not the temperature change. It represents the difference of each month's temperature from the mean, not the difference from a previous time. We can, however, use the data for the former to simulate the latter. This is demonstrated in Fig. 17.4 below.
Fig. 17.4: Year-on-year change in the monthly temperature anomaly for Newcastle Nobbys Signal Station since 1860. Also shown are the ±σ, ±2σ and ±3σ confidence levels for the monthly anomalies in Fig. 17.1 for comparison.
The data in Fig. 17.4 was derived by calculating the temperature change between identical months one year apart using the anomaly data in Fig. 17.1. What is noticeable is that the spread of the data is greater in Fig. 17.4 than in Fig. 17.1, as indicated by the ±σ, ±2σ and ±3σ lines that refer back to the Fig. 17.1 data. In fact the standard deviation of the data in Fig. 17.4 should be a factor of √2 greater than that of the data from which it is derived. This is because the error (or standard deviation) in the difference of two independent data readings, σ₁₂, depends on the squares of the errors (σ₁ and σ₂) of the original two readings, as shown in Eq. 17.1.
σ₁₂² = σ₁² + σ₂²     (17.1)
As the two readings being subtracted are from the same dataset, they will have the same standard deviation. Therefore, it follows that if σ₁ = σ₂ = σ, then σ₁₂ = σ√2 (so the standard deviation of the temperature difference is 41% larger than that of the original data). This means that extreme changes in temperature are more frequent than the data in Fig. 17.1 might suggest if one were to look only at the proportion of data that lies outside the ±σ, ±2σ or ±3σ boundaries.
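This is easy to check numerically. The sketch below, again assuming anoms holds the monthly anomalies, forms the year-on-year differences used in Fig. 17.4 and compares the two standard deviations; the ratio should come out close to √2 if readings taken a year apart are effectively independent.

```python
def check_sqrt2(anoms):
    """Compare the spread of the anomalies with the spread of their year-on-year changes."""
    diffs = anoms - anoms.shift(12)          # same calendar month, one year apart
    s0 = anoms.std(ddof=1)
    s12 = diffs.std(ddof=1)
    print(f"sigma of anomalies      : {s0:.3f} degC")
    print(f"sigma of yearly changes : {s12:.3f} degC")
    print(f"ratio (expect ~1.414)   : {s12 / s0:.3f}")
```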
ii) The scaling of the noise with data averaging or smoothing
A key point of note in Post 9 was the scaling behaviour of the noise. It has become apparent that some readers of this blog do not fully understand the methodology I used in Post 9, so I thought it might be useful to explain it again in a bit more detail here.
The central issue is how the data behaves as you smooth it. The smoothing process involves a moving average with a sliding window of data centred on the point that is being averaged. The aim is usually to reduce or eliminate the high frequency noise so that only the underlying trend remains. This is important in analysing climate data because the data is often very noisy, as is illustrated in Fig. 17.1. There is clearly an upward trend in that data between 1960 and 2010, but it is difficult to discern precisely what is happening before 1960. Smoothing can help identify this. However, smoothing also has some negative consequences. It inevitably removes any genuine structure in the data that has a similar frequency to the noise. It will also reduce the heights of maxima and raise the levels of minima in the data. The degree to which this happens will depend upon the sharpness of the turning points relative to the width of the sliding window.
The smoothing process is as follows. First you choose the number, N, of adjacent data points that you wish to average. The general rule is that the bigger the number N, the more the noise is suppressed, and so the clearer the underlying trend will be.
Suppose you choose N = 5. That means for a given data value for the temperature in Fig. 17.1 (say for March 1966), you add the five temperature anomaly values centred on March 1966 and find the average or mean. That means adding the values for the January, February, March, April and May anomalies together and dividing the result by five. Then you repeat this for every data point in Fig. 17.1 by moving along the dataset point by point. As you do so the set of five points to be averaged moves, hence the term "moving average". The width of the data being averaged (in this case N = 5) is the width of your "sliding window". The end result is a completely new dataset that follows the original data but has less noise. This is shown in Fig. 17.5 below, where the 4-month smoothed data (N = 4) in green clearly has less noise than the original data in Fig. 17.1. Smoothing the data using 16 points (N = 16, black curve in Fig. 17.5) reduces the noise level even further. The interesting question is why.
Fig. 17.5: The 4-month and 16-month smoothed monthly temperature anomalies for Newcastle Nobbys Signal Station.
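In code, the smoothing described above is simply a centred rolling mean. A minimal sketch, assuming anoms is the monthly anomaly series from earlier, is all that is needed to generate the curves in Fig. 17.5:

```python
def smooth(anoms, N):
    """Centred N-point moving average (the 'sliding window')."""
    return anoms.rolling(window=N, center=True).mean()

smoothed_4 = smooth(anoms, 4)      # green curve in Fig. 17.5
smoothed_16 = smooth(anoms, 16)    # black curve in Fig. 17.5
```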
The noise reduces because it is random, and if you add enough random numbers together eventually they will average to zero as the approximately equal number of negative and positive values you are adding will gradually cancel. If the noise is white noise, you would expect the mean amplitude of the noise, as represented by the standard deviation, σ, to decrease by a factor of √N. So, smoothing with a 4-month moving average (N = 4) should halve the noise amplitude or standard deviation, while smoothing with a 16-month moving average (N = 16) should reduce the standard deviation of the noise by a factor of 4. Except that does not happen with most temperature data.
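The √N rule itself is easy to demonstrate with synthetic data. The short test below generates Gaussian white noise with the same standard deviation as the Newcastle anomalies and smooths it with 4-point and 16-point moving averages; the reduction factors come out close to 2 and 4, exactly as the theory predicts.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
white = pd.Series(rng.normal(0.0, 0.907, size=100_000))   # white noise, sigma = 0.907

for N in (4, 16):
    s = white.rolling(N, center=True).mean().std(ddof=1)
    print(f"N = {N:2d}: sigma = {s:.3f}, reduction factor = {white.std(ddof=1) / s:.2f}")
```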
Instead what we find is that a 4-month moving average (N = 4) only reduces the standard deviation of the noise by a factor of approximately √2, while a 16-month moving average (N = 16) only reduces it by a factor of about 2. In fact while the standard deviation, σ, for the anomaly data in Fig. 17.1 is 0.907 °C, we find that σ = 0.635 °C for N = 4, and σ = 0.429 °C for N = 16. So, rather than the noise decreasing by a factor of √N, the smoothing appears to reduce it much more slowly. How much more slowly is illustrated in Fig. 17.6 below.
Fig. 17.6: The scaling behaviour of the smoothed temperature anomaly data for Newcastle Nobbys Signal Station. The graph shows the standard deviations (S.D.) for data smoothed with different moving averages (N). The gradient of the best fit line is -0.266 ± 0.006 and R² = 0.9978.
The data in Fig. 17.6 shows the results for the standard deviation of the original data in Fig. 17.1 (N = 1), together with the standard deviation of the same data after it has been smoothed with six different values for N (3, 6, 9, 12, 24, 60). The linear regression or best fit to the log-log plot of the data indicates that the smoothing reduces the standard deviation by a factor close to N^0.266 rather than the expected √N.
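The analysis behind Fig. 17.6 can be condensed into a few lines. The sketch below, assuming anoms as before, computes the standard deviation of the smoothed data for each window size and fits a straight line to the values on a log-log scale; the slope is the scaling exponent.

```python
import numpy as np

def scaling_exponent(anoms, windows=(1, 3, 6, 9, 12, 24, 60)):
    """Fit log(sigma) against log(N) for the smoothed anomaly data."""
    sigmas = [anoms.rolling(N, center=True).mean().std(ddof=1) for N in windows]
    slope, intercept = np.polyfit(np.log(windows), np.log(sigmas), 1)
    return slope, np.exp(intercept), dict(zip(windows, sigmas))

# For this station the slope should come out close to -0.266,
# not the -0.5 expected for white noise.
```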
Because the trend line in Fig. 17.6 fits the data so well, it means we can use it to extrapolate with a high degree of confidence. That allows us to predict the standard deviation of the smoothed data for any level of smoothing, N, we might choose. For example, if N = 120, then each data point in the smoothed dataset would represent the mean temperature over a decade. Likewise, if N = 1200, each data point would represent the mean temperature for the century of data that was centred on that point in time.
Thus, the best fit line in Fig. 17.6 allows us to predict that σ = 0.252 °C for N = 120, and σ = 0.137 °C for N = 1200 for the smoothed temperature anomaly at this location. Given the rate of change in global temperatures that is claimed for global warming (i.e. about 0.7-1.0 °C over the last century), these numbers are far from negligible and suggest that low frequency noise may be a significant component of any temperature rise that climate scientists claim to be detecting.
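Those two predictions follow directly from the fitted power law. A minimal check, using the raw standard deviation of 0.907 °C as the prefactor and -0.266 as the exponent (strictly the intercept of the fit in Fig. 17.6 should be used, but this is close enough to illustrate the point):

```python
def predicted_sigma(N, prefactor=0.907, exponent=-0.266):
    """Extrapolate the fitted power law sigma(N) = prefactor * N**exponent."""
    return prefactor * N ** exponent

for N in (120, 1200):
    print(f"N = {N}: predicted sigma = {predicted_sigma(N):.3f} degC")
# gives roughly 0.25 degC for N = 120 and 0.14 degC for N = 1200,
# consistent with the values quoted above
```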
iii) The implications for 100-year temperature trends
To illustrate the potential implications of noise in the temperature record, consider the following example. The data in Fig. 17.1 has a standard deviation of 0.907 °C. When smoothed with a 5-year moving average this reduces to 0.31 °C. That implies that 95.45% of the data will be within ±0.62 °C of the mean, 2.275% will have values that are more than 0.62 °C above the mean, and the final 2.275% will have values that are more than 0.62 °C below the mean.
But as we saw in Fig. 17.4, when we look at changes in temperature between years, the standard deviation increases by a factor of about √2. Applying the same factor to the 5-year smoothed data gives a standard deviation for the change of about 0.43 °C, so the fraction of changes that amount to a rise of 0.86 °C (i.e. 2σ) or more will be p = 2.275%.
But what we really want to know is how likely a temperature rise of this magnitude is over a 100-year period if it occurs purely by chance. The answer, approximately, is that the overall probability P100 will be given by
P100 = 1 - (1 - p)²⁰     (17.2)
As p = 0.02275, then P100 = 0.37. The power of 20 in Eq. 17.2 reflects the fact that in a 100-year interval there will be an average of 20 possible attempts for the 5-year average to jump by the desired amount or more. In other words, there is a 37% probability of a 0.86 °C temperature rise occurring purely by random chance. That is not insignificant. To find the probability of other temperature rises occurring, the maths is more difficult, but not impossible. We just need to find p.
We now know that the anomaly fluctuations obey a Gaussian probability distribution of the form
P(∆T) = (1/(σ√(2π))) exp(-∆T²/(2σ²))     (17.3)
where σ is the standard deviation and ∆T is the change in temperature from the mean. The probability p that ∆T exceeds some critical value ∆T0 will be given by
p = ½ erfc(x)     (17.4)
where x = ∆T0/(σ√2) and erfc(x) is the complementary error function. Using these equations we can now find the probability of other temperature rises occurring naturally via random processes.
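Putting Eqs. 17.2 and 17.4 together, the whole calculation amounts to a few lines of code. The sketch below, using σ = 0.43 °C for the change in the 5-year mean and 20 attempts per century, reproduces the numbers used in this section; the function names are illustrative.

```python
import numpy as np
from scipy.special import erfc

def p_exceed(dT0, sigma):
    """Eq. 17.4: probability that a single change exceeds dT0."""
    return 0.5 * erfc(dT0 / (sigma * np.sqrt(2.0)))

def p_century(dT0, sigma, attempts=20):
    """Eq. 17.2: probability of at least one such rise in ~20 attempts per century."""
    return 1.0 - (1.0 - p_exceed(dT0, sigma)) ** attempts

sigma = 0.43   # standard deviation of the change in the 5-year mean (degC)
for dT0 in (0.7, 0.78, 0.86, 1.0):
    print(f"dT0 = {dT0:.2f} degC: p = {p_exceed(dT0, sigma):.4f}, "
          f"P100 = {p_century(dT0, sigma):.2f}")
# dT0 = 0.86 degC reproduces p = 0.0228 and P100 = 0.37.
```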
As I have already demonstrated, there is a 37% probability of a 0.86 °C temperature rise in 100 years due to random chance. In other words, if ∆T0 = 2σ, and σ = 0.43 °C, then p = 0.02275 and P100 = 0.37. Similarly, the above analysis means that this probability (P100) will increase to about 65% for a ∆T0 = 0.7 °C temperature rise and fall to 18% for a 1.0 °C rise for the same value of σ. In addition, the data implies that there will be a 50% probability of a random temperature rise of more than 0.78 °C at some time over the century. It should also be remembered that all these probabilities are based on the initial standard deviation of 0.907 °C. If the standard deviation increases (as is seen, for example, in other datasets), then so too will the probabilities.
iv) The scaling of the noise frequency
So far I have only considered the scaling behaviour of the anomaly amplitude. But it is clear from the data in Fig. 17.1 and Fig. 17.5 that smoothing the data changes the underlying frequency of the fluctuations as well. One might expect this frequency to scale inversely with the smoothing factor N (i.e. for the mean period to grow in proportion to N), but it does not.
The easiest way to quantify the frequency of the fluctuations is to count the number of times the data crosses the mean value of the anomaly in a given time interval. For example, in Fig. 17.1 the data crosses the mean value 317 times over the course of 140 years. This decreases to 155 for a 3-month moving average, and 54 for a 12-month moving average.
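Counting these crossings is straightforward. The sketch below, where series is either the raw anomaly series or a smoothed version of it, counts the sign changes of the data about its own mean; applied to the raw, 3-month and 12-month smoothed data it should give counts close to the 317, 155 and 54 quoted above, with the exact values depending on how points lying exactly on the mean are treated.

```python
import numpy as np

def mean_crossings(series):
    """Count how many times the data crosses its own mean value."""
    x = series.dropna().to_numpy()
    signs = np.sign(x - x.mean())
    signs = signs[signs != 0]                     # ignore points exactly on the mean
    return int(np.count_nonzero(np.diff(signs) != 0))
```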
Fig. 17.7: The scaling behaviour of the mean period of the smoothed temperature anomalies for Newcastle Nobbys Signal Station for different moving averages (N). The gradient of the best fit line is 0.756 ± 0.031 and R² = 0.9917.
Plotting this data on a log-log plot again yields a linear trend, indicating that the period of the fluctuations increases as N^a. In this case, though, a = 0.756 ± 0.031, with a residual that is 9% of the standard deviation of the y-values (see Fig. 17.7 above).
In Post 9 I showed theoretical plots (see Figs. 9.7 - 9.10) to highlight how the scaling could be used to predict the qualitative behaviour of longer time series. However, those plots did not include the appropriate scaling of the time axis: they assumed the scaling of the time base was proportional to N. If the correct scaling is now included, the results will be similar, but the time axis will expand slightly.
Fig. 17.8: Simulated 2000 year temperature anomalies with 50-year mean for Newcastle Nobbys Signal Station.
As an example of the power of predictive scaling, consider the scaled data shown in Fig. 17.8 above. This data aims to simulate the behaviour of the Newcastle Nobbys Signal Station (Berkeley Earth ID - 152044) time series over an arbitrary 2000 year time period where the data will typically have been smoothed to a resolution of 50 years. That means N = 600 and the standard deviation of the fluctuations will be 0.183 °C. The time base and the mean period of the fluctuations, however, will increase by a factor of about 120. Finally the data is filtered so that only every 6th point is plotted. This corresponds to one point per decade. What Fig. 17.8 shows is that large swings in the mean decadal temperature of up to 0.5 °C within a century are expected to be fairly common. The big question is, is it realistic? And can we find any corroboration for it?
Fig. 17.9: A 2000 year proxy temperature record for China.
Of course we do not have temperature records that are 2000 years long, but we do have some proxy records, including this one from China (see Fig. 17.9 above). Intriguingly, the China data in Fig. 17.9 shows many features that are not that dissimilar to the simulated data in Fig. 17.8. The standard deviations of the fluctuations in both appear to be around 0.2 °C, and the period of the fluctuations appears similar as well. So, is the China data evidence of the validity of our approach? Quite possibly, but then I might be biased.
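For anyone who wants to experiment with this kind of construction, the sketch below shows one possible way of building a rescaled series like that in Fig. 17.8. It is a simplified illustration rather than a reproduction of the original calculation: the amplitude of a heavily smoothed anomaly series is rescaled to the σ predicted for N = 600, the time axis is stretched by the predicted period factor of about 120, and only every sixth point is kept so that each plotted value represents roughly a decade.

```python
import numpy as np

def rescale_series(smoothed, target_sigma, time_factor, keep_every=6):
    """Rescale a smoothed anomaly series in amplitude and time (see caveats above)."""
    y = smoothed.dropna().to_numpy()
    y = y * (target_sigma / y.std(ddof=1))        # rescale amplitude to the predicted sigma
    t = np.arange(y.size) * time_factor / 12.0    # stretch the (monthly) time axis, in years
    return t[::keep_every], y[::keep_every]

# e.g. t, y = rescale_series(anoms.rolling(600, center=True).mean(), 0.183, 120)
```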
v) Conclusions
Wherever I look in climate science data, all I can see is noise. It appears that much of this noise is more persistent than that seen with typical white noise, and this becomes significant in temperature trends that are several decades or centuries in length. It leads to long timescale (low frequency) fluctuations that are comparable in amplitude to those currently being attributed to global warming. This represents a serious challenge to current climate science orthodoxy, because I cannot see how you can definitively distinguish one effect from the other, based on the available data.
In physics we normally aim for at least a 3-sigma level of confidence before we even begin to make claims of proof regarding any theory or hypothesis, and a formal claim of discovery usually requires a 5-sigma level of confidence. The Higgs boson was confirmed with a 5-sigma level of confidence that has now risen to nearly 7-sigma. Yet most climate data seems incapable of exhibiting even a 1-sigma level of confidence when compared to any available theory or model. But the claims made by many climate scientists about said data would seem to imply otherwise. None of their pronouncements ever seem to be qualified. It is this lack of doubt and critical thinking among climate scientists that I find most troubling.
Fig. 17.10: A 10,000 year proxy temperature record for Greenland based on the GISP2 ice core isotope data.
Take the data in Fig. 17.10 above, for example. This is a plot of the inland temperature of the Greenland ice sheet over the last 10,000 years based on ice core data. It shows large fluctuations in temperature that are comparable to those seen in modern temperature records, but the mean period of the fluctuations is much larger. That is probably because the timescale is much larger. But the question is: are these fluctuations real in the sense that there is an immediate identifiable cause for them? Or are they just noise? The fact is we don't know. Climate scientists have proposed countless theories to account for the fluctuations, but none are watertight. Yet the possibility that it might all be just noise resulting from the afterglow of a much bigger physical event (such as Milankovitch oscillations in this case), in the same way that the cosmic microwave background is the afterglow of the Big Bang, still doesn't appear to register with most of them.