Thursday, December 31, 2020

45. Review of the year 2020

I started this blog in May, in part to occupy my time during the Covid-19 lockdown. But I was also motivated by a growing dissatisfaction with the quality of data analysis I was witnessing in climate science, and in particular the lack of any objectivity in the way much of the data was being presented and reported. My concerns were twofold. 

The first was the drip-drip of selective alarmism with an overt confirmation bias that kept appearing in the media with no comparable reporting of events that contradicted that narrative. The worry here is that extreme events that are just part of the natural variation of the climate were being portrayed as the new normal, while events of the opposite extreme were being ignored. It appeared that balance was being sacrificed for publicity.

The second was the over-reliance of much climate analysis on complex statistical techniques of doubtful accuracy or veracity. To paraphrase Lord Rutherford: if you need to use complex statistics to see any trends in your data, then you would be better off using better data. Or to put it more simply, if you can't see a trend with simple regression analysis, then the odds are there is no trend to see.

The purpose of this blog has not been to repeat the methods of climate scientists, nor to improve on them. It has merely been to set a benchmark against which their claims can be measured and tested.

My first aim has been to go back to basics: to examine the original temperature data, look for trends in that data, and apply some basic error analysis to determine how significant those trends really are. I have then compared what I see in the original data with what climate scientists claim is happening. In most cases I have found that the temperature trends in the real data are significantly less than those reported by climate scientists. In other words, much of the reported temperature rise, particularly in Southern Hemisphere data, results from the manipulations that climate scientists perform on the data. This implies that many of the reported temperature rises are exaggerated.
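To make that benchmark concrete: the basic test used throughout this blog is an ordinary least squares fit whose gradient is compared against its own standard error. Below is a minimal sketch of that test in Python; the synthetic data is purely illustrative and not taken from any particular post.

```python
import numpy as np

def trend_per_century(years, temps):
    """Fit a straight line by OLS and return its gradient and the
    standard error of that gradient, both in degrees C per century."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    slope, intercept = np.polyfit(years, temps, 1)
    residuals = temps - (slope * years + intercept)
    # Standard error of the gradient: sd(residuals) / (sd(x) * sqrt(n))
    stderr = residuals.std(ddof=2) / (years.std() * np.sqrt(years.size))
    return 100.0 * slope, 100.0 * stderr

# Purely illustrative: a 0.3 C/century trend buried in 1 C of noise
rng = np.random.default_rng(1)
yrs = np.arange(1880, 2011) + 0.5
anoms = 0.003 * (yrs - yrs[0]) + rng.normal(0.0, 1.0, yrs.size)
print("trend = %+.2f +/- %.2f C per century" % trend_per_century(yrs, anoms))
```

If the fitted gradient is not several times larger than its standard error, then for the purposes of this blog the trend is not significant.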

In addition, I have tried to look at the physics and mathematics underpinning the data in order to test other possible hypotheses that could explain the observed temperature trends that I could detect. Below I have set out a summary of my conclusions so far.


1) The physics and mathematics

There are two alternative theories that I have considered as explanations of the temperature changes. The first is natural variation. The problem here is that in order to conclusively prove this to be the case you need temperature data that extends back in time for dozens of centuries, and we simply do not have that data. Climate scientists have tried to solve this by using proxy data from tree rings and sediments and other biological or geological sources, but in my opinion these are wholly inadequate as they are badly calibrated. The idea that you can measure the average annual temperature of an entire region to an accuracy of better than 0.1 °C simply by measuring the width of a few tree rings, when you have no idea of the degree of linearity of your proxy, or the influence of numerous external variables (e.g. rainfall, soil quality, disease, access to sunlight), is preposterous. But there is another way.

i) Fractals and self-similarity

If you can show that the fluctuations in temperature over different timescales follow a clear pattern, then you can extrapolate back in time. One such pattern is that resulting from fractal behaviour and self-similarity in the temperature record. By self-similarity I mean that every time you average the data you end up with a pattern of fluctuations that looks similar to the one you started with, but with amplitudes and periods that change according to a precise mathematical scaling function.

In Post 9 I applied this analysis to various sets of temperature data from New Zealand. I then repeated it for data from Australia and then again in Post 42 for data from De Bilt in the Netherlands. In virtually all these cases I found a consistent power law for the scaling parameter indicative of a fractal dimension of between 0.20 and 0.30, with most values clustered close to 0.25. The low magnitude of this scaling term suggests that the fluctuations in long term temperatures are much greater in amplitude than conventional statistical analysis would predict. 

For example, in the case of De Bilt it suggests that the standard deviation of the average 100-year temperature is more than 0.2 °C. The difference between two consecutive century means then has a standard deviation of about 0.3 °C (i.e. √2 × 0.2 °C), so there is a 16% probability of the mean temperature for any century being more than 0.3 °C above (or below) that of the previous century. A swing from 0.3 °C below the long-term mean to 0.3 °C above it amounts to a 0.6 °C rise, so a 0.6 °C temperature rise over a century could occur roughly once every 600 years purely because of natural variations in temperature. It also suggests that temperature variations similar to those we have seen in the data over the last 50 or 100 years might have been repeated frequently in the not so distant past.
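These probabilities follow from straightforward Gaussian arithmetic applied to the scaling law fitted in Post 42. A minimal sketch, using the De Bilt values from that post:

```python
import math

sigma1, p = 1.855, 0.31     # De Bilt monthly sigma and scaling exponent (Post 42)
months = 1200               # one century of monthly data
sigma_century = sigma1 * months ** (-p)        # ~0.21 C
sigma_diff = math.sqrt(2.0) * sigma_century    # difference of two century means

def prob_exceed(x, sigma):
    """P(X > x) for a zero-mean Gaussian of standard deviation sigma."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2.0)))

print("sigma of 100-year mean: %.2f C" % sigma_century)
print("P(century mean rises by > 0.3 C): %.2f" % prob_exceed(0.3, sigma_diff))
```

With these inputs the exceedance probability comes out at roughly 15%, consistent with the one-in-six figure quoted above.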

ii) Direct anthropogenic surface heating (DASH) and the urban heat island (UHI)

Another possible explanation for any observed rise in temperature is the heating of the environment that occurs due to human industrial activity. All energy use produces waste heat; indeed, the Second Law of Thermodynamics tells us that all the energy we use must ultimately end up as heat and entropy. It is therefore inevitable that human activity must heat the local environment. The only question is by how much.

Most discussions in this area focus on what is known as the urban heat island (UHI). This is a phenomenon whereby urban areas either absorb extra solar radiation because of changes made to the surface albedo by urban development (e.g. concrete, tarmac, etc.), or trap the absorbed heat between tall buildings that reduce the circulation of warm air, thereby concentrating it. But there is another contribution that continually gets overlooked: direct anthropogenic surface heating (DASH).

When humans generate and consume energy they liberate heat or thermal energy. This energy heats up the ground, and the air just above it, in much the same way that radiation from the Sun does. In so doing DASH adds to the heat that is re-emitted from the Earth's surface, and therefore increases the Earth's surface temperature at that location.

In Post 14 I showed that this heating can be significant - up to 1 °C in countries such as Belgium and the Netherlands with high levels of economic output and high population densities. In Post 29 I extended this idea to look at suburban energy usage and found a similar result. 
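For readers who want to check the order of magnitude, here is a back-of-envelope sketch of a DASH estimate. The input figures are my own rough assumptions, not the values used in Post 14, and the Stefan-Boltzmann linearization is only one possible choice of surface sensitivity:

```python
import math

# Rough illustrative inputs (assumptions, not the figures from Post 14):
energy_per_year = 3.0e18   # J/yr, approximate primary energy use of the Netherlands
land_area = 4.15e10        # m^2 (about 41,500 km^2)
seconds_per_year = 3.156e7

# Mean anthropogenic heat flux at the surface
q = energy_per_year / (land_area * seconds_per_year)   # ~2.3 W/m^2

# Linearized radiative response: dT ~ Q / (4 * sigma * T^3)
SIGMA_SB = 5.67e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0                  # mean surface temperature, K
dT = q / (4.0 * SIGMA_SB * T ** 3)

print("heat flux = %.1f W/m^2, warming = %.2f C" % (q, dT))
```

This crude version yields a few tenths of a degree from a heat flux of roughly 2 W/m²; a larger surface sensitivity factor would bring it closer to the 1 °C figure quoted above.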

What this shows is that you don't need to invoke the Greenhouse Effect to find a plausible mechanism via which humans are heating the planet. Simple thermodynamics will suffice. Of course climate scientists dismiss this because they assume that this heat is dissipated uniformly across the Earth's surface - but it isn't. And just as significant is the fact that the majority of weather stations are in places where most people live, and therefore they also tend to be in regions where the direct anthropogenic surface heating (DASH) is most pronounced. So this direct heating effect is magnified in the temperature data.

iii) The data reliability

It is taken as read that the temperature data used to determine the magnitude of the observed global warming is accurate. But is it? Every measurement has an error. In the case of temperature data it appears that these errors are comparable in magnitude to many of the effects climate scientists are trying to measure.

In Post 43 I looked at pairs of stations in the Netherlands that were less than 1.6 km apart. One might expect that most such pairs would exhibit identical datasets for the two stations in the pair, but they don't. In virtually every case the fluctuations in the difference between their monthly average temperatures were about 0.2 °C. While this was consistent with the values one would expect based on error analysis, it does highlight the limits to the accuracy of this data. It also raises questions about how valid techniques such as breakpoint adjustment are, given that these techniques depend on detecting relatively small differences in temperature for data from neighbouring stations.

iv) Temperature correlations between stations

In Post 11 I looked at the product moment correlation coefficients (PMCC) between temperature data from different stations, and compared the correlation coefficients with the station separation. What became apparent was evidence for a strong negative linear relationship between the maximum correlation coefficient for temperature anomalies between pairs of stations and their separation. For station separations of less than 500 km positive correlations of better than 0.9 were possible, but this dropped to a maximum correlation of about 0.7 for separations of 1000 km and 0.3 at 2000 km.
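A minimal sketch of how these quantities can be computed, assuming each station record has already been reduced to an anomaly array with missing months stored as NaN:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle separation of two stations in kilometres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def pmcc(anoms_a, anoms_b):
    """Correlation coefficient over the months common to both records."""
    mask = ~np.isnan(anoms_a) & ~np.isnan(anoms_b)
    return np.corrcoef(anoms_a[mask], anoms_b[mask])[0, 1]
```

Plotting pmcc against haversine_km for every qualifying pair of stations reproduces the kind of scatter shown in Fig. 45.1 below.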

There were also clear differences between the behaviour of the raw anomaly data and the Berkeley Earth adjusted data. The Berkeley Earth adjustments appear to reduce the scatter in the correlations for the 12-month averaged data, but do so at the expense of the quality of the monthly data. This suggests that these adjustments may be making the data less reliable, not more so. The improvement in the scatter of the Berkeley Earth 12-month averaged data is also curious. Is it because it is the 12-month averaged data, rather than the monthly data, that is used to determine the adjustments, or is there some other reason? And what of the scatter in the data? Can we use it to measure the quality and reliability of the original data? This clearly warrants further study.


Fig. 45.1: Correlations (PMCC) for the period 1971-2010 between temperature anomalies for all stations in New Zealand with a minimum overlap of 200 months. Three datasets were studied: a) the monthly anomalies; b) the 12-month average of the monthly anomalies; c) the 5-year average of the monthly anomalies. Also studied were the equivalent for the Berkeley Earth adjusted data.



2) The data

Over the last eight months I have analysed most of the temperature data in the Southern Hemisphere as well as all the data in Europe that predates 1850. The results are summarized below.

i) Antarctica

In Post 4 I showed that the temperature at the South Pole has been stable since the 1950s. There is no instrumental temperature data before 1956 and there are only two stations of note near the South Pole (Amundsen-Scott and Vostok). Both show stable or negative trends.

Then in Post 30 I looked at the temperature data from the periphery of the continent. This I divided into three geographical regions: the Atlantic coast, the Pacific coast and the Peninsula. The first two only have data from about 1950 onwards. In both cases the temperature data is also stable with no statistically significant trend either upwards or downwards. Only the Peninsula exhibited a strong and statistically significant upward trend of about 2 °C since 1945.


ii) New Zealand

Fig. 45.2: Average warming trend for long and medium stations in New Zealand. The best fit to the data has a gradient of +0.27 ± 0.04 °C per century.

In Posts 6-9 I looked at the temperature data from New Zealand. Although the country only has about 27 long or medium length temperature records, with only ten having data before 1880, there is sufficient data before 1930 to suggest temperatures in this period were almost comparable to those of today. The difference is less than 0.3 °C.


iii) Australia

Fig. 45.3: The temperature trend for Australia since 1853. The best fit is applied to the interval 1871-2010 and has a gradient of 0.24 ± 0.04 °C per century.

The temperature trend for Australia (see Post 26) is very similar to that of New Zealand. Most states and territories exhibited high temperatures in the latter part of the 19th century that then declined before increasing in the latter quarter of the 20th century. The exceptions were Queensland (see Post 24) and Western Australia (see Post 22), but this was largely due to an absence of data before 1900. While there is much less temperature data for Australia before 1900 compared to the latter part of the 20th century, there is sufficient to indicate that, as in New Zealand, temperatures in the late 19th century were similar to those of the present day.


iv) Indonesia

Fig. 45.4: The temperature trend for Indonesia since 1840. The best fit is applied to the interval 1908-2002 and has a negative gradient of -0.03 ± 0.04 °C per century.

The temperature data for Indonesia is complicated by the lack of quality data before 1960 (see Post 31). The temperature trend after 1960 is the average of between 33 and 53 different datasets, but between 1910 and 1960 it generally comprises fewer than ten. Nevertheless, this is sufficient data to suggest that temperatures in the first half of the 20th century were greater than those in the latter half. This is despite the data from Jakarta Observatorium, which exhibits an overall warming trend of nearly 3 °C from 1870 to 2010 (see Fig. 31.1 in Post 31).

It is also worth noting that the temperature data from Papua New Guinea (see Post 32) is similar to that for Indonesia for the period from 1940 onwards. Unfortunately Papua New Guinea only has one significant dataset that predates 1940, so firm conclusions regarding the temperature trend in this earlier period are difficult to draw.


v) South Pacific

Most of the temperature data from the South Pacific comes from the various islands in the western half of the ocean. This data exhibits little if any warming, but does exhibit large fluctuations in temperature over the course of the 20th century (see Post 33). The eastern half of the South Pacific, on the other hand, exhibits a small but discernible negative temperature trend of between -0.1 and -0.2 °C per century (see Post 34).


vi) South America

Fig. 45.5: The temperature trend for South America since 1832. The best fit is applied to the interval 1900-1999 and has a gradient of +0.54 ± 0.05 °C per century.

In Post 35 I analysed over 300 of the longest temperature records from South America, including over 20 with more than 100 years of data. The overall trend suggests that temperatures fluctuated significantly before 1900 and have risen by about 0.5 °C since. The high temperatures seen before 1850 are exclusively due to the data from Rio de Janeiro and so may not be representative of the region as a whole.


vii) Southern Africa

Fig. 45.6: The temperature trend for South Africa since 1840. The best fit is applied to the interval 1857-1976 and has a gradient of +0.017 ± 0.056 °C per century.

In Posts 37-39 I looked at the temperature trends for South Africa, Botswana and Namibia. Botswana and Namibia were both found to have fewer than four usable sets of station data before 1960 and only about 10-12 afterwards. South Africa had much more data, but the general trends were the same. Before 1980 the temperature trends were stable or perhaps slightly negative, but after 1980 there was a sudden rise of between 0.5 °C and 2 °C in all three trends, with the largest being found in Botswana. This is not consistent with accepted theories of global warming (the rises in temperature are too large and too sudden, and do not correlate with rises in atmospheric carbon dioxide), and so the exact origin of these rises appears to be unexplained.

 

viii) Europe

Fig. 45.7: The temperature trend for Europe since 1700. The best fit is applied to the interval 1731-1980 and has a positive gradient of +0.10 ± 0.04 °C per century.

In Post 44 I used the 109 longest temperature records to determine the temperature trend in Europe since 1700. The resulting data suggests that temperatures were stable from 1700 to 1980 (they rose by less than 0.25 °C), and then rose suddenly by about 0.8 °C after 1986. The reason for this change is unclear, but one possibility is a significant improvement in air quality that reduced the amount of particulates in the atmosphere. These particulates, which may have been present in earlier years, could have induced a cooling that compensated for the underlying warming trend. Once they were removed, the temperature then rebounded. Even if this is true, it suggests a maximum warming of about 1 °C since 1700, much of which could be the result of direct anthropogenic surface heating (DASH) as discussed in Post 14. In countries such as Belgium and the Netherlands the temperature rise is even less than that expected from such surface heating alone. It is also much less than that expected from an enhanced Greenhouse Effect due to increasing carbon dioxide levels in the atmosphere (about 1.5 °C in the Northern Hemisphere since 1910). Adding the two effects together, the total temperature rise should exceed 2.5 °C. So here is the BIG question: where has all that missing temperature rise gone?


Thursday, December 10, 2020

44. Europe - temperature trends since 1700 - STABLE to 1980

The longest temperature records that we have are almost all found in Europe. In fact Europe has over 30 records that predate 1800, and three that go back beyond 1750. One of those three is the De Bilt record from the Netherlands (Berkeley Earth ID: 175554) that I discussed in both Post 41 and Post 42 and which dates back to 1706. The second is Uppsala in Sweden (Berkeley Earth ID: 175676) which dates back to 1722, and the third is Berlin-Tempelhof in Germany (Berkeley Earth ID: 155194) which has data as far back as 1701. Overall, there are nearly 120 temperature records with over 1200 months of data that also have data that predates 1860 (see here for a list). If we average the anomalies from these records, we get the temperature trend shown in Fig. 44.1 below.

 

Fig. 44.1: The temperature trend for Europe since 1700. The best fit is applied to the interval 1731-1980 and has a positive gradient of +0.10 ± 0.04 °C per century. The monthly temperature changes are defined relative to the 1951-1980 monthly averages.

 

To construct the trend in Fig. 44.1 above, the raw temperature data from each of 109 records was first converted to monthly anomaly data by subtracting the monthly reference temperatures (MRTs). The MRTs were in turn calculated for the time interval 1951-1980 by averaging the data in that record over all months in that period. This is the same time frame that was used by climate scientists in the 1980s to analyse temperature data, but is significantly earlier than the time intervals normally used today, which tend to be 1961-1990 or 1981-2010. I intend to discuss the reasons for these differences in time frame in a later post.
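As a sketch, the anomaly construction described above amounts to the following, assuming each record is held as a pandas Series of monthly mean temperatures indexed by date:

```python
import pandas as pd

def monthly_anomalies(record: pd.Series, start="1951", end="1980") -> pd.Series:
    """Convert raw monthly means to anomalies by subtracting the MRTs.

    The MRT for, say, January is the average of all January values
    falling inside the reference interval (here 1951-1980)."""
    ref = record.loc[start:end]
    mrt = ref.groupby(ref.index.month).mean()   # 12 reference values
    return record - mrt.reindex(record.index.month).to_numpy()

# The regional trend is then the plain (unweighted) average across stations:
# trend = pd.concat([monthly_anomalies(s) for s in stations], axis=1).mean(axis=1)
```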

The temperature trend in Fig. 44.1 has two features of note. The first is the very slight upward trend from 1730 to 1980 of approximately 0.10 °C per century. This amounts to a total temperature increase over that time period of about 0.25 °C which is significantly less than the standard deviation of the 10-year moving average of the same data. This suggests that this trend is insignificant when compared to natural variations in temperature.

The second feature is the sudden temperature rise of almost 0.8 °C seen in 1988. This looks unnatural. So much so that, if it were to occur in just one temperature record, then it could be ascribed to a random fluctuation, or a sudden change in the local environment or undocumented location change. But this is not seen in just one record; it is seen in the average of over 100 temperature records, as the data in Fig. 44.2 below shows.

 

Fig. 44.2: The number of sets of station data included each month in the temperature trend for Europe.

 

Nor can we claim that this is just a local effect. The map below in Fig. 44.3 shows the approximate location of all 109 stations whose data was used to construct the trend in Fig. 44.1 above. While it is clear that the greatest concentration of stations is in central Europe between France and Poland, it is also evident that there are significant numbers of stations with very long records located on the edges of Europe such as in the UK, Scandinavia and eastern Europe. This suggests that the sudden rise in temperature seen in 1988 is real and widespread.

 


 Fig. 44.3: The locations of long stations in Europe with more than 1800 months of data, or more than 1200 months of data but with significant data from before 1860. Those stations with a high warming trend from 1700-1980 are marked in red.

 

For comparison, I have performed the same averaging process on the adjusted data for each station created by Berkeley Earth. This adjusted data incorporates two adjustments. Firstly, the monthly reference temperatures (MRTs) are constructed from homogenized data for the region rather than from the raw station data. Secondly, each temperature record is split into segments at breakpoints, and each segment is adjusted up or down relative to its original position. These breakpoint adjustments are supposed to remove local measurement errors (such as those due to changes in instrumentation or location) and thus make the data more reliable, but as I pointed out in my previous post, reliability in temperature data is very hard to measure due to the amount of natural variability that it contains.

 

Fig. 44.4: Temperature trends for all long and medium stations in Europe since 1750 derived by aggregating and averaging the Berkeley Earth adjusted data. The best fit linear trend line (in red) is for the period 1801-1980 and has a gradient of +0.33 ± 0.03 °C/century.

 

The results of averaging the Berkeley Earth adjusted data are shown in Fig. 44.4 above. Three things are noticeable in this data. Firstly, the trend in the data before 1980 has increased by a factor of three. There are two main reasons for this. One is that the adjustments made to the data have increased the trend slightly and smoothed out some of the peaks before 1830 (see Fig. 44.6 below). The other is that the interval used for fitting the linear regression is shorter; excluding the early data increases the gradient of the fitted trend.

The second feature of the data in Fig. 44.4 above is that the jump in temperature after 1988 is still present, and is just as large as that seen in Fig. 44.1.

The third feature of the data in Fig. 44.4 is that it closely resembles the data for the 12-month and 10-year trends published by Berkeley Earth (see Fig. 44.5 below). This suggests that the averaging process I have used is sufficiently accurate without the need to apply different weightings to the data from different stations as Berkeley Earth does. The weightings used by Berkeley Earth are supposed to correct for any clustering of stations, but the map in Fig. 44.3 suggests these weightings are unlikely to vary significantly for most stations, and so are unlikely to be of primary importance. The agreement between the data in Fig. 44.4 and that in Fig. 44.5 appears to confirm that hypothesis.

 

Fig. 44.5: The temperature trend for Europe since 1750 according to Berkeley Earth.

 

It can be seen from these results that the differences between the trends I have constructed using the original data and the trends derived using Berkeley Earth's adjusted data are not as large as has been seen in previous regional analyses, such as those for South Africa (Post 37), South America (Post 35), the South Pacific (Post 33 and Post 34), Papua New Guinea (Post 32), Indonesia (Post 31), Australia (Post 26) and New Zealand (Post 8). These differences for Europe are shown in Fig. 44.6 below.

 

Fig. 44.6: The contribution of Berkeley Earth (BE) adjustments to the anomaly data in Fig. 44.4 after smoothing with a 12-month moving average. The linear best fit (red line) to the breakpoint adjustment data (shown in orange) is for the period 1841-2010 and has a gradient of 0.057 ± 0.001 °C per century. The blue curve represents the total BE adjustments including those from homogenization.

 

Overall, the adjustments made by Berkeley Earth to their data have probably only added about 0.2 °C to the warming. More significant are the adjustments made to data before 1830 which appear to be designed to flatten the curve. Such adjustments, though, assume that the mean temperature before 1830 was stable. Yet data from 1830 to 1980 suggests that the temperature trend for Europe was anything but stable, even though the trend shown in Fig. 44.1 was constructed from between 50 and 109 different datasets over that period. The full extent of that instability for the 5-year average temperature can be seen in Fig. 44.7 below.

 

Fig. 44.7: The 5-year moving average of the temperature trend for Europe since 1700. The best fit is applied to the monthly anomaly data for the interval 1731-1980 and has a positive gradient of +0.10 ± 0.04 °C per century.


Conclusions

In 1981 James Hansen and co-workers at NASA's Goddard Institute for Space Studies (GISS) published a paper in the pre-eminent journal Science (which, incidentally, has an impact factor of 41.8, where impact factors over 1.0 are considered good) that was one of the first to warn of the impact that increased levels of carbon dioxide in the atmosphere could have on global warming and climate change. But here is the problem: the data shown here appears to indicate that there was no significant warming in Europe before 1981. As the data in Fig. 44.1 indicates, the total warming in Europe over the 250 years before 1981 was so small (less than 0.25 °C) that it was less than the natural variation in the mean decadal temperatures over the same period.

Then, in 1988 the mean temperatures in Europe suddenly jumped by over 0.8 °C (see Fig. 44.1), just in time for the IPCC's first assessment report on climate change in 1990 (PDF). A similar abrupt jump was seen at about the same time in Botswana and, to a lesser extent, in South Africa. Convenient, certainly. But is this just coincidence or 20:20 foresight by the IPCC?

As I have shown throughout the course of this blog, before 1981 there does not appear to have been any exceptional warming in most of the Southern Hemisphere either. So the above analysis raises important concerns regarding the reported extent of climate change in Europe and beyond. The most important question is: is the temperature rise seen after 1988 in Fig. 44.1 real? And if so, what is causing it? 

If it is being driven by CO2, then why does it not correlate with increases in CO2 levels in the atmosphere? If it is a natural phenomenon, why are there no other jumps of a similar magnitude in the previous 250 years? Could it be another example of chaotic behaviour similar to the self-similarity I explored in Post 42? And if so, is it just random, or is it the consequence of a complex system being driven between meta-stable states by, for example, greenhouse gases? What I don't see so far is conclusive evidence either way.


Tuesday, December 8, 2020

43. The reliability of individual temperature records

One of my many criticisms of climate scientists is their use of adjustments to temperature data that supposedly correct for errors in the measurements, corrections which in my opinion are often applied to errors that are not real, and are therefore probably not needed. These corrections come in two main types: homogenization and breakpoint adjustments.

In the case of homogenization, records from neighbouring stations (and the definition of what constitutes a neighbouring station can be somewhat variable) are used to create an average expected temperature for that location, with differences in latitude and elevation compensated for during the process. This homogenization is used to infill missing monthly data points in each record. But it is also used to define the monthly reference temperatures (MRTs) that then define the monthly anomalies.

Breakpoint adjustments, or changepoint adjustments (see this PDF from NOAA) as they are alternatively called, are supposedly used to correct for false trends in the data. This generally means adjusting the slope of all sets of station data so that they look more or less the same, and more importantly have the same general trends as those quoted by the IPCC. So a station like Jakarta Observatorium in Indonesia (Berkeley Earth ID: 155660), which actually has a very large warming trend of 1.84 °C per century sustained since 1870 (and therefore a total warming since 1870 of over 2.6 °C), gets adjusted down so that its trend is only 0.95 °C per century. This is because its warming is too high to fit with the IPCC narrative of only 1.0 °C of warming in the Southern Hemisphere since 1900.

On the other hand, Dubbo (Darling Street) in New South Wales (Berkeley Earth ID: 152082), which also has temperature data dating from about 1870 but instead has a negative warming (or cooling) trend of -0.32 °C per century, gets adjusted up so that its trend becomes +0.56 °C per century, and thus closer to the accepted "real" value of 1.0 °C per century.

If this all sounds a bit fishy, then welcome to the wonderful and wacky world of climate science, where nothing is quite as it seems. Central to all these data corrections is the assumption that most of the underlying data is reliable, and more importantly, that it is possible to distinguish the bad data from the good. The questions are: is any of this true? Is most of the data good? Can we really detect the small amount of bad data? And can the good data actually be so unreliable, or subject to so many unknown hidden variables, that it looks like bad data? One way to test this is by comparing data from stations that are very close neighbours.

As I pointed out in Post 41, the Netherlands has a number of stations that are located very close to a neighbouring station. In fact I have identified nine pairs of stations in the Netherlands where both stations have over 480 months of data, where there is significant temporal overlap of their data (i.e. they have a lot of months where both stations have active data), and where their spatial separation is less than 1.6 km (or about one mile for those dinosaurs from the USA who can't do metric). This allows direct comparisons of data to be made for stations that are, or should be, virtually identical. It is worth noting here that for this purpose the Netherlands has another unique advantage: it is very flat. That means that we do not need to worry about temperature differences occurring between stations due to differences in altitude.

In order to test the reliability of these temperature records I will apply three tests to their data. The first will look at the difference in the mean temperature of each set of station data in the pair. Ideally this should be zero, but there may be a systematic offset between stations due to local geography that could be significant. Such a difference would not necessarily raise question-marks over the validity of the data.

The second test will look at the difference in monthly temperatures between the two stations over time. The issue here is how much randomness there is in the temperature difference, and how significant it is. This will be measured by calculating the standard deviation of the temperature difference. Again, I would expect to see a low value here, with noise levels in this data being at least a factor of √30 less than the accuracy of the daily mean temperature of each station (which I would estimate conservatively at 1 °C). Overall, this suggests that the standard deviation of this dataset should be less than 0.2 °C, and probably less than 0.1 °C.

Finally, I will look at the trend of the difference in temperature over time. If this is significantly large and comparable to the trends seen in the anomaly data for either station, that would indicate significant reliability problems with this type of data.
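A minimal sketch of the three tests, assuming the two records have already been reduced to their common months:

```python
import numpy as np

def compare_pair(t, temps_a, temps_b):
    """The three reliability tests for a pair of neighbouring stations.

    t           : decimal years of the common months
    temps_a, _b : raw monthly mean temperatures for those same months"""
    diff = np.asarray(temps_a) - np.asarray(temps_b)
    mean_offset = diff.mean()                     # test 1: systematic offset
    noise = diff.std(ddof=1)                      # test 2: fluctuation size
    drift = 100.0 * np.polyfit(t, diff, 1)[0]     # test 3: trend, C per century
    return mean_offset, noise, drift
```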

The results of these three tests are summarized below for each of the nine pairs of stations.


Case 1: Soesterberg

Fig 43.1: The difference in monthly mean temperatures for two stations at Soesterberg. The mean of the monthly differences is 0.17 °C, the standard deviation of the differences is 0.27 °C, and the trend in the differences is -0.29 ± 0.10 °C per century.


The two stations at Soesterberg are BE-92835 (trend of +2.55 °C per century) and BE-139138 (trend of +2.29 °C per century). According to Berkeley Earth they are 1.06 km apart.



Case 2: Schiphol

Fig 43.2: The difference in monthly mean temperatures for two stations at Schiphol. The mean of the monthly differences is 0.09 °C, the standard deviation of the differences is 0.17 °C, and the trend in the differences is 0.017 ± 0.056 °C per century.


The two stations at Schiphol are BE-18517 (trend of +2.53 °C per century) and BE-157005 (trend of +2.12 °C per century). According to Berkeley Earth they are 1.2 km apart.



Case 3: Valkenberg

Fig 43.3: The difference in monthly mean temperatures for two stations at Valkenberg. The mean of the monthly differences is 0.18 °C, the standard deviation of the differences is 0.20 °C, and the trend in the differences is -0.07 ± 0.09 °C per century.


The two stations at Valkenberg are BE-174609 (trend of +2.29 °C per century) and BE-157004 (trend of +1.65 °C per century). According to Berkeley Earth they are 0.25 km apart.



Case 4: Eindhoven

Fig 43.4: The difference in monthly mean temperatures for two stations at Eindhoven. The mean of the monthly differences is 0.10 °C, the standard deviation of the differences is 0.20 °C, and the trend in the differences is 0.20 ± 0.06 °C per century.


The two stations at Eindhoven are BE-18478 (trend of +2.31 °C per century) and BE-156991 (trend of +2.06 °C per century). According to Berkeley Earth they are 1.42 km apart.



Case 5: Volkel

Fig 43.5: The difference in monthly mean temperatures for two stations at Volkel. The mean of the monthly differences is 0.10 °C, the standard deviation of the differences is 0.23 °C, and the trend in the differences is 0.20 ± 0.07 °C per century.


The two stations at Volkel are BE-92832 (trend of +2.31 °C per century) and BE-156995 (trend of +2.10 °C per century). According to Berkeley Earth they are 0.81 km apart.



Case 6: Gilze Rijen

Fig 43.6: The difference in monthly mean temperatures for two stations at Gilze Rijen. The mean of the monthly differences is 0.11 °C, the standard deviation of the differences is 0.30 °C, and the trend in the differences is -0.01 ± 0.09 °C per century.


The two stations at Gilze Rijen are BE-18485 (trend of +2.41 °C per century) and BE-156994 (trend of +1.93 °C per century). According to Berkeley Earth they are 0.16 km apart.



Case 7: Deelen

Fig 43.7: The difference in monthly mean temperatures for two stations at Deelen. The mean of the monthly differences is 0.11 °C, the standard deviation of the differences is 0.25 °C, and the trend in the differences is -0.13 ± 0.09 °C per century.


The two stations at Deelen are BE-18506 (trend of +2.50 °C per century) and BE-157001 (trend of +1.78 °C per century). According to Berkeley Earth they are 1.62 km apart.



Case 8: Rotterdam

Fig 43.8: The difference in monthly mean temperatures for two stations at Rotterdam. The mean of the monthly differences is 0.21 °C, the standard deviation of the differences is 0.21 °C, and the trend in the differences is -0.26 ± 0.14 °C per century.


The two stations at Rotterdam are BE-18497 (trend of +2.17 °C per century) and BE-18496 (trend of +1.80 °C per century). According to Berkeley Earth they are 0.89 km apart.



Case 9: Hoek Van Holland

Fig 43.9: The difference in monthly mean temperatures for two stations at Hoek Van Holland. The mean of the monthly differences is 0.07 °C, the standard deviation of the differences is 0.29 °C, and the trend in the differences is 0.50 ± 0.18 °C per century.


The two stations at Hoek Van Holland are BE-156999 (trend of +1.95 °C per century) and BE-18500 (trend of +1.62 °C per century). According to Berkeley Earth they are 0.87 km apart.


Summary

The three measures I have used to assess the reliability of the temperature records are the difference in the mean temperatures of each pair of stations, the standard deviation of the difference in their monthly temperatures, and the magnitude of the trend in that difference over time. It is important to point out that the data used in the analysis shown in the figures above was the raw monthly temperature data, and not the monthly anomaly data. Overall, the results can be summarized as follows.

1) The difference in mean temperatures

The data shown above for nine pairs of stations indicates that in each case the mean temperature of the two stations can differ by up to 0.2 °C. In fact the mean difference is about 0.13 °C. The question we then need to answer is, is this difference in line with expectations based on known measurement accuracies for the actual data? Or is it determined by other factors such as random variations in the local climate or systematic differences due to differing local environments?

The expected error in the difference in mean temperatures comes from two main sources. One arises from the error in calculating the mean temperature of each station, while the second comes from the expected temperature difference due to their spatial separation.

In order to estimate the first error we start with the original measurement error in the mean daily temperature. This should be less than 1 °C. Then, as each station has over 480 months of data, and each month is itself the average of approximately 30 daily readings, the total number of daily readings being averaged for each station will be N ≥ 30 × 480 = 14,400. Statistical theory states that the error in measuring the mean temperature of a particular station over N readings should be a factor of √N less than the error in a single daily mean temperature measurement. So this component of the error should be less than 1/120 of 1 °C, in other words less than about 0.008 °C. Combining the errors from the two stations increases this by a factor of √2 to give 0.012 °C.

The second error component can be estimated by looking at how the global mean temperature changes with latitude. At the equator mean temperatures are about 25 °C, while around the Arctic Circle they drop to near zero. This implies that mean temperatures drop by about 1 °C for every 300 km of latitude. As the two stations in each pair are never more than about 1.5 km apart, this implies a maximum difference in temperature due to location of about 0.005 °C.

Combining the two errors above (by summing their squares) gives a combined maximum expected error of 0.013 °C. This is an order of magnitude less than what we observe. This suggests the difference in the mean temperatures is too high to be solely due to measurement uncertainties, even if we allow for differences in local geographical location. It seems likely that local environment differences are the dominant factor here, but these will probably be in the form of fixed temperature offsets that should not impact significantly on the anomaly data over time. If they do, then there will be evidence for this in the form of excessive differences in the trends.
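The whole error budget above can be reproduced in a few lines; the 1 °C daily error and 1.5 km separation are the assumptions stated in the text:

```python
import math

daily_error = 1.0                    # C, assumed error in a daily mean
n = 30 * 480                         # daily readings per station
station_err = daily_error / math.sqrt(n)       # ~0.008 C
pair_err = math.sqrt(2.0) * station_err        # ~0.012 C for two stations

latitude_gradient = 1.0 / 300.0      # C per km (25 C over ~7500 km)
separation_err = 1.5 * latitude_gradient       # ~0.005 C at 1.5 km

combined = math.hypot(pair_err, separation_err)
print("expected error in the mean difference: %.3f C" % combined)   # ~0.013
```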


2) The standard deviation

The mean standard deviation of the monthly temperature differences for the nine pairs of stations shown in the figures above is 0.24 °C. While this is much less than the standard deviation of the monthly anomalies of individual stations (typically about 1 °C), it is still significant.

At the start of this post I suggested that 0.2 °C should be a more likely upper limit for the standard deviation, based on the measurement accuracy of the daily mean temperatures, and the number of daily readings that combine to form the monthly mean temperature. This will be heavily dependent on the accuracy of the mean daily temperature, though. 

If the daily mean temperature measurements have an error or uncertainty of 1 °C, then combining 30 of them into a monthly mean will decrease the error or uncertainty for the monthly mean by a factor of √30. However, then comparing the monthly means of two different stations will increase the error in the temperature difference by √2, so overall, the error in the difference in monthly temperatures should be a factor of √15 less than the error in a single mean daily temperature. This is approximately what we see.


3) The long term trend of the temperature difference

Of the three test results, this is probably the most surprising. While one might expect adjacent stations to experience a relative offset in their local temperatures, or differences due to statistical fluctuations over time, generally one would expect their temperature trends to follow each other. Yet the data shown above suggests otherwise.

Overall, the various station pairs exhibited a wide range of trends for their difference in monthly temperatures over time, as illustrated in the figures above. The mean trend seen for the first five pairs of stations (ignoring sign) is approximately 0.15 °C per century. This seems much higher than I would intuitively expect, but is it?

The difference in the trends is likely to be related to the uncertainty in the trends for the anomalies of each station dataset. This uncertainty depends on the standard deviation of the residuals, and inversely on the length of the dataset. For any best fit or trend line, the error in the gradient can be estimated by dividing the standard deviation of the residuals by the standard deviation of the x-values multiplied by the square root of the number of x-values.

In this case the residual is effectively the difference in monthly mean temperatures between stations, and the x-values are the time axis in the graphs above. The standard deviation of the x-values is roughly 12 years and there are roughly 400 points, while the standard deviation of the residuals is effectively 0.24 °C. This suggests that the trend seen in the temperature difference data is likely to be in the range ±0.001 °C per year, or ±0.1 °C per century. Again this is roughly what we see, although the actual trends in the graphs shown above are about double this value, so maybe there is some additional (but relatively small) influence here due to differences in the local environment for the two stations over time. 
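Plugging the stated numbers into that gradient-error formula:

```python
import math

sigma_residuals = 0.24   # C, typical std of the monthly differences
sigma_x = 12.0           # years, std of the time axis
n_points = 400           # number of common months

sigma_gradient = sigma_residuals / (sigma_x * math.sqrt(n_points))
print("expected trend uncertainty: %.2f C per century" % (100.0 * sigma_gradient))
```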


Conclusions

The analysis above indicates that even weather stations that are located close together can yield significantly different results from each other for their temperature trends, mean temperatures and temperature distributions over time, just through the presence of known measurement errors. These differences between nearby stations are much greater than I expected to see before I performed this analysis, but are generally consistent with the measurement data and known sources of error. What it does indicate, though, is that even the best data is not that accurate, reproducible or reliable. Given the lack of long term temperature data for many parts of the world, this raises questions over the accuracy of any climate analysis that relies on this imperfect data.


Sunday, December 6, 2020

42. A study of fractal self-similarity and scaling for the De Bilt temperature data

In Post 9 (Fooled by randomness) I looked at the possibility of fractal behaviour occurring in the temperature records of individual stations and regions. In particular, I was interested to see if those records exhibited any form of self-similarity, and whether that self-similarity could account for the magnitude of fluctuations seen in the long term temperature records.

My initial analysis was performed on data from New Zealand and it seemed to suggest that fractal behaviour may be present. This behaviour is quantified by the fractal dimension which, in the case of temperature data, defines how the amplitude of the temperature fluctuations changes with the time interval those readings represent. Most of the data I look at on this blog consists of monthly average temperatures. For these readings the data typically has a spread of up to ±5 °C, while the standard deviation of the monthly fluctuations is usually between 1 °C and 2 °C. But what would the same temperature records look like if one considered the 12-month averages? Or the 10-year averages? 

Well, as I explained previously in Post 9, if the fluctuations in the temperature data conformed to a white noise spectrum, the power spectrum would be expected to be independent of frequency for all frequencies below the fundamental or cutoff frequency (see Eq. 9.1). The consequence of this is that smoothing the data with a sliding window, or moving average, of width N (where N is the number of months in the new average) should reduce the cutoff frequency by a factor of N, and thus reduce the signal power below the cutoff by a factor of N. That in turn should reduce the amplitude of the random noise fluctuations by a factor of √N. So, smoothing the monthly average data with a 24 month moving average should reduce the amplitude of the fluctuations by a factor of √24, or about a factor of five. Except that this does not happen.
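This white-noise baseline is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 120_000)   # white noise, sigma = 1

for n in (1, 12, 24, 120):
    smoothed = np.convolve(noise, np.ones(n) / n, mode="valid")
    # For white noise the amplitude should fall as 1/sqrt(N)
    print("N = %4d: sigma = %.3f (1/sqrt(N) = %.3f)" % (n, smoothed.std(), n ** -0.5))
```

For genuine white noise the printed standard deviations track 1/√N closely; as shown below, the real temperature data does not.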

As I have demonstrated in numerous previous posts, the noise amplitude in the monthly temperature data decreases much more slowly than expected as the width of the sliding window in the smoothing algorithm is increased. In fact it appears to decrease as N^(-p), where p tends to be in the range 0.20 < p < 0.35, but is generally concentrated around p = 0.25. This was shown in Post 9 for New Zealand data, in Post 17 for individual sites in Australia, and in Posts 18-21 for various Australian states. In most cases the same behaviour was seen. The only two exceptions I have found so far were for data from the South Pole (Amundsen-Scott), and for the trend for South Australia (see Post 21), but only if the parabolic long term trend was removed. In both cases the data behaved like classical white noise with p = 0.5.

Why is this behaviour important? Well, for three reasons. Firstly, if there is a definite trend, it would allow us to estimate the amplitude of natural temperature fluctuations over timescales that are much longer than we have data for. Secondly, it could allow us to differentiate between natural and anthropogenic sources of climate change. And finally, it may shed light on possible natural mechanisms that may underpin long term climate change rather than assuming that everything is a consequence of carbon dioxide emissions, or that every current change in climate behaviour has a cause that is local, either spatially or temporally.

So why am I revisiting this now? Well, because in my last post I presented some data from a station at De Bilt (Berkeley Earth ID: 175554) in the Netherlands, which is one of the longest continuous sets of temperature data in existence. What also set this data apart, though, was that it contained complex structure much greater in amplitude than the continuous upward trend expected from global warming. Moreover, the underlying upward trend could easily be removed so that the remaining data could be studied, just as I removed the parabolic background from the South Australia data in Post 21. The question is, would I see the same result as for South Australia, namely that the remaining temperature fluctuations behave like white noise? Well, the answer is no.


Fig. 42.1: The monthly temperature anomalies for De Bilt since 1706 with the linear trend of +0.29 ± 0.04 °C per century removed (blue curve). The standard deviation is 1.855 °C (for N = 1). The yellow curve is the 12-month moving average of the blue data (N = 12) and has a standard deviation of 0.810 °C.


The data in Fig. 42.1 above shows the same monthly temperature anomaly data that was presented in Fig. 41.1 of the previous post, except with the long-term upward trend of 0.29 °C per century removed. The yellow curve is the 12-month moving average of the blue data, which clearly has a much lower noise amplitude, and therefore a lower standard deviation of 0.81 °C compared to 1.85 °C for the monthly data. In both cases the standard deviation was measured over the same 275-year period (3300 months) from 1731-2005.


Fig. 42.2: The 3-month (N = 3) moving average (blue curve) of the monthly data in Fig. 42.1 above. The standard deviation of this data is 1.326 °C. The yellow curve is the 24-month moving average (N = 24) of the same data in Fig. 42.1 and has a standard deviation of 0.664 °C.

 

The data in Fig. 42.2 above shows the same monthly temperature anomaly data as shown in Fig. 42.1, but after smoothing with a 3-month moving average (blue curve) and alternatively a 24-month moving average (yellow curve). As a result of the smoothing, the standard deviation reduces to 1.326 °C for the 3-month window (N = 3) and 0.664 °C for the 24-month sliding window (N = 24).


Fig. 42.3: The 6-month (N = 6) moving average (blue curve) of the monthly data in Fig. 42.1 above. The standard deviation of this data is 1.046 °C. The yellow curve is the 5-year moving average (N = 60) of the same data in Fig. 42.1 and has a standard deviation of 0.536 °C.


Next, if we smooth the original monthly temperature anomaly data in Fig. 42.1 with 6-month and 5-year moving averages or sliding windows we get the data shown in Fig. 42.3 above. Now, after smoothing with a 6-month moving average (blue curve) the standard deviation has reduced to 1.046 °C (and N = 6), while that for the 5-year moving average (yellow curve) is now 0.536 °C (and N = 60).


Fig. 42.4: The 9-month (N = 9) moving average (blue curve) of the monthly data in Fig. 42.1 above. The standard deviation of this data is 0.894 °C. The yellow curve is the 10-year moving average (N= 120) of the same data in Fig. 42.1 and has a standard deviation of 0.462 °C.


Finally, if we smooth the original monthly temperature anomaly data in Fig. 42.1 with 9-month and 10-year moving averages or sliding windows we get the data shown in Fig. 42.4 above. Now, after smoothing with a 9-month moving average (blue curve) the standard deviation has reduced to 0.894 °C (and N = 9), while that for the 10-year moving average (yellow curve) is now 0.462 °C (and N = 120).

All the standard deviations (σ) for the different sets of smoothed data are summarized in the table below.


Table 42.1

  N      σ (°C)    ln(N)     ln(σ)
  1      1.855     0.000     0.618
  3      1.326     1.099     0.282
  6      1.046     1.792     0.045
  9      0.894     2.197    -0.112
  12     0.810     2.485    -0.211
  24     0.664     3.178    -0.409
  60     0.536     4.094    -0.624
  120    0.462     4.787    -0.772


If we now combine these results into a single plot we get the graph shown in Fig. 42.5 below. The gradient of this log-log plot is the exponent in the N^(-p) power law that we expect for the decrease in the noise amplitude as we increase the smoothing interval N. Once again the value of the exponent p is well below the 0.5 expected for white noise. In fact p = +0.31 ± 0.01, indicating that the fractal dimension is 0.31. In addition, the quality of the fit (as indicated by the R² value) is very high.



Fig. 42.5:  Plot of the standard deviation of the smoothed anomaly data against the smoothing interval N for temperature data from De Bilt. The best fit line is fitted to all the data except that from the 10-year moving average (as indicated by the length of the red line). The gradient of the best fit line is -0.31 ± 0.01 and R2 = 0.9943.
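The fit in Fig. 42.5 can be reproduced directly from the values in Table 42.1. A minimal sketch:

```python
import numpy as np

N = np.array([1, 3, 6, 9, 12, 24, 60, 120])
sigma = np.array([1.855, 1.326, 1.046, 0.894, 0.810, 0.664, 0.536, 0.462])

# Fit ln(sigma) = ln(sigma_1) - p*ln(N), excluding the N = 120 point
# as in Fig. 42.5
slope, intercept = np.polyfit(np.log(N[:-1]), np.log(sigma[:-1]), 1)
print("p = %.2f" % -slope)   # ~0.31

# Extrapolate to the 100-year average (N = 1200 months)
print("sigma(N = 1200) = %.2f C" % (np.exp(intercept) * 1200.0 ** slope))   # ~0.2
```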


Conclusions

  1. The quality and linearity of the best fit in Fig. 42.5 indicates that there is a high degree of self-similarity in the data. This in turn also suggests that all the significant features seen in the original data (such as the large broad peaks at 1725 and 1860) are natural and not the result of external or artificial biases. If such artificial biases were present, and were significant in magnitude, they would probably manifest themselves as significant deviations of the data in Fig. 42.5 from a linear trend.
  2. From the gradient of the trend line in Fig. 42.5 we can estimate the standard deviation of the temperature data for the case N = 1200 as being 0.20 °C. In other words, the fluctuations in the 100-year average will typically be of the order of ±0.2 °C. This in turn suggests that changes in the mean temperature from century to century of more than 0.5 °C can occur purely through natural variation, and are far from rare.



Tuesday, November 24, 2020

41. Netherlands - temperature trends - VARIABLE

The Netherlands has one of the longest instrumental temperature records in the world, and probably the most complete record covering the last 300 years. The record from De Bilt (Berkeley Earth ID: 175554) had nearly 3700 months of data in 2013 that stretched back to 1706 (see Fig. 41.1 below). Only Berlin Tempelhof (Berkeley Earth ID: 155194) has earlier data that extends to 1701, but it has fewer months overall and significant gaps in its record before 1756.


Fig. 41.1: The temperature trend for De Bilt since 1706. The best fit is applied to the interval 1731-2005 and has a positive gradient of +0.29 ± 0.04 °C per century. The monthly temperature changes are defined relative to the 1976-2005 monthly averages.


As I showed in the last post, Belgium also has one long record that stretches back to the 18th century, but it has virtually no other data before 1973. The Netherlands is much better in this respect. There is one other dataset with some sporadic 19th century data, and overall there are five long station records with more than 1200 months of data. In addition, there are another 25 medium records with more than 480 months of data. Details of all these 30 stations (and other shorter records) are listed here, while their geographical locations are shown on the map in Fig. 41.2 below.

 

Fig. 41.2: The locations of long stations (large squares) and medium stations (small diamonds) in the Netherlands. Those stations with a high warming trend are marked in red.


It can be seen from the map above that the stations in the Netherlands are fairly randomly distributed across the country, but that their number appears to be significantly less than the 30 stations stated previously. This is because in nearly a dozen cases two or more stations are located within 10 km of each other. I intend to look at this in more detail in a later post, where I will consider what it says about data reliability.

The other impact of this clustering is the effect it could have on the station weightings in the regional average. Normally, if a cluster of records is found, the weighting of each record should be reduced, as clustered records tend to repeat each other's data and geographical coverage. However, as most of the station records appear in effect to be paired up, they will almost all have the same reduced weighting, so the weighting reduction should largely cancel. This is largely confirmed by the results I will show later in this post. The other point to note is that the clustering really only affects the medium stations, most of which only have data after 1970. So the weighting problem will only have a slight effect on the overall trends after 1970.


Fig. 41.3: The temperature trend for the Netherlands since 1706. The best fit is applied to the interval 1731-2005 and has a positive gradient of +0.31 ± 0.04 °C per century. The monthly temperature changes are defined relative to the 1976-2005 monthly averages.


If we average the anomaly data for all the long and medium stations we get the trends shown above in Fig. 41.3. The overall trend indicates that the region has warmed by about 0.31 °C per century since 1700. This equates to an overall warming of about 0.97 °C. But as I explained in Post 14, the current human industrial and domestic energy consumption in the country suggests that the region should have warmed by at least 1.0 °C over the same period simply as a consequence of all the heat that is produced each year by human activity. So, just as for Belgium, we see little need to call on the effects of carbon dioxide emissions and the Greenhouse Effect to explain the observed temperature rise.

The other interesting feature of the data in Fig. 41.3 is the shape of the temperature trend between 1800 and 1950. There is clearly a peak around 1860 that is seen not just in the De Bilt record in Fig. 41.1, but also in the Zuid-Limburg station data. This suggests that temperatures in the mid-19th century in the Netherlands were actually higher than they are today. This is a phenomenon that I have identified and highlighted previously in other countries and regions such as New Zealand (see Post 8), Australia (see Post 26) and South America (see Post 35). In fact it appears to occur over most of the Southern Hemisphere, or at least in those parts that have sufficient data before 1900.

The anomaly data used to construct the trend in Fig. 41.3 was derived by first calculating the monthly reference temperatures (MRTs) for the period 1976-2005 for each record, and then subtracting these from the raw data. The results were then averaged. Temperature records were only included in the trend in Fig. 41.3 if they had at least 480 months of data, of which at least 320 months were within the MRT interval of 1976-2005. This was to ensure that all temperature anomaly records were measured relative to identical reference points. The result was that three medium stations were excluded because they had insufficient data after 1975: the stations at Den Helder, Maastricht and Groningen.
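As a sketch, that inclusion test reduces to the following, again assuming each record is a pandas Series of monthly means:

```python
def include(record, start="1976", end="2005"):
    """Admit a record only if it is long enough and has sufficient
    coverage of the MRT reference window (a pandas Series of monthly
    means indexed by date is assumed)."""
    long_enough = record.dropna().size >= 480               # at least 480 months
    covered = record.loc[start:end].dropna().size >= 320    # 320 in 1976-2005
    return long_enough and covered
```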


Fig. 41.4: The number of sets of station data included each month in the temperature trend for the Netherlands.


The actual number of stations used to construct each monthly point in the trend in Fig. 41.3 is illustrated above in Fig. 41.4. This shows that the trend before 1900 is almost entirely due to the data from De Bilt in Fig. 41.1, while the data from 1900 to 1950 comes from the five long stations. After 1950 as many as 27 station records were used for each monthly average.


Fig. 41.5: Temperature trends for all long and medium stations in the Netherlands since 1750 derived by aggregating and averaging the Berkeley Earth adjusted data. The best fit linear trend line (in red) is for the period 1801-1980 and has a gradient of +0.29 ± 0.03 °C/century.


So the question is, how significant are these results? And also how reliable are they?

Well, one way to test this is to compare these results against those produced by climate science groups like Berkeley Earth. The first thing to remember, though, is that the Berkeley Earth anomaly data for each station record is different from that which I have calculated here, because it uses homogenization and breakpoint alignment to adjust the data: techniques about which I have profound misgivings, since they could introduce warming to the overall trend that is not actually there. That is why I restrict my analysis to the raw data with all its imperfections.

However, if we apply the same averaging process to the Berkeley Earth adjusted data as I have employed on the raw data, we see that the trends we get (as illustrated in Fig. 41.5 above) agree very well with those published by Berkeley Earth and shown in Fig. 41.6 below. In fact the sizes and positions of most of the peaks in the two figures are virtually identical. This suggests that the two processes (mine and Berkeley Earth's) are broadly consistent, even if the anomaly data for each station used in the averaging is different. What it also shows, though, is that the Berkeley Earth trend that incorporates homogenization and breakpoint adjustments is somewhat different from the trend I have presented in Fig. 41.3, which avoids such controversial techniques. For example, according to Berkeley Earth, the warming in the Netherlands since 1900 is at least 1.5 °C, and there was no warm period in the mid-19th century. It is these disagreements over data and methodology, and the effects they have on the resulting temperature trends, that partly fuel the climate scepticism debate.


Fig. 41.6: The temperature trend for the Netherlands since 1750 according to Berkeley Earth.


If we try to quantify the difference between the Berkeley Earth temperature trend and the raw trend I have constructed in Fig. 41.3, we find that the adjustments made by Berkeley Earth have two main effects. The first is to flatten the trend before 1900. The second is to exaggerate the temperature rise after 1900 by about 0.3 °C. These adjustments are illustrated in Fig. 41.7 below.


Fig. 41.7: The contribution of Berkeley Earth (BE) adjustments to the anomaly data after smoothing with a 12-month moving average. The linear best fit to the data is for the period 1901-2010 (red line) and the gradient is 0.266 ± 0.009 °C per century. The orange curve represents the contribution made to the BE adjustment curve by breakpoint adjustments only.


Conclusions

It is clear from Fig. 41.3 that there has been a large degree of warming in the Netherlands over the last 300 years, but that this is probably less than the 1.5 °C we are being led to expect for anthropogenic global warming (AGW) in the Northern Hemisphere as claimed by the IPCC and the HadCRUT4 data.

The magnitude of this warming is probably only about 1 °C. This temperature rise is no more than one would expect from the growth of industrial energy use over this period (for the Netherlands I previously calculated that it should be about 1.0 °C), as I explained in Post 14.

However, there is also evidence of significant natural variation in the temperature record (such as the warming in the mid-19th century) that is inconsistent with current IPCC claims.

Consequently, the data presented here does not really add support to the theory that carbon dioxide is the primary driver of warming, otherwise the warming seen in the Netherlands should be much larger, and there would be no anomalous fluctuations in temperature before 1900.

Finally, there is the issue of historical perspective. If temperatures in the recent past were both higher than now and at times lower than now, why are we worried about current temperatures when they appear to be fluctuating between normal bounds?