Tuesday, February 16, 2021

49. Germany - temperature trends

If any country in Europe were to exhibit the effects of anthropogenic global warming (AGW) and climate change, then you might expect that country to be Germany. Except that it doesn't.

There are over 135 sets of weather data for Germany that contain over 480 months of data (see here). Of these, 34 are long stations with over 1200 months of data, while the remainder I denote as medium stations. In fact, ten temperature records have over 2000 months of data. This makes the temperature data for Germany some of the best available.

The geographical locations of these weather stations are indicated on the map below (see Fig. 49.1). This shows that both the long and medium stations are distributed fairly evenly, although there appear to be slightly fewer medium stations in the former East Germany. The stations are also differentiated according to the strength of their warming trend. Those with a large warming trend are marked in red, where a large trend is defined to be one that is both greater than 0.25 °C in total and also more than twice the uncertainty. 
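The classification rule used for the map can be expressed as a short check. This is an illustrative sketch of my reading of the definition above, not code from the analysis itself:

```python
def is_large_warming_trend(total_change_c, uncertainty_c, threshold_c=0.25):
    """A trend counts as 'large' if the total temperature change exceeds
    both the 0.25 degC threshold and twice the trend's own uncertainty."""
    return total_change_c > threshold_c and total_change_c > 2 * uncertainty_c

print(is_large_warming_trend(0.8, 0.2))   # -> True (0.8 > 0.25 and 0.8 > 0.4)
print(is_large_warming_trend(0.3, 0.2))   # -> False (0.3 is less than 2 x 0.2)
```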

The threshold of 0.25 °C is set equal to the temperature rise that one would expect in the EU as a whole due to waste heat or direct anthropogenic surface heating (DASH) due to human and industrial activity. In fact for Germany, based on its population, area and energy consumption, we would expect the temperature rise since 1700 due to DASH to be at least 0.6 °C (see Post 14), even without the effects of an enhanced greenhouse effect.

 

Fig. 49.1: The locations of long stations (large squares) and medium stations (small diamonds) in Germany. Those stations with a high warming trend are marked in red.

 

The longest data set is for Berlin-Tempelhof (Berkeley Earth ID: 155194) which has data that extends back to 1701. This data is shown in Fig. 49.2 below as the temperature anomaly after subtracting the monthly reference temperatures (MRTs) based on the 1971-2000 averages. The method for calculating the anomalies and MRTs from the raw temperature data is described in Post 47. However, there are two caveats that need to be applied to the data in Fig. 49.2. Firstly, there are significant gaps in the data before 1756, and secondly any data before 1714 needs to be treated with caution simply because thermometers did not exist then, at least not in their current form. 


Fig. 49.2: The temperature trend for Berlin-Tempelhof since 1700. The best fit is applied to the interval 1821-1980 and has a positive gradient of +0.13 ± 0.10 °C per century. The monthly temperature changes are defined relative to the 1971-2000 monthly averages.


In order to determine the temperature trend for Germany I have averaged the temperature anomalies from all 135 long and medium stations. The result is shown in Fig. 49.3 below. All stations with less than 480 months of data are excluded as they add no real value to the result, particularly if the data is very recent (i.e. after 1980). This is because the temperature change over time is small, typically 1 °C per century, so you really need at least 40 years of data to detect a measurable trend above the noise.


Fig. 49.3: The temperature trend for Germany since 1700. The best fit is applied to the interval 1756-2005 and has a negative gradient of -0.02 ± 0.05 °C per century. The monthly temperature changes are defined relative to the 1971-2000 monthly averages.


What is immediately apparent is that the trend in Fig. 49.3 differs significantly from the widely publicized IPCC version. Firstly, temperatures before 1850 appear to be higher than they are now, not lower. Secondly, temperatures were stable or declining for over 150 years prior to 1980, not rising. And finally, the mean temperature appears to jump suddenly in 1988 just as the IPCC was being established. Some of these traits are also seen in the mean temperature trend I constructed for the whole of Europe that was published in Post 44. The 19th century cooling is also seen in the temperature data of New Zealand (see Post 8) and Australia (see Post 26).

 

Fig. 49.4: The amount of temperature data from Germany included in the temperature trend each month for three different choices of MRT interval.


As I pointed out in Post 47, the choice of interval for determining the MRTs can influence the number of station records that are included in the final average for the temperature trend, and thus can also influence the nature of the trend itself. In order to test how robust the trend in Fig. 49.3 is regarding changes to the MRT interval, I repeated the calculation for three different MRT intervals. The curves in Fig. 49.4 above show how the number of stations in the final trend changes for each of the different MRT intervals. 

It is clear that there is very little difference between choosing MRT intervals of 1956-1985 and 1971-2000, although the latter does result in a slightly larger number of stations being included in the trend calculation after 1960. The advantage of using the former interval is that it corresponds to a part of the temperature record where the mean temperature is fairly stable, whereas the latter interval spans the abrupt increase in temperature seen around 1988. Despite this, in both cases the final trends are very similar, with best fits of -0.015 °C/century for the 1971-2000 MRT and -0.032 °C/century for the 1956-1985 MRT. In both cases the fitting range was 1756-2005.

The 1901-1930 interval enables more data from before 1930 to be included in the trend (from stations that were closed down before 1930), but significantly less after 1950 when many new stations were set up. Nevertheless, the final trend is almost identical to those for the other two MRT intervals, with the best fit being only slightly higher at +0.0004 °C/century. In all three cases temperatures before 1850 were about as high as those after 2000, and in all three cases the mean temperature trend exhibited a large jump in temperature in 1988, as is shown clearly in the 5-year moving average in Fig. 49.3.


Fig. 49.5: The temperature trend for Germany since 1750 according to Berkeley Earth.


Irrespective of which interval is used to determine the MRTs, the resulting temperature trend I have constructed and published in Fig. 49.3 differs significantly from that published by Berkeley Earth which is shown in Fig. 49.5 above. The difference, as I have noted before, is due to homogenization and breakpoint adjustments used by Berkeley Earth to create their adjusted anomalies for each station. Averaging their adjusted anomalies yields the trend shown below in Fig. 49.6, which is virtually identical to the one shown above in Fig. 49.5. This demonstrates that it is not a difference in averaging method that is responsible for the difference between my results in Fig. 49.3 and the Berkeley Earth result. So it must be a difference in the anomaly data itself that is responsible. This can only be due to the adjustments made by Berkeley Earth.


Fig. 49.6: Temperature trends for all long and medium stations in Germany since 1750 derived by aggregating and averaging the Berkeley Earth adjusted data. The best fit linear trend line (in red) is for the period 1801-1980 and has a gradient of +0.29 ± 0.03 °C/century.


The actual temperature difference between the data in Fig. 49.6 and that in Fig. 49.3 is shown below in Fig. 49.7 (blue curve) as the total adjustment made to the data by Berkeley Earth. The data in Fig. 49.7 highlights two points of note. Firstly, the Berkeley Earth adjustments are not neutral: they add about 0.3 °C to the warming after 1840. Secondly, the adjustments flatten the curve before 1840 and so remove the warm period that mirrors the one seen after 1988. In so doing, these adjustments radically change the nature of the temperature trend from an oscillatory one in Fig. 49.3 to the infamous hockey stick shape in Fig. 49.6 that is now synonymous with anthropogenic global warming (AGW).


Fig. 49.7: The contribution of Berkeley Earth (BE) adjustments to the anomaly data in Fig. 49.6 after smoothing with a 12-month moving average. The blue curve represents the total BE adjustments including those from homogenization. The linear best fit (red line) to these adjustments for the period 1841-2010 has a gradient of +0.173 ± 0.003 °C per century. The orange curve shows the contribution from breakpoint adjustments.


Conclusions

The results I have presented here clearly show that the real temperature trend for Germany over the last 300 years differs significantly from the conventional view of global warming. These differences can be summarized as follows.

1) Temperatures before 1840 were comparable to those of today (see Fig. 49.3).

2) The overall temperature trend since 1800 is broadly flat (see the best fit line in Fig. 49.3). 

3) At least 0.6 °C of any temperature rise since 1700 should be due to direct anthropogenic surface heating (DASH) or waste heat from human activity, and not from greenhouse gas emissions.

4) There is a large and seemingly unnatural temperature rise of 0.97 °C in 1988 that occurs at the very moment the IPCC is being formed (see the 5-year mean in Fig. 49.3).

5) Berkeley Earth adjustments have added 0.3 °C of warming to the temperature trend since 1840 and erased most of the warm temperatures before 1840 (see Fig. 49.7).

6) Of the 1.5 °C of warming since 1750 claimed by Berkeley Earth (see Fig. 49.6), 0.6 °C could be due to DASH (see point 3 above) and 0.3 °C is due to adjustments made to the temperature data by Berkeley Earth (see point 5 above).


Friday, February 12, 2021

48. Denmark - temperature trends

In total, Denmark has twenty-two sets of temperature data that exceed 480 months in length (see here). Of these, eight contain over 1200 months of data (long stations), with the longest being Copenhagen (Berkeley Earth ID: 154574), which has continuous data from 1798 and some data fragments that go as far back as 1768. This suggests that the country has a similar number of station temperature records to New Zealand (see Post 8), but surprisingly it is fewer than are found for the Danish autonomous territory of Greenland, which has a population of less than 60,000 and which I will look at in detail at some point in the future.


Fig. 48.1: The locations of long stations (large squares) and medium stations (small diamonds) in Denmark. Those stations with a high warming trend are marked in red.


The distribution of these weather stations in Denmark is indicated in Fig. 48.1 above. It shows a fairly even spread that covers most of the country. It also shows that most of the station data exhibits some significant degree of warming, with only two stations exhibiting a cooling trend (defined as being a trend that is less than twice the uncertainty in the trend).

The data from Denmark is interesting in one other respect in that, of its fourteen medium length station temperature records (i.e. those with over 480 months of data but less than 1200), four have no data after 1970 but do have data going back to the 19th century, while six have no data before 1970. This means that these two groups of stations require different time periods for the calculation of the reference temperatures needed to find their monthly temperature anomalies. For an explanation of the rationale and process used to determine the temperature anomalies via the calculation of monthly reference temperatures (MRTs), please refer to my previous post.

 

Fig. 48.2: The maximum amount of temperature data available from Denmark each month for inclusion in the mean temperature trend.


This problem is illustrated in Fig. 48.2 above. The two peaks in the frequency distribution indicate the two different possibilities for the MRT period. As I pointed out in Post 47, ideally the MRT period needs to be about 30 years in length with at least 40% data coverage. One way to circumvent this problem is to calculate the temperature trend for two MRT time intervals (the data in Fig. 48.2 suggests that 1891-1920 and 1971-2000 should be optimal), compare the results, and if necessary take a weighted average. 

 

Fig. 48.3: The temperature trend for Denmark since 1750. The best fit is applied to the interval 1851-2000 and has a positive gradient of +1.18 ± 0.09 °C per century. The monthly temperature changes are defined relative to the 1971-2000 monthly averages.


If we choose 1971-2000 as our reference period for the MRTs, then the overall temperature trend is as shown in Fig. 48.3 above. The number of stations included each month in this overall trend is shown in Fig. 48.4 below. Overall, up to seventeen stations are included, but before 1970 that drops to fewer than ten, with only one station having data before 1870. The result is that there appears to be a fairly continuous warming trend from 1851 to 2000, as indicated by the data in Fig. 48.3.


Fig. 48.4: The number of sets of station data included each month in the temperature trend for Denmark when the MRTs are calculated for the period 1971-2000.


Now consider what happens if we choose 1891-1920 as our reference period for the MRTs. The result is that there are more stations included before 1970, but fewer after (see Fig. 48.5 below). This also changes the form of the temperature trend in Fig. 48.6.


Fig. 48.5: The number of sets of station data included each month in the temperature trend for Denmark when the MRTs are calculated for the period 1891-1920.


What we now see in Fig. 48.6 is a much smaller warming trend before 1980 (less than 0.6 °C, with possibly higher temperatures before 1800), but a more pronounced jump in temperatures after 1988. This is similar to the trend seen for South Africa (see Post 37) and also for Europe as a whole (see Post 44). It is important to note, though, that all the data before 1860 in both Fig. 48.6 and Fig. 48.3 comes from just one station record: Copenhagen (Berkeley Earth ID: 154574). This means the accuracy and reliability of this data cannot be truly ascertained.


Fig. 48.6: The temperature trend for Denmark since 1750. The best fit is applied to the interval 1768-1980 and has a positive gradient of +0.30 ± 0.05 °C per century. The monthly temperature changes are defined relative to the 1891-1920 monthly averages.


The analysis outlined above means that we have two possible results for the temperature trend in Denmark. Both are fairly similar, and for once both are in general agreement with the trend published by Berkeley Earth (see Fig. 48.7 below). But can we combine them into a single result?


Fig. 48.7: The temperature trend for Denmark since 1750 according to Berkeley Earth.


The answer is yes. If we take the weighted average of the two trends in Fig. 48.3 and Fig. 48.6 we get the result shown in Fig. 48.8 below. The relative weightings of each month's data are determined by the number of stations included in the average for that month, as indicated in Fig. 48.4 and Fig. 48.5 respectively. There is, though, one other factor we need to take into account: the different MRT intervals for the two original trends. Without a correction term this will distort the final data.

In order to allow for the differing MRTs, the trend curve in Fig. 48.3 needs to be first adjusted upwards so that the mean temperature anomaly for the period 1891-1920 is zero in order to be consistent with the data in Fig. 48.6. This requires an upward adjustment of 0.634 °C. Only after this adjustment has been made can the weighted average be determined.
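The two-step procedure described above (re-baseline one series, then take the station-weighted average) can be sketched as follows. The function names and toy numbers are my own illustration of the method, not the code used in the actual analysis:

```python
def rebaseline(anomalies, years, ref_start=1891, ref_end=1920):
    """Shift an anomaly series so that its mean over the reference window
    (here 1891-1920) is zero, making it consistent with the other series."""
    ref = [a for a, y in zip(anomalies, years) if ref_start <= y <= ref_end]
    offset = sum(ref) / len(ref)
    return [a - offset for a in anomalies]

def weighted_merge(anom_a, n_a, anom_b, n_b):
    """Combine two anomaly series point by point, weighting each by the
    number of stations (n_a, n_b) contributing at that time."""
    return [(a * na + b * nb) / (na + nb)
            for a, na, b, nb in zip(anom_a, n_a, anom_b, n_b)]

# Toy example: shift a series to the 1891-1920 baseline, then merge.
shifted = rebaseline([1.0, 2.0, 3.0], [1891, 1900, 1950])
print(shifted)                                   # -> [-0.5, 0.5, 1.5]
print(weighted_merge([1.0], [3], [2.0], [1]))    # -> [1.25]
```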


Fig. 48.8: The weighted temperature trend for Denmark since 1750. The best fit is applied to the interval 1851-2000 and has a positive gradient of +1.02 ± 0.09 °C per century. The monthly temperature changes are defined relative to the 1891-1920 monthly averages.


Conclusions

The data from Denmark appears to show a warming trend of about 1.5 °C since 1850. This is by far the largest warming seen in any of the regional records that I have investigated so far.

This temperature rise cannot be explained entirely by direct anthropogenic surface heating (DASH) or waste heat. The best estimate of the expected magnitude of DASH for Denmark (based on its population density) is about 0.35 °C since 1850. However, this could be greater if the source of the heating from human industrial activity is concentrated around the locations of the major weather stations. For example, a city like Greater London, with an area of 1569 km², consumes over 132 TWh of energy each year. This equates to a power density of 9.6 W/m², or an effective temperature rise of over 4 °C.
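The Greater London arithmetic quoted above can be checked directly. Note that converting the resulting power density into an effective temperature rise requires the waste-heat sensitivity factor used in Post 14, which I have not reproduced here:

```python
# Waste heat power density for Greater London, using the figures in the text.
energy_twh_per_year = 132     # annual energy consumption (TWh)
area_km2 = 1569               # area of Greater London (km^2)

hours_per_year = 8760
mean_power_w = energy_twh_per_year * 1e12 / hours_per_year   # TWh/yr -> watts
power_density = mean_power_w / (area_km2 * 1e6)              # W per m^2

print(round(power_density, 1))   # -> 9.6
```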

In 1988 there is evidence of a sudden dramatic upward shift in temperature by over 1 °C. This was seen in the Europe data in Post 44 as well. The reason for this is still unclear (at least to me). In Post 45 I speculated that it could be the result of improved air quality in Europe due to EU legislation. Alternatively, it could be the consequence of a change in measurement method, such as a change from liquid-in-glass thermometers to electronic systems which occurred around that time. What is strange is the timing and suddenness of this increase.


Monday, February 1, 2021

47. Calculating the monthly anomalies using MRTs

Over the next few posts I intend to return to my investigation of the temperature records in Europe by extending my analysis to other countries of the EU besides those of Belgium and the Netherlands that were studied in Post 40 and Post 41 respectively. Central to all these studies is the concept of the temperature anomaly. In many of my previous posts (including Post 4) I have given explanations of how these have been calculated. However, because the data for each country or region tends to have different temporal distributions of data, this has sometimes necessitated using slightly different methods for determining the anomalies of some countries compared to others. So, as this is the start of a new year, I thought this would be a good time to outline exactly how the anomalies are derived, and what drives the decisions to change the methodology from time to time.

The process I use is broadly similar to that used by the main climate science groups (i.e. NOAA, NASA-GISS, Hadley-CRU and Berkeley Earth). However, there are a number of differences, and given how important the analysis process is in terms of its impact on the resulting temperature trend, it therefore seems important that I describe explicitly what I do, and why I do it, not least so that it can be easily referenced in the future.

When studying climate change it is essential that we are able to compare temperature data from different epochs and different locations. There are essentially two problems here. The first is that not all temperature records are of the same length (in terms of time). The second is that the temperatures from different regions can have massively different mean values and ranges of temperature fluctuation.  

For example, Europe has over 100 temperature records that predate 1850 and have over 1800 months of temperature data. The whole of the Southern Hemisphere has two (Rio de Janeiro and Hobart). 

When it comes to temperature ranges and mean values, there are similar extremes. Station records from near the equator (such as Manaus) can have very high mean temperatures of +27 °C with the monthly mean only varying by about ±1 °C across the seasons (such as they are). In contrast, at the South Pole the mean temperature is about -48 °C and the variation of the monthly mean over the year can be ±15 °C or more.

If all temperature records were of the same length, and if all regions of the planet had the same number of stations, these differences would not matter. But because records have different lengths and are often clustered in certain regions, they do. To see how, consider the following.

i) The mathematical basis of the temperature anomaly

In an ideal world we could calculate the change in temperature just by averaging all the temperatures from all the different stations for a given month. If this average was different from month to month then that would be evidence of climate change. We can represent this mathematically as follows.

If Ti(m) is the mean temperature of station i for month m, and Mi is the mean temperature of station i over all time, then

 Ti(m) = Mi + εi(m)

(47.1)

where εi(m) is the variation of the monthly mean temperature for station i for each different month m (see Post 5 for more explanation of temperature anomalies, weather and climate). The index m has a different integer value for each month of data in the temperature record for that station. So, if the temperature record has 1200 months of data, m will take values from 1 to 1200. 

The term εi(m) in Eq. 47.1 is the temperature anomaly. It is the amount by which the temperature in station record i varies from a reference value, usually taken to be the long term mean temperature, Mi. Now consider what happens when we sum the temperatures for month m from all i station records.

 Σi Ti(m) = Σi Mi + Σi εi(m)

(47.2)

If we now calculate the average of each term we get the result

 <T(m)> = <M> + <ε(m)>

(47.3)

where <T(m)> is the mean of Ti(m) averaged over all i stations, and repeated for each month m. Similarly, <ε(m)> is the mean of εi(m) averaged over all i stations for each month m, and <M> is the mean of Mi averaged over all i stations for each month m.

In an ideal world where all temperature records have valid data in each given month m, the term <M> is just a constant that does not vary with the month m. It then follows that the change in the mean temperature <T(m)> from month to month will be the same as the change in the anomaly <ε(m)> from month to month. So, in an ideal world (where all temperature records are of the same length), averaging all station temperature records should give us the climate change over time.

But we don't live in an ideal world and all temperature records are not of the same length. This means that <M> will not be the same for every month, but will generally be different for different months, depending on how many of the temperature records have data for that month and how many don't. So <M> will vary from month to month. That means it will contribute to (and possibly dominate) the temperature trend over time, and consequently this means that we can't use the average of all temperatures <T(m)> to determine the temperature change over time. Instead we have to use the average of the anomalies for each month <ε(m)>. That in turn means we need a reliable way of calculating the anomalies.
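The effect described above can be seen with a toy example (the numbers are hypothetical, chosen only to make the point visible): one warm station with a full record and one cold station that only starts reporting halfway through. Averaging raw temperatures produces a spurious step; averaging anomalies does not:

```python
# Toy illustration: two stations with different mean temperatures, one of
# which only reports in later months. Neither station's climate changes.
warm = [10.0, 10.0, 10.0, 10.0]        # station with mean 10 degC, full record
cold = [None, None, 2.0, 2.0]          # station with mean 2 degC, late start

def mean_present(values):
    """Average only the values that are actually present."""
    present = [v for v in values if v is not None]
    return sum(present) / len(present)

# Raw average <T(m)>: jumps from 10 to 6 when the cold station appears,
# purely because <M> changes when the station mix changes.
raw = [mean_present([w, c]) for w, c in zip(warm, cold)]
print(raw)     # -> [10.0, 10.0, 6.0, 6.0]

# Anomaly average <e(m)>: subtract each station's own mean first,
# and the spurious step disappears.
anom_warm = [w - 10.0 for w in warm]
anom_cold = [None if c is None else c - 2.0 for c in cold]
anoms = [mean_present([a, b]) for a, b in zip(anom_warm, anom_cold)]
print(anoms)   # -> [0.0, 0.0, 0.0, 0.0]
```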

ii) Defining the temperature anomaly

The anomalies are the amount by which the temperatures in a given temperature record deviate from a reference value. That reference value is usually taken to be a mean of the monthly temperature data over a particular time interval. Ideally that time interval should be as long as possible so that it is as accurate as possible. However, there are two problems here. 

The first is that, if the temperature records have a strong trend, either upwards or downwards, and you use different time periods to calculate the reference mean temperature of the different records, this will distort the mean temperature trend, particularly if the different temperature records have differing amounts of data. So you really need to use the same time period for all temperature records when calculating their reference temperatures. That means that you generally can't use extremely long time periods that use all the data from that record, but instead must use shorter time periods (usually 20 or 30 years) for which as many station records as possible have sufficient data. And if a particular temperature record has no data or insufficient data in that time period (say for the period 1961-1990) when many other records do, then that record will have to be excluded from the calculation of the mean temperature trend. If a significant number of long datasets are excluded in this way, then there are ways of repeating the calculation of the mean reference temperature using other time periods in order to include them, but that is a subsidiary problem. Essentially you will end up with multiple mean temperature trends which could then be merged into a single trend through a weighted averaging process based on the number of station records incorporated into each one.

The second problem is that as you move away from the equator, the seasonal variation of the mean temperature increases. As I pointed out above, near the equator typical variations of the mean temperature each month can vary by as little as ±1 °C. However, at latitudes of 50°N (i.e. in Europe or North America) typical variations of the mean temperature each month can exceed ±10 °C. This means that it is more accurate to calculate a mean temperature for each month (so twelve in total), rather than just calculating one overall. These mean temperatures I have referred to in previous posts as the monthly reference temperatures or MRTs.

iii) Choosing a time-frame for the MRTs

In order to determine the temperature trend for a country or region you need to average the temperature anomalies from all of its temperature records. So you need to first calculate the anomalies for each station record. In order to calculate the temperature anomalies you need to first calculate the monthly reference temperatures or MRTs for all twelve months of the year for that record. However, in order to do this with the minimum amount of statistical error you need to consider a number of factors that may influence how you choose your reference period for the MRTs.

The first thing you need to ensure is that the MRTs are calculated over the same time-frame for each temperature record. The reason for this is the same as was expounded in (ii) above. If the temperature records are of differing lengths, then using different MRTs for each one will have the same effect as using different values of <M> for each month in Eq. 47.3. It will distort the mean temperature trend.

Next, you want the time-frame you use to calculate the MRTs to be as long as possible so that it is as accurate as possible. Unfortunately, because the majority of temperature records tend to be fairly recent with less than 40 years of data, this means that you will lose accuracy in your mean temperature trend due to insufficient temperature records qualifying for the averaging process.

So in order to incorporate as much of the available temperature data as possible into the final mean temperature trend you need to reduce the time-frame, but not reduce it so much that the MRTs are no longer sufficiently accurate. This is basically a compromise between maximizing the time-frame you use for the MRT calculations, and maximizing the amount of data that can then be used in the mean temperature trend calculation. The most effective way to do this is to create a frequency histogram for each month m, where the value for each month is the sum of all the record lengths for station data that have valid data for that month m. So if temperature record i has Li months of data, and δi(m) is a binary function for station i that takes the value 1 if month m in record i has valid data and 0 if it does not, then the monthly data frequency function f(m) will be

 f(m) = Σi Li δi(m)

(47.4)

An example of such a frequency function is shown below in Fig. 47.1 for data from the Netherlands. The peak in the distribution indicates where a time-frame chosen to determine the MRTs is likely to generate the maximum amount of anomaly temperature data for inclusion in the mean temperature trend.
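Eq. 47.4 can be sketched as follows, assuming each record is held as a mapping from month index to temperature (the data representation is my own illustration, not the format used in the actual analysis):

```python
def monthly_data_frequency(records):
    """Compute f(m) = sum_i L_i * delta_i(m) from Eq. 47.4. Each record is
    a dict mapping month index -> temperature, with missing months simply
    absent. L_i is the record length; delta_i(m) is 1 where data exists."""
    freq = {}
    for record in records:
        length = len(record)              # L_i, the record's total months
        for m in record:                  # months where delta_i(m) = 1
            freq[m] = freq.get(m, 0) + length
    return freq

# Two toy records: one with 3 months of data, one with 2.
records = [{1: 5.0, 2: 6.0, 3: 5.5}, {2: 4.0, 3: 4.5}]
print(monthly_data_frequency(records))    # -> {1: 3, 2: 5, 3: 5}
```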



 Fig. 47.1: The amount of temperature data available from the Netherlands when each month is included in the MRT.

 

The data in Fig. 47.1 above suggests that the optimal 30-year time-frame that allows the most data to be included in the final trend is likely to be around 1975-2010. In fact the time-frame 1976-2005 was eventually chosen (see Post 41). However, there are two other considerations that need to be taken into account before finally settling on a time-frame. 

Ideally you want the overall temperature trend for your time-frame to be close to zero. This is to improve accuracy in the MRT calculations. Unfortunately that is not always possible. In fact it rarely is. This is because the majority of data in most temperature records tends to be fairly recent, as illustrated in Fig. 47.1, but recent data also tends to exhibit the greatest warming trend.

The final problem is the issue of missing or incomplete data. Even some of the best temperature records will have several months of missing data within your chosen MRT time-frame. The way to address this is to set minimum thresholds for the number of months of data that need to be present within the MRT time-frame in order for data from that station to be included in the mean temperature trend.

Based on the above conditions I have generally used the following criteria to determine the MRTs.

1) Select a time-frame of 30 years where there is the most data available. Failing that, use a 20-year time-frame.

2) For each of the 12 monthly MRTs, only calculate the MRT if there is at least 40% of the data available within the time-frame (i.e. 12 out of 30 years). For a 20-year time-frame, increase this to 60% (i.e. 12 out of 20 years).

3) If the MRT cannot be determined for any of the 12 months in a given record, then all data for that month of the year from that record is excluded for all years.
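These criteria can be sketched for a single calendar month as follows. This is an illustrative reading of rules 2 and 3 above, not the production code; note that both coverage thresholds reduce to the same minimum of 12 valid years:

```python
def mrt_or_none(yearly_values):
    """MRT for one calendar month over the chosen time-frame.
    yearly_values holds one entry per year of the frame (None if missing).
    Rule 2 requires 12 valid years, whether the frame is 30 years (40%)
    or 20 years (60%); otherwise rule 3 excludes the month entirely."""
    present = [v for v in yearly_values if v is not None]
    if len(present) < 12:
        return None              # month excluded from the anomaly calculation
    return sum(present) / len(present)

print(mrt_or_none([5.0] * 12 + [None] * 18))   # -> 5.0 (12 of 30 years: ok)
print(mrt_or_none([5.0] * 11 + [None] * 19))   # -> None (only 11 valid years)
```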

iv) Calculating the MRTs and the anomalies

The MRT for each month of a given record is calculated by first determining the time-frame for its calculation as set out above in (iii), and then averaging all the available temperature readings for that month within the time-frame. 

To see how this works in practice, consider the case of the temperature record from Volken (Berkeley Earth ID: 92832) in the Netherlands. We have already seen in Fig. 47.1 above that the optimal time-frame for calculating the MRTs in the Netherlands is around 1976-2005. This will be the same for all station records in the region. The mean monthly temperatures for Volken are shown in Fig. 47.2 below with data in the MRT reference period (1976-2005) shown in yellow.

 

Fig. 47.2: The raw monthly temperature data for Volken with the data in the MRT time-frame highlighted in yellow.

 

We next find the mean temperature values for each of the twelve months January-December using the yellow data in Fig. 47.2 above. These mean temperatures are listed in Table 47.1 below. These are the MRT values described above.

 

Table 47.1

Month        Mean Temperature or MRT (°C)
January       2.4818
February      2.9020
March         6.0305
April         8.6293
May          12.9136
June         15.5573
July         17.5369
August       17.4261
September    14.3748
October      10.6354
November      6.0696
December      3.4268



The MRT values are then subtracted from the raw data in Fig. 47.2 to yield the anomalies for each month. This is shown in Fig. 47.3 below, where the MRTs are plotted repeatedly in green and the anomalies are in red. It therefore follows that adding the red curve in Fig. 47.3 to the green curve in Fig. 47.3 will recreate the original data in Fig. 47.2.
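The subtraction step can be sketched as follows, using two of the MRT values from Table 47.1 (the data layout is my own illustration, not the format used in the actual analysis):

```python
def anomalies_from_mrts(raw, mrts):
    """Subtract the MRT for each calendar month from the raw series.
    raw is a list of (year, month, temperature); mrts maps month (1-12)
    to its monthly reference temperature. Months without an MRT are
    excluded, per rule 3 of the MRT criteria."""
    return [(y, m, round(t - mrts[m], 4)) for (y, m, t) in raw if m in mrts]

mrts = {1: 2.4818, 7: 17.5369}            # January and July, from Table 47.1
raw = [(2006, 1, 1.0), (2006, 7, 19.0)]
print(anomalies_from_mrts(raw, mrts))
# -> [(2006, 1, -1.4818), (2006, 7, 1.4631)]
```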

 

Fig. 47.3: The MRT values and the temperature anomalies for Volken.


In the next few posts I will look at the temperature data for a number of different countries in Europe. In each case the temperature anomalies for each station will be calculated using the method outlined here. The most significant differences in the method used from country to country will be in the choice of MRT time-frame (this will normally be 1961-1990 but may be later or earlier) and the length of the time-frame (normally 30 years, but sometimes 20 years will be required due to a lack of available data). In each case this will be indicated, as will the reasons for the choices.


Thursday, January 21, 2021

46. The problem with electric vehicles

 


If there is one thing that is synonymous with carbon-free green energy it is probably the electric vehicle (EV). But if there is one thing that highlights the gap between idealism and the reality of green energy it is probably the electric vehicle. That is not to say electric vehicles are bad, or totally impractical. But they do have their limitations, and more importantly, they probably always will. The problem is, much of the media and the green movement have so far failed to acknowledge this, and probably never will.

Traditionally, electric cars have suffered from two major drawbacks: cost and vehicle range. However, over the last ten years we have seen significant improvements in both, to the point where, for many people, electric vehicles are now both affordable and practical. This has recently prompted the UK government to announce its plan to ban the sale of new petrol and diesel cars by 2030, and to effectively force people to buy electric vehicles instead. This is part of its plan for a green industrial revolution; a policy that is intended to boost growth and save the planet. Whether those two aims are mutually compatible has yet to be demonstrated, but the plan itself is not wholly without merit. For one thing, it would certainly help to improve air quality in our major cities and thus help prevent many unnecessary and premature deaths, such as that of Ella Kissi-Debrah which hit the national news headlines recently.

Unfortunately, there is a major problem with this policy: the recharging time for electric batteries, and therefore for EVs as well. Except that even here there now appears to be some really good news. This week it has been reported that new battery technologies are being developed that can be recharged in under five minutes. To the casual observer this may look like another triumph of technology, but unfortunately it is not quite that simple. This is because there are some things in physics that cannot be circumvented, like the law of conservation of energy.

Electric vehicles use electrical energy. Batteries store that energy, and when it is used up it has to be replaced. And the faster you try to replace it, the more electrical power you need to do so. The root of the problem is the vast amount of energy that needs to be replaced: replacing it in a very short time (like five minutes) requires a very high rate of energy transfer, i.e. a very high-power source. And it is the consequences of using very high power to recharge EVs that are the problem, particularly with regard to customer safety. The only way to eliminate this problem is to reduce the amount of energy EVs use, but as I will explain, that is virtually impossible to do.

i) Energy consumption

In a petrol or diesel car the energy is stored as chemical energy in the fuel. This is highly concentrated (about 13 kWh/kg) and easy to replenish. Combustion releases this energy and allows the car to do work against external forces such as air resistance and gravity (if travelling uphill), first to accelerate to its cruising speed and then to maintain that speed against air resistance and friction. To do this effectively the engines in most family cars need to be able to generate over 120 brake horsepower (bhp), the equivalent of about 90 kW. Even when cruising at a steady 70 mph they typically require between 20 kW and 50 kW to maintain that speed, because of the power needed to overcome air resistance, friction and gravity.

For the case of air resistance, the power required to overcome it increases as the cube of the vehicle speed, v, while also being proportional to the density of air (ρ = 1.3 kg/m³), the cross-sectional area of the vehicle (A ~ 2 m²) and the drag coefficient (Cd ~ 0.4). So if v = 31.3 m/s (i.e. 70 mph), the power needed just to overcome air resistance is about 15 kW ( = ½ρCdAv³). This means every mile of travel at 70 mph requires about 0.2 kWh of energy just for air resistance (in one hour the car will travel 70 miles and use 15 kWh of energy). The only ways this can be reduced are by reducing the speed, the size of the vehicle (A), or its drag coefficient. The first would increase journey times, while the last two are more or less fixed and already optimized in the car's design (unless you want to drive around in a torpedo).

The second source of energy loss is friction with the road. The power required to overcome it is proportional to the car's mass (m ~ 2000 kg) and speed (v), the acceleration due to gravity (g = 9.81 m/s²), and the rolling resistance coefficient of the car tyres (Crr ~ 0.01). This adds about 6 kW to the required power at 70 mph.

Then there is the energy needed to overcome gravity when travelling uphill. Even a modest incline with a gradient of only 5% would require a power of 30 kW to overcome the effects of gravity when travelling at 70 mph. The net result is that most engines operate at between 20 kW and 50 kW when travelling at 70 mph. If we take the midpoint of these two values, this amounts to 0.5 kWh of energy per mile (i.e. in one hour the car would travel 70 miles and use 35 kWh of energy). The key point here is that none of the numbers listed above can be significantly improved upon. They are all set by the physical properties of the world we live in, such as gravity, air resistance, friction, and the size of a typical human.
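The three power terms above can be checked with simple arithmetic. This is only a back-of-envelope sketch using the approximate parameter values quoted in the text:

```python
# Back-of-envelope check of the three power terms quoted above,
# using the approximate values from the text.
rho = 1.3      # air density, kg/m^3
Cd = 0.4       # drag coefficient
A = 2.0        # frontal area, m^2
m = 2000.0     # vehicle mass, kg
g = 9.81       # gravity, m/s^2
Crr = 0.01     # rolling resistance coefficient
v = 31.3       # 70 mph in m/s
grade = 0.05   # 5% incline

P_drag = 0.5 * rho * Cd * A * v ** 3   # air resistance
P_rolling = Crr * m * g * v            # tyre friction
P_climb = m * g * v * grade            # gravity on a 5% gradient

for name, P in [("drag", P_drag), ("rolling", P_rolling), ("climb", P_climb)]:
    print(f"{name}: {P / 1000:.1f} kW")
# drag ~16 kW, rolling ~6 kW, climb ~31 kW
```

These reproduce the roughly 15 kW, 6 kW and 30 kW figures above, and their sum sits at the upper end of the 20-50 kW operating range quoted for 70 mph driving.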

ii) Recharging power

Now suppose you want the range of your electric vehicle to exceed 300 miles. This will require an energy storage capacity for your battery of 150 kWh. Currently most EVs have a capacity of less than half this (the Nissan Leaf is 40-62 kWh at 350 V).  

In the UK the standard mains voltage is 230 V, and the maximum current of most domestic circuits is 13 amps. That equates to a charging power of about 3 kW. So it would take 50 hours (or about two days) to fully recharge your EV.

You could of course use higher voltages (e.g. a 3-phase supply of 400 V) and currents of up to 30 A. The recharging power is now 12 kW, and the time required to recharge your EV battery is only 12.5 hours. But this is still about 2.5 hours of charging for every one hour of driving at 70 mph.
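The recharge times above follow from dividing the battery capacity by the supply power. Here the charging power is taken, as in the text, as the simple product of voltage and current, and charging losses are ignored:

```python
# Recharge times for the 150 kWh battery discussed above, taking the
# charging power as the simple V x I product used in the text and
# ignoring charging losses.
capacity_kwh = 150.0

supplies_kw = {
    "UK domestic (230 V x 13 A)": 230 * 13 / 1000,  # ~3 kW
    "3-phase (400 V x 30 A)":     400 * 30 / 1000,  # 12 kW
}
for name, p_kw in supplies_kw.items():
    print(f"{name}: {capacity_kwh / p_kw:.1f} hours")
```

This yields the roughly 50 hours and 12.5 hours quoted above.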

So what about using this new battery technology that can recharge in five minutes? Well if you want to recharge a 150 kWh battery in five minutes you would need a 1.8 MW power supply. That is almost the equivalent of the output of a small power station. And then you need to consider the currents and voltages that would be required.

A 350 V supply would require a current of over 5000 A to provide an energy transfer rate of 1.8 MW. Now assuming the power cable used to carry this current from the generator to the car was about five metres long and had a cross-sectional area of about 5 cm² (which is a fairly chunky cable), the power dissipation in the cable would exceed 4 kW. That would require some pretty heavy-duty insulation and cooling.
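The cable dissipation figure can be verified with a rough calculation, assuming a copper cable (standard resistivity value) and the dimensions quoted above:

```python
# Ohmic loss in a hypothetical 5 m copper charging cable at the
# fast-charge power discussed above.
rho_cu = 1.7e-8        # resistivity of copper, ohm.m
length = 5.0           # cable length, m
area = 5e-4            # cross-section: 5 cm^2 in m^2

P_charge = 1.8e6       # 1.8 MW charging power
V = 350.0              # supply voltage
I = P_charge / V       # ~5140 A

R = rho_cu * length / area   # cable resistance, ~0.17 milliohm
P_loss = I ** 2 * R          # power dissipated in the cable

print(f"current: {I:.0f} A, cable loss: {P_loss / 1000:.1f} kW")
```

The loss comes out at about 4.5 kW, consistent with the "exceed 4 kW" figure above. Note that because the loss scales as I², halving the current (by doubling the voltage) would cut it by a factor of four, which is the motivation for the high-voltage option discussed next.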

Alternatively, you could reduce the current to a few hundred amperes and operate at voltages of over 10 kV, but I doubt the HSE would look that favourably on such an outcome. However, irrespective of which current-voltage option is chosen, 1.8 MW high power charging points would place an enormous strain on the national grid.

Finally, consider this. There are currently about 40 million cars in the UK and the average driver drives 10,000 miles per year. That is 400 billion miles in total. If this is to be achieved using only electric vehicles it will require an extra 200 billion kWh of electrical energy to be generated, or 23 GW of generating capacity. That represents an increase of about 30% in the current UK generating capacity, or the equivalent of over twenty new power stations, and a 60% increase in total electricity usage. And all this is to be achieved in ten years.
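The fleet-wide arithmetic in the paragraph above is: 40 million cars at 10,000 miles per year, at the roughly 0.5 kWh per mile derived in section (i), spread over the 8760 hours in a year:

```python
# The fleet-wide arithmetic from the paragraph above: 40 million cars
# at 10,000 miles per year, using the ~0.5 kWh per mile from section (i).
cars = 40e6
miles_per_car = 10_000
kwh_per_mile = 0.5

total_miles = cars * miles_per_car           # 400 billion miles per year
energy_kwh = total_miles * kwh_per_mile      # 200 billion kWh per year
mean_power_gw = energy_kwh / 8760 / 1e6      # continuous generating capacity, GW

print(f"{mean_power_gw:.1f} GW")             # ~23 GW, as quoted above
```

Note this is the average continuous power; if charging clusters at particular times of day, the peak demand on the grid would be considerably higher.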

Summary

For those who only use their cars for short journeys, recharging times are not a significant issue. This is because the amount of energy such journeys need is small, and so the recharging time is much less than the time the vehicle remains idle. The problem only becomes acute where the journeys are long, undertaken at high speed (i.e. over 60 mph) and repeated daily. This is not just a problem of the distance electric cars can travel on a single charge, although that can still be an issue. Nor is it a problem of a lack of available charging points, which also needs to be addressed. Even if both these problems are overcome, the underlying problem remains: the recharging time and the rate of energy transfer.

Those who don't understand the physics, or have perhaps failed to fully consider its implications, may believe that this is just another technical issue that technology will fix in time. It is not. The key point is this: if you want to increase the range of EVs, then you need to increase the energy storage capacity of their batteries. But that energy will need to be replaced on a regular basis, and the rate at which you can do this is not set by the battery; it is set by the safety regulations around the charging point.


Thursday, December 31, 2020

45. Review of the year 2020

I started this blog in May, in part to occupy my time during the Covid-19 lockdown. But I was also motivated by a growing dissatisfaction with the quality of data analysis I was witnessing in climate science, and in particular the lack of any objectivity in the way much of the data was being presented and reported. My concerns were twofold. 

The first was the drip-drip of selective alarmism with an overt confirmation bias that kept appearing in the media with no comparable reporting of events that contradicted that narrative. The worry here is that extreme events that are just part of the natural variation of the climate were being portrayed as the new normal, while events of the opposite extreme were being ignored. It appeared that balance was being sacrificed for publicity.

The second was the over-reliance of much climate analysis on complex statistical techniques of doubtful accuracy or veracity. To paraphrase Lord Rutherford: if you need complex statistics to see any trends in your data, then you would be better off using better data. Or to put it more simply: if you can't see a trend with simple regression analysis, then the odds are there is no trend to see.

The purpose of this blog has not been to repeat the methods of climate scientists, nor to improve on them. It has merely been to set a benchmark against which their claims can be measured and tested.

My first aim has been to go back to basics: to examine the original temperature data, look for trends in that data, and apply some basic error analysis to determine how significant those trends really are. I have then compared what I see in the original data with what climate scientists claim is happening. In most cases I have found that the temperature trends in the real data are significantly smaller than those reported by climate scientists. In other words, much of the reported temperature rise, particularly in Southern Hemisphere data, results from the manipulations climate scientists perform on the data. This implies that many of the reported temperature rises are an exaggeration.

In addition, I have tried to look at the physics and mathematics underpinning the data in order to test other possible hypotheses that could explain the observed temperature trends that I could detect. Below I have set out a summary of my conclusions so far.


1) The physics and mathematics

There are two alternative theories that I have considered as explanations of the temperature changes. The first is natural variation. The problem here is that in order to conclusively prove this to be the case you need temperature data that extends back in time for dozens of centuries, and we simply do not have that data. Climate scientists have tried to solve this by using proxy data from tree rings and sediments and other biological or geological sources, but in my opinion these are wholly inadequate as they are badly calibrated. The idea that you can measure the average annual temperature of an entire region to an accuracy of better than 0.1 °C simply by measuring the width of a few tree rings, when you have no idea of the degree of linearity of your proxy, or the influence of numerous external variables (e.g. rainfall, soil quality, disease, access to sunlight), is preposterous. But there is another way.

i) Fractals and self-similarity

If you can show that the fluctuations in temperature over different timescales follow a clear pattern, then you can extrapolate back in time. One such pattern is that resulting from fractal behaviour and self-similarity in the temperature record. By self-similarity I mean that every time you average the data you end up with a pattern of fluctuations that looks similar to the one you started with, but with amplitudes and periods that change according to a precise mathematical scaling function.

In Post 9 I applied this analysis to various sets of temperature data from New Zealand. I then repeated it for data from Australia and then again in Post 42 for data from De Bilt in the Netherlands. In virtually all these cases I found a consistent power law for the scaling parameter indicative of a fractal dimension of between 0.20 and 0.30, with most values clustered close to 0.25. The low magnitude of this scaling term suggests that the fluctuations in long term temperatures are much greater in amplitude than conventional statistical analysis would predict. 

For example, in the case of De Bilt it suggests that the standard deviation in the average 100-year temperature is more than 0.2 °C. This means that there is a 16% probability of the mean temperature for any century being more than 0.3 °C more (or less) than the mean temperature for the previous century, and therefore a one in six possibility of a 0.6 °C temperature rise in any given century. So a 0.6 °C temperature rise over a century could occur once every 600 years purely because of natural variations in temperature. It also suggests that temperature variations similar to those we have seen in the data over the last 50 or 100 years may have been repeated frequently in the not-so-distant past.
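To illustrate what this scaling exponent implies, the sketch below compares the standard deviation of an N-month average under uncorrelated (white) noise, where the exponent is 0.5, with the ~0.25 exponent found in Posts 9 and 42. The 1.5 °C monthly standard deviation is an assumed, illustrative value, not a fitted one:

```python
# How the standard deviation of an N-month average scales under two
# hypotheses: uncorrelated noise (exponent 0.5) versus the fractal
# scaling found in Posts 9 and 42 (exponent ~0.25).
sigma_month = 1.5   # std of monthly anomalies, degC (assumed, illustrative)

for years in (1, 10, 100):
    n = 12 * years
    white = sigma_month * n ** -0.5      # uncorrelated fluctuations
    fractal = sigma_month * n ** -0.25   # persistent, self-similar fluctuations
    print(f"{years:>3}-year mean: white {white:.3f} degC, fractal {fractal:.3f} degC")
```

With these assumptions the 100-year figure under fractal scaling comes out at about 0.25 °C, consistent with the "more than 0.2 °C" quoted above for De Bilt, whereas white noise would predict fluctuations roughly six times smaller.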

ii) Direct anthropogenic surface heating (DASH) and the urban heat island (UHI)

Another possible explanation for any observed rise in temperature is the heating of the environment that occurs due to human industrial activity. All energy use produces waste heat. Not only that, but all energy must end up as heat and entropy in the end. The Second Law of Thermodynamics tells us that. It is therefore inevitable that human activity must heat the local environment. The only question is by how much.

Most discussions in this area focus on what is known as the urban heat island (UHI). This is a phenomenon whereby urban areas either absorb extra solar radiation because of changes made to the surface albedo by urban development (e.g. concrete, tarmac, etc), or tall buildings trap the absorbed heat and reduce the circulation of warm air, thereby concentrating the heat. But there is another contribution that continually gets overlooked - direct anthropogenic surface heating (DASH). 

When humans generate and consume energy they liberate heat or thermal energy. This energy heats up the ground, and the air just above it, in much the same way that radiation from the Sun does. In so doing DASH adds to the heat that is re-emitted from the Earth's surface, and therefore increases the Earth's surface temperature at that location.

In Post 14 I showed that this heating can be significant - up to 1 °C in countries such as Belgium and the Netherlands with high levels of economic output and high population densities. In Post 29 I extended this idea to look at suburban energy usage and found a similar result. 

What this shows is that you don't need to invoke the Greenhouse Effect to find a plausible mechanism via which humans are heating the planet. Simple thermodynamics will suffice. Of course climate scientists dismiss this because they assume that this heat is dissipated uniformly across the Earth's surface - but it isn't. And just as significant is the fact that the majority of weather stations are in places where most people live, and therefore they also tend to be in regions where the direct anthropogenic surface heating (DASH) is most pronounced. So this direct heating effect is magnified in the temperature data.

iii) The data reliability

It is taken as read that the temperature data used to determine the magnitude of the observed global warming is accurate. But is it? Every measurement has an error. In the case of temperature data it appears that these errors are comparable in magnitude to many of the effects climate scientists are trying to measure.

In Post 43 I looked at pairs of stations in the Netherlands that were less than 1.6 km apart. One might expect most such pairs to exhibit identical datasets, but they don't. In virtually every case the fluctuations in the difference between their monthly average temperatures were about 0.2 °C. While this is consistent with the values one would expect from error analysis, it does highlight the limits to the accuracy of this data. It also raises questions about the validity of techniques such as breakpoint adjustment, given that these techniques depend on detecting relatively small differences in temperature between data from neighbouring stations.
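If the two stations' readings carry independent measurement errors of equal size, the error in their difference is larger by a factor of √2, so the observed 0.2 °C fluctuation can be inverted to estimate the implied per-station error. This is only a sketch of the error analysis referred to above:

```python
# If each station's monthly value carries an independent error sigma,
# the difference between two nearby stations fluctuates with
# sigma * sqrt(2). Working backwards from the ~0.2 degC observed:
import math

sigma_diff = 0.2                          # observed fluctuation in the difference, degC
sigma_single = sigma_diff / math.sqrt(2)  # implied error per station
print(f"implied per-station error: {sigma_single:.2f} degC")
```

An implied per-station error of about 0.14 °C per month is of the same order as the breakpoint shifts these adjustment techniques attempt to detect, which is the concern raised above.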

iv) Temperature correlations between stations

In Post 11 I looked at the product moment correlation coefficients (PMCC) between temperature data from different stations, and compared the correlation coefficients with the station separation. What became apparent was evidence for a strong negative linear relationship between the maximum correlation coefficient for temperature anomalies between pairs of stations and their separation. For station separations of less than 500 km, positive correlations of better than 0.9 were possible, but this dropped to a maximum correlation of about 0.7 for separations of 1000 km, and 0.3 at 2000 km.
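The correlation calculation itself is straightforward. The sketch below uses synthetic stand-in series (not real station data) to show a PMCC computed over months of common valid data, with the minimum-overlap requirement used in Fig. 45.1:

```python
# Sketch of the pairwise correlation used in Post 11: the PMCC between
# two anomaly series over their months of common valid data. The
# series here are synthetic stand-ins, not real station records.
import numpy as np

def pmcc(a, b, min_overlap=200):
    """Correlate two anomaly series over months where both have data (NaN = missing)."""
    valid = ~np.isnan(a) & ~np.isnan(b)
    if valid.sum() < min_overlap:
        return None                      # insufficient overlap, as in Fig. 45.1
    return float(np.corrcoef(a[valid], b[valid])[0, 1])

rng = np.random.default_rng(1)
shared = rng.normal(0, 1, 480)               # common regional signal, 40 years
near = shared + rng.normal(0, 0.3, 480)      # nearby station: mostly shared signal
far = 0.3 * shared + rng.normal(0, 1, 480)   # distant station: mostly local noise

print(f"near pair: {pmcc(shared, near):.2f}")  # high, like separations under 500 km
print(f"far pair:  {pmcc(shared, far):.2f}")   # low, like separations ~2000 km
```

The point of the synthetic example is that the correlation directly measures how much of the regional signal two stations share, which is why it decays with separation.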

There were also clear differences between the behaviour of the raw anomaly data and the Berkeley Earth adjusted data. The Berkeley Earth adjustments appear to reduce the scatter in the correlations for the 12-month averaged data, but do so at the expense of the quality of the monthly data. This suggests that these adjustments may be making the data less reliable, not more so. The improvement in the scatter of the Berkeley Earth 12-month averaged data is also curious. Is it because this data, rather than the monthly data, is used to determine the adjustments, or is there some other reason? And what of the scatter in the data? Can we use it to measure the quality and reliability of the original data? This clearly warrants further study.


Fig. 45.1: Correlations (PMCC) for the period 1971-2010 between temperature anomalies for all stations in New Zealand with a minimum overlap of 200 months. Three datasets were studied: a) the monthly anomalies; b) the 12-month average of the monthly anomalies; c) the 5-year average of the monthly anomalies. Also studied were the equivalent for the Berkeley Earth adjusted data.



2) The data

Over the last eight months I have analysed most of the temperature data in the Southern Hemisphere as well as all the data in Europe that predates 1850. The results are summarized below.

i) Antarctica

In Post 4 I showed that the temperature at the South Pole has been stable since the 1950s. There is no instrumental temperature data before 1956 and there are only two stations of note near the South Pole (Amundsen-Scott and Vostok). Both show stable or negative trends.

Then in Post 30 I looked at the temperature data from the periphery of the continent. This I divided into three geographical regions: the Atlantic coast, the Pacific coast and the Peninsula. The first two only have data from about 1950 onwards. In both cases the temperature data is also stable with no statistically significant trend either upwards or downwards. Only the Peninsula exhibited a strong and statistically significant upward trend of about 2 °C since 1945.


ii) New Zealand

Fig. 45.2: Average warming trend for long and medium stations in New Zealand. The best fit to the data has a gradient of +0.27 ± 0.04 °C per century.

In Posts 6-9 I looked at the temperature data from New Zealand. Although the country only has about 27 long or medium length temperature records, with only ten having data before 1880, there is sufficient data before 1930 to suggest temperatures in this period were almost comparable to those of today. The difference is less than 0.3 °C.


iii) Australia

Fig. 45.3: The temperature trend for Australia since 1853. The best fit is applied to the interval 1871-2010 and has a gradient of 0.24 ± 0.04 °C per century.

The temperature trend for Australia (see Post 26) is very similar to that of New Zealand. Most states and territories exhibited high temperatures in the latter part of the 19th century that then declined before increasing in the last quarter of the 20th century. The exceptions were Queensland (see Post 24) and Western Australia (see Post 22), but this was largely due to an absence of data before 1900. While there is much less temperature data for Australia before 1900 compared to the latter part of the 20th century, there is sufficient to indicate that, as in New Zealand, temperatures in the late 19th century were similar to those of the present day.


iv) Indonesia

Fig. 45.4: The temperature trend for Indonesia since 1840. The best fit is applied to the interval 1908-2002 and has a negative gradient of -0.03 ± 0.04 °C per century.

The temperature data for Indonesia is complicated by the lack of quality data before 1960 (see Post 31). The temperature trend after 1960 is the average of between 33 and 53 different datasets, but between 1910 and 1960 it generally comprises less than ten. Nevertheless, this is sufficient data to suggest that temperatures in the first half of the 20th century were greater than those in the latter half. This is despite the data from Jakarta Observatorium which exhibits an overall warming trend of nearly 3 °C from 1870 to 2010 (see Fig. 31.1 in Post 31).

It is also worth noting that the temperature data from Papua New Guinea (see Post 32) is similar to that for Indonesia for the period from 1940 onwards. Unfortunately Papua New Guinea only has one significant dataset that predates 1940, so conclusions regarding the temperature trend in this earlier time period are difficult to ascertain.


v) South Pacific

Most of the temperature data from the South Pacific comes from the various islands in the western half of the ocean. This data exhibits little if any warming, but does exhibit large fluctuations in temperature over the course of the 20th century (see Post 33). The eastern half of the South Pacific, on the other hand, exhibits a small but discernible negative temperature trend of between -0.1 and -0.2 °C per century (see Post 34).


vi) South America

Fig. 45.5: The temperature trend for South America since 1832. The best fit is applied to the interval 1900-1999 and has a gradient of +0.54 ± 0.05 °C per century.

In Post 35 I analysed over 300 of the longest temperature records from South America, including over 20 with more than 100 years of data. The overall trend suggests that temperatures fluctuated significantly before 1900 and have risen by about 0.5 °C since. The high temperatures seen before 1850 are exclusively due to the data from Rio de Janeiro and so may not be representative of the region as a whole.


vii) Southern Africa

Fig. 45.6: The temperature trend for South Africa since 1840. The best fit is applied to the interval 1857-1976 and has a gradient of +0.017 ± 0.056 °C per century.

In Posts 37-39 I looked at the temperature trends for South Africa, Botswana and Namibia. Botswana and Namibia were both found to have fewer than four usable sets of station data before 1960 and only about 10-12 afterwards. South Africa had much more data, but the general trends were the same. Before 1980 the temperature trends were stable or perhaps slightly negative, but after 1980 there was a sudden rise of between 0.5 °C and 2 °C in all three trends, with the largest being found in Botswana. This is not consistent with accepted theories of global warming (the rises in temperature are too large and too sudden, and do not correlate with rises in atmospheric carbon dioxide), and so the exact origin of these rises remains unexplained.

 

viii) Europe

Fig. 45.7: The temperature trend for Europe since 1700. The best fit is applied to the interval 1731-1980 and has a positive gradient of +0.10 ± 0.04 °C per century.

In Post 44 I used the 109 longest temperature records to determine the temperature trend in Europe since 1700. The resulting data suggests that temperatures were stable from 1700 to 1980 (they rose by less than 0.25 °C), and then rose suddenly by about 0.8 °C after 1986. The reason for this change is unclear, but one possibility is that it has occurred due to a significant improvement in air quality that reduced the amount of particulates in the atmosphere. These particulates, which may have been present in earlier years, could have induced a cooling that compensated for the underlying warming trend. Once removed, the temperature then rebounded.

Even if this is true, it suggests a maximum warming of about 1 °C since 1700, much of which could be the result of direct anthropogenic surface heating (DASH) as discussed in Post 14. In countries such as Belgium and the Netherlands the temperature rise is even less than that expected from such surface heating. It is also much less than that expected from an enhanced Greenhouse Effect due to increasing carbon dioxide levels in the atmosphere (i.e. about 1.5 °C in the Northern Hemisphere since 1910). In fact the total temperature rise should exceed 2.5 °C. So here is the BIG question: where has all that missing temperature rise gone?


Thursday, December 10, 2020

44. Europe - temperature trends since 1700

The longest temperature records that we have are almost all found in Europe. In fact Europe has over 30 records that predate 1800, and three that go back beyond 1750. One of those three is the De Bilt record from the Netherlands (Berkeley Earth ID: 175554) that I discussed in both Post 41 and Post 42 and which dates back to 1706. The second is Uppsala in Sweden (Berkeley Earth ID: 175676) which dates back to 1722, and the third is Berlin-Tempelhof in Germany (Berkeley Earth ID: 155194) which has data as far back as 1701. Overall, there are nearly 120 temperature records with over 1200 months of data that also have data that predates 1860 (see here for a list). If we average the anomalies from these records, we get the temperature trend shown in Fig. 44.1 below.

 

Fig. 44.1: The temperature trend for Europe since 1700. The best fit is applied to the interval 1731-1980 and has a positive gradient of +0.10 ± 0.04 °C per century. The monthly temperature changes are defined relative to the 1951-1980 monthly averages.

 

To construct the trend in Fig. 44.1 above the raw temperature data from each of 109 records was first converted to monthly anomaly data by subtracting the monthly reference temperatures (MRTs). The MRTs were in turn calculated for the time interval 1951-1980 by averaging the data in that record over all months in that period. This is the same time frame that was used by climate scientists in the 1980s to analyse temperature data, but is significantly earlier than the time intervals normally used today, which tend to be 1961-1990 or 1981-2010. I intend to discuss the reasons for these differences in time-frame in a later post.

The temperature trend in Fig. 44.1 has two features of note. The first is the very slight upward trend from 1730 to 1980 of approximately 0.10 °C per century. This amounts to a total temperature increase over that time period of about 0.25 °C which is significantly less than the standard deviation of the 10-year moving average of the same data. This suggests that this trend is insignificant when compared to natural variations in temperature.

The second feature is the sudden temperature rise of almost 0.8 °C seen in 1988. This looks unnatural. So much so that, if it were to occur in just one temperature record, then it could be ascribed to a random fluctuation, or a sudden change in the local environment or undocumented location change. But this is not seen in just one record; it is seen in the average of over 100 temperature records, as the data in Fig. 44.2 below shows.

 

Fig. 44.2: The number of sets of station data included each month in the temperature trend for Europe.

 

Nor can we claim that this is just a local effect. The map below in Fig. 44.3 shows the approximate location of all 109 stations whose data was used to construct the trend in Fig. 44.1 above. While it is clear that the greatest concentration of stations is in central Europe between France and Poland, it is also evident that there are significant numbers of stations with very long records located on the edges of Europe such as in the UK, Scandinavia and eastern Europe. This suggests that the sudden rise in temperature seen in 1988 is real and widespread.

 


 Fig. 44.3: The locations of long stations in Europe with more than 1800 months of data, or more than 1200 months of data but with significant data from before 1860. Those stations with a high warming trend from 1700-1980 are marked in red.

 

For comparison, I have performed the same averaging process on the adjusted data for each station created by Berkeley Earth. This adjusted data incorporates two adjustments. Firstly, the monthly reference temperatures (MRTs) are constructed from homogenized data for the region rather than from the raw station data. Secondly, the trend of each temperature record is split into segments at breakpoints, and each segment is adjusted up or down relative to its original position. These breakpoint adjustments are supposed to remove local measurement errors (such as those due to changes in instrumentation or location) and thus make the data more reliable, but as I pointed out in my previous post, reliability in temperature data is very hard to measure due to the amount of natural variability it contains.

 

Fig. 44.4: Temperature trends for all long and medium stations in Europe since 1750 derived by aggregating and averaging the Berkeley Earth adjusted data. The best fit linear trend line (in red) is for the period 1801-1980 and has a gradient of +0.33 ± 0.03 °C/century.

 

The results of averaging the Berkeley Earth adjusted data are shown in Fig. 44.4 above. Three things are noticeable in this data. Firstly, the trend in the data before 1980 has increased by a factor of three. There are two main reasons for this. One is that the adjustments made to the data have increased the trend slightly and smoothed out some of the peaks before 1830 (see Fig. 44.6 below). The other is that the interval used for fitting the linear regression is shorter, excluding the earlier data, which in turn increases the gradient of the trend.

The second feature of the data in Fig. 44.4 above is that the jump in temperature after 1988 is still present, and is just as large as that seen in Fig. 44.1.

The third feature of the data in Fig. 44.4 is that it closely resembles the data for the 12-month and 10-year trends published by Berkeley Earth (see Fig. 44.5 below). This suggests that the averaging process I have used is sufficiently accurate without the need to apply different weightings to the data from different stations as Berkeley Earth does. The weightings used by Berkeley Earth are supposed to correct for any clustering of stations, but the map in Fig. 44.3 suggests these weightings are unlikely to vary significantly for most stations, and so are unlikely to be of primary importance. The agreement between the data in Fig. 44.4 and that in Fig. 44.5 appears to confirm this.

 

Fig. 44.5: The temperature trend for Europe since 1750 according to Berkeley Earth.

 

It can be seen from these results that the differences between the trends I have constructed using the original data and the trends derived using Berkeley Earth's adjusted data are not as large as those seen in previous regional analyses, such as those for South Africa (Post 37), South America (Post 35), the South Pacific (Post 33 and Post 34), Papua New Guinea (Post 32), Indonesia (Post 31), Australia (Post 26) and New Zealand (Post 8). These differences for Europe are shown in Fig. 44.6 below.

 

Fig. 44.6: The contribution of Berkeley Earth (BE) adjustments to the anomaly data in Fig. 44.4 after smoothing with a 12-month moving average. The linear best fit (red line) to the breakpoint adjustment data (shown in orange) is for the period 1841-2010 and has a gradient of 0.057 ± 0.001 °C per century. The blue curve represents the total BE adjustments including those from homogenization.
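The 12-month moving average used for the smoothing in Fig. 44.6 can be sketched as follows. As a sanity check, a window of exactly 12 months removes a pure annual cycle, which is the point of choosing that window for monthly anomaly data.

```python
import math

def moving_average(values, window=12):
    """Simple moving average, emitted only where a full window exists."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# A pure annual sine cycle (4 years of monthly values) averages to
# essentially zero over any 12 consecutive months.
seasonal = [math.sin(2 * math.pi * m / 12) for m in range(48)]
smoothed = moving_average(seasonal, 12)
print(max(abs(v) for v in smoothed))
```

The same function with `window=60` gives the 5-year moving average used for Fig. 44.7.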

 

Overall, the adjustments made by Berkeley Earth to their data have probably added only about 0.2 °C to the warming. More significant are the adjustments made to data before 1830, which appear to be designed to flatten the curve. Such adjustments, though, assume that the mean temperature before 1830 was stable. Yet the data from 1830 to 1980 suggests that the temperature trend for Europe was anything but stable, even though the trend shown in Fig. 44.1 was constructed from between 50 and 109 different datasets over that period. The full extent of that instability in the 5-year average temperature can be seen in Fig. 44.7 below.

 

Fig. 44.7: The 5-year moving average of the temperature trend for Europe since 1700. The best fit is applied to the monthly anomaly data for the interval 1731-1980 and has a positive gradient of +0.10 ± 0.04 °C per century.


Conclusions

In 1981 James Hansen and co-workers at NASA's Goddard Institute for Space Studies (GISS) published a paper in the pre-eminent journal Science (which, incidentally, has an impact factor of 41.8, where impact factors over 1.0 are considered good) that was one of the first to warn of the impact that increased levels of carbon dioxide in the atmosphere could have on global warming and climate change. But here is the problem: the data shown here appears to indicate that there was no significant warming in Europe before 1981. As the data in Fig. 44.1 indicates, the total warming in Europe over the 250 years before 1981 was so small (less than 0.25 °C) that it was less than the natural variation in the mean decadal temperatures over the same period.

Then, in 1988 the mean temperatures in Europe suddenly jumped by over 0.8 °C (see Fig. 44.1), just in time for the IPCC's first assessment report on climate change in 1990 (PDF). A similar abrupt jump was seen at about the same time in Botswana and, to a lesser extent, in South Africa. Convenient, certainly. But is this just coincidence or 20:20 foresight by the IPCC?

As I have shown throughout the course of this blog, before 1981 there does not appear to have been any exceptional warming in most of the Southern Hemisphere either. So the above analysis raises important concerns regarding the reported extent of climate change in Europe and beyond. The most important question is: is the temperature rise seen after 1988 in Fig. 44.1 real? And if so, what is causing it? 

If it is being driven by CO2, then why does it not correlate with increases in CO2 levels in the atmosphere? If it is a natural phenomenon, why are there no other jumps of a similar magnitude in the previous 250 years? Could it be another example of chaotic behaviour similar to the self-similarity I explored in Post 42? And if so, is it just random, or is it the consequence of a complex system being driven between meta-stable states by, for example, greenhouse gases? What I don't see so far is conclusive evidence either way.