
Wednesday, March 31, 2021

57. The case against temperature data adjustments (EU)

Fig. 57.1: The number of weather stations with temperature data in the Northern Hemisphere since 1700 according to Berkeley Earth.

 

There are four major problems with global temperature data.

1) It is not spread evenly

Only about 10% of all available data covers the Southern Hemisphere (compare Fig. 57.2 below with Fig. 57.1 above), while in the Northern Hemisphere over half the data is from the USA alone (as shown in Fig. 57.3). In addition, there is no reliable temperature data covering the oceans from before 1998 when the Argo programme for a global array of 3000 autonomous profiling floats was proposed. The Argo programme has since been used to measure ocean temperatures and salinity on a continuous basis down to depths of 1000m across most of the oceans between the polar regions, but that means we only have reliable data for the last 20 years. 

The result is that only land-based data is available before 1998, and this tends to cluster around urban areas. The solution to this clustering employed by climate scientists is to resort to techniques such as gridding, weighting and homogenization. 

Gridding involves creating a virtual grid of points across the Earth's surface, usually 1° of longitude or latitude apart. This is limited by two factors: computing power and data coverage. As there are unlikely to be any weather stations at these grid points, unless by coincidence, virtual station records are created at these points by averaging the temperatures from the nearest real stations. This averaging of stations is not equal. Instead the average usually weights the different stations according to their closeness in distance (although even stations 1000 km away can be included) and their correlation to the mean of all those datasets. This process of weighting based on correlation is often called homogenization. 
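To make the gridding and weighting idea concrete, here is a minimal sketch of a distance-weighted grid-point estimate (illustrative only; the inverse-distance weighting and the 1000 km cut-off are my own simplifications, not the exact scheme used by Berkeley Earth or anyone else, and the correlation-based homogenization weights are omitted):

```python
import numpy as np

def grid_point_estimate(grid_lat, grid_lon, stations, cutoff_km=1000.0):
    """Estimate the anomaly at a virtual grid point as a distance-weighted
    average of nearby real stations. `stations` is a list of
    (latitude, longitude, anomaly) tuples."""
    R = 6371.0                              # Earth radius in km
    weights, values = [], []
    for lat, lon, anomaly in stations:
        # great-circle distance via the haversine formula
        dlat, dlon = np.radians(lat - grid_lat), np.radians(lon - grid_lon)
        a = (np.sin(dlat / 2) ** 2 +
             np.cos(np.radians(grid_lat)) * np.cos(np.radians(lat)) *
             np.sin(dlon / 2) ** 2)
        d = 2 * R * np.arcsin(np.sqrt(a))
        if d < cutoff_km:
            weights.append(1.0 / (1.0 + d))  # nearer stations count for more
            values.append(anomaly)
    return np.average(values, weights=weights) if weights else np.nan

# Example: three stations around a virtual grid point at 50°N, 10°E
stations = [(49.5, 9.0, 0.8), (51.0, 11.0, 0.6), (48.0, 14.0, 1.1)]
print(grid_point_estimate(50.0, 10.0, stations))
```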

 

Fig. 57.2: The number of weather stations with temperature data in the Southern Hemisphere since 1750 according to Berkeley Earth.

 

2) It does not go back far enough in time

As I have shown previously, the earliest temperature records are from Germany (see Post 49) and the Netherlands (see Post 41) and go back to the early 18th century. However, there is no Southern Hemisphere temperature data before 1830, and only two datasets in the USA from before 1810. The principal reason is that the amount of available data is positively correlated with economic development. As more countries have industrialized, the number of weather stations has increased. Unfortunately, climate change involves measuring the change in temperature since a previous epoch or reference period (over say 100 or 200 years), and in those times the availability of data is much, much worse. So increasing the quality of current data cannot increase the quality of the measured temperature change. This will always be constrained by how much data we had in the distant past.


Fig. 57.3: The number of weather stations with temperature data in the United States since 1700 according to Berkeley Earth.


3) The data is often subject to measurement errors

Over time weather stations are often moved, instruments are upgraded, and the local environment changes as well. The conventional wisdom is that all these changes have profound impacts on the temperature records that need to be compensated for. This is the rationale behind data adjustments. The problem is, none of it is really justified, as I will demonstrate in this post.

If there are problems with the temperature data at different times and locations, these issues should be randomly distributed. That means any adjustments to correct these errors should be randomly distributed as well. This in turn means that averaging a sufficiently large number of stations for a regional or global trend should result in the cancellation of both the errors and the adjustments. As I have shown in many previous posts here, this does not happen. In fact in many cases the adjustments can add (or subtract) as much, or even more, warming (or cooling) to the mean trend than is present in the original data, particularly in the Southern Hemisphere. For examples see my posts for Texas, Indonesia, PNG, the South Pacific (East and West), NSW, Victoria, South Australia, Northern Territory and New Zealand among others.

One contentious issue is the problem of station moves or changes to the local environment. The conventional wisdom is that both will strongly affect the temperature record. Frankly, I disagree. In my view those who say they will are failing to understand what is being measured. Consider one example: what would happen if a weather station were moved from open ground to an area under a large tree? Does the increased shade reduce the temperature? The answer is no, because the thermometer is already in the shade inside its Stevenson screen. Moreover, the thermometer is measuring air temperature, not the temperature on the ground, and the air is continuously circulating. So the air under the tree is at virtually the same temperature as the air above open ground. The one change that does affect temperature is altitude: air (almost) always gets colder as you ascend in height.

4) There just isn't enough data

There are currently about 40,000 weather stations across the globe. This sounds like a lot, but it is only about one for every 13,000 square kilometres of area. That means that on average these stations are over 110 km apart, or more than 1° of longitude or latitude. Even today, that is probably the bare minimum of what is required to measure a global temperature. Unfortunately, in previous times, the availability of data was much, much worse.
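The arithmetic behind those figures is simple enough to verify (a quick back-of-envelope check in Python, taking the Earth's total surface area as 510 million km2):

```python
# Rough check of the station density quoted above
earth_area_km2 = 510e6                 # total surface area of the Earth
n_stations = 40_000
area_per_station = earth_area_km2 / n_stations   # ~12,750 km2 per station
mean_spacing_km = area_per_station ** 0.5        # ~113 km between stations
print(round(area_per_station), round(mean_spacing_km))
```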

Of course, now there are alternatives. One is to use satellites, but again this only provides data back to about 1980. The other problem with satellites is that their orbits generally do not cover the polar regions. And finally, they can only see what is emitted at the top of the atmosphere (TOA). So they can measure temperatures at the TOA, but measuring surface temperatures can be problematic as the infra-red radiation emitted by the surface is largely absorbed by carbon dioxide and water vapour in the lower atmosphere.

Over the course of the last eleven months I have posted 56 articles to this blog. Over half of these have analysed the surface temperature trends in various countries, states and regions. In virtually every case, the trend I have determined by averaging station anomalies has differed from the conventional widely publicized versions. These differences are largely due to homogenization and data adjustments. 

Homogenization

There are two potential issues with homogenization. Firstly, there are more urban stations than rural ones. This is because stations tend to be located near to where people live. Secondly, urban stations tend to be closer together. So they are more likely to be strongly correlated. As homogenization uses correlation for weighting the influence of each station's data in the mean temperature for the local region, this means that the influence of urban stations will be stronger. 

So both potential issues are likely to favour urban stations over rural ones. Yet it is the urban ones that are more likely to be biased due to the urban heat island (UHI) effect. The result is that this bias is often transmitted to the less contaminated rural stations, thereby biasing the whole regional trend upwards. This is why I do not use homogenization in my analysis. The other problematic intervention is data adjustment.

Data adjustments

The rationale for data adjustments is that they are needed to compensate for measurement errors that may occur from changes of station site, instrument or method. The justification for using them is that climate scientists believe they can identify weak points in the data. Some might call that hubris. The alternative viewpoint is that these adjustments are unnecessary and that averaging a sufficiently large sample will erase the errors automatically via regression to the mean. I will now demonstrate that with real data.
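The cancellation argument is easy to illustrate with a toy Monte Carlo simulation (a sketch with assumed numbers, not real station data): if each station's anomaly is the true regional anomaly plus a random error, the residual error in the regional mean shrinks roughly as 1/√N as more stations are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
true_anomaly = 0.5      # assumed common regional anomaly (°C)
error_sd = 0.8          # assumed size of the random station errors (°C)

for n_stations in (1, 5, 20, 100):
    # each station reports the true anomaly plus its own random error
    samples = true_anomaly + error_sd * rng.standard_normal((10_000, n_stations))
    regional_mean = samples.mean(axis=1)
    # the spread of the regional mean falls roughly as 1/sqrt(n_stations)
    print(n_stations, round(regional_mean.std(), 3))
```

In this toy example the residual spread falls below 0.2 °C once about 20 stations are averaged, which is the sort of threshold discussed later in this post.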


Fig. 57.4: The 5-year average temperature trends for Austria, Hungary and Czechoslovakia together with best fit lines for the interval 1791-1980 (m is the gradient in °C per century). The Austria and Czechoslovakia data are offset by +2°C and -2°C respectively to aid clarity.


In three recent posts I calculated and examined the temperature trends for Czechoslovakia (Post 53), Hungary (Post 54) and Austria (Post 55). The five-year moving averages of the temperature trends in these three countries are shown in Fig. 57.4 above. What is immediately apparent is the high degree of similarity that these trends display, particularly after 1940. This is indicated by the red and black arrows which mark the positions of coincident peaks and troughs respectively in the three datasets.

It turns out that all three datasets are also very similar to that of Germany (see Post 49). This is shown in Fig. 57.5 below. This is not surprising as the four countries are all close neighbours. What is surprising is that there are not greater differences between the four datasets, particularly given the number of adjustments that Berkeley Earth felt needed to be made to the individual station records for these countries when undertaking their analysis.


Fig. 57.5: The 5-year average temperature trends for Austria, Hungary and Czechoslovakia compared to that of Germany.


To understand the potential impact of these adjustments, consider this. The temperature trend for Austria in Fig. 55.1 of Post 55 was determined by averaging up to 26 individual temperature records. Yet the total number of adjustments made to those records by Berkeley Earth in the time interval 1940-2013 was more than 90. That is more than three adjustments per temperature record, or at least one for every 21 years of data. Yet if the adjustments are ignored, and the data for each country is just averaged normally, the results for each country, Austria, Czechoslovakia, Germany and Hungary, are virtually identical. This leads to the following conclusions and implications.


Conclusions

1) The data in Fig. 57.5 indicates that the temperature trends for Austria, Czechoslovakia, Germany and Hungary are virtually identical after 1940. The probability that this is due to random chance is minimal. It therefore implies that the temperature trends for these countries from 1940 onwards are indeed virtually identical. This is not a total surprise as they are all close neighbours.

2) As all the individual temperature anomaly time series used to generate these trends are not identical, and all are likely to have data irregularities from time to time, this also means that those data irregularities are highly likely to be random in both their size and distribution across the various time series. This means that when averaged to create the regional trend, their irregularities will partially cancel. If the number of sites is large enough, the cancellation will be almost total. This is what is seen in Fig. 57.5, and it is why all the trends shown are virtually identical post-1940.


Implications

1) If the temperature trends for Austria, Czechoslovakia, Germany and Hungary are virtually identical after 1940, as conclusion #1 suggests, then it is reasonable to suppose that they should be virtually identical before 1940 as well. But they aren't, as the data in Fig. 57.5 illustrates. This is because the trends in each case are based on the average of too few individual anomaly time-series for the irregularities from each station time-series to be fully cancelled by the irregularities from the remainder. Before 1940 there are only sixteen valid temperature records in Austria, three in Hungary and three in Czechoslovakia. Germany, on the other hand, has about thirty.

2) However, if it is true that all the temperature trends for Austria, Czechoslovakia, Germany and Hungary before 1940 should be the same, then there is no reason why we cannot combine them all into a single trend. This would dramatically increase the number of individual time-series being averaged, and so reduce the discrepancy between the calculated value for the trend and the true value. This has been done in Fig. 57.6 below.


Fig. 57.6: The temperature trend for Central Europe since 1700. The best fit is applied to the interval 1791-1980 and has a negative gradient of -0.05 ± 0.07 °C per century. The monthly temperature changes are defined relative to the 1981-2010 monthly averages.

 

The data in Fig. 57.6 represents the temperature trend for the combined region of Austria, Czechoslovakia, Germany and Hungary. The trend after 1940 is the same as that seen in those individual countries and the gradient of the best fit line for 1791-1980 more closely resembles the equivalent lines for Germany and Hungary than it does those of Austria and Czechoslovakia. But now we also have a more accurate trend before 1940. The question is, how much more accurate?

 

Fig. 57.7: The number of station time-series included in the average each month for the temperature trend in Fig. 57.6

 

The data from Austria, Czechoslovakia, Germany and Hungary suggest that approximately 20 different time-series are required in the average for the irregularities in the different station time-series to almost fully cancel. The graph in Fig. 57.7 suggests that this threshold is surpassed for almost every month of every year after 1830.

 

Fig. 57.8: The temperature trend for Central Europe since 1700. The best fit is applied to the interval 1831-2010 and has a positive gradient of 0.62 ± 0.07 °C per century. The monthly temperature changes are defined relative to the 1981-2010 monthly averages.

 

If we now calculate the best fit to the data in Fig. 57.8, but only use data after 1830, we get a gradient for the trend line of 0.62 °C per century. This equates to a temperature rise since 1830 of over 1.1 °C.

 

 
Fig. 57.9: The temperature trend for Central Europe since 1700. The best fit is applied to the interval 1781-2010 and has a positive gradient of 0.21 ± 0.05 °C per century. The monthly temperature changes are defined relative to the 1981-2010 monthly averages.

 

However, you could argue that the regional monthly average data in Fig. 57.6 is still reasonably accurate all the way back to 1780 as it continues to have over a dozen temperature records incorporated into the average every month of every year after this time. In which case the temperature rise since 1780, as indicated by the best fit line in Fig. 57.9, is actually less than 0.5 °C. This suggests that we can be reasonably confident that temperatures in central Europe between 1750 and 1830 were fairly similar to those of today.


Summary

What I have demonstrated here is that adjustments to the raw temperature data are unnecessary and can be avoided simply by averaging sufficient datasets (i.e. more than about 20).

I have also shown that it is highly likely that the mean temperature in central Europe is not much higher now than it was at the start of the Industrial Revolution (1750-1830). 


Disclaimer: No data were harmed or mistreated during the writing of this post. This blog believes that all data deserve to be respected and to have their values protected.


Tuesday, February 16, 2021

49. Germany - temperature trends PARABOLIC

If any country in Europe were to exhibit the effects of anthropogenic global warming (AGW) and climate change, then you might expect that country to be Germany. Except that it doesn't.

There are over 135 sets of weather data for Germany that contain over 480 months of data (see here). Of these 34 are long stations with over 1200 months of data while the remainder I denote as medium stations. In fact ten temperature records have over 2000 months of data. This makes the temperature data for Germany some of the best available.

The geographical locations of these weather stations are indicated on the map below (see Fig. 49.1). This shows that both the long and medium stations are distributed fairly evenly, although there appear to be slightly fewer medium stations in the former East Germany. The stations are also differentiated according to the strength of their warming trend. Those with a large warming trend are marked in red, where a large trend is defined to be one that is both greater than 0.25 °C in total and also more than twice the uncertainty. 

The threshold of 0.25 °C is set equal to the temperature rise that one would expect in the EU as a whole due to waste heat or direct anthropogenic surface heating (DASH) due to human and industrial activity. In fact for Germany, based on its population, area and energy consumption, we would expect the temperature rise since 1700 due to DASH to be at least 0.6 °C (see Post 14), even without the effects of an enhanced greenhouse effect.

 

Fig. 49.1: The locations of long stations (large squares) and medium stations (small diamonds) in Germany. Those stations with a high warming trend are marked in red.

 

The longest data set is for Berlin-Tempelhof (Berkeley Earth ID: 155194) which has data that extends back to 1701. This data is shown in Fig. 49.2 below as the temperature anomaly after subtracting the monthly reference temperatures (MRTs) based on the 1971-2000 averages. The method for calculating the anomalies and MRTs from the raw temperature data is described in Post 47. However, there are two caveats that need to be applied to the data in Fig. 49.2. Firstly, there are significant gaps in the data before 1756, and secondly any data before 1714 needs to be treated with caution simply because thermometers did not exist then, at least not in their current form. 


Fig. 49.2: The temperature trend for Berlin-Tempelhof since 1700. The best fit is applied to the interval 1821-1980 and has a positive gradient of +0.13 ± 0.10 °C per century. The monthly temperature changes are defined relative to the 1971-2000 monthly averages.


In order to determine the temperature trend for Germany I have averaged the temperature anomalies from all 135 long and medium stations. The result is shown in Fig. 49.3 below. All stations with less than 480 months of data are excluded as they add no real value to the result, particularly if the data is very recent (i.e. after 1980). This is because the temperature change over time is small, typically 1 °C per century, so you really need at least 40 years of data to detect a measurable trend above the noise.
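For readers who want to reproduce this kind of averaging, the following is a minimal sketch of the procedure described above and in Post 47 (assuming each station record is a pandas Series of monthly mean temperatures indexed by date; the function names are mine, not Berkeley Earth's):

```python
import pandas as pd

def station_anomalies(monthly_temps, ref_start=1971, ref_end=2000):
    """Convert a station's monthly mean temperatures into anomalies by
    subtracting the monthly reference temperatures (MRTs), i.e. the mean
    for each calendar month over the reference interval."""
    ref = monthly_temps[(monthly_temps.index.year >= ref_start) &
                        (monthly_temps.index.year <= ref_end)]
    mrt = ref.groupby(ref.index.month).mean()            # 12 reference values
    return monthly_temps - monthly_temps.index.month.map(mrt).to_numpy()

def regional_mean(stations, min_months=480):
    """Unweighted average of the anomalies of all stations that have at
    least `min_months` of data (no gridding, no homogenization)."""
    kept = {name: station_anomalies(s) for name, s in stations.items()
            if s.count() >= min_months}
    return pd.concat(kept, axis=1).mean(axis=1)
```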


Fig. 49.3: The temperature trend for Germany since 1700. The best fit is applied to the interval 1756-2005 and has a negative gradient of -0.02 ± 0.05 °C per century. The monthly temperature changes are defined relative to the 1971-2000 monthly averages.


What is immediately apparent is that the trend in Fig. 49.3 differs significantly from the widely publicized IPCC version. Firstly, temperatures before 1850 appear to be higher than they are now, not lower. Secondly, temperatures were stable or declining for over 150 years prior to 1980, not rising. And finally, the mean temperature appears to jump suddenly in 1988 just as the IPCC was being established. Some of these traits are also seen in the mean temperature trend I constructed for the whole of Europe that was published in Post 44. The 19th century cooling is also seen in the temperature data of New Zealand (see Post 8) and Australia (see Post 26).

 

Fig. 49.4: The amount of temperature data from Germany included in the temperature trend each month for three different choices of MRT interval.


As I pointed out in Post 47, the choice of interval for determining the MRTs can influence the number of station records that are included in the final average for the temperature trend, and thus can also influence the nature of the trend itself. In order to test how robust the trend in Fig. 49.3 is regarding changes to the MRT interval, I repeated the calculation for three different MRT intervals. The curves in Fig. 49.4 above show how the number of stations in the final trend changes for each of the different MRT intervals. 

It is clear that there is very little difference between choosing MRT intervals of 1956-1985 and 1971-2000, although the latter does result in a slightly larger number of stations being included in the trend calculation after 1960. The advantage of using the former interval is that it corresponds to a part of the temperature record where the mean temperature is fairly stable whereas the latter interval spans the abrupt increase in temperature seen around 1988. Despite this, in both cases the final trends are very similar, with the best fit in each case being -0.015 °C/century for the 1971-2000 MRT and -0.032 °C/century for the 1956-1985 MRT. In both cases the fitting range was 1756-2005.

The 1901-1930 interval enables more data from before 1930 to be included in the trend (from stations that were closed down before 1930), but significantly less after 1950 when many new stations were set up. Nevertheless, the final trend is almost identical to those for the other two MRT intervals, with the best fit being only slightly higher at +0.0004 °C/century. In all three cases temperatures before 1850 were about as high as those after 2000, and in all three cases the mean temperature trend exhibited a large jump in temperature in 1988, as is shown clearly in the 5-year moving average in Fig. 49.3.


Fig. 49.5: The temperature trend for Germany since 1750 according to Berkeley Earth.


Irrespective of which interval is used to determine the MRTs, the resulting temperature trend I have constructed and published in Fig. 49.3 differs significantly from that published by Berkeley Earth which is shown in Fig. 49.5 above. The difference, as I have noted before, is due to homogenization and breakpoint adjustments used by Berkeley Earth to create their adjusted anomalies for each station. Averaging their adjusted anomalies yields the trend shown below in Fig. 49.6, which is virtually identical to the one shown above in Fig. 49.5. This demonstrates that it is not a difference in averaging method that is responsible for the difference between my results in Fig. 49.3 and the Berkeley Earth result. So it must be a difference in the anomaly data itself that is responsible. This can only be due to the adjustments made by Berkeley Earth.


Fig. 49.6: Temperature trend in Germany since 1750 derived by aggregating and averaging the Berkeley Earth adjusted data for all long and medium stations. The best fit linear trend line (in red) is for the period 1801-1980 and has a gradient of +0.29 ± 0.03 °C/century.


The actual temperature difference between the data in Fig. 49.6 and that in Fig. 49.3 is shown below in Fig. 49.7 (blue curve) as the total adjustment made to the data by Berkeley Earth. The data in Fig. 49.7 highlights two points of note. Firstly, the Berkeley Earth adjustments are not neutral: they add about 0.3 °C to the warming after 1840. Secondly, the adjustments flatten the curve before 1840 and so remove the warm period that mirrors the one seen after 1988. In so doing these adjustments radically change the nature of the temperature trend from an oscillatory one in Fig. 49.3 to the infamous hockey stick shape in Fig. 49.6 that is now synonymous with anthropogenic global warming (AGW).
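The blue curve in Fig. 49.7 is simply the difference between the two regional averages, smoothed and then fitted with a straight line. A minimal sketch of that step (again assuming pandas Series of monthly values; the function names are mine):

```python
import numpy as np
import pandas as pd

def implied_adjustment(adjusted_mean, raw_mean, window=12):
    """Difference between the regional average built from adjusted anomalies
    and the one built from raw anomalies, smoothed with a 12-month moving
    average (cf. the blue curve in Fig. 49.7)."""
    return (adjusted_mean - raw_mean).rolling(window, center=True).mean()

def trend_per_century(series, start_year, end_year):
    """Least-squares gradient of a monthly series over the chosen interval,
    expressed in °C per century."""
    s = series[(series.index.year >= start_year) &
               (series.index.year <= end_year)].dropna()
    t = s.index.year.to_numpy() + (s.index.month.to_numpy() - 0.5) / 12.0
    return np.polyfit(t, s.to_numpy(), 1)[0] * 100.0
```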


Fig. 49.7: The contribution of Berkeley Earth (BE) adjustments to the anomaly data in Fig. 49.6 after smoothing with a 12-month moving average. The blue curve represents the total BE adjustments including those from homogenization. The linear best fit (red line) to these adjustments for the period 1841-2010 has a gradient of +0.173 ± 0.003 °C per century. The orange curve shows the contribution from breakpoint adjustments.


Conclusions

The results I have presented here clearly show that the real temperature trend for Germany over the last 300 years differs significantly from the conventional view of global warming. These differences can be summarized as follows.

1) Temperatures before 1840 were comparable to those of today (see Fig. 49.3).

2) The overall temperature trend since 1800 is broadly flat (see the best fit line in Fig. 49.3). 

3) At least 0.6 °C of any temperature rise since 1700 should be due to direct anthropogenic surface heating (DASH) or waste heat from human activity, and not from greenhouse gas emissions.

4) There is a large and seemingly unnatural temperature rise of 0.97 °C in 1988 that occurs at the very moment the IPCC is being formed (see the 5-year mean in Fig. 49.3).

5) Berkeley Earth adjustments have added 0.3 °C of warming to the temperature trend since 1840 and erased most of the warm temperatures before 1840 (see Fig. 49.7).

6) Of the 1.5 °C of warming since 1750 claimed by Berkeley Earth (see Fig. 49.6), 0.6 °C could be due to DASH (see point 3 above) and 0.3 °C is due to adjustments made to the temperature data by Berkeley Earth (see point 5 above).


Wednesday, June 17, 2020

14. Surface heating

The principal claim made by climate scientists is that global temperatures have increased by about 1 °C over the last 100 years. In the last post I outlined three ways that this might happen. The first, which was due to changes in the amount of solar radiation reaching the Earth, I discounted due to a lack of evidence or plausible mechanism. The last, changes to the radiative forcing term, I will discuss at a later date. In this post I will consider the second possibility: changes to the amount of direct heat absorption at the surface of the Earth. There are essentially only two ways this can happen: (i) through changes to the Earth's reflectivity or albedo; (ii) by direct heating of the surface from energy sources other than the Sun.

(i) Changing the Earth’s albedo.

As I explained in the last post, one way that the Earth's surface temperature might change is if the proportion of light from the Sun that is reflected from the surface were to change. The fraction reflected is called the albedo. This effect can be seen in Fig. 14.1 below, which is taken from a 2009 paper by Kevin Trenberth, John Fasullo and Jeffrey Kiehl (Bull. Amer. Meteor. Soc. 90 (3): 311–324). On the left of Fig. 14.1, where the direct radiation from the Sun (in yellow) impacts the surface, the radiation is partially reflected, with 23 W/m2 being reflected and 161 W/m2 absorbed. This equates to an albedo of 0.125 ( = 23/(23+161) ).

As an aside: it seems slightly suspicious that the fractions reflected at the surface (1/8) and at the top of the atmosphere (102/341 = 30%) are so close to simple fractions. Does this indicate a high degree of uncertainty in these numbers, I wonder?


Fig. 14.1: The Earth's energy balance according to Trenberth et al. (2009). 


In order for the surface temperature of the Earth to have increased by 1 °C, one way that this could have happened would be for the amount of energy absorbed at the surface to have increased over time by 2.3 W/m2. If this were to be achieved through changes to the albedo, then the albedo would need to have decreased from 0.1375 to 0.125. That is a change of 0.0125. So how likely is this?
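The arithmetic behind those albedo figures is easy to check (a quick sketch using the surface fluxes quoted from Fig. 14.1):

```python
# Albedo change needed to absorb an extra 2.3 W/m2 at the surface
incident = 23 + 161                 # W/m2 of direct sunlight reaching the surface
extra_absorbed = 2.3                # W/m2 needed for ~1 °C of surface warming
albedo_now = 23 / incident                          # 0.125
albedo_before = (23 + extra_absorbed) / incident    # 0.1375
print(albedo_now, albedo_before, round(albedo_before - albedo_now, 4))  # 0.0125
```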

The albedo of the Earth's surface depends on the type of material at the surface, as shown in Table 14.1. It also depends on the angle of incidence of the light, as light tends to reflect more off surfaces at grazing incidence. So ocean water at the equator has a lower albedo than it does near the poles. However, there is also much less surface area near the poles, which consequently reduces the contribution of high-angle reflectance.


Surface                 % of Earth's Surface Area    Albedo (%)    Contribution to the Earth's Albedo
Ocean                            71.00                    6                      0.0426
Forest                            7.62                  8-18                     0.0091
Grassland                         7.93                   25                      0.0198
Arable                            2.37                   17                      0.0040
Desert sand                       5.51                   40                      0.0220
Urban                             0.21                   20                      0.0004
Glaciers & ice caps               2.90                   80                      0.0232
Shrub & tundra                    2.46                   15                      0.0037

Table 14.1: Approximate albedo of different parts of the Earth's surface.
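The final column of Table 14.1 is just the area fraction multiplied by the albedo, and summing it gives the overall surface albedo. A quick check (using the mid-range value of 12% for forest, which is an assumption on my part):

```python
# Reproduce the contribution column of Table 14.1: area fraction x albedo
surface = {                       # (% of surface area, albedo in %)
    "Ocean":               (71.00,  6),
    "Forest":              ( 7.62, 12),   # mid-range of the 8-18% quoted
    "Grassland":           ( 7.93, 25),
    "Arable":              ( 2.37, 17),
    "Desert sand":         ( 5.51, 40),
    "Urban":               ( 0.21, 20),
    "Glaciers & ice caps": ( 2.90, 80),
    "Shrub & tundra":      ( 2.46, 15),
}
contributions = {k: (area / 100) * (albedo / 100)
                 for k, (area, albedo) in surface.items()}
print(round(sum(contributions.values()), 4))   # ~0.125, the surface albedo
```

The total comes out at about 0.125, consistent with the surface albedo derived from Fig. 14.1.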


The most common claims made about land use and climate change concern deforestation, increasing agricultural use, and increased urbanization. First, it is claimed that deforestation for farming, particularly livestock farming, contributes to global warming. As far as changes to the albedo are concerned, the evidence in Table 14.1 seems to point the other way: turning forests into grassland increases the albedo.

Urbanization is also generally believed to reduce albedo, partly through what is termed the urban heat island (UHI) effect. This is the theory that cities with large amounts of concrete soak up more heat, and tall buildings trap that heat. This may be true, but it may also be a small localized effect. Again the data in Table 14.1 does not support it as a major driver of global warming.

A third claim is often made about polar ice and glaciers. The claim is that, because ice and snow have high levels of albedo, any change in their total albedo would have a large impact on global temperatures. The two main negative effects cited tend to be reductions in area by melting, or black carbon soot particles that drop on the surface and reduce the albedo. The main problem here is that the changes required are huge; a 54% decrease in area, or a decrease in albedo from 0.80 to 0.37. The first obviously has not and will not happen, and the latter is very unlikely as it would require huge levels of soot deposits.

The conclusion is, therefore, that changes to the Earth's albedo are difficult to achieve, and any that might have occurred have probably produced very little real effect in terms of increasing global temperatures.



(ii) Direct anthropogenic surface heating due to human and industrial activity.

The proposition here is this. All energy generation by humans results in an output of heat or thermal energy. Not only does every industrial process produce waste heat, but all mechanical work that is done by that process eventually ends up as heat or entropy as well. These are the consequences of the Second Law of Thermodynamics, and as every physicist knows, nothing can defeat the Second Law of Thermodynamics. So as temperature is just a measure of heat and entropy, it follows that everything humans do, every industrial process they create, all the energy that goes in will, in the end, just heat up the environment.

In the last post I showed that an increase of 2.3 ± 0.5 W/m2 in the amount of radiation at the surface would raise global temperatures by 1 °C. So if we can work out what the rate of energy production and consumption by humans is, then we can equate that to a global temperature rise. The starting point for this is clear: we know from IPCC reports and the protestations of climate scientists that the human race currently emits about 36 gigatonnes of carbon dioxide (CO2) into the atmosphere each year. That CO2 is created primarily by three processes.

The first is the burning of pure carbon (from coal) that produces an energy output of 394 kJ/mol for the process


C + O2 → CO2                (14.1)

The second is burning of methane (natural gas) that produces an energy output of 882 kJ/mol for the process


CH4 + 2O2 → CO2 + 2H2O                (14.2)

The third is the burning of higher alkanes (from oil) that produces an energy output of about 660 kJ per mole of CO2 for the process


CnH2n+2 + (3n+1)/2 O2 → n CO2 + (n+1) H2O                (14.3)

Each of the above energy outputs is for the burning of carbon or hydrocarbons to produce one mole of CO2. To work out how much energy that amounts to in total, we need to know how much of each type of fossil fuel was used.

In 2018 global coal production was 7665 million tonnes, natural gas production was 3955 billion cubic metres or 2786 million tonnes (assuming 1 cubic metre = 704.5 g), and crude oil production was 4472 million tonnes. That suggests a mean energy output of about 560 kJ/mol. As 36 gigatonnes of carbon dioxide equates to 8.18 x 10^14 moles, then the total energy consumption would have been 4.58 x 10^20 J for the year, or 52,268 TWh.
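The intermediate figures can be checked with a few lines of Python (using 44 g/mol as the molar mass of CO2):

```python
# Energy implied by 36 Gt of CO2 at an average of ~560 kJ per mole of CO2
co2_mass_g = 36e9 * 1e6                   # 36 gigatonnes expressed in grams
molar_mass_co2 = 44.0                     # g/mol
moles_co2 = co2_mass_g / molar_mass_co2   # ~8.18e14 mol
energy_joules = moles_co2 * 560e3         # ~4.58e20 J for the year
print(f"{moles_co2:.2e} mol  {energy_joules:.2e} J")
```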



Fig. 14.2: Global fossil fuel consumption since 1800.


However, according to the Our World In Data website, the global energy consumption from fossil fuels in 2017 amounted to 36,704 TWh from natural gas, 53,752 TWh from crude oil and 43,397 TWh from coal (see Fig. 14.2 above). The total of these values (133,853 TWh) is 2.53 times the value based on CO2 emissions and suggests only 39% of fossil fuel combustion results in CO2. This higher figure equates to an average power density at the Earth's surface of 0.030 W/m2 across the whole surface of the Earth. That in turn implies a global temperature increase (based on the 2.3 W/m2 required for a 1 °C increase that I demonstrated in the last post) of 0.013 °C compared to pre-industrial times. This, though, still omits the impact of nuclear power and renewables.
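The conversion from annual energy consumption to a mean power density and temperature rise goes as follows (a sketch; the Earth's surface area and the 2.3 W/m2 per 1 °C figure are taken from earlier in the post and from the last post respectively):

```python
# Fossil-fuel energy use spread over the Earth's surface as a power density
total_twh = 36_704 + 53_752 + 43_397        # gas + oil + coal, 2017 (TWh)
joules_per_year = total_twh * 3.6e15        # 1 TWh = 3.6e15 J
seconds_per_year = 3.156e7
earth_area_m2 = 5.10e14
power_density = joules_per_year / seconds_per_year / earth_area_m2  # ~0.030 W/m2
warming = power_density / 2.3               # 2.3 W/m2 corresponds to ~1 °C
print(round(power_density, 3), "W/m2 ", round(warming, 3), "°C")
```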


Fig. 14.3: Global energy production by energy type (2005-2018).


According to Statistica.com renewables and nuclear energy accounted for 15.3% of global energy consumption in 2018, and fossil fuel usage in 2018 exceeded that in 2017 (see Fig. 14.3 above), so that implies a global temperature increase of at least 0.015 °C compared to pre-industrial times. This temperature increase of 0.015 °C is, however, at least 60 times less than the one the IPCC is claiming for global warming since 1850. So this suggests that any resulting surface heating is such a small effect that we can safely ignore it, right? Well, not so fast.

We know that this heat is not spread evenly; its impact is greatest in the areas where most people live and work. We know that 90% of people live in the Northern Hemisphere; we know that 99.999% of people live on land. It is also true that 90% of weather stations are in the Northern Hemisphere, and at least 99.9% of them are on land. In other words there is a high degree of correlation between where people live, where industrial energy usage is, and where the weather stations are. For example, 19.7% of the Earth's surface is land in the Northern Hemisphere. So if 90% of the energy use is found there then the mean temperature rise on land in the Northern Hemisphere will be 0.069 °C. But of course, even that fails to tell the whole story. If we look at individual countries the results become even more stark.

If we start with what has been, historically, the biggest CO2 producer, the USA, we see that it accounts for about 20% of global energy use despite being home to only 4.3% of the world's population, and covering only 1.6% of the Earth's surface area. That suggests that the power density for surface heating in the USA should be about 0.38 W/m2 (an increase by a factor of 12.6 on the global average of 0.03 W/m2). This picture is confirmed by data from the US Energy Information Administration which indicates that the total energy consumption of the 48 contiguous states (excluding Hawaii and Alaska) is 100.3 x 10^15 BTU (see Fig. 14.4 below) over an area of 8.08 x 10^6 km2. As 1 BTU (British thermal unit) is the equivalent of 1055 J, this gives a power density for surface heating of 0.42 W/m2. Yet this increases to 0.69 W/m2 in Texas and 1.11 W/m2 in Pennsylvania. That means that the temperature rise in Pennsylvania due to surface heating is almost 0.5 °C. But if we look at Europe the situation is even more extreme.


 Fig. 14.4: US energy consumption since 1950 by sector (in BTU).


According to the IEA, the UK's energy usage in 2018 was 177 million tonnes of oil equivalent (Mtoe), or 2059 TWh (1 Mtoe = 11.63 TWh). As the area of the UK is only 242,495 km2, that equates to a power density of 0.97 W/m2 and a temperature rise of 0.42 °C. But it is safe to assume that this energy usage will not be spread evenly across the country. At least 84% of both the UK population and UK economic activity is found in England (with an area of 130,395 km2), which implies a temperature rise for England alone of 0.66 °C. Yet that is still modest compared to Belgium and the Netherlands with their much higher population densities (see Table 14.2 below), where the projected temperature rise is close to 1.0 °C. That is more than the IPCC claims for global warming from greenhouse gas emissions.


Country          Energy Usage (Mtoe)    Power Density (W/m2)    Temperature Rise (°C)
UK                       177                    0.97                    0.42
Italy                    151                    0.66                    0.29
France                   245                    0.50                    0.22
Belgium                   52                    2.25                    0.98
Netherlands               72                    2.30                    1.00
Germany                  298                    1.11                    0.48
Austria                   33                    0.52                    0.23
Switzerland               24                    0.77                    0.34

Table 14.2: Energy usage, surface heating and temperature rise in Europe.
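The power density and temperature rise columns follow from the energy usage and each country's land area, using the same conversion as before. A sketch for a few of the entries (the country areas are approximate values I have assumed; they are not quoted in the table):

```python
# Reproduce some entries of Table 14.2 from energy use and land area
MTOE_TO_TWH = 11.63            # 1 Mtoe = 11.63 TWh
SECONDS_PER_YEAR = 3.156e7

countries = {                  # name: (energy use in Mtoe, area in km2)
    "UK":          (177, 242_495),
    "Belgium":     ( 52,  30_689),
    "Netherlands": ( 72,  41_543),
    "Germany":     (298, 357_386),
}
for name, (mtoe, area_km2) in countries.items():
    joules = mtoe * MTOE_TO_TWH * 3.6e15               # annual energy use in J
    power_density = joules / SECONDS_PER_YEAR / (area_km2 * 1e6)   # W/m2
    print(f"{name:12s} {power_density:4.2f} W/m2 {power_density / 2.3:5.2f} °C")
```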


What Table 14.2 illustrates is that surface heating is a significant factor in overall global warming, and it is occurring in every major EU country, including those that border the Alps. In fact the average temperature rise over all five of the main alpine countries is 0.30 °C. It is perhaps no wonder then that the alpine glaciers have been retreating for over a century, while those in Norway and New Zealand, where the population density (and also the economic activity) is much lower, have remained more stable. But what this warming is not due to is increased CO2 levels in the atmosphere or an enhanced Greenhouse Effect. That is a completely separate issue.

The conclusion we can draw from this is that, in most developed countries, warming of up to 1.0 °C has occurred since pre-industrial times, and this warming is solely a result of industrial activity and the heat that is generated as a result of that activity. This will occur irrespective of the energy type or source used because it is the heat that is directly warming the planet, not increases in the concentration of waste gases that then add to the Greenhouse Effect. This also means that when the energy usage goes down, the temperature should go down.

This has major implications for future energy policy because it means that nuclear power and most renewables are no better than fossil fuels. It also means that the efficiency of energy generation is as important as the quantity of energy generation in determining the amount of warming.



Fig. 14.5: Efficiencies of different power sources.


As an example of the impact of energy efficiency consider the case of solar photovoltaics. The relative efficiencies of different power sources are illustrated in Fig. 14.5 above. Of these photovoltaics are among the least efficient. They are in fact only about 15% efficient, meaning that for every 100 joules of energy they harvest from the Sun, they only create 15 joules of electricity. Yet in order to do this solar cells need to be 95% efficient in terms of absorbing incoming solar radiation. In other words their albedo needs to be less than 0.05. That means that for every 100 joules of solar radiation that falls on a solar cell, 5 joules is reflected back into space, 15 joules is turned into electricity (which will then become surface heat at the point of use), and 80 joules becomes waste surface heat in the solar cell.

A fashionable policy proposal at the moment is to put large numbers of photovoltaics in the Sahara Desert and then pump the electricity they produce to wherever it is needed. The problem is that not only will the electricity generated heat the location of its end user, but the solar cells will heat up the desert by decreasing the local albedo from 0.40 to 0.05. That is a double whammy. It is global warming without the need for CO2. Now you don't hear much about that from climate scientists.