
Sunday, December 5, 2021

83. Is the BBC biased on climate change?

I am sure that for many people the answer to this question is obvious, and I suspect the answer depends on how strongly you believe in climate change. But consider this.

On the 25th August 2021 the BBC ran a story on their website reporting on a UN claim that Madagascar was "on the brink of climate change-induced famine".

 


Well, it turns out that not everyone agrees: on the 2nd December 2021 the BBC reported that another group of scientists had cast doubt on the UN claim.



The issue I have with these two reports is that they do not appear to have been given the same prominence or weight on the BBC site. While the first gets its own webpage, the second seems only to get a small note in the list of recent updates on the BBC climate page (unless I have missed something), and certainly does not link to a full webpage article (although it does link to the original article). So having created a sensational climate change headline, the BBC then seems to have (partially) buried the rebuttal, which just happens to go against the prevailing narrative that most environmental disasters are now due to climate change. 

Personally, I strongly support the BBC and see it as an important part of the British media, one that is essential in protecting political diversity and democracy in the UK. I just wish it would be more objective and critical, rather than trying to be sensationalist, populist, mainstream and worst of all, "impartial". 

For reference, the actual climate change in Madagascar is shown below (see also Post 77).


Fig. 77.6: The mean temperature anomaly (MTA) for Madagascar. The best fit is applied to the monthly mean data from 1932 to 2011 and has a negative gradient of -0.15 ± 0.07 °C per century.



Tuesday, November 16, 2021

80. Lateral thought #4 - COP26 and keeping 1.5 alive


For the last two weeks politicians from all over the world have been gathering and meeting in Glasgow in order to formulate an agreement to cut the use of fossil fuels by mankind. The target has been to keep the maximum extent of global warming below 1.5°C and so avoid a catastrophic warming of over 2.8°C by the end of this century. This may be very laudable, but in my opinion most of the measures agreed or demanded are unworkable and unnecessary.

My first critique concerns the current temperature rise and its projection. The received wisdom is that current warming relative to pre-industrial times (i.e. before 1750) now stands at 1.1°C. In contrast, the real temperature records, as outlined on this blog, show that this is unlikely to be true. Over the last sixteen months I have analysed the land-based temperature records of virtually the entire Southern Hemisphere, plus those of the USA, Europe and southern Asia. None show a warming of over 1°C since 1750 that correlates with increases in anthropogenic carbon dioxide emissions. The only consistent warming is seen after 1980, and this is only about 0.5°C in magnitude. Given that 70% of the Earth's surface is water and that the oceans heat up by less than half the amount that land does, it is impossible to get to a 1.1°C average warming globally unless one postulates that land temperatures have increased by over 2°C everywhere, as Berkeley Earth does (see Fig. 80.1 below). But the reality of the raw data is that there is virtually no country or continent that I have investigated so far where this has happened. So the real temperature increase to date is likely to be less than 0.5°C. And as I showed in Post 14 and Post 29, much of this 0.5°C could be due to urban heat island effects.


Fig. 80.1: Land and ocean global average temperature anomalies since 1850 according to Berkeley Earth.


My biggest criticism, though, is reserved for the proposed countermeasures. The one consideration that has been completely omitted from discussions of carbon reduction policies has been the economics. While a lot of time has been devoted to discussing financial aid to small developing countries that are supposedly at risk from climate change, none has been directed to considering the financial impact on producers and consumers. 

One of the main aims of COP26 was to "keep 1.5 alive" - namely to enact measures that would prevent the global temperature rise from exceeding 1.5°C. This, we are told, requires a 50% reduction in fossil fuel use by 2030, and a move to net-zero by about 2060. The question, then, is how do we reduce fossil fuel use by 50% by 2030, or 5% per year? At COP26 all the emphasis appeared to be on reducing fossil fuel demand rather than supply. Yet both are problematic, and both will cause economic hardship to many.

The current political strategy appears to revolve around getting as many countries as possible to cut their usage of fossil fuels, but this policy has two flaws. Firstly, it requires over 180 countries to agree to do something that none really want to do. That means it is highly unlikely to succeed (think: herding cats). But if it does there is the second problem. It will devastate the economies of many oil producers. What is striking is the callous disregard many climate activists have for the people of these countries.

Countries like Iran, Iraq, Azerbaijan, Russia, Libya, Nigeria and Venezuela are almost entirely dependent on the revenues from oil and gas to feed their people. They are economic monocultures. Nor do they have large sovereign wealth funds to fall back on like Norway, Saudi Arabia or Kuwait. So what happens to their economies when demand for oil and gas dries up, or their sale is banned by international treaty? The impact will be cataclysmic.

The alternative strategy is hardly much better, but will create a different set of losers. Rather than trying to regulate demand, the UN could instead try to regulate supply by getting the producers to cut supply by 5% per year and thus force the consumer nations to adapt. This strategy has two advantages. Firstly it requires the agreement only of the producers who are much fewer in number, and secondly any cut in supply would result in spikes in price which would largely protect the incomes of the producers. Instead the consumers would suffer, and with them the global economy. The result would be oil and gas shortages, high prices, fuel poverty and global economic collapse. So, not a great choice!


Thursday, September 30, 2021

78. Mozambique - temperature trend 0.6°C WARMING after 1980

According to climate science, Mozambique has experienced a more or less continuous warming of its climate of over 1.2°C over the last 130 years. The reality as evidenced by the actual temperature data is rather different. Like many places around the world, Mozambique has certainly seen some modest warming of about 0.6°C since 1980, but before 1980 the picture is uncertain, and on balance there may have been no significant warming at all in this period.

Mozambique has eleven long and medium weather stations, most of which are located on, or near, the coast, as indicated in Fig. 78.1 below. This is the same number of long and medium weather stations as was seen for Madagascar (see Post 77). However, Madagascar also has another nine stations with over 300 months of data; Mozambique has only four.


Fig. 78.1: The (approximate) locations of the weather stations in Mozambique. Those stations with a high warming trend between 1901 and 2000 are marked in red while those with a cooling or stable trend are marked in blue. Those denoted with squares are stations with over 1000 months of data, while diamonds denote stations with over 480 months of data.


Of the eleven long and medium stations in Mozambique, two are long stations with over 1200 months of data, another two have over 1000 months of data, and the remaining seven are medium stations with over 480 months of data. Averaging the temperature anomalies from these eleven station records results in the mean temperature anomaly (MTA) for the region shown in Fig. 78.2 below. The method used to determine the MTA was the same as that used in previous posts and is outlined in Post 47.

First the monthly reference temperatures (MRTs) were calculated for a suitable time interval, in this case the thirty year period from 1951 to 1980. This ensured that all eleven long and medium temperature records contained sufficient valid data in this interval for each month (a minimum of twelve years of data for each MRT is usually required). The MRTs for each set of station data were then subtracted from the raw monthly temperature readings to produce the anomalies for that station location. The anomalies from all eleven long and medium station records were then averaged to determine the MTA.
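For readers who prefer to see the procedure in code, the sketch below is a minimal illustration of the MRT and MTA steps just described. It assumes each station record is held as a pandas Series of monthly mean temperatures with a DatetimeIndex; that format, and the function names, are illustrative choices rather than anything specific to the datasets used here.

```python
import pandas as pd

def station_anomalies(temps, ref_start=1951, ref_end=1980, min_years=12):
    """Anomalies for one station relative to its monthly reference
    temperatures (MRTs) computed over the reference window; returns None
    if any calendar month has fewer than min_years of valid readings there."""
    ref = temps[(temps.index.year >= ref_start) & (temps.index.year <= ref_end)]
    counts = ref.groupby(ref.index.month).count()
    if len(counts) < 12 or (counts < min_years).any():
        return None                               # too little data to fix the MRTs
    mrt = ref.groupby(ref.index.month).mean()     # twelve reference temperatures
    return temps - mrt.loc[temps.index.month].values

def mean_temperature_anomaly(stations):
    """Month-by-month average of the usable stations' anomalies (the MTA)."""
    anoms = [a for a in map(station_anomalies, stations) if a is not None]
    return pd.concat(anoms, axis=1).mean(axis=1)
```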


Fig. 78.2: The temperature trend for Mozambique relative to the 1951-1980 monthly averages based on an average of anomalies from stations with over 480 months of data. The best fit is applied to the monthly mean data from 1913 to 2012 and has a positive gradient of +1.07 ± 0.06 °C per century.


The MTA data in Fig. 78.2 appears to show an upward trend of about 1°C per century, but that is not the whole story. As Fig. 78.3 below indicates, the number of stations that contribute to the MTA drops significantly as you go back in time before 1950. This in turn suggests that the trend in Fig. 78.2 after 1960 is much more reliable than that before 1930.


Fig. 78.3: The number of station records included each month in the mean temperature anomaly (MTA) trend for Mozambique in Fig. 78.2.


As usual I have compared my results based on raw temperature data with the Berkeley Earth (BE) version based on their adjusted data. The equivalent average of the BE adjusted anomalies is shown below in Fig. 78.4. From 1940 onwards it is very similar to the trend from the raw temperature data shown in Fig. 78.2 above. In both cases the MTA plateaus between 1940 and 1980 before rising by about 0.8°C over the next twenty years. Then it drops back slightly by about 0.2°C in the following decade.


Fig. 78.4: Temperature trends for Mozambique based on Berkeley Earth adjusted data. The average is for anomalies from all stations with over 360 months of data. The best fit linear trend line (in red) is for the period 1911-2010 and has a gradient of +1.00 ± 0.03°C/century.


The MTA based on BE adjusted data also shows good agreement between 1920 and 1990 with the official Berkeley Earth trend for Mozambique shown in Fig. 78.5 below. It is only after 1995 that there is significant disagreement between the two plots. This may be because Berkeley Earth has included more data in the average after 2000, using many more stations with fewer than 300 months of data than I have.


Fig. 78.5: The temperature trend for Mozambique since 1840 according to Berkeley Earth.


It is clear that the temperature trend for Mozambique exhibits some warming after 1980, but the trend before 1980 is less clear. However, if we look at all the available data (see here for a list of all stations in Mozambique) we do see that there are a number of stations with data from 1930 to 1960 that are not included in the MTA trend shown in Fig. 78.2 above, due to insufficient data within the MRT period. It turns out that the temperature time series of most of these station datasets display cooling trends from 1930-1960. The one main exception is the data from Lourenço Marques (Berkeley Earth ID: 156935) which shows more or less continuous warming from 1920 onwards.

If we wish to incorporate these extra stations into the MTA then we need to change the MRT period to the interval 1931-1960 when most of this extra data was recorded. This is what I have done to generate the data in Fig. 78.6 below. The other significant change was to discard the data from Lourenço Marques (aka Maputo) from the MTA. The reason for doing this was that, while this station clearly has the longest set of temperature data in Mozambique (it has temperature data from as far back as 1892), its temperature trend is an outlier. This is probably because the station is located in the capital city Maputo and is more prone to the urban heat island (UHI) effect than most other stations in the country. I should note that this outlier behaviour of capital cities is not unusual. The same was seen for Jakarta in Indonesia (see Post 31) as well as for Sydney and Melbourne in Australia.


Fig. 78.6: The mean temperature trend relative to the 1931-1960 monthly averages for stations in Mozambique with over 300 months of data but excluding Lourenço Marques. The best fit is applied to the monthly mean data from 1921 to 1980 and has a positive gradient of +0.16 ± 0.12 °C per century.


The result of this change of MRT period, together with the removal of the data from Lourenço Marques, is to completely change the temperature trend before 1940 (see Fig. 78.6 above). While the trend after 1980 still exhibits a warming of 0.6°C, the trend before 1980 is now neutral but with significant natural variability. It is also more reliable as it is based on data from more stations than previously (see Fig. 78.7 below). This would suggest that the temperature trend in Fig. 78.6 is a better representation of the true behaviour of the Mozambique climate over the last 100 years than is the trend in Fig. 78.2. It also provides more evidence to suggest that anthropogenic climate change is really only a post-1980 phenomenon and not one that began in 1850.


Fig. 78.7: The number of station records included each month in the mean temperature anomaly (MTA) trend for Mozambique in Fig. 78.6.


Summary

The data shows that there is clearly strong warming of almost 0.6°C in Mozambique after 1980 (see Fig. 78.2).

Before 1980 the climate appears to be stable (see Fig. 78.6).

Overall this suggests that the warming seen in Mozambique after 1980 is probably real as it coincides with the period of greatest anthropogenic carbon dioxide production. It is also similar in timing and magnitude to temperature rises seen elsewhere around the world as I have shown previously.


Acronyms

BE = Berkeley Earth.

MRT = monthly reference temperature (see Post 47).

MTA = mean temperature anomaly.


Saturday, August 8, 2020

29. Lateral thought #1 - suburban heating


Question

How much does the average home heat up its environment?

This is really a question that ties in with what I wrote in Post 14 (Surface Heating), but I think it illustrates the point at a level that most people can relate to.
 

Answer

Well, we know from Trenberth et al. that the average power of solar radiation incident on the Earth's surface is approximately 161 W/m2 (see Fig. 14.1). We also know that this leads to a mean surface temperature for the Earth of about 288 K. We also know from Post 12 (black body radiation and Planck's law) that the emitted surface radiation density scales as T⁴, where T is the absolute temperature of the surface measured in kelvins, and the emitted radiation must balance the incoming radiation. In other words, both incoming and outgoing surface radiation densities will be proportional to T⁴. For the outgoing radiation the constant of proportionality will be the Stefan-Boltzmann constant, while for the incoming radiation it will be the Stefan-Boltzmann constant divided by the feedback amplification factor. It therefore follows that if the mean surface temperature were to increase by 1 K to 289 K, then the quantity T⁴ must increase by 1.40%. And there are two main ways that this could be achieved. 

The first is to increase the feedback or radiative forcing through an increase in the strength of the Greenhouse Effect. This is what most climate scientists concentrate on, and what they believe is responsible for any temperature changes. But the second possibility is to increase the radiation power absorbed by the surface before the feedback amplification occurs. This could happen if the strength of the Sun's output changes, but more realistically it will happen whenever extra heat is liberated at the surface of the Earth. The amount of heat required to do this will be 1.40 % of 161 W/m2, or 2.25 W/m2. So an increase of 2.25 W/m2 in the incident surface energy density will result in a 1 °C temperature rise (see Post 13 - Case 2).
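As a quick check of this arithmetic, using the figures quoted above and the simple proportional scaling just described:

```python
# Extra surface power needed for a 1 K rise, assuming the absorbed solar flux
# must scale with T^4 in the same way as the emitted radiation.
T0, T1 = 288.0, 289.0            # mean surface temperature before and after (K)
I0 = 161.0                       # solar flux absorbed at the surface (W/m^2)

fractional_rise = (T1 / T0)**4 - 1        # ~0.014, i.e. 1.40%
extra_flux = I0 * fractional_rise         # ~2.25 W/m^2

print(f"{fractional_rise:.2%} -> {extra_flux:.2f} W/m^2")
```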

As I pointed out in Post 14, a major source for such additional heat liberation at the Earth's surface is energy generation and consumption by humans, often for industrial needs. This leads to direct anthropogenic surface heating (DASH) that can raise the temperature of whole countries by as much as 1 °C. But it is not just industry that can significantly heat the local environment.

Consider a typical home. The average household in the UK uses at least 10,000 kWh of energy per year. That equates to an average rate of energy usage of 1.14 kW throughout the year.

The average land area of homes in the UK is at most 500 m2. Most modern housing developments have more than 30 new homes per hectare (see PPG3 guidance paragraphs 57-58); older suburban developments are generally a lot less dense than this; inner city flats and terraced houses are clearly a lot more.

All of this means that the power density for heat escaping from homes will be at least 2.28 W/m2 (i.e. 1140 ÷ 500). In other words, the energy used by a typical household is more than sufficient to increase the local surface temperature by more than 1 °C. And remember, all this heating has got nothing to do with CO2 emissions. Nor does this calculation include the energy consumption of commercial buildings, industry or transport.
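The household arithmetic can be checked in the same way (the 10,000 kWh and 500 m2 figures are the estimates quoted above, not measured values):

```python
# Average heat release per unit land area for a typical UK household
annual_energy_kwh = 10_000                   # energy used per household per year
hours_per_year = 365.25 * 24
mean_power_w = annual_energy_kwh * 1000 / hours_per_year    # ~1140 W

plot_area_m2 = 500                           # assumed land area per home
power_density = mean_power_w / plot_area_m2                 # ~2.28 W/m^2

print(f"{mean_power_w:.0f} W per home -> {power_density:.2f} W/m^2")
```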


Conclusion

The energy used by the average household in the UK raises the temperature of its local environment by at least 1 °C compared to pre-industrial levels. That will be true irrespective of the source of the energy. Renewables will not help. Nor will cutting your level of CO2 emissions. This is all down to heat, entropy and thermodynamics.

Wednesday, June 24, 2020

16. The story so far


 

The main purpose of this blog has been to analyse the physics behind climate change, and then to compare what the basic physics and raw data are indicating with what the climate scientists are saying. These are the results so far.

Post 7 looked at the temperature trend in New Zealand and found that the overall mean temperature actually declined until 1940 before increasing slightly up to the present day. The overall temperature change was a slight rise, amounting to only about 0.25 °C since the mid 19th century. This is much less than the 1 °C rise climate scientists claim.

Post 8 examined the temperature trend in New Zealand in more detail and found that the breakpoint adjustments made to the data by Berkeley Earth, that were intended to correct for data flaws, actually added more warming to the trend than was in the original data.

Post 9 looked at the noise spectrum of the New Zealand data and found evidence of self-similarity and scaling behaviour with a fractal dimension of about 0.25. This implies that long-term temperature records over several thousands of years should still exhibit fluctuations of over 0.5 °C between the average temperatures of successive centuries, even without human intervention. In other words, a temperature rise (or fall) of up to 1 °C over a century is likely to be fairly common over time, and perfectly natural.

Post 10 looked at the impact of Berkeley Earth's breakpoint adjustments on the scaling behaviour of the temperature records and found that they had a negative impact. In other words the integrity of the data appeared to decline rather than improve after the adjustments were made.

Post 11 looked at the degree of correlation between pairs of temperature records in New Zealand as a function of their distance apart. For the original data a strong linear negative trend was observed for the maximum possible correlation between station pairs over distances up to 3000 km. But again the effect of Berkeley Earth's breakpoint adjustments to the data was a negative one. This trend became less detectable after the adjustments had been made. The one-year and five-year moving average smoothed data did become more highly correlated though.

After analysing the physics that dictate how the Sun and the Earth's atmosphere interact to set the Earth's surface temperature in Post 13, I then explored the implications of direct heating or energy liberation by humans at the Earth's surface in Post 14. Calculations of this direct anthropogenic surface heating (DASH) showed that while human energy use only contributed an average increase of 0.013 °C to the current overall global temperature, this energy use was highly concentrated. It is practically zero over the oceans and the poles, but in the USA it leads to an average increase of almost 0.2 °C. This rises to 0.3 °C in Texas and 0.5 °C in Pennsylvania. Yet in Europe the increases are typically even greater. In England the increase is almost 0.7 °C, and in the Benelux countries almost 1.0 °C. Perhaps more significantly for our understanding of retreating glaciers, the mean temperature rise from this effect for all the alpine countries is at least 0.3 °C.

Finally in Post 15 I looked at the energy requirements for sea level rise (SLR). Recent papers have claimed that sea levels are rising by up to 3.5 mm per year while NOAA/NASA satellite data puts the rise at 3.1 mm per year. These values are non-trivial but are still a long way short of the rate needed to cause serious environmental problems over the next 100 years.

In upcoming posts I will examine more of the global temperature data. But given what I have discovered so far, it would be a surprise if the results were found to be as clear cut as climate scientists claim. Contrary to what many claim, the science is not settled, and the data is open to many interpretations. That is not to say that everything is hunky dory though. Far from it.



Sunday, June 14, 2020

13. The Earth's energy budget

In order to understand how the Earth is heating up, you need to understand why it is warm in the first place. That means you need to know where the energy is coming from and where it is going. That is the basis of the Earth's energy budget or energy balance.

The purpose of this post is to analyse that energy balance, and to determine which parts of it can change, and what the effects of those changes are likely to be. Specifically, this post will try to relate various possible changes in the energy balance to any consequential changes in global temperatures. In so doing, it will also be necessary to critically assess the degree of confidence surrounding the various estimates and measurements of the energy flows in the different parts of the atmosphere.

As I pointed out in the last post, virtually all the energy that is present on Earth originated in the Sun. The power density of solar radiation arriving at the top of the Earth’s atmosphere is 1361 watts per square metre (W/m2), and as I also pointed out, because the area this energy is ultimately required to heat up (4πr², where r is the Earth's radius) is four times the cross-sectional area that actually captures the energy (πr²), the mean power density (remember: power is the rate of flow of energy) that the Earth receives is only a quarter of the incoming 1361 W/m2, or about 341 W/m2. However, as I also showed in Fig. 12.1, not all this energy reaches the Earth's surface. In fact only about 161 W/m2 does. The rest is either absorbed by the atmosphere (78 W/m2), reflected by the atmosphere and clouds (79 W/m2), or reflected by the Earth's surface (23 W/m2). This is shown diagrammatically in Fig. 13.1 below.


  
Fig. 13.1: The Earth's energy budget as postulated by Trenberth et al. (2009).


The image in Fig. 13.1 is taken from a 2009 paper by Kevin Trenberth, John Fasullo and Jeffrey Kiehl (Bull. Amer. Meteor. Soc. 90 (3): 311–324). It is not necessarily the most definitive representation of the energy flows (as we shall see there are other models and significant disparities and uncertainties in the numbers), but it is probably the most cited. The data it quotes specifically relates to the energy budget for the period March 2000 - May 2004.



Fig. 13.2: The Earth's energy budget as postulated by Kiehl and Trenberth (1997).


The 2009 Trenberth paper is not the first or last paper he has produced on the subject. The energy budget it describes is actually a revision of an earlier attempt from 1997 (J. T. Kiehl and K. E. Trenberth, Bull. Amer. Meteor. Soc., 78, 197–208) shown in Fig 13.2 above, and has since been revised again in 2012 (K. E. Trenberth and J. T.  Fasullo, Surv. Geophys. 33, 413–426) as shown in Fig. 13.3 below.



Fig. 13.3: The Earth's energy budget as postulated by Trenberth and Fasullo (2012).


The only real difference between the energy budget in Fig. 13.3 and that from 2009 in Fig. 13.1 is the magnitude of the atmospheric window for long wave infra-red radiation (revised down from 40 W/m2 to 22 W/m2), but I still think this highlights the level of uncertainty that there is regarding these numbers. This is further emphasised by a contemporary paper from Stephens et al. (Nature Geoscience 5, 691–696 (2012) ) shown below in Fig. 13.4.



Fig. 13.4: The Earth's energy budget as postulated by Graeme L. Stephens et al. (2012).


As the 2009 Trenberth paper appears to be the most cited it is probably best to use this as the basis for the following discussion, but to bear in mind the amount of uncertainty regarding the actual numbers.

In Fig. 13.1 the three most significant numbers are those for the direct surface absorption from the Sun (161 W/m2), the upward surface radiation (396 W/m2), and the long-wave infra-red back radiation due to the Greenhouse Effect (333 W/m2). Of these it is the upward surface radiation (396 W/m2) that determines the temperature but its value is set by the other two.

As I explained in the last post the emission of electromagnetic radiation from a hot object is governed by the Stefan-Boltzmann law as shown below

  
I(T) = εσT⁴    (13.1)

where I(T) is the power density (per unit area) of the emitted radiation, σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴ is the Stefan-Boltzmann constant, and the term ε is the relative emissivity of the object. The emissivity defines the proportion of the emission from that object at a given wavelength compared to a black body at the same temperature, and it varies with wavelength. It is also different for different materials. In the case of planet Earth, it is generally assumed to be very close to unity all over the surface for all emission wavelengths, but this is not always the case.

It is Eq. 13.1 that allows us to determine the surface temperature (T = 289 K) from the upward surface radiation (396 W/m2) or vice versa. It also allows us to calculate the change in upward surface radiation that would result from a given increase in the surface temperature. It turns out that an increase in surface temperature of 1 °C would necessitate the upward surface radiation increasing from 396 W/m2 to 401 W/m2, in other words a 1.39% increase. A 2 °C increase would require a 2.80% increase in the upward surface radiation.
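These figures follow directly from Eq. 13.1 with the emissivity taken as unity, as this short check shows:

```python
# Stefan-Boltzmann check of the figures quoted above (emissivity taken as 1)
sigma = 5.67e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4

I_up = 396.0                       # upward surface radiation, W/m^2
T = (I_up / sigma) ** 0.25         # ~289 K

for dT in (1, 2):
    increase = ((T + dT) / T) ** 4 - 1
    print(f"+{dT} C: {I_up * (1 + increase):.1f} W/m^2 ({increase * 100:.1f}% increase)")
```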

I also explained in the last post how the total upward surface radiation (IT) was related to the direct surface absorption from the Sun (Io) via a feedback factor f which represented the fraction of upward surface radiation that was reflected back via the Greenhouse Effect.


IT = Io / (1 - f)    (13.2)

This model assumed that all the energy absorbed by the greenhouse gases came from one source, though, namely surface upward radiation, and was driven by a single input, the surface absorption of solar radiation, Io. As Fig. 13.1 indicates, this is not the case. This means that Eq. 13.2 will need to be modified.

The aim here is to determine what changes to the energy flows in Fig. 13.1 would result in a particular temperature rise, specifically a rise of 1 °C in the surface temperature. Realistically, there are only three things that could bring about any significant change. The first is a change in the amount of energy coming from the Sun. The second is a change in the direct absorption of radiation at the surface, Io. The third is a change in the strength of the Greenhouse Effect, f.


Case 1: Changes to the incoming solar radiation.

This is probably the easiest of the three propositions to analyse. If the incoming solar radiation at the top of the atmosphere were to change by 1.39%, then we would expect virtually all the projected heat flows in Fig. 13.1 to change by the same amount, including the upward surface radiation (from 396 W/m2 to 401 W/m2). This is because almost all the scattering mechanisms and absorption processes in Fig. 13.1 are linear and proportional. The two exceptions are likely to be the thermals (17 W/m2) and the evapo-transpiration (80 W/m2), the former of which will be governed more by temperature differences, and the latter by the non-linear Clausius-Clapeyron equation. While changes to these two components are likely to be linear for small changes, they are unlikely to be proportional. However, as the changes to these two components are likely to be fairly small and comparable to other errors, we can probably ignore these deficiencies. So, if the incoming solar radiation (1361 W/m2) were to increase by 1.39% we could see a global temperature rise of 1 °C.

The problem is that there is no evidence to suggest the Sun's output has changed by anything like 1.39% over the last 100 years, and no obvious theoretical mechanism to suggest that it could. The only evidence of change is from satellite measurements over the last 40 years or so that suggest an oscillation in solar output with an eleven year period and an amplitude of about 0.05% (see Fig. 13.5 below). This would give a maximum temperature change of about 0.1 °C.



Fig. 13.5: Changes in the Sun's output since 1979 (from NOAA).


The only other known mechanism is the Milankovitch cycle. This can produce temperature oscillations of over 10 °C in magnitude (peak to trough) but is only seen over 120,000 year cycles (see red curve in Fig. 13.6 below). 


Fig. 13.6: Changes to temperature in the southern oceans (red curve) derived from isotope analysis of the Vostok ice core in Antarctica.


These temperature oscillations are mainly due to changes in the Earth's orbit around the Sun (changes to a more elliptical orbit), or changes in the Earth's angle of inclination or tilt, or an increased precession that then exposes the polar regions to higher levels of solar radiation. Such effects may be responsible for the cycle of ice ages, but cannot be responsible for changes thought to have happened over the last 100 years. As the data in Fig. 13.6 indicates, even the periods of fastest climate change amounted to only a 10 °C increase over 10,000 years, or 0.1 °C per century, and we do not appear to be in one of those warming periods. If anything, the planet should be slowly cooling by about 0.01 °C per century.

The conclusion, therefore, is that global temperatures may fluctuate by 0.1 °C across the decade due to changes in solar output, but there is no evidence or credible mechanism that would support a long-term warming trend.


Case 2: Changes to the direct absorption of radiation at the surface.

The second possible driver of global warming comes from changes at the surface, specifically to the thermal energy absorbed there, Io. This will then impact on the total upward surface radiation IT and thereby also on the back radiation. According to Eq. 13.2 the changes to Io and IT should be proportional. As Eq. 13.1 indicates that a 1 °C change to the surface temperature, To, should result in a 1.39% change to IT, it follows that a 1.39% change to Io should result in a 1 °C change to To. Unfortunately there are three additional complications that we need to consider: the thermals (Ith = 17 W/m2), the evapo-transpiration (IE = 80 W/m2), and the incoming solar radiation absorbed by the atmosphere (IA = 78 W/m2).

The thermals (17 W/m2) and evapo-transpiration (80 W/m2) in Fig. 13.1 transfer heat from the surface into the upper atmosphere (top of the tropopause) by mass transfer (convection) rather than radiation. This may potentially provide a route for heat to escape from the Earth via a by-passing of the greenhouse mechanism. However, I would expect this energy to eventually get dumped in the atmosphere somewhere before the top of the tropopause (at a height of 20 km). When this happens it will merely add to the long-wave infra-red radiation being emitted from the surface, and so should still be reflected by the greenhouse gases. So while these heat sources will not contribute to the surface temperature as defined in Eq. 13.1, they should be included in the feedback factor f in Eq. 13.2.

So too will some of the power absorbed by the atmosphere directly from the incoming solar radiation (78 W/m2). Here again things are complicated because if the energy is absorbed before the bottom of the stratosphere (at 20 km altitude), the Greenhouse Effect will actually reflect some of that heat back into space. To account for this we can include an additional parameter μ as a variable that specifies the proportion of the incoming solar radiation absorbed by the atmosphere that is absorbed in the lower atmosphere, where it can be reflected back towards the surface. The fraction (1-μ) absorbed in the upper atmosphere will escape and therefore will not contribute to the back radiation.

In all there are seven energy terms that we need to consider.
  1. Initial surface absorption (Io = 161 W/m2).
  2. Thermals (Ith = 17 W/m2).
  3. Evapo-transpiration (IE = 80 W/m2).
  4. Upward surface long-wavelength radiation (Iup = 396 W/m2).
  5. Long-wavelength back radiation (IRF = 333 W/m2).
  6. Incoming solar absorbed by the atmosphere (IA = 78 W/m2).
  7. Net radiation permanently absorbed by the Earth's surface (Inet = 0.9 W/m2).
We must then consider energy conservation at the surface and in the atmosphere. At the surface the law of conservation of energy (1st law of thermodynamics) requires that

Io + IRF = Iup + Ith + IE + Inet    (13.3)

while in the atmosphere similar considerations mean that the total energy entering the atmosphere must equal the total that is emitted. As f is the proportion that is reflected back it follows that

IRF = f (Iup + Ith + IE + μ IA)    (13.4)

Recall that μ is the proportion of IA absorbed in the lower atmosphere, where it can be reflected back towards the surface, while the fraction (1-μ) absorbed in the upper atmosphere escapes and does not contribute to IRF. It therefore follows that


f = (Iup + Ith + IE + Inet - Io) / (Iup + Ith + IE + μ IA)    (13.5)

Using Eq. 13.5 we can work out a value for f, but only if we know μ, which we don't. However, using Eq. 13.4 and the knowledge that μ must lie in the range 0 < μ < 1, we can say that f will be in the range 0.583 to 0.675, and that when μ = 0.5, f = 0.626. This allows us to estimate the change required in Io to generate a 1 °C change in To, but to do that we will need to make some assumptions given the number of variables that there are.
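A minimal numerical sketch of this step, using the Fig. 13.1 values quoted above (the formula simply expresses the back radiation as the fraction f of everything entering the atmosphere from below, with only the fraction μ of IA included, as described in the text):

```python
# Feedback fraction f implied by the Fig. 13.1 numbers for different values of mu
I_up, I_th, I_E, I_A = 396.0, 17.0, 80.0, 78.0   # W/m^2
I_RF = 333.0                                     # back radiation, W/m^2

def feedback(mu):
    return I_RF / (I_up + I_th + I_E + mu * I_A)

for mu in (0.0, 0.5, 1.0):
    print(f"mu = {mu:.1f}: f = {feedback(mu):.3f}")   # 0.675, 0.626, 0.583
```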

First we can probably assume that f, μ and IA remain unchanged even when Io changes. We know that a 1 °C increase in To will result in a 1.39% increase in Iup to 401.5 W/m2 and a 2 °C increase in To will result in a 2.80% increase in Iup to 407.1 W/m2. The question is what happens to the thermals (Ith), the evapo-transpiration (IE) and the net surface absorption (Inet)? They will probably increase as well, but by how much? A good starting point is to assume that they will increase by the same percentage as the upward surface long-wavelength radiation (Iup). A benchmark control is to assume that they stay constant. This gives us the following two scenarios.

If Ith, IE and Inet scale with Iup and the scaling factor due to the increase in temperature To is g, then Eq. 13.5 can be rearranged to give

Io = g (Iup + Ith + IE + Inet) - f [g (Iup + Ith + IE) + μ IA]    (13.6)

whereas if Ith, IE and Inet are constant then

Io = g Iup + Ith + IE + Inet - f (g Iup + Ith + IE + μ IA)    (13.7)

We know that g = 1.0139 for a 1 °C rise in To and g = 1.0280 for a 2 °C rise in To. So combining the two options in Eq. 13.6 and Eq. 13.7 implies that Io is in the range 162.8-163.9 W/m2. That implies an excess direct heating at the surface of ∆Io = 2.33 ± 0.54 W/m2, with the error range being set by the range of possible values for f, μ, IE, Inet and Ith. A 2 °C increase in surface temperature would require a change in direct heating at the surface of ∆Io = 4.69 ± 1.09 W/m2.
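A rough numerical check of the two scenarios, taking Eq. 13.6 and Eq. 13.7 at face value with f fixed at its μ = 0.5 value (the full quoted spread also allows f and μ to vary):

```python
# Surface absorption Io needed for a 1 C rise under the two scenarios above
I_up, I_th, I_E, I_net, I_A = 396.0, 17.0, 80.0, 0.9, 78.0   # W/m^2
mu, f = 0.5, 0.626               # feedback fraction at mu = 0.5, from above
g = 1.0139                       # scaling of I_up for a 1 C temperature rise

# Scenario 1: thermals, evapo-transpiration and I_net all scale with I_up
Io_scaled = g * (I_up + I_th + I_E + I_net) - f * (g * (I_up + I_th + I_E) + mu * I_A)

# Scenario 2: thermals, evapo-transpiration and I_net held constant
Io_fixed = g * I_up + I_th + I_E + I_net - f * (g * I_up + I_th + I_E + mu * I_A)

print(Io_scaled - 161, Io_fixed - 161)    # excess surface heating, W/m^2
```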

The conclusion, therefore, is that a 1 °C increase in global temperatures would require an increase in the initial surface absorption of ∆Io = 2.3 ± 0.5 W/m2. How this might be achieved will be explored further in the next post.


Case 3: Changes to the feedback factor.

The most obvious and heavily reported mechanism by which global temperatures could increase is via changes to the Greenhouse Effect due to increased carbon dioxide concentrations in the atmosphere. The specific change that will ensue will be in the value of the feedback term, f, and hence the value of the back radiation, IRF. As in the previous case, some of the heat flow parameters in Fig. 13.1 would change and some would stay the same. For example, we can confidently assume that IA and Io will remain unchanged, but if f changes, so might μ. But as before, the main question is what happens to the thermals (Ith) and the evapo-transpiration (IE)?

Rearranging Eq. 13.5 once more gives

f = [g (Iup + Ith + IE + Inet) - Io] / [g (Iup + Ith + IE) + μ IA]    (13.8)

while for the case that Ith, IE and Inet are constant we get

f = (g Iup + Ith + IE + Inet - Io) / (g Iup + Ith + IE + μ IA)    (13.9)

It turns out there is very little difference in the results using the two methods. The biggest factor affecting f is the value of μ. When there is no warming (g = 1.0) f = 0.629. A warming of 1 °C (g = 1.0139) requires f to increase to 0.634, and a warming of 2 °C (g = 1.0280) requires f to increase to 0.638. These values all correspond to values for μ of 0.5, but the possible spread of values for μ leads to an error in f of ±0.047 in all cases.

What this shows is that the increase in feedback factor needed for a 1 °C rise in global temperatures will be about 0.005. This is a small change, but at the end of the last post (Post 12) I calculated the fraction of the long-wave infra-red radiation that could be absorbed and reflected by carbon dioxide in its main absorption band (the frequency range 620-720 wavenumbers, or the wavelength range 13.89-16.13 μm). The result was at best 10.5%. This implies that only about 15% of the Greenhouse Effect is due to CO2, and the rest is due to other agents, mainly water vapour.

The conclusion, therefore, is that a 1 °C increase in global temperatures would require an increase in the width or strength of the carbon dioxide absorption band of at least 5% relative to its current size in order to achieve this temperature rise.


The final point to note is the size of the potential measurement errors in the various energy flows, and the effect of rounding errors. A particularly egregious anomaly occurs at the top of the atmosphere in Fig. 13.1 (and remains uncorrected in Fig. 13.3) where the rounded value of the incoming solar radiation (341 W/m2) balances the rounded outgoing values (239 W/m2 and 102 W/m2). This is inconsistent with the rest of the diagram as there should be a 0.9 W/m2 difference to account for the net absorption at the surface. In the more exact values quoted (341.3 W/m2, 238.5 W/m2 and 101.9 W/m2) this difference is specified correctly. So the problem is a rounding issue initially, but it then has a knock-on effect for the values quoted within the atmosphere.

For consistency it would therefore be better in this instance to round the 238.5 W/m2 value down (to 238 W/m2) rather than up (to 239 W/m2). That would ensure that there was a net inflow of about 1 W/m2 that balanced the net absorbed value at the surface (0.9 W/m2). It would also eliminate the false imbalance within the atmosphere itself. Here the net inflow should balance the net outflow (currently there is a 1 W/m2 deficit). There can be no 0.9 W/m2 energy gain in the atmosphere otherwise the atmosphere would heat up, and heat up by more than 2.7 °C per annum. What should remain invariant at various points from the surface to the top of the atmosphere is the following energy balance

Io + IA - ITOA = Inet    (13.10)

where ITOA = 238.5 W/m2 is the outgoing long-wave radiation at the top of the atmosphere. A correction for this error requires the stated value for the power emitted upwards by the atmosphere (169 W/m2) in Fig. 13.1 to be reduced to 168 W/m2.

It is also important to note that some of the errors in the energy flows in Fig. 13.1 to Fig. 13.4 are considerable, either in magnitude or as a percentage. A comparison of the data in Fig. 13.1 and Fig. 13.4 illustrates how variable the results can be. The back radiation values, for example, do not agree within the noted error range, and the net surface absorption is 50% higher in Trenberth's papers than it is in the Stephens paper (Fig. 13.4). I shall look at the net surface absorption in more detail later as it has important implications for sea level rise, but the fact that this value is so small, not just relative to the other energy flows, but also in comparison to their errors, is a cause for concern with respect to its own accuracy. The net surface absorption should also be measurable directly at the top of the atmosphere using satellite technology to measure both the solar energy going in and the Earth's thermal energy flowing out. Yet the discrepancies seen there between incoming and outgoing energy flows currently far exceed 0.9 W/m2. The result is that most of the energy flows shown in Fig. 13.1 to Fig. 13.4 are, at best, estimates, and are often based more on climate models than on actual data.

Sunday, May 24, 2020

6. New Zealand station profile

New Zealand is probably most famous for two things: sheep and rugby (not necessarily in that order). I’m not sure what impact rugby has had on global warming, but sheep are not exactly carbon-neutral. I shall leave further discussion regarding the methane problem until another day though.

New Zealand is, however, surprising in one sense: despite being a small country with an even smaller population, it has the second highest number of long temperature records in the Southern Hemisphere. Only Australia has more station records with more than 1200 monthly measurements each (1200 being the equivalent of more than 100 years of data). New Zealand therefore seems like a good place to start analysing regional temperature trends.

According to Berkeley Earth, New Zealand has about 64 station records (it may be slightly more or less depending on whether you include some near to Antarctica or some South Pacific islands). Of these, ten have more than 1200 months of temperature data stretching back to the 19th century, including two that date back to January 1853. I shall characterize these as long stations due to the length of their records. In addition there are a further 27 stations with more than 240 months of data which could be characterized as medium length stations. This includes a further dozen or so stations that contain data covering most of the period from 1973 to 2013.

In my previous post I explained how temperature data can be processed into a usable form comprising the temperature anomaly, and how these anomalies can be combined to produce a global warming trend for the region (see Eq. 5.11 here). This process involves combining multiple temperature records from the same country or region into a single numerical series by adding the anomaly data from the same month in each record and taking the average. This new average should, in theory, have less noise than the individual records from which it is constructed because the averaging process should lead to a regression towards the mean. What is left should be a general trend curve that consists of the signal of long-term climate change for that region together with a depreciated noise component. As a starting point, in the next post we shall combine the ten longest data sets for New Zealand and see how the warming trend this produces compares with the trend as advertised by climate scientists.
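The noise-reduction claim is easy to demonstrate with synthetic data; in the sketch below pure random noise stands in for the anomaly records, so the only point being made is the 1/√N reduction in scatter when N independent records are averaged:

```python
import numpy as np

rng = np.random.default_rng(0)
months, n_stations = 1200, 10      # ~100 years of monthly anomalies from 10 stations
noise = rng.normal(0.0, 1.0, size=(n_stations, months))   # independent station noise

regional_mean = noise.mean(axis=0)                  # the combined (averaged) series
print(noise.std(), regional_mean.std())             # ~1.0 versus ~1/sqrt(10) = 0.32
```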

As I noted last time, we need to be careful in regard to how the different records are combined, and in particular, to consider two main issues. The first is the evenness of the distribution of the stations across the region in question. If stations are too close together they will merely reproduce each other’s data and render one or more of them redundant. Ideally they should be evenly spaced, otherwise they should be weighted by area (see Eq. 5.15 here).




Fig. 6.1: Geographical distribution of long (1200+ months), medium (400+ months) and short (240+ months) temperature records in New Zealand.


If we look at the spatial distribution of the long stations in New Zealand (see Fig. 6.1), we see that they are indeed distributed very evenly across the country. This means that weighting coefficients are unnecessary and the local warming trend can be approximated to high precision merely by averaging the anomalies from each station record.

The second issue regards the construction of the temperature anomalies themselves (how these anomalies are derived has been discussed here). These anomalies are the amount by which the average temperature for a particular month in a given year has deviated from the expected long-term value for that month. In other words, by how much does the average temperature for this month (May 2020) differ from the average temperature for all months of May over the last 30 years or so? Central to this derivation is the construction of a set of monthly average temperatures, which involves finding the mean temperature for each of the 12 months over a pre-defined time interval of about 30 years, as outlined here and in Fig. 4.2 here. I call these averages the monthly reference temperatures (MRTs) because they are the temperatures against which the actual data is compared in order to determine the monthly change in temperature. These temperature changes or anomalies are in essence a series of random fluctuations about the mean value, but they may also exhibit an underlying trend over time. It is this trend that climate scientists are seeking to identify and measure.

This immediately raises an important question: over what period should the reference temperature for each month be measured? Most climate science groups seem to favour the thirty year period from 1961-1990. It appears that this is chosen because it tends to correspond to a period with a high number of active stations, and this is certainly true for New Zealand. As the Berkeley Earth graph in Fig. 6.2 below shows, the number of active stations in New Zealand has risen over time, peaking at over 30 in the last few decades. However, when it comes to finding the optimum period for the MRT calculation, the station count alone is not always the best guide.


 Fig. 6.2: New Zealand stations used in the Berkeley Earth average.

What we really require is a time period which allows us to incorporate the maximum number of data points into our analysis. This can be achieved, not by summing the number of active stations each month, but instead by summing the total number of data points that each of the stations present in that month possesses. Such a graph of the sum of station frequency x data length versus time is shown below in Fig. 6.3.


Fig. 6.3: Data frequency over time.


Fig. 6.3 shows more clearly that the period 1970-2000 is the Goldilocks zone for calculating the MRT. Choosing this time period for the MRT not only allows us to incorporate a large number of stations, but it also means we will have a large number of data points per temperature record, and hence a longer trend.  Nevertheless, not all temperature records will have enough data in this region, and some useful data could still be lost. So why not choose a different period, say 10 years, or a longer period, say 50 years or 100 years that could access the lost data? And how much effect would this choice make on the overall warming trend?
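For what it is worth, the quantity plotted in Fig. 6.3 is straightforward to compute; the sketch below assumes the station records are held as a dictionary of pandas Series of monthly values, which is an illustrative choice rather than the actual data format used:

```python
import pandas as pd

def data_frequency(stations):
    """For each month, sum over the stations reporting in that month the total
    number of data points that each of those stations possesses (Fig. 6.3)."""
    lengths = pd.Series({name: s.count() for name, s in stations.items()})
    active = pd.DataFrame(stations).notna()      # months x stations, True if reporting
    return active.mul(lengths, axis=1).sum(axis=1)
```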

The problem is that there are various competing drivers at play here. One is the need to have the longest temperature record, as that will yield the most detectable temperature trend. But measurement accuracy also depends on having the highest number of stations in the calculation, and in having an accurate determination of the MRT for each. And of course, ideally the same time-frame should be used for all the different temperature records that are to be combined in order to maintain data consistency and accuracy. Unfortunately, this is not always possible as most temperature records tend to have different time-frames. When faced with the need to compromise, it is generally best to try different options and seek the optimal one.


Wednesday, May 20, 2020

4. Data analysis at the South Pole

If there is one place on Earth that is synonymous with global warming, it is Antarctica. The conventional narrative is that because of climate change, the polar ice caps are melting, all the polar bears and penguins are being rendered homeless and are likely to drown, and the rest of the planet will succumb to a flood of biblical proportions that will turn most of the Pacific islands into the Lost City of Atlantis, and generally lead to global apocalypse. Needless to say, most of this is a gross exaggeration.

I have already explained that melting sea ice at the North Pole cannot raise sea levels because of Archimedes’ principle. The same is true of ice shelves around Antarctica. The only ice that can melt and raise sea levels is that which is on land. In Antarctica (and Greenland) this is virtually all at altitude (above 1000 m) where the mean temperature is below -20 °C, and the mean monthly temperature NEVER gets above zero, even in summer. Consequently, the likelihood of any of this ice melting is negligible.

The problem with analysing climate change in Antarctica is that there is very little data. If you exclude the coastal regions and only look at the interior, there are only twenty sets of temperature data with more than 120 months of data, and only four extend back beyond 1985. Of those four, one has 140 data points and only runs between 1972 and 1986 and so is nigh on useless for our purposes. The other three I shall consider here in detail.

The record that is the longest (in terms of data points), most complete and most reliable is the one that is actually at the South Pole. This is the Amundsen-Scott Base, which is run by the US government and has been permanently manned since 1957. The graph below (Fig. 4.1) illustrates the mean monthly temperatures since 1957.



Fig. 4.1: The measured monthly temperatures at Amundsen-Scott Base.


The thing that strikes you first about the data is the large range of temperatures, an almost 40 degree swing from the warmest months to the coldest. This is mainly due to the seasonal variation between summer and winter. Unfortunately, this seasonal variation makes it virtually impossible to detect a discernible trend in the underlying data. This is a problem that is true for most temperature records, but is acutely so here. However, there is a solution. If we calculate the mean temperature for each of the twelve months individually, and then subtract these monthly means from all the respective monthly temperatures in the original record, what will be left will be a signal representing time dependent changes in the local climate.



Fig. 4.2: The monthly reference temperatures (MRTs) for Amundsen-Scott Base.


The graph above (Fig. 4.2) illustrates the monthly means for the data in Fig. 4.1. We get this repeating data set by adding together all the January data in Fig. 4.1 and dividing by the number of January readings (i.e. 57), and then repeating the method for the remaining 11 months. Plotting the twelve values for each year gives the repeating trend illustrated in Fig. 4.2. If we then subtract this data from the data in Fig. 4.1 we get the data shown below (Fig. 4.3). This is the temperature anomaly for each month, namely the amount by which the average temperature for that month has deviated from the expected long-term value shown in Fig. 4.2. This is the temperature data that climate scientists are interested in and try to analyse. The monthly means in Fig. 4.2 therefore represent a series of monthly reference temperatures (MRTs) that are subtracted from the raw data in order to generate the temperature anomaly data. The temperature anomalies are therefore the amount by which the actual temperature each month changes relative to the reference or average for that month.



Fig. 4.3: The monthly temperature anomalies for Amundsen-Scott Base.


Also shown in Fig. 4.3 is the line of best fit to the temperature anomaly (red line). This is almost perfectly flat, although its slope is slightly negative (-0.003 °C/century). Even though the error in the gradient is ±0.6 °C per century, we can still venture, based on this data, that there is no global warming at the South Pole.

The reasons for the error in the best fit gradient being so large (it is comparable to the global trend claimed by the IPCC and climate scientists) are the large scatter in the temperature anomalies (standard deviation = ±2.4 °C) and the relatively short time baseline of 57 years (1957-2013). This is why long time series are essential, but unfortunately these are also very rare.
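The size of this uncertainty can be roughly reproduced from the scatter and the record length alone, assuming uncorrelated monthly noise (the simplest possible model):

```python
import numpy as np

# Standard error of a least-squares gradient fitted to N evenly spaced monthly
# points with scatter sigma, expressed in C per century.
sigma = 2.4                        # standard deviation of the anomalies (C)
years = 57                         # 1957-2013 baseline
t = np.arange(years * 12) / 12.0   # time in years

se_per_year = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))
print(se_per_year * 100)           # ~0.56 C per century, i.e. roughly the 0.6 quoted
```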

Then there is another problem: outliers. Occasionally the data is bad or untrustworthy. This is often manifested as a data point that not only fails to follow the trend of the other data, but is not even in the same ballpark. This can be seen in the data below (Fig. 4.4) for the Vostok station, which is located over 1280 km from the South Pole.



Fig. 4.4: The measured monthly temperatures at Vostok.


There is clearly an extreme value for the January 1984 reading. There are also others, including in March 1985 and March 1997, but these are obscured by the large spread of the data. They only become apparent when the anomaly is calculated, but we can remove these data points in order to make the data more robust. To do this the following process was performed.

First, find the monthly reference temperatures (MRTs) and the anomalies as before. Then calculate the mean anomaly, together with either the standard deviation of the anomalies or the mean deviation (either will do). Next, set a limit for the maximum number of multiples of that deviation that an anomaly data point can lie above or below the mean value for it to be considered a good data point (I generally choose a factor of 5). Any data points that fall outside this limit are then excluded. Finally, with this modified dataset, the MRTs and the anomalies are recalculated. The result of this process for Vostok is shown below together with the best fit line (red line) to the resulting anomaly data (Fig. 4.5).
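A sketch of this rejection loop in code (the factor of 5 and the use of the standard deviation follow the description above; the series format is an assumption for illustration):

```python
import pandas as pd

def remove_outliers(temps: pd.Series, n_dev: float = 5.0) -> pd.Series:
    """Drop monthly readings whose anomaly lies more than n_dev deviations from
    the mean anomaly; MRTs and anomalies are then recomputed from what remains."""
    months = temps.index.month
    mrt = temps.groupby(months).mean()             # first-pass reference temperatures
    anom = temps - mrt.loc[months].values          # first-pass anomalies
    keep = (anom - anom.mean()).abs() <= n_dev * anom.std()
    return temps[keep]
```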


Fig. 4.5: The monthly temperature anomalies for Vostok.


Notice how the best fit line is now sloping up slightly, indicating a warming trend. The gradient, although looking very shallow, is still an impressive +1.00 ± 0.63 °C/century, which is more than the warming the IPCC claims for the entire planet. This shows how difficult these measurements are, and how statistically unreliable. Also, look at the uncertainty or error of ±0.63 °C/century. This is almost as much as the measured value. Why? Well, partly because of the short time baseline and high noise level as discussed previously, and partly because of the underlying oscillations in the data, which appear to have a periodicity of about 15 years. The impact of these oscillations becomes apparent when we reduce or change the length of the base timeline.


Fig. 4.6: The monthly temperature anomalies for Vostok with reduced fitting range.


In Fig. 4.6 the same data is presented, but the best fit has only been performed on the data between 1960 and 2000. The result is that the best fit trend line (red line) changes sign and now demonstrates long-term cooling of -0.53 ± 1.00 °C/century. Not only has the trend changed sign, but the uncertainty has increased.

What this shows is the difficulty of doing a least squares best fit to an oscillatory dataset. Many people assume that the best fit line for a sine wave lies along the x-axis because there are equal numbers of points above and below the best fit line. But this is not so, as the graph below illustrates.



 Fig. 4.7: The best fit to a sine wave.


The gradient of the best fit line to a single sine wave oscillation of width 2π and amplitude A has a magnitude of 3A/π² (see Fig. 4.7); this is easy to verify numerically, as the short check below shows. The gradient reduces by a factor n for n complete oscillations, but it never goes to zero. Only a best fit to a cosine wave will have zero gradient, because it is symmetric. Yet the problem with temperature data is that most station records contain an oscillatory component that distorts the overall trend in the manner described above. This is certainly a problem for many of the fits to shorter data sets (less than 20 years). But a far bigger problem is that most temperature records are fragmented and incomplete, as the next example will illustrate.
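Here is that numerical check (the sine wave is synthetic, so nothing here depends on any particular temperature record):

```python
import numpy as np

# Least-squares gradient of n complete sine oscillations fitted over a window of width 2*pi
A = 1.0
x = np.linspace(0.0, 2.0 * np.pi, 100_000)

for n in (1, 2, 4):
    slope = np.polyfit(x, A * np.sin(n * x), 1)[0]
    print(n, abs(slope), 3 * A / (np.pi ** 2 * n))   # the two columns agree
```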



Fig. 4.8: The measured monthly temperatures at Byrd Station.


Byrd Station is located 1110 km from the South Pole. Its local climate is slightly warmer than those at Amundsen-Scott and Vostok but the variation in seasonal temperature is just as extreme (see Fig. 4.8 above). Unfortunately, its data is far from complete. This means that its best fit line is severely compromised.



Fig. 4.9: The monthly temperature anomalies for Byrd Station.


The best fit to the Byrd Station data has a warming trend of +3.96 ± 0.83 °C/century (see the red line in Fig. 4.9 above). However, things are not quite that simple, particularly given the missing data between 1970 and 1980, which may well conceal a data peak, as well as the sparse data between 2000 and 2010, which appears to coincide with a trough. It therefore seems likely that the gradient would be very different, and much lower, if all the data were present. How much lower we will never know. Nor can we know for certain why so much data is missing. Is this because the site of the weather station changed? In which case, can we really consider all the data to be part of a single record, or should we be analysing the fragments separately? This is a major and very controversial topic in climate science. As I will show later, it leads to the development of contentious numerical methods such as breakpoint alignment and homogenization.

What this post has illustrated, I hope, is the difficulty of discerning an unambiguous warming (or cooling) trend in a temperature record. This is compounded by factors such as inadequate record length, high noise levels, missing and fragmented data, and underlying nonlinear trends of unknown origin. However, if we can combine records, could that improve the situation? And if we do, would it yield something similar to the legendary hockey stick graph that is so iconic and controversial in climate science? Next I will use the temperature data from New Zealand to try and do just that.