
Wednesday, May 20, 2020

4. Data analysis at the South Pole

If there is one place on Earth that is synonymous with global warming, it is Antarctica. The conventional narrative is that because of climate change, the polar ice caps are melting, all the polar bears and penguins are being rendered homeless and are likely to drown, and the rest of the planet will succumb to a flood of biblical proportions that will turn most of the Pacific islands into the Lost City of Atlantis, and generally lead to global apocalypse. Needless to say, most of this is a gross exaggeration.

I have already explained that melting sea ice at the North Pole cannot raise sea levels because of Archimedes’ principle. The same is true of ice shelves around Antarctica. The only ice that can melt and raise sea levels is that which is on land. In Antarctica (and Greenland) this is virtually all at altitude (above 1000 m) where the mean temperature is below -20 °C, and the mean monthly temperature NEVER gets above zero, even in summer. Consequently, the likelihood of any of this ice melting is negligible.

The problem with analysing climate change in Antarctica is that there is very little data. If you exclude the coastal regions and only look at the interior, there are only twenty sets of temperature data with more than 120 months of data, and only four extend back beyond 1985. Of those four, one has 140 data points and only runs between 1972 and 1986 and so is nigh on useless for our purposes. The other three I shall consider here in detail.

The record that is the longest (in terms of data points), most complete and most reliable is the one that is actually at the South Pole: the Amundsen-Scott Base, which is run by the US government and has been permanently manned since 1957. The graph below (Fig. 4.1) illustrates the mean monthly temperatures since 1957.



Fig. 4.1: The measured monthly temperatures at Amundsen-Scott Base.


The thing that strikes you first about the data is the large range of temperatures, an almost 40 degree swing from the warmest months to the coldest. This is mainly due to the seasonal variation between summer and winter. Unfortunately, this seasonal variation makes it virtually impossible to detect a discernible trend in the underlying data. This is a problem that is true for most temperature records, but is acutely so here. However, there is a solution. If we calculate the mean temperature for each of the twelve months individually, and then subtract these monthly means from all the respective monthly temperatures in the original record, what will be left will be a signal representing time dependent changes in the local climate.



Fig. 4.2: The monthly reference temperatures (MRTs) for Amundsen-Scott Base.


The graph above (Fig. 4.2) illustrates the monthly means for the data in Fig. 4.1. We get this repeating data set by adding together all the January readings in Fig. 4.1 and dividing by their number (i.e. 57), and then repeating the method for the remaining eleven months. Plotting the twelve values for each year gives the repeating trend illustrated in Fig. 4.2. If we then subtract this data from the data in Fig. 4.1 we get the data shown below (Fig. 4.3). This is the temperature anomaly for each month, namely the amount by which the average temperature for that month has deviated from the expected long-term value shown in Fig. 4.2. It is this temperature data that climate scientists are interested in and try to analyse. The monthly means in Fig. 4.2 therefore represent a series of monthly reference temperatures (MRTs) that are subtracted from the raw data in order to generate the temperature anomaly data. The temperature anomalies are thus the amount by which the actual temperature each month changes relative to the reference, or average, for that month.
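The MRT and anomaly calculation described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the author's actual analysis: the function and variable names are my own, the record is assumed to be a 1-D array of monthly means starting in January, and missing months are assumed to be marked as NaN.

```python
# Sketch of the MRT/anomaly calculation: average all Januaries, all
# Februaries, etc., then subtract the matching reference from each month.
import numpy as np

def monthly_anomalies(temps):
    """Return (mrts, anomalies) for a monthly temperature series.

    mrts      -- 12 monthly reference temperatures (mean of all Januaries,
                 all Februaries, and so on)
    anomalies -- the series with the matching MRT subtracted from each month
    """
    temps = np.asarray(temps, dtype=float)
    months = np.arange(len(temps)) % 12          # 0 = Jan, ..., 11 = Dec
    mrts = np.array([np.nanmean(temps[months == m]) for m in range(12)])
    return mrts, temps - mrts[months]
```

Indexing the 12 reference values with `mrts[months]` tiles them across the whole series, so the subtraction removes the seasonal cycle in one step.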



Fig. 4.3: The monthly temperature anomalies for Amundsen-Scott Base.


Also shown in Fig. 4.3 is the line of best fit to the temperature anomaly (red line). This is almost perfectly flat, although its slope is slightly negative (-0.003 °C/century). Even though the error in the gradient is ±0.6 °C per century, we can still venture, based on this data, that there is no global warming at the South Pole.

The reasons the error in the best fit gradient is so large (it is comparable to the global trend claimed by the IPCC and climate scientists) are the large scatter in the temperature anomaly (standard deviation = ±2.4 °C) and the relatively short time baseline of 57 years (1957-2013). This is why long time series are essential; unfortunately they are also very rare.
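The gradient and its uncertainty can be obtained from an ordinary least-squares fit. The sketch below (my own names, standard textbook formulas) returns both the slope and its standard error.

```python
# Least-squares trend and the standard error of its gradient, computed
# from the residual variance in the usual way.
import numpy as np

def trend_with_error(years, anomalies):
    """Return (gradient, std_error) of the trend in degrees C per year."""
    years = np.asarray(years, dtype=float)
    anomalies = np.asarray(anomalies, dtype=float)
    n = len(years)
    x = years - years.mean()                      # centre the time axis
    slope = np.sum(x * anomalies) / np.sum(x ** 2)
    intercept = anomalies.mean()                  # OLS intercept for centred x
    resid = anomalies - (intercept + slope * x)
    # standard error of the slope: sqrt(residual variance / sum of squares)
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(x ** 2))
    return slope, se
```

As a rough consistency check, a noise level of ±2.4 °C over 57 years of monthly data gives a standard error of roughly 0.6 °C/century by these formulas, in line with the figure quoted above.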

Then there is another problem: outliers. Occasionally the data is bad or untrustworthy. This often manifests as a data point that not only fails to follow the trend of the other data, but is not even in the same ballpark. This can be seen in the data below (Fig. 4.4) for the Vostok station, which is located over 1280 km from the South Pole.



Fig. 4.4: The measured monthly temperatures at Vostok.


There is clearly an extreme value for the January 1984 reading. There are others too, including in March 1985 and March 1997, but these are obscured by the large spread of the data. They only become apparent when the anomaly is calculated, but we can remove these data points in order to make the data more robust. To do this, the following process was performed.

First, find the monthly reference temperatures (MRTs) and the anomalies as before. Then calculate the mean anomaly, followed by either the standard deviation of the anomalies or the mean deviation (either will do). Next, set a limit for the maximum number of multiples of that deviation by which an anomaly data point can lie above or below the mean value and still be considered good (I generally choose a factor of 5). Any data points that fall outside this limit are then excluded. Finally, with this modified dataset, recalculate the MRTs and the anomalies once more. The result of this process for Vostok is shown below, together with the best fit line (red line) to the resulting anomaly data (Fig. 4.5).
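The outlier-rejection recipe above can be sketched as follows. This is an illustrative reconstruction under the same assumptions as before (a monthly series starting in January, missing data as NaN); the threshold factor `k` plays the role of the factor of 5 mentioned in the text, and the names are my own.

```python
# Two-pass outlier rejection: compute MRTs and anomalies, drop points whose
# anomaly lies more than k deviations from the mean anomaly, then recompute
# the MRTs and anomalies from the surviving data.
import numpy as np

def reject_outliers(temps, k=5.0):
    temps = np.asarray(temps, dtype=float).copy()
    months = np.arange(len(temps)) % 12
    # first pass: MRTs and anomalies from the raw data
    mrts = np.array([np.nanmean(temps[months == m]) for m in range(12)])
    anom = temps - mrts[months]
    # flag anomalies more than k standard deviations from the mean anomaly
    bad = np.abs(anom - np.nanmean(anom)) > k * np.nanstd(anom)
    temps[bad] = np.nan
    # second pass: recompute MRTs and anomalies without the outliers
    mrts = np.array([np.nanmean(temps[months == m]) for m in range(12)])
    return mrts, temps - mrts[months]
```

Note that a single extreme value inflates both the MRT of its own month and the overall deviation, which is why the MRTs must be recomputed after the exclusion step rather than reused from the first pass.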


Fig. 4.5: The monthly temperature anomalies for Vostok.


Notice how the best fit line now slopes up slightly, indicating a warming trend. The gradient, although it looks very shallow, is still an impressive +1.00 ± 0.63 °C/century, which is more than the IPCC claims for the entire planet. This shows how difficult these measurements are, and how statistically unreliable. Also, look at the uncertainty, or error, of ±0.63 °C/century: it is almost as large as the measured value. Why? Partly because of the short time baseline and high noise level discussed previously, and partly because of the underlying oscillations in the data, which appear to have a periodicity of about 15 years. The impact of these oscillations becomes apparent when we reduce or change the length of the fitting baseline.


Fig. 4.6: The monthly temperature anomalies for Vostok with reduced fitting range.


In Fig. 4.6 the same data is presented, but the best fit has been performed only on the data between 1960 and 2000. The result is that the best fit trend line (red line) changes sign and now indicates a long-term cooling of -0.53 ± 1.00 °C/century. Not only has the trend changed sign, but the uncertainty has increased.

What this shows is the difficulty of doing a least squares best fit to an oscillatory dataset. Many people assume that the best fit line for a sine wave lies along the x-axis, because there are equal numbers of points above and below the axis. But this is not so, as the graph below illustrates.



Fig. 4.7: The best fit to a sine wave.


The best fit line to a single sine wave oscillation of width 2π and amplitude A has a gradient of magnitude 3A/π² (see Fig. 4.7). This gradient falls off as 1/n² for n complete oscillations, but it never goes to zero. Only a best fit to a cosine wave will have a zero gradient, because the cosine is symmetric about the centre of the fitting range. The problem with temperature data is that most station records contain an oscillatory component that distorts the overall trend in the manner described above. This is certainly a problem for many of the fits to shorter data sets (less than 20 years). But a far bigger problem is that most temperature records are fragmented and incomplete, as the next example will illustrate.
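The nonzero gradient of a sine-wave fit is easy to verify numerically. The sketch below (names are my own) fits a straight line to A·sin(x) over n complete cycles; for one cycle of width 2π the gradient comes out close to the analytic least-squares value of −3A/π², and it shrinks as 1/n² for more cycles.

```python
# Fit a straight line to A*sin(x) over n complete cycles and return
# the gradient. Dense uniform sampling approximates the continuous
# least-squares fit.
import numpy as np

def sine_fit_gradient(amplitude=1.0, n_cycles=1, n_points=100001):
    """Least-squares gradient of A*sin(x) sampled over n complete cycles."""
    x = np.linspace(0.0, 2.0 * np.pi * n_cycles, n_points)
    slope, _intercept = np.polyfit(x, amplitude * np.sin(x), 1)
    return slope
```

For one cycle the result is about −0.30 (i.e. −3/π² for A = 1); the sign is negative because the sine is positive in the first half of the interval and negative in the second.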



Fig. 4.8: The measured monthly temperatures at Byrd Station.


Byrd Station is located 1110 km from the South Pole. Its local climate is slightly warmer than that at Amundsen-Scott or Vostok, but the seasonal variation in temperature is just as extreme (see Fig. 4.8 above). Unfortunately, its data is far from complete, which means that its best fit line is severely compromised.



Fig. 4.9: The monthly temperature anomalies for Byrd Station.


The best fit to the Byrd Station data has a warming trend of +3.96 ± 0.83 °C/century (see the red line in Fig. 4.9 above). However, things are not quite that simple, particularly given the missing data between 1970 and 1980, which may well conceal a peak, and the sparse data between 2000 and 2010, which appears to coincide with a trough. It therefore seems likely that the gradient would be very different, and much lower, if all the data were present. How much lower we will never know. Nor can we know for certain why so much data is missing. Is it because the site of the weather station changed? In which case, can we really consider all the data to be part of a single record, or should we be analysing the fragments separately? This is a major and very controversial topic in climate science. As I will show later, it leads to the development of controversial numerical methods such as breakpoint alignment and homogenization.

What this post has illustrated, I hope, is the difficulty of discerning an unambiguous warming (or cooling) trend in a temperature record. This is compounded by factors such as inadequate record length, high noise levels, missing and fragmented data, and underlying nonlinear trends of unknown origin. However, if we could combine records, would that improve the situation? And if we did, would it yield something similar to the legendary hockey stick graph that is so iconic and controversial in climate science? Next I will use the temperature data from New Zealand to try to do just that.


Monday, May 18, 2020

2. Is climate change real?

Well, first there is the nature of the temperature rise itself. Below is the graph of the global temperature rise since 1880 (as postulated by climate scientists), which shows the generally accepted rise in temperature of about 0.9 °C. There are a number of problems with it, not least regarding how it is constructed, which I will discuss at length in later posts. But more immediately there is the question of why this temperature curve doesn’t look like most real temperature data.



Fig. 2.1: Global temperature rise since 1880.

Actual temperature records don’t look anything like the one above: they look like the one below.


Fig. 2.2: Temperature anomaly at Berlin-Tempelhof (1701-2013).

This is the temperature record from Berlin, showing the average monthly temperatures from month to month. It is probably the longest temperature record we have, extending back to 1701, which is doubly remarkable since Daniel Fahrenheit only invented the mercury thermometer and the temperature scale that bears his name in 1714. But there you go.

The first point to note is that the data in the Berlin-Tempelhof record consists of fluctuations of up to ±5 °C. These are changes to the monthly mean, not the result of seasonal variations. I suspect the size of these fluctuations is far greater than most people would imagine if asked to speculate. Was February 1929 really almost 13 °C colder on average than the previous February? Apparently it was, and we are not talking about the odd cold day here or there; we mean all 28 of them, or most of them at least. Not only that, the average yearly temperature fluctuated by over two degrees over the period 1701-2013, and the average temperature for each decade by more than one degree. In which case, why are climate scientists getting so worked up about a temperature rise of 0.9 °C over 125 years?

It is a rhetorical question that the Norwegian Nobel Laureate in physics, Ivar Giaever, posed when he resigned from the American Physical Society (APS) in 2011, principally over its stated position that global warming was happening and that “the evidence is incontrovertible”. He pointed out that a rise in the mean temperature of the Earth from 288.0 K to 288.8 K (which is 0.8 °C) in 150 years seems remarkably stable. This represents a change of less than 0.3% per century, which most people (including most physicists) would consider an example of extreme systemic stability.

Then there is the issue of carbon dioxide. The graph below is the plot of CO2 concentration in parts per million (ppm) in the atmosphere as measured near the top of Mauna Loa in Hawaii at an altitude of 3397 m. In climate science this data is one of the few pieces of hard evidence that is undisputed (except perhaps by a few cranks).


Fig. 2.3: The concentration of carbon dioxide in the atmosphere since 1959.

By 2005 the CO2 concentration had risen to just over 370 ppm, up from 280 ppm in pre-industrial times. This coincides with a temperature rise of about 0.9 °C. Yet if you look at when these changes happened, there are striking differences. Two-thirds of the rise in CO2 levels occurred before 1980, but two-thirds of the temperature rise came after 1980. That is not a strong positive correlation.

Then there is the plateau in temperatures between 1940 and 1980 in Fig. 2.1 above. What caused this? We don’t know for sure, but one suggestion is the cooling effect of airborne particles from industrial pollution, which increased as dirty heavy industry expanded after World War II, thus offsetting the temperature rise. This means, though, that up until 1980 there was virtually no noticeable increase in global temperatures. And this raises a much bigger question that rarely gets asked and is never answered, at least not by climate scientists. If temperatures before 1980 had not yet increased, why did the whole global warming hysteria suddenly take off in the 1980s? Are climate scientists also clairvoyants? How could their claims be based on evidence if the evidence wasn’t there (yet)?

Part of the reason why I think the global warming debate will not go away is that the whole subject is based on two facts that are undoubtedly true, but which have led many, via bad physics or bad logic, to a conclusion that may not be. The fact that CO2 levels in the atmosphere are increasing is true, as is the assertion that CO2 is a greenhouse gas. But this does not mean that increasing CO2 levels must lead to an increase in temperatures. The greenhouse effect is not linear, as I will probably discuss further at a later date.

What is particularly worrying about climate science is that the things that are controversial now were just as controversial back in the 1980s. In 1990, Channel 4 in the UK broadcast a documentary entitled “The Greenhouse Conspiracy” (try getting that commissioned on Channel 4 now). What is particularly striking about the programme is how many of the criticisms of global warming it made at the time remain valid today. Equally striking is how the apocalyptic claims made 30 years ago still haven’t come true, yet are still being touted by climate scientists to justify current policy.

One such issue is extreme weather. We are led to believe that global warming will lead to more extreme weather events such as droughts, storms and floods, except of course when it doesn’t. Then the explanation is that warming at the poles reduces the temperature gradient, thus reducing extreme weather. Talk about having your cake and eating it. Except that there is no evidence of warming at the poles. The temperature at the South Pole has gone down since 1956 (when the first measurements were taken), not up, and there is no weather data within 1000 miles of the North Pole, and never has been. As for extreme weather, the USA has some of the longest weather records on the planet, including records of extreme weather. These show that over the last 170 years there has been no increase in the number of hurricanes hitting the USA (see Fig. 2.4), nor has there been any change in their average strength.


Fig. 2.4: Number of hurricanes hitting the USA by decade (1850-2019).

Over the last 70 years the number of tornadoes has also been stable (see Fig. 2.5) but might actually be going down (2018 was the first year on record where no violent EF4 or EF5 tornadoes were recorded).

Fig. 2.5: Annual frequency of tornadoes (EF1-EF5) in the USA (1954-2014).

There has been no increase in drought, flooding or wildfires, or in the number of fatalities due to extreme events. And while most of these events are difficult to record accurately, their insurance costs can be measured and these have also remained stable as a proportion of GDP. And before anybody cries foul about the use of %GDP as a valid measure rather than real value in pounds or dollars, I will point out that insurance costs are invariably linked to the value of property, and property values are inextricably linked to increases in GDP.

Then there is the issue of melting polar ice-caps and rising sea levels. The problem here is that very little of this is true either. While glaciers in the Northern Hemisphere may be shrinking, some of those in the Southern Hemisphere have been growing. In New Zealand since 1980 many glaciers have expanded in size after large declines prior to 1960 (see below). Both of these changes are uncorrelated with either the local or the global temperature records.


Fig. 2.6: Length of selected New Zealand glaciers since 1900.

But it is really the claims regarding rising sea levels that need to be addressed. One frequent claim is that melting sea ice will raise sea levels. It won’t. Anyone who understands Archimedes’ principle should understand that. If a floating iceberg melts, the sea level must remain the same. That is basic physics. The iceberg floats because it displaces a mass of water equal to its own mass: about 10% of the iceberg sits above the waterline because the density of ice is roughly 10% less than that of water. When it melts it contracts and exactly fills the hole in the sea that it had previously created. There is no rise in sea level and never can be.
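The displacement argument can be checked with a two-line calculation. The densities below are round textbook values (about 917 kg/m³ for ice and 1000 kg/m³ for fresh water; sea water is slightly denser, which only strengthens the point), so the fraction above the waterline comes out nearer 8% than the round 10% figure often quoted.

```python
# Archimedes' principle for a floating iceberg: the displaced water volume
# equals the iceberg's mass divided by the water density, which is exactly
# the volume the ice occupies once melted.
ICE = 917.0     # kg/m^3, density of ice (round textbook value)
WATER = 1000.0  # kg/m^3, density of fresh water

iceberg_volume = 1.0                        # m^3 of floating ice
submerged = iceberg_volume * ICE / WATER    # volume of water displaced
meltwater = iceberg_volume * ICE / WATER    # volume of the same ice once melted

# The meltwater exactly fills the submerged "hole": no sea-level change.
assert submerged == meltwater
print(f"fraction above water: {1 - ICE / WATER:.1%}")  # prints "fraction above water: 8.3%"
```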

Nor can warming oceans be a significant factor in sea level rise. The thermal coefficient of expansion of water is 207 ppm/°C. As the average ocean depth is 3688 m, a 1 °C rise in ocean temperature at all depths would result in a rise of less than 77 cm. However, as it is extremely unlikely that any heating of the ocean surface could extend down more than about 500 m (remember, warm water is less dense and so rises to the surface), a more likely sea level rise would be a mere 10 cm, or about 1 mm per year. That is unnoticeable, and unmeasurable.
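The expansion figures above follow from simple arithmetic, sketched below using the coefficient and mean depth quoted in the text (note that the expansion coefficient of water actually varies with temperature; 207 ppm/°C is a representative value, not a universal constant).

```python
# Thermal expansion of a water column: a column of depth d warmed uniformly
# by dT expands by approximately alpha * d * dT.
ALPHA = 207e-6       # per degC, expansion coefficient quoted in the text
MEAN_DEPTH = 3688.0  # m, mean ocean depth quoted in the text

full_depth_rise = ALPHA * MEAN_DEPTH * 1.0   # whole ocean warmed by 1 degC
top_500m_rise = ALPHA * 500.0 * 1.0          # only the top 500 m warmed

print(f"full depth: {full_depth_rise * 100:.1f} cm")  # about 76 cm
print(f"top 500 m:  {top_500m_rise * 100:.1f} cm")    # about 10 cm
```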

Finally, there is one last philosophical question to consider. If the temperature of our planet is changing, and mankind is responsible, and if it is in our power to set that temperature to one of our choosing, what temperature should we choose? In short, what is the optimum surface temperature of Planet Earth, and what should be the optimum level of CO2 to ensure that our planet is the greenest, and most beneficial for life and diversity, that it can possibly be? These are the questions that no environmentalists will answer, partly because before they got hysterical about global warming, they got equally hysterical about global cooling. And at the centre of both was one man: Stephen Schneider.