Government Weather and Climate Forecasts Are Failures.

Forecasts Wrong, Science Wrong

In general, we look for a new law by the following process: First we guess it; then we compute the consequences of the guess to see what would be implied if this law that we guessed is right; then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment, it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is, it does not make any difference how smart you are, who made the guess, or what his name is—if it disagrees with experiment, it is wrong.

Richard Feynman’s comment describes the scientific method, but it also applies to weather and climate forecasting. Short, medium and long-term weather and climate forecasts are wrong and fall below any useful level of accuracy. Most forecasting agencies swing between setting their own measures of accuracy and openly detailing the inadequacy of their work. No other product of society is as consistently wrong as government weather forecasting, yet it continues to operate. Apparently people just lump it in with all government waste and failure. Their real view is reflected in the fact that few activities receive more ridicule and disdain than weather forecasts.

History of Forecasts

Around 300 BC Theophrastus, a student of Aristotle’s, wrote a book setting out the first rules for weather forecasting. In the Book of Signs, he recorded over 200 empirical indicators such as “A halo around the moon portends rain.” Many skeptics, including me, say we haven’t come very far since. Indeed, I would argue we have regressed.

Various attempts to forecast weather and climate have been made over the centuries. Benjamin Franklin’s Poor Richard’s Almanack began a service of long-term forecasts in 1757, aimed especially at farmers. It expanded on Theophrastus’ idea of weather folklore, which actually consisted of climatic observations reflecting seasonal events and their changes. It was succeeded in 1792 by The Farmer’s Almanac, now known as The Old Farmer’s Almanac and still used by many people, especially farmers. Its founder, Robert B. Thomas, combined solar activity, weather patterns, and astronomical cycles to create his forecasts. We can translate those to mean sunspot activity, historical weather data and variations in magnetism, which give a better chance of accuracy than the limited variables in most forecasts, especially those of the Intergovernmental Panel on Climate Change (IPCC). The almanacs have a better record of accuracy than official long-term forecasts. Consider the UKMO’s seasonal inaccuracies over the last many years, most recently its prediction of a dry winter in 2013, which turned out to be one of the wettest on record.

In 1837, Patrick Murphy, an English gentleman of science, published a paper titled The Weather Almanac (on Scientific Principles, showing the State of the Weather for Every Day of the Year 1838). It included one approximately accurate forecast, for January 20, 1838:

“Fair, and probably the lowest degree of winter temperature.”

The actual temperature was a remarkable -20°C, the coldest anyone could remember. Heavy ice formed on the Thames, thick enough to allow a sheep to be roasted over a roaring fire at Hammersmith. The temperature seems remarkable today, but it was consistent with an earth recovering from the nadir of the Little Ice Age (LIA) in the 1680s, a recovery set back by the cooling associated with the Dalton Minimum.

The winter of 1838 became known as “Murphy’s winter”; however, the rest of the year’s forecasts were mostly wrong. His poor results prompted a poem printed in The Times:

When Murphy says ‘frost’, then it will snow,
The wind’s fast asleep when he tells us ’twill blow.
For his rain, we get sunshine; for high, we have low.
Yet he swears he’s infallible – weather or no!

This appears just as applicable to the UK Met Office (UKMO) today.

A Dr. Merriweather from Whitby, Yorkshire, developed a technique for forecasting weather by watching the leeches he used for bleeding in his medical practice. He noticed that the position of the leeches in their jar appeared to predict the weather. In calm conditions they lay placid on the bottom, but if they began to rise up the side, a weather change was half a day away. When rain was due the leeches climbed above the water line, and if they stayed above the line and curled into a tight ball, a storm was coming.

Merriweather wrote a paper titled An Essay Explanatory of the Tempest Prognosticator to accompany a special jar he designed with a leech and a bell that rang when the leech left the water. He sold it at the 1851 Great Exhibition (World’s Fair). His failed prognostications are comparable to today’s claims of increased severe weather.

Modern Forecast Failures

Over 200 years ago Lavoisier (1743-1794) said,

It is almost possible to predict one or two days in advance, within a rather broad range of probability, what the weather is going to be.

I understand that because of the persistence of weather systems and Markov probability, the chance of tomorrow being the same as today is 63 percent. Currently the UK Met Office claims 93.8% accuracy for temperatures on the first day of a forecast, but only 84.3% for minimum temperatures on the first night. The problem is that both figures allow a ±2°C error range, so the gain over simple probability is minimal. It appears little improved on Lavoisier’s “broad range of probability”. Most forecasters achieve better results because they practice what I call “gradual approximation”: they make a five-day forecast and then change it every six hours up to the actual time period. I am not aware of any research that compares the accuracy of the first five-day forecast with the reality. How much change was made to even come close to reality?
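As a rough illustration of that persistence baseline, the sketch below (in Python, with made-up daily categories, since no actual station record is given here) simply counts how often tomorrow’s conditions match today’s. Any forecast system has to beat this trivial “tomorrow equals today” rule before it can claim real skill.

# Minimal sketch: scoring a "persistence" forecast (tomorrow = today)
# against a record of observed daily weather categories.
# The sample data below is illustrative only, not a real station record.

def persistence_hit_rate(observations):
    """Fraction of days on which tomorrow's category equals today's."""
    hits = sum(1 for today, tomorrow in zip(observations, observations[1:])
               if today == tomorrow)
    return hits / (len(observations) - 1)

# Hypothetical run of daily conditions.
obs = ["rain", "rain", "fair", "fair", "fair", "rain", "snow",
       "snow", "fair", "fair", "rain", "rain", "rain", "fair"]

print(f"Persistence accuracy: {persistence_hit_rate(obs):.0%}")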

Weather forecasts acquired a practical use during the First World War, when airplanes and their pilots were at the mercy of the weather. That is why most weather stations are at airports, where they became compromised by heat from runways, jet engines, and in many cases the expanding urban heat island (UHI). Bjerknes borrowed battle terminology to create many of the terms used in forecasting, such as cold and warm fronts and advancing or retreating frontal systems. Now, as aviation advances, the need for forecasts diminishes. Aviation’s weather needs are now simply station data on current conditions, and only for small aircraft. Larger or more sophisticated aircraft can land in virtually zero visibility, so only a closed runway is a problem. The problem with most weather station data is that it is not “real time”, so pilots rely on what the control tower is telling them.

Farmers need accurate forecasts either a week or months ahead so they can plan operations. Neither is available with any usable accuracy. Other users, such as forestry agencies and power producers, create their own forecasts, many even collecting their own data. The problem is that there are insufficient weather stations to create weather maps accurate enough to produce useful results. The longer the forecast, the larger the number of stations required; looking out five days means accounting for weather developing a long way upwind. In most cases this means the gaps in the information are simply too great.

Public images of weather forecasting come from television: the two- or three-minute segment at the end of the news that is forgotten shortly after it is presented. Most stations try to hype the information with visuals and hyperbole. Some bill it as “Extreme Weather” or present it from the “Storm Center”. They distort reality by presenting wind chill or heat index values as if they were actual temperatures. Everything is exaggerated, and that causes people to pay less attention. Forecasters lose more credibility because they frequently fail to forecast extreme events.

I began flying before computer-generated weather maps were introduced. Weather forecasts were not very good, but they were certainly better than those made today. In those days the weather person took individual station data and plotted their own isobar-based maps. While preparing the map they developed a sense of the weather patterns, which they then combined with experience of local conditions. Still, there was little faith in the forecasts, especially among people who needed a more accurate product. Hubert Lamb, as a forecaster for the UKMO, took seriously the complaints about poor forecasts from aircrew flying over Germany during WWII. He realized that better forecasts required better knowledge of past weather, and that was a driving force behind establishing the Climatic Research Unit (CRU).

When Wigley took over from Lamb, he took the CRU in a different direction, effectively abandoning the reconstruction of past climate. The work some were doing exposed the limitations of the data and of the computer models’ ability to create accurate weather maps. Benjamin Santer, a CRU graduate, completed a thesis titled Regional Validation of General Circulation Models. It used three top computer models to recreate North Atlantic conditions. Apparently the area was chosen because it was the largest area with the best data. Despite this, the computer models created massive pressure systems that do not exist.

Santer used regional models in 1987, but things had not improved 21 years later. In 2008 Tim Palmer, a leading climate modeller at the European Centre for Medium-Range Weather Forecasts in Reading, England, said in New Scientist:

I don’t want to undermine the IPCC, but the forecasts, especially for regional climate change, are immensely uncertain.

How uncertain is reflected in the skill-testing measures carried out by the National Oceanic and Atmospheric Administration (NOAA) in the US and by Environment Canada. Figures 1 and 2 are NOAA measures of 3-month forecasts.

The following explains how the test works.

The term “skill” in reference to forecasts means a measure of the performance of a forecast relative to some standard. Often, the standard used is the long-term (30-year) average (called the climatology) of the parameter being predicted. Thus, skill scores measure the improvement of the forecast over the standard.

CPC uses the Heidke skill score, which is a measure of how well a forecast did relative to a randomly selected forecast. A score of 0 means that the forecast did no better than what would be expected by chance. A score of 100 depicts a “perfect” forecast and a score of -50 depicts the “worst possible” forecast. The dashed lines in the skill graph indicate the average skill score for all forecasts and for “Non-CL” forecasts. “CL” refers to climatology or a forecast of equal chances of Above, Near Normal, and Below Normal temperature or precipitation. “Non-CL” refers to all forecasts where enhanced above normal or enhanced below normal temperatures or precipitation are predicted. “Percent Coverage” refers to the percent of the forecast region where enhanced above or below temperature or precipitation is predicted.

The results are very poor; barely better than chance in most cases.
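To put numbers like these in context, here is a minimal sketch (in Python, using illustrative counts only, not actual CPC data) of the Heidke calculation described above: the score compares the number of correct category forecasts with the number expected by chance, assuming three equally likely categories as in the CPC description.

# Minimal sketch of the Heidke skill score on CPC's -50..100 scale,
# assuming three equally likely categories (Below, Near, Above Normal),
# so the number of forecasts correct by chance is one third of the total.

def heidke_skill_score(hits, total, n_categories=3):
    """Heidke skill score: 0 = chance, 100 = perfect, -50 = worst possible."""
    expected = total / n_categories          # correct forecasts expected by chance
    return 100.0 * (hits - expected) / (total - expected)

# Illustrative examples only.
print(heidke_skill_score(hits=100, total=300))  # 0.0   (no better than chance)
print(heidke_skill_score(hits=300, total=300))  # 100.0 (perfect)
print(heidke_skill_score(hits=0, total=300))    # -50.0 (worst possible)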

Environment Canada does a similar skill test. Figures 3 and 4 are examples of 4 to 6 month (left) and 12 month (right) temperature forecasts. Precipitation forecasts are even worse.

Canadian results are worse than the US results, with the average a mere 44.6 percent confidence in Figure 3 and 41.5 percent in Figure 4. These examples were randomly selected, and in most cases the results are worse, as you can see for yourself. Again, precipitation forecasts are worse.

A 2005 news release said, “NASA/NOAA Announce Major Weather Forecasting Advancement.” In light of the results identified above, JSCDA Director Dr. John LeMarshall made an odd statement.

“A four-percent increase in forecast accuracy at five or six days normally takes several years to achieve,” he said. “This is a major advancement, and it is only the start of what we may see as much more data from this instrument is incorporated into operational forecast models at NOAA’s Environmental Modeling Center.”

What does it mean that forecast accuracy improved over the 9 years from 2005 to 2014? If we assume “several years” is roughly half that time, then accuracy presumably improved by about 8 percent for 5 to 6 day forecasts. The trouble is that you are starting from virtually zero accuracy at 5 to 6 days, so it will take centuries to reach a useful level. The important question is: at what point do you acknowledge that the science is wrong?

A recent WUWT article reporting on medium term forecasts detailed a study that concluded,

“there is still a long way to go before reliable regional predictions can be made on seasonal to decadal time scales.”

That is understating the problems and the potential. The data are inadequate in all dimensions. The number of variables included is insufficient. Knowledge and modeling of the mechanisms are inadequate. Computer capacity is insufficient. Nobody uses the output because it is so unreliable. It only continues because the government funds it and the bureaucrats who produce it tell politicians it is valuable. It isn’t.

The IPCC never acknowledge their science is wrong. Every single IPCC forecast from the 1990 Report on has been wrong. Instead of re-examining their science, they began what became their standard practice of moving the goalposts. The full politicizing of climate science took hold between the 1990 and 1995 Reports. Instead of acknowledging the science was wrong, they changed from forecasts to projections, and all of those have been incorrect since. Figure 6 shows Clive Best’s plot of the three IPCC projection levels from 1990 against actual surface (blue) and satellite (green) temperatures.

Short (hours and days), medium (weeks and months) and long-term (decades) forecasts are all wrong, being close to or less than chance in almost every case. Despite billions of dollars and concentrated efforts at improvement, there is little or no progress. If such a level of forecast failure persisted in any other endeavor, logic would demand, at the very least, an acknowledgement that something is fundamentally wrong with the science.

Maybe the February 20, 2014 story in the Daily Caller will stir some response, as the government forecasters are insulted in the worst way for their profession. The headline reads, “Report: Farmers’ Almanac More Accurate Than Government Climate Scientists.” I know most people are not surprised; they have known for a long time that government forecasts are mostly wrong, very expensive, and of very little value. As Thomas Sowell so pointedly asked:

Would you bet your paycheck on the weather forecast for tomorrow? If not, then why should this country (US) bet billions on global warming predictions that have even less foundation?

It is worse than that. The government science is wrong, and therefore their forecasts are wrong. From this base they push policies that run opposite to the evolving and likely future conditions.

Reprinted with the permission of Dr. Tim Ball.