Climate Change: Predicting the Future


Excerpt from Beyond Smoke and Mirrors: Climate Change and Energy in the 21st Century, by Nobel Laureate Burton Richter (Cambridge University Press, 2011). Reprinted with permission from the author.

From Chapter 5: Predicting the future

Who does it?

There are many sayings about the difficulty of predicting the future. My favorite is, “Predicting the future is hard to do because it hasn’t happened yet.” It is especially hard when you are trying to predict what will happen 100 years from now and the science behind the prediction is really only 50 years old. It was the work of Keeling and Revelle in the 1950s mentioned earlier that jump-started the science community’s work on climate change and global warming. It is the Intergovernmental Panel on Climate Change (IPCC) that does the predictions today.

My own involvement in climate change research has been more as an observer than as a participant. My first exposure to the issue was in 1978, when a group I belong to, called the JASONs, took it up. The JASONs are a collection of mostly academics who meet every summer for about six weeks to work on problems of importance to the government. In 1978 a subgroup of the JASONs led by Gordon MacDonald, a distinguished geophysicist, began a study of climate change for the US Department of Energy. The JASONs always have many pots on the stove, and I was working on something else. However, we were all fascinated by the climate issue, and nearly everyone sat in on the sessions and critiqued the report. Its conclusion was that doubling atmospheric CO2 would increase the average global surface temperature by 4.3 °F (2.4 °C), and that the increase at the poles would be much more than the average. The JASON climate model included a more sophisticated treatment of the ocean–atmosphere interaction than had been used before. The model was a simplified one that could be solved without big computers, and its answer is in fairly good agreement with what we get now for the average temperature increase, though it overstated the polar increase. The report was influential in increasing government funding for climate change research.

The IPCC, which shared the Nobel Peace Prize in 2007 for its work, was created in 1988 as a UN-sponsored organization under the United Nations Environment Programme.[1] With the signing of the UN Framework Convention on Climate Change (UNFCCC) in 1992, the IPCC became the technical arm of this 162-nation organization.

The IPCC does not sponsor climate research itself but coordinates and synthesizes the work done by many groups around the world. Its major products are its periodic Assessment Reports which make the best scientific predictions of the effects of increasing greenhouse gas concentration to the year 2100 based on the state of knowledge at the time the report is produced. The first Assessment Report appeared in 1990, the second in 1995, the third in 2001, and the fourth in 2007.

The first IPCC assessment was due only two years after the creation of the organization, and it required a huge amount of work to produce in so short a time. The team had to gather and analyze the data, create a credible scientific peer-review system, and get the report through its parent UN agencies and out to the world. The report on the science of climate change (IPCC Working Group I, or WG I) was broadly accepted in the science community. The report on expected impacts of climate change (WG II) encountered some scientific argument, while the report on responses (WG III) wandered into the policy area and ran into serious trouble with the IPCC's UN sponsors, who thought that policy was their job. The policy part was moved to the UNFCCC organization itself, and the IPCC remains today the main organization responsible for scientific and technical analysis of the issues. It is respected by governments, non-governmental organizations, and the science community. The process of producing these reports is complicated, but the output of the IPCC has come to be trusted by all the signers of the UNFCCC, which means most of the members of the UN.

After 1992 and the signing of the UNFCCC, more formality was brought to the assessment process. The assessments are now prepared by a large group of experts who are nominated by signatory countries. The only way, for example, that a US scientist can become a member of an assessment team is by nomination by the US government or some other country (any signer country can nominate anyone). Although the administration of US President G. W. Bush was not noted for believing in the urgency of action on climate change, it did nominate the best climate scientists in the United States to the relevant panels. According to the panel members that I know, there seems to have been little politics in the selection of the scientific members of these panels by any country.

Each of the three Working Groups produces what is called a Summary for Policymakers. This summary is non-technical and is gone over line by line with representatives of the signers of the UNFCCC at a meeting that has political as well as technical overtones. Since the UN operates by consensus on climate change, there has to be agreement on the exact wording of the report, and that language evolves as the scientific evidence evolves. For example, the summary of the Third Assessment Report did not say that global warming was caused with high probability by human activities; the Fourth Assessment Report does say that. There was no consensus at the time of the Third Report that human activities are the main cause of warming (the main holdouts were China and the United States), but there was in the Fourth, after a long and sometimes heated argument at the review meeting.

After agreement is reached on the wording in the summary, the scientific groups have to go back and make the words in their technical reports consistent with what is in the summary. They may have to change their descriptive words but they do not have to change their technical findings or any of the numbers in their analyses. Some would say that this procedure is overtly political. They would be correct, but since only the countries that are the major emitters of greenhouse gases can do anything about global warming, a consensus on the issues is needed as a preface to global action. Without that consensus the two largest emitters of greenhouse gases, China and the United States, have refused to join in official control mechanisms. With it, they may join in the next round of negotiations on international action.

How is it done?

All sorts of models are made of what will happen in the future, based on previous experience and knowledge of the processes that will affect whatever is being modeled. People make (or should make) models of income and savings against payments when buying a car. The Federal Reserve models economic growth and inflation when it decides on interest rates. Both are relatively short-term models that look only months or a few years into the future, and neither is based on any actual physical laws. They also do not treat the potential for instability in any reasonable way, as shown by the global financial chaos that began in 2008.

The climate models are attempting to do something much more ambitious, and do it better. They are trying to predict what will happen to our climate 100 or more years in the future based on models of how the climate system responds to changes. The models are grounded in the physical and biological sciences, are mathematically complex, and come from an evolving understanding of the science. In Chapter 2 an introduction to the greenhouse effect was given that showed how the temperature of our planet is set by a balance between the energy coming from the Sun and the heat energy radiated back out into space. Calculating the temperature at which that balance is struck in the real world is an enormously complicated job that has to take into account a host of interactions between very many elements of the systems that determine our climate.
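
As a rough illustration of that balance, and not the calculation the models actually perform, the textbook back-of-the-envelope version fits in a few lines of Python. The constants are standard; the greenhouse effect is deliberately left out, which is exactly what the result reveals:

```python
# A back-of-the-envelope version (not the book's calculation) of the
# balance described above: sunlight absorbed must equal heat radiated,
# per the Stefan-Boltzmann law. The constants are standard; there is no
# greenhouse effect here, which is why the answer comes out too cold.

SOLAR = 1361.0    # W/m^2, the solar constant at Earth's distance
ALBEDO = 0.30     # fraction of incoming sunlight reflected back to space
SIGMA = 5.67e-8   # W/m^2/K^4, Stefan-Boltzmann constant

absorbed = SOLAR * (1 - ALBEDO) / 4   # averaged over the whole sphere
T = (absorbed / SIGMA) ** 0.25        # temperature at which radiation balances it
print(round(T, 1), "K")               # ~255 K; the ~33 K shortfall is the greenhouse effect
```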

The amount of incoming radiation from the Sun is known very well. Not all of that radiation reaches the ground. Some is reflected back into space by clouds, and some is absorbed by various chemicals in the atmosphere and by the clouds themselves. Some is reflected from the surface, more strongly from snow and ice, less strongly from deserts, and least strongly from the oceans and land areas covered by vegetation. The oceans, the land masses of the continents, and the atmosphere interact with each other in complex ways. Changes in one thing change other things as well. These effects are called feedback loops and some were described earlier. I have mentioned how increasing greenhouse gases increases the temperature; increasing the temperature increases water vapor in the atmosphere; water vapor is also a greenhouse gas so the temperature increases further; increasing water vapor also increases clouds; more clouds reflect more incoming radiation into space, decreasing the temperature. We usually think that increased temperature should lead to less snowfall, but one of the oddest feedback effects increases snowfall in Antarctica when the global average temperature increases slightly, because an increase in temperature increases water vapor in the atmosphere and that leads to more snow. Some of these feedback loops amplify climate change effects while others reduce them. Getting all of this right is the job of the modelers and their computers.
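
A toy formula, not from the book, shows how such loops combine. If a warming of dT0 would occur with no feedbacks, and the net feedback fraction is f, the equilibrium response is roughly dT0/(1 - f); positive f amplifies, negative f damps. A minimal sketch with invented numbers:

```python
# A toy of how feedbacks combine (illustrative, not from the book):
# a no-feedback warming dT0 acted on by a net feedback fraction f
# (f > 0 amplifies, f < 0 damps) settles at roughly dT0 / (1 - f).

def equilibrium_warming(dT0, f):
    """Equilibrium warming (K) from no-feedback warming dT0 and
    combined feedback fraction f; requires f < 1 (no runaway)."""
    assert f < 1, "f >= 1 would mean a runaway greenhouse"
    return dT0 / (1.0 - f)

# Purely illustrative numbers, ~1.2 K no-feedback warming for doubled CO2:
print(equilibrium_warming(1.2, 0.5))   # amplifying loops (water vapor): 2.4 K
print(equilibrium_warming(1.2, -0.2))  # a damping loop (more clouds): 1.0 K
```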

Many of the feedbacks are positive in the sense that given a temperature increase they increase the temperature more, but we know that there has been no runaway greenhouse effect on Earth where the feedbacks turned the Earth into something like Venus. The historical record over the past few billion years shows that although the climate has been both much hotter and much colder than today, it has stayed within limits that still support life. If the Earth were to become either very hot or very cold compared with today it might not support our standard of living, but life would go on.

There are many climate models that have been created by groups of experts around the world. The most sophisticated and complex are called atmosphere–ocean general circulation models (AOGCMs). These models divide the surface of our planet into small blocks, the atmosphere into layers, and the oceans into layers too. Here the first big problem arises. The surface of the land is not smooth. The wind in the atmosphere is not uniform. The ocean has currents that are narrow compared with the size of the ocean. In size, the Gulf Stream that keeps Europe warm is to the Atlantic Ocean as the Mississippi River is to the North American continent. The Alps are small compared with the total land area of Europe. All are small compared with the scale of what they affect, yet models have to look at effects at an appropriate scale. On the land surface, mountain ranges affect wind patterns and therefore the transport of heat. In the oceans, currents such as the warm Gulf Stream off the US east coast and the cold Humboldt Current off the west coast of South America pierce the quiet oceans with plumes of water that also move huge amounts of heat. Land bottlenecks, around Greenland for example, restrict the flow of water and strongly affect the heat flow.

All of these effects make the design of the calculations extremely complicated, and the description above only begins to take into account the wide variety of conditions over the entire surface of the Earth. The most sophisticated calculations start off with the surface divided into squares which might be as small as 25 miles on an edge where the terrain is highly variable and as large as several hundred miles on an edge where the terrain is smooth and fairly uniform. The atmosphere is divided into layers and there can be as many as 15 or 20 of them. Similarly the oceans are divided into layers, and the number of layers has to take into account the depth of the oceans as well. In the actual calculations there can be hundreds of thousands of these cells. Heat and fluids (water in the oceans and air in the atmosphere) flow into a cell from one side or top or bottom, and flow out another to adjacent cells. The calculations require enormous computers, and even the largest computers available today cannot do the job quickly. These AOGCMs are not run very often because of the huge amounts of computer time needed. There are 23 different AOGCMs that the IPCC takes into account in its climate synthesis.
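
To make the cell-by-cell bookkeeping concrete, here is a deliberately miniature sketch; it is nothing like a real AOGCM in scale or physics, and every name and number in it is invented for illustration:

```python
# A deliberately miniature sketch, nothing like a real AOGCM in scale
# or physics: the planet reduced to a ring of 8 surface cells that mix
# heat with their neighbors each step. Real models track heat and fluid
# flowing between hundreds of thousands of 3-D cells; the cell-to-cell
# bookkeeping is the same idea.

N_CELLS = 8
temps = [288.0] * N_CELLS   # kelvin; a uniform, invented starting state
temps[0] = 298.0            # one warm anomaly for the ring to transport

EXCHANGE = 0.1  # fraction of each temperature difference mixed per step

for step in range(50):
    new = temps[:]
    for i in range(N_CELLS):
        left, right = temps[i - 1], temps[(i + 1) % N_CELLS]
        # heat flows in from (or out to) both neighboring cells
        new[i] += EXCHANGE * ((left - temps[i]) + (right - temps[i]))
    temps = new

print([round(t, 2) for t in temps])  # the anomaly has spread around the ring
```

Real models replace this toy mixing rule with the full equations of fluid flow and radiation for every cell, which is where the enormous amounts of computer time go.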

The problem is so large that it cannot be solved from first principles on any existing computer. That would require starting off with oceans uniformly full of water and an atmosphere full of the proper mix of gases both at the average temperature of the Earth (65 °F or 18 °C), and running the program for long enough to allow the currents in the ocean and the air to develop, the temperature to stabilize, the ice caps to form, etc. Perhaps when computers become at least 1000 times more powerful than those of today it can be done. Today, what is done is called a “perturbation analysis.” The starting point is the world as it is, and the calculation sees how it changes (is perturbed) when greenhouse gases are added to the atmosphere. The oceans are there, their currents are flowing, the atmospheric winds blow, the ice is in place, and the computer grinds away step by step to predict the future as greenhouse gases accumulate.

There are also natural phenomena that occur randomly from time to time and have to be put in explicitly. For example, major volcanic eruptions, like that of Mount Pinatubo in the Philippines in 1991, throw large amounts of material into the upper atmosphere that affect the albedo (the reflection of incoming solar radiation back out into space) directly and indirectly by affecting cloud formation. This gives a cooling effect that lasts a few years until the volcanic material falls out of the atmosphere. Things like this cannot be predicted in advance. After an event like Mount Pinatubo, the material ejected into the sky has to be added explicitly and the models run again. Fortunately for those doing the predictions, effects from these kinds of events do not last for a long time and are not really important for the long term, although they do contribute to the seemingly random fluctuations in the planetary temperature.
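
In a model run, an eruption enters as an explicit, short-lived negative forcing. A sketch of its general shape, with roughly Pinatubo-sized but purely illustrative numbers:

```python
import math

def volcanic_forcing(t, t_eruption, peak=-3.0, decay_years=1.2):
    """Negative radiative forcing (W/m^2) from an eruption at time
    t_eruption, decaying exponentially; peak size and decay time are
    illustrative, roughly Pinatubo-scale."""
    if t < t_eruption:
        return 0.0
    return peak * math.exp(-(t - t_eruption) / decay_years)

print(volcanic_forcing(1991.5, 1991.5))  # strong cooling push at the eruption
print(volcanic_forcing(1996.0, 1991.5))  # almost entirely gone a few years on
```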

A more important issue is predicting how human activities will change the amount of greenhouse gases put into the atmosphere. Scenarios are created that predict how energy use grows over time and what the mix of fuels will be. From this the amount of greenhouse gas going into the atmosphere for each scenario is derived. The IPCC uses six main scenarios, each with a few variations. These scenarios go through the same sort of approval process as the climate change reports to assure the UNFCCC signatories that the scenarios are reasonable. The scenarios do not assume the existence of any mechanisms for greenhouse gas reduction; the IPCC is not allowed to make such assumptions. The scenarios are simply alternative economic growth models that make different assumptions on economic growth, energy efficiency, population, fuel mix, etc. For example, all the scenarios assume a world economic growth rate of between 2% and 3% per year. This does not seem to be much of a difference in economic growth in the short term, but over a period of 100 years that 1% extra economic growth makes world economic output, energy use, and greenhouse gas emissions more than twice what they would be with a growth rate of only 2%.
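
The compounding behind that last claim is easy to check; the 2% and 3% rates come from the scenarios, and everything else follows:

```python
# The compounding behind the scenario spread: 100 years of growth at
# 2% versus 3% per year (the rates come from the text).

low = 1.02 ** 100    # ~7.2 times today's output after a century at 2%
high = 1.03 ** 100   # ~19.2 times today's output after a century at 3%
print(round(low, 1), round(high, 1), round(high / low, 2))  # ratio ~2.65
```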

The calculations move ahead one time step at a time. The model adds a year’s worth of greenhouse gases to the atmosphere and calculates what happens to the atmosphere, the oceans, the clouds, the snow and ice, etc. This answer is the input for the next time step and so forth on into the future.
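
A zero-dimensional caricature of this stepping scheme, which is not the book's model or any real AOGCM, might look like the following. The CO2 forcing formula is a standard approximation; every other parameter is labeled illustrative:

```python
# A zero-dimensional caricature of the stepping scheme (not the book's
# model or any IPCC model). The CO2 forcing formula 5.35 * ln(C / C0)
# W/m^2 is a standard approximation; every other number is illustrative.

import math

SENSITIVITY = 0.8   # K of eventual warming per W/m^2, illustrative
TAU = 30.0          # years, a crude lag from ocean heat uptake

c = 390.0           # ppm CO2, roughly the level when the book appeared
C0 = 280.0          # ppm, pre-industrial reference concentration
dT = 0.8            # K of warming already realized, illustrative

for year in range(2010, 2101):
    c += 2.0                            # add a year's emissions (ppm/yr)
    forcing = 5.35 * math.log(c / C0)   # radiative forcing, W/m^2
    target = SENSITIVITY * forcing      # equilibrium warming for this forcing
    dT += (target - dT) / TAU           # relax toward it; this step's output
                                        # is the next step's input

print(round(c), "ppm and", round(dT, 2), "K of warming in 2100")
```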

[1] All the IPCC reports are available online (www.ipcc.ch) and the reports called “Summary for Policymakers” are written with clarity for people without a technical background.

Excerpted from Beyond Smoke and Mirrors by Nobel Laureate Burton Richter. Copyright © Burton Richter, 2011. All rights reserved.

Burton Richter is Paul Pigott Professor in the Physical Sciences, Emeritus, and Director Emeritus of the Stanford Linear Accelerator Center at Stanford University. He won the Nobel Prize in Physics for his pioneering work in the discovery of a heavy elementary particle. He received the Lawrence Medal from the US Department of Energy and the Abelson Prize from the American Association for the Advancement of Science. He has served on many US and international review committees on climate change and energy issues.

