Numerical climate models (or climate system models) are mathematical models that can simulate the interactions of important drivers of climate. These drivers are the atmosphere, oceans, land surface and ice. Scientists use climate models to study the dynamics of the climate system and to make projections of future climate and of climate change. Climate models can also be qualitative (i.e. not numerical), consisting of largely descriptive narratives of possible futures.[1]
Climate models take account of incoming energy from the Sun as well as outgoing energy from Earth. An imbalance results in a change in temperature. The incoming energy from the Sun is in the form of short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared. The outgoing energy is in the form of long wave (far) infrared electromagnetic energy. These processes are part of the greenhouse effect.
Climate models vary in complexity. For example, a simple radiant heat transfer model treats the Earth as a single point and averages outgoing energy. This can be expanded vertically (radiative-convective models) and horizontally. More complex models are the coupled atmosphere–ocean–sea ice global climate models. These types of models solve the full equations for mass transfer, energy transfer and radiant exchange. In addition, other types of models can be interlinked. For example, Earth system models also include land use and land-use change. This allows researchers to predict interactions between climate and ecosystems.
Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. Scientists divide the planet into a 3-dimensional grid and apply the basic equations to each grid cell. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each cell and evaluate interactions with neighboring points. These are coupled with oceanic models to simulate climate variability and change that occurs on different timescales due to shifting ocean currents and the much larger heat storage capacity of the global ocean. External drivers of change may also be applied. Including an ice-sheet model better accounts for long-term effects such as sea level rise.
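As a cartoon of this gridded approach, the sketch below updates a temperature field on a coarse latitude-longitude grid by letting each cell exchange heat with its neighbors. It is illustrative only: the resolution, initial field, and exchange coefficient are arbitrary, and real models solve far richer equations than simple diffusion.

```python
# Toy illustration of a gridded model: temperature on a coarse
# latitude-longitude grid, updated by exchanging heat with neighboring
# cells (simple diffusion). Not a real model's numerical scheme.
import numpy as np

n_lat, n_lon = 18, 36                   # 10-degree cells (arbitrary)
lats = np.linspace(-np.pi / 2, np.pi / 2, n_lat)
T = 288.0 + 15.0 * np.cos(lats)[:, None] * np.ones((1, n_lon))  # warm equator

k = 0.1                                 # nondimensional exchange coefficient
for step in range(100):
    # Longitude neighbors wrap around the globe; latitude edges are clamped.
    east = np.roll(T, -1, axis=1)
    west = np.roll(T, 1, axis=1)
    north = np.vstack([T[:1], T[:-1]])
    south = np.vstack([T[1:], T[-1:]])
    T = T + k * (east + west + north + south - 4 * T)

print(f"pole-to-equator contrast after mixing: {T.max() - T.min():.1f} K")
```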
There are three major types of institution where climate models are developed, implemented and used: national meteorological services, most of which have a climatology section; universities, in departments such as atmospheric sciences, meteorology, climatology, and geography; and national and international research laboratories, such as the National Center for Atmospheric Research (NCAR) and the Geophysical Fluid Dynamics Laboratory (GFDL).
Big climate models are essential but they are not perfect. Attention still needs to be given to the real world (what is happening and why). Global models are needed to assimilate all of the observations, especially from satellites, into comprehensive analyses of what is happening, which can then be used to make predictions and projections. Simple models also have a role to play, but their use is widely abused when it fails to recognize their simplifications, such as the omission of a water cycle.[2]
A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.
GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat)[4] combine the two models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.[5] AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Such integrated multi-system models are sometimes referred to as either "earth system models" or "global climate models."
Versions designed for climate applications on decade-to-century time scales were originally created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey.[3] These models are based on the integration of a variety of fluid-dynamical, chemical and sometimes biological equations. Simulation of the climate system in full 3-D space and time was impractical prior to the establishment of large computational facilities starting in the 1960s. To begin to understand which factors may have changed Earth's paleoclimate states, the constituent and dimensional complexities of the system needed to be reduced. A simple quantitative model that balanced incoming and outgoing energy was first developed for the atmosphere in the late 19th century.[6] Other energy-balance models (EBMs) similarly seek an economical description of surface temperatures by applying the conservation of energy constraint to individual columns of the Earth-atmosphere system.[7]
Essential features of EBMs include their relative conceptual simplicity and their ability to sometimes produce analytical solutions.[8]: 19 Some models account for effects of ocean, land, or ice features on the surface budget. Others include interactions with parts of the water cycle or carbon cycle. A variety of these and other reduced system models can be useful for specialized tasks that supplement GCMs, particularly to bridge gaps between simulation and understanding.[9][10]
Zero-dimensional models consider Earth as a point in space, analogous to the pale blue dot viewed by Voyager 1 or an astronomer's view of very distant objects. This dimensionless view, while highly limited, is still useful in that the laws of physics are applicable in a bulk fashion to unknown objects, or in an appropriate lumped manner if some major properties of the object are known. For example, astronomers know that most planets in our own solar system feature some kind of solid/liquid surface surrounded by a gaseous atmosphere.
A very simple model of the radiative equilibrium of the Earth is

(1 − a) S πr² = 4πr² ε σ T⁴

where
- the left-hand side represents the total incoming energy from the Sun, and
- the right-hand side represents the total outgoing energy from the Earth, calculated from the Stefan–Boltzmann law assuming a model-fictive temperature T, sometimes called the "equilibrium temperature of the Earth", that is to be found.

The constant parameters include
- S, the solar constant (the incoming solar radiation per unit area), about 1367 W·m⁻²
- r, Earth's radius, approximately 6.371×10⁶ m
- π, the mathematical constant (3.14159...)
- σ, the Stefan–Boltzmann constant, approximately 5.67×10⁻⁸ J·K⁻⁴·m⁻²·s⁻¹

The constant πr² can be factored out, giving a zero-dimensional equation for the equilibrium

(1 − a) S = 4 ε σ T⁴

where
- the left-hand side represents the incoming energy flux per unit area, and
- the right-hand side represents the outgoing energy flux per unit area.

The remaining variable parameters, which are specific to the planet, include
- a, Earth's average albedo, measured to be about 0.3
- ε, the effective emissivity of Earth, about 0.64
- T, the effective radiative temperature of the Earth, about 285 K
This very simple model is quite instructive. For example, it shows the temperature sensitivity to changes in the solar constant, Earth albedo, or effective Earth emissivity. The effective emissivity also gauges the strength of the atmospheric greenhouse effect, since it is the ratio of the thermal emissions escaping to space versus those emanating from the surface.[14]
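For instance, the balance can be solved for T in a few lines. The sketch below uses the parameter values quoted above; the small response to a 1% change in the solar constant follows from T varying as the fourth root of S.

```python
# Zero-dimensional energy balance: (1 - a) * S = 4 * epsilon * sigma * T**4,
# solved for the equilibrium temperature T. Values as quoted in the text.
sigma = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1367.0          # solar constant, W m^-2
a = 0.3             # planetary albedo
epsilon = 0.64      # effective emissivity (see discussion below)

def equilibrium_temperature(S, a, epsilon):
    return ((1 - a) * S / (4 * epsilon * sigma)) ** 0.25

T = equilibrium_temperature(S, a, epsilon)
print(f"T = {T:.1f} K")                                # about 285 K

# Sensitivity: a 1% increase in the solar constant warms T by roughly T/400
dT = equilibrium_temperature(1.01 * S, a, epsilon) - T
print(f"dT for +1% solar constant = {dT:.2f} K")       # about 0.7 K
```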
The calculated emissivity can be compared to available data. Terrestrial surface emissivities are all in the range of 0.96 to 0.99[15][16] (except for some small desert areas which may be as low as 0.7). Clouds, however, which cover about half of the planet's surface, have an average emissivity of about 0.5[17] (which must be reduced by the fourth power of the ratio of cloud absolute temperature to average surface absolute temperature) and an average cloud temperature of about 258 K (−15 °C; 5 °F).[18] Taking all this properly into account results in an effective Earth emissivity of about 0.64 (Earth average temperature 285 K (12 °C; 53 °F)).[citation needed]
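That accounting can be cross-checked in a few lines. In the sketch below, the 50/50 clear/cloudy split and the 0.95 surface emissivity are illustrative choices consistent with the figures above, not measured constants.

```python
# Rough cross-check of the effective emissivity quoted above: a 50/50 mix of
# clear sky (surface emission) and cloud, with cloud emission reduced by the
# fourth power of the cloud-to-surface temperature ratio.
f_cloud = 0.5
eps_surface = 0.95                    # illustrative, near the cited range
eps_cloud = 0.5
T_cloud, T_surface = 258.0, 285.0     # K, as in the text

eps_eff = ((1 - f_cloud) * eps_surface
           + f_cloud * eps_cloud * (T_cloud / T_surface) ** 4)
print(f"effective emissivity ~ {eps_eff:.2f}")   # ~0.64
```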
Dimensionless models have also been constructed with functionally separated atmospheric layers from the surface. The simplest of these is the zero-dimensional, one-layer model,[19] which may be readily extended to an arbitrary number of atmospheric layers. The surface and atmospheric layer(s) are each characterized by a corresponding temperature and emissivity value, but no thickness. Applying radiative equilibrium (i.e. conservation of energy) at the interfaces between layers produces a set of coupled equations which are solvable.[20]
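As a sketch of how such a system is solved, the example below treats a single atmospheric layer over a blackbody surface; the layer emissivity of 0.77 is an illustrative tuning, not a measured value. Because the two balance equations are linear in σT⁴, they can be solved directly.

```python
# One-layer model: energy balance at the top of the atmosphere and at the
# surface, written as two linear equations in x = [sigma*T_s^4, sigma*T_a^4].
import numpy as np

sigma, S, a = 5.67e-8, 1367.0, 0.3
F = (1 - a) * S / 4               # absorbed solar flux, W m^-2
eps_a = 0.77                      # atmospheric layer emissivity (illustrative)

A = np.array([[1 - eps_a, eps_a],     # TOA: (1-eps_a)*x_s + eps_a*x_a = F
              [1.0,      -eps_a]])    # surface: x_s - eps_a*x_a = F
x = np.linalg.solve(A, np.array([F, F]))
T_s, T_a = (x / sigma) ** 0.25
print(f"surface {T_s:.0f} K, layer {T_a:.0f} K")   # ~288 K and ~242 K
# For a fully absorbing layer (eps_a = 1) the surface reaches 2**0.25 times
# the bare equilibrium temperature, about 303 K.
```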
Layered models produce temperatures that better estimate those observed for Earth's surface and atmospheric levels.[21] They likewise further illustrate the radiative heat transfer processes which underlie the greenhouse effect. Quantification of this phenomenon using a version of the one-layer model was first published by Svante Arrhenius in 1896.[6]
Water vapor is a main determinant of the emissivity of Earth's atmosphere. It both influences the flows of radiation and is influenced by convective flows of heat in a manner that is consistent with its equilibrium concentration and temperature as a function of elevation (i.e. relative humidity distribution). This has been shown by refining the zero-dimensional model in the vertical to a one-dimensional radiative-convective model which considers two processes of energy transport:[22]
- upwelling and downwelling radiative transfer through atmospheric layers that both absorb and emit infrared radiation, and
- upward transport of heat by convection, which is especially important in the lower troposphere.
Radiative-convective models have advantages over simpler models and also lay a foundation for more complex models.[23] They can estimate both surface temperature and the temperature variation with elevation in a more realistic manner. They also simulate the observed decline in upper atmospheric temperature and rise in surface temperature when trace amounts of other non-condensible greenhouse gases such as carbon dioxide are included.[22]
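The convective half of such a model can be illustrated by the adjustment step alone: wherever the computed temperature profile is steeper than a critical lapse rate, neighboring layers are mixed back to that lapse rate. The starting profile, the 1 km layer spacing, and the equal-mass mixing below are purely illustrative; a real model alternates this step with a radiative transfer calculation.

```python
# Sketch of the convective-adjustment step used in radiative-convective
# models: pairs of layers steeper than the critical lapse rate (6.5 K/km)
# are mixed back to that lapse rate, conserving their mean temperature.
import numpy as np

z = np.arange(0.0, 16.0, 1.0)        # height, km (1 km layers)
T = 288.0 - 10.0 * z                 # unstable 10 K/km profile (toy)
T[8:] = 288.0 - 10.0 * 8.0           # isothermal upper levels (toy)
gamma_crit = 6.5                     # critical lapse rate, K/km

for _ in range(200):                 # sweep repeatedly until stable
    for i in range(len(T) - 1):
        if T[i] - T[i + 1] > gamma_crit:       # pair steeper than critical
            mean = 0.5 * (T[i] + T[i + 1])     # mix, conserving the mean
            T[i] = mean + 0.5 * gamma_crit     # (equal layer masses assumed)
            T[i + 1] = mean - 0.5 * gamma_crit

print(f"surface {T[0]:.1f} K, lowest-layer lapse {T[0] - T[1]:.1f} K/km")
```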
Other parameters are sometimes included to simulate localized effects in other dimensions and to address the factors that move energy about Earth. For example, the effect of ice-albedo feedback on global climate sensitivity has been investigated using a one-dimensional radiative-convective climate model.[24][25]
The zero-dimensional model may be expanded to consider the energy transported horizontally in the atmosphere. This kind of model may well be zonally averaged. This model has the advantage of allowing a rational dependence of local albedo and emissivity on temperature – the poles can be allowed to be icy and the equator warm – but the lack of true dynamics means that horizontal transports have to be specified.[26]
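A minimal sketch of such a zonally averaged model is shown below, in the spirit of the Budyko-Sellers work discussed next. The linearized outgoing radiation, the relaxation-toward-the-mean transport, and the ice-albedo threshold use common textbook values; this is an illustration, not a tuned model.

```python
# Zonally averaged energy balance: each latitude band balances absorbed
# sunlight, linearized outgoing longwave radiation (A + B*T, T in deg C),
# and a specified transport that relaxes each band toward the global mean.
import numpy as np

lat = np.deg2rad(np.linspace(-85, 85, 18))      # band centers
weight = np.cos(lat) / np.cos(lat).sum()        # area weights
S0 = 1367.0
P2 = 0.5 * (3 * np.sin(lat) ** 2 - 1)           # second Legendre polynomial
Q = (S0 / 4) * (1 - 0.477 * P2)                 # annual-mean insolation shape

A_olr, B_olr = 203.3, 2.09                      # OLR coefficients, W m^-2 (per deg C)
C_tr = 1.5                                      # transport coefficient (illustrative)
T = np.full_like(lat, 10.0)                     # initial guess, deg C

for _ in range(500):
    albedo = np.where(T > -10.0, 0.30, 0.62)    # icy below -10 deg C
    T_mean = (weight * T).sum()
    # each band balances: Q*(1-albedo) = A + B*T + C*(T - T_mean)
    T = (Q * (1 - albedo) - A_olr + C_tr * T_mean) / (B_olr + C_tr)

print(f"equator {T[len(T)//2]:.0f} C, poles {T[0]:.0f} C / {T[-1]:.0f} C")
```

With these numbers the poles fall below the −10 °C threshold and ice over while the tropics stay warm, illustrating how local albedo can be allowed to depend on temperature.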
Early examples include research of Mikhail Budyko and William D. Sellers who worked on the Budyko-Sellers model.[27][28] This work also showed the role of positive feedback in the climate system and has been considered foundational for the energy balance models since its publication in 1969.[7][29]
Depending on the nature of questions asked and the pertinent time scales, there are, on the one extreme, conceptual, more inductive models, and, on the other extreme, general circulation models operating at the highest spatial and temporal resolution currently feasible. Models of intermediate complexity bridge the gap. One example is the Climber-3 model. Its atmosphere is a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and time step of half a day; the ocean is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.[30]
Box models are simplified versions of complex systems, reducing them to boxes (or reservoirs) linked by fluxes. The boxes are assumed to be mixed homogeneously. Within a given box, the concentration of any chemical species is therefore uniform. However, the abundance of a species within a given box may vary as a function of time due to the input to (or loss from) the box or due to the production, consumption or decay of this species within the box.[citation needed]
Simple box models, i.e. box models with a small number of boxes whose properties (e.g. their volume) do not change with time, are often useful to derive analytical formulas describing the dynamics and steady-state abundance of a species. More complex box models are usually solved using numerical techniques.[citation needed]
Box models are used extensively to model environmental systems or ecosystems and in studies of ocean circulation and the carbon cycle.[31] They are instances of a multi-compartment model.
In 1961 Henry Stommel was the first to use a simple 2-box model to study factors that influence ocean circulation.[32]
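For illustration, the sketch below integrates a generic two-box tracer exchange. It is not Stommel's model, whose boxes are coupled through temperature- and salinity-driven flow; the volumes and exchange rate here are loosely ocean-like but arbitrary.

```python
# Generic two-box model: two well-mixed reservoirs exchanging a tracer,
# integrated with a simple explicit (Euler) time step.
V1, V2 = 3.0e16, 1.2e18     # box volumes, m^3 (e.g. surface vs deep ocean; illustrative)
k = 2.0e7                   # exchange flow, m^3/s (~20 Sv; illustrative)
c1, c2 = 1.0, 0.0           # initial tracer concentrations (arbitrary units)
dt = 3.15e7                 # time step of one year, in seconds

for year in range(2000):
    flux = k * (c1 - c2)    # tracer amount carried from box 1 to box 2 per second
    c1 -= flux * dt / V1
    c2 += flux * dt / V2

print(f"after 2000 years: c1 = {c1:.4f}, c2 = {c2:.4f}")
# Both approach the fully mixed value V1*1.0/(V1+V2), about 0.024.
```

Because the boxes are assumed homogeneous, the whole system reduces to one equation per box, which is what makes analytical steady-state solutions possible for simple cases.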
In 1956, Norman Phillips developed a mathematical model that realistically depicted monthly and seasonal patterns in the troposphere. This was the first successful climate model.[33][34] Several groups then began working to create general circulation models.[35] The first general circulation climate model combined oceanic and atmospheric processes and was developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, a component of the U.S. National Oceanic and Atmospheric Administration.[36]
By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature.[37] Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the CO2 concentration was increased.
By the early 1980s, the U.S. National Center for Atmospheric Research had developed the Community Atmosphere Model (CAM), which can be run by itself or as the atmospheric component of the Community Climate System Model. The latest update (version 3.1) of the standalone CAM was issued on 1 February 2006.[38][39][40] In 1986, efforts began to initialize and model soil and vegetation types, resulting in more realistic forecasts.[41] Coupled ocean-atmosphere climate models, such as the Hadley Centre for Climate Prediction and Research's HadCM3 model, are being used as inputs for climate change studies.[35] The Coupled Model Intercomparison Project (CMIP) has been a leading effort to foster improvements in GCMs and climate change understanding since 1995.[43][44]
The IPCC stated in 2010 that it had increased confidence in forecasts coming from climate models:
"There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases."[45]
The World Climate Research Programme (WCRP), hosted by the World Meteorological Organization (WMO), coordinates research activities on climate modelling worldwide.
A 2012 U.S. National Research Council report discussed how the large and diverse U.S. climate modeling enterprise could evolve to become more unified.[46] The report found that efficiencies could be gained by developing a common software infrastructure shared by all U.S. climate researchers and by holding an annual climate modeling forum.[47]
Cloud-resolving climate models are nowadays run on high-performance supercomputers, which have a high power consumption and thus cause CO2 emissions.[48] They require exascale computing (billion billion – i.e., a quintillion – calculations per second). For example, the Frontier exascale supercomputer consumes 29 MW.[49] It can simulate a year’s worth of climate at cloud-resolving scales in a day.[50]
Techniques that could lead to energy savings include, for example: "reducing floating point precision computation; developing machine learning algorithms to avoid unnecessary computations; and creating a new generation of scalable numerical algorithms that would enable higher throughput in terms of simulated years per wall clock day."[48]
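As a toy illustration of the first technique, the zero-dimensional calculation from earlier in this article can be repeated in single precision. For such a smooth formula the cheaper answer is essentially indistinguishable; the hard research problem is knowing where in a full model such reductions remain safe.

```python
# Compare the zero-dimensional equilibrium temperature computed in
# double (float64) and single (float32) precision.
import numpy as np

def T_eq(S, a, eps, sigma):
    # equilibrium temperature from (1 - a) * S = 4 * eps * sigma * T**4
    return ((1 - a) * S / (4 * eps * sigma)) ** 0.25

args = (1367.0, 0.3, 0.64, 5.67e-8)
t64 = T_eq(*(np.float64(x) for x in args))
t32 = np.float32(T_eq(*(np.float32(x) for x in args)))   # single precision
print(f"float64: {t64:.6f} K, |difference|: {abs(t64 - np.float64(t32)):.2e} K")
```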