Global Climate System Models

From The Encyclopedia of Earth

Modern climate models are composed of a system of interacting model components, each of which simulates a different part of the climate system. The individual parts often can be run independently for certain applications. Nearly all models of the Coupled Model Intercomparison Project 3 (CMIP3) class include four primary components: atmosphere, land surface, ocean, and sea ice. The atmospheric and ocean components are known as “general circulation models,” or GCMs, because they explicitly simulate the large-scale global circulation of the atmosphere and ocean. Climate models sometimes are referred to as coupled atmosphere-ocean GCMs. This name can be misleading because coupled GCMs can be employed to simulate aspects of weather and ocean dynamics without being able to maintain a realistic climate projection over centuries of simulated time, as required of a climate model used for studying anthropogenic climate change. What follows in this article is a description of a modern climate model’s major components and how they are coupled and tested for climate simulation.

This article is drawn from Chapters 1 and 2 of CCSP, 2008: Climate Models: An Assessment of Strengths and Limitations, a report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research; D.C. Bader, C. Covey, W.J. Gutowski Jr., I.M. Held, K.E. Kunkel, R.L. Miller, R.T. Tokmakian and M.H. Zhang (Authors). Department of Energy, Office of Biological and Environmental Research, Washington, D.C., USA, 124 pp. A Table of Contents of other articles drawn from the report is included before the references section of this article.

Climate Model Construction

Comprehensive climate models are constructed using expert judgments to satisfy many constraints and requirements. Overarching considerations are the accurate simulation of the most important climate features and the scientific understanding of the processes that control these features.

Typically, the basic requirement is that models should simulate features important to humans, particularly surface variables such as temperature, precipitation, windiness, and storminess. This is a less-straightforward requirement than it seems because a physically based climate model also must simulate all complex interactions in the coupled atmosphere–ocean–land surface–ice system manifested as climate variables of interest. For example, jet streams at altitudes of 10 km above the surface must be simulated accurately if models are to generate midlatitude weather with realistic characteristics. Midlatitude highs and lows shown on surface weather maps are intimately associated with these high-altitude wind patterns. As another example, the basic temperature decrease from the equator to the poles cannot be simulated without taking into account the poleward transport of heat in the oceans, some of this heat being carried by currents 2 or 3 km deep into the ocean interior.

Thus, comprehensive models should produce correctly not just the means of variables of interest but also the extremes and other measures of natural variability. Finally, our models should be capable of simulating changes in statistics caused by relatively small changes in the Earth’s energy budget that result from natural and human actions.

Climate processes operate on time scales ranging from several hours to millennia and on spatial scales ranging from a few centimeters to thousands of kilometers. Principles of scale analysis, fluid dynamical filtering, and numerical analysis are used for intelligent compromises and approximations to make possible the formulation of mathematical representations of processes and their interactions. These mathematical models are then translated into computer codes executed on some of the most powerful computers in the world. Available computer power helps determine the types of approximations required. As a general rule, growth of computational resources allows modelers to formulate algorithms less dependent on approximations known to have limitations, thereby producing simulations more solidly founded on established physical principles. These approximations are most often found in “closure” or “parameterization” schemes that take into account unresolved motions and processes and are always required because climate simulations must be designed so they can be completed and analyzed by scientists in a timely manner, even if run on the most powerful computers.

Climate models have shown steady improvement over time as computer power has increased, our understanding of physical processes of climatic relevance has grown, datasets useful for model evaluation have been developed, and our computational algorithms have improved. Figure 1.2 shows one attempt at quantifying this change. It compares a particular metric of climate model performance among the CMIP1 (1995), CMIP2 (1997), and CMIP3 (2004) ensembles of Atmosphere-Ocean General Circulation Models (AOGCMs). This particular metric assesses model performance in simulating the mean climate of the late 20th Century as measured by a basket of indicators focusing on aspects of atmospheric climate for which observational counterparts are deemed adequate.

Model ranking according to individual members of this basket of indicators varies greatly, so this aggregate ranking depends on how different indicators are weighted in relative importance. Nevertheless, the conclusion that models have improved over time is not dependent on the relative weighting factors, as nearly all models have improved in most respects. The construction of metrics for evaluating climate models is itself a subject of intensive research and will be covered in more detail below.
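
To make the aggregation concrete, the sketch below shows the general shape of such a metric: squared model-minus-observation errors for a basket of fields, each normalized by observed variance and then summed. This is a schematic in the spirit of Reichler and Kim's I2 index, not their published algorithm; the function and variable names are illustrative.

```python
import numpy as np

def performance_index(model_fields, obs_fields, obs_variance):
    """Aggregate skill metric in the spirit of an I^2-style index:
    squared model-minus-observation errors for each field, normalized
    by the observed variance of that field, then summed over fields.
    Lower values indicate closer agreement with observations."""
    total = 0.0
    for name in obs_fields:
        err = model_fields[name] - obs_fields[name]      # gridded error field
        total += np.mean(err ** 2 / obs_variance[name])  # normalized MSE
    return total
```

The implicit equal weighting of fields here is exactly the relative-weighting choice on which, as noted above, an aggregate ranking depends.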

Also shown in Fig. 1.2 is the same metric evaluated from climate simulation results obtained by averaging over all AOGCMs in the CMIP1, CMIP2, and CMIP3 archives. The CMIP3 “ensemble-mean” model performs better than any individual model by this metric and by many others. This kind of result has convinced the community of the value of a multimodel approach to climate change projection. Our understanding of climate is still insufficient to justify proclaiming any one model “best” or even showing metrics of model performance that imply skill in predicting the future. More appropriate in any assessment focusing on adaptation or mitigation strategies is to take into account, in a pertinently informed manner, the products of distinct models built using different expert judgments at centers around the world.

Figure 1.2. Performance Index I2 for Individual Models (circles) and Model Generations (rows). Best-performing models have low I2 values and are located toward the left. Circle sizes indicate the length of the 95% confidence intervals. Letters and numbers identify individual models; flux-corrected models are labeled in red. Grey circles show the average I2 of all models within one model group. Black circles indicate the I2 of the multimodel mean taken over one model group. The green circle (REA) corresponds to the I2 of the NCEP/NCAR Reanalysis (Kalnay et al. 1996), conducted by the National Weather Service’s National Centers for Environmental Prediction and the National Center for Atmospheric Research. Last row (PICTRL) shows I2 for the preindustrial control experiment of the CMIP3 project. [Adapted from Fig. 1 in T. Reichler and J. Kim 2008: How well do coupled models simulate today’s climate? Bulletin of the American Meteorological Society, 89(3), doi:10.1175/BAMS-89-3-303. Reproduced by permission of the American Meteorological Society.]

Atmospheric General Circulation Models

Atmospheric general circulation models (AGCMs) are computer programs that evolve the atmosphere’s three-dimensional state forward in time. This atmospheric state is described by such variables as temperature, pressure, humidity, winds, and water and ice condensate in clouds. These variables are defined on a spatial grid, with grid spacing determined in large part by available computational resources. Some processes governing this atmospheric state’s evolution are relatively well resolved by model grids and some are not. The latter are incorporated into models through approximations often referred to as parameterizations. Processes that transport heat, water, and momentum horizontally are relatively well resolved by the grid in current atmospheric models, but processes that redistribute these quantities vertically have a significant part that is controlled by subgrid-scale parameterizations.

The model’s grid-scale evolution is determined by equations describing the thermodynamics and fluid dynamics of an ideal gas. The atmosphere is a thin spherical shell of air that envelops the Earth. For climate simulation, emphasis is placed on the atmosphere’s lowest 20 to 30 km (i.e., the troposphere and the lower stratosphere). This layer contains over 95% of the atmosphere’s mass and virtually all of its water vapor, and it produces nearly all weather although current research suggests possible interactions between this layer and higher atmospheric levels[1]. Because of the disparity between scales of horizontal and vertical motions governing global and regional climate, the two motions are treated differently by model algorithms. The resulting set of equations is often referred to as the "primitive equations".[2]

Although nearly all AGCMs use this same set of primitive dynamical equations, they use different numerical algorithms to solve them. In all cases, the atmosphere is divided into discrete vertical layers, which are then overlaid with a two-dimensional horizontal grid, producing a three-dimensional mesh of grid elements. The equations are solved as a function of time on this mesh. The portion of the model code governing the fluid dynamics explicitly simulated on this mesh often is referred to as the model’s “dynamical core.” Even with the same numerical approach, AGCMs differ in spatial resolutions and configuration of model grids. Some models use a “spectral” representation of winds and temperatures, in which these fields are written as linear combinations of predefined patterns on the sphere (spherical harmonics) and are then mapped to a grid when local values are required. Some models have few layers above the tropopause (the moving boundary between the troposphere and stratosphere[3]), while others have as many layers above the troposphere as in it.[4]
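
As an illustration of the spectral approach, the sketch below synthesizes a gridded field from a set of spherical harmonic coefficients; keeping all harmonics with total wavenumber n ≤ 85 corresponds to the triangular "T85" truncation mentioned later in this article. This is a conceptual sketch only (real dynamical cores use fast spectral transforms, not explicit loops), and the coefficient layout is an assumption of the example.

```python
import numpy as np
from scipy.special import sph_harm

def synthesize(coeffs, lats, lons, truncation=85):
    """Rebuild a latitude-longitude field from spherical harmonic
    coefficients: field = sum over (n, m) of c[n, m] * Y_nm.
    coeffs is a {(n, m): complex} dict here, purely for illustration;
    production cores use fast transforms rather than this loop."""
    colat = np.deg2rad(90.0 - np.asarray(lats))[:, None]  # polar angle
    lam = np.deg2rad(np.asarray(lons))[None, :]           # azimuthal angle
    field = np.zeros((len(lats), len(lons)), dtype=complex)
    for n in range(truncation + 1):
        for m in range(-n, n + 1):
            c = coeffs.get((n, m))
            if c:
                field += c * sph_harm(m, n, lam, colat)
    return field.real
```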

All AGCMs use a coordinate system in which the Earth’s surface is a coordinate surface, simplifying exchanges of heat, moisture, trace substances, and momentum between the Earth’s surface and the atmosphere. Numerical algorithms of AGCMs should precisely conserve the atmosphere’s mass and energy. Typical AGCMs have spatial resolution of 200 km in the horizontal and 20 levels in the volume below the altitude of 15 km. Because numerical errors often depend on flow patterns, there are no simple ways to assess the accuracy of numerical discretizations in AGCMs. Models use idealized cases testing the model’s long-term stability and efficiency[5], as well as tests focusing on accuracy using short integrations.[6]

Figure 2.1. Schematic Showing Interaction of a Well-Mixed Surface Layer with Stratified Interior in a Region with a Strong Temperature Gradient. Mixing (dashed lines) is occurring both across temperature (T) gradients and along the temperature gradient with increasing depth. This process is poorly observed and not well understood. It must be parameterized in large-scale models. [Adapted from Fig. 1, p. 18, in Coupling Process and Model Studies of Ocean Mixing to Improve Climate Models—A Pilot Climate Process Modeling and Science Team, a U.S. CLIVAR white paper by Schopf et al. (2003). Figure originated by John Marshall, Massachusetts Institute of Technology.]

All AGCMs must incorporate the effects of radiant-energy transfer. The radiative-transfer code computes the absorption and emission of electromagnetic waves by air molecules and atmospheric particles. Atmospheric gases absorb and emit radiation in “spectral lines” centered at discrete wavelengths, but the computational costs are too high in a climate model to perform this calculation for each individual spectral line. AGCMs use approximations, which differ among models, to group bands of wavelengths together in a more efficient calculation. Most models have separate radiation codes to treat solar (visible) radiation and the much-longer-wavelength terrestrial (infrared) radiation. The radiation calculation includes the effects of water vapor, carbon dioxide, ozone, and clouds. Models used in climate change experiments also include aerosols and additional trace gases such as methane, nitrous oxide, and the chlorofluorocarbons. Validation of AGCM radiation codes often is done offline (separate from other AGCM components) by comparison with line-by-line model calculations that, in turn, are compared against laboratory and field observations.[7]
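
A cartoon of the band-grouping idea: rather than integrating over every spectral line, assign one effective absorption coefficient to each band and weight each band by its share of the flux. The sketch below applies only Beer-Lambert extinction; operational radiation codes add emission, scattering, and gas-overlap treatments, and the numbers here are invented for illustration.

```python
import numpy as np

def band_mean_transmission(absorber_path, band_kabs, band_weights):
    """Broadband transmission from a handful of bands instead of
    millions of lines: exp(-k * u) per band (Beer-Lambert only),
    averaged with flux weights.  Inputs are illustrative: absorber
    path u in kg/m^2, band absorption coefficients k in m^2/kg."""
    trans = np.exp(-np.asarray(band_kabs) * absorber_path)
    return float(np.sum(np.asarray(band_weights) * trans))

# e.g., three bands covering a window region and two absorbing regions:
print(band_mean_transmission(2.0, [0.01, 0.5, 5.0], [0.5, 0.3, 0.2]))
```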

All GCMs use subgrid-scale parameterizations to simulate processes that are too small or operate on time scales too fast to be resolved on the model grid. The most important parameterizations are those involving cirrus and stratus cloud formation and dissipation, cumulus convection (thunderstorms and fair-weather cumulus clouds), and turbulence and subgrid-scale mixing. For cloud calculations, most AGCMs treat ice and liquid water as atmospheric state variables. Some models also separate cloud particles into ice crystals, snow, graupel (snow pellets), cloud water, and rainwater. Empirical relationships are used to calculate conversions among different particle types. Representing these processes on the scale of model grids is particularly difficult and involves calculation of fractional cloud cover within a grid box, which greatly affects radiative transfer and model sensitivity. Models either predict cloud amounts from the instantaneous thermodynamical and hydrological state of a grid box or they treat cloud fraction as a time-evolving model variable. In higher-resolution models, one can attempt to explicitly simulate the size distribution of cloud particles and the “habit” or nonspherical shape of ice particles, but no current global AGCMs attempt this.
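
One common diagnostic approach ties fractional cloud cover in a grid box to the grid-mean relative humidity above a tunable critical threshold, in the style of Sundqvist-type schemes; the sketch below uses that form with an illustrative threshold value.

```python
def diagnostic_cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-style diagnostic cloud fraction: zero below a critical
    relative humidity, rising to 1 as the grid-mean RH approaches
    saturation.  rh_crit is a tunable model parameter; 0.8 is an
    illustrative value, not a universal constant."""
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - ((1.0 - rh) / (1.0 - rh_crit)) ** 0.5
```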

Cumulus convective transports, which are important in the atmosphere but cannot be explicitly resolved at GCM grid scale, are calculated using convective parameterization algorithms. Most current models use a cumulus mass flux scheme patterned after that proposed by Arakawa and Schubert[8], in which convection’s upward motion occurs in very narrow plumes that take up a negligible fraction of a grid box’s area. Schemes differ in techniques used to determine the amount of mass flowing through these plumes and the manner in which air is entrained and detrained by the rising plume. Most models do not calculate separately the area and vertical velocity of convection but try to predict only the product of mass and area, or convective mass flux. Prediction of convective velocities, however, is needed for new models of interactions between aerosols and clouds. Most current schemes do not account for differences between organized mesoscale convective systems and simple plumes. The turbulent mixing rate of updrafts and downdrafts with their environment and the phase changes of water vapor within convective systems are treated with a mix of empiricism and constraints based on the moist thermodynamics of rising air parcels. Some models also include a separate parameterization of shallow, nonprecipitating convection (fair-weather cumulus clouds). In short, clouds generated by cumulus convection in climate models should be thought of as based in large part on empirical relationships.
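
The sketch below illustrates the bulk mass-flux idea in its simplest form: a plume's mass flux grows or shrinks with height according to prescribed entrainment and detrainment rates, starting from a cloud-base value that a closure (the scheme-dependent part) must supply. The rates, units, and constant-coefficient form here are assumptions of the example.

```python
import numpy as np

def plume_mass_flux(mb, entrain, detrain, dz, nlev):
    """Toy entraining-plume profile: dM/dz = (e - d) * M, integrated
    upward from a cloud-base mass flux mb (kg m-2 s-1).  A real
    mass-flux scheme predicts mb from the large-scale state (the
    closure) and varies e and d with height; the constants here are
    purely illustrative."""
    M = np.empty(nlev)
    M[0] = mb
    for k in range(1, nlev):
        M[k] = M[k - 1] * (1.0 + (entrain - detrain) * dz)
    return M

# e.g., 20 layers of 500 m with weak net entrainment:
profile = plume_mass_flux(mb=0.02, entrain=2e-4, detrain=1e-4,
                          dz=500.0, nlev=20)
```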

All AGCMs parameterize the turbulent transport of momentum, moisture, and energy in the atmospheric boundary layer near the surface. A long-standing theoretical framework, Monin-Obukhov similarity theory, is used to calculate the vertical distribution of turbulent fluxes and state variables in a thin (typically less than 10 m) layer of air adjacent to the surface. Above the surface layer, turbulent fluxes are calculated based on closure assumptions that provide a complete set of equations for subgrid-scale variations. Closure assumptions differ among AGCMs; some models use high-order closures in which the fluxes or second-order moments are calculated prognostically (with memory in these higher-order moments from one time step to the next). Turbulent fluxes near the surface depend on surface conditions such as roughness, soil moisture, and vegetation. In addition, all models use diffusion schemes or dissipative numerical algorithms to simulate kinetic energy dissipation from turbulence far from the surface and to damp small-scale unresolved structures produced from resolved scales by turbulent atmospheric flow.
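
As a minimal illustration of the similarity-theory machinery, the sketch below computes surface stress from a reference-level wind using the logarithmic profile plus a stability correction ψm; setting ψm = 0 recovers neutral conditions, and its functional form for stable or unstable air differs among published schemes.

```python
import numpy as np

KARMAN = 0.4  # von Karman constant

def surface_stress(u_ref, z_ref, z0, rho=1.2, psi_m=0.0):
    """Bulk surface momentum flux from Monin-Obukhov similarity:
    u* = k * u / (ln(z/z0) - psi_m), tau = rho * u*^2.  psi_m is the
    stability correction (0 for neutral); z0 is the roughness length,
    which over land depends on vegetation and terrain."""
    u_star = KARMAN * u_ref / (np.log(z_ref / z0) - psi_m)
    return rho * u_star ** 2  # stress in N m-2

# e.g., 8 m/s wind at 10 m over short grass (z0 ~ 0.01 m), neutral air:
print(surface_stress(8.0, 10.0, 0.01))
```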

The realization that a significant fraction of momentum transfer between atmosphere and surface takes place through nonturbulent pressure forces on small-scale “hills” has resulted in a substantial effort to understand and model this transfer.[9] This process is often referred to as gravity wave drag because it is intimately related to atmospheric wave generation. The variety of gravity wave drag parameterizations is a significant source of differences in mean wind fields generated by AGCMs. Accounting for both surface-generated and convectively generated gravity waves is a difficult aspect of modeling the stratosphere and mesosphere (≥ 20 km altitude), since winds in those regions are affected strongly by the transfer of momentum and energy from these unresolved waves.

Extensive field programs have been designed to evaluate parameterizations in GCMs, ranging from tests of gravity wave drag schemes (Mesoscale Alpine Program, or “MAP”)[10] to tests of radiative transfer and cloud parameterizations (Atmospheric Radiation Measurement Program, or “ARM”)[11]. Running an AGCM coupled to a land model as a numerical weather prediction model for a few days—starting with best estimates of the atmosphere and land’s instantaneous state at any given time—is a valuable test of the entire package of atmospheric parameterizations and dynamical core.[12] Atmosphere-land models also are routinely tested by running them with boundary conditions taken from observed sea-surface temperatures and sea-ice distributions[13] and examining the resulting climate.

Ocean general circulation models

Ocean general circulation models (OGCMs) solve the primitive equations for global incompressible fluid flow analogous to the ideal-gas primitive equations solved by atmospheric GCMs. In climate models, OGCMs are coupled to the atmosphere and ice models through the exchange of heat, salinity, and momentum at the boundary among components. Like the atmosphere, the ocean’s horizontal dimensions are much larger than its vertical dimension, resulting in separation between processes that control horizontal and vertical fluxes. With continents, enclosed basins, narrow straits, and submarine basins and ridges, the ocean has a more complex three-dimensional boundary than does the atmosphere. Furthermore, the thermodynamics of sea water is very different from that of air, so an empirical equation of state must be used in place of the ideal gas law.
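
The contrast with the ideal gas law can be made concrete with the simplest possible stand-in: a linearized equation of state in which density falls with temperature and rises with salinity. Real OGCMs use far more accurate empirical polynomial fits; the coefficients below are round, illustrative numbers.

```python
def seawater_density(T, S, rho0=1027.0, T0=10.0, S0=35.0,
                     alpha=1.7e-4, beta=7.6e-4):
    """Linearized seawater equation of state (illustrative only):
    rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)), with T in deg C
    and S in psu.  alpha (thermal expansion) and beta (haline
    contraction) themselves vary with T, S, and pressure in the
    empirical fits that production ocean models actually use."""
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))
```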

An important distinction among ocean models is the choice of vertical discretization. Many models use vertical levels that are fixed distances below the surface (Z-level models), based on the early efforts of Bryan and Cox[14] and Bryan[15]. The Geophysical Fluid Dynamics Laboratory (GFDL) and Community Climate System Model (CCSM) ocean components fall into this category[16]. Two Goddard Institute for Space Studies (GISS) models (R and AOM) use a variant of this approach in which mass rather than height is used as the vertical coordinate[17]. A more fundamental alternative uses density as the vertical coordinate. Motivating this choice is the desire to control as precisely as possible the exchange of heat between layers of differing density, which is very small in much of the ocean yet centrally important for simulation of climate. The GISS EH model utilizes a hybrid scheme that transitions from a Z-coordinate near the surface to density layers in the ocean interior[18].

Horizontal grids used by most ocean models in the CMIP3 archive are comparable to or somewhat finer than grids in the atmospheric models to which they are coupled, typically on the order of 100 km (~ 1º spacing in latitude and longitude) for most of Earth. In many OGCMs the north-south resolution is enhanced within 5º latitude of the equator to improve the ability to simulate important equatorial processes.
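
A sketch of what such meridional refinement can look like: a smooth blend between roughly 1º spacing in midlatitudes and finer spacing within a few degrees of the equator. The blending function and numbers below are invented for illustration and do not reproduce any particular model's grid generator.

```python
import numpy as np

def meridional_spacing(lat_deg, dy_mid=1.0, dy_eq=0.3, width=5.0):
    """Illustrative latitude spacing (degrees) that tightens near the
    equator: a Gaussian blend gives dy_eq at the equator, relaxing to
    dy_mid a few 'width' scales poleward.  Purely schematic."""
    w = np.exp(-(np.asarray(lat_deg) / width) ** 2)
    return dy_mid * (1.0 - w) + dy_eq * w

print(meridional_spacing(0.0), meridional_spacing(30.0))  # ~0.3, ~1.0
```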

OGCM grids usually are designed to avoid coordinate singularities caused by the convergence of meridians at the poles. For example, the CCSM OGCM grid is rotated to place its North Pole over a continent, while the GFDL models use a grid with three poles, all of which are placed over land (Murray 1996). Such a grid results in having all ocean grid points at numerically viable locations.

Processes that control ocean mixing near the surface are complex and take place on small scales (order of centimeters). To parameterize turbulent mixing near the surface, the current generation of OGCMs uses several different approaches[19] similar to those developed for atmospheric near-surface turbulence. Within the ocean’s stratified, adiabatic interior, vertical mixing takes place on scales from meters to kilometers (Fig. 2.1); the smaller scales also must be parameterized in ocean components.

Ocean mixing contributes to its heat uptake and stratification, which in turn affects circulation patterns over time scales of decades and longer. Experts generally feel[20] that subgrid-scale mixing parameterizations in OGCMs contribute significantly to uncertainty in estimates of the ocean’s contribution to climate change.

Very energetic eddy motions occur in the ocean on the scale of a few tens of kilometers. These so-called mesoscale eddies are not present in the ocean simulations of CMIP3 climate models.

Ocean models used for climate simulation cannot afford the computational cost of explicitly resolving ocean mesoscale eddies. Instead, they must parameterize mixing by the eddies. Treatment of these mesoscale eddy effects is an important factor distinguishing one ocean model from another. Most real ocean mixing is along rather than across surfaces of constant density. Development of parameterizations that account for this essential feature of mesoscale eddy mixing[21] is a major advance in recent ocean and climate modeling. Inclusion of higher-resolution, mesoscale eddy–resolving ocean models in future climate models would reduce uncertainties associated with these parameterizations.

Other mixing processes that may be important in the ocean include tidal mixing and turbulence generated by interactions with the ocean’s bottom, both of which are included in some models. Lee, Rosati, and Spellman[22] describe some effects of tidal mixing in a climate model. Some OGCMs also explicitly treat the bottom boundary and sill overflows[23]. Furthermore, sunlight penetration into the ocean is controlled by chlorophyll distributions[24], and the depth of penetration can affect surface temperatures. All U.S. CMIP3 models include some treatment of this effect, but they prescribe rather than attempt to simulate the upper ocean biology controlling water opacity. Finally, the inclusion of fresh water input by rivers is essential to close the global hydrological cycle; it affects ocean mixing locally and is handled by models in a variety of ways.

The relatively crude resolution of OGCMs used in climate models results in isolation of the smaller seas from large ocean basins. This requires models to perform ad hoc exchanges of water between the isolated seas and the ocean to simulate what in nature involves a channel or strait. (The Strait of Gibraltar is an excellent example.) Various modeling groups have chosen different methods to handle water mixing between smaller seas and larger ocean basins.

OGCM components of climate models are often evaluated in isolation—analogous to the evaluation of AGCMs with prescribed ocean and sea-ice boundary conditions—in addition to being evaluated as components of fully coupled ocean-atmosphere GCMs. (See the articles Climate models, surface temperature and precipitation; Climate models and twentieth century trends; and Climate models and extreme events for results of full AOGCM evaluation.) Evaluation of ocean models in isolation requires input of boundary conditions at the air-sea interface. To compare simulations with observed data, boundary conditions or surface forcing must come from the same period as the data. These surface fluxes have their own uncertainties and, as a result, the evaluation of OGCMs with specified sea-surface boundary conditions must take those forcing uncertainties into account.

Land-Surface Models

Interaction of Earth’s surface with its atmosphere is an integral aspect of the climate system. Exchanges (fluxes) of mass, energy, water vapor, and momentum occur at the interface. Feedbacks between atmosphere and surface affecting these fluxes have important effects on the climate system[25]. Modeling the processes taking place over land is particularly challenging because the land surface is very heterogeneous and biological mechanisms in plants are important. Climate model simulations are very sensitive to the choice of land models[26].

In the earliest global climate models, the land surface was modeled largely to provide a lower boundary for the atmosphere consistent with energy, momentum, and moisture balances[27]. The land surface was represented by a balance among incoming and outgoing energy fluxes and a “bucket” that received precipitation from the atmosphere and evaporated moisture into the atmosphere, with a portion of the bucket’s water draining away from the model as a type of runoff. The bucket’s depth equaled the soil’s field capacity. Little attention was paid to the detailed set of biological, chemical, and physical processes linked together in the climate system’s terrestrial portion. From this simple starting point, land-surface modeling for climate simulation has increased markedly in sophistication, with increasing realism and inclusiveness of terrestrial surface and subsurface processes.
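
The bucket scheme is simple enough to state in a few lines. The sketch below is a Manabe-style bucket with an illustrative 15 cm field capacity: evaporation is the potential rate scaled by bucket fullness, and water above capacity leaves as runoff. The units and the linear evaporation scaling are assumptions of the example.

```python
def bucket_step(water, precip, pot_evap, capacity=0.15, dt=86400.0):
    """One daily step of a bucket hydrology (water and capacity in m;
    precip and pot_evap as rates in m/s).  Evaporation is potential
    evaporation scaled by bucket fullness (a common 'beta factor'
    form), and overflow beyond field capacity becomes runoff."""
    evap = pot_evap * (water / capacity)
    water += (precip - evap) * dt
    runoff = max(water - capacity, 0.0)
    water = min(max(water, 0.0), capacity)
    return water, evap, runoff / dt  # new storage, evap rate, runoff rate
```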

Although these developments have increased the physical basis of land modeling, greater complexity has at times contributed to more differences among climate models[28]. The advent of systematic programs comparing land models, such as the Project for Intercomparison of Land Surface Parameterization Schemes (PILPS)[29], has led gradually to more agreement with observations and among land models[30], in part because additional observations have been used to constrain their behavior. Even so, choices for adding processes and increasing realism have varied among land-surface models[31].

Figure 2.2 shows schematically the types of physical processes included in typical land models. Note that the schematic in the figure describes a land model used for both weather forecasting and climate simulation, an indication of the increasing sophistication demanded by both. The figure also hints at important biophysical and biogeochemical processes that gradually have been added and continue to be added to land models used for climate simulation, such as biophysical controls on transpiration and carbon uptake. Some of the most extensive increases in complexity and sophistication have occurred with vegetation modeling in land models. An early generation of land models[32] introduced biophysical controls on plant transpiration by adding a vegetation canopy over the surface, thereby implementing vegetative control on the terrestrial water cycle. These models included exchanges of energy and moisture among the surface, canopy, and atmosphere, along with momentum loss to the surface. Further developments included improved plant physiology that allowed simulation of carbon dioxide fluxes[33]. This method lets the model treat the flow of water and carbon dioxide as an optimization problem, balancing carbon uptake for photosynthesis against water loss through transpiration. Improvements also included implementation of model parameters that could be calibrated with satellite observation[34], thereby allowing global-scale calibration.

Figure 2.2. Schematic of Physical Processes in a Contemporary Land Model. [Adapted from Fig. 6 in F. Chen and J. Dudhia 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model implementation and sensitivity, Monthly Weather Review, 129, 569–585. Reproduced by permission of the American Meteorological Society.]

Continued development has included more realistic parameterization of roots[35] and the addition of multiple canopy layers[36]. The latter method, however, has not been used in climate models because the added complexity of multicanopy models renders unambiguous calibration very difficult. An important ongoing advance is the incorporation of biological processes that produce carbon sources and sinks through vegetation growth and decay and the cycling of carbon in the soil[37], although considerable work is needed to determine observed magnitudes of carbon uptake and depletion.

Most land models assume soil with properties that correspond to inorganic soils, generally consistent with mixtures of loam, sand, and clay. High-latitude regions, however, may have extensive zones of organic soils (peat bogs), and some models have included organic soils topped by mosses, which has led to decreased soil heat flux and increased surface-sensible and latent heat fluxes[38].

Climate models initially treated snow as a single layer that could grow through snowfall or deplete through melt[39]. Some recent land models for climate simulation include subgrid distributions of snow depth[40] and blowing snow[41]. Snow models now may use multiple layers to represent fluxes through the snow[42]. Effort also has gone into including and improving effects of soil freezing and thawing[43], although permafrost modeling is more limited[44].

Vegetation interacts with snow by covering it, thereby masking snow’s higher albedo[45] and retarding spring snowmelt[46]. The net effect is to maintain warmer temperatures than would occur without vegetation masking[47]. Vegetation also traps drifting snow[48], insulating the soil from subfreezing winter air temperatures and potentially increasing nutrient release and enhancing vegetation growth[49]. Albedo masking is included in some land-surface models, but it requires accurate simulations of snow depth to produce accurate simulation of surface-atmosphere energy exchanges[50].

Time-evolving ice sheets and mountain glaciers are not included in most climate models. Ice sheets once were thought to be too sluggish to respond to climate change in less than a century. However, observations via satellite altimetry, synthetic aperture radar interferometry, and gravimetry all suggest rapid dynamic variability of ice sheets, possibly in response to climatic warming[51]. Most global climate models to date have been run with prescribed, immovable ice sheets. Several modeling groups are now experimenting with the incorporation of dynamic ice sheet models. Substantial physical, numerical, and computational improvements, however, are needed to provide reliable projections of 21st Century ice sheet changes. Among major challenges are incorporation of a unified treatment of stresses within ice sheets, improved methods of downscaling atmospheric fields to the finer ice sheet grid, realistic parameterizations of surface and subglacial hydrology (fast dynamic processes controlled largely by water pressure and extent at the base of the ice sheet), and models of ice shelf interactions with ocean circulation. Ocean models, which usually assume fixed topography, may need to be modified to include flow beneath advancing and retreating ice. Meeting these challenges will require increased interaction between the glaciological and climate modeling communities, which until recently have been largely isolated from one another.

The initial focus of land models was vertical coupling of the surface with the overlying atmosphere. However, horizontal water flow through river routing has been available in some models for some time[52], with spatial resolution of routing in climate models increasing in more recent versions[53]. Freezing soil poses additional challenges for modeling runoff[54], with more recent work showing some skill in representing its effects[55].

Work also is under way to couple groundwater models into land models[56]. Groundwater potentially introduces longer time scales of interaction in the climate system in places where it has contact with vegetation roots or emerges through the surface.

Land models encompass spatial scales ranging from model grid-box size down to biophysical and turbulence processes operating on scales the size of leaves. Explicit representation of all these scales in a climate model is beyond the scope of current computing systems and the observing systems that would be needed to provide adequate model calibration for global and regional climate. Model fluxes do not represent a single point but rather the behavior in a grid box that may be many tens or hundreds of kilometers across. Initially, these grid boxes were treated as homogeneous units but, starting with the pioneering work of Avissar and Pielke[57], many land models have tiled a grid box with patches of different land-use and vegetation types. Although these patches may not interact directly with their neighbors, they are linked by their coupling to the grid box’s atmospheric column. This coupling does not allow for possible small-scale circulations that might occur because of differences in surface-atmosphere energy exchanges among patches[58]. Under most conditions, however, the imprint of such spatial heterogeneity on the overlying atmospheric column appears to be limited to a few meters above the surface[59].
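
The coupling described above reduces to an area-weighted average, as in the sketch below: each tile computes its own flux from its own surface properties, and the shared atmospheric column sees only the aggregate.

```python
def gridbox_flux(tile_fractions, tile_fluxes):
    """Aggregate tile fluxes (e.g., sensible heat in W m-2) to the
    grid box the atmosphere sees: a simple area-weighted mean.  The
    tiles exchange nothing with each other directly; they interact
    only through this shared atmospheric column."""
    assert abs(sum(tile_fractions) - 1.0) < 1e-6, "fractions must sum to 1"
    return sum(f * q for f, q in zip(tile_fractions, tile_fluxes))

# e.g., a box that is 60% forest, 30% cropland, 10% lake:
print(gridbox_flux([0.6, 0.3, 0.1], [120.0, 90.0, 20.0]))
```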

Vertical fluxes linking the surface, canopy, and near-surface atmosphere generally assume some form of down-gradient diffusion, although counter-gradient fluxes can exist in this region much as in the overlying atmospheric boundary layer. Some attempts have been made to replace diffusion with more advanced Lagrangian random-walk approaches[60].

Topographic variation within a grid box usually is ignored in land modeling. Nevertheless, implementing detailed river-routing schemes requires accurate digital elevation models[61]. In addition, some soil water schemes include effects of land slope on water distribution[62] and surface radiative fluxes[63].

Validation of land models, especially globally, remains a problem due to lack of measurements for relevant quantities such as soil moisture and the fluxes of energy, momentum, moisture, and carbon. The PILPS project[64] has allowed detailed comparisons of multiple models with observations at points around the world having different climates, thus providing some constraint on the behavior of land models. Global participation in PILPS has led to a better understanding of differences among schemes and to model improvements. Compared to previous generations, the latest land-surface models exhibit relatively smaller differences from current observation-based estimates of the global distribution of surface fluxes, but the reliability of such estimates remains elusive[65]. River routing can provide a diagnostic, against observations, of a land model’s spatially distributed behavior[66]. Remote sensing has been useful for calibrating models developed to exploit it but generally has not been used for model validation. Regional observing networks that aspire to give Earth system observations, such as some mesonets in the United States, offer promise of data from spatially distributed observations of important fields for land models.

Land modeling has developed in other disciplines roughly concurrently with advances in climate models. Applications are wide ranging and include detailed models used for planning water resources[67], managing ecosystems[68], estimating crop yields[69], simulating ice-sheet behavior[70], and projecting land use such as transportation planning[71]. As suggested by this list, widely disparate applications have developed from differing scales of interest and focus. Development in some other applications has informed advances in land models for climate simulation, as in representation of vegetation and hydrologic processes. Because land models do not include all climate system features, they can be expected in the future to engage other disciplines and encompass a wider range of processes, especially as resolution increases.

Sea-Ice Models

Most climate models include sea-ice components that have both dynamic and thermodynamic elements. That is, models include the physics governing ice movement as well as that related to heat and salt transfer within the ice. While sea ice in the real world appears as ice floes on the scale of meters, in climate models sea ice is treated as a continuum with an effective large-scale rheology describing the relationship between stress and flow.

Rheologies commonly in use are the standard Hibler viscous-plastic (VP) rheology[72] and the more complex elastic-viscous-plastic (EVP) rheology of Hunke and Dukowicz[73], designed primarily to improve the computational efficiency of ice models. The EVP method explicitly solves for the ice-stress tensor, while the VP solution uses an implicit iterative approach. As examples, the GFDL models[74] and the Community Climate System Model, Version 3 (CCSM3)[75] use the EVP rheology, while the GISS models use the VP implementation.

The thermodynamic portions of sea ice models also vary. Earlier generations of climate models generally used the sea ice thermodynamics of Semtner[76], which includes one snow layer and two ice layers with constant heat conductivities together with a simple parameterization of brine (salt) content. The GFDL climate models continue to use this but also include the interactions between brine content and heat capacity[77]. The CCSM3 and GISS models use variations[78] incorporating additional physical processes within the ice, such as the melting of internal brine regions. Different models define snow and ice layers and ice categories differently, but all include an open water category. Typically, ice models share the grid structure of the underlying ocean model.

The albedo (proportion of incident sunlight reflected from a surface) of snow and ice plays a significant role in the climate system. Sea-ice models parameterize the albedo using expressions based on a mix of radiative-transfer theory and empiricism. Figure 2.3, from Curry, Schramm, and Ebert[79], illustrates sea-ice system interrelations and how the albedo is a function of snow or ice thickness, ice extent, open water, surface temperature, and other factors. Models treat these factors in similar ways but vary on details. For example, the CCSM3 sea-ice component does not include dependence on solar elevation angle[80], but the GISS model does[81]. Both models include the contribution of melt ponds[82]. The GFDL model follows Briegleb et al.[83] but accounts for different effects of the different wavelengths comprising sunlight[84].
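
A schematic of the kind of empirical dependence involved: albedo ramping from the dark open-ocean value for thin ice toward a high cold-ice value for thick ice, and dropping as the surface warms toward melting. The functional forms, thresholds, and constants below are illustrative, not those of any CMIP3 model.

```python
def ice_albedo(thickness_m, t_surf_c, alpha_ocean=0.06,
               alpha_cold=0.65, alpha_melt=0.50, h_ref=0.5):
    """Schematic sea-ice albedo: thin ice tends toward the open-water
    value, thick ice toward a high cold-ice value, and a near-melting
    surface (here, warmer than -1 deg C) is darker than cold ice.
    All values here are illustrative placeholders."""
    alpha_thick = alpha_melt if t_surf_c > -1.0 else alpha_cold
    weight = min(thickness_m / h_ref, 1.0)  # ramp over the first ~0.5 m
    return alpha_ocean + weight * (alpha_thick - alpha_ocean)
```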

Figure 2.3. Schematic Diagram of Sea Ice–Albedo Feedback Mechanism. Arrow direction indicates the interaction direction. The “+” signs indicate positive interaction (i.e., increase in the first quantity leads to increase in the second quantity), and the “–” signs indicate negative interaction (i.e., increase in the first quantity leads to decrease in the second quantity). The “+/–” signs indicate either that the interaction sign is uncertain or that the sign changes over the annual cycle. [From Fig. 6 in J.A. Curry, J. Schramm, and E.E. Ebert 1995: On the sea ice albedo climate feedback mechanism, J. Climate, 8, 240–247. Reproduced by permission of the American Meteorological Society.]

Component Coupling and Coupled Model Evaluation

The climate system’s complexity and our inability to resolve all relevant processes in models result in a host of choices for development teams. Differing expertise, experience, and interests result in distinct pathways for each climate model. While we eventually expect to see model convergence forced by increasing insights into the climate system’s workings, we are still far from that limit today in several important areas. Given this level of uncertainty, multiple modeling approaches clearly are needed. Models vary in details primarily because development teams have different ideas concerning underlying physical mechanisms relevant to the system’s less-understood features. In the following, we describe some key aspects of model development by the three U.S. groups that contributed models to the IPCC Fourth Assessment[85]. Particular focus is on points most relevant for simulating the 20th Century global mean temperature and on the model’s climate sensitivity.

NOAA GFDL Model-Development Path

The National Oceanic and Atmospheric Administration’s GFDL conducted a thorough restructuring of its atmospheric and climate models for more than 5 years prior to its delivery of models to the CMIP3 database in 2004. This was done partly in response to the need for modernizing software engineering and partly in response to new ideas in modeling the atmosphere, ocean, and sea ice. Differences between the resulting models and the previous generation of climate models at GFDL are varied and substantial. Mapping out exactly why climate sensitivity and other considerations of climate simulations differ between these two generations of models would be very difficult and has not been attempted. Unlike the earlier generation, however, the new models do not use flux adjustments; some other improvements are discussed below.

The new atmospheric models developed at GFDL for global warming studies are referred to as AM2.0 and AM2.1[86]. Key points of departure from previous GFDL models are the adoption of a new numerical core for solving fluid dynamical equations for the atmosphere, the inclusion of liquid and ice concentrations as prognostic variables, and new parameterizations for moist convection and cloud formation. Much atmospheric development was based on running the model over observed sea-surface temperature and sea-ice boundary conditions from 1980 to 2000, with a focus on both the mean climate and the atmospheric response to El Niño–Southern Oscillation (ENSO) variability in the tropical Pacific. Given the basic model configuration, several subgrid closures were varied to optimize climate features. Modest improvements in the midlatitude wind field were obtained by adjusting the “orographic gravity wave drag,” which accounts for the effects of force exerted on the atmosphere by unresolved topographic features. Substantial improvements in simulating tropical rainfall and its response to ENSO were the result of parameter optimization as well, especially the treatment of vertical transport of horizontal momentum by moist convection.

The ocean model chosen for this development is the latest version of the modular ocean model (MOM) developed over several decades at GFDL. Notable new features in this version are a grid structure better suited to simulating the Arctic Ocean and a framework for subgrid-scale mixing that avoids unphysical mixing among oceanic layers of differing densities[87]. A new sea ice model includes an EVP large-scale effective rheology that has proven itself in the past decade in several models and multiple ice thicknesses in each grid box. The land model chosen is relatively simple, with vertically resolved soil temperature but retaining the “bucket hydrology” from the earlier generation of models.

The resulting climate model was studied, restructured, and tuned for an extended period, with particular interest in optimizing the structure and frequency of the model’s spontaneously generated El Niño events, minimizing surface temperature biases, and maintaining an Atlantic overturning circulation of sufficient strength. During this development phase, climate sensitivity was monitored by integrating the model to equilibrium with doubled CO2 when coupled to a “flux-adjusted” slab ocean model. A single model modification reduced the model’s sensitivity range from 4.0–4.5 K to 2.5–3.0 K (see Model Climate Sensitivity). The change responsible for this reduction was inclusion of a new model of mixing in the planetary boundary layer near the Earth’s surface. GFDL included the mixing model because it generated more-realistic boundary-layer depths and near-surface relative humidities. Sensitivity reduction resulted from modifications to the low-level cloud field; the size of this reduction was not anticipated.

Aerosol distributions used by the model were computed offline from the MOZART II model as described in Horowitz et al.[88]. No attempt was made to simulate indirect aerosol effects (interactions between clouds and aerosols), as confidence in the schemes tested was deemed insufficient. In 20th Century simulations, solar variations followed the prescription of Lean, Beer, and Bradley[89], while volcanic forcing was based on Sato et al.[90]. Stratospheric ozone was prescribed, with the Southern Hemisphere ozone hole prescribed in particular, in 20th Century simulations. A new detailed land-use history provided a time history of vegetation types.

Final tuning of the model’s global energy balance, using two parameters in the cloud prediction scheme, was conducted by examining control simulations of the fully coupled model using fixed 1860 and 1990 forcings (see box, Tuning the Global Mean Energy Balance). The resulting model is described in Delworth et al.[91] and Gnanadesikan et al.[92]. IPCC-relevant runs of this model (CM2.0) were provided to the CMIP3–IPCC archive. Simulations of the 20th Century with time-varying forcings provided to the database and described in Knutson et al.[93] were the first of this kind generated with this model. The model was not retuned, and no iteration of the aerosol or any other time-varying forcings followed these initial simulations.

Model development proceeded in the interim, and a new version emerged rather quickly in which the atmospheric model’s numerical core was replaced by a “finite-volume” code[94]. Treatment of wind fields near the surface improved substantially, which in turn resulted in enhanced extratropical ocean circulation and temperatures. ENSO variability increased in this model to unrealistically large values; however, the ocean code’s efficiency also improved substantially. With retuning of the clouds for global energy balance, the new model CM2.1 was deemed to be an improved model over CM2.0 in several respects, warranting the generation of a new set of database runs. CM2.1, when run with a slab-ocean model, was found to have somewhat increased sensitivity. However, transient climate sensitivity—the global mean warming at the time of CO2 doubling in a fully coupled model with 1% a year increase in CO2—actually is slightly smaller than in CM2.0. Solar, aerosol, volcanic, and greenhouse gas forcings are identical in the two models.

Box: Tuning the Global Mean Energy Balance

A procedure common to all comprehensive climate models is tuning the global mean energy balance. A climate model must be in balance at top of atmosphere (TOA) and globally averaged to within a few tenths of a W/m2 in its control (pre-1860) climate if it is to avoid temperature drifts in 20th and 21st century simulations that would obscure response to imposed changes in greenhouse, aerosol, volcanic, and solar forcings. Especially because of difficulty in modeling clouds but also even in clear sky, untuned models do not currently possess this level of accuracy in their radiative fluxes. Untuned imbalances more typically range up to 5 W/m2. Parameters in the cloud scheme are altered to create a balanced state, often taking care that individual components of this balance—the absorbed solar flux and emitted infrared flux—are individually in agreement with observations, since these help ensure the correct distribution of heating between atmosphere and ocean. This occasionally is referred to as “final tuning” the model to distinguish it from various choices made for other reasons while the model is being configured.
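
In round numbers, the balance being tuned is the one sketched below: absorbed shortwave minus outgoing longwave at the top of the atmosphere, which a control climate must hold within a few tenths of a W/m2 of zero. The observed-flux values in the example are rounded for illustration.

```python
def toa_imbalance(solar_in, solar_reflected, olr):
    """Global, time-mean net TOA flux (W/m2): absorbed shortwave
    minus outgoing longwave.  Tuning adjusts cloud-scheme parameters
    until this is within a few tenths of a W/m2 of zero, ideally with
    the shortwave and longwave pieces individually near observations."""
    return (solar_in - solar_reflected) - olr

# Rounded illustrative fluxes: ~340 incoming, ~100 reflected, ~240 OLR:
print(toa_imbalance(340.0, 100.0, 240.0))  # 0.0 -> a balanced control state
```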

The need for final tuning does not preclude the use of these models for global warming simulations in which radiative forcing itself is on the order of several W/m2. Consider, for example, the Ramaswamy et al.[95] study on the effects of modifying the “water vapor continuum” treatment in a climate model. This is an aspect of the radiative transfer algorithm in which there is significant uncertainty. While modifying continuum treatment can change the TOA balance by more than 1 W/m2, the effect on climate sensitivity is found to be insignificant. The change in radiative transfer in this instance alters the outgoing infrared flux by roughly 1%, and it affects the sensitivity (by changing the flux derivative with respect to temperature) by roughly the same percentage. A sensitivity change of this magnitude, say from 3 K to 3.03 K, is of little consequence given uncertainties in cloud feedbacks. The strength of temperature-dependent feedbacks, not errors in mean fluxes per se, is of particular concern in estimating climatic responses.

Community Climate System Model-Development Path

CCSM3 was released to the climate community in June 2004. CCSM3 is a coupled climate model with components representing the atmosphere, ocean, sea ice, and land surface connected by a flux coupler. CCSM3 is designed to produce realistic simulations over a wide range of spatial resolutions, enabling inexpensive simulations lasting several millennia or detailed studies of continental-scale dynamics, variability, and climate change. Twenty-six papers documenting all aspects of CCSM3 and runs performed with it were published in a special issue of the Journal of Climate [96]. The atmospheric component of CCSM3 is a spectral model. Three different resolutions of CCSM3 are supported. The highest resolution is the configuration used for climate change simulations, with a T85 grid for atmosphere and land and a grid with around 1º resolution for ocean and sea ice but finer meridional resolution near the equator. The second resolution is a T42 grid for atmosphere and land with 1º ocean and sea-ice resolution. A lower-resolution version, designed for paleoclimate studies, has T31 resolution for atmosphere and land and a 3º version of ocean and sea ice.

The new CCSM3 version incorporates several significant improvements in physical parameterizations. Enhancements in model physics are designed to reduce several systematic biases in mean climate produced by previous CCSM versions. These enhancements include new treatments of cloud processes, aerosol radiative forcing, land-atmosphere fluxes, ocean mixed-layer processes, and sea-ice dynamics. Significant improvements are shown in sea-ice thickness, polar radiation budgets, tropical sea-surface temperatures, and cloud radiative effects. CCSM3 produces stable climate simulations of millennial duration without ad hoc adjustments to fluxes exchanged among component models. Nonetheless, there are still systematic biases in ocean-atmosphere fluxes in coastal regions west of continents, the spectrum of ENSO variability, spatial distribution of precipitation in tropical oceans, and continental precipitation and surface air temperatures. Work is under way to produce the next version of CCSM, which will reduce these biases further, and to extend CCSM to a more accurate and comprehensive model of the complete Earth climate system.

CCSM3’s climate sensitivity is weakly dependent on the resolution used. Equilibrium temperature increase due to doubling carbon dioxide, using a slab-ocean model, is 2.71°C, 2.47°C, and 2.32°C, respectively, for the T85, T42, and T31 atmosphere resolutions. The transient climate temperature response to doubling carbon dioxide in fully coupled integrations is much less dependent on resolution, being 1.50°C, 1.48°C, and 1.43°C, respectively, for the T85, T42, and T31 atmosphere resolutions[97].

The following CCSM3 runs were submitted for evaluation for the IPCC Fourth Assessment Report and to the Program for Climate Model Diagnosis and Intercomparison (called PCMDI) for dissemination to the climate scientific community: long, present day, and 1870 control runs; an ensemble of eight 20th Century runs; and smaller ensembles of future scenario runs for the A2, A1B, and B1 scenarios and for the 20th Century commitment run where carbon dioxide levels were kept at their 2000 values. The control and 20th Century runs have been documented and analyzed in several papers in the Journal of Climate special issue, and future climate change projections using CCSM3 have been documented by Meehl et al.[98].

GISS Development Path

The most recent version of the GISS atmospheric GCM, ModelE, resulted from a substantial reworking of the previous version, Model II′. Although model physics has become more complex, execution by the user is simplified as a result of modern software engineering and improved model documentation embedded within the code and accompanying web pages. The model, which can be downloaded from the GISS website by outside users, is designed to run on myriad platforms ranging from laptops to a variety of multiprocessor computers, partly because of NASA’s rapidly shifting computing environment. The most recent (post-AR4) version can be run on an arbitrarily large number of processors.

Historically, GISS has eschewed flux adjustment. Nonetheless, the net energy flux at the top of atmosphere (TOA) and surface has been reduced to near zero by adjusting threshold relative humidity for water and ice cloud formation, two parameters that otherwise are weakly constrained by observations. Near-zero fluxes at these levels are necessary to minimize drift of either the ocean or the coupled climate. To assess climate-response sensitivity to treatment of the ocean, ModelE has been coupled to a slab-ocean model with prescribed horizontal heat transport, along with two ocean GCMs. One GCM, the Russell ocean[99], has 13 vertical layers and horizontal resolution of 4º latitude by 5º longitude and is mass conserving (rather than volume conserving like the GFDL MOM). Alternatively, ModelE is coupled to the Hybrid Coordinate Ocean Model (HYCOM), an isopycnal model developed originally at the University of Miami[100]. HYCOM has 2º latitude by 2º longitude resolution at the equator, with latitudinal spacing decreasing poleward with the cosine of latitude. A separate rectilinear grid is used in the Arctic to avoid polar singularity and joins the spherical grid around 60°N.

Climate sensitivity to CO2 doubling depends upon the ocean model due to differences in sea ice. Climate sensitivity is 2.7°C for the slab-ocean model and 2.9°C for the Russell ocean GCM[101]. As with the GFDL and CCSM models, no effort is made to match a particular sensitivity, nor is the sensitivity or forcing adjusted to match 20th Century climate trends[102]. Aerosol forcing is calculated from prescribed concentrations, computed offline by a physical model of the aerosol life cycle. In contrast to the GFDL and NCAR models, ModelE includes a representation of the aerosol indirect effect. Cloud droplet formation is related empirically to the availability of cloud condensation nuclei, which depends upon the prescribed aerosol concentration[103].

Flexibility is emphasized in model development[104]. ModelE is designed for a variety of applications ranging from simulation of stratospheric dynamics and middle-atmosphere response to solar forcing to projection of 21st Century trends in surface climate. Horizontal resolution typically is 4º latitude by 5º longitude, although twice that resolution is used more often for studies of cloud processes. The model top has been raised from 10 mb (as in the previous Model II') to 0.1 mb, so the top has less influence on stratospheric circulation. Coding emphasizes “plug-and-play” structure, so the model can be adapted easily for future needs such as fully interactive carbon and nitrogen cycles.

Model development is devoted to improving the realism of individual model parameterizations, such as the planetary boundary layer or sea-ice dynamics. Because of the variety of applications, relatively little emphasis is placed on optimizing the simulation of specific phenomena such as El Niño or the Atlantic thermohaline circulation; as noted above, successful reproduction of one phenomenon usually results in a suboptimal simulation of another. Nonetheless, some effort was made to reduce biases in previous model versions that emerged from the interaction of various model features such as subtropical low clouds, tropical rainfall, and variability of stratospheric winds. Some model adjustments were structural, as opposed to the adjustment of a particular parameter; for example, a new planetary boundary layer parameterization reduced unrealistic cloud formation in the lowest model level[105].

Because of their uniform horizontal coverage, satellite retrievals, such as Earth Radiation Budget Experiment fluxes at TOA, Microwave Sounding Unit channel 2 (troposphere) and channel 4 (stratosphere) temperatures, and International Satellite Cloud Climatology Project (ISCCP) diagnostics, are emphasized for model evaluation. Comparison to ISCCP is made through a special algorithm that samples GCM output to mimic data collection by an orbiting satellite. For example, high clouds may include contributions from lower levels in both the model and the downward-looking satellite instrument. This satellite perspective within the model allows a rigorous comparison to observations. In addition to satellite retrievals, some GCM fields such as zonal wind are compared to in situ observations adjusted by the European Centre for Medium-Range Weather Forecasts’ 40-year reanalysis data[106]. Surface air temperature is taken from the Climatic Research Unit gridded global surface temperature dataset[107].
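
The flavor of such a sampling algorithm can be illustrated with a minimal Python sketch. The array names below are hypothetical, and the actual ISCCP simulator is considerably more elaborate (it accounts for cloud optical depth and overlap, among other things); the sketch only captures the idea of diagnosing clouds from the top down, as a satellite would.

    import numpy as np

    # Hypothetical inputs: cloud fraction on a (level, lat, lon) grid with
    # level 0 at the model top, and the pressure (hPa) of each model level.
    def cloud_top_pressure(cloud_frac, p_levels, threshold=0.1):
        """Report the pressure of the highest cloudy layer in each column,
        mimicking what a downward-looking satellite would see; columns that
        stay clear at every level are returned as NaN."""
        nlev = cloud_frac.shape[0]
        ctp = np.full(cloud_frac.shape[1:], np.nan)
        for k in range(nlev):  # scan from the model top downward
            hit = np.isnan(ctp) & (cloud_frac[k] > threshold)
            ctp[hit] = p_levels[k]
        return ctp

Histograms of the resulting cloud-top pressures can then be compared with the corresponding ISCCP retrievals, which is what makes the comparison rigorous: model and observation are sampled the same way.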

Common Problems

The CCSM and GFDL development teams met several times to compare experiences and discuss common problems in the two models. A subject of considerable discussion and concern was the tendency for an overly strong “cold tongue” to develop in the eastern equatorial Pacific Ocean and for associated errors to appear in the pattern of precipitation in the Inter-Tropical Convergence Zone (often referred to as the “double-ITCZ problem”). Meeting attendees noted that the equilibrium climate sensitivities of the two models to doubled atmospheric carbon dioxide (see Model Climate Sensitivity) had converged from earlier generations, in which the NCAR model was on the low end of the canonical sensitivity range of 1.5 to 4.5 K while the GFDL model was near the high end. This convergence in global mean sensitivity was considered coincidental because no specific actions were taken to engineer convergence. It was not accompanied by any noticeable convergence in cloud-feedback specifics or in the regional temperature changes that make up global mean values.

Reductive vs. Holistic Evaluation of Models

To evaluate models, appreciation of their structure is necessary. For example, discussion of climatic response to increasing greenhouse gases is intimately related to the question of how infrared radiation escaping to space is controlled. When summarizing results from climate models, modelers often speak and think in terms of a simple energy balance model in which the global mean infrared energy escaping to space has a simple dependence on global mean surface temperature. Water vapor or cloud feedbacks often are incorporated into such global mean energy balance models with simple relationships that can be tailored easily to generate a desired result. In contrast, Fig. 2.4 shows a snapshot at an instant when infrared radiation is escaping to space in the kind of AGCM discussed in this report. Detailed distributions of clouds and water vapor simulated by the model and transported by the model’s evolving wind fields create complex patterns in space and time that, if the simulation is sufficiently realistic, resemble images seen from satellites viewing Earth at infrared wavelengths.
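
The simple energy balance picture referred to above can be written in a few lines. The sketch below assumes the classic linearized form OLR = A + B·T, with illustrative Budyko-type coefficients that are not taken from the report; it shows why such models can be tailored easily, since the warming from a given forcing is simply the forcing divided by B.

    # Zero-dimensional energy balance: absorbed solar = outgoing infrared,
    # with the linearized form OLR = A + B*T (T in deg C). The coefficients
    # are illustrative Budyko-type values, not tuned to any particular GCM.
    S0 = 1361.0            # solar constant, W m^-2
    albedo = 0.30          # planetary albedo
    A, B = 203.3, 2.09     # W m^-2 and W m^-2 per deg C

    absorbed = (S0 / 4.0) * (1.0 - albedo)
    T_eq = (absorbed - A) / B   # equilibrium temperature, about 17 deg C
    dT = 3.7 / B                # warming from ~3.7 W m^-2 (2xCO2), ~1.8 deg C
    print(T_eq, dT)

Folding stronger water vapor or cloud feedbacks into a smaller effective B increases the predicted warming proportionally, which is precisely the sense in which such global mean models can be adjusted to generate a desired result.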

As described above, AGCMs evolve the state of the atmosphere and land system forward in time, starting from some initial condition. They consist of rules that generate the state of each variable (e.g., temperature, wind, water vapor, clouds, rainfall rate, water storage in the land, and land-surface temperature) from its preceding state roughly a half-hour earlier. By this process a model simulates the weather over the Earth. To change the way the model’s infrared radiation reacts to increasing temperatures, these rules would have to be modified.
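
That update cycle can be sketched as a toy skeleton (this is not actual GCM code; the tendencies function merely stands in for the model’s dynamical and physical rules):

    import numpy as np

    DT = 1800.0  # seconds: the roughly half-hour step mentioned above

    def tendencies(state):
        # Stand-in for the model's rules (dynamics, radiation, clouds,
        # land surface, ...), each contributing a rate of change.
        return {name: np.zeros_like(field) for name, field in state.items()}

    def step(state):
        # Advance every prognostic field one step (forward Euler here;
        # real GCMs use more sophisticated time-differencing schemes).
        rates = tendencies(state)
        return {name: state[name] + DT * rates[name] for name in state}

    # One simulated day is 48 half-hour steps; a century is ~1.75 million.
    state = {"temperature": np.zeros((90, 144)), "humidity": np.zeros((90, 144))}
    for _ in range(48):
        state = step(state)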

One goal of climate modeling is to decrease empiricism and base models as much as possible on well-established physical principles. This goal is pursued primarily by decomposing the climate system into a number of relatively simple processes and interactions. Modelers focus on rules governing the evolution of these individual processes rather than working with more holistic concepts such as global mean infrared radiation escaping to space, average summertime rainfall over Africa, and average wintertime surface pressure over the Arctic. These are all outcomes of the model, determined by the set of reductive rules that govern the model’s evolution.

Suppose the topic under study is how ocean temperatures affect rainfall over Africa. An empirical statistical model could be developed using observations and standard statistical techniques in which the model is tuned to these observations. Alternatively, one can use an AGCM giving results like those pictured in Fig. 2.4. An AGCM does not deal directly with high-level climate output such as African rainfall averaged over some period. Rather, it attempts to simulate the climate system’s inner workings or dynamics at a much finer level of granularity. To the extent that the simulation is successful and convincing, the model can be analyzed and manipulated to uncover the detailed physical mechanisms underlying the connection between ocean temperatures and rainfall over Africa. The AGCM-simulated connection may or may not be as good as the fit obtained with the explicitly tuned statistical model, but a reductive model ideally provides a different level of confidence in its explanatory and predictive power. See, for example, Hoerling et al.[108] for an analysis of African rainfall and ocean temperature relationships in a set of AGCMs.
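
For contrast with the reductive approach, the empirical alternative mentioned above can be as simple as an ordinary least-squares fit. The sketch below uses synthetic stand-in data rather than real observations; the indices and the assumed linear relationship are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for observed annual anomalies: an ocean-temperature
    # index and an African rainfall index with an assumed linear relationship.
    sst_index = rng.normal(size=50)
    rainfall = 0.6 * sst_index + rng.normal(scale=0.5, size=50)

    # "Tune" the statistical model to the observations (least squares).
    slope, intercept = np.polyfit(sst_index, rainfall, 1)

    # Predict the rainfall anomaly for a hypothetical ocean state.
    print(slope * 1.5 + intercept)

Such a model can fit the historical record well, but it offers no mechanism: it cannot say why ocean temperatures and rainfall are connected, which is what the reductive AGCM analysis aims to uncover.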

Our confidence in the explanatory and predictive power of climate models grows with their ability to simulate many climate system features simultaneously with the same set of physically based rules. When a model’s ability to simulate the evolution of global mean temperature over the 20th Century is evaluated, it is important to make this evaluation in the context of the model’s ability, for example, to spontaneously generate El Niño variability of the correct frequency and spatial structure and to capture the effects of El Niño on rainfall and clouds. The quality of such simultaneous simulations adds confidence in the reductive rules used to generate all of these phenomena.

A difficulty to which we will return frequently in this report is that of relating climate-simulation qualities to a level of confidence in the model’s ability to predict climate change.

Figure 2.4. A Snapshot in Time of Infrared Radiation Escaping to Space in a Version of Atmospheric Model AM2 Constructed at NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL 2004). The largest amount of energy emitted is in the darkest areas, and the least is in the brightest areas. This version of the atmospheric model has higher resolution than that used for simulations in the CMIP3 archive (50 km rather than 200 km), but, other than resolution, it uses the same numerical algorithm. The resolution is typical in many current studies with atmosphere-only simulations.

Use of Model Metrics

Recently, objective evaluation has expanded rapidly with the wide availability of model simulation results in the CMIP3 database[109]. One important area of research is the design of metrics to test the ability of models to simulate well-observed climate features[110]. It is not yet clear which aspects of observed climate must be simulated accurately to ensure reliable future predictions. For example, models that simulate the most realistic present-day temperatures for North America may not generate the most reliable projections of future temperature changes. Projected climate changes in North America may depend strongly on temperature changes in the tropical Pacific Ocean and on the manner in which the jet stream responds to them; the quality of a model’s simulation of air-sea coupling over the Pacific might therefore be a more relevant metric. Nevertheless, metrics can provide guidance about the overall strengths and weaknesses of individual models, as well as the general state of modeling.

The use of metrics also can explain why the “best” climate model cannot be chosen at this time. In Fig. 2.5 below, each colored triangle represents a different metric for which each model was evaluated (e.g., “ts” represents surface temperature). The figure displays the relative error for a variety of metrics for each model, represented by a vertical column above each tick mark on the horizontal axis. Values less than zero represent a better-than-average simulation of the field measured by a particular metric, while values greater than zero indicate errors greater than the average. The black triangles connected by the dashed line represent the normalized sum of the errors over all 23 fields, and the models are ranked from left to right by this total error. As can be seen, models with the lowest total errors tend to score better than average on most individual metrics, but not on all of them. For an individual application, the model with the lowest total error may not be the best choice.
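
A minimal sketch of the normalization behind such a figure, in the spirit of Gleckler et al. (the published procedure differs in detail), expresses each model’s error in each field relative to the typical error across models; the error values below are hypothetical placeholders.

    import numpy as np

    # Hypothetical RMS errors: one row per model, one column per climate
    # field (e.g., surface temperature "ts", precipitation, ...); real
    # values would come from model-observation comparisons.
    rmse = np.array([[1.2, 0.8, 2.0],
                     [1.0, 1.1, 1.7],
                     [1.5, 0.9, 2.4]])

    # Relative error per field: distance from the typical (median) model,
    # so values below zero mean better than average, as in Fig. 2.5.
    typical = np.median(rmse, axis=0)
    rel_error = (rmse - typical) / typical

    # Total score per model: the average over fields, used for ranking.
    total = rel_error.mean(axis=1)
    print(np.argsort(total))  # model indices, lowest to highest total error

The normalization is what makes fields with different units (kelvin, mm/day, hPa) comparable, and it is also why a model can rank first overall while still scoring below average on individual fields.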

Climate Simulations Discussed in this Report

Three types of climate simulation discussed in this report are described below. They differ according to which climate-forcing factors are used as model input.

Control runs use constant forcing. The sun’s energy output and the atmospheric concentrations of carbon dioxide and other gases and aerosols do not change in control runs. As with other types of climate simulation, day-night and seasonal variations occur, along with internal “oscillations” such as the El Niño-Southern Oscillation (ENSO). Other than these variations, the control run of a well-behaved climate model is expected eventually to reach a steady state.

Values of control-run forcing factors often are set to match present-day conditions, and model output is compared with present-day observations. Actually, today’s climate is affected not only by current forcing but also by the history of forcing over time—in particular, past emissions of greenhouse gases. Nevertheless, present-day control-run output and present-day observations are expected to agree fairly closely if models are reasonably accurate. (See Model Simulation of Major Climate Features for comparison of model control runs with observations.)
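
Whether a control run has in fact settled into a steady state can be checked with a simple drift diagnostic, sketched below with a synthetic temperature series standing in for model output.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic global mean surface temperature from a 500-year control
    # run: internal variability around a constant climate plus a small
    # residual drift.
    years = np.arange(500)
    t_global = 14.0 + 2e-4 * years + rng.normal(scale=0.1, size=years.size)

    # Drift diagnostic: the linear trend, here in K per century; a
    # well-behaved control run should give a value near zero.
    trend = np.polyfit(years, t_global, 1)[0]
    print(f"drift: {100 * trend:.3f} K per century")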

Idealized climate simulations are aimed at understanding important processes in models and in the real world. They include experiments in which the amount of atmospheric carbon dioxide increases at precisely 1% per year (about twice the current rate) or doubles instantaneously. Carbon dioxide doubling experiments typically are run until the simulated climate reaches equilibrium with the enhanced greenhouse effect. Until the mid-1990s, idealized simulations often were employed to assess possible future climate changes, including human-induced global warming. Recently, however, more realistic time-evolving simulations (defined immediately below) have been used for making climate predictions. (See Model Climate Sensitivity for discussion of idealized simulations and their implications for climate sensitivity.)
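
A compounding rate of 1% per year doubles the concentration in roughly 70 years, as the short calculation below shows.

    import math

    # Compound growth at 1% per year: (1.01)**n = 2, so n = ln 2 / ln 1.01.
    n_years = math.log(2.0) / math.log(1.01)
    print(f"CO2 doubles after about {n_years:.1f} years")  # ~69.7 years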

Time-dependent climate-forcing simulations are the most realistic, especially for eras in which climate forcing is changing rapidly, such as the 20th and 21st centuries. Input for 20th Century simulations includes observed time-varying values of solar energy output, atmospheric carbon dioxide, and other climate-relevant gases and aerosols, including those produced in volcanic eruptions. Each modeling group uses its own best estimate of these factors. Significant uncertainties remain in many of them, especially atmospheric aerosols, so different models use different input for their 20th Century simulations. (See Model Climate Sensitivity for discussion of uncertainties in climate-forcing factors and Climate models and twentieth century trends for discussion of 20th Century simulations after comparing control runs with observations.)

Time-evolving climate forcing also is used as input for modeling future climate change. This subject is discussed in CCSP Synthesis and Assessment Product 3.2. Finally, we mention for the record simulations of the distant past (various time periods ranging from early Earth up to the 19th Century). These simulations are not discussed in this report, but some of them have been used to loosely “paleocalibrate” simulations of the more recent past and the future [111].

Figure 2.5. Model Metrics for 23 Different Climate Fields. Values less than 0 indicate an error smaller than that of the average CMIP3 model, while values greater than 0 indicate an error larger than the average. The black triangles connected by the black line show a total score obtained by averaging over all 23 fields. Each tick mark represents a different model. [Figure adapted from P.J. Gleckler, K.E. Taylor, and C. Doutriaux, 2008: Performance metrics for climate models. Journal of Geophysical Research, 113, D06104, doi:10.1029/2007JD008972. Reproduced by permission of the American Geophysical Union (AGU).]

Climate Models: An Assessment of Strengths and Limitations - Table of Contents

  1. Strengths and limitations of climate models: Executive Summary
  2. History of climate model development
  3. Global Climate System Models
  4. Downscaling and Regional Climate Models
  5. Model Climate Sensitivity
  6. Model Simulation of Major Climate Features
  7. Future Climate Model Development
  8. Applications of Climate Model Results

References

This article was initially drawn from CCSP, 2008: Climate Models: An Assessment of Strengths and Limitations. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research. D.C. Bader, C. Covey, W.J. Gutowski Jr., I.M. Held, K.E. Kunkel, R.L. Miller, R.T. Tokmakian and M.H. Zhang (Authors). Department of Energy, Office of Biological and Environmental Research, Washington, D.C., USA, 124 pp.

Citation

Department of Energy (2012). Global Climate System Models. Retrieved from http://editors.eol.org/eoearth/wiki/Global_Climate_System_Models