
Climate Science

First published Fri May 11, 2018

Climate science investigates the structure and dynamics of earth’s climate system. It seeks to understand how global, regional and local climates are maintained as well as the processes by which they change over time. In doing so, it employs observations and theory from a variety of domains, including meteorology, oceanography, physics, chemistry and more. These resources also inform the development of computer models of the climate system, which are a mainstay of climate research today. This entry provides an overview of some of the core concepts and practices of contemporary climate science as well as philosophical work that engages with them. The focus is primarily on epistemological and methodological issues that arise when producing climate datasets and when constructing, using and evaluating climate models. Some key questions and findings about anthropogenic climate change are also discussed.

1. Introduction

The field of climate science emerged in the second half of the twentieth century. Though it is sometimes also referred to as “climatology”, it differs markedly from the field of climatology that came before. That climatology, which existed from the late-nineteenth century (if not earlier), was an inductive science, in many ways more akin to geography than to physics; it developed systems for classifying climates based on empirical criteria and, by the mid-twentieth century, was increasingly focused on the calculation of statistics from weather observations (Nebeker 1995; Edwards 2010; Weart 2008 [2017, Other Internet Resources]; Heymann & Achermann forthcoming). Climate science, by contrast, aims to explain and predict the workings of a global climate system—encompassing the atmosphere, oceans, land surface, ice sheets and more—and it makes extensive use of both theoretical knowledge and mathematical modeling. In fact, the emergence of climate science is closely linked to the rise of digital computing, which made it possible to simulate the large-scale motions of the atmosphere and oceans using fluid dynamical equations that were otherwise intractable; these motions transport mass, heat, moisture and other quantities that shape paradigmatic climate variables, such as average surface temperature and rainfall. Today, complex computer models that represent a wide range of climate system processes are a mainstay of climate research.

The emergence of climate science is also linked to the issue of anthropogenic climate change. In recent decades, growing concern about climate change has brought a substantial influx of funding for climate research. It is a misconception, however, that climate science just is the study of anthropogenic climate change. On the contrary, there has been, and continues to be, a significant body of research within climate science that addresses fundamental questions about the workings of the climate system. This includes questions about how energy flows in the system, about the roles of particular physical processes in shaping climates, about the interactions that occur among climate system components, about natural oscillations within the system, about climate system feedbacks, about the predictability of the climate system, and much more.

This entry provides an overview of some of the core concepts and practices of contemporary climate science, along with philosophical work that engages with them. Thus far, most of this philosophical work has focused on the epistemology of climate modeling and closely-related topics. Section 2 introduces a few basic concepts: climate system, climate and climate change. Section 3 turns to climate data, highlighting some of the complexities and challenges that arise when producing three important types of climate dataset. Section 4 focuses on climate modeling, briefly describing some of the main types of climate model employed today and then considering in more detail the construction, use and evaluation of complex global climate models. Finally, Section 5 discusses research in climate science that addresses the issue of anthropogenic climate change, as well as several recent controversies related to this research; it also provides some pointers to the large literature that has emerged on ethics and climate change.

2. Basic Concepts

On a standard characterization, Earth’s climate system is the complex, interactive system consisting of the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere (IPCC-glossary: 1451). Some definitions appear to be narrower, limiting the scope of the climate system to those aspects of the atmosphere, hydrosphere, etc. that jointly determine the values of paradigmatic climate variables, such as average surface temperature and rainfall, in response to external influences (see, e.g., American Meteorological Society 2017a). In either case, it would seem that some human activities, including those that release greenhouse gases to the atmosphere and those that change the land surface, are part of the climate system. In practice, however, climate scientists often classify such human activities as external influences, along with volcanic eruptions (which eject reflective aerosols and other material into the atmosphere) and solar output (which is the primary source of energy for the system) (see IPCC-glossary: 1454). Classifying human activities as external to the climate system seems to be a pragmatic choice—it is easier, and a reasonable first approximation, to represent anthropogenic greenhouse gas emissions as exogenous variables in climate models—though it may also reflect a deeper ambivalence about whether humans are part of nature.

A climate is a property of a climate system, but there are different views on what sort of property it is. According to what might be called actualist views, a climate is defined by actual conditions in the climate system. Narrower actualist definitions, which have their roots in climatology (see Section 1), reference weather conditions only. For example, the climate of a region is often defined as its average weather conditions, or the statistical distribution of those conditions, when long time periods are considered (Houghton 2015: 2; IPCC-glossary: 1450). The standard period of analysis, set by the World Meteorological Organization, is 30 years, defining a “climate normal” for a region. Given recent rates of change in climate system conditions, however, some climate scientists suggest that climate normals should now be defined over much shorter periods (Arguez & Vose 2011). Broader actualist definitions of climate reference conditions throughout the climate system. The Intergovernmental Panel on Climate Change (IPCC), for example, gives a broader definition of climate as “the state, including a statistical description, of the climate system” (IPCC-glossary: 1450). These broader definitions emerged in the second half of the twentieth century in connection with increased efforts to understand, with the help of physical theory, how climates in the narrower sense are maintained and changed via atmospheric, oceanic and other processes. These efforts in effect marked the emergence of climate science itself (see Section 1).

A number of other definitions of “climate” are what Werndl (2016) calls model-immanent: they reference conditions indicated by a model of the climate system, rather than actual conditions. For example, taking a dynamical systems perspective, climate is often identified with an attractor of the climate system, that is, with the conditions represented by an attractor of a (perfect) mathematical model of the climate system (Palmer 1999; Smith 2002). Roughly speaking, an attractor is a set of points to which the values of variables in a dynamical model tend to evolve from a wide range of starting values. Actual conditions in the climate system over a finite period are then understood to be a single realization from the set of possibilities represented by the attractor. A practical drawback of this way of thinking about climate is that the distribution of conditions in a given finite time period need not resemble those represented by an (infinite-time) attractor (Smith 2002; Werndl 2016). More fundamentally, standard (autonomous) dynamical systems theory does not apply to models of the climate system in which external factors like solar output and greenhouse gas concentrations are changing significantly with time, as they are for the real climate system; how to apply the resources of non-autonomous dynamical systems theory to the climate system is a current area of research (e.g., Chekroun et al. 2011; Drótos et al. 2015).
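The attractor idea can be made concrete with a toy example. The sketch below uses the Lorenz-63 system—a standard illustrative chaotic model, not a model of the climate system—with its conventional parameter values. Two trajectories launched from minutely different initial states soon diverge pointwise, yet their long-run statistics nearly coincide; on the climate-as-attractor view, it is such long-run distributions, not individual trajectories, that define a climate.

```python
# Toy illustration of the attractor idea using the Lorenz-63 system.
# Pointwise prediction fails under chaos, but long-run statistics are
# robust to small changes in the initial state.
import numpy as np

def lorenz_z_series(x, y, z, steps=200_000, dt=0.001,
                    sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 equations (forward Euler) and record z."""
    zs = np.empty(steps)
    for i in range(steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        zs[i] = z
    return zs

z1 = lorenz_z_series(1.0, 1.0, 1.0)
z2 = lorenz_z_series(1.0, 1.0, 1.0 + 1e-6)  # minutely perturbed initial state
print(z1[-1], z2[-1])        # final states differ markedly (chaos)
print(z1.mean(), z2.mean())  # long-run means nearly agree (~23.5)
```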

Werndl (2016) provides a critical, philosophical examination of several definitions of climate and proposes a novel, model-immanent definition on which a climate is a distribution of values of climate variables over a suitably-long but finite time period, under a regime of external conditions and starting from a particular initial state. Roughly speaking, a regime is a pattern of variation in external conditions that has an approximately constant mean value over different sub-periods of a chosen time period. This definition is model-immanent because it refers to the distribution of conditions that would obtain (i.e., the conditions that a perfect model of the climate system would indicate) if the climate system were subjected to a regime for a suitably-long time period; this distribution defines the climate of periods in which that regime actually obtains, even if those periods are relatively short. Werndl’s definition avoids many of the problems that she finds with other definitions, but still some questions remain, including how to employ the regime concept when external conditions are changing rapidly.

Definitions of climate change are closely related to definitions of climate. For instance, scientists subscribing to the climate-as-attractor view might characterize climate change as the difference between two attractors, one associated with a set of external conditions obtaining at an earlier time and one associated with a different set obtaining at a later time. By contrast, a definition of climate change associated with a narrower, actualist view is:

any systematic change in the long-term statistics of climate elements (such as temperature, pressure, or winds) sustained over several decades or longer. (American Meteorological Society 2017b)

The latter definition, unlike the former, allows that climate change might occur even in the absence of any changes in external conditions, as a result of natural processes internal to the climate system (e.g., slowly-evolving ocean circulations). That is, the latter definition allows that climate change can be a manifestation of internal variability in the climate system.

The notion of internal variability and various other concepts in climate science also raise interesting questions and problems, both conceptual and empirical (Katzav & Parker forthcoming). Thus far, however, the foundations of climate science have remained largely unexplored by philosophers.

3. Climate Data

The sources and types of observational data employed in climate science are tremendously varied. Data are collected not only at land-based stations, but also on ships and buoys in the ocean, on airplanes, on satellites that orbit the earth, by drilling into ancient ice at earth’s poles, by examining tree rings and ocean sediments, and in other ways. Many challenges arise as climate scientists attempt to use these varied data to answer questions about climate and climate change. These challenges stem both from the character of the data—they are gappy in space and time, and they are obtained from instruments and other sources that have limited lifespans and that vary in quality and resolution—and from the nature of the questions that climate scientists seek to address, which include questions about long-term changes on a global scale.

To try to overcome these challenges, climate scientists employ a rich collection of data modeling practices (Edwards 2010; Frigg et al. 2015b). These are practices that take available observations and apply various procedures for quality control, correction, synthesis and transformation. Some of these practices will be highlighted below, in the course of introducing three important types of climate dataset: station-based datasets (Section 3.1), reanalyses (Section 3.2), and paleoclimate reconstructions (Section 3.3). Because of the extensive data modeling involved in their production, these datasets are often referred to as data products. As the discussion below will suggest, an interesting feature of data modeling practices in climate science is that they tend to be dynamic and iterative: data models for nearly the same set of historical observations are further and further refined, to address limitations of previous efforts.

3.1 Station-Based Datasets

The weather and climate conditions that matter most immediately to people are those near the earth’s surface, where they live. Coordinated networks of land-based observing stations—measuring near-surface temperature, pressure, precipitation, humidity and sometimes other variables—began to emerge in the mid-nineteenth century and expanded rapidly in the twentieth century (Fleming 1998: Ch.3). Today, there are thousands of stations around the world making daily observations of these conditions, often overseen by national meteorological services. In recent decades, there have been major efforts to bring together records of past surface observations in order to produce long-term global datasets that are useful for climate change research (e.g., Menne et al. 2012; Rennie et al. 2014). These ongoing efforts involve international cooperation as well as significant “data rescue” activities, including imaging and digitizing of paper records, in some cases with the help of the public.

Obtaining digitized station data, however, is just the first step. As Edwards (2010: 321) emphasizes, “…if you want global data, you have to make them”. To construct global temperature datasets that are useful for climate research, thousands of station records, amounting to millions of individual observational records, are merged, subjected to quality control, homogenized and transformed to a grid. Records come from multiple sources, and merging aims to avoid redundancies while maximizing comprehensiveness in station coverage (Rennie et al. 2014). Procedures for quality control seek to identify and remove erroneous data. For example, production of the Global Historical Climatology Network—Daily (GHCN-Daily) database involves 19 automated quality assurance tests designed to detect duplicate data, climatological outliers and spatial, temporal and internal inconsistencies (Durre et al. 2010). Homogenization attempts to remove jumps and trends in station time series that are due to non-climatic factors, e.g., because an instrument is replaced with a new one, a building is constructed nearby, or the timing of observations changes. Homogenization methods rely on station metadata, when it is available, as well as physical understanding and statistical techniques, such as change-point analysis (Costa & Soares 2009). Finally, for many purposes, it is useful to have datasets that are gridded, providing temperature values at points on a grid, where each point is associated with a spatial region (e.g., 2° latitude × 2° longitude). Transforming from a collection of stations to a grid involves further methodological choices: which stations should influence the value assigned to a given grid point, what to do if the associated region contains no reporting stations over a period, etc. In practice, scientific groups make different methodological choices (Hartmann et al. 2013).
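By way of illustration, the following sketch implements highly simplified versions of two of the steps just described: a robust outlier check and a transformation of station values to a regular grid. The station locations and readings are invented, and real products involve far more elaborate procedures (e.g., GHCN-Daily’s 19 quality-assurance tests).

```python
# Minimal sketch of quality control and gridding for station data.
# All station coordinates and readings are invented for illustration.
import numpy as np

def flag_outliers(values, threshold=3.5):
    """Flag readings with a large median/MAD-based (robust) z-score."""
    v = np.asarray(values, dtype=float)
    deviation = np.abs(v - np.median(v))
    z = 0.6745 * deviation / np.median(deviation)
    return z > threshold

def grid_stations(lats, lons, temps, cell=2.0):
    """Average station temperatures within each cell x cell degree box."""
    grid = {}
    for lat, lon, t in zip(lats, lons, temps):
        key = (np.floor(lat / cell) * cell, np.floor(lon / cell) * cell)
        grid.setdefault(key, []).append(t)
    # Boxes with no reporting station simply remain absent -- one of the
    # gaps that different scientific groups fill in different ways.
    return {k: float(np.mean(v)) for k, v in grid.items()}

print(flag_outliers([14.2, 14.5, 13.9, 55.0, 14.1]))  # flags only the 55.0
print(grid_stations([51.1, 51.8, 53.2], [0.4, 1.3, 0.9], [14.2, 14.5, 13.9]))
```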

Gridded station-based datasets for temperature, precipitation and other variables have been developed (e.g., Harris et al. 2014). Surface temperature datasets have attracted particular attention, because of their role in efforts to quantify the extent of recent global warming. Three prominent sources for such temperature datasets are NASA’s Goddard Institute for Space Studies (GISS), the University of East Anglia’s Climatic Research Unit (CRU) and the U.S. National Centers for Environmental Information (NCEI). Periodically, these groups develop new versions of their datasets reflecting both the acquisition of additional data as well as methodological innovation, e.g., addressing additional sources of inhomogeneity (see Hansen et al. 2010; Jones et al. 2012; Lawrimore et al. 2011). Despite the many different methodological choices involved in producing these datasets, there is good agreement among analyses of twentieth century global land surface temperature changes derived from them, especially for the second half of the twentieth century. Nevertheless, climate contrarians have expressed concern that these analyses exaggerate late twentieth-century warming. This recently motivated a fourth, independent analysis by the non-governmental organization Berkeley Earth; drawing on a much larger set of station records and using a quite different geostatistical methodology for handling inhomogeneities, their analysis confirmed the twentieth century global warming seen in other datasets (Rohde et al. 2013).

3.2 Reanalyses

In situ observations of conditions away from earth’s surface are much less plentiful than surface observations. Radiosondes, which are balloon-borne instrument packages that ascend through the atmosphere measuring pressure, temperature, humidity and other variables, are now launched twice daily at hundreds of sites around the world, but they do not provide uniform global coverage (Ingleby et al. 2016). Satellite-borne instruments can provide global coverage—and are of significant value in the study of weather and climate—but they do not directly measure vertical profiles of key climatological variables like temperature; approximate values of these variables must be inferred in a complex way from radiance measurements. A recently-established system of several thousand ocean floats, known as Argo, takes local profiles of temperature and salinity in the top 2000 meters of the ocean roughly every 10 days (Riser et al. 2016). None of these data sources existed a century ago, though observations of conditions away from earth’s surface were sometimes made (e.g., using thermometers attached to kites).

One way to remedy spatial and temporal gaps in observations is to perform statistical interpolation. In the 1990s, however, climate scientists also began to employ a different type of methodology, known as data assimilation (Kalnay 2003). First developed in the context of weather forecasting, data assimilation estimates the three-dimensional state of the atmosphere or ocean using not only available observations but also one or more forecasts from a physics-based simulation model. The forecast constitutes a first-guess estimate of the atmosphere or ocean state at the time of interest; this is updated in light of observational data collected around that time. In daily weather forecasting, the resulting best estimate is known as the analysis, and it provides initial conditions for weather prediction models. To produce long-term datasets for climate research, data assimilation is performed iteratively for a sequence of past times (e.g., every 12 hours over several decades), producing a retrospective analysis or reanalysis for each time in the sequence (Bengtsson & Shukla 1988; Edwards 2010).
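The core of the data assimilation update can be illustrated in one dimension; operational systems solve an analogous problem in millions of dimensions with matrix-valued uncertainties. In the sketch below, all numbers are illustrative: a forecast (the first guess) is combined with an observation, each weighted according to its estimated error variance.

```python
# Minimal one-dimensional sketch of the data-assimilation update.
# The forecast ("background") is combined with an observation, weighted
# by their respective error variances.

def analysis_update(x_background, var_background, y_obs, var_obs):
    """Return the analysis estimate and its error variance."""
    gain = var_background / (var_background + var_obs)   # Kalman gain
    x_analysis = x_background + gain * (y_obs - x_background)
    var_analysis = (1.0 - gain) * var_background
    return x_analysis, var_analysis

# Forecast says 287.0 K (variance 1.0); an observation says 288.2 K
# (variance 0.25). The analysis sits closer to the more certain source.
x_a, var_a = analysis_update(287.0, 1.0, 288.2, 0.25)
print(f"analysis: {x_a:.2f} K, variance: {var_a:.2f}")   # 287.96 K, 0.20
```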

Atmospheric reanalysis datasets are in heavy use in climate research, because they provide complete gridded data at regular time steps over long time periods, both at the surface and above, for a wide range of variables, including ones that are difficult to measure with instruments. Many reanalysis datasets cover a few decades of the twentieth century and are produced using as much of the available observational data as is feasible (see Dee et al. 2016 for an overview); when satellite data are available, this can amount to millions of observations for each analysis period in the sequence. An interesting exception is NOAA’s 20th Century Reanalysis (20CR) project: it covers the entire twentieth century and assimilates only surface pressure observations and observed monthly sea surface temperatures and sea ice distribution (Compo et al. 2011). 20CR is not informed by any temperature observations over land, yet its results indicate twentieth century global warming over land similar to that seen in station-based temperature datasets.

20CR has been described as providing independent “observational” confirmation of the twentieth century warming seen in station-based datasets (Compo et al. 2013). More generally, despite the central role of computer forecasts in reanalysis, many climate scientists refer to reanalysis data as “observations” and use them as such: to investigate climate system dynamics, to evaluate climate models, to find evidence of the causes of recent climate change, and so on. Other climate scientists, however, emphasize that reanalyses should not be confused with “real” observations (e.g., Schmidt 2011). Parker (2017) argues that differences between data assimilation and traditional observation and measurement are not as great as one might think; she suggests that data assimilation can be understood as a complex measuring procedure that is still under development.

3.3 Paleoclimate Reconstructions

Climate scientists are also interested in climates of the distant past, before the advent of meteorological instruments. These paleoclimatic investigations rely on proxies: aspects of the natural environment that are “interpreted, using physical and biophysical principles, to represent some combination of climate-related variations back in time” (IPCC-glossary: 1460). For example, variations in the ratio of different isotopes of oxygen in deep ice cores and in the fossilized shells of tiny marine organisms serve as proxies for variations in temperature. Additional proxies for climate-related variables come via tree rings, corals, lake sediments, boreholes and other sources (PAGES-2K-Consortium 2013; Masson-Delmotte et al. 2013).

Producing proxy-based paleoclimate reconstructions, especially on hemispheric or global scales, involves a host of methodological challenges. Just a few will be mentioned here. First, as the characterization above suggests, proxies often reflect the influence of multiple environmental factors at once. For example, tree rings can be influenced not only by temperature but also by precipitation, soil quality, cloud cover, etc., which makes it more difficult to confidently infer a single variable of interest, such as temperature. Second, proxies often must be calibrated using recent observations made with meteorological instruments, but the instrumental record covers only a very short period in earth’s history, and factors shaping the proxy in the distant past may be somewhat different. Third, proxies of a given type can have limited geographical coverage: ice cores are found only at the poles, tree rings are absent in locations where there is no distinct growing season, and so on. Fourth, the temporal resolution of different types of proxy can differ markedly—from a single year to a century or more—adding a layer of complexity when attempting to use multiple proxies together. For these reasons and others, quantifying the uncertainties associated with proxy reconstructions is also a significant challenge (Frank et al. 2010).
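A toy sketch may help to convey the basic logic of calibration (the second challenge above): a proxy series is regressed against instrumental temperatures over their period of overlap, and the fitted relation is then applied to the proxy’s pre-instrumental portion. All values below are invented, and real reconstructions combine many proxies with far more careful statistics and uncertainty quantification.

```python
# Toy sketch of proxy calibration via linear regression; all numbers are
# invented for illustration.
import numpy as np

proxy_recent = np.array([0.8, 1.1, 0.9, 1.3, 1.5, 1.4])       # e.g., ring widths
temps_recent = np.array([13.9, 14.2, 14.0, 14.4, 14.6, 14.5])  # instrumental, deg C

# Fit the proxy-temperature relation over the period of overlap.
slope, intercept = np.polyfit(proxy_recent, temps_recent, deg=1)

# Apply the fitted relation to pre-instrumental proxy values.
proxy_old = np.array([0.7, 0.9, 1.0])
reconstructed = slope * proxy_old + intercept
print(np.round(reconstructed, 2))
```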

Despite these challenges, climate scientists have produced paleoclimatic reconstructions covering various regions, time periods and climate-related variables, including temperature, precipitation, streamflow, vegetation and more. Temperature reconstructions in particular have been a source of controversy (see Section 5.3), in part because they underwrite conclusions like this one, from the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report: the period 1983–2012 was very likely (i.e., probability >0.90) the warmest 30-year period of the last 800 years (Masson-Delmotte et al. 2013: 386). Vezér (2016a) suggests that inferences to such conclusions can be characterized as a form of variety-of-evidence reasoning, insofar as they involve multiple temperature reconstructions informed by different proxies, methodological assumptions and statistical techniques (see also Oreskes 2007).

4. Climate Modeling

Models of the climate system, especially computer simulation models, have come to occupy a central place in both theoretical and applied research in climate science. After providing an overview of some common types of climate model (Section 4.1), this section focuses primarily on atmosphere-ocean general circulation models and earth system models, discussing some noteworthy features of their construction (Section 4.2), some of their most important uses (Section 4.3), and how they are evaluated (Section 4.4). The focus is on these complex climate models both because they have attracted the most attention from philosophers and, relatedly, because they play particularly important roles in climate research today, including climate change research.

4.1 Types of Climate Model

Climate scientists often speak of a “hierarchy” or “spectrum” of climate models, ranging from simple to complex. The complexity of climate models increases with: the number of spatial dimensions represented; the resolution at which those dimensions are represented; the range of climate system components and processes “included” in the model; and the extent to which those processes are given realistic rather than simplified representations. As a consequence, more complex models tend to be more computationally demanding as well. The discussion below introduces a few types of model, illustrating different levels of complexity (for other types see, e.g., Stocker 2011; McGuffie & Henderson-Sellers 2014). The types discussed here are physics-based, in the sense that they are grounded to a significant extent in physical theory. Data-driven or “empirical” climate models also have been developed for some purposes but are less common and will be mentioned only in passing.

Among the simplest climate models are energy balance models (EBMs), designed to represent earth’s surface energy budget in a highly aggregate way (McGuffie & Henderson-Sellers 2014: Ch.3). These models, which often have surface temperature as their sole dependent variable, are constructed using both physical theory (e.g., the Stefan-Boltzmann equation) and empirical parameters (e.g., representing the albedo of the earth and the emissivity of the atmosphere). Zero-dimensional EBMs represent the entire climate system as a single point. They can be used to calculate by hand an estimate of the globally-averaged surface temperature when the system is in radiative equilibrium. One-dimensional (1-D) and two-dimensional (2-D) EBMs take a similar energy-budget approach but represent average temperature at different latitudes and/or longitudes and account in a rough way for the transport of heat between them (see Sellers 1969 for an early example). The equations of 1-D and 2-D EBMs are usually solved with the help of digital computers.[1]
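For illustration, the following sketch computes the equilibrium temperature of a zero-dimensional EBM; the albedo and emissivity values are representative choices for pedagogical purposes, not those of any particular published model.

```python
# Minimal zero-dimensional energy balance model (an illustrative sketch).
# At radiative equilibrium, absorbed solar radiation equals outgoing
# longwave radiation:  (S0 / 4) * (1 - alpha) = epsilon * sigma * T**4
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # total solar irradiance, W m^-2
ALPHA = 0.3       # planetary albedo (empirical parameter)
EPSILON = 0.62    # effective emissivity of the atmosphere (empirical)

def equilibrium_temperature(s0=S0, alpha=ALPHA, epsilon=EPSILON):
    """Globally-averaged surface temperature (K) at radiative equilibrium."""
    absorbed = (s0 / 4.0) * (1.0 - alpha)            # W m^-2
    return (absorbed / (epsilon * SIGMA)) ** 0.25

print(f"{equilibrium_temperature():.1f} K")  # ~287 K, near the observed ~288 K
```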

Earth system models of intermediate complexity (EMICs) come in a range of forms but tend to be both comprehensive and highly idealized (Claussen et al. 2002). An EMIC might incorporate not only representations of the atmosphere, ocean, land surface and sea ice, but also representations of some biospheric processes, ice sheets and ocean sediment processes. These representations, however, are often relatively simple or coarse. The atmosphere component of an EMIC, for instance, might be a two-dimensional enhanced version of an EBM known as an energy moisture balance model. The ocean might be simulated explicitly in three dimensions using fluid dynamical equations, but with low spatiotemporal resolution. (See Flato et al. 2013: Table 9.2 and 9.A.2 for examples and further details.) The relative simplicity and coarseness of EMICs make them computationally efficient, however, which in turn makes it possible to use them to simulate the climate system on millennial time scales.

At the complex end of the spectrum, coupled ocean-atmosphere general circulation models (GCMs) simulate atmospheric and oceanic motions in three spatial dimensions, often at as high a resolution as available supercomputing power allows. They also incorporate representations of the land surface and sea ice, and they attempt to account for important interactions among all of these components. GCMs evolved from atmosphere-only general circulation models, which in turn were inspired by early weather forecasting models (Edwards 2000; Weart 2010). The latest generation of climate models, earth system models (ESMs), extend GCMs by incorporating additional model components related to atmospheric chemistry, aerosols and/or ocean biogeochemistry (Flato et al. 2013: 747). In both GCMs and ESMs, numerical methods are used to estimate solutions to discretized versions of fluid dynamical equations at a set of points on a three-dimensional grid.[2] With increased computing power in recent decades, the horizontal spacing of these grid points for the atmosphere has decreased from several hundred kilometers to ~100 km, and the number of vertical layers has increased from ~10 to ~50, with a time step of 10–30 minutes (McGuffie & Henderson-Sellers 2014: 282). Despite this increased resolution, many important processes—including the formation of clouds and precipitation, the transfer of radiation, and chemical reactions—still occur at sub-grid scales; accounting for the effects of these processes is a major challenge, as discussed in the next section.
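The following sketch conveys the basic idea of solving a discretized fluid dynamical equation on a grid, using a one-dimensional advection equation and an upwind finite-difference scheme; GCMs/ESMs do something analogous, at vastly greater scale and sophistication, for momentum, mass, energy and moisture on a three-dimensional global grid. The grid spacing and time step below are merely suggestive of GCM-like values.

```python
# Sketch of a discretized transport equation: dT/dt = -u * dT/dx, solved
# with an upwind finite-difference scheme on a 1-D grid.
import numpy as np

nx, dx = 100, 100e3        # 100 grid points, 100 km spacing (~GCM-like)
u, dt = 10.0, 600.0        # 10 m/s wind; 10-minute time step
c = u * dt / dx            # Courant number; must be <= 1 for stability

# Initial condition: a warm anomaly centered near grid point 20.
T = 280.0 + 10.0 * np.exp(-((np.arange(nx) - 20) ** 2) / 50.0)

for _ in range(500):       # integrate forward ~3.5 days
    T[1:] = T[1:] - c * (T[1:] - T[:-1])   # upwind difference (u > 0)

print(f"warm anomaly now centered near grid point {int(np.argmax(T))}")
```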

Another important type of climate model is the regional climate model (RCM). Like GCMs and ESMs, RCMs are designed to be comprehensive and to incorporate realistic, theory-based representations of climate system processes. Unlike GCMs and ESMs, however, RCMs represent only a portion of the globe (e.g., a continent or country), which allows them to have a higher spatiotemporal resolution without exceeding available computing power. With this higher resolution, RCMs can explicitly simulate smaller-scale processes, and they also have the potential to reveal spatial variations in conditions—due to complex topography, for instance—that cannot be resolved by GCMs/ESMs. These features make RCMs an attractive tool for studies of regional climate change (e.g., Mearns et al. 2013). A challenge, however, is specifying conditions at the horizontal boundaries of the modeled region in a physically-consistent way; in practice, these boundary conditions often are informed by results from GCM/ESM simulations.

4.2 Constructing Climate Models

Currently, there are a few dozen state-of-the-art GCMs/ESMs housed at modeling centers around the world (see Flato et al. 2013: Table 9.1). These are huge models, in some cases involving more than a million lines of computer code; it takes considerable time to produce simulations of interest with them, even on today’s supercomputers. To construct a GCM/ESM from scratch requires a vast range of knowledge and expertise (see Lahsen 2005). Consequently, many of today’s GCMs and ESMs have been built upon the foundations of earlier generations of models (Knutti et al. 2013). Such models often have a layered history, with some parts of their code originally developed years or even decades ago—sometimes by scientists who are no longer working at the modeling center—and other parts just added or upgraded. Even models at different modeling centers sometimes have pieces of computer code in common, whether shared directly or borrowed independently from an earlier model.

Many of today’s GCMs/ESMs have an ostensibly modular design: they consist of several component models corresponding to different parts of the climate system—atmosphere, ocean, land surface, etc.—as well as a “coupler” that passes information between them where the spatial boundaries of the component systems meet and that must account for any differences in the component models’ spatiotemporal resolution (Alexander & Easterbrook 2015). Often these component models are modified versions of named, stand-alone models. For example, the Community Earth System Model version 1.2 (CESM1.2) housed at the U.S. National Center for Atmospheric Research incorporates the Community Atmosphere Model (CAM) and an extension of the Parallel Ocean Program (POP). Such modularity is intended to allow for easier reconfiguration of a GCM/ESM for different modeling studies, e.g., replacing the ocean model with a simpler version when this will suffice. Lenhard and Winsberg (2010), however, argue that complex climate models in fact exhibit only a “fuzzy” modularity: in order for the model as a whole to work well, some of the details of the component models will be adjusted to mesh with the particular features of other components and to try to compensate for their particular limitations.
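The following schematic sketch illustrates this modular architecture; the class and field names are invented for illustration and do not correspond to the interfaces of CESM or any other real model.

```python
# Schematic sketch of a modular coupled-model architecture. Component
# models step forward independently; a coupler passes boundary fields
# between them. All names and values are invented placeholders.

class Atmosphere:
    def step(self, sea_surface_temp):
        # ... compute winds, precipitation, surface fluxes ...
        return {"heat_flux": 120.0, "wind_stress": 0.1}   # placeholder fields

class Ocean:
    def step(self, heat_flux, wind_stress):
        # ... compute currents, update sea surface temperature ...
        return {"sea_surface_temp": 288.0}                # placeholder field

class Coupler:
    """Exchanges fields between components; a real coupler must also
    regrid between the components' differing spatiotemporal resolutions."""
    def __init__(self):
        self.atmos, self.ocean = Atmosphere(), Ocean()

    def run(self, n_steps):
        ocean_fields = {"sea_surface_temp": 288.0}
        for _ in range(n_steps):
            atmos_fields = self.atmos.step(ocean_fields["sea_surface_temp"])
            ocean_fields = self.ocean.step(**atmos_fields)

Coupler().run(10)
```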

Each component model in a GCM/ESM in turn incorporates representations of a number of important processes operating within that part of the climate system. For atmosphere models, for example, it is common to distinguish the “dynamics” and the “physics” (see, e.g., Neale et al. 2010). The dynamics (or “dynamical core”) is a set of fluid dynamical and thermodynamic equations reflecting conservation of momentum, mass and energy, as well as an equation of state. These equations are used to simulate large-scale atmospheric motions, which transport heat, mass and moisture; the scale of the motions that can be resolved depends on the grid mesh of the model. The “physics” encompasses representations of sub-grid processes that impact grid-scale conditions in a significant way: radiative transfer, cloud formation, precipitation and more. These processes are parameterized, i.e., represented as a function of the grid-scale variables that are calculated explicitly in the dynamical core (e.g., temperature, pressure, winds, humidity). Parameterization is necessary in other components of climate models as well, whenever sub-grid processes significantly influence resolved-scale variables.

Constructing a parameterization is akin to an engineering problem, where the goal is to find an “adequate substitute” (Katzav 2013a) for the explicit simulation of a sub-grid process, using a limited set of ingredients. This is a challenging task. For given values of the grid-scale variables, there are usually many possible realizations of the sub-grid conditions. The standard approach to parameterization has been a deterministic one, aiming to estimate the contribution that the sub-grid process would make on average, over many possible realizations that are physically consistent with a given set of grid-scale conditions (McFarlane 2011). An alternative approach gaining popularity is stochastic parameterization, which aims to estimate the contribution made by a single, randomly-selected member of the set of realizations physically consistent with the grid-scale conditions (Berner et al. 2017). Typically, parameterizations of either type are informed by physical understanding but also incorporate elements derived at least in part from observations (see also Sundberg 2007; Guillemot 2010); this is why parameterizations are often described as “semi-empirical”. Because climate models incorporate both accepted physical theory and such semi-empirical, engineered elements, they are sometimes described as having a hybrid realist-instrumentalist status (Parker 2006; Katzav 2013a; see also Goodwin 2015).
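The contrast between the two approaches can be illustrated with a toy parameterization; the functional form and numbers below are invented purely to mark the distinction.

```python
# Toy contrast between deterministic and stochastic parameterization of a
# sub-grid process; the functional form is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def precip_deterministic(humidity, temperature):
    """Expected sub-grid precipitation, averaged over possible realizations."""
    return max(0.0, 2.0 * (humidity - 0.8)) * (1.0 + 0.01 * (temperature - 273.0))

def precip_stochastic(humidity, temperature):
    """One randomly drawn realization consistent with the grid-scale state."""
    mean = precip_deterministic(humidity, temperature)
    return max(0.0, rng.normal(loc=mean, scale=0.3 * mean))

print(precip_deterministic(0.9, 288.0))                # always the same value
print([round(precip_stochastic(0.9, 288.0), 3) for _ in range(3)])  # varies
```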

Parameterization also gives rise to fuzzy modularity within component models of GCMs/ESMs: which parameterizations work best with a given climate model depends to some extent on how other sub-grid processes have already been represented and the errors that those other representations introduce (see also Lenhard & Winsberg 2010 on “generative entrenchment”). Even the best-available parameterizations, however, usually have significant limitations. Indeed, uncertainty about how to adequately parameterize sub-grid processes remains a major source of uncertainty in climate modeling. Differences in parameterizations—especially for cloud processes—account for much of the spread among simulations of the climate system’s response to increasing greenhouse gas concentrations (Flato et al. 2013: 743). This is one motivation for a new approach to parameterization: super-parameterization involves explicitly simulating a sub-grid process in a simplified way, by coupling a 1-D or 2-D model of the process (e.g., cloud formation) to each GCM/ESM grid point (Randall et al. 2013; Gramelsberger 2010). This is a multi-scale modeling approach that requires significant additional computing power, relative to traditional parameterization.

Construction of climate models also inevitably involves some tuning, or calibration: the (often ad hoc) adjustment of parameter values or other elements of the model in order to improve model performance, usually measured by fit with observations. The observations that are targeted vary from case to case; they might relate to individual processes or to aggregate, system-level variables, such as global mean surface temperature (see Mauritsen et al. 2012; Hourdin et al. 2017). Which parameter values are adjusted depends on the observations with which a better fit is sought, but often adjustments are made within parameterizations, to parameters whose best values are significantly uncertain. The extent to which a model is tuned is usually not documented or reported in detail (but see Mauritsen et al. 2012 for an instructive example). Recently, there have been calls for more transparency in reporting tuning strategies and targets (Hourdin et al. 2017), in part because information about tuning is relevant to model evaluation (see Section 4.4). An interesting question, however, is exactly what counts as tuning since, even when formal quantitative comparison with a set of observational data is not performed, modelers can be familiar with those data and may well make choices in model development—choices which could reasonably have been somewhat different—with the expectation that they will improve the model’s performance with respect to those already-seen data.
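A minimal sketch of tuning, using the toy energy balance model sketched in Section 4.1: a single uncertain parameter (the albedo) is adjusted so that the model’s equilibrium temperature matches an observed target. Real tuning involves many parameters, multiple targets and considerable judgment.

```python
# Minimal sketch of tuning: adjust one uncertain parameter of the toy EBM
# so that its equilibrium temperature matches an observed target value.
from scipy.optimize import minimize_scalar

SIGMA, S0, EPSILON = 5.67e-8, 1361.0, 0.62
TARGET = 288.0   # observed global mean surface temperature, K

def model_temperature(alpha):
    """Equilibrium temperature of the zero-dimensional EBM for albedo alpha."""
    return ((S0 / 4.0) * (1.0 - alpha) / (EPSILON * SIGMA)) ** 0.25

result = minimize_scalar(lambda a: (model_temperature(a) - TARGET) ** 2,
                         bounds=(0.1, 0.5), method="bounded")
print(f"tuned albedo: {result.x:.3f} -> {model_temperature(result.x):.1f} K")
```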

A further issue concerns the role of social and ethical values in climate model construction. Winsberg (2010, 2012; Biddle & Winsberg 2009) argues that such values influence climate model construction (and thereby results) both by shaping priorities in model development and via inductive risk considerations, e.g., when deciding to represent a climate system process in one way rather than another, in order to reduce the risk that the model’s results err in a way that would have particularly negative non-epistemic consequences. He contends that social and ethical values operate in climate model construction whenever there are no decisive, purely epistemic grounds for considering one model-building option to be the best available. Parker (2014a) disputes this, pointing to the importance of pragmatic factors, such as ease of implementation, local expertise and computational demands. Intemann (2015) argues that social and ethical values can legitimately influence climate model construction, including via the routes identified by Winsberg, when this promotes democratically-endorsed social and epistemic aims of research.

4.3 Uses of Climate Models

Climate models are used for many types of purpose; just a few will be mentioned here (see also Petersen 2012: Ch.5). One important use is in characterizing features of the climate system that are difficult to learn about via available observations. For example, the internal variability of the climate system is often estimated from the variability seen in long GCM/ESM simulations in which external conditions are held constant at pre-industrial levels (Bindoff et al. 2013). Internal variability is difficult to estimate from the instrumental record, because the record is relatively short and reflects the influence not just of natural internal processes but also of changing external conditions, such as rising greenhouse gas concentrations. Estimates of internal variability in turn play an important role in studies that seek to detect climate change in observations (see Section 5.1).

Climate models are also used as scientists seek explanations and understanding. Typically, explanations sought in climate science are causal; climate scientists seek accurate accounts of how combinations of climate system processes, conditions and features together bring about climate phenomena of interest, including climate change. Parker (2014b) identifies several ways in which climate models have facilitated such explanation: by serving as a surrogate for observations, the analysis of which can suggest explanatory hypotheses and help to fill in gaps in how-possibly/plausibly explanations; by allowing scientists to test hypotheses relevant to explanation, e.g., that a set of causal factors represented in the model is sufficient to produce a phenomenon (see also Lorenz 1970); and by serving as experimental systems that can be manipulated and studied in order to gain insight into their workings, which can inform thinking about how the climate system itself works. In connection with the latter, Held (2005) calls for increased efforts to develop and systematically study “hierarchies of lasting value”—sets of models, ranging from the highly idealized to the very complex, that are constructed such that they stand in known relations to one another so that the sources of differences in their behavior can be more readily diagnosed; he contends that the study of such hierarchies is essential to the development of climate theory in the twenty-first century.

In addition, climate models are used to make predictions. It might be thought that, due to chaos, climate prediction is impossible. But chaos is an obstacle to precisely predicting the trajectory of a system—the time-ordered sequence of its states over a period—not (necessarily) to predicting the statistical properties of one or more trajectories; climate prediction is concerned with the latter. Short-term predictions of climate for periods ranging from a year to a decade or so are made with both physics-based and empirical models (e.g., Meehl et al. 2014; Krueger & von Storch 2011). For these predictions, assumptions are made about what external conditions actually will be like over the forecast period, and the forecasts begin from an observationally-based estimate of the recent state of the climate system. Better known, however, is the use of climate models to make longer-term conditional predictions, known as projections. These are predictions of what future climate would be like under particular external condition scenarios, without assuming that any of those scenarios will actually occur, and they are often launched from an initial state that is meant to be representative of the climate at the start of the simulated period, though not estimated directly from observations (see Section 4.4). Climate change projections have become a major focus of climate research and will be discussed further in Section 5.2 below.
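The conditional logic of projections can be illustrated with the toy energy balance model from Section 4.1, made time-dependent: the same model is run under two stipulated forcing scenarios, yielding “what if” trajectories rather than forecasts of what will actually occur. The scenarios and the heat capacity value below are illustrative only.

```python
# Sketch of projections with a time-dependent version of the toy EBM:
# dT/dt = (absorbed solar - outgoing longwave + forcing) / C, run under
# two stipulated forcing scenarios. All scenario numbers are illustrative.
SIGMA, S0, ALPHA, EPSILON = 5.67e-8, 1361.0, 0.3, 0.62
HEAT_CAPACITY = 2.1e8      # J m^-2 K^-1, rough ocean mixed-layer value

def project(forcing_growth, years=100, T0=287.0):
    """Integrate the EBM forward under a linearly growing forcing (W m^-2/yr)."""
    dt = 365.25 * 24 * 3600.0    # one year, in seconds
    T = T0
    for year in range(years):
        forcing = forcing_growth * year
        net = (S0 / 4) * (1 - ALPHA) - EPSILON * SIGMA * T ** 4 + forcing
        T += dt * net / HEAT_CAPACITY
    return T

print(f"low scenario:  {project(0.02):.1f} K")     # weak forcing growth
print(f"high scenario: {project(0.06):.1f} K")     # strong forcing growth
```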

Recently there have been calls for discipline-level changes in the practice of climate modeling, in order to better serve some of the purposes just mentioned. To improve climate change projections, for instance, Shukla et al. (2009) call for a “revolution” in climate modeling, such that both expertise and computing power are consolidated in a few international modeling centers, allowing for much-higher-resolution simulations. Katzav and Parker (2015) critically examine this and other leading proposals and reflect on their potential benefits and costs.

4.4 Evaluating Climate Models

Evaluation of climate models occurs throughout the model development process. Parameterizations and component models are often tested individually “off-line” to see how they perform (Randall et al. 2007), and adjustments are made to try to improve them when they fall short of expectations. Likewise, there is testing and adjustment as the different model components are coupled. This evaluation can have both qualitative and quantitative dimensions. For instance, it might be checked that the model’s climate is basically realistic-looking, with salient features (e.g., of the atmospheric circulation) in roughly the right place. Quantitative evaluation produces scores on performance metrics. These can include measures of conservation and stability—how close does the model come to achieving a top-of-atmosphere energy balance?—as well as measures of fit between model output and observational data. The evaluation that occurs in the course of model development, however, is rarely reported in publications. What is reported instead are features of the model, including how it performs, once it is fully constructed, tuned and ready to be released for scientific use.
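As a simple example of the quantitative side of such evaluation, the following sketch computes one common performance metric, the root-mean-square error between model output and observational data; the values are invented.

```python
# Sketch of a simple performance metric: root-mean-square error between
# gridded model output and observations (in the field's own units).
import numpy as np

def rmse(model_field, obs_field):
    diff = np.asarray(model_field) - np.asarray(obs_field)
    return float(np.sqrt(np.mean(diff ** 2)))

print(rmse([287.1, 289.0, 285.5], [287.4, 288.2, 286.0]))  # illustrative values
```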

Climate model evaluation can have many different goals. A basic motivation is simply to learn about how climate models relate to the climate system. Borrowing the language of Giere (2004), we might say that a goal is to learn about the respects in which and degrees to which climate models are similar to the climate system (Lloyd 2010). A climate scientist might investigate, for example, whether a particular GCM exhibits a seasonal cycle in temperature for North America that roughly matches that which currently occurs in the real system. Evaluation efforts can also be directed at understanding similarities at the level of the processes and mechanisms underlying behaviors. In this regard, Lloyd (2009, 2010, 2015) points to fit between model results and observations on a variety of metrics, independent support for model components, and robustness of findings as considerations that are relevant. A second common goal of climate model evaluation, closely related to the first, is to learn whether a model is adequate for particular purposes (Parker 2009; Baumberger et al. 2017). As discussed in Section 4.3, scientists use climate models for various purposes: to characterize climate system properties (e.g., internal variability), to make projections of future climate change under particular scenarios, and so on. Many evaluation activities are intended to inform conclusions about climate models’ adequacy for these purposes and, ipso facto, about the levels of confidence that can be had in the models’ estimates of internal variability, their projections of future climate change, etc.

What is known about the ingredients of a climate model—e.g., that they closely approximate accepted theory or involve massive simplifications—can provide reasons to think that the model is similar (or not) to the climate system in particular ways or adequate (or not) for particular purposes (Baumberger et al. 2017). But climate scientists also seek evidence regarding similarity and adequacy by comparing model results to observations (see also Guillemot 2010). Such comparisons can be qualitative or quantitative, as noted above. In some cases, it is relatively straightforward to gauge whether model-data comparisons provide evidence for or against hypotheses about model-system similarities or about the adequacy of climate models for particular purposes. Prime candidates include cases in which what is hypothesized entails just a rough fit between model results and well-observed climate system behaviors. Thus, for example, the Intergovernmental Panel on Climate Change (IPCC) is able to report

very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period, including the more rapid warming in the second half of the 20th century, and the cooling immediately following large volcanic eruptions. (Flato et al. 2013: 743)

In many other cases, however, it is more difficult to gauge the extent to which model-data comparisons provide evidence of a model’s similarity to the climate system or its adequacy for particular purposes. There are a number of complicating factors.

First, differences between modeling results and observational data might not be due to shortcomings of the climate model. Climate datasets can contain errors or have uncertainties that are underestimated or not reported. Indeed, in more than one case, it has turned out that model-data conflicts were resolved largely in favor of the models (see, e.g., Lloyd 2012). There also have been cases where a poor fit between modeling results and data was judged to be due in part to errors in assumptions about the external conditions (e.g., aerosol concentrations) that obtained during the simulated period (e.g., Santer et al. 2014, 2017). More fundamental is an issue related to the initial conditions of climate simulations (see also Schmidt & Sherwood 2015). Oftentimes, a simulation for a particular period is launched from a representative state that is arrived at after letting the climate model run for a while under external conditions like those assumed to hold at the start of the period of interest; this “spin up” of the model is needed to allow its various components to come into balance with one another. Since this representative state inevitably differs somewhat from the actual conditions at the start of the period of interest (e.g., on Jan 1, 1900), and since the climate system is believed to be chaotic, the simulation cannot be expected to closely track observations throughout the period, even if the model perfectly represents all climate system processes; the goal is for differences between climate simulations and observations to be no larger than would result when running a perfect model from other initial states that are consistent with the external conditions at the start of the period of interest (i.e., no larger than differences stemming from internal variability).

Interpretation can be difficult even when there is relatively good fit between modeling results and observational data. One reason is that, to varying degrees, today’s climate models have been tuned to twentieth century observations. Tuning sometimes delivers an improved score on a performance metric by compensating for errors elsewhere in the model (Petersen 2000), and the stability of errors is not guaranteed as external conditions change, e.g., as greenhouse gas concentrations continue to rise (Stainforth et al. 2007; Knutti 2008). Some climate scientists suggest that data used in tuning cannot be used subsequently in model evaluation (e.g., Randall et al. 2007: 596; Flato et al. 2013: 750). Steele and Werndl (2013, 2016), however, argue that in some circumstances it is legitimate to use data for both calibration (i.e., tuning) and confirmation, something they illustrate with both Bayesian and frequentist methodologies. Frisch (2015) advocates what he calls a moderate predictivism in this context: if a climate model is tuned to achieve a good fit with data for a particular variable, this provides less support for the hypothesis that the model can deliver the same performance with respect to other predictions for that variable (e.g., future values), compared to the case when the good fit is achieved without tuning (see also Stainforth et al. 2007; Katzav et al. 2012; Mauritsen et al. 2012; Schmidt & Sherwood 2015). Frisch defends this view on the grounds that successful simulation without tuning gives us more reason to think that a climate model has accurately represented key processes shaping the variable of interest; if we were already confident of the latter, then successful simulation without tuning would not have such an advantage. In any case, to take account of tuning in model evaluation, it is important to know what tuning has been done, and this information is often not readily available (see Section 4.2).

A further complication is that, in many cases, the “observations” used in climate model evaluation are reanalysis datasets whose contents are determined in part by model-based weather forecasts (Edwards 1999, 2010; Flato et al. 2013: Table 9.3; see also Section 3.2 above). Since weather forecasting models and climate models can incorporate some similar idealizations and simplifications, the worry arises that fit between reanalyses and climate modeling results might be artificially inflated in some respects, due to shared imperfections. Identifying such artificial inflation of fit can be challenging, given limited availability of independent observations, i.e., ones not used in producing the reanalysis. Leuschner (2015: 370), however, suggests that a focus on the shared assumptions themselves might be a promising way forward; the adequacy of shared assumptions about the radiative effects of aerosols, for instance, might be established via comparison with the results of laboratory experiments (see also Frigg et al. 2015b: 957).

Lastly, when it comes to evaluating the adequacy of climate models for some purposes, especially predictive purposes, it can be difficult to know what sort of fit with past climate conditions would even count as evidence of a climate model’s adequacy-for-purpose (Parker 2009). How accurately should a climate model simulate various features of past climate if it is adequate for, say, predicting future changes in global mean surface temperature to within some specified margin of error? This can be very difficult to answer, not only because there is more than one way to achieve such predictive accuracy (e.g., by distributing error differently across contributing variables), but also because simplifications and idealizations in the model, as well as tuned components, might work better under some external conditions than others, e.g., under those of the recent past rather than those assumed in future scenarios (see ibid.). Katzav (2014) suggests that, in practice, attempts to determine what sort of fit with past conditions would count as evidence of a climate model’s adequacy-for-purpose will often rely on results from climate models themselves, in a way that is question-begging.

Given these challenges, climate model evaluation today often focuses primarily on describing the extent to which modeling results fit with observations and on characterizing how that fit has improved from one generation of models to the next; it tends to reach only relatively modest conclusions about the adequacy of climate models for particular purposes. For example, recent Intergovernmental Panel on Climate Change (IPCC) reports have concluded that today’s climate models provide “credible quantitative estimates” of climate system conditions in support of climate change detection, attribution and projection (see Section 5), where this credibility is underwritten both by how the models are constructed—i.e., their grounding in basic physical principles—and by how they perform on a wide range of metrics (see Randall et al. 2007; Flato et al. 2013; see also Knutti 2008). The credibility of results is judged to be greater for larger spatial scales (i.e., for quantities averaged over global or continental scales) and for variables that are less spatially heterogeneous (e.g., for temperature and pressure as opposed to precipitation), but the expected accuracy of modeling results used in detection, attribution and projection studies is rarely explicitly quantified. To begin to move the discussion forward, Baumberger et al. (2017) draw on both philosophical and scientific resources to outline a conceptual framework intended to aid evaluation of the adequacy of climate models for particular purposes; they point to empirical accuracy, robustness and coherence with background knowledge as relevant considerations (see also Knutti (2018: 11.4), highlighting the importance of “process understanding”).

Finally, it is worth noting that a significant motivation for model evaluation studies is model improvement: model evaluation activities can be directed toward understanding why a model’s results exhibit particular errors, in the hope of improving the model in future. Lenhard and Winsberg (2010), however, suggest that such “analytical understanding” of climate model performance is largely out of reach. They argue that features of climate models, including their complexity, their fuzzy modularity (see Section 4.2) and their incorporation of “kludges”—unprincipled fixes applied in model development to make the model as a whole work better—will often make it very difficult to apportion blame for poor simulation performance to different parts of climate models; climate modeling, they argue, faces a particularly challenging variety of confirmational holism (see also Petersen 2000). As empirical support, they point to the limited success of model intercomparison projects (e.g., Taylor et al. 2012) in identifying the sources of disagreement in climate model simulations. However, there have been some notable successes in localizing sources of error and disagreement in climate simulations. For example, differences in the representation of cloud processes have been identified as a major source of disagreement among estimates of climate sensitivity (Flato et al. 2013: 743). Likewise, a growing body of work on “emergent constraints” has uncovered sources of spread in various aspects of climate change projections (see, e.g., Qu & Hall 2007 and subsequent work; Frigg et al. 2015c). Thus, while Lenhard and Winsberg are surely right that it is difficult to achieve analytical understanding of climate model performance, just how limited the prospects are seems an open question.

5. Anthropogenic Climate Change

The idea that humans could change Earth’s climate by emitting large quantities of carbon dioxide and other gases is not a new one (Fleming 1998; Weart 2008 [2017, Other Internet Resources]). In the late nineteenth century, Swedish chemist Svante Arrhenius calculated that doubling the levels of carbonic acid (i.e., carbon dioxide) in the atmosphere would warm Earth’s average surface temperature by several degrees Celsius (Arrhenius 1896). By the mid-twentieth century, oceanographer Roger Revelle and colleagues concluded:

By the year 2000, the increase in atmospheric CO2 … may be sufficient to produce measurable and perhaps marked change in climate. (quoted in Oreskes 2007: 83–84)

In 1988, climate scientist James Hansen famously testified to the U.S. Congress that global warming was already happening. The same year, the World Meteorological Organization and the United Nations Environment Program established the Intergovernmental Panel on Climate Change (IPCC)

to provide policymakers with regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation. (IPCC 2013a)

Drawing on the expertise of the international climate science community, the IPCC has delivered assessment reports roughly every five years since 1990.[3]

Because IPCC reports usefully synthesize a huge body of scientific research, they will be referenced frequently in what follows. The discussion will focus primarily on questions about the physical climate system, rather than questions about the impacts of climate change on societies or about climate policy options; the latter, which are also addressed by the IPCC, call for expertise that extends well beyond climate science (see IPCC 2014). Section 5.1 discusses detection and attribution research, which is concerned with the following questions: In what ways has Earth’s climate changed since pre-industrial times? How much have human activities, especially human emissions of greenhouse gases, contributed to the changes? Section 5.2 considers the projection of future climate change, with special attention to debates about the interpretation of model-based projections. Section 5.3 provides a brief overview of several recent controversies related to the methods and conclusions of climate change research. Finally, Section 5.4 notes some of the ethical questions and challenges raised by the issue of anthropogenic climate change.

5.1 Detection and Attribution

The IPCC defines detection of climate change as

the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense, without providing a reason for that change. (IPCC-glossary: 1452)

They imply that they are interested in climate change due to external factors, rather than natural processes internal to the climate system, when they add:

An identified change is detected in observations if the likelihood of occurrence by chance due to internal variability alone is determined to be small, for example <10%. (ibid.)

Detection thus requires a statistical estimate of how much a quantity or field of interest might fluctuate due to internal variability. This is challenging to estimate from the instrumental record, which is relatively short and itself reflects externally forced changes. Estimates have been extracted from paleoclimate data for some quantities (Schurer et al. 2013) but, as noted in Section 4.3, they are often obtained from long GCM/ESM simulations in which external conditions are held constant.
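
To make the detection logic concrete, here is a minimal sketch in Python of the kind of test just described. All data are synthetic stand-ins (random numbers playing the role of a control simulation and of observations), and the 10% threshold simply follows the IPCC wording quoted above; nothing here reproduces any actual study’s code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real data: 1000 years of unforced control-run output
# (internal variability only) and 60 years of "observations" with an
# imposed warming trend of 0.012 degrees C per year.
control = rng.normal(0.0, 0.15, size=1000)
obs = rng.normal(0.0, 0.15, size=60) + 0.012 * np.arange(60)

def linear_trend(series):
    """Least-squares trend of an annual series, in degrees C per year."""
    return np.polyfit(np.arange(len(series)), series, 1)[0]

obs_trend = linear_trend(obs)

# Distribution of 60-year trends that internal variability alone can
# produce, estimated from overlapping segments of the control run.
window = len(obs)
null_trends = np.array([linear_trend(control[i:i + window])
                        for i in range(len(control) - window)])

# "Detected" if a trend this large is unlikely (<10%, following the IPCC
# wording quoted above) under internal variability alone.
p_value = np.mean(np.abs(null_trends) >= abs(obs_trend))
print(f"observed trend: {obs_trend:.4f} C/yr, p = {p_value:.3f}")
print("detected" if p_value < 0.10 else "not detected")
```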

In its periodic assessments, the IPCC has reached increasingly strong conclusions about the detection of climate change in observations. The IPCC’s Fifth Assessment Report concluded that it is “virtually certain” (i.e., probability >0.99) that the increase in global mean surface temperature seen since 1950 is not due to internal variability alone (Bindoff et al. 2013: 885). That is, the probability that the warming is due to internal variability alone was assessed by the scientists, on the basis of available evidence and expert judgment, to be less than 1%.[4] Indeed, it was noted that, even if internal variability were three times larger than estimates from simulations, a change would still be detected (ibid.: 881, citing Knutson et al. 2013). Changes have been formally detected in many other aspects of the climate system as well, not just in the atmosphere but also in the oceans and the cryosphere.

The process of attribution seeks to identify the cause(s) of an observed change in climate. It employs both basic physical reasoning—observed changes are qualitatively consistent (or not) with the expected effects of a potential cause—and quantitative studies. The latter include fingerprint studies, in which climate models are used to simulate what would occur over a period if a particular causal factor (or set of factors) were changing, while other potential causal factors were held constant; the resulting simulated pattern of change in the target variable or field—often a spatiotemporal pattern—is the “fingerprint” of that factor (or set of factors). Scientists then perform a regression-style analysis, looking for the linear combination of the fingerprints that best fits the observations and checking whether the residual is statistically consistent with internal variability (see, e.g., Bindoff et al. 2013: Box 10.1). If so, then an estimate of the contributions of the different causal factors that were considered can be extracted from the analysis. Because of uncertainties associated with observational data, the response patterns and various methodological choices, attribution studies produce a range of estimated contributions for each factor.[5]
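
The regression step in fingerprint studies can be illustrated schematically. The sketch below uses ordinary least squares on made-up vectors standing in for flattened spatiotemporal patterns; real studies use more sophisticated (“optimal”) estimators and careful estimates of internal variability, so this is only a toy version of the idea.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # number of space-time points in each flattened pattern

# Hypothetical fingerprints: simulated response patterns to greenhouse-gas
# forcing and to natural forcing, each flattened to a vector.
fp_ghg = rng.normal(size=n)
fp_nat = rng.normal(size=n)

# Hypothetical observations: a linear combination of the fingerprints plus
# noise playing the role of internal variability.
noise_sd = 0.5
y = 0.9 * fp_ghg + 0.3 * fp_nat + rng.normal(0.0, noise_sd, size=n)

# Find the scaling factors that make the combined fingerprints best fit
# the observations.
X = np.column_stack([fp_ghg, fp_nat])
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ betas

# Attribution requires the residual to be statistically consistent with
# internal variability; here we simply compare standard deviations.
print("estimated scaling factors (GHG, natural):", np.round(betas, 2))
print(f"residual sd: {residual.std():.2f} (noise sd used: {noise_sd})")
```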

As with detection, IPCC conclusions related to attribution have grown stronger over time and have come to encompass more variables and fields. In their Fifth Assessment Report, the IPCC concluded that it is “extremely likely” (i.e., probability >0.95) that more than half of the increase in global mean surface temperature since 1950 was due to anthropogenic greenhouse gas emissions and other anthropogenic forcings (Bindoff et al. 2013: 869). This conclusion was informed by multiple fingerprint studies, as well as physical reasoning and expert judgment. The latter underwrite key assumptions of the attribution exercise (e.g., that no significant causes have been overlooked) and, relatedly, play a role in deciding the extent to which face-value probabilities produced in fingerprint studies should be downgraded (e.g., from “virtually certain” to “extremely likely”) in light of remaining uncertainties and recognized limitations of those studies. For various other changes, including increases in global mean sea level since the 1970s and retreat of glaciers since the 1960s, the IPCC concludes that it is at least “likely” (i.e., probability >0.66) that human influence played a role (ibid.: 869–871).

These conclusions about detection and attribution align with, if not constitute, the “consensus” position regarding the reality and causes of recent climate change. The extent to which there is a scientific consensus on these matters, however, has itself become a topic of debate. It is often reported, for example, that 97% (or more) of climate scientists agree that anthropogenic emissions of greenhouse gases are causing global climate change (Cook et al. 2016). Analyses of the scientific literature, as well as surveys of scientists, have been conducted to demonstrate (or challenge) the existence of this consensus. This focus on consensus can seem misguided: what matters is that there is good evidence for a scientific claim, not merely that some percentage of scientists endorses it (see also Intemann 2017). Yet consensus among experts can serve as indirect evidence for a scientific claim—an indication that the scientific evidence favors or even strongly supports the claim—at least if the consensus is “produced in the right manner” (Odenbaugh 2012). A consensus emerging from a process that gives free rein to shared bias or groupthink (Ranalli 2012), or that does not require that those surveyed have relevant expertise, might have little evidential value. By contrast, a “hard won” consensus—one that emerges despite the reluctance of independent-minded and critical parties, after “vigorous debate and a thorough examination of the range of alternative explanations” (ibid.: 187)—might be significant; the indirect evidence provided by such a consensus could be useful to non-experts who are not able to evaluate the scientific evidence themselves. Ranalli (2012) suggests that parties on opposite sides of debates about the reality and causes of climate change generally hold similar views about what makes for a reliable scientific consensus, but disagree over the extent to which the climate consensus meets the standard.

In support of the consensus position regarding detection and attribution, Oreskes (2007) argues that research underpinning it accords with various models of scientific reliability: it has a strong inductive base, it has made nontrivial correct predictions, it has survived persistent efforts at falsification, it has uncovered a body of evidence that is consilient, and—in light of all of this—climate scientists have concluded that the best explanation of recent observed warming includes a significant anthropogenic component. She also emphasizes that the core idea that increased atmospheric concentrations of carbon dioxide would warm the climate emerged first from basic physical understanding, not from computer models of the climate system. In a complementary analysis, Lloyd (2010, 2013) offers a defense of the use of climate models in attribution research and argues that studies involving multiple climate models—which incorporate similar assumptions about the radiative properties of greenhouse gases but differ somewhat in their empirically-supported representations of other climate system processes—provide enhanced support for the conclusion that greenhouse gases are (part of) a good explanation of observed recent warming; she argues that such studies illustrate a distinctive kind of model robustness, which incorporates variety-of-evidence considerations, and that such robustness is a “confirmatory virtue” (see also Vezér 2016b). Robustness and variety-of-evidence considerations are also highlighted by Parker (2010), in a different way; she suggests that they have been important for justifying detection and attribution claims, given that formal studies, such as fingerprint studies, often rely on contentious or false assumptions. Winsberg (2018: Chs. 11 and 12) critically examines and synthesizes much of this recent work on robustness in climate science, drawing on Schupbach’s (2016) account of robustness analysis as explanatory reasoning.

Katzav (2013b), however, considers another perspective on evidence—Mayo’s (1996) severe testing framework—and argues that the claim that more than half of the increase in global mean surface temperature since 1950 was due to anthropogenic forcing has not passed a severe test in Mayo’s sense, even when variety-of-evidence considerations are factored in; from a severe testing perspective, he contends, we still lack good evidence for this conclusion. He suggests that some weaker attribution claims—such as the claim that anthropogenic forcing had some role in post-1950 warming—might be ones for which there is good evidence in Mayo’s demanding sense.

5.2 Projecting Future Climate Change

Climate scientists also seek to understand how climate would change in the future if greenhouse gas emissions and other anthropogenic forcing factors were to evolve in particular ways. Such information can be of significant interest to policy makers, who might choose to implement policies that push toward some future scenarios and away from others, and to many other decision makers as well (e.g., insurance companies deciding which policies to offer). Climate models have emerged as an important tool for making projections of future conditions. For a given scenario, it is common to make an ensemble of projections using different climate models or model versions; this is motivated in part by uncertainty about how to construct a climate model that can deliver highly accurate projections (Parker 2006; Betz 2009). This uncertainty in turn stems both from limited theoretical understanding of some processes operating within the climate system and from limited computing power, which constrains how existing knowledge can be implemented in models (see the discussion of parameterization in Section 4.2).

Different types of ensemble study explore the consequences of different sources of uncertainty in modeling. A multi-model ensemble (MME) study probes structural uncertainty, which is uncertainty about the form that modeling equations should take as well as how those equations should be solved. An example of an MME study is the Coupled Model Intercomparison Project Phase 5 (CMIP5), which produced projections using a few dozen climate models developed at different modeling centers around the world (see Flato et al. 2013: Table 9.1). A perturbed-physics ensemble (PPE) consists of multiple versions of the same climate model, where the versions differ in the numerical values assigned to one or more parameters within the model. Examples of studies employing PPEs include the climateprediction.net project (Stainforth et al. 2005) and the UK Climate Projections (UKCP09) project (Murphy et al. 2009). PPEs probe parameter uncertainty, i.e., uncertainty about the numerical values that should be assigned to parameters within a given model structure. In an initial condition ensemble (ICE), the same model or model version is assigned different initial conditions. ICEs can probe initial condition uncertainty, which is uncertainty about the initial model state from which simulations should be launched, and they are often used in combination with one of the other two types of ensemble (see Werndl forthcoming for further discussion).

When it comes to interpreting ensemble projections, a range of views and methodologies have emerged. At one end of the spectrum, there are statistical methodologies used to infer probabilities (of future changes in climate) from ensemble results. For instance, some MME studies have adopted what amounts to a “truth-plus-error” framework: probabilities are estimated by treating ensemble members as random draws from a distribution of possible models centered on truth (e.g., Tebaldi et al. 2005). The performance of today’s MMEs on past data, however, suggests that they are not centered on truth (Knutti et al. 2010a). An alternative “statistically-indistinguishable” framework allows for the estimation of probabilities by assuming that ensemble members and truth (i.e., perfect observations of future conditions) are drawn from the same statistical distribution. Examining past data, Annan and Hargreaves (2010) find this assumption to roughly hold for one prominent MME (the CMIP3 ensemble), at least for some variables. Bishop and Abramowitz (2013), however, argue that results from today’s MMEs lack some statistical properties that should be present if the assumption of indistinguishability is accurate. They propose an alternative “replicate-Earth” framework, which involves post-processing MME projections so that they display those desired statistical properties; Bishop and Abramowitz do not seem to insist, however, that the post-processed distribution be considered a probability distribution. Bayesian methodologies have also been developed. The UKCP09 project, for example, used a Bayesian PPE methodology to produce probabilistic climate change projections out to 2100 at a fine spatiotemporal resolution for the United Kingdom (Murphy et al. 2009). That methodology treated parameter uncertainty as uncertainty about which model was the best (on given metrics of performance) in a given model class, and attempted to account for the difference between the best model and a perfect model with the help of results from an MME (Rougier 2007; Sexton et al. 2012).
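
The “statistically-indistinguishable” assumption can be made vivid with a toy version of the rank-histogram check of the kind Annan and Hargreaves employed. In the sketch below, all numbers are synthetic and generated so that indistinguishability holds by construction; with real ensembles, whether it holds is exactly what is at issue.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cases, n_members = 500, 20

# Synthetic hindcasts: ensemble values and matching "observations" drawn
# from the same distribution, so indistinguishability holds by construction.
ensemble = rng.normal(0.0, 1.0, size=(n_cases, n_members))
observed = rng.normal(0.0, 1.0, size=n_cases)

# Rank of each observation among the ensemble members (0 .. n_members).
ranks = (ensemble < observed[:, None]).sum(axis=1)
histogram = np.bincount(ranks, minlength=n_members + 1)

# A roughly flat histogram is consistent with indistinguishability; strongly
# U-shaped or dome-shaped histograms indicate an under- or over-dispersed
# ensemble, telling against that interpretation.
print(histogram)
```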

Scientists and philosophers have raised a number of criticisms of methods that infer probabilities from ensemble results. As implied above, some lines of criticism cite evidence that the statistical assumptions underlying the studies are not met. Others argue that uncertainty about future climate change is deeper than precise probabilities imply, either because of challenges associated with quantifying structural model uncertainty (e.g., Stainforth et al. 2007; Hillerbrand 2014; Parker 2014a) or, relatedly, because the set of possible future states of the climate system (i.e., the outcome space) is itself uncertain (Katzav et al. 2012). In other words, these critics contend that probabilistic uncertainty estimates have a false precision and, in that sense, are misleading about the actual state of knowledge. Frigg et al. (2013, 2014, 2015a) argue that, for PPE studies like UKCP09, such probabilities are also likely to be misleading in a different sense, namely, because they differ markedly from the probabilities that would be obtained from a model with no structural error. They illustrate that, for nonlinear systems, even a small amount of structural model error can give rise to very misleading probabilities—a consequence they dub the “hawkmoth effect” (taking inspiration from the “butterfly effect”, which relates to initial condition error; see also Mayo-Wilson 2015; Lawhead forthcoming). Winsberg and Goodwin (2016), however, contend that more work is needed to establish that the hawkmoth effect is as devastating for probabilistic climate projection as Frigg et al. claim (see also Goodwin & Winsberg 2016).
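
The flavor of the hawkmoth worry can be conveyed with a toy nonlinear system. In the sketch below, the “true” dynamics are given by the logistic map and the “model” differs from it by a small structural perturbation; nothing here involves an actual climate model, and the perturbation term is invented purely for illustration.

```python
import numpy as np

def step(x, eps):
    """One iteration of a logistic-like map; eps scales a small structural
    perturbation (an invented term, purely for illustration)."""
    return 4.0 * x * (1.0 - x) * (1.0 - eps * np.sin(np.pi * x) ** 2)

rng = np.random.default_rng(3)
x_true = rng.uniform(0.30, 0.31, size=100_000)  # tight bundle of initial states
x_model = x_true.copy()

for _ in range(10):
    x_true = step(x_true, eps=0.0)     # "true" dynamics: the logistic map
    x_model = step(x_model, eps=0.02)  # structurally wrong by a small amount

# Because the dynamics are chaotic, even this tiny structural difference can
# noticeably shift the probability assigned to an outcome of interest.
print(f"P(x > 0.5)  truth: {np.mean(x_true > 0.5):.3f}  "
      f"model: {np.mean(x_model > 0.5):.3f}")
```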

Some of these critics of probabilistic approaches advocate a very different interpretation of results from today’s ensemble projection studies, according to which the projections indicate changes that are (merely) plausible or possible in light of current understanding. Stainforth et al. (2007), for example, suggest that the range of projections produced using state-of-the-art climate models constitutes a “non-discountable envelope” of future changes, a set of possible changes that should not be disregarded; this does not imply that changes outside the envelope are discountable. Likewise, Katzav (2014) argues that climate change projections can often be interpreted as indicating real possibilities:

a state of affairs in a target domain is…a real possibility relative to time t if and only if (a) its realisation is compatible with the basic way things are in the target domain over the period during which it might be realised and (b) our knowledge at t does not exclude its realisation over that period. (2014: 236)

Betz (2015), however, argues that even a possibilistic interpretation of ensemble results may be hard to justify, since it is difficult to show that contrary-to-fact assumptions employed in climate models do not lead to projections that are inconsistent with background knowledge; on his view, though not on Katzav’s, this consistency is necessary for projections to indicate serious possibilities. Part of the disagreement here thus hinges on what it takes for something to be a serious/real possibility.

There are also intermediate views. One such view sees today’s ensemble projection studies as imperfect investigations of uncertainty, whose results should be considered by experts alongside all other available information when reaching conclusions about future climate change; those conclusions might take various forms, depending on the variable under consideration (see, e.g., Kandlikar et al. 2005; Mastrandrea et al. 2010). For example, in its recent assessment reports, the IPCC relied on ensemble modeling results, background knowledge and expert judgment to arrive at “likely” ranges for changes in near-surface global mean temperature under different scenarios; the experts judged it to be likely (i.e., probability >0.66) that the temperature change would fall within the 5% to 95% range of the CMIP5 ensemble results (Collins et al. 2013: 1031). For the moderate emission scenario known as RCP6.0, for instance, the conclusion was that the global mean temperature during 2081–2100 would likely be between 1.4 and 3.1 degrees Celsius warmer than during 1986–2005 (ibid.). The IPCC thus asserted more than that the changes projected by the CMIP5 ensemble were possibilities, but they did not go so far as to assign a single, full probability density function over temperature change values.
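
For illustration, the ensemble range at issue is just a pair of percentiles, as in the following fragment; the numbers are invented rather than CMIP5 output, and the reinterpretation of the 5–95% model range as merely a “likely” (>0.66 probability) range is the expert-judgment step described above, not something computed.

```python
import numpy as np

# Invented stand-ins for an ensemble's projected warming values (degrees C).
projected_warming = np.array([1.6, 2.1, 2.4, 2.7, 1.9, 3.0, 2.2, 2.8,
                              1.5, 2.5, 2.0, 2.9, 2.3, 1.8, 2.6])

# The 5% and 95% points of the ensemble distribution; the step from this
# range to a "likely" range is a matter of expert judgment, not computation.
low, high = np.percentile(projected_warming, [5, 95])
print(f"5-95% ensemble range: {low:.1f} to {high:.1f} degrees C")
```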

Intertwined with the issue of ensemble interpretation is the issue of weighting, i.e., whether to assign greater weight to some projections than others. This is part and parcel of some methods, but in MME studies it has been common to give equal weight to participating models, as the IPCC approach illustrates. This is sometimes referred to as “model democracy” or “one-model one-vote” (Knutti 2010). Part of the original motivation for model democracy was the difficulty of determining which of the state-of-the-art models included in MMEs would give the most accurate projections of future conditions (see Section 4.4). This difficulty notwithstanding, weighting has recently been advocated on the grounds that, for at least some projected variables, there is good reason to think that some models are more skillful than others and, moreover, the one-model one-vote approach fails to take account of dependence among models, i.e., of the fact that an MME can include several models that are variants of one another (Knutti et al. 2017). A key challenge, however, is to select and combine relevant metrics of performance and other criteria to assign appropriate weights for a given projected variable (Gleckler et al. 2008; Knutti et al. 2010b; Weigel et al. 2010). The basic issue of interpretation discussed above also remains, i.e., how to interpret the weighted ensemble results.
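
A toy version of such a weighting scheme is sketched below. It converts each model’s error on past observations into a weight via a Gaussian kernel, which is the general shape of the scheme in Knutti et al. (2017), though their scheme also penalizes interdependence among models, a refinement omitted here; all numbers are invented.

```python
import numpy as np

# Invented numbers: each model's error against past observations and its
# projected warming (degrees C).
past_error = np.array([0.2, 0.5, 0.3, 0.9, 0.4])
projection = np.array([2.1, 2.8, 2.4, 3.3, 2.6])

# Convert errors to weights with a Gaussian kernel; sigma controls how
# strongly poor past performance is penalized.
sigma = 0.4
weights = np.exp(-(past_error / sigma) ** 2)
weights /= weights.sum()

print("weights:", np.round(weights, 2))
print(f"equal-weight mean: {projection.mean():.2f} C, "
      f"weighted mean: {np.average(projection, weights=weights):.2f} C")
```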

There are many other philosophically interesting questions related to ensemble climate projection (Frame et al. 2007); just a few will be mentioned here. First, how can ensemble studies be designed so that they probe uncertainty in desired ways? Today’s MME studies do not probe structural uncertainty systematically, and it is unclear what it would mean to do so; there are questions about how to define the space of relevant models and how to sample it (Murphy et al. 2007). Second, how significant are robust results? Climate scientists often assume that agreement among projections has special epistemic significance—e.g., that we can be substantially more confident in those projected changes (Pirtle et al. 2010)—but Parker (2011) argues that it is difficult to make the case for this. There are also questions about how to quantify the extent of agreement or robustness (see Collins et al. 2013: Box 12.1) and about what it means for ensemble members to be independent, an important factor in interpreting the significance of robustness (Annan and Hargreaves 2017). Finally, to what extent do non-epistemic values influence ensemble results? Winsberg (2010, 2012; see also Biddle & Winsberg 2009) identifies pathways of influence via model construction (see Section 3.2), which in turn affect probabilistic uncertainty estimates produced from ensemble results, though not necessarily in a way that raises concerns about “wishful thinking”. Parker (2014a) suggests that this influence might be dampened by representing uncertainty in coarser ways that also better reflect the extent of actual uncertainty, e.g., by using the IPCC’s imprecise probability intervals; the extent to which the value influence is escapable in practice, however, remains unclear.

5.3 Recent Controversies

Climate “contrarians” challenge key conclusions of “mainstream” climate science, such as those of the IPCC, in a host of public venues: blogs, newspaper op-ed pieces, television and radio interviews, Congressional hearings and, occasionally, scientific journals. Those considered contrarians come in many varieties, from climate scientists who consider headline attribution claims to be insufficiently justified—they are usually described as climate “skeptics”—to individuals and groups from outside of climate science whose primary motivation is to block climate policy, in some cases by “manufacturing doubt” about the reality or seriousness of anthropogenic climate change; they are commonly labelled climate “deniers” (see, e.g., Oreskes & Conway 2010; Ranalli 2012). (The use of these labels varies, however.) Contrarians have played a role in creating or sustaining a number of public controversies related to climate science. Four of these are discussed very briefly below, along with some recent philosophical work reflecting on the impact of contrarian dissent.

The Tropospheric Temperature Controversy. Climate model simulations indicate that rising greenhouse gas concentrations will induce warming not only at earth’s surface but also in the layer of atmosphere extending 8–12 km above the surface, known as the troposphere. Satellites and radiosondes are the primary means of monitoring temperatures in this layer. Analyses of these data in the early 1990s, when only about a decade of satellite data was available, indicated a lack of warming in the troposphere (Spencer & Christy 1990). Prima facie, this presented a challenge to climate science, and it became a key piece of evidence in contrarian dismissals of the threat of anthropogenic climate change. Over time, additional research uncovered numerous problems with the satellite and radiosonde data used to estimate tropospheric temperature trends, with many of these problems related to homogenization (NRC 2000; Karl et al. 2006). More recent observational estimates agree that the troposphere has been warming and, although observed trends still tend to be somewhat smaller than those in simulations, the mainstream view is that there is “no fundamental discrepancy” between observations and models, given the significant uncertainties involved (Thorne et al. 2011). Nevertheless, the debate continues (see, e.g., Christy et al. 2010; Santer et al. 2017). Lloyd (2012) suggests that this controversy in part involves a conflict between a “direct empiricist” view that sees observational data as a naked reflection of reality, taking priority over models, and a more nuanced “complex empiricist” view (see also Edwards 2010: 417).

The Hockey Stick Controversy. The hockey stick controversy focused on some of the first millennial-scale paleoclimate reconstructions of the evolution of Northern Hemisphere mean near-surface temperature. These reconstructions, when joined with the instrumental temperature record, indicate a long, slow decline in temperature followed by a sharp rise beginning around 1900; their shape is reminiscent of a hockey stick (e.g., Mann et al. 1998, 1999). Such reconstructions featured prominently in the IPCC’s Third Assessment Report (Albritton et al. 2001) and constituted part of the evidence for the IPCC conclusion that “the 1990s are likely to have been the warmest decade of the millennium” (Folland et al. 2001: 102). Contrarian criticism in the published literature followed two main lines. One argued that there were problems with the data and statistical methods used in producing the reconstructions (McIntyre & McKitrick 2003, 2005), while a second appealed to additional proxy data and alternative methods for their interpretation in order to challenge the uniqueness of late-twentieth century warming (Soon et al. 2003). Mainstream climate scientists offered direct replies to these challenges (e.g., Mann et al. 2003; Wahl & Ammann 2007), and additional research has since produced new and longer reconstructions using a variety of proxies, which also support the conclusion that the late twentieth century was anomalously warm (Masson-Delmotte et al. 2013).[6] Contrarians, however, continue to criticize temperature reconstructions and the conclusions drawn from them, on various grounds; a recent target was Marcott et al. 2013. Book-length accounts of the controversy from opposing sides include Montford 2010 and Mann 2012.

The Climategate Controversy. In 2009, a large number of emails were taken from the University of East Anglia’s Climatic Research Unit and made public without authorization. Authors of the emails included climate scientists at a variety of institutions around the world. Focusing primarily on a few passages, contrarians claimed that the emails revealed that climate scientists had manipulated data to support the consensus position on anthropogenic climate change and had suppressed legitimate dissenting research in various ways (e.g., by preventing its publication or by refusing to share data). A number of independent investigations were subsequently conducted, all exonerating climate scientists of the charges of scientific fraud and misconduct that contrarians had alleged (e.g., Russell et al. 2010). Some of the investigations, however, did find that climate scientists had failed to be sufficiently transparent, especially in their response to contrarian requests for station data used to estimate changes in global temperature (ibid.: 11). Despite the exonerations, surveys found that in some countries this “Climategate” episode—as it became known in popular media—significantly reduced public trust in climate science (Leiserowitz et al. 2013).

The Hiatus Controversy. Global mean near-surface temperature increased significantly during the 1990s but then showed little increase between the late 1990s and the early 2010s. By the mid-2000s, contrarians began to claim that global warming had stopped and that climate models (and climate science) were thus fundamentally flawed, since they had projected more warming. Part of the problem here was communication: graphs shared with policymakers and the public often highlighted the average of climate model projections, which smoothed out the significant variability seen in individual simulations and suggested a relatively steady warming; in fact, the observed rate of warming was not so different from that seen in some of the model projections (see Schmidt 2012; Risbey et al. 2014). Moreover, significant variations in temperature on decadal time scales, akin to that seen in the observations, were not entirely unexpected, given internal variability in the climate system (Easterling & Wehner 2009). Nevertheless, by the early 2010s, pressure was mounting for climate scientists to give a detailed explanation of the apparent slowdown in surface warming, by then referred to as the “hiatus” or “pause”, and to explain why most models projected more warming. A preliminary analysis was given in the IPCC Fifth Assessment Report (see Flato et al. 2013: Box 9.2). A host of potential explanatory factors were identified—related to external forcing, internal variability, ocean heat uptake and errors in observational data—which contrarians portrayed as excuses. Subsequent investigation found evidence for actual contributions from most of the hypothesized factors, though discussion of their relative importance continues (see Medhaug et al. 2017 for an overview; a large literature on the topic has emerged). In the meantime, since 2014, global mean temperature has once again shown a sharp increase, in part due to a strong El Niño event in 2015/16.

Contrarian dissent has impacted the practice of climate science in various ways. Most obviously, research is sometimes directed (at least in part) at rebutting contrarian claims and arguments. For example, a recent research paper related to tropospheric temperature trends (Santer et al. 2017) is explicitly framed as a response to contrarian claims made in the course of testimony to the U.S. Senate (see also Lewandowsky et al. 2015 on “seepage” and the hiatus controversy). In addition, Brysse et al. (2013) contend that pressure from contrarians and the risk of being accused of alarmism may be part of the explanation for climate scientists’ tendency to err on the side of caution in their predictions related to climate change. Drawing these threads together, Biddle and Leuschner (2015) suggest that contrarian dissent has impeded scientific progress in at least two ways:

by (1) forcing scientists to respond to a seemingly endless wave of unnecessary and unhelpful objections and demands and (2) creating an atmosphere in which scientists fear to address certain topics and/or to defend hypotheses as forcefully as they believe is appropriate. (Biddle & Leuschner 2015: 269)

They argue that, while dissent in science is often epistemically fruitful, the dissent expressed by climate contrarians has tended to be epistemically detrimental (see also Biddle et al. 2017; Leuschner 2018).

5.4 Ethics and Climate Change

The issue of anthropogenic climate change raises a host of challenging ethical questions. Most of these are beyond the scope of this entry on climate science. A very brief discussion is provided here nevertheless, because the questions are important and because a full entry on the topic is not yet available.

The basic ethical question is: What ought to be done about anthropogenic climate change, and by whom? The question arises because there is good evidence that climate change is already having harmful impacts on both humans and non-human nature, and because continued high rates of greenhouse gas emission can be expected to bring additional and more devastating harms in the future (Field et al. 2014). Attempting to address this basic ethical question, however, leads to further, complex questions of global and intergenerational justice, as well as to questions regarding our ethical obligations to non-human nature. Here are just a few examples: Do some nations, including those that have emitted large quantities of greenhouse gases in the past, have an obligation to bear more of the costs of climate change mitigation and adaptation than other nations? (See, e.g., Baer et al. 2009; Caney 2011; Singer 2016: Ch.2.) When considering actions to mitigate climate change, how should the harms and benefits to future generations be weighed against those affecting people today? (See, e.g., Broome 2008; Gardiner 2006.) How should impacts of climate change on non-human nature, including loss of biodiversity, be taken into account? (See, e.g., Palmer 2011.) Are there circumstances in which proposed geoengineering solutions—such as injecting sulfate aerosols into the stratosphere or seeding the oceans with carbon-absorbing phytoplankton—are ethically acceptable? (See, e.g., Gardiner 2010; Preston 2013.) There is a large and growing philosophical literature engaging with these and related questions; some anthologies and book-length works include Arnold 2011; Broome 2012; Gardiner 2011; Gardiner et al. 2010; Garvey 2008; Maltais and McKinnon 2015; Preston 2016; Shue 2014; Singer 2016.

Bibliography

  • Adler, Carolina E. and Gertrude Hirsch Hadorn, 2014, “The IPCC and treatment of uncertainties: topics and sources of dissensus”, WIREs Climate Change, 5(5): 663–676. doi:10.1002/wcc.297
  • Albritton, Daniel L., L. Gylvan Meira Filho, Ulrich Cubasch, et al., 2001, “Technical Summary”, in Houghton et al. 2001: 21–83.
  • Alexander, K. and S.M. Easterbrook, 2015, “The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations”, Geoscientific Model Development, 8(4): 1221–1232. doi:10.5194/gmd-8-1221-2015
  • American Meteorological Society, cited 2017a, “Climate System”, Glossary of Meteorology, available online.
  • –––, cited 2017b, “Climate Change”, Glossary of Meteorology, available online.
  • Annan, James D. and Julia C. Hargreaves, 2010, “Reliability of the CMIP3 ensemble”, Geophysical Research Letters, 37(2): L02703. doi:10.1029/2009GL041994
  • –––, 2017, “On the meaning of independence in climate science”, Earth System Dynamics, 8(1): 211–224. doi:10.5194/esd-8-211-2017
  • Arguez, Anthony and Russell S. Vose, 2011, “The Definition of the Standard WMO Climate Normal: The Key to Deriving Alternative Climate Normals”, Bulletin of the American Meteorological Society, 92(6): 699–704. doi:10.1175/2010BAMS2955.1
  • Arnold, Denis G. (ed.), 2011, The Ethics of Global Climate Change, New York: Cambridge University Press. doi:10.1017/CBO9780511732294
  • Arrhenius, Svante, 1896, “On the Influence of Carbonic Acid in the Air Upon the Temperature of the Ground”, Philosophical Magazine, Series 5, 41: 237–276, available online.
  • Baer, Paul, Tom Athanasiou, Sivan Kartha, and Eric Kemp-Benedict, 2009, “Greenhouse Development Rights: A Proposal for a Fair Global Climate Treaty”, Ethics, Place & Environment, 12(3): 267–281. doi:10.1080/13668790903195495
  • Baumberger, Christoph, Reto Knutti, and Gertrude Hirsch Hadorn, 2017, “Building confidence in climate model projections: an analysis of inferences from fit”, WIREs Climate Change, 8(3): e454. doi:10.1002/wcc.454
  • Bengtsson, L. and J. Shukla, 1988, “Integration of Space and In Situ Observations to Study Global Climate Change”, Bulletin of the American Meteorological Society, 69(10): 1130–1143. doi:10.1175/1520-0477(1988)069<1130:IOSAIS>2.0.CO;2
  • Berner, Judith, Ulrich Achatz, Lauriane Batté, Lisa Bengtsson, Alvaro de la Cámara, Hannah M. Christensen, Matteo Colangeli, et al., 2017, “Stochastic Parameterization: Toward a New View of Weather and Climate Models”, Bulletin of the American Meteorological Society, 98(3): 565–587. doi:10.1175/BAMS-D-15-00268.1
  • Betz, Gregor, 2009, “Underdetermination, Model-Ensembles and Surprises: On the Epistemology of Scenario-Analysis in Climatology”, Journal for General Philosophy of Science, 40(1): 3–21. doi:10.1007/S10838-009-9083-3
  • –––, 2015, “Are climate models credible worlds? Prospects and limitations of possibilistic climate prediction”, European Journal for Philosophy of Science, 5(2): 191–215. doi:10.1007/s13194-015-0108-y
  • Biddle, Justin and Eric Winsberg, 2009, “Value Judgments and the Estimation of Uncertainty in Climate Modeling”, in P.D. Magnus and J. Busch (eds.) New Waves in the Philosophy of Science, New York: Palgrave MacMillan, pp. 172–197.
  • Biddle, Justin B. and Anna Leuschner, 2015, “Climate Skepticism and the Manufacture of Doubt: Can Dissent in Science be Epistemically Detrimental?” European Journal for Philosophy of Science, 5(3): 261–278. doi:10.1007/s13194-014-0101-x
  • Biddle, Justin B., Ian James Kidd and Anna Leuschner, 2017, “Epistemic Corruption and Manufactured Doubt: The Case of Climate Science”, Public Affairs Quarterly, 31(3): 165–187.
  • Bindoff, Nathaniel L., Peter A. Stott, et al., 2013, “Detection and Attribution of Climate Change: from Global to Regional”, in Stocker et al. 2013: 867–952 (ch. 10).
  • Bishop, Craig H. and Gab Abramowitz, 2013, “Climate model dependence and the replicate Earth paradigm”, Climate Dynamics, 41(3–4): 885–900. doi:10.1007/s00382-012-1610-y
  • Bradley, Richard, Casey Helgeson, and Brian Hill, 2017, “Climate Change Assessments: Confidence, Probability, and Decision”, Philosophy of Science, 84(3): 500–522. doi:10.1086/692145
  • Broome, John, 2008, “The Ethics of Climate Change”, Scientific American, 298(6): 96–102. doi:10.1038/scientificamerican0608-96
  • –––, 2012, Climate Matters: Ethics in a Warming World, New York: W.W. Norton.
  • Brown, Matthew J. and Joyce C. Havstad, 2017, “The Disconnect Problem, Scientific Authority, and Climate Policy”, Perspectives on Science, 25(1): 67–94. doi:10.1162/POSC_a_00235
  • Brysse, Keynyn, Naomi Oreskes, Jessica O’Reilly, and Michael Oppenheimer, 2013, “Climate change prediction: Erring on the side of least drama?”, Global Environmental Change, 23(1): 327–337. doi:10.1016/j.gloenvcha.2012.10.008
  • Calel, Raphael and David A. Stainforth, 2017, “On the physics of three integrated assessment models”, Bulletin of the American Meteorological Society, 98(6): 1199–1216. doi:10.1175/BAMS-D-16-0034.1
  • Caney, Simon, 2011, “Climate Change, Energy Rights and Equality”, in Arnold 2011: 77–103. doi:10.1017/CBO9780511732294.005
  • Chekroun, Mickaël D., Eric Simonnet, and Michael Ghil, 2011, “Stochastic climate dynamics: Random attractors and time-dependent invariant measures”, Physica D: Nonlinear Phenomena, 240(21): 1685–1700. doi:10.1016/j.physd.2011.06.005
  • Christy, John R., Benjamin Herman, Roger Pielke, Philip Klotzbach, Richard T. McNider, Justin J. Hnilo, Roy W. Spencer, Thomas Chase, and David Douglass, 2010, “What Do Observational Datasets Say about Modeled Tropospheric Temperature Trends since 1979?”, Remote Sensing, 2(9): 2148–2169. doi:10.3390/rs2092148
  • Claussen, M., L.A. Mysak, A.J. Weaver, et al., 2002, “Earth system models of intermediate complexity: Closing the gap in the spectrum of climate system models”, Climate Dynamics, 18(7): 579–586. doi:10.1007/s00382-001-0200-1
  • Collins, Matthew, Reto Knutti, et al., 2013, “Long-term Climate Change: Projections, Commitments and Irreversibility”, in Stocker et al. 2013: 1029–1136 (ch. 12).
  • Compo, G.P., J.S. Whitaker, P.D. Sardeshmukh, et al., 2011, “The Twentieth Century Reanalysis Project”, Quarterly Journal of the Royal Meteorological Society, 137(654): 1–28. doi:10.1002/qj.776
  • Compo, G.P., P.D. Sardeshmukh, J.S. Whitaker, et al., 2013, “Independent confirmation of global land warming without the use of station temperatures”, Geophysical Research Letters, 40: 3170–3174. doi:10.1002/grl.50425
  • Cook, John, Naomi Oreskes, Peter T. Doran, et al., 2016, “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming”, Environmental Research Letters, 11(4): 048002. doi:10.1088/1748-9326/11/4/048002
  • Costa, Ana Cristina and Amílcar Soares, 2009, “Homogenization of Climate Data: Review and New Perspectives Using Geostatistics”, Mathematical Geosciences, 41(3): 291–305. doi:10.1007/s11004-008-9203-3
  • Dee, Dick, John Fasullo, Dennis Shea, John Walsh, and National Center for Atmospheric Research Staff (eds.), 2016, “The Climate Data Guide: Atmospheric Reanalysis: Overview & Comparison Tables”, available online, last modified 12 Dec 2016.
  • Drótos, Gábor, Tamás Bódai, and Tamás Tél, 2015, “Probabilistic Concepts in a Changing Climate: A Snapshot Attractor Picture”, Journal of Climate, 28(8): 3275–3288. doi:10.1175/JCLI-D-14-00459.1
  • Durre, Imke, Matthew J. Menne, Byron E. Gleason, Tamara G. Houston, and Russell S. Vose, 2010, “Comprehensive Automated Quality Assurance of Daily Surface Observations”, Journal of Applied Meteorology and Climatology, 49(8): 1615–1633. doi:10.1175/2010JAMC2375.1
  • Easterling, David R. and Michael F. Wehner, 2009, “Is the Climate Warming or Cooling?”, Geophysical Research Letters, 36(8): L08706. doi:10.1029/2009GL037810
  • Edwards, Paul N., 1999, “Global climate science, uncertainty and politics: Data-laden models, model-filtered data”, Science as Culture, 8(4): 437–472. doi:10.1080/09505439909526558
  • –––, 2000, “A Brief History of Atmospheric General Circulation Modeling”, in David A. Randall (ed.), General Circulation Model Development: Past, Present, and Future, San Diego, CA: Academic Press, pp. 67–90.
  • –––, 2010, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming, Cambridge, MA: MIT Press.
  • Field, Christopher B., Vicente R. Barros, David Jon Dokken, Katherine J. Mach, Michael D. Mastrandrea, et al., 2014, “Technical Summary”, in Christopher B. Field, Vicente R. Barros, David Jon Dokken, Katherine J. Mach, Michael D. Mastrandrea, et al. (eds.) Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, New York: Cambridge University Press, pp. 35–94, available online.
  • Flato, Gregory, Jochem Marotzke, et al., 2013, “Evaluation of Climate Models”, in Stocker et al. 2013: 741–866 (ch. 9).
  • Fleming, James Rodger, 1998, Historical Perspectives on Climate Change, New York, Oxford University Press.
  • Folland, Christopher K., Thomas R. Karl, J.R. Christy, et al., 2001, “Observed Climate Variability and Change”, in Houghton et al. 2001: 99–181.
  • Frame, D.J., N.E. Faull, M.M. Joshi, and M.R. Allen, 2007, “Probabilistic climate forecasts and inductive problems”, Philosophical Transactions of the Royal Society A, 365(1857): 1971–1992. doi:10.1098/rsta.2007.2069
  • Frank, David, Jan Esper, Eduardo Zorita, and Rob Wilson, 2010, “A noodle, hockey stick, and spaghetti plate: a perspective on high-resolution paleoclimatology”, WIREs Climate Change, 1(4): 507–516. doi:10.1002/wcc.53
  • Frigg, Roman, Seamus Bradley, Hailiang Du, and Leonard A. Smith, 2014, “Laplace’s Demon and the Adventures of His Apprentices”, Philosophy of Science, 81(1): 31–59. doi:10.1086/674416
  • Frigg, Roman, Leonard A. Smith, and David A. Stainforth, 2013, “The Myopia of Imperfect Climate Models: The Case of UKCP09”, Philosophy of Science, 80(5): 886–897. doi:10.1086/673892
  • –––, 2015a, “An assessment of the foundational assumptions in high-resolution climate projections: the case of UKCP09”, Synthese, 192(12): 3979–4008. doi:10.1007/s11229-015-0739-8
  • Frigg, Roman, Erica Thompson, and Charlotte Werndl, 2015b, “Philosophy of Climate Science Part I: Observing Climate Change”, Philosophy Compass, 10(12): 953–964. doi:10.1111/phc3.12294
  • –––, 2015c, “Philosophy of Climate Science Part II: Modelling Climate Change”, Philosophy Compass, 10(12): 965–977. doi:10.1111/phc3.12297
  • Frisch, Mathias, 2013, “Modeling Climate Policies: A Critical Look at Integrated Assessment Models”, Philosophy and Technology, 26(2): 117–137. doi:10.1007/s13347-013-0099-6
  • –––, 2015, “Predictivism and old evidence: a critical look at climate model tuning”, European Journal for Philosophy of Science, 5(2): 171–190. doi:10.1007/s13194-015-0110-4
  • Gardiner, Stephen M., 2006, “A Perfect Moral Storm: Climate Change, Intergenerational Ethics and the Problem of Moral Corruption”, Environmental Values, 15(3): 397–415. doi:10.3197/096327106778226293
  • –––, 2010, “Is ‘Arming the Future’ with Geoengineering Really the Lesser Evil? Some Doubts About the Ethics of Intentionally Manipulating the Climate System”, in Gardiner et al. 2010.
  • –––, 2011, A Perfect Moral Storm: The Ethical Tragedy of Climate Change, New York: Oxford University Press. doi:10.1093/acprof:oso/9780195379440.001.0001
  • Gardiner, Stephen, Simon Caney, Dale Jamieson, and Henry Shue, 2010, Climate Ethics: Essential Readings, New York: Oxford University Press.
  • Garvey, James, 2008, The Ethics of Climate Change: Right and Wrong in a Warming World, London: Continuum.
  • Giere, Ronald N., 2004, “How Models Are Used to Represent Reality”, Philosophy of Science, 71(5): 742–52. doi:10.1086/425063
  • Gleckler, P.J., K.E. Taylor, and C. Doutriaux, 2008, “Performance metrics for climate models”, Journal of Geophysical Research—Atmospheres, 113(D6): D06104. doi:10.1029/2007JD008972
  • Goodwin, William M., 2015, “Global Climate Modeling as Applied Science”, Journal for General Philosophy of Science, 46(2): 339–350. doi:10.1007/s10838-015-9301-0
  • Goodwin, William M. and Eric Winsberg, 2016, “Missing the Forest for the Fish: How Much Does the ‘Hawkmoth Effect’ Threaten the Viability of Climate Projections?”, Philosophy of Science, 83(5): 1122–1132. doi:10.1086/687943
  • Gramelsberger, Gabriele, 2010, “Conceiving processes in atmospheric models—General equations, subscale parameterizations, and ‘superparameterizations’”, Studies in History and Philosophy of Modern Physics, 41(3): 233–241. doi:10.1016/j.shpsb.2010.07.005
  • Guillemot, Hélène, 2010, “Connections between simulations and observation in climate computer modeling. Scientist’s practices and ‘bottom-up epistemology’ lessons”, Studies in History and Philosophy of Modern Physics, 41(3): 242–252. doi:10.1016/j.shpsb.2010.07.003
  • Hansen, J., R. Ruedy, M. Sato, and K. Lo, 2010, “Global Surface Temperature Change”, Reviews of Geophysics, 48: RG4004. doi:10.1029/2010RG000345
  • Harris, I., P.D. Jones, T.J. Osborn, and D.H. Lister, 2014, “Updated high-resolution grids of monthly climatic observations—the CRU TS3.10 Dataset”, International Journal of Climatology, 34(3): 623–642. doi:10.1002/joc.3711
  • Hartmann, Dennis L., Albert M.G. Klein Tank, Matilde Rusticucci, et al., 2013, “Observations: Atmosphere and Surface Supplementary Material”, in Stocker et al. 2013: 2SM-1–2SM-30 (ch. 2SM).
  • Havstad, Joyce C. and Matthew J. Brown, 2017, “Neutrality, Relevance, Prescription, and the IPCC”, Public Affairs Quarterly, 31(4): 303–324.
  • Hegerl, Gabriele and Francis Zwiers, 2011, “Use of models in detection and attribution of climate change”, WIREs Climate Change, 2(4): 570–591. doi:10.1002/wcc.121
  • Held, Isaac N., 2005, “The gap between simulation and understanding in climate modeling”, Bulletin of the American Meteorological Society, 86(11): 1609–1614. doi:10.1175/BAMS-86-11-1609
  • Heymann, Matthias and Dania Achermann, forthcoming, “From Climatology to Climate Science in the 20th Century”, in Sam White, Christian Pfister, and Franz Mauelshagen (eds.), Palgrave Handbook of Climate History, Palgrave MacMillan UK.
  • Hillerbrand, Rafaela, 2014, “Climate Simulations: Uncertain Projections for an Uncertain World”, Journal for General Philosophy of Science, 45(S1): 17–32. doi:10.1007/s10838-014-9266-4
  • Houghton, John, 2015, Global Warming: The Complete Briefing, fifth edition, Cambridge: Cambridge University Press.
  • Houghton, John T., et al. (eds.), 2001, Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge: Cambridge University Press, available online.
  • Hourdin, Frédéric, Thorsten Mauritsen, Andrew Gettelman, et al., 2017, “The Art and Science of Climate Model Tuning”, Bulletin of the American Meteorological Society, 98(3): 589–602. doi:10.1175/BAMS-D-15-00135.1
  • Ingleby, Bruce, Mark Rodwell and Lars Isaksen, 2016, “Global Radiosonde Network Under Pressure”, ECMWF Newsletter No. 149: 25–30. doi:10.21957/cblxtg
  • Intemann, Kristen, 2015, “Distinguishing Between Legitimate and Illegitimate Values in Climate Modeling”, European Journal for Philosophy of Science, 5(2): 217–232. doi:10.1007/s13194-014-0105-6
  • –––, 2017, “Who Needs a Consensus Anyway? Addressing Manufactured Doubt and Increasing Public Trust in Climate Science”, Public Affairs Quarterly, 31(3): 189–208.
  • IPCC, 2013a, “What is the IPCC?”, 30 August 2013. IPCC Fact Sheet, available online, accessed August 2, 2017.
  • –––, [IPCC-glossary] 2013b, “Annex III: Glossary”, Serge Planton (ed.), in Stocker et al. 2013: 1447–1465.
  • –––, 2014, Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, Rajendra K. Pachauri and Leo Meyer (eds.)]. Geneva: IPCC, available online.
  • Jones, P.D., D.H. Lister, T.J. Osborn, C. Harpham, M. Salmon, and C.P. Morice, 2012, “Hemispheric and large-scale land-surface air temperature variations: An extensive revision and an update to 2010”, Journal of Geophysical Research: Atmospheres, 117(D5): D05127. doi:10.1029/2011JD017139
  • Kalnay, Eugenia, 2003, Atmospheric Modeling, Data Assimilation and Predictability, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511802270
  • Kandlikar, Milind, James S. Risbey and Suraje Dessai, 2005, “Representing and Communicating Deep Uncertainty in Climate-Change Assessments”, Comptes rendus Géoscience, 337(4): 443–455. doi:10.1016/j.crte.2004.10.010
  • Karl, Thomas R., Susan J. Hassol, Christopher D. Miller and William L. Murray (eds.), 2006, Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences, Washington, DC: U.S. Climate Change Science Program and Subcommittee on Global Change Research, available online.
  • Katzav, Joel, 2013a, “Hybrid Models, Climate Models, and Inference to the Best Explanation”, The British Journal for the Philosophy of Science, 64(1): 107–129. doi:10.1093/bjps/axs002
  • –––, 2013b, “Severe Testing of Climate Change Hypotheses”, Studies in History and Philosophy of Modern Physics, 44(4): 433–441. doi:10.1016/j.shpsb.2013.09.003
  • –––, 2014, “The Epistemology of Climate Models and Some of Its Implications for Climate Science and the Philosophy of Science”, Studies in History and Philosophy of Modern Physics, 46(May): 228–238. doi:10.1016/j.shpsb.2014.03.001
  • Katzav, Joel, Henk A. Dijkstra, and A.T.J. (Jos) de Laat, 2012, “Assessing Climate Model Projections: State of the Art and Philosophical Reflections”, Studies in History and Philosophy of Modern Physics, 43(4): 258–276. doi:10.1016/j.shpsb.2012.07.002
  • Katzav, Joel and Wendy S. Parker, 2015, “The Future of Climate Modeling”, Climatic Change, 132(4): 375–387. doi:10.1007/s10584-015-1435-x
  • –––, forthcoming, “Issues in the Theoretical Foundations of Climate Science”, Studies in History and Philosophy of Modern Physics. doi:10.1016/j.shpsb.2018.02.001
  • Knutson, Thomas R., Fanrong Zeng, and Andrew T. Wittenberg, 2013, “Multi-Model Assessment of Regional Surface Temperature Trends: CMIP3 and CMIP5 Twentieth-Century Simulations”, Journal of Climate, 26(22): 8709–8743. doi:10.1175/JCLI-D-12-00567.1
  • Knutti, Reto, 2008, “Should We Believe Model Predictions of Future Climate Change?”, Philosophical Transactions of the Royal Society A, 366(1885): 4647–4664. doi:10.1098/rsta.2008.0169
  • –––, 2010, “The End of Model Democracy?”, Climatic Change, 102(3–4): 395–404. doi:10.1007/s10584-010-9800-2
  • –––, 2018, “Climate Model Confirmation: From Philosophy to Predicting Climate in the Real World”, in Lloyd and Winsberg 2018: 325–351 (Ch. 11).
  • Knutti, Reto, Reinhard Furrer, Claudia Tebaldi, Jan Cermak, and Gerald A. Meehl, 2010a, “Challenges in Combining Projections from Multiple Climate Models”, Journal of Climate, 23(10): 2739–2758. doi:10.1175/2009JCLI3361.1
  • Knutti, Reto, Gabriel Abramowitz, Matthew Collins, Veronika Eyring, Peter J. Gleckler, Bruce Hewitson, and Linda Mearns, 2010b, “Good practice guidance paper on assessing and combining multi model climate projections”, in Stocker et al. 2010: 1–14.
  • Knutti, Reto, David Masson, and Andrew Gettelman, 2013, “Climate Model Genealogy: Generation CMIP5 and How We Got There”, Geophysical Research Letters, 40(6): 1194–1199. doi:10.1002/grl.50256
  • Knutti, Reto, Jan Sedláček, Benjamin M. Sanderson, Ruth Lorenz, Erich M. Fischer, and Veronika Eyring, 2017, “A Climate Model Projection Weighting Scheme Accounting for Performance and Interdependence”, Geophysical Research Letters, 44(4): 1909–1918. doi:10.1002/2016GL072012
  • Krueger, Oliver and Jin-Song Von Storch, 2011, “A Simple Empirical Model for Decadal Climate Prediction”, Journal of Climate, 24(4): 1276–1283. doi:10.1175/2010JCLI3726.1
  • Lahsen, Myanna, 2005, “Seductive Simulations? Uncertainty Distribution Around Climate Models”, Social Studies of Science, 35(6): 895–922. doi:10.1177/0306312705053049
  • Lawhead, Jon, forthcoming, “Structural Modelling Error and the System Individuation Problem”, The British Journal for the Philosophy of Science.
  • Lawrimore, Jay H., Matthew J. Menne, Byron E. Gleason, Claude N. Williams, David B. Wuertz, Russell S. Vose, and Jared Rennie, 2011, “An Overview of the Global Historical Climatology Network Monthly Mean Temperature Data Set, Version 3”, Journal of Geophysical Research, 116(D19): D19121. doi:10.1029/2011JD016187
  • Leiserowitz, Anthony A., Edward W. Maibach, Connie Roser-Renouf, Nicholas Smith, and Erica Dawson, 2013, “Climategate, Public Opinion, and the Loss of Trust”, American Behavioral Scientist, 57(6): 818–837. doi:10.1177/0002764212458272
  • Lenhard, Johannes and Eric Winsberg, 2010, “Holism, Entrenchment and the Future of Climate Model Pluralism”, Studies in History and Philosophy of Modern Physics, 41(3): 253–262. doi:10.1016/j.shpsb.2010.07.001
  • Lewandowsky, Stephan, Naomi Oreskes, James S. Risbey, Ben R. Newell, and Michael Smithson, 2015, “Seepage: Climate change denial and its effect on the scientific community”, Global Environmental Change, 33: 1–13. doi:10.1016/j.gloenvcha.2015.02.013
  • Leuschner, Anna, 2015, “Uncertainties, Plurality, and Robustness in Climate Research and Modeling. On the Reliability of Climate Prognoses”, Journal for General Philosophy of Science, 46(2): 367–381. doi:10.1007/s10838-015-9304-x
  • –––, 2018, “Is It Appropriate to ‘target’ Inappropriate Dissent? on the Normative Consequences of Climate Skepticism”, Synthese, 195(3): 1255–1271. doi:10.1007/s11229-016-1267-x
  • Lloyd, Elisabeth A., 2009, “I—Varieties of Support and Confirmation of Climate Models”, Aristotelian Society Supplementary Volume, 83(1): 213–232. doi:10.1111/j.1467-8349.2009.00179.x
  • –––, 2010, “Confirmation and Robustness of Climate Models”, Philosophy of Science, 77(5): 971–984. doi:10.1086/657427
  • –––, 2012, “The Role of ‘Complex’ Empiricism in the Debates About Satellite Data and Climate Models”, Studies in History and Philosophy of Science, 43(2): 390–401. doi:10.1016/j.shpsa.2012.02.001
  • –––, 2015, “Model Robustness as a Confirmatory Virtue: The Case of Climate Science”, Studies in History and Philosophy of Science, 49(2): 58–68. doi:10.1016/j.shpsa.2014.12.002
  • Lloyd, Elisabeth A. and Eric Winsberg (eds.), 2018, Climate Modeling: Philosophical and Conceptual Issues, Palgrave Macmillan.
  • Lorenz, Edward N., 1970, “Climatic Change as a Mathematical Problem”, Journal of Applied Meteorology, 9(3): 325–329. doi:10.1175/1520-0450(1970)009<0325:CCAAMP>2.0.CO;2
  • Maltais, Aaron and Catriona McKinnon, 2015, Ethics of Climate Change Governance, London: Rowman and Littlefield.
  • Mann, Michael E., 2012, The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, New York: Columbia University Press.
  • Mann, Michael E., Raymond S. Bradley, and Malcolm K. Hughes, 1998, “Global-Scale Temperature Patterns and Climate Forcing Over the Past Six Centuries”, Nature, 392(6678): 779–787. doi:10.1038/33859
  • –––, 1999, “Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations”, Geophysical Research Letters, 26(6): 759–762. doi:10.1029/1999GL900070
  • Mann, Michael, Caspar Amman, Ray Bradley, Keith Briffa, Philip Jones, Tim Osborn, Tom Crowley, et al., 2003, “On Past Temperatures and Anomalous Late-20th Century Warmth”, Eos, 84(27): 256–258. doi:10.1029/2003EO270003
  • Marcott, S.A., J.D. Shakun, P.U. Clark, and A.C. Mix, 2013, “A Reconstruction of Regional and Global Temperature for the Past 11,300 Years”, Science, 339(6124): 1198–1201. doi:10.1126/science.1228026
  • Masson-Delmotte, Valérie, Michael Schulz, et al., 2013, “Information from Paleoclimate Archives”, in Stocker et al. 2013: 383–464.
  • Mastrandrea, Michael D., Christopher B. Field, Thomas F. Stocker, Ottmar Edenhofer, Kristie L. Ebi, David J. Frame, Hermann Held, Elmar Kriegler, Katharine J. Mach, Patrick R. Matschoss, Gian-Kasper Plattner, Gary W. Yohe, and Francis W. Zwiers, 2010, “Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties”, Intergovernmental Panel on Climate Change (IPCC), 6–7 July 2010, Jasper Ridge, CA, available online, accessed August 4, 2017.
  • Mauritsen, Thorsten, Bjorn Stevens, Erich Roeckner, Traute Crueger, Monika Esch, Marco Giorgetta, Helmuth Haak, et al., 2012, “Tuning the Climate of a Global Model”, Journal of Advances in Modeling Earth Systems, 4(3): M00A01. doi:10.1029/2012MS000154
  • Mayo, Deborah G., 1996, Error and the Growth of Experimental Knowledge, Chicago: University of Chicago Press.
  • Mayo-Wilson, Conor, 2015, “Structural Chaos”, Philosophy of Science, 82(5): 1236–1247. doi:10.1086/684086
  • McFarlane, Norman, 2011, “Parameterizations: Representing Key Processes in Climate Models Without Resolving Them”, WIREs Climate Change, 2(4): 482–497. doi:10.1002/wcc.122
  • McGuffie, Kendal and Ann Henderson-Sellers, 2014, The Climate Modelling Primer, fourth edition, Chichester: Wiley Blackwell.
  • McIntyre, Stephen and Ross McKitrick, 2003, “Corrections to the Mann et al. (1998) Proxy Data Base and Northern Hemispheric Average Temperature Series”, Energy & Environment, 14(6): 751–771. doi:10.1260/095830503322793632
  • –––, 2005, “Hockey Sticks, Principal Components, and Spurious Significance”, Geophysical Research Letters, 32(3): L03710. doi:10.1029/2004GL021750
  • Mearns, L. O., S. Sain, L. R. Leung, M. S. Bukovsky, S. McGinnis, S. Biner, D. Caya, et al., 2013, “Climate Change Projections of the North American Regional Climate Change Assessment Program (NARCCAP)”, Climatic Change, 120(4): 965–975. doi:10.1007/s10584-013-0831-3
  • Medhaug, Iselin, Martin B. Stolpe, Erich M. Fischer, and Reto Knutti, 2017, “Reconciling Controversies about the ‘Global Warming Hiatus’”, Nature, 545(7652): 41–47. doi:10.1038/nature22315
  • Meehl, Gerald A., Lisa Goddard, George Boer, Robert Burgman, Grant Branstator, Christophe Cassou, Susanna Corti, et al., 2014, “Decadal Climate Prediction: An Update from the Trenches”, Bulletin of the American Meteorological Society, 95(2): 243–267. doi:10.1175/BAMS-D-12-00241.1
  • Menne, Matthew J., Imke Durre, Russell S. Vose, Byron E. Gleason, and Tamara G. Houston, 2012, “An Overview of the Global Historical Climatology Network-Daily Database”, Journal of Atmospheric and Oceanic Technology, 29(7): 897–910. doi:10.1175/JTECH-D-11-00103.1
  • Montford, Andrew W., 2010, The Hockey Stick Illusion: Climategate and the Corruption of Science, London: Stacey International.
  • Murphy, J.M., B.B.B. Booth, M. Collins, G.R. Harris, D.M.H. Sexton, and M.J. Webb, 2007, “A Methodology for Probabilistic Predictions of Regional Climate Change from Perturbed Physics Ensembles”, Philosophical Transactions of the Royal Society A, 365(1857): 1993–2028. doi:10.1098/rsta.2007.2077
  • Murphy, J.M., D. Sexton, G. Jenkins, et al., 2009, UK Climate Projections Science Report: Climate Change Projections, Exeter: Met Office Hadley Centre, available online.
  • National Research Council (NRC), 2000, Reconciling Observations of Global Temperature Change, Washington, DC: National Academy Press. doi:10.17226/9755
  • –––, 2006, Surface Temperature Reconstructions for the Last 2,000 Years, Washington, DC: National Academy Press. doi:10.17226/11676
  • Neale, Richard B., Jadwiga H. Richter, Andrew J. Conley, Sungsu Park, Peter H. Lauritzen et al., 2010, “Description of the NCAR Community Atmosphere Model (CAM 4.0)”, NCAR Technical Note (NCAR/TN-485+STR), Boulder, CO: National Center For Atmospheric Research, available online, accessed August 1, 2017.
  • Nebeker, Frederik, 1995, Calculating the Weather: Meteorology in the 20th Century, New York: Academic Press.
  • Odenbaugh, Jay, 2012, “Climate, Consensus and Contrarians”, in William P. Kabasenche, Michael O’Rourke, and Matthew H. Slater (eds.), The Environment: Philosophy, Science and Ethics, Cambridge, MA: MIT Press, pp. 137–150. doi:10.7551/mitpress/9780262017404.003.0008
  • Oreskes, Naomi, 2007, “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?”, in Joseph F.C. DiMento and Pamela Doughman (eds.), Climate Change: What It Means for Us, Our Children, and Our Grandchildren, Cambridge, MA: MIT Press, pp. 65–99.
  • Oreskes, Naomi and Erik M. Conway, 2010, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming, London: Bloomsbury Press.
  • PAGES-2K-Consortium, 2013, “Continental-Scale Temperature Variability During the Past Two Millennia”, Nature Geoscience, 6(5): 339–346. doi:10.1038/ngeo1797
  • Palmer, Clare, 2011, “Does Nature Matter? The Place of the Non-Human in the Ethics of Climate Change”, in Arnold 2011: 272–291. doi:10.1017/CBO9780511732294.014
  • Palmer, T.N., 1999, “A Nonlinear Dynamical Perspective on Climate Prediction”, Journal of Climate, 12(2): 575–591. doi:10.1175/1520-0442(1999)012<0575:ANDPOC>2.0.CO;2
  • Parker, Wendy S., 2006, “Understanding Pluralism in Climate Modeling”, Foundations of Science, 11(4): 349–368. doi:10.1007/s10699-005-3196-x
  • –––, 2009, “II—Confirmation and Adequacy-for-Purpose in Climate Modelling”, Aristotelian Society Supplementary Volume, 83(1): 233–249. doi:10.1111/j.1467-8349.2009.00180.x
  • –––, 2010, “Comparative Process Tracing and Climate Change Fingerprints”, Philosophy of Science, 77(5): 1083–1095. doi:10.1086/656814
  • –––, 2011, “When Climate Models Agree: the Significance of Robust Model Predictions”, Philosophy of Science, 78(4): 579–600. doi:10.1086/661566
  • –––, 2014a, “Values and Uncertainties in Climate Prediction, Revisited”, Studies in History and Philosophy of Science, 46(June): 24–30. doi:10.1016/j.shpsa.2013.11.003
  • –––, 2014b, “Simulation and Understanding in the Study of Weather and Climate”, Perspectives on Science, 22(3): 336–356. doi:10.1162/POSC_a_00137
  • –––, 2017, “Computer Simulation, Measurement and Data Assimilation”, The British Journal for the Philosophy of Science, 68(1): 273–304. doi:10.1093/bjps/axv037
  • Petersen, Arthur C., 2000, “Philosophy of Climate Science”, Bulletin of the American Meteorological Society, 81(2): 265–271. doi:10.1175/1520-0477(2000)081<0265:POCS>2.3.CO;2
  • –––, 2012, Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties and Their Role in Climate Science and Policy Advice, Boca Raton: Taylor & Francis.
  • Pirtle, Zachary, Ryan Meyer, and Andrew Hamilton, 2010, “What Does it Mean When Climate Models Agree? A Case for Assessing Independence Among General Circulation Models”, Environmental Science & Policy, 13(5): 351–361. doi:10.1016/j.envsci.2010.04.004
  • Preston, Christopher J., 2013, “Ethics and Geoengineering: Reviewing the Moral Issues Raised by Solar Radiation Management and Carbon Dioxide Removal”, WIREs Climate Change 4(1): 23–37. doi:10.1002/wcc.198
  • ––– (ed.), 2016, Climate Justice and Geoengineering: Ethics and Policy in the Atmospheric Anthropocene, London: Rowman and Littlefield.
  • Qu, Xin and Alex Hall, 2007, “What Controls the Strength of Snow Albedo Feedback?”, Journal of Climate, 20(15): 3971–3981. doi:10.1175/JCLI4186.1
  • Ranalli, Brent, 2012, “Climate Science, Character and the ‘Hard-Won’ Consensus”, Kennedy Institute of Ethics Journal, 22(2): 183–210. doi:10.1353/ken.2012.0004
  • Randall, David A., Richard A. Wood, Sandrine Bony, et al., 2007, “Climate Models and Their Evaluation”, in S. Solomon et al. (eds.) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge: Cambridge University Press, pp. 589–662 (ch. 8), available online.
  • Randall, David, Mark Branson, Minghuai Wang, Steven Ghan, Cheryl Craig, Andrew Gettelman, and James Edwards, 2013, “A Community Atmosphere Model with Superparameterized Clouds”, Eos, 94(25): 221–222. doi:10.1002/2013EO250001
  • Rehg, William and Kent Staley, 2017, “‘Agreement’ in the IPCC Confidence Measure”, Studies in History and Philosophy of Modern Physics, 57(February): 126–134. doi:10.1016/j.shpsb.2016.10.008
  • Rennie, J.J., J.H. Lawrimore, B.E. Gleason, et al., 2014, “The International Surface Temperature Initiative Global Land Surface Databank: Monthly Temperature Data Release Description and Methods”, Geoscience Data Journal, 1(2): 75–102. doi:10.1002/gdj3.8
  • Risbey, James S., Stephan Lewandowsky, Clothilde Langlais, Didier P. Monselesan, Terence J. O’Kane, and Naomi Oreskes, 2014, “Well-Estimated Global Surface Warming in Climate Projections Selected for ENSO Phase”, Nature Climate Change, 4(9): 835–840. doi:10.1038/NCLIMATE2310
  • Riser, Stephen C., Howard J. Freeland, Dean Roemmich, Susan Wijffels, Ariel Troisi, Mathieu Belbéoch, Denis Gilbert, et al., 2016, “Fifteen Years of Ocean Observations with the Global Argo Array”, Nature Climate Change, 6(2): 145–153. doi:10.1038/NCLIMATE2872
  • Rohde, Robert, Richard A. Muller, Robert Jacobsen, Elizabeth Muller, Saul Perlmutter, Arthur Rosenfeld, Jonathan Wurtele, Donald Groom, and Charlotte Wickham, 2013, “A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011”, Geoinformatics and Geostatistics: An Overview, 1(1): 1. doi:10.4172/2327-4581.1000101
  • Rougier, Jonathan, 2007, “Probabilistic Inference for Future Climate Using an Ensemble of Climate Model Evaluations”, Climatic Change, 81(3–4): 247–264. doi:10.1007/s10584-006-9156-9
  • Russell, Muir, Geoffrey Boulton, Peter Clarke, David Eyton and James Norton, 2010, The Independent Climate Change Emails Review, available online, accessed August 5, 2017.
  • Santer, Benjamin D., Céline Bonfils, Jeffrey F. Painter, Mark D. Zelinka, Carl Mears, Susan Solomon, Gavin A. Schmidt, et al., 2014, “Volcanic Contribution to Decadal Changes in Tropospheric Temperature”, Nature Geoscience, 7(3): 185–189. doi:10.1038/ngeo2098
  • Santer, Benjamin D., Susan Solomon, Giuliana Pallotta, Carl Mears, Stephen Po-Chedley, Qiang Fu, Frank Wentz, et al., 2017, “Comparing Tropospheric Warming in Climate Models and Satellite Data”, Journal of Climate, 30(1): 373–392. doi:10.1175/JCLI-D-16-0333.1
  • Schmidt, Gavin A., 2011, “Reanalyses ‘R’ Us”, posted July 26, 2011, available online, accessed August 1, 2017.
  • –––, 2012, “Short term trends: Another proxy fight”, posted November 1, 2012, available online, accessed August 10, 2017.
  • Schmidt, Gavin A. and Steven Sherwood, 2015, “A Practical Philosophy of Complex Climate Modelling”, European Journal for Philosophy of Science, 5(2): 149–169. doi:10.1007/s13194-014-0102-9
  • Schupbach, Jonah N., 2016, “Robustness Analysis as Explanatory Reasoning”, The British Journal for the Philosophy of Science, 69(1): 275–300. doi:10.1093/bjps/axw008
  • Schurer, Andrew P., Gabriele C. Hegerl, Michael E. Mann, Simon F. B. Tett, and Steven J. Phipps, 2013, “Separating Forced from Chaotic Climate Variability over the Past Millennium”, Journal of Climate, 26(18): 6954–6973. doi:10.1175/JCLI-D-12-00826.1
  • Sellers, William D., 1969, “A Global Climatic Model Based on the Energy Balance of the Earth-Atmosphere System”, Journal of Applied Meteorology, 8(3): 392–400. doi:10.1175/1520-0450(1969)008<0392:AGCMBO>2.0.CO;2
  • Sexton, David M. H., James M. Murphy, Mat Collins, and Mark J. Webb, 2012, “Multivariate Probabilistic Projections Using Imperfect Climate Models Part I: Outline of Methodology”, Climate Dynamics, 38(11–12): 2513–2542. doi:10.1007/s00382-011-1208-9
  • Shue, Henry, 2014, Climate Justice: Vulnerability and Protection, New York: Oxford University Press.
  • Shukla, J., R. Hagedorn, M. Miller, T. N. Palmer, B. Hoskins, J. Kinter, J. Marotzke, and J. Slingo, 2009, “Strategies: Revolution in Climate Prediction is Both Necessary and Possible: a Declaration at the World Modeling Summit for Climate Prediction”, Bulletin of the American Meteorological Society, 90(2): 175–178. doi:10.1175/2008BAMS2759.1
  • Singer, Peter, 2016, One World Now: The Ethics of Globalization, New Haven: Yale University Press.
  • Smith, L.A., 2002, “What Might We Learn from Climate Forecasts?”, Proceedings of the National Academy of Sciences, 99(supplement 1): 2487–2492. doi:10.1073/pnas.012580599
  • Soon, Willie, Sallie Baliunas, Craig Idso, Sherwood Idso, and David R. Legates, 2003, “Reconstructing Climatic and Environmental Changes of the Past 1000 Years: A Reappraisal”, Energy & Environment, 14(2): 233–296. doi:10.1260/095830503765184619
  • Spencer, R.W. and J.R. Christy, 1990, “Precise Monitoring of Global Temperature Trends from Satellites”, Science, 247(4950): 1558–1562. doi:10.1126/science.247.4950.1558
  • Stainforth, D.A., T. Aina, C. Christensen, et al., 2005, “Uncertainty in Predictions of the Climate Response to Rising Levels of Greenhouse Gases”, Nature, 433(7024): 403–406. doi:10.1038/nature03301
  • Stainforth, D.A., M.R. Allen, E.R. Tredger, and L.A. Smith, 2007, “Confidence, Uncertainty and Decision-Support Relevance in Climate Predictions”, Philosophical Transactions of the Royal Society A, 365(1857): 2145–2161. doi:10.1098/rsta.2007.2074
  • Steele, Katie and Charlotte Werndl, 2013, “Climate Models, Calibration and Confirmation”, The British Journal for the Philosophy of Science, 64(3): 609–635. doi:10.1093/bjps/axs036
  • –––, 2016, “The Diversity of Model Tuning Practices in Climate Science”, Philosophy of Science, 83(5): 1133–1144. doi:10.1086/687944
  • Stocker, Thomas, 2011, Introduction to Climate Modelling, Berlin: Springer-Verlag. doi:10.1007/978-3-642-00773-6
  • Stocker, Thomas F., et al. (eds.), 2013, Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, New York: Cambridge University Press, available online.
  • Sundberg, Mikaela, 2007, “Parameterizations as Boundary Objects”, Social Studies of Science, 37(3): 473–488. doi:10.1177/0306312706075330
  • Taylor, Karl E., Ronald J. Stouffer, and Gerald A. Meehl, 2012, “An Overview of CMIP5 and the Experiment Design”, Bulletin of the American Meteorological Society, 93(4): 485–498. doi:10.1175/bams-d-11-00094.1
  • Tebaldi, Claudia, Richard L. Smith, Doug Nychka, and Linda O. Mearns, 2005, “Quantifying Uncertainty in Projections of Regional Climate Change: A Bayesian Approach to the Analysis of Multimodel Ensembles”, Journal of Climate, 18(10): 1524–1540. doi:10.1175/JCLI3363.1
  • Thompson, Erica, Roman Frigg, and Casey Helgeson, 2016, “Expert Judgment for Climate Change Adaptation”, Philosophy of Science, 83(5): 1110–1121. doi:10.1086/687942
  • Thorne, Peter W., John R. Lanzante, Thomas C. Peterson, Dian J. Seidel, and Keith P. Shine, 2011, “Tropospheric Temperature Trends: History of An Ongoing Controversy”, WIREs Climate Change, 2(1): 66–88. doi:10.1002/wcc.80
  • Vezér, Martin A., 2016a, “Variety-Of-Evidence Reasoning About the Distant Past: A Case Study in Paleoclimate Reconstruction”, European Journal for Philosophy of Science, 7(2): 257–265. doi:10.1007/s13194-016-0156-y
  • –––, 2016b, “Computer Models and the Evidence of Anthropogenic Climate Change: An Epistemology of Variety-Of-Evidence Inferences and Robustness Analysis”, Studies in History and Philosophy of Science: Part A, 56(April): 95–102. doi:10.1016/j.shpsa.2016.01.004
  • Wahl, Eugene R. and Caspar M. Ammann, 2007, “Robustness of the Mann, Bradley, Hughes Reconstruction of Northern Hemisphere Surface Temperatures: Examination of Criticisms Based on the Nature and Processing of Proxy Climate Evidence”, Climatic Change, 85(1–2): 33–69. doi:10.1007/s10584-006-9105-7
  • Weart, Spencer, 2008, The Discovery of Global Warming, second edition (revised and updated), Cambridge, MA: Harvard University Press.
  • –––, 2010, “The Development of General Circulation Models of Climate”, Studies in History and Philosophy of Modern Physics, 41(3): 208–217. doi:10.1016/j.shpsb.2010.06.002
  • Weigel, Andreas P., Reto Knutti, Mark A. Liniger, and Christof Appenzeller, 2010, “Risk of Model Weighting in Multimodel Climate Projections”, Journal of Climate, 23(15): 4175–4191. doi:10.1175/2010JCLI3594.1
  • Werndl, Charlotte, 2016, “On Defining Climate and Climate Change”, The British Journal for the Philosophy of Science, 67(2): 337–364. doi:10.1093/bjps/axu048
  • –––, forthcoming, “Initial Conditions Dependence and Initial Conditions Uncertainty in Climate Science”, The British Journal for the Philosophy of Science. doi:10.1093/bjps/axy021
  • Winsberg, Eric, 2010, Science in the Age of Computer Simulation, Chicago: Chicago University Press.
  • –––, 2012, “Values and Uncertainties in the Predictions of Global Climate Models”, Kennedy Institute of Ethics Journal, 22(2): 111–137. doi:10.1353/ken.2012.0008
  • –––, 2018, Philosophy and Climate Science, Cambridge: Cambridge University Press.
  • Winsberg, Eric and William Mark Goodwin, 2016, “The Adventures of Climate Science in the Sweet Land of Idle Arguments”, Studies in History and Philosophy of Modern Physics, 54(May): 9–17. doi:10.1016/j.shpsb.2016.02.001
  • Yohe, Gary and Michael Oppenheimer, 2011, “Evaluation, Characterization, and Communication of Uncertainty by the Intergovernmental Panel on Climate Change—An Introductory Essay”, Climatic Change, 108(4): 629–639. doi:10.1007/s10584-011-0176-8

Copyright © 2018 by
Wendy Parker <wendyparker@vt.edu>
