Guest opinion: Dr. Tim Ball
This article is intended as a starting point for a project that I hope will involve the extensive reach of the Internet and allow input from the traditionally ignored knowledge of the public. It also invites specialists, who would not normally look at the Intergovernmental Panel on Climate Change (IPCC) Reports, to critique what they are saying about their area of expertise. Most of them assume that IPCC scientists rigorously follow scientific methods and procedures. They will learn the same lesson as German physicist and meteorologist Klaus-Eckart Puls, who wrote:
I discovered that much of what the IPCC and the media were telling us was sheer nonsense and was not even supported by any scientific facts and measurements.
Maybe Anthony Watts will consider setting up a file on this web page so people can record their information. The public has been excluded from the climate debate using the usual variety of excuses, all designed to let an elite group control and dictate what is known and understood. You know the list: academic qualifications, peer review, government control, media bias, and so on. That is now coming to an end, as the recent US election and the Brexit vote illustrated.
All the control of the elite at both ends of the political spectrum, and the bias of the media, failed to control the outcomes because the people had access to the Internet and, if not necessarily to the truth, at least exposure to what they were not told. I am asking anyone who wants to participate to identify data that puts the IPCC's claimed human CO2 impact on global temperature in perspective. This will include data that is omitted, not measured, or falls within an error range that makes it statistically meaningless.
In a previous article, I wrote that
The Intergovernmental Panel on Climate Change (IPCC) claim with 95 percent certainty that they completed a 5000-piece puzzle using only eleven pieces. The pieces are shown in the Radiative Forcing diagram (Figure 1) from AR5. By their assessment, they have high confidence in only five of these pieces.
The focus on eleven pieces was deliberately dictated by the definition of climate change given to them in Article 1 of the UNFCCC, a treaty formalized at the “Earth Summit” in Rio in 1992.
a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods.
The claimed impact of the eleven variables cannot be validated because of the error range in the data.
The next question is what do they define as a forcing? Here are the AR5 Glossary definitions:
External forcing External forcing refers to a forcing agent outside the climate system causing a change in the climate system. Volcanic eruptions, solar variations and anthropogenic changes in the composition of the atmosphere and land use change are external forcings. Orbital forcing is also an external forcing as the insolation changes with orbital parameters eccentricity, tilt, and precession of the equinox.
Radiative forcing Radiative forcing is the change in the net, downward minus upward, radiative flux (expressed in W m–2) at the tropopause or top of atmosphere due to a change in an external driver of climate change, such as, for example, a change in the concentration of carbon dioxide or the output of the Sun. Sometimes internal drivers are still treated as forcings even though they result from the alteration in climate, for example aerosol or greenhouse gas changes in paleoclimates. The traditional radiative forcing is computed with all tropospheric properties held fixed at their unperturbed values, and after allowing for stratospheric temperatures, if perturbed, to readjust to radiative-dynamical equilibrium. Radiative forcing is called instantaneous if no change in stratospheric temperature is accounted for. The radiative forcing once rapid adjustments are accounted for is termed the effective radiative forcing. For the purposes of this report, radiative forcing is further defined as the change relative to the year 1750 and, unless otherwise noted, refers to a global and annual average value. Radiative forcing is not to be confused with cloud radiative forcing, which describes an unrelated measure of the impact of clouds on the radiative flux at the top of the atmosphere.
This is a bureaucratic rather than a scientific definition that allows them to focus on, modify, or ignore variables of the climate system as suits their purpose. Surely, the only definition they need is
“a change in any variable within the climate system that causes a net change, regardless of the amount.”
Of course, that is not possible given the narrow restriction to human causes imposed on them. But even this is contradicted by their acknowledgment of multiple natural forcings and the selection of only two for consideration. From AR5:
Several natural drivers of climate change operate on multiple time scales. Solar variability takes place at many time scales that include centennial and millennial scales (Helama et al., 2010), as the radiant energy output of the Sun changes. Also, variations in the astronomical alignment of the Sun and the Earth (Milankovitch cycles) induce cyclical changes in RF, but this is substantial only at millennial and longer time scales (see Section 5.2.1.1). Volcanic forcing is highly episodic, but can have dramatic, rapid impacts on climate. No major asteroid impacts occurred during the reference period (1750–2012) and thus this effect is not considered here. This section discusses solar and volcanic forcings, the two dominant natural contributors of climate change since the pre-industrial time.
There is no evidence to support the claim in the last sentence. It is not untoward, given the track record of manipulation and selectivity exposed primarily by the leaked emails of Climategate, to suspect that only solar and volcanic forcings are examined, and for a political reason. Partial proof of that charge is that they only examine solar radiant energy while effectively ignoring the Milankovitch Effect and the cosmic ray theory. This is further supported by the fact that they mention and dismiss them with confusing comments. For example,
Also, variations in the astronomical alignment of the Sun and the Earth (Milankovitch cycles) induce cyclical changes in RF, but this is substantial only at millennial and longer time scales.
The Milankovitch Effects cause variation in the amount of radiant energy reaching the Earth all the time. Figure 2 shows the plot of that variation over one million years. The range of variation is approximately 100 W/m², which far exceeds the 1.68 W/m² that the IPCC attributes to human CO2. The IPCC say of the Milankovitch Effect:
Orbital forcing is considered the pacemaker of transitions between glacials and interglacials (high confidence), although there is still no consensus on exactly how the different physical processes influenced by insolation changes interact to influence ice sheet volume (Box 5.2; Section 5.3.2). The different orbital configurations make each glacial and interglacial period unique (Yin and Berger, 2010; Tzedakis et al., 2012a). Multi-millennial trends of temperature, Arctic sea ice and glaciers during the current interglacial period, and specifically the last 2000 years, have been related to orbital forcing (Section 5.5).
Figure 2: Variations in the amount of insolation (incoming solar radiation) at 65°N
The main reason given for not including Milankovitch in the IPCC calculations is the time scale, but that is a canard. There has been a change, albeit small, since 1750 A.D., the start of the IPCC period of consideration, but it is only one part of a multitude of changes that collectively swamp the human CO2 portion of change.
The amount of forcing they attribute to human CO2 is 1.68 W/m², but the error range is 1.33 to 2.03 W/m² (Figure 1). The total amount of forcing by humans is 2.29 W/m², with an error range of 1.13 to 3.33 W/m². This is a 2.2 W/m² range on a total forcing of 2.29 W/m², or 96% of the total claimed forcing. They then claim, in the final column labelled "Level of confidence," that it is VH (Very High) for CO2 and H (High) for total anthropogenic forcing. This cannot be correct, because they assume that CO2 is an evenly mixed atmospheric gas, which is contradicted by the data from the OCO2 satellite.
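The arithmetic behind that 96% figure is easy to verify. A minimal Python sketch, using only the forcing values quoted above (the helper name is mine, for illustration):

```python
# Check the error-range arithmetic described above, using the
# AR5 forcing values quoted in the text (all in W/m^2).
def range_as_fraction(best, low, high):
    """Width of the error range expressed as a fraction of the best estimate."""
    return (high - low) / best

co2 = range_as_fraction(1.68, 1.33, 2.03)    # human CO2 forcing
total = range_as_fraction(2.29, 1.13, 3.33)  # total anthropogenic forcing

print(f"CO2 forcing range:   {co2:.0%} of best estimate")
print(f"Total forcing range: {total:.0%} of best estimate")
```

The total anthropogenic range works out to 96% of the best estimate, as stated; the same calculation gives roughly 42% for the CO2 term alone.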
The problem is that they produce the data on the annual amount of CO2 humans produce, and they create and code the models that produce these results. It is a gross figure, when a net figure is required for scientific accuracy. The error range is typical of most of the data and conclusions throughout the IPCC Reports.
For example, in the 2001 Physical Science Basis Report they wrote,
So we calculate that since the late 19th or the beginning of the 20th century, up to 2000, global warming has been 0.6 ± 0.2°C.
This is an error range of ±0.2°C, or ±33%. The 0.6°C increase was reportedly produced by Phil Jones of 'hockey stick' fame; even a politician wouldn't use numbers with that error range in a polling survey. It is not surprising that when Warwick Hughes asked for the data to check it, Jones replied on February 21, 2005:
“We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”
No wonder Jones conveniently lost the data. But as Vincent Gray noted, the data was useless anyway.
The question is not only what the error ranges of the other ten variables are, but also those of the natural variables. How do those error ranges compare to the IPCC's claimed human CO2 impact of 1.68 W/m²? Despite these unknowns, AR5 states:
Human influence on the climate system is clear. This is evident from the increasing greenhouse gas concentrations in the atmosphere, positive radiative forcing, observed warming, and understanding of the climate system.
Consider that claim against their following statement from AR5 about the uncertainties.
Box 2.1 | Uncertainty in Observational Records
The vast majority of historical (and modern) weather observations were not made explicitly for climate monitoring purposes. Measurements have changed in nature as demands on the data, observing practices and technologies have evolved. These changes almost always alter the characteristics of observational records, changing their mean, their variability or both, such that it is necessary to process the raw measurements before they can be considered useful for assessing the true climate evolution. This is true of all observing techniques that measure physical atmospheric quantities. The uncertainty in observational records encompasses instrumental/recording errors, effects of representation (e.g., exposure, observing frequency or timing), as well as effects due to physical changes in the instrumentation (such as station relocations or new satellites). All further processing steps (transmission, storage, gridding, interpolating, averaging) also have their own particular uncertainties. Because there is no unique, unambiguous, way to identify and account for non-climatic artefacts in the vast majority of records, there must be a degree of uncertainty as to how the climate system has changed. The only exceptions are certain atmospheric composition and flux measurements whose measurements and uncertainties are rigorously tied through an unbroken chain to internationally recognized absolute measurement standards (e.g., the CO2 record at Mauna Loa; Keeling et al., 1976a).
Uncertainty in data set production can result either from the choice of parameters within a particular analytical framework—parametric uncertainty, or from the choice of overall analytical framework—structural uncertainty. Structural uncertainty is best estimated by having multiple independent groups assess the same data using distinct approaches. More analyses assessed now than in AR4 include published estimates of parametric or structural uncertainty. It is important to note that the literature includes a very broad range of approaches. Great care has been taken in comparing the published uncertainty ranges as they almost always do not constitute a like-for-like comparison. In general, studies that account for multiple potential error sources in a rigorous manner yield larger uncertainty ranges. This yields an apparent paradox in interpretation as one might think that smaller uncertainty ranges should indicate a better product. However, in many cases this would be an incorrect inference as the smaller uncertainty range may instead reflect that the published estimate considered only a subset of the plausible sources of uncertainty. Within the time series figures, where this issue would be most acute, such parametric uncertainty estimates are therefore not generally included. Consistent with AR4, HadCRUT4 uncertainties in GMST are included in Figure 2.19, which in addition includes structural uncertainties in GMST.
To conclude, the vast majority of the raw observations used to monitor the state of the climate contain residual non-climatic influences. Removal of these influences cannot be done definitively and neither can the uncertainties be unambiguously assessed. Therefore, care is required in interpreting both data products and their stated uncertainty estimates. Confidence can be built from: redundancy in efforts to create products; data set heritage; and cross-comparisons of variables that would be expected to co-vary for physical reasons, such as LSATs and SSTs around coastlines. Finally, trends are often quoted as a way to synthesize the data into a single number. Uncertainties that arise from such a process and the choice of technique used within this chapter are described in more detail in Box 2.2. (My bold).
They are acknowledging that, for a variety of reasons, the data for human causes of forcing have severe limitations that result in a very wide error range. We know what that range is for the few variables they use, but what is it for the variables they don't use?
It is difficult to list all the variables, but Figure 3, an often-used schematic of the complexity of the atmosphere, is a good place to start.
Here are a few examples to get the list going.
Solar radiation: AR5 includes a graph of surface solar radiation (SSR) that they say is the longest available. It shows a range of radiation of approximately 95 to 135 W/m².
Here is what they say about the record.
The longest observational SSR records, extending back to the 1920s and 1930s at a few sites in Europe, further indicate some brightening during the first half of the 20th century, known as ‘early brightening’ (cf. Figure 2.13) (Ohmura, 2009; Wild, 2009). This suggests that the decline in SSR, at least in Europe, was confined to a period between the 1950s and 1980s.
A number of issues remain, such as the quality and representativeness of some of the SSR data as well as the large-scale significance of the phenomenon (Wild, 2012). The historic radiation records are of variable quality and rigorous quality control is necessary to avoid spurious trends.
The graph generates many questions that are not even considered. For example, how does the graph track against a) the general temperature trend b) low cloud cover c) CO2 and d) water vapor?
Soil moisture is central in the schematic and in transfer of energy. AR4 says:
Since the TAR, there have been few assessments of the capacity of climate models to simulate observed soil moisture. Despite the tremendous effort to collect and homogenize soil moisture measurements at global scales (Robock et al., 2000), discrepancies between large-scale estimates of observed soil moisture remain.
There has been a long history of off-line evaluation of land surface schemes, aided more recently by the increasing availability of site-specific data (Friend et al., 2007; Blyth et al., 2010). Throughout this time, representations of the land surface have significantly increased in complexity, allowing the representation of key processes such as links between stomatal conductance and photosynthesis, but at the cost of increasing the number of poorly known internal model parameters. These more sophisticated land surface models are based on physical principles, but even at specific data-rich sites, current land surface models still struggle to perform as well as statistical models in predicting year-to-year variations in latent and sensible heat fluxes (Abramowitz et al., 2008) and runoff (Materia et al., 2010).
Either way, they know virtually nothing about soil moisture, yet admit it is important in the entire atmospheric mechanism. In AR4 they acknowledged that
Unfortunately, the total surface heat and water fluxes (see Supplementary Material, Figure S8.14) are not well observed.
It had not improved by AR5.
Carbon dioxide: This is at the heart of the problem, the forcing caused by human-produced CO2. Figure 4 is directly from the IPCC WGI Report and shows their estimates of the components of the carbon cycle. It is pure fiction. A multitude of examples exist to support this charge, including the fact that there are no actual measurements of any of the numbers in the diagram, except for the Mauna Loa record and the amount of CO2 humans produce. The Keeling family owns the patent for all IPCC CO2 measurements, and Ralph Keeling is a member of the IPCC. Figure 4 shows him in illustrious company.
Figure 4 (Original caption)
Scientists Ralph Keeling, Naomi Oreskes and Lynne Talley all participated in a press conference Friday at The Scripps Institution of Oceanography
The IPCC produce the annual estimate of human CO2 production. Their 2001 Report put human sources at 6.5 GtC (gigatons of carbon), rising to 7.5 GtC in the 2007 Report. In the FAQ section, they answer the question "How does the IPCC produce its Inventory Guidelines?" as follows.
Utilizing IPCC procedures, nominated experts from around the world draft the reports that are then extensively reviewed twice before approval by the IPCC.
There is another example in AR5 of why it is all fiction. They wrote,
During the Holocene (beginning 11,700 years ago) prior to the Industrial Era the fast domain was close to a steady state, as evidenced by the relatively small variations of atmospheric CO2 recorded in ice cores (see Section 6.2), despite small emissions from human-caused changes in land use over the last millennia (Pongratz et al., 2009).
The “relatively small variations” are artificial and a function of the measurement method. Figure 5 shows 2000 years of CO2 records from ice cores and stomata. A 70-year smoothing curve was applied to the ice core records, and they clearly underestimate the actual atmospheric level, but this was essential to the lower pre-industrial levels required for the AGW impact story.
Figure 5 (Original caption)
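The effect of a long smoothing window is easy to demonstrate. The sketch below uses synthetic data, not actual ice-core measurements; the baseline, spike sizes, and window length are illustrative assumptions, with the 70-year window taken from the figure:

```python
# Illustration with synthetic data (NOT actual ice-core measurements):
# a long moving-average window flattens short-lived excursions, so a
# smoothed record understates both the variability and the peaks of
# the underlying signal.
import random

random.seed(1)
# Hypothetical 2000-year CO2-like series: a ~280 ppm baseline with noise
series = [280 + random.gauss(0, 5) for _ in range(2000)]
# Inject three 30-year excursions of +40 ppm
for start in (300, 900, 1500):
    for i in range(start, start + 30):
        series[i] += 40

def moving_average(values, window):
    """Centred moving average, truncated at the series ends."""
    half = window // 2
    out = []
    for i in range(len(values)):
        segment = values[max(0, i - half):i + half + 1]
        out.append(sum(segment) / len(segment))
    return out

smoothed = moving_average(series, 70)  # 70-year smoothing, as in Figure 5
print("raw maximum:     ", round(max(series), 1))
print("smoothed maximum:", round(max(smoothed), 1))
```

Running this shows the smoothed maximum sitting far below the raw maximum: a 30-year excursion averaged over a 70-year window retains less than half its original amplitude.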
What is amazing is that the diagram, caption (see below), and written commentary follow the IPCC practice of identifying the severe limitations of the data, which creates a stark contradiction with the certainty in the information that goes to the public. They can't be accused of not identifying the problems. The deception is in the certainty they present to the public. For example,
The numbers represent the estimated current pool sizes in PgC and the magnitude of the different exchange fluxes in PgC yr–1 averaged over the time-period 2000–2009 (see Section 6.3).
Or, consider this bizarre part of the caption,
Some recent studies (Section 6.3) indicate that this assumption is likely not verified, but global estimates of the Industrial Era perturbation of all these fluxes was not available from peer-reviewed literature.
None of this justifies the comment in the summary to the chapter:
With a very high level of confidence, the increase in CO2 emissions from fossil fuel burning and those arising from land use change are the dominant cause of the observed increase in atmospheric CO2 concentration.
I urge everyone to read the caption that accompanies Figure 4 that follows.
The caption from page 471 of the IPCC Report AR5.
Figure 6.1 | Simplified schematic of the global carbon cycle. Numbers represent reservoir mass, also called ‘carbon stocks’ in PgC (1 PgC = 10^15 gC) and annual carbon exchange fluxes (in PgC yr–1). Black numbers and arrows indicate reservoir mass and exchange fluxes estimated for the time prior to the Industrial Era, about 1750 (see Section 6.1.1.1 for references). Fossil fuel reserves are from GEA (2006) and are consistent with numbers used by IPCC WGIII for future scenarios. The sediment storage is a sum of 150 PgC of the organic carbon in the mixed layer (Emerson and Hedges, 1988) and 1600 PgC of the deep-sea CaCO3 sediments available to neutralize fossil fuel CO2 (Archer et al., 1998). Red arrows and numbers indicate annual ‘anthropogenic’ fluxes averaged over the 2000–2009 time period. These fluxes are a perturbation of the carbon cycle during the Industrial Era post 1750. These fluxes (red arrows) are: Fossil fuel and cement emissions of CO2 (Section 6.3.1), Net land use change (Section 6.3.2), and the Average atmospheric increase of CO2 in the atmosphere, also called ‘CO2 growth rate’ (Section 6.3). The uptake of anthropogenic CO2 by the ocean and by terrestrial ecosystems, often called ‘carbon sinks’ are the red arrows part of Net land flux and Net ocean flux. Red numbers in the reservoirs denote cumulative changes of anthropogenic carbon over the Industrial Period 1750–2011 (column 2 in Table 6.1). By convention, a positive cumulative change means that a reservoir has gained carbon since 1750. The cumulative change of anthropogenic carbon in the terrestrial reservoir is the sum of carbon cumulatively lost through land use change and carbon accumulated since 1750 in other ecosystems (Table 6.1). Note that the mass balance of the two ocean carbon stocks Surface ocean and Intermediate and deep ocean includes a yearly accumulation of anthropogenic carbon (not shown). Uncertainties are reported as 90% confidence intervals.
Emission estimates and land and ocean sinks (in red) are from Table 6.1 in Section 6.3. The change of gross terrestrial fluxes (red arrows of Gross photosynthesis and Total respiration and fires) has been estimated from CMIP5 model results (Section 6.4). The change in air–sea exchange fluxes (red arrows of ocean atmosphere gas exchange) have been estimated from the difference in atmospheric partial pressure of CO2 since 1750 (Sarmiento and Gruber, 2006). Individual gross fluxes and their changes since the beginning of the Industrial Era have typical uncertainties of more than 20%, while their differences (Net land flux and Net ocean flux in the figure) are determined from independent measurements with a much higher accuracy (see Section 6.3). Therefore, to achieve an overall balance, the values of the more uncertain gross fluxes have been adjusted so that their difference matches the Net land flux and Net ocean flux estimates. Fluxes from volcanic eruptions, rock weathering (silicates and carbonates weathering reactions resulting into a small uptake of atmospheric CO2), export of carbon from soils to rivers, burial of carbon in freshwater lakes and reservoirs and transport of carbon by rivers to the ocean are all assumed to be pre-industrial fluxes, that is, unchanged during 1750–2011. Some recent studies (Section 6.3) indicate that this assumption is likely not verified, but global estimates of the Industrial Era perturbation of all these fluxes was not available from peer-reviewed literature. The atmospheric inventories have been calculated using a conversion factor of 2.12 PgC per ppm (Prather et al., 2012).
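The conversion factor cited at the end of the caption, 2.12 PgC per ppm, links atmospheric concentration changes to carbon mass. A minimal sketch; the 280 and 400 ppm figures are round numbers I assume for illustration, not values from the caption:

```python
# Convert an atmospheric CO2 change in ppm to a carbon mass in PgC,
# using the caption's conversion factor of 2.12 PgC per ppm
# (Prather et al., 2012).
PGC_PER_PPM = 2.12

def ppm_to_pgc(delta_ppm):
    """Carbon mass (PgC) corresponding to a CO2 concentration change (ppm)."""
    return delta_ppm * PGC_PER_PPM

# Example: a rise from roughly 280 ppm to 400 ppm (round numbers
# assumed for illustration) corresponds to:
delta = 400 - 280  # ppm
print(f"{delta} ppm = {ppm_to_pgc(delta):.0f} PgC added to the atmosphere")
```

For a 120 ppm change this gives about 254 PgC, which is the kind of number the red cumulative-change entries in the diagram are built from.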
This alone is sufficient to support my claim that the data is fiction and the IPCC claims totally unjustified, but remember, this is only one segment considering a few variables. Let's collectively brainstorm to identify the others, compare them with human CO2 impacts, and further expose and belittle the greatest deception in history.