Debating the Existence of Gravity

A few weeks ago, over at the Serendipity Blog, the author wrote:
"If anyone wants to debate the existence or seriousness of anthropogenic climate change, I’d give the same response as I would if they wanted to debate the existence or strength of gravity."

The author views debating climate change as "pointless".

IMHO, the author is actually missing an important point. In a scientific debate, it is perfectly acceptable to remind the "settled science" opponent that a sense of conviction has nothing to do with reality. Trying to convince someone is to miss the point of a scientific debate; scientific debates are not rhetorical debates. Why? Because reality doesn't care what anyone thinks.

Take gravity. Is gravity "settled science" and beyond debate? From the physics arXiv blog comes:

"Some physicists are convinced that the properties of information do not come from the behaviour of information carriers such as photons and electrons but the other way round. They think that information itself is the ghostly bedrock on which our universe is built.

Gravity has always been a fly in this ointment. But the growing realisation that information plays a fundamental role here too could open the way to the kind of unification between quantum mechanics and relativity that physicists have dreamed of."

Notice that it does not matter how convinced some people are. That is not the goal. So that would not be my goal in a debate on climate change. Just present the science behind the changes. Don't try to convince anyone. Almost Zen-like, the science will win the debate. (I seem to be on a Zen theme.)

Bayesian Scientific Method

The purpose of this post is to illustrate how scientific beliefs or truths change in a Bayesian manner when using the scientific method. It is really pretty simple, although the Bayesian viewpoint differs quite a bit from the notion of Popperian falsifiability. Here is the figure I will be using:


I have described this basic figure in a previous post. New to the figure are B(T), B(R|T), B(R), and B(T|R). These represent Bayesian beliefs. I know that it is more common to use the term Bayesian probabilities and the symbol P instead of B, but I want to avoid any possible confusion with frequency probabilities.

Prior to performing the next experiment, B(T) is my belief in theory/model 'T'. Like all Bayesian beliefs, it is a number between 0 and 1. Notice that this is my own subjective belief. But you will see that by using a Bayesian approach, my (changing) degree of belief will remain consistent with experiment over time.

B(R|T) is my belief that the experiment will yield observations/results 'R' assuming 'T' is true. This value will be deductively derivable from theory 'T'.

B(R) = B(R|T)B(T) + B(R|~T)B(~T), where B(T) + B(~T) = 1. Use this formula to calculate the degree to which some theory (T or ~T) could believably explain results 'R'.

Notice that theories can still overlap in predicting results and that a zero value for B(R) is possible if no prior theory could explain the results.

B(T|R) = B(R|T)B(T)/B(R). This formula conditionalizes B(T) and calculates my posterior degree of belief in theory 'T'.

Notice that if B(R) is zero then the formula is of the form 0/0 and a new theory must be developed that explains the experiment before the iterative process that is the scientific method can continue.

Notice also that if the new experiment is not independent of previous experiments, then B(R|T) = 1 and likewise B(R) = 1 (prior theory was already conditionalized on those previous experiments). The conditioning factor B(R|T)/B(R) is then of the form 1/1 and my belief will not be altered. So such an experiment is useless.
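To make the bookkeeping concrete, here is a minimal sketch of the conditioning step in Python. The function and argument names are my own, not from any library; the body simply encodes B(R) = B(R|T)B(T) + B(R|~T)B(~T) and B(T|R) = B(R|T)B(T)/B(R), including the 0/0 case noted above.

def condition(b_t, b_r_given_t, b_r_given_not_t):
    """Return the posterior belief B(T|R) from the prior B(T) and
    the likelihoods B(R|T) and B(R|~T)."""
    b_not_t = 1.0 - b_t                                   # B(~T) = 1 - B(T)
    b_r = b_r_given_t * b_t + b_r_given_not_t * b_not_t   # B(R)
    if b_r == 0.0:
        # The 0/0 case: no existing theory explains the results.
        # A new theory is needed before conditioning can continue.
        raise ValueError("B(R) = 0: no prior theory explains R")
    return b_r_given_t * b_t / b_r                        # B(T|R)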

Some numerical examples should make the above clear. (Note: your definition of likely or unlikely may vary from mine.)

A Likely Theory Becomes Very Likely
Prior: B(T) = .95 (likely theory)
B(~T) = 1 - .95 = .05 (all other competing theory unlikely)
B(R|T) = .99 (results very likely)
B(R|~T) = .16 (results rather unlikely according to competing theory)
Posterior: B(T|R) = .99 * .95 / [ .99 * .95 + .16 * .05] = .99 (very likely)

A Likely Theory Becomes Neutral
Prior: B(T) = .95 (likely)
B(~T) = 1 - .95 = .05 (competing theory unlikely)
B(R|T) = .05 (unlikely results)
B(R|~T) = .99 (but strongly predicted by competing theory)
Posterior: B(T|R) = .05 * .95 / [ .05 * .95 + .99 * .05] = .49 (neutral)

An Unlikely Theory Becomes Neutral
Prior: B(T) = .05 (unlikely)
B(~T) = 1 - .05 = .95 (competing theory likely)
B(R|T) = .99 (but unlikely theory highly confirmed)
B(R|~T) = .05 (and competing theory did not predict result)
Posterior: B(T|R) = .05 * .99 / [ .05 * .99 + .05 * .95] = .51 (neutral)
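For what it's worth, running the three examples through the condition() sketch above reproduces the same posteriors (rounded to two places):

print(round(condition(0.95, 0.99, 0.16), 2))  # 0.99 (likely theory becomes very likely)
print(round(condition(0.95, 0.05, 0.99), 2))  # 0.49 (likely theory becomes neutral)
print(round(condition(0.05, 0.99, 0.05), 2))  # 0.51 (unlikely theory becomes neutral)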

Zen Uncertainty

Over at the blog Various Consequences, jstultz has a post that takes note of the various levels of uncertainty possible in complex, physics-based computer models. He notes that: "At the bottom of the descent we find level infinity, Zen Uncertainty."

IMHO, Zen uncertainty is something quite different than infinite uncertainty. The definition referenced by jstultz is "Zen Uncertainty: Attempts to understand uncertainty are mere illusions; there is only suffering."

However, a completely Zen-equivalent definition would be:
Zen Uncertainty: Attempts to understand certainty are mere illusions; there is only happiness.


It is actually quite easy to understand Zen from a mathematical standpoint: straight lines are merely large circles. That is, positive infinity is equal to negative infinity. Therefore, Zen uncertainty means that infinite uncertainty is equivalent to infinite certainty. Not exactly what jstultz had in mind!

Before dismissing this idea out-of-hand, consider two things. 1) There is no mathematical inconsistency inherent in believing this. 2) Scientific experiment shows this to be the case in reality.

Mathematically, note Robinson's non-standard analysis and his hyperreal numbers. There is nothing in mathematics, other than definitions, that prevents equating negative infinity with positive infinity.
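For a concrete construction along these lines (my example, not one cited above), take the projectively extended real line:

R ∪ {∞}, a single unsigned point at infinity, with +infinity and -infinity identified as that one point.

Topologically this closes the real line up into a circle, which is exactly the "straight lines are merely large circles" picture.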

For a physics example, consider negative Kelvin temperatures, specifically nuclear spin systems. The thermodynamic temperature profile of one such nuclear spin cooling experiment (see, IIRC, the Purcell and Pound reference in the article) was:

room-temperature ----> +4K ----> (0K) ----> -4K ----> (-infinity == +infinity) ----> room-temperature
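One way to read that last step (my gloss, not the article's): the natural parameter for such a system is the inverse temperature B = 1/(kT). The passage from -4K "through" T = ±infinity back to room temperature looks singular in T, but in B it is a smooth climb through zero:

B:  negative (at -4K) ----> 0 (at T = -infinity == +infinity) ----> small positive (at room temperature)

so the claim that infinite negative temperature equals infinite positive temperature is just the statement that both correspond to the single point B = 0.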

A Note on the Climate Model Software

The scientific method requires that a theory make a documented prediction and then an experiment (or observation) performed that tests the prediction. Bayesian inference rules are then used to condition belief in the theory based on the test's results.

Assuming the climate models are the embodiment of scientific theory, which of the climate models predicted the current 10-year (rather flat) temperature trend or the winter of 2010? The answer is none. But that's fine. The models predict climate and not weather, and since climate is usually defined as at least a 30-year record of weather, the climate models should not yet be used to "scientifically" condition our climate priors (in either direction: confirm or falsify).

The alternative, assuming the science is settled and the climate models are engineering works, means that consensus software engineering quality assurance processes must be followed before the results can be used directly (without experiment) as evidence for Bayesian inference. IMHO, such SQA has not yet been adequately performed on the climate model software. That is a terrible shortcoming, since I think this alternative has the potential to let us rationally reach an earlier consensus.

A Point of Decorum

I commented on a blog recently, expressing a concern about the integrity of the scientific method as it is being applied by "the consensus" (IPCC) of climate scientists. I was informed by the blog's author that I was not being duly concerned, unduly concerned, or even obsessively concerned; I was being "a little hysterical." Hysterical? LOL.

A point of decorum: politeness has its purpose. Making uncomplimentary statements about a person's emotional state and its effect on his ability to reason correctly can easily be interpreted as simply not wishing to discuss an issue on its merits, or as a sign that the purpose of the post is something other than technical or scientific.

Statements like "being hysterical" are quite impossible to defend against. What evidence can you produce to change someone's opinion about such a thing? So such statements are never made with the intent of being proven right or wrong, which again calls into question the author's own motivations in making them. IMHO, a technically useless turn of events.

And so the author's response to my comment prevents any further comment by me. Technical discussion over. (At least the comment was posted. Kind of the author not to let my work go to waste.)

Let me reemphasize my main point. Impoliteness is scientifically/technically useless.

E.T. Jaynes once wrote: "In any field, the Establishment is seldom in pursuit of the truth, because it is composed of those who sincerely believe that they are already in possession of it."

I have never doubted the sincerity of the IPCC climate scientists' beliefs about the proper application of the scientific method. Nor do I doubt the sincerity of those climate scientists skeptical of the IPCC consensus' beliefs.

Why? Fortunately, for the scientific method ONLY, these sincere attitudes of climate scientists do not matter. The scientific method, when applied with integrity (regardless of one's prior attitude), is self-correcting.

As you can tell from my sig: "Politely Avoiding Sophistry," I believe decorum should be observed. But I guess some people simply do not understand why such a thing would be important.