IV&V is not Impossible

There is a very important reason why I have devoted a couple of posts to the scientific method. The posts lay the groundwork for addressing an issue concerning the independent verification and validation (IV&V) of science and engineering software.

The very important issue? Many people feel IV&V is impossible.

In an article in the Feb. 4, 1994 issue of Science Magazine, Oreskes et al. make the following argument:
Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always nonunique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary value of models is heuristic.
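The nonuniqueness point is easy to make concrete. The sketch below is purely illustrative (the models, data, and tolerance are all made up by me, not taken from Oreskes): two structurally different models agree with the same sparse observations within measurement error, so the observations "confirm" both while verifying neither.

```python
import math

# Hypothetical model A: unbounded linear growth.
def linear_model(t):
    return 2.0 * t

# Hypothetical model B: a saturating process that looks linear at early times.
def saturating_model(t):
    return 20.0 * (1.0 - math.exp(-t / 10.0))

# Made-up early-time observations (t, measured value) and a measurement tolerance.
observations = [(0.1, 0.20), (0.2, 0.40), (0.3, 0.59)]
tolerance = 0.05

def confirmed(model):
    """Agreement with observation: necessary for truth, but not sufficient."""
    return all(abs(model(t) - y) <= tolerance for t, y in observations)

print(confirmed(linear_model), confirmed(saturating_model))  # both pass
# Yet the two models diverge wildly outside the observed range:
print(linear_model(50.0), saturating_model(50.0))
```

This is the fallacy of affirming the consequent in miniature: if model B were true we would see these data; we do see these data; yet nothing follows about B, since model A predicts them equally well.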
This argument should be taken seriously. After all, Science is a peer-reviewed publication that aims to represent the best of quality science. Additionally, there does not seem to be much in the way of direct, forceful rebuttal of this argument easily and freely available on the WWW. AFAIK, most of what is available either dismisses the argument or is in basic agreement with it.

For example, Patrick J. Roache is rather dismissive and writes in a paper on the quantification of uncertainty in computational fluid dynamics:
In a widely quoted paper that has been recently described as brilliant in an otherwise excellent Scientific American article (Horgan 1995), Oreskes et al (1994) think that we can find the real meaning of a technical term by inquiring about its common meaning. They make much of supposed intrinsic meaning in the words verify and validate and, as in a Greek morality play, agonize over truth. They come to the remarkable conclusion that it is impossible to verify or validate a numerical model of a natural system. Now most of their concern is with groundwater flow codes, and indeed, in geophysics problems, validation is very difficult. But they extend this to all physical sciences. They clearly have no intuitive concept of error tolerance, or of range of applicability, or of common sense. My impression is that they, like most lay readers, actually think Newton’s law of gravity was proven wrong by Einstein, rather than that Einstein defined the limits of applicability of Newton. But Oreskes et al (1994) go much further, quoting with approval (in their footnote 36) various modern philosophers who question not only whether we can prove any hypothesis true, but also “whether we can in fact prove a hypothesis false.” They are talking about physical laws—not just codes but any physical law. Specifically, we can neither validate nor invalidate Newton’s Law of Gravity. (What shall we do? No hazardous waste disposals, no bridges, no airplanes, no ….) See also Konikow & Bredehoeft (1992) and a rebuttal discussion by Leijnse & Hassanizadeh (1994). Clearly, we are not interested in such worthless semantics and effete philosophizing, but in practical definitions, applied in the context of engineering and science accuracy.
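Roache's notions of error tolerance and range of applicability can be stated operationally. The sketch below is my own illustration, not Roache's: it treats Newtonian kinetic energy as "applicable" whenever it agrees with the relativistic value within a stated relative tolerance. At high speed Newton is not thereby falsified; we have simply left his range of applicability.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def newtonian_ke(m, v):
    """Newtonian kinetic energy, (1/2) m v^2."""
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    # gamma - 1 computed in a cancellation-free form so small v stays accurate:
    # gamma - 1 = x / (s * (1 + s)), with x = (v/C)^2 and s = sqrt(1 - x).
    x = (v / C) ** 2
    s = math.sqrt(1.0 - x)
    return (x / (s * (1.0 + s))) * m * C**2

def newton_applicable(m, v, rel_tol=1e-6):
    """Is the Newtonian value within the stated relative tolerance of the
    relativistic one? The tolerance defines the range of applicability."""
    exact = relativistic_ke(m, v)
    return abs(newtonian_ke(m, v) - exact) / exact <= rel_tol

print(newton_applicable(1.0, 250.0))    # airliner speed: True
print(newton_applicable(1.0, 0.5 * C))  # half light speed: False
```

This tolerance machinery, rather than semantics about "truth," is what engineering verification and validation actually exercises.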
Ahmed E. Hassan, on the other hand, seems in basic agreement with Oreskes and writes in a fairly recent review paper on the validation of numerical ground water models:
Many sites of ground water contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This complexity has created a need for tools and approaches that can build confidence in model predictions and provide evidence that these predictions are sufficient for decision making. Confidence building is a long-term, iterative process and the author believes that this process should be termed model validation. Model validation is a process, not an end result. That is, the process of model validation cannot ensure acceptable prediction or quality of the model. Rather, it provides an important safeguard against faulty models or inadequately developed and tested models. If model results become the basis for decision making, then the validation process provides evidence that the model is valid for making decisions (not necessarily a true representation of reality). Validation, verification, and confirmation are concepts associated with ground water numerical models that not only do not represent established and generally accepted practices, but there is not even widespread agreement on the meaning of the terms as applied to models.
Let me also mention that the Oreskes article briefly and indirectly alludes to another logical fallacy, the appeal to authority:
In contrast to the term verification, the term validation does not necessarily denote an establishment of truth (although truth is not precluded). Rather, it denotes the establishment of legitimacy, typically given in terms of contracts, arguments, and methods (27).

There is a lot about Oreskes' article that I think would be interesting to discuss. However, this post is already getting too long, so I will only state what I feel is the strongest counter-argument and fill in the details in later posts. I do not agree with Oreskes because the scientific method, of which IV&V is a part, is not an exercise in logic. As I have already pointed out in an earlier post:
Note that even this most bare form of the scientific method contains two logical fallacies. The first is the use of abduction (affirming the consequent). The second is the partial reliance on IV&V for error management (appeal to authority). The use of abduction eliminates logical certainty from the scientific method and introduces the possibility of error. The logical shortcoming of IV&V means that finding and eliminating error is never certain.
The basic problem with Oreskes' argument is that it runs counter to the very foundations of the scientific method. The scientific method does not require logical certainty in order to work. The value of models is not only that they can be heuristic; it is that they can be scientific. To be anti-model is to be anti-science. Good luck with that.

1 comment:

  1. I think the basic problem with Oreskes' argument is that it seems to be a purely deductive paradigm and makes no room for probabilistic reasoning. The value of a model is in the eye of the user with a particular purpose.