But this would not be the consensus view. In May 2006, a National Science Foundation (NSF) Blue Ribbon Panel issued a report on its findings and recommendations for Simulation-Based Engineering Science. Section 3.2 of the document discusses the verification, validation, and uncertainty quantification of computer-based simulations. The section addresses the question: "What level of confidence can one assign [to] a predicted [simulation] outcome in light of what may be known about the physical system and the model used to describe it?"
To quote from the Panel's findings:
While verification and validation and uncertainty quantification have been subjects of concern for many years, their further development will have a profound impact on the reliability and utility of simulation methods in the future. New theory and methods are needed for handling stochastic models and for developing meaningful and efficient approaches to the quantification of uncertainties. As they stand now, verification, validation, and uncertainty quantification are challenging and necessary research areas that must be actively pursued.
About verification and validation (V&V), the report stated:
The entire field of V&V is in the early stage of development. Basic definitions and principles have been the subject of much debate in recent years, and many aspects of V&V remain in the gray area between the philosophy of science, subjective decision theory, and hard mathematics and physics.
On the subject of validation, the report states:
The twentieth century philosopher of science Karl Popper asserted that a scientific theory could not be validated; it could only be invalidated. Inasmuch as the mathematical model of a physical event is an expression of a theory, such models can never actually be validated in the strictest sense; they can only be invalidated. To some degree, therefore, all validation processes rely on prescribed acceptance criteria and metrics. Accordingly, the analyst judges whether the model is invalid in light of physical observations, experiments, and criteria based on experience and judgement.
And about verification the report states:
Verification processes, on the other hand, are mathematical and computational enterprises. They involve software engineering protocols, bug detection and control, scientific programming methods, and, importantly, a posteriori error estimation.
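As a small illustration of the kind of a posteriori error estimation the report alludes to (this sketch is mine, not the Panel's), consider Richardson extrapolation: run the same computation at two grid resolutions and, assuming the method converges at a known order, use the difference between the two results to estimate the discretization error that remains in the finer one.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def error_estimate(f, a, b, n, p=2):
    """A posteriori estimate of the error in the fine-grid (2n) result,
    assuming the method converges at order p (p = 2 for the trapezoidal
    rule). By Richardson's argument, if error(n) ~ C*h**p, then
    error(2n) ~ (fine - coarse) / (2**p - 1)."""
    coarse = trapezoid(f, a, b, n)
    fine = trapezoid(f, a, b, 2 * n)
    return (fine - coarse) / (2 ** p - 1)

# Integrate sin(x) on [0, pi]; the exact answer is 2.
fine = trapezoid(math.sin, 0.0, math.pi, 128)
est = error_estimate(math.sin, 0.0, math.pi, 64)
actual = 2.0 - fine
print(f"estimated error: {est:.3e}, actual error: {actual:.3e}")
```

Nothing here is specific to quadrature: the same two-resolution comparison underpins grid-convergence studies for PDE solvers, where it supplies exactly the kind of mathematical, code-level evidence the report classifies as verification.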
A more recent consensus view appears in the 2009 WTEC Report, which also has a section on validation, verification, and uncertainty quantification. The WTEC Report is considerably more detailed than the NSF Blue Ribbon Panel Report, but not much has changed. The later report notes that: "There are currently no funded U.S. national initiatives for fostering collaboration between researchers who work on new mathematical algorithms for V&V/UQ frameworks and design guidelines for stochastic systems."
So the answer is yes -- improving IV&V for scientific/engineering software would be worth the effort.