Rather, software V&V is a measure of the acceptability of the risk that the software may fail to perform properly and thus not provide the desired benefits, and that the consequences of using the software may even be negative. (Risk is defined as probability times consequence.)
Here I want to make a quick note that the point of the previous post is not the only common and important philosophical misunderstanding about V&V. There is also often a failure to realize that software V&V must be independent verification and validation (IV&V).
There is a general consensus that the process by which software is developed can add to or subtract from the quality of the final software product. But the degree to which this occurs is a subjective judgment. Different software stakeholders will have different opinions.
Also, the potential consequences of using software are different for different stakeholders. Just as the climate affects different groups of people differently, an error in the global climate models could potentially be misused to affect different people to differing degrees.
The bottom line is that the estimated risk associated with any software can vary greatly (even in sign) depending on which stakeholders are being used as the reference. Thus, software V&V must not be restricted to an activity that is performed by a single software stakeholder. That would not be fair. Software V&V must be IV&V such that all stakeholders are considered fairly.
You would think this concept would be obvious for all risk analyses (software IV&V or whatever) and far from a potential problem. Unfortunately, this is not the case. For example, how worried should we be about driving a Toyota? According to popular NYT blogger Robert Wright:
My back-of-the-envelope calculations (explained in a footnote below) suggest that if you drive one of the Toyotas recalled for acceleration problems and don’t bother to comply with the recall, your chances of being involved in a fatal accident over the next two years because of the unfixed problem are a bit worse than one in a million — 2.8 in a million, to be more exact. Meanwhile, your chances of being killed in a car accident during the next two years just by virtue of being an American are one in 5,244.

Wright does not think these numbers are of much concern. But IMHO, he fails to understand that one stakeholder in the issue (Toyota) should not decide the risk for another (the public). For he writes:
So driving one of these suspect Toyotas raises your chances of dying in a car crash over the next two years from .01907 percent (that’s 19 one-thousandths of 1 percent, when rounded off) to .01935 percent (also 19 one-thousandths of one percent).
But lots of Americans seem to disagree with me. Why? I think one reason is that not all deaths are created equal. A fatal brake failure is scary, but not as scary as your car seizing control of itself and taking you on a harrowing death ride. It’s almost as if the car is a living, malicious being.

IMHO, it's not that all deaths are not created equal -- it's that not all risk analyses are.
This was also noted in Chance News #62, where we have the following questions being asked about Wright's discussion of these numbers:
- People seem to make a distinction between risks that they place upon themselves (e.g., talking on a cell phone while driving) and risks that are imposed upon them by an outsider (e.g., accidents caused by faulty manufacturing). Is this fair?
- Contrast the absolute change in risk (.01935-.01907=.00028) with the relative change in risk (.01935/.01907=1.0147). Which way seems to better reflect the change in risk?
- Examine the assumptions that Robert Wright uses. Do these seem reasonable?
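The arithmetic behind Wright's figures, and the two framings contrasted in the second question, can be checked directly. This is a minimal sketch using only the numbers quoted above (the 1-in-5,244 baseline and the 2.8-in-a-million added risk are Wright's inputs, not independently verified here):

```python
# Wright's inputs: two-year fatality risk for any American driver,
# and the added two-year risk from the unfixed recall defect.
baseline_risk = 1 / 5244      # ~0.0001907, i.e. .01907 percent
added_risk = 2.8e-6           # 2.8 in a million

total_risk = baseline_risk + added_risk

baseline_pct = baseline_risk * 100   # .01907 percent
total_pct = total_risk * 100         # .01935 percent

# Two ways to frame the same change:
absolute_change = total_pct - baseline_pct   # ~0.00028 percentage points
relative_change = total_pct / baseline_pct   # ~1.0147, a ~1.5% increase

print(f"baseline:        {baseline_pct:.5f}%")
print(f"with defect:     {total_pct:.5f}%")
print(f"absolute change: {absolute_change:.5f} points")
print(f"relative change: {relative_change:.4f}")
```

Both framings are arithmetically correct; the dispute in the post is about which one a given stakeholder is entitled to use when judging someone else's risk.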