[PAT: Note – there are a number of verification/validation misconceptions I am going to address in this article. In other words, I’m not going to constrain myself to answering the question – but why should THAT surprise you?]
The CMMI addresses this question very explicitly, and yet the question keeps coming up. In the “Introductory Notes” of BOTH Verification and Validation, the model states: “verification ensures that ‘you built it right,’ whereas validation ensures that ‘you built the right thing.’” All you have to do is follow that advice and you can’t go wrong differentiating between the two.
Yeah, right! To me, this pithy explanation is “cute,” but operationally useless. For example, if I’m conducting system testing, am I trying to ensure that the developers built it right, or am I trying to ensure that they built the right thing? I THINK it’s both – isn’t it?
Over the years I have developed what I believe is a much more useful heuristic – one that’s 90% “good enough” and certainly much easier to apply. Consider this alternative explanation…
When performing something that smells like V&V, take a look at who is involved in the activity. If it’s just our engineers and testers, then it’s probably a verification activity. The best the technical folks can hope to do is to compare the product they’re building or testing to the requirements everyone agreed to, and look for deviations that need to be addressed. Verification is ensuring the work products meet the requirements.
On the other hand, if the customer, user, or a customer/user surrogate (e.g., Product Management) is involved in a product evaluation activity, this tilts the scale much more heavily toward validation. When a customer looks at a system we are building on their behalf, they are much less interested in comparing it to the documented requirements, and much more concerned about whether the product is going to solve their problem, be usable by their people, and run on their already-overloaded computers. These concerns reflect those of validation – fitness for use and the ability to work in the intended operational environment.
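For the code-minded, the heuristic is simple enough to caricature in a few lines. Here’s a deliberately tongue-in-cheek Python sketch; the role names and the function are my own illustration, not anything drawn from the model:

```python
# A tongue-in-cheek sketch of the "who is involved?" heuristic.
# Role names are illustrative assumptions, not CMMI terminology.

CUSTOMER_SIDE = {"customer", "user", "product management"}

def classify_activity(participants):
    """Classify a V&V-ish activity by who is in the room."""
    if CUSTOMER_SIDE & set(participants):
        # Customer/user (or surrogate) involvement tilts the scale
        # toward validation; the activity may well be both.
        return "validation (possibly both)"
    # Just our engineers and testers: comparing the product to the
    # agreed-to requirements is verification.
    return "verification"

print(classify_activity(["engineer", "tester"]))          # -> verification
print(classify_activity(["engineer", "tester", "user"]))  # -> validation (possibly both)
```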
So let’s test this to see if it’s easy to apply and helps us to reach the right conclusions. If we employed only “Stepford” engineers and testers, we would start each project with perfect requirements that were perfectly understood, build the product accordingly and then perform perfect verification that discovered and remediated all deviations from the requirements. In such a perfect world there would be nothing left to find in validation!
However, in our non-Stepford, imperfect world, I contend that any problem found in validation likely results from one of three conditions:
- We missed a requirement (“You didn’t tell us the pinball machine is for a cruise ship!”);
- We misunderstood a requirement (“You meant NAUTICAL miles?” as sketched below); and/or
- Our verification activities failed to catch it (“Oooops!”).
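To make the “misunderstood requirement” condition concrete, here is a hypothetical Python sketch; the function, numbers, and scenario are invented for illustration, not taken from any real project. Suppose the documented requirement says “report the range in miles”; the team read that as statute miles, while the customer meant nautical miles. Verification against the written requirement passes cleanly, and the problem surfaces only in validation:

```python
# Hypothetical scenario: the requirement reads "report the range in miles."
# The team assumed STATUTE miles (1 mi = 1.609344 km); the customer meant
# NAUTICAL miles (1 nm = 1.852 km).

def range_in_miles(distance_km):
    """Convert km to statute miles, per the team's reading of the requirement."""
    return distance_km / 1.609344

def test_range_meets_documented_requirement():
    # Verification: the work product matches the requirement as written
    # (and as understood). This test passes.
    assert abs(range_in_miles(160.9344) - 100.0) < 1e-9

# Validation: in a demo, the customer expects 160.9344 km to display as
# roughly 86.9 nautical miles, not 100. Verification couldn't catch this,
# because the code faithfully implements the (misunderstood) requirement.
```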
So let’s run some familiar lifecycle activities through the heuristic:

- Unit testing – it’s just us engineers, which would lead us to verification. However, in the CMMI, all discussion of unit testing is found in the Technical Solution process area under SP3.1, Implement the Design – so maybe it’s neither. Ah, but TS SP3.1, subpractice 4, “Perform unit testing of the product component as appropriate,” has a reference to the Verification process area, so our first instinct was right after all!
- Integration testing – it’s typically just us engineers and testers, so verification. However, if the user, customer, or their surrogate participates in some or all of these activities, that brings a validation component to the activity as well. (A single activity CAN be both verification and validation.)
- System testing – same as integration testing.
- Qualification / customer acceptance testing – because the user, customer, and/or their surrogate are typically involved in such activities, they are primarily validation.
- Prototyping – it depends. If it’s a customer-facing prototype – sample webpages, mocked-up reports, or modified process flow diagrams shared with the customer/user to make sure we’re going down the right path to meet their needs – then it’s validation. If it’s an engineering prototype – one used, for example, to decide which of three alternative technical approaches will best meet stringent performance requirements – then it’s verification.
Another common misconception about validation is that “Validation = Customer Acceptance Testing.” Validation’s “Introductory Notes” point out that “…validation is performed early (concept/exploration phases) and incrementally throughout the product lifecycle.” The CMMI lawyers will object, saying that the “Introductory Notes” are only “informative” model components – reminding us that only the goals are required! To address the objection, gently guide them to Requirements Development SG3, “Analyze and Validate the Requirements” – objection overruled!
I firmly believe that most organizations perform more validation than they give themselves credit for. To lift the veil on these hidden validation activities, ask questions about customer/user/Product Management involvement throughout the life cycle. What kinds of things do you get them involved in, and what does that involvement entail? Do you show them mocked-up webpages, sample reports, etc. to elicit their opinions?
And what kinds of changes do you make based on their feedback? Questions like these will reveal the hidden validation activities that most development groups perform.
Keep in mind that the CMMI suggests that performing such activities in blissful ignorance is better than nothing – in fact, it’s called “capability level 1” – but it’s not as good as doing such validation activities “on purpose,” which is the purview of capability levels 2 and 3. The model also suggests that relying on project managers or engineers to “do what they feel is right” demonstrates more of an individual capability than an organizational capability. The CMMI would have you explicitly establish and institutionalize the system of practices that provides an organizational validation capability. You’re probably exhibiting many of these behavioral patterns already, but the model is encouraging you to support them in a way that ensures they are consistently performed and improved over time.
I have saved what I consider the most important point for the very end – and that is that the amorphous dividing line between verification and validation is something that is only of interest to us CMMI model geeks and that such discussions should probably not be conducted in front of the engineers and testers. Not only do we embarrass ourselves when such bickering occurs in public, but they really don’t care! What they DO care about is developing products that delight the customer. They don’t get hung up on whether a particular activity is “verification” or “validation” – if it’s the right thing to do, they do it. And THAT’s as it should be!
© Copyright 2014: Process Assessment, Consulting & Training and Broadsword Solutions
“Just the FAQs” is written/edited by Pat O’Toole and Jeff Dalton. Please contact the authors at pact.otoole@att.net and jeff@broadswordsolutions.com to suggest enhancements to their answers, or to provide an alternative response to the question posed. New questions are also welcomed!
Comments:
Hi, nice post. I'm curious about "Validation = Customer Acceptance Testing is a misconception". Can you name another software validation activity that is not UAT/CAT?
Fox,
First of all, "Validation" is more than just "software validation." But if it's software validation you're asking about, you might want to add Unit Testing to the mix. There's also a debate about "pair programming" being validation, although plenty of people insist it's "verification" (I'm not going to start THAT debate!). But there are also plenty of opportunities to validate designs, requirements, and plans. Jeff