Saturday, October 6, 2007

What is the appropriate coverage for Quantitative Management?

Our consultants are heavily emphasizing the use of regression, ANOVA, etc. as an interpretation of what the CMMI expects with regard to quantitatively managing processes and projects. I don't disagree that these are very valuable techniques to use when appropriate and when in line with your business objectives (per the CMMI). However, in an organization of 800+ engineers and hundreds of projects, I propose some sort of stratification of the project pool that characterizes projects with respect to size, priority, relevance, risk, alignment with business goals, etc. as a basis for the application of statistical management. To require all projects to use predictive models in order to satisfy QPM is a rather narrow interpretation from my perspective. This really drives organizations into the "checking the box" mentality, which, ironically, is the opposite direction the SEI intends. I believe we will ultimately converge on solid ground because logic and rational thinking usually prevail when mutual understanding is achieved, but the SEI may be swinging the pendulum too far in an effort to strengthen appraisal validity in the high maturity areas. Your thoughts?

Sometimes my posts can be a little long-winded, and since your question is a long and complex one, I could do the readers a favor by just saying "I'm with you" - but let me go a bit further.

You're correct in stating that the methods and techniques should be appropriate and tied directly to the goals and objectives of your business - so that's a good indicator that you're on the right track. That said, there are unlimited approaches you could take to capturing, analyzing, and reacting to the data, and the methods you've mentioned are but a few of them. There really is no single right answer. Anyone who tells you otherwise just doesn't get it.
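To make that concrete, here's one of those many possible approaches: a quick one-way ANOVA that asks whether different project strata really behave differently enough to deserve separate performance baselines. This is just a minimal sketch - the strata, the defect-density numbers, and the 0.05 threshold are illustrative assumptions on my part, not anything the model prescribes.

```python
# A minimal sketch, assuming hypothetical defect-density samples (defects per
# KLOC) from three project strata. All names and numbers are illustrative.
from scipy import stats

small_projects  = [4.1, 3.8, 4.4, 3.9, 4.2]
medium_projects = [3.6, 3.9, 3.5, 3.7, 3.8]
large_projects  = [4.5, 4.8, 4.3, 4.6, 4.7]

# One-way ANOVA: do the strata differ enough that they should be managed
# against separate process-performance baselines?
f_stat, p_value = stats.f_oneway(small_projects, medium_projects, large_projects)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Strata differ - consider separate baselines rather than one pool.")
else:
    print("No significant difference - a single baseline may be adequate.")
```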

The bigger, and more important, part of your question seems to apply to what I call "coverage." I don't believe there is ANY guidance in the model, nor did the SEI intend for there to be, that instructs us to apply statistical techniques on ALL projects or even ALL process areas. On the contrary, OPP and QPM are ONLY meant to be applied when we have data indicating that there is "special cause" variation in the process. That's why it's so important to be thoughtful and diligent in selecting metrics when you are at the Level Two stage - it's THAT data that will serve as our higher-level "noise detectors" and lead us into performing OPP and then QPM.
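To illustrate what one of those "noise detectors" might look like in practice, here's a rough individuals-chart (XmR-style) check that flags a point outside three-sigma limits. The measure, the data, and the detection rule are assumptions for illustration, not a prescription from the model.

```python
# A minimal sketch of a "noise detector": an individuals (XmR-style) check
# that flags special-cause variation in a Level Two measure. The measure,
# data, and three-sigma rule here are illustrative assumptions.
effort_variance_pct = [5, 7, 6, 4, 8, 6, 5, 22, 6, 7]  # e.g., weekly effort variance

mean = sum(effort_variance_pct) / len(effort_variance_pct)
moving_ranges = [abs(b - a) for a, b in zip(effort_variance_pct, effort_variance_pct[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)
sigma_est = avg_mr / 1.128          # d2 constant for moving ranges of size 2

ucl = mean + 3 * sigma_est          # upper control limit
lcl = mean - 3 * sigma_est          # lower control limit

for week, value in enumerate(effort_variance_pct, start=1):
    if value > ucl or value < lcl:
        print(f"Week {week}: {value}% falls outside [{lcl:.1f}, {ucl:.1f}] - "
              "investigate the special cause before reaching for OPP/QPM.")
```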

Performing the OPP and QPM processes on ALL projects or ALL PAs invalidates their reason for being. Should you perform QPM on QPM?

I like to describe Level Two as a chainsaw, Level Three as a broadsword, and Level Four as a scalpel. Level Five is a laser, but by that point the students are usually looking at me funny, so I stop at Level Four.

Would you ask a surgeon to cut up firewood with a scalpel? Of course not. But you MAY ask him to perform analysis on a tiny fiber that looked suspicious to you as you were analyzing the results of cutting up firewood.

And would he perform that analysis on EVERY log? No, of course not. Only on the ones where the data told him it made sense.

So, coverage is not intended to be 100% - or even close to it - in terms of either projects or PAs. The percentage of coverage is based on the data, not on the need to meet the model's requirements.
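In code form, that data-driven coverage decision might look something like the sketch below. The project attributes, thresholds, and selection rule are hypothetical - the point is simply that your data and stratification drive the selection, not a quota.

```python
# A minimal sketch of data-driven coverage: stratify the project pool and
# select candidates for statistical management. Field names and thresholds
# are hypothetical assumptions for illustration.
projects = [
    {"name": "Alpha", "size_kloc": 120, "risk": "high",   "strategic": True},
    {"name": "Beta",  "size_kloc": 8,   "risk": "low",    "strategic": False},
    {"name": "Gamma", "size_kloc": 60,  "risk": "medium", "strategic": True},
]

def needs_statistical_management(project):
    """Select projects whose size, risk, or business alignment justifies
    the cost of quantitative management - not the whole pool."""
    return project["strategic"] and (project["size_kloc"] >= 50 or project["risk"] == "high")

selected = [p["name"] for p in projects if needs_statistical_management(p)]
print("Apply QPM to:", selected)   # coverage driven by the data, not by the model
```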

In this case, you should always be able to say "the data made me do it!" If you have that to show, the SEI will be plenty happy.

www.broadswordsolutions.com
