I know little about CMM levels as I am new to this industry. I went through the different levels and observed that all of these levels relate only to the product and its quality.
I want to know why resources or employees are not considered in these levels, since the products are developed by those resources.
Ahhh, the smell of fresh blood! Welcome to CMMI and thanks for the post!
It took me years to really understand the model (some would say I have a long way to go!) but trust me when I say that it does address resources and people in many ways - although it's not always as obvious as it could be.
According to the CMMI, a "project" involves the application and coordination of resources (people, time, money, hardware, software, tools, data, etc.) toward the achievement of a goal or objective (like the creation of a product). How do you apply these resources? You apply them through the use of processes and procedures, right? And those resources are, in turn, controlled and monitored by those same processes.
There is also a "version" of the CMMI called the People-CMM, or P-CMM that specifically addresses "people" related issues, but it is not yet commonly adopted in engineering or IT organizations. Cheers!
http://www.broadswordsolutions.com/
Got questions? Get answers! Thoughts from an Agile CMMI Lead Appraiser by Jeff Dalton.
Wednesday, October 31, 2007
Must we use historical data for estimating?
This is the first time I am writing to you directly; until now I've just been reading your comments and feedback on the group.
I have two questions on which I'd appreciate your comments, because the answers will help me build sound concepts of CMMI v1.2. Let me give you a brief history of our organization. We are striving to become an ML 2 organization and are getting ready for our Class A appraisal.
Since we are striving for ML 2, we are in a data-collection phase; this data will later serve as historical data for estimating the effort, cost, and size of projects. Currently we estimate size using the Function Point (FP) technique and effort using the PERT technique, but no relationship between size and effort is established in our current OSSP.
Nor is such a relationship required by the subpractices of SP 1.2 of PP. To satisfy SP 1.2 and SP 1.4 of the PP process area, is it enough to show that we are in the data-gathering phase and that no direct relationship between size and effort data exists yet, or must we show the relationship and show that people consult that historical data when making project estimates?
My second question is about training: trainings are planned and conducted, and people do their routine work after being trained. Kindly let me know how the adequacy of training is determined for GP 2.5. How much detailed evidence is required for an organization that is going for ML 2?
Is it justifiable if an LA fails an organization just because:
1. He is unable to determine the adequacy of the training; or
2. No direct relationship between size and effort data is established, and people are using expert judgment that is not supported by historical data?
Your answers will be really appreciated and will help me and my organization implement our processes better.
You've obviously been reading up on the model. You must be having trouble sleeping!
On the question about using historical data to estimate, and what is required by the model, I would ask this: if it were required by PP, does that mean companies that DON'T have historical data cannot achieve CMMI ML2? Of course not. I don't think that was ever the intent of the model. There are many ways to estimate the project, and using historical data is only one of them. The fact that you are developing a historical dataset is a good thing, and will only strengthen your performance and appraisal ratings in the future.
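For readers unfamiliar with the PERT technique the questioner mentions, it is a simple three-point calculation, and a perfectly reasonable way to estimate without a historical dataset. A minimal sketch (the function names and numbers are my own, not anything prescribed by the model):

```python
# Hypothetical illustration of a PERT (three-point) effort estimate,
# one of the many acceptable estimation methods discussed above.
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT weighted mean: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """Rough PERT standard deviation: (P - O) / 6."""
    return (pessimistic - optimistic) / 6

# Example: a task estimated at 8 / 10 / 18 person-days.
effort = pert_estimate(8, 10, 18)   # -> 11.0 person-days
spread = pert_std_dev(8, 18)        # -> ~1.67 person-days
print(f"Expected effort: {effort:.1f} days (+/- {spread:.2f})")
```

Logging the three inputs and the resulting estimate for each task is itself a form of record-keeping that will feed the historical dataset over time.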
As to the training, we expect you to provide evidence that the right people were trained in the right process areas. There is no need for everyone to take all the training, or all levels of training. At ML2 there is no requirement for a formal "training capability" as there is at ML3, so 1:1 training, mentoring, presentations, or CBTs would all qualify as long as you keep records of people having attended or completed the training. Keep training sign-in sheets and the training materials and you should meet the requirements of GP2.5.
As for an LA "failing" an organization (technically speaking there is no "failing") because he is "unable to determine the adequacy of the training", I would say that this would be valid if he truly could not evaluate whether you had training or not. He should not be evaluating "goodness," only that you trained people appropriately for their roles.
Same for the next question. There is nothing in the model that requires you to use "historical" data for estimating size. That said, "expert judgement" may or may not be appropriate depending on what you mean by that.
Either way, don't let an appraiser tell you how to run your business or claim that there are prescriptive ways to perform a process in the CMMI - there are not any. It's strictly a definition of "what" not of "how" and the subpractices and work products are only suggestions.
Best of luck to you.
http://www.broadswordsolutions.com/
Tuesday, October 30, 2007
Can you validate our Control Limit formula?
I follow your site regularly and find your thinking very refreshing. Thanks for sharing a wealth of knowledge with the community. I have a question that has been bothering me for a long time.
While calculating Control limits in an SPC chart, I have seen several organizations use the following formula:
Center Line (CL), Lower Control Limit (LCL), and Upper Control Limit (UCL):
LCL = Mean - 3 * Sigma
UCL = Mean + 3 * Sigma
CL = Mean
What do you think?
What do I think? I'm thinking this gives me a headache!
I'm not a statistician, but I do write a blog, so of course that makes me an expert at everything! (I love the Internet!). Your formula is similar to the formula taught in the SEI's CMMI Understanding High Maturity Practices class and also in the best text I know on this subject, "Measuring the Software Process" by William A. Florac and Anita Carleton.
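For the curious, here is a minimal sketch of that formula in code. This is an illustration only (the data is invented), and note that Florac and Carleton also describe charts, such as the XmR chart, that estimate sigma from the average moving range rather than from the raw standard deviation used here:

```python
from statistics import mean, pstdev

def control_limits(samples, k=3):
    """Center line and k-sigma Shewhart control limits for a data series."""
    cl = mean(samples)
    sigma = pstdev(samples)  # population standard deviation of the samples
    return cl - k * sigma, cl, cl + k * sigma

# Example: defect-density measurements from recent builds (invented data).
data = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]
lcl, cl, ucl = control_limits(data)
signals = [x for x in data if x < lcl or x > ucl]  # points outside the limits
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}  signals={signals}")
```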
You'll need to read the book to gain a thorough understanding of the theory - but I warn you, save it for a night you can't sleep!
http://www.broadswordsolutions.com/
Why can't we jump right to CMMI Maturity Level 5?
We've been told that an organisation should not go directly for CMMI level 5. Could you please confirm this and also tell us if any statement is available to substantiate the above?
Also, please clarify why we need to certify at Level 3 first and then target to Level 5.
Wow, we've got some over-achievers out there!
Like many things in the CMMI model, the advice you heard includes the words "should not" and not the words "can not." There is no policy from the SEI that explicitly says "you cannot go directly to ML 5," so, by definition, it is possible to do so.
Should you do it? That's a completely different discussion. I believe it's a mistake to do so because organizations can't change overnight, it takes time to absorb change, and you will often get a much better return on investment if you take a measured, gradual, and iterative approach to process improvement.
HOWEVER (and this is important), the SEI has declared that companies that pop up at ML 5 for the first time, having never had a formal appraisal in the past, will be subject to additional auditing and scrutiny. If they find that your appraiser should not have awarded you ML 5, they will revoke it. This is not a problem if you really are performing at Level 5 . . . but there are numerous examples of this happening where the appraisal was, in fact, not valid.
Beware of the Lead Appraiser who recommends this type of approach. In my opinion, someone who advises you to jump right to ML 5 is either unethical or, even worse, ignorant (by far the bigger problem). Achieving ML 5 is (and should be) extremely difficult, and while its effects can be very positive, it is a lengthy journey.
The answer to your ML 3 question is the same: not required, but highly recommended. ML 3 provides you with a foundation for consistent process performance, something required to achieve ML 4 and ML 5.
If I were working with your company I would want to know why this is important to you. If it's to save on appraisal costs, then this is not the solution. Appraisal costs are roughly 10% of your overall process improvement effort - hardly worth even worrying about.
Best of luck.
http://www.broadswordsolutions.com/
Monday, October 29, 2007
How can CMMI work for small projects?
We are currently CMMI Level 2 for our deliverable products to customers. However, we tend to work on many very small internal projects (1 person, less than one month delivery) to support testing of products. As we gear up for Level 3, the thought of having a gazillion artifacts to cover 18 process areas seems like overkill for these projects. Is there some point where an organization can say a project is too small for CMMI compliance, as long as there is some minimal process in place for this micro project (requirements, test, estimated schedule, ...)?
This is an excellent (and common!) question.
Let’s start with the concept of “tailoring.” Tailoring is introduced in ML3 within the OPD and IPM process areas. The net on tailoring is that each project should adapt the process (or “tailor it”) for their own specific situation.
As CMMI is an “organizational” model, leaving projects out of scope can be problematic. At the same time, regardless of the model you use, do you really want individual engineers working in an ad-hoc way? What if they come up with something that is really cool and can be used by the rest of the company? What if their one-person project has a high risk profile?
So, the answer is not in having them create a “gazillion” artifacts (technical term) but having them tailor the process so it makes sense for their project. How about 1/2 a gazillion?
A couple of other points:
There is nothing in the model that requires a gazillion documents, and I hope you were not guided in that direction by an LA or other CMMI consultant. You owe it to your practitioners to evaluate the impact on them and to reduce, consolidate, and lighten the documentation to a reasonable weight. Have you considered alternatives such as digital cameras, whiteboard photos, cards, or databases to reduce the load? Sometimes a single work product can (and should) satisfy multiple practices in the model.
Just to clarify, there are 18 PA’s in ML3, but not all of them require projects to create work products. OPD and OPF, for instance, are organizational, not project focused. So are parts of IPM. Is SAM applicable? That may be another that does not require much documentation. Same applies for parts of MA, VAL, and VER (SG1 in each one).
And of course, many of the GP’s don’t require project artifacts either.
Best of luck to you!
www.broadswordsolutions.com
Please help! Our Appraiser doesn't understand us!
Please help!
We are due to have our CMMI Level 3 final appraisal soon and have run into a problem. We had a gap analysis done and employed a consultant some time ago, who came up with a really good estimation process that has helped our business no end over the past few months. At the pre-appraisal, the representative from the lead appraiser's company tried to tell us that all CMMI estimation had to be done by size. We had already investigated this point with other CMMI consultants, who clearly told us that this was true under CMMI v1.1 but not under v1.2; we could use size, but other methods (e.g., effort, as we use) are perfectly acceptable for CMMI Levels 3, 4, and 5. With this knowledge we did not want to change a system that really worked for us, so we presented the evidence and discussed it with the person from the lead appraiser's company until he finally 'admitted' (off the record) that we were correct, but said he could not guarantee that another lead appraiser would be aware of this subtle change.
We took the attitude that if we were meeting the standard and the method was benefiting our company, it was fine to continue with it. Our lead appraiser is now saying that our ('non-size') estimation method will mean that we will not pass CMMI Level 3.
I have emailed the SEI, who initially said that 'if the practice worked for us and did not adversely affect other areas' then it was acceptable. However, they stopped short of saying that our appraiser's information was out of date. We have emailed them again, but they have said that they cannot get into a dispute between an individual lead appraiser and a company. So basically:
1. Is it correct that estimation does not have to be based on 'size'?
2. If so, given that we have a contract with the lead appraiser, how can we get the SEI to educate the lead appraiser correctly, since he and his company are too stubborn to admit that their information is out of date and that they are wrong on this point?
Boy, everybody has an opinion!
It sounds as if your LA is trying to be more than a wee bit prescriptive about how you use “size” to conduct estimates. Sometimes SOME appraisers don’t know where their advice ends and their appraisal starts. If someone is telling you that anything in the CMMI “has to be done . . .” a certain way, I would view that as a reason to look for an alternate appraiser. Who knows what else he is “interpreting” for you? The vast majority of LA's are reasonable, logical individuals - there are a few who are not.
There is a single goal in Project Planning for estimating: SG 1, “Establish Estimates.” This is the REQUIRED component. Now, in order to satisfy this goal, the model EXPECTS (read: doesn’t require) that you’ll determine what the scope is, what work products are going to be created and what their components are, and how much it’s going to cost (in time and money). If you don’t do those things, you must provide evidence of an alternative that gets you to a reliable estimate (and one that has “helped our business no end over the past few months” sounds pretty reliable). What you are describing could be an alternative to "size" for estimating.
I assume your LA is referring to the second practice, SP 1.2, “Establish Estimates of Work Product and Task Attributes.” If he were to read the informative material, it goes on to say “Size is the primary input to many models . . .” (notice it doesn’t say “all models”), and then “ . . . can also be based on inputs such as connectivity, complexity, and structure.” As I read it, this means there are other methods, perhaps yet to be determined (or understood by the SEI), by which one can come to an estimate. The options in the list are examples, not the only choices. Alternatives are allowed if they satisfy the goal.
You didn’t provide detail on HOW you were producing estimates, but, even if “size” were required, there are many elements of “size.” In the Agile world, time (manifested in Releases and Iterations) is a “size” attribute, as is number of features. Isn’t effort, if viewed as dollars, hours or amount of time, a calculation of size? I think it could be.
This type of “over-interpretation” of the model is a pet peeve of mine. I know of too many companies who have soured on the entire CMMI experience because there are a few appraisers out there with either no imagination, limited or no experience in software engineering, or minds so closed that they think you need to do it “their way” for it to be valid. Well, I’m here to tell you . . . there are many ways to establish estimates. The souring is too bad, because the CMMI can truly be a powerful and liberating model.
Bottom line on the LA is this: the SEI WON'T get in the middle (nor should they). LA's are "authorized" by the SEI, and therefore, we are permitted to deliver these appraisals on their behalf. This method works for the vast majority of appraisals . . .
So, what should you do? If you truly believe he won't/can't understand your business, it's time to face that fact and quickly disengage yourself from him - and find another one who will work WITH you to understand how you run your business.
Best of luck to you!
http://www.broadswordsolutions.com/
Saturday, October 6, 2007
What is the appropriate coverage for Quantitative Management?
Our consultants are heavily emphasizing the use of regression, ANOVA, etc. as an interpretation of what the CMMI expects with regard to quantitatively managing processes and projects. I don’t disagree that these are very valuable techniques to use when appropriate and when in line with your business objectives (per the CMMI). However, in an organization of 800+ engineers and hundreds of projects, I propose some sort of stratification of the project pool that characterizes projects with respect to size, priority, relevance, risk, alignment with business goals, etc. as a basis for the application of statistical management. To require all projects to use predictive models in order to satisfy QPM is a rather narrow interpretation from my perspective. This really drives organizations into the “checking the box” mentality, which, ironically, is the opposite of the direction the SEI intends. I believe we will ultimately converge on solid ground, because logic and rational thinking usually prevail when mutual understanding is achieved, but the SEI may be swinging the pendulum too far in an effort to strengthen appraisal validity in the high maturity areas. Your thoughts?
Sometimes my posts can be a little long-winded, and with your question being a long and complex one, I could do the readers a favor by just saying "I'm with you" - but let me go a bit further.
You're correct in stating that the methods and techniques should be appropriate and tied directly to the goals and objectives of your business - so that's a good indicator that you're on the right track. That said, there are unlimited approaches you could take to capturing, analyzing, and reacting to the data and the methods you've mentioned are but a few. There really is no right answer. Anyone who tells you otherwise just doesn't get it at all.
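As one illustration of how lightweight such a technique can be, here is a hypothetical sketch of simple least-squares regression relating historical project size to effort, one of the many valid approaches (the data and function names are invented for illustration):

```python
# Fit effort = slope * size + intercept from historical (size, effort) pairs.
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Invented historical data from completed projects: size in function
# points, effort in person-days.
sizes  = [120, 200, 150, 300, 250]
effort = [60, 95, 70, 150, 120]
slope, intercept = fit_line(sizes, effort)
predicted = slope * 180 + intercept  # effort forecast for a 180-FP project
print(f"Predicted effort: {predicted:.1f} person-days")
```

Whether such a model is appropriate still depends on your business objectives and on whether the underlying process is stable enough for the historical data to be predictive.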
The bigger, and more important part of your question seems to apply to what I call "coverage." I don't believe there is ANY guidance in the model, nor did the SEI intend for there to be, that instructs us to apply statistical techniques on ALL projects or even ALL process areas. On the contrary, OPP and QPM are ONLY meant to be applied when we have the data to indicate that there is even a "special cause" variation in the process. That's why it's so important to be thoughtful and diligent in selecting metrics when you are at the Level Two stage - it's THAT data that we're talking about that will be our higher level "noise detectors" that will lead us into performing OPP and then, QPM.
Performing the OPP and QPM processes on ALL projects or ALL PAs invalidates their reason for being. Should you perform QPM on QPM?
I like to describe Level Two as a chainsaw, Level Three as a broadsword, and Level Four as a scalpel. Level Five is a laser, but usually the students are looking at me funny so I stop at Level Four.
Would you ask a surgeon to cut up firewood with a scalpel? Of course not. But you MAY ask him to perform analysis on a tiny fiber that looked suspicious to you as you were analyzing the results of cutting up firewood.
And would he perform that analysis on EVERY log? No, of course not. Only on the ones the data told him made sense.
So, coverage in terms of neither projects nor PAs is intended to be 100% - or even close to it. The percentage of coverage is based on the data - not on the need to meet the model's requirements.
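Your stratification idea can be sketched in a few lines (the attributes and thresholds here are purely hypothetical on my part - the model mandates none of this): characterize each project, then let those characteristics, not a coverage quota, decide which ones get statistical management.

```python
# Hypothetical sketch of stratifying a project pool to select
# candidates for quantitative (QPM-style) management.

from dataclasses import dataclass

@dataclass
class Project:
    name: str
    size_fp: int        # size in function points
    risk: str           # "low" | "medium" | "high"
    strategic: bool     # aligned with a measured business objective

def select_for_qpm(projects, min_size=500):
    """Pick the subset of projects where statistical management pays off."""
    return [p for p in projects
            if p.strategic and (p.risk == "high" or p.size_fp >= min_size)]

pool = [
    Project("billing-rewrite", 1200, "high", True),
    Project("intranet-tweak", 80, "low", False),
    Project("compliance-portal", 640, "medium", True),
]
print([p.name for p in select_for_qpm(pool)])
# → ['billing-rewrite', 'compliance-portal']
```

The selection criteria themselves should trace back to your business objectives - that's the "in line with your business objectives" piece of the puzzle.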
In this case, you should always be saying "the data made me do it!" If you have that to show, the SEI will be plenty happy.
www.broadswordsolutions.com
Does the PPQA group have to be the 'Pain in the A$#' group?
Hi Jeff, nice blog!
I have a question about PPQA and PPQA of PPQA (GP2.9): how to implement these practices without creating a burden or overhead to the organization? I mean: how to make this group add value to the company instead of being the “pain in the a#$ group?"
Hey thanks! It's always nice when the readers don't hate the blog! Also, this is the first post I've had to censor for language. Fun stuff!
What a great question this is! You must have heard that PPQA stands for "Process Police and Quality A#$###*&." Well, contrary to popular myth, this isn't really what PPQA is for. So let's start with my definition.
PPQA, or Process and Product Quality Assurance (and GP2.9 of any process area), exists to HELP us. How about that! It's like the old-fashioned police whose job was "to protect and serve" but somehow has morphed into "profile and pull over." Of course, just like the real police, PPQA is so much more than what is perceived and joked about.
How did this metamorphosis happen? Well, let's go back to the beginning. Why do we even need PPQA? We need it to ensure that what we rolled out as a process is actually working, is helping projects, and is having a positive effect on the business. In order to do that, it must be determined whether or not people are even using the process to begin with. And that's where too many PPQA teams stop! They struggle so much with the compliance piece (mostly because people are not using it) that they're exhausted and never move on to the more important "providing insight" piece.
Providing insight back to the process owners (and SEPG) is the best way for us to learn whether we did a good job rolling out the process.
Whoa! Hold on a minute.
What was that you say? You mean we didn't do a perfect job designing and rolling out the process and now some PPQA puke wants to tell me something I already know! Darn PPQA zealots!
When I hear an executive or engineering manager whining about how people aren't following "my" process, my tingly senses start going off. So I tell them... "hmm, guess you kind of screwed up on rolling out the process, eh bub?"
You see, if we did a great job designing the process, communicating the process, and educating about the process, people would probably use it. If we over-engineered it, designed it poorly, announced it with an email, and said "go forth and be process focused," then we did a poor job.
And only PPQA can tell us that.
So, start with a re-definition of PPQA's role. It's to provide independent insight back to the process owners as to how well the process is working. This is a direct reflection on the process owners and their ability to roll out processes effectively. The SEI wisely realized that we drink our own Kool-Aid, and that can cloud our judgment.
Say that, and the project teams will look at you in a whole different way. Good luck!
http://www.broadswordsolutions.com/
Subscribe to:
Posts (Atom)