Tuesday, March 27, 2007

We'd like to go to Level 5 and want to set up a metrics office. Where should we start?

Dear Appraiser,
We are a small CMMI Level 3 organization located in Cairo, Egypt. We are currently looking into going for Level 5 and would like to set up a metrics office.
Could you please provide me with info on the typical number of people in a metrics office for a 16-person organization, in addition to recommended training/certification, and primary responsibilities, and where I can look for more info?


A metrics "office" is not a bad idea, although I would have to say that (a) for 16 people it may not be necessary, and (b) you may be a little late.

It all comes down to this: how well-oiled is your metrics collection process? If you're at Maturity Level 3 you must have demonstrated at least some level of metrics performance. Do you collect "process metrics" as opposed to "project metrics"? Sub-process metrics are what the focus of Levels 4 and 5 should be. Remember, your goal is to create a set of process "levers" that can be turned, based on metrics information, to solve specific problems.

If your processes generate the appropriate metrics simply as a consequence of being performed, collection may not be an issue for you. As for analysis and decision making based on the results of those metrics, do you really want to entrust that to a "metrics office"? The kind of data you'll be collecting will be strategic, and should be driving you to implement "innovative" solutions (OID) based on granular metrics (OPP/QPM), using CAR to determine the real problems you're trying to solve.
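To make "granular metrics" concrete, here is a minimal sketch of the kind of sub-process analysis QPM/OPP implies: checking a sub-process measure against simple 3-sigma control limits derived from historical performance, and triggering causal analysis when it goes out of bounds. The metric, data, and limits are invented for illustration, not taken from any real process.

```python
# Hypothetical sketch: check a sub-process metric (peer-review defect
# density, defects per KLOC) against 3-sigma control limits built from
# historical baseline data. All numbers are illustrative.
from statistics import mean, stdev

def control_limits(samples):
    """Return (lower, upper) 3-sigma limits from historical samples."""
    m, s = mean(samples), stdev(samples)
    return max(0.0, m - 3 * s), m + 3 * s

# Historical defect densities from prior peer reviews (assumed data)
baseline = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2, 3.7]
lower, upper = control_limits(baseline)

latest = 6.8  # density observed on the current project
if not (lower <= latest <= upper):
    # This is the "lever": an out-of-control signal drives CAR,
    # not just a number on a report.
    print(f"Out of control ({latest:.1f} vs {lower:.1f}-{upper:.1f}): "
          "trigger causal analysis (CAR)")
```

The point is not the arithmetic; it's that the measurement falls out of performing the process, and the analysis tells you when to act.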

The SEI is on a mission to clarify the practices in ML4/5 and has recently put out a series of articles and messages about this subject. You can find these on their website. Reading the informative material in the book helps also.

Bottom line? I sense from the question you're asking that perhaps you should step back for a minute and try to really understand what ML4/5 is about. A "metrics office" is something I often see in Level 2 and Level 3 companies. It's not about a lot of metrics - it's about real information that you never had before, and using that information to drive your business.

www.broadswordsolutions.com

We're a CMMI Level Three Organization and need to use SAP's ASAP method. How does CMMI fit into this mixed methodology environment?

Dear Jeff,

Have you ever done a comparison between the CMMI and SAP's ASAP methodology? We're a Level Three company that needs to implement SAP using ASAP. There seems to be a lack of defined processes and policies. What do you think?

I'm not an expert on ASAP, but since I'm a blogger I get to have an opinion :)

As a Level Three company you know that the CMMI is neither a methodology nor a process, whereas ASAP is a methodology for identifying current-state processes, improving them, and codifying them into a specific product.

While defining your "as-is" business operational processes is part of any process design, the software/systems development process is not represented in a contiguous way in ASAP. Sure, there are components that address planning and requirements management, as well as testing and deployment, but it's not a holistic method. Instead of focusing on process areas like PP, PMC, RSKM, et al., it treats those PAs as ancillary - necessary steps to complete a successful SAP implementation. This often results in "sub-optimization" around business processes like HR, Finance, and Payroll. It doesn't really address development processes at all.

Consider the fact that the SAP "abort rate" (the term the Germans use) is almost 70% for SAP implementations worldwide, and that should tell you they have significant holes in their process. They've tried to fill this by partnering with so-called "Big 4" integrators - who don't seem to have much of a process at all. I know; I used to work for one of them.


The implementation of policies, organizational training, and real tailoring guidelines is absent in ASAP and, as I'm sure you know, these are critical to successful process deployment. And of course, the organizational PAs (OPF, OPD, OPP, OID, etc.) just don't exist in ASAP, nor does the concept of quantitative management and continuous improvement.

As a ML3 company you should have no problem adapting your process using tailoring (assuming you designed your guidelines appropriately) of your RD, TS, and PI processes.

Let me know how it turns out.

www.broadswordsolutions.com

Technology problem fixed!

Dear Readers,

For those of you wondering why your many questions have gone unanswered this week, rest assured it wasn't because I didn't WANT to answer your interesting questions. I did! I did! I can only blame Bill Gates and Co. (yes, he probably wrote the code), whose Windows XP decided to choke and refuse to boot while I was at the SEI's annual conference (SEPG 2007) in Austin. So there I was, with no CDs, no disks, and out of town. And it was a Sunday - so there were few options for repair.

After enduring the "Technowledgists" at CompUSA for six hours I finally took the machine back, removed the drive, backed it up, and then replaced the operating system. Because I have been practicing good Configuration Management lately I was able to get 99% of it back (everything but Windows hidden files), but it took a while. If I had been at my office it would have taken an hour - but here I had to literally graze for hardware.

In any case, I'm back online and I'm going to try to swear off Microsoft for a while, so I have Firefox, Thunderbird, and Sunbird installed from Mozilla. We'll see how long I can hold out :)

www.broadswordsolutions.com

Tuesday, March 20, 2007

How should a maintenance organization interpret TS.SP1.1 (Alternative Solutions Criteria)?

We are a very small software company completely focused on the maintenance of our software product; we do not accept changes to the architecture or the design of the product.

Question 1: Can we interpret TS SP 1.1, "Develop alternative solutions and selection criteria," as the different ways the development team offers the customer to satisfy a requirement through already-implemented options of the product?

The development team says there is no room to evaluate alternatives for design, because there are no changes to the design of the product. If a "new" customer requirement needs a change to the design of the software product, the requirement is rejected. In this case, and only if it is possible, the development team offers different alternatives for obtaining or processing the information using already-implemented options.

Question 2: In this context, can we interpret TS SP 1.1 as the alternative ways of coding the specification? There are always different ways to code the same specification…


Great question! Remember that SP 1.1, "Develop alternative solutions and selection criteria," is about both creating a set of criteria and generating a list of potential solutions. So the first part, the criteria, should be no problem for you. The criteria you set may very well lead you down a path of NOT generating alternatives, or they may guide you to select different approaches to solving the problem.

There is no requirement in this practice that you evaluate different architectures or designs. It only speaks of "solutions," which could be code snippets you re-use, different coding techniques, a library you purchase and install, or the various combinations of options you refer to in your question (these are just some examples).

You can also view SP1.1 as "alternatives to coding" as you mentioned. The spirit of the practice is that you have explored different options to solving the problem using pre-determined criteria.

If this sounds like DAR you're on the right track. It's essentially a DAR process customized for solution design and implementation.
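To illustrate the "DAR process customized for solution design" idea, here is a hedged sketch of a weighted-criteria evaluation: score each candidate solution against pre-defined criteria and pick the highest total. The criteria, weights, and candidate solutions are all invented for the example; yours would come from your own selection criteria.

```python
# Illustrative DAR-style evaluation for TS SP 1.1: weighted
# selection criteria applied to a list of candidate solutions.
# Criteria, weights (importance 1-5), and scores are assumptions.
criteria = {"fits existing design": 5, "effort": 3, "risk": 4}

# Scores 1-5 for each alternative against each criterion
# (higher = better; e.g. low risk scores 5)
alternatives = {
    "reuse existing report option": {"fits existing design": 5, "effort": 4, "risk": 5},
    "export + external script":     {"fits existing design": 4, "effort": 3, "risk": 3},
    "reject (needs design change)": {"fits existing design": 1, "effort": 1, "risk": 1},
}

def total(scores):
    """Weighted sum of an alternative's scores."""
    return sum(criteria[c] * scores[c] for c in criteria)

best = max(alternatives, key=lambda a: total(alternatives[a]))
print(best)  # the alternative with the highest weighted score
```

Note how a criterion like "fits existing design" can legitimately steer a maintenance shop toward already-implemented options, or toward rejecting the requirement - which is exactly the point of setting the criteria first.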

www.broadswordsolutions.com

What's with 'expected practices' and can't I use an alternative?

Dear Jeff,

In the Introduction to CMMI v1.2 course I learned that only the goals are mandatory and the practices are expected. But last week I attended a formal appraisal where we mapped each practice to direct and indirect evidence, so in reality the practices are treated as mandatory.

In order to skip a practice you must apply an alternative and provide the rationale behind your decision.


The question is: have you encountered any situation where practices were skipped in favor of alternative practices? Can you give me examples of doing so?



Ahh... the ol' "SPs are only expected but seem to be required" routine again, eh? You're not the only one to make this observation. Some say it's the SEI's attempt at being funny... hmmm, not sure they succeeded.

While Goals (both SGs and GGs) are "required" and Practices are "expected," it would be impractical to evaluate only the goals, which are pretty broad, without some sort of practices that allow for the collection of detailed evidence to be sure the goals are satisfied.

This "rollup" method appears to be an attempt by the SEI to "standardize" the appraisal method. In other words, a Lead Appraiser may not understand your business and domain well enough to really judge whether or not you are satisfying the "Manage Requirements" goal, so the SPs are a "map" that leads them to a conclusion (note to self: don't engage a Lead Appraiser who cannot demonstrate an understanding of your business and domain).

So the SPs are simply a set of practices we would expect you to perform in order to satisfy the goal. But the SEI was smart enough (and forward-thinking enough) to allow for "Alternative Practices." This was an acknowledgement that maybe, just maybe, there might be another way to approach something that they had not anticipated (unfortunately I've met too many LAs who scoff at the notion that they may not get it... but that's another story :)).

Have I seen these "Alternative Practices" while working with clients? You bet! Sometimes it comes in the form of "compressing" practices from different PAs or Goals into a single practice, and sometimes it's just a brand new idea.

One example is the integration of "Configuration Audits" into the PPQA process. The key here is that neither the practice related to configuration audits, nor the practices associated with PPQA, are performed in a "traditional" manner. The PPQA process has to be robust enough to support it, but it is still an alternative way to perform the process (this can really only work if the methodology is "deliverable based" by the way).

Another is estimating and planning using Agile methods. Remember, in the Agile world, scope is flexible while cost and schedule are fixed. In the waterfall world the inverse is true - cost and schedule are flexible (and usually grow), but scope tends to be fixed. By definition, Agile projects are estimated and planned using releases and iterations as their foundation (not tasks), and "features" are allocated across those iterations in very small "chunks." When the iteration ends, whatever "value" was created is delivered to the customer; the rest is put on a backlog for the next release. This way estimates are limited to: how many people do I have for how many releases?

Of course, the customer has to be on board with this :)
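The "people for releases" estimate can be sketched in a few lines: capacity is fixed by team size and iteration count, and prioritized features are allocated until capacity runs out, with the remainder deferred to the backlog. Team size, velocity, and feature sizes below are all invented numbers.

```python
# Hedged sketch of Agile release planning: fixed capacity,
# flexible scope. All figures are assumptions for illustration.
team_size = 4    # people
iterations = 3   # iterations in this release
velocity = 10    # story points per person per iteration (assumed)
capacity = team_size * iterations * velocity  # fixed at 120 points

# (feature, size in points), highest priority first
features = [("login", 40), ("reports", 50), ("search", 25), ("export", 30)]

planned, backlog, used = [], [], 0
for name, size in features:
    if used + size <= capacity:
        planned.append(name)   # fits in this release
        used += size
    else:
        backlog.append(name)   # deferred to a later release

print(planned, backlog)
```

Here cost and schedule never move; only the feature list does - which is why the customer has to buy into the model up front.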

http://www.broadswordsolutions.com/

Saturday, March 10, 2007

Is a Peer Review Report with no defects reported valid evidence for a SCAMPI A Appraisal?

Dear Appraiser,

Is a Peer Review Report that shows no defects reported valid evidence for an instance in a SCAMPI A appraisal?

Hmmmm. If a tree falls in the forest and . . . .

As far as I can tell, there is nothing in the CMMI model that requires you to actually have defects found during a peer review . . . let me check on that again . . . right, not required.

While it might be unlikely, there are smart developers out there who either 1) don't produce any defects for a peer review to find, or 2) are really good at hiding them.

Of course, a blank piece of paper would be insufficient.

VER guides us to 1) prepare for the peer review; 2) conduct the peer review; and 3) analyze the peer review data. Nowhere in the VER PA does it say "create defects where they don't exist."

The required evidence would prove that the peer review was prepared for, conducted, and its resulting data analyzed. That's not the same as saying that your peer review was sufficient, but the SCAMPI method is "evidence based" and if you produce the appropriate evidence, we trust that you had the peer review and you conducted it with integrity.

Prepare for Peer Review includes things such as: inviting stakeholders, defining roles, distributing peer review material, guidelines, agenda, etc.

Conduct Peer Review includes things such as: having an agenda, ensuring the appropriate people are present, walking through the material in a structured way, taking minutes, following the roles and guidelines, etc.

Analyze Peer Review data includes such things as reviewing the defects, estimating the impact, evaluating the effort to correct the defects, etc.
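The analysis step above can be sketched very simply: summarize whatever defects were recorded and estimate the rework they imply, noting that an empty defect list still produces valid analysis output. The record format and rework figures below are assumptions, not anything prescribed by VER.

```python
# Hypothetical sketch of "Analyze Peer Review Data": tally defects
# by severity and estimate rework effort. A zero-defect review
# still yields analyzable (and appraisable) data.
from collections import Counter

REWORK_HOURS = {"major": 4.0, "minor": 0.5}  # assumed averages

def analyze(defects):
    """defects: list of dicts like {"severity": "major", "desc": ...}."""
    by_severity = Counter(d["severity"] for d in defects)
    effort = sum(REWORK_HOURS[d["severity"]] for d in defects)
    return by_severity, effort

review_log = [
    {"severity": "major", "desc": "missing error handling"},
    {"severity": "minor", "desc": "naming convention"},
    {"severity": "minor", "desc": "stale comment"},
]
counts, hours = analyze(review_log)      # 1 major, 2 minor, 5.0 hours

empty_counts, empty_hours = analyze([])  # zero defects: still evidence
```

Either way, the output of the analysis - not the defect count - is what demonstrates the practice was performed.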

If you're doing these things and you can provide evidence of performance, then you're probably performing VER appropriately.

http://www.broadswordsolutions.com/

Thursday, March 8, 2007

Can VER and CM Audits Satisfy PPQA Requirements?

Dear Appraiser,

Our PPQA processes are currently under a process improvement review cycle in preparation for Level 3. Through some internal discussions, a few of us believe we can satisfy the PPQA requirements - minus PPQA of PPQA - through our Verification (Peer Review) and Configuration Management (CM Audit) activities. What are your thoughts? Any pitfalls with this approach?


- Detroit



It's nice to see a neighbor on the blog! I'm from Detroit (although you wouldn't know it from all my trips to the airport).

Satisfy PPQA through VER and CM Audits? Hmmmm. You might be on to something. As the SEI always tells us, that depends.

First I would want to understand the scope of both your VER and CM processes. VER is normally performed as a "qualitative review" of work products (no, it's not just testing) including both code and all other non-code work products and requires peer reviews. CM, of course, is the infrastructure you use to manage storage and revision control of all your work products (both code and non-code). A "typical" CM audit audits these mechanics . . . are labels created, are versions correct, are the changes controlled?

PPQA, on the other hand, is more complex than it sounds and is more "quantitative." PPQA SP 1.1 tells us to evaluate the process itself to determine if it is appropriate, while SP 1.2 is directed at process performance and work products. If you're a "deliverable-based" organization, and if you keep a detailed CM plan, and if your CM Audits include those things (and all the details associated with them) AND your work products faithfully reflect process execution, then I could see how SP 1.2 might be satisfied. But what about 1.1 (and 2.x as well)? How would you evaluate the process itself, as well as "provide objective insight"? I suppose if your CM process adds in all of those "features" (remember, this is a "process product" we are talking about) then it could satisfy the PPQA goals, but at that point you have a PPQA process, don't you?

What about objectivity? Are your CM and VER processes performed objectively (i.e., not by anyone who might want to influence the results)? If not, then the "spirit" of PPQA would not be satisfied. The VER Peer Review goal is, by definition, not objective because it is performed by "peers."

I think you're on the right track from the perspective that you're seeking to "combine" process areas to be more process-efficient, thereby reducing overhead. I like that idea and encourage you to do more of that.

Pitfalls? The biggest one of all is to misinterpret the complexity, effort, and uniqueness of PPQA, and thereby underestimate its scope. It's by far the greatest cause of appraisal "failures" according to the SEI. As a Lead Appraiser I can confirm their findings.

Your best bet is to ask an expert to evaluate your VER and CM processes independently and determine whether you're interpreting PPQA appropriately. You don't want to find out about a PPQA weakness at your Level 3 appraisal.

Best of luck!

www.broadswordsolutions.com

Friday, March 2, 2007

We're a small company. Do we need metrics for GP2.8 (Monitor and Control the Process) for every Process Area?

Dear Appraiser,

I have a question about what artifacts might constitute good evidence for GP2.8. The model calls for measurements, but small organizations often lack the need or the manpower to create, collect, and maintain them. Also, in small organizations (around 30 people, as is the case with ours), the performance of processes can be analyzed through direct observation of those responsible for the process. A measure for each PA sounds a little bit like overkill.

I know PMC relates to GP2.8. Does that mean a regular status meeting to discuss the status of the process will suffice for the practice?



Your question often comes up with small organizations seeking to adopt CMMI. As it happens, small is my specialty.

While GP2.8 tells us to “Monitor and Control the Process” it doesn’t give us any requirements beyond that. While the informative material discusses metrics as an example, you’re correct in that it is permissible to adopt alternative methods. Remember, Goals are Required, Practices are Expected, and everything else is just information.

Two questions to consider:

1. Is there a plan to manage and maintain the process itself, as well as perform it on a project?

2. How much information does your organization need to ensure that plan is on track?

If you have a plan, then monitoring it means reviewing the milestones, deliverables, work products, and schedule to be sure the plan is on track. Must you have metrics for that? For a small organization I would argue that it may be overkill when you can just have a simple once-a-month status meeting to cover it. Are there outputs from that meeting? Minutes, defects, corrective actions, assignments? If so, you could point to those items as evidence that GP2.8 is being performed.

If you conduct PPQA audits on your projects, what are you gathering? If you're gathering data about process performance, are the people responsible for the process looking at it? If so, you could point to this data for GP2.8 also.

Do you conduct Configuration Audits (hint . . . you could combine this with PPQA and save a step)? If so, you have outputs from that for all processes don’t you? The work products for every process are part of executing that process. Isn’t that performing GP2.8 also?

Do you have an SEPG or process steering group? Do they meet to review the status of the process and how well it is performing? If so, you could point to this as some evidence that GP2.8 is being performed also.

A document or metric is often the way organizations react to GP2.8, but it's only the most obvious answer. You can get a lot more creative and combine processes to get a "two-fer" so you don't have to add another document or metric. Give the questions some thought and you'll have your answer.

http://www.broadswordsolutions.com/

Which comes first in Technical Solution - the Architecture or evaluating Alternatives?

Dear Appraiser,

In the Technical Solution process area, where should I start? Create the application architecture and then develop alternatives for implementation, or find alternative solutions and then define the architecture? Is an alternative solution considered architecture? I don't understand this SG at all!


The CMMI isn't a process; it's a process model. The practices are not presented in sequence (although it sometimes appears that way). You may perform the SPs in any order that makes sense for your business, and sometimes you may even repeat SPs at a later point in your project.

Every organization approaches this slightly differently. However, you might consider both places as opportunities to evaluate alternatives.

As a software engineer, I would want to consider alternative software architectures before I considered alternative approaches to building the application. Is this J2EE, COM+, SOAP, or some asynchronous messaging architecture? Each choice dictates a second level of alternatives. Since the architecture is the foundation, and often dictates the way an application is developed, I would start there. Architecture is all too often overlooked and treated as an afterthought - causing all kinds of problems when it comes time to maintain, extend, or modify the application in the future.

In terms of the CMMI, both are good candidates for evaluating alternatives. Some examples in software engineering: server applets vs. browser-based, fat vs. thin client, OO vs. procedural, and so on. In a world where COBOL still dominates when lines of active code are counted, anything is possible :)

www.broadswordsolutions.com