What’s Costing Time? CPM vs. Critical Path Analysis

The most recent article on this blog, about the MIT thesis that used critical path drag to optimize manufacturing throughput, generated a number of interesting reactions. It has been very popular, attracting well over 100 views per day and several “Likes” in the LinkedIn discussion forums where I mentioned it. However, some readers were skeptical of the value of such analysis. These skeptics seem to be conflating critical path analysis with CPM: they reject CPM as a worthwhile scheduling technique, expressing a preference for critical chain scheduling or one of the flavors of agile methodology.

So let me try to improve my communication technique: there is an important difference between CPM and critical path analysis!

  • The former is a technique for developing a project schedule, and it is almost always performed up front.
  • The latter is a technique for analyzing the details of any process (such as manufacturing), project, or program, whether up front, during execution, or after completion, with the purpose of identifying, measuring, and perhaps reducing its total duration.

Why should we want to reduce the duration of a process or project? Because, to paraphrase what a really smart guy wrote over 260 years ago: “Time is a whole lot of benjamins!” If we start recognizing that all projects and programs are, as my book emphasizes, investments, then we will quickly conclude that two major factors that impact project investment value are:

  • Scope; and
  • Total duration.

Along with that important but often over-emphasized third constraint of cost, these are the parameters over which project teams have some control, and that project and program managers are paid to manage. And we control the completion date through the critical path – of any process, project, or program!

Whether a project is scheduled using “naked” CPM or resource leveling or critical chain or agile or darts at a dartboard, at the end it still will have an “as-built” longest path (comprising activities, constraints, sprints, stumbles, dropped batons, feeding buffers, schedule reserve, hesitations, and any other delays) that always determines its total length. Surely if time has value (as it does on 99% or more of projects!), then it must be worthwhile analyzing the following (a worked sketch appears after this list):

  1. What items are extending the duration (i.e., have critical path drag)?

  2. By how much?

  3. How much is that extension reducing investment value?

  4. What might we be able to do to reduce that negative impact?
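To make the first two questions concrete, here is a minimal sketch in Python on an invented finish-to-start network of five activities. The brute-force approach (zero out an activity’s duration and see how much the project shortens) is just one way to get the drag numbers, and it is only valid for simple finish-to-start logic like this:

```python
# Invented example data: durations in days, finish-to-start dependencies.
DURATIONS = {"A": 5, "B": 3, "C": 7, "D": 2, "E": 4}
PREDECESSORS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["C", "D"]}

def project_duration(durations):
    """Forward pass: the latest earliest-finish is the project's total length."""
    earliest_finish = {}
    def finish(activity):
        if activity not in earliest_finish:
            start = max((finish(p) for p in PREDECESSORS[activity]), default=0)
            earliest_finish[activity] = start + durations[activity]
        return earliest_finish[activity]
    return max(finish(a) for a in durations)

baseline = project_duration(DURATIONS)
print(f"Project duration: {baseline} days")

# Critical path drag = how much the project would shorten if this activity's
# duration went to zero. Brute force: zero it out and recompute the schedule.
for activity in DURATIONS:
    trial = dict(DURATIONS, **{activity: 0})
    print(f"{activity}: drag = {baseline - project_duration(trial)} day(s)")
```

On this toy network the longest path is A–C–E (16 days), yet C carries only 2 days of drag despite its 7-day duration, because the parallel B–D chain has only 2 days of total float. That is exactly the kind of insight the four questions above are after: the analysis points at where the time is really being lost.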

It doesn’t matter what method of scheduling we used! Even a serial string of sprints, if analyzed, will usually reveal a place where we can shorten the critical path by adding a resource, or dividing the process into parallel streams, or deciding not to include functionality whose value-added is worth less than the time it consumes (i.e., its drag cost). And if someone says that doesn’t happen, how do they know unless they do the analysis and determine which sprints/activities/resources/rework have how much critical path drag?
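To put hypothetical numbers on that last trade-off: if delay reduces the project’s expected value by $10,000 per day, then a feature whose work carries 5 days of drag has a drag cost of $50,000 – and if its value-added is only $30,000, the analysis says drop or defer it. A tiny sketch, with every figure invented:

```python
# Hypothetical figures, purely to illustrate the drag cost comparison.
COST_OF_DELAY_PER_DAY = 10_000  # reduction in project value per day of delay ($)

def worth_keeping(drag_days, value_added):
    """Keep a feature only if its value-added exceeds its drag cost."""
    drag_cost = drag_days * COST_OF_DELAY_PER_DAY
    return value_added > drag_cost

print(worth_keeping(drag_days=5, value_added=30_000))  # False: costs more time than it adds in value
print(worth_keeping(drag_days=1, value_added=30_000))  # True: adds more value than its drag cost
```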

If some item that you need to perform your project is really expensive, wouldn’t you try to see if you could get it for less? Well, how is that any different from using critical path analysis to identify the big drag cost items and seeing if you can perform them for less?

That is part of the beauty of Blake Sedore’s analysis for his MIT Master’s thesis. It’s entirely possible that the manufacturing organization was comfortable with its process, and felt that it was optimized. Then he performed the critical path analysis, identified where the drag was, figured out how to reduce it, and voilà! – throughput and value were increased!

Whatever the scheduling method, critical path analysis has always had value. I remember reading an article over 20 years ago about how Motorola used it to increase throughput on the shop floor of their pager division. But the enhancement of critical path drag computation puts the emphasis where it belongs: not on what can take longer without causing delays (i.e., float), but on what’s causing how much delay. The technique for computing it is straightforward, if somewhat brain-intensive in a complex process or project.
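For the record, the rule in a simple finish-to-start network is this: an activity off the critical path has zero drag; a critical activity with nothing in parallel has drag equal to its own duration; and a critical activity with parallel work has drag equal to the lesser of its duration and the smallest total float among the activities in parallel with it. Here is a sketch of that rule on the same invented network as above – it assumes a single critical path and finish-to-start links, and it reproduces the brute-force numbers:

```python
# Same invented network as before. Assumes finish-to-start links and a single
# critical path; real scheduling tools must handle far messier cases.
DUR = {"A": 5, "B": 3, "C": 7, "D": 2, "E": 4}
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["C", "D"]}
SUCC = {a: [s for s in DUR if a in PRED[s]] for a in DUR}
ORDER = ("A", "B", "C", "D", "E")  # a topological order of the activities

# Forward pass (earliest start/finish), then backward pass (latest start/finish).
es, ef, ls, lf = {}, {}, {}, {}
for a in ORDER:
    es[a] = max((ef[p] for p in PRED[a]), default=0)
    ef[a] = es[a] + DUR[a]
project_end = max(ef.values())
for a in reversed(ORDER):
    lf[a] = min((ls[s] for s in SUCC[a]), default=project_end)
    ls[a] = lf[a] - DUR[a]
total_float = {a: ls[a] - es[a] for a in DUR}

def ancestors(a):
    """Every activity that must finish before a can start."""
    return set(PRED[a]).union(*(ancestors(p) for p in PRED[a]))

def descendants(a):
    """Every activity that cannot start until a finishes."""
    return set(SUCC[a]).union(*(descendants(s) for s in SUCC[a]))

for a in (x for x in ORDER if total_float[x] == 0):  # critical activities only
    parallel = set(DUR) - {a} - ancestors(a) - descendants(a)
    drag = DUR[a] if not parallel else min(DUR[a], min(total_float[p] for p in parallel))
    print(f"{a}: drag = {drag} day(s)")
```

Running it prints A: 5, C: 2, and E: 4 – the same answers as the brute-force version, but computed the way a scheduling tool would, from the floats it already knows.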

A process that is brain-intensive but can add a lot of value – gee, that sounds like just the sort of thing software packages should compute!

Fraternally in project management,

Steve the Bajan
