Predictive Management versus Empirical Control…and why our sensors save us from chaos

This article explains the difference in approach between predictive project management techniques and agile/iterative techniques, which use extrapolations of past performance measurements as a guide to future performance. I’ve tried to keep the explanation simple and have used a very mechanical metaphor for a Scrum team’s operation!

What is a predictive project?

This is a type of project where enough is known about the requirements, and about the work required to satisfy them, for delivery to be planned in a predictable way. In project management terms this means:

  1. The full scope can accurately be described at the start of the project and captured in a detailed work breakdown structure (WBS).
  2. Work packages from the WBS can be elaborated into a definitive series of tasks, each of which can be estimated accurately in terms of duration.
  3. The results of the task definition and estimation processes can be collated into a schedule that also shows the dependencies between tasks. This is known as big up-front planning, and the result is typically a Gantt chart that is used to orchestrate the execution of the project. A Gantt chart looks like a series of waterfalls, which is why big up-front planning projects are referred to as ‘Waterfall’ executions.
A Gantt chart usually looks like a series of waterfalls

Good Waterfall projects can achieve up to 90% predictive accuracy, but only when the scope is clear-cut and the execution risks are low (or when the budget and execution timelines have been padded with adequate contingency allowances to counter the risks, which works but is not an economically efficient approach).
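To make the ‘big up-front planning’ idea a little more concrete, here is a minimal sketch of how estimated tasks and their dependencies roll up into the kind of schedule a Gantt chart visualises. The task names, durations and start date are purely illustrative, not taken from any real project.

```python
from datetime import date, timedelta

# Invented work packages: each task has an estimated duration (in days)
# and a list of predecessor tasks that must finish before it can start.
tasks = {
    "design": {"duration": 10, "depends_on": []},
    "build":  {"duration": 20, "depends_on": ["design"]},
    "test":   {"duration": 8,  "depends_on": ["build"]},
    "deploy": {"duration": 2,  "depends_on": ["test"]},
}

def schedule(tasks, project_start):
    """Forward pass: a task starts once all of its predecessors have finished."""
    start, finish, done = {}, {}, set()
    while len(done) < len(tasks):
        for name, task in tasks.items():
            if name in done or not all(d in done for d in task["depends_on"]):
                continue
            begin = max((finish[d] for d in task["depends_on"]), default=project_start)
            start[name] = begin
            finish[name] = begin + timedelta(days=task["duration"])
            done.add(name)
    return start, finish

start, finish = schedule(tasks, date(2024, 1, 8))
for name in tasks:
    print(f"{name:6s} {start[name]} -> {finish[name]}")
```

Of course, this schedule only holds if the durations and dependencies can be trusted, which is exactly the condition described above.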

So what do you do when the scope is not known up-front or the activity durations cannot be estimated?

Agile methodologies are selected in environments where the scope of the project is hard to predict. This may be for several reasons, some of which are listed below:

  • There is a general lack of maturity around the requirements or delivery roadmap from the customer, as it is a ‘new thing’
  • There may be considerable technical risk; perhaps it’s the first time the technology has been used by the team
  • The customer may not know how to proceed until they’ve seen a simple tangible solution.

In other cases the predictability problem is exacerbated by an inability to estimate task durations with any real confidence.

Ok, I know I haven’t answered the question yet…keep reading

Enter ‘empirical control’ – something we do all the time

In the real world where we live and operate there are many contexts in which the definitive relationship between parameter x and parameter y is unknown, i.e. a formula may exist but we don’t know it! At the same time we are able to get by based on what we have learned over the years (empirical data = what we have learned through observation).

As an example: “If I leave home at 8:35 am I am fairly confident that I will get to work just before 9 am”. It is plausible that an accurate predictive model exists, one that takes into account traffic patterns, the distance to work, the car’s performance, traffic light sequencing and so on. It’s not usually worth the effort finding it, simply because the empirical data offers sufficient predictability: I drive to work every day and I’ve found, with a fair degree of confidence, that it typically takes around 25 minutes. In that example:

  • Parameter x: Time I left home
  • Parameter y: Time I arrived at work

We do this sort of thing regularly.
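To show how little machinery this kind of empirical prediction needs, here is a small sketch. The commute times are made up and simply stand in for the observations accumulated over many drives to work.

```python
from statistics import mean, stdev

# Made-up commute times in minutes, standing in for the empirical data
# collected simply by noting when I left home and when I arrived at work.
observed_minutes = [24, 27, 23, 26, 25, 28, 24, 26]

typical = mean(observed_minutes)   # the "around 25 minutes" figure
spread = stdev(observed_minutes)   # how much it varies from day to day

# A cautious empirical prediction: the typical trip plus a margin of two
# standard deviations, with no theoretical traffic model in sight.
plan_for = typical + 2 * spread
print(f"Typical trip: {typical:.0f} min; allow about {plan_for:.0f} min to be safe")
```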

Empirical predictions are derived from experiment and observation rather than theory.

If there is no way of predicting the outcome of an initiative: run an experiment, gather lots of data frequently and use this data to predict future trends.
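In project terms, “gather lots of data frequently and use this data to predict future trends” can be as simple as fitting a straight line through regular measurements and extrapolating it. The sketch below does exactly that for a series of invented ‘work remaining’ measurements; an ordinary least-squares fit is just one reasonable choice here, not part of any particular framework.

```python
# Frequent measurements of work remaining (numbers invented for illustration).
days = [1, 2, 3, 4, 5, 6]
remaining = [120, 112, 101, 95, 84, 76]   # e.g. story points left

n = len(days)
mean_x = sum(days) / n
mean_y = sum(remaining) / n

# Ordinary least-squares slope and intercept of the burn trend.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, remaining))
         / sum((x - mean_x) ** 2 for x in days))
intercept = mean_y - slope * mean_x

# Extrapolate the trend to the day on which the remaining work hits zero.
finish_day = -intercept / slope
print(f"Burning about {-slope:.1f} points/day; projected finish around day {finish_day:.0f}")
```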

A Scrum Team viewed as an experiment…

To make better sense of a project run on performance data gathering and the related controls, I’ve tried to illustrate the workings of a Scrum Team using a mechanical process as a metaphor. On the off chance that you don’t already know, Scrum is an agile delivery framework.

Scrum as a mechanical process experiment

The image shows:

  • A product backlog shown as a funnel of user stories or user requirements
  • The sprint is shown as a big drum where selected user requirements are converted into completed, demonstrable functionality through an imaginary fixed-duration combustion process (I couldn’t resist the pun on a ‘burndown’…snigger).
  • The Sprint scope throttle valve controls how many user stories will be tackled in the Sprint. This is a control mechanism.
  • We can track the rate at which the product backlog is being depleted by monitoring the product backlog burn rate. This is a measurement sensor.
  • We can track the rate at which the Sprint backlog tasks are being burnt. This is a measurement sensor.
  • If the sprint burn rate sensor suggests that the work of the Sprint will not be completed by the end of the allocated time, the “oh dear, we bit off more than we can chew” return line allows Sprint scope to be returned to the Product Backlog. This is a control mechanism (see the small sketch after this list).
  • The actual effort is tracked, sometimes even at task granularity. This allows the team to understand their sprint planning accuracy as well as giving an indication of the budget burn rate. Note that single Scrum teams do not scale up very well* (you can’t add people easily and seven is considered the ideal team size), so this is a measurement sensor (as opposed to a control mechanism).   * I’ll handle the challenges of scaling Scrum in subsequent posts.
  • Finally, the “Completed functionality buffer” allows completed functionality to be staged as product releases, which may or may not coincide with the end of each Sprint. This, together with the “Finished product release valve”, is a control mechanism.
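To show how one of these sensors might drive the return-line control, here is a minimal sketch. The sprint length, committed points and burn figures are all invented; a real team would pull these numbers from its tracking tool rather than hard-coding them.

```python
# A hypothetical sprint burn-rate sensor: compare the actual burn rate with
# the rate needed to finish the committed scope within the sprint.
sprint_length_days = 10
days_elapsed = 4
committed_points = 40
completed_points = 12

actual_rate = completed_points / days_elapsed          # points per day so far
projected_total = actual_rate * sprint_length_days     # where the sprint will land

if projected_total < committed_points:
    shortfall = committed_points - projected_total
    print(f"Sensor alert: roughly {shortfall:.0f} points look unachievable; "
          "consider sending scope back to the Product Backlog via the return line.")
else:
    print("Sprint scope looks achievable at the current burn rate.")
```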

So to wrap up: when there is flux around the requirements, or the delivery platform/technology is new (or both), up-front planning cannot be done with a great deal of confidence. Under these circumstances it makes more sense to use an agile delivery mechanism that runs the project as a controlled experiment. We put various sensors into this ‘experiment’ to capture performance data. This data helps us to predict the future performance of the project as well as alerting us quickly to any corrective actions that may be required (saving us from chaos).

In the next post, I’ll explain how to predict the end of a flexible scope/agile project.

