Using a Waterfall/Agile hybrid model to reduce risk on a massive system go-live

This post has been a long time coming. I’ll start by painting the context to explain the challenge:

  1. Big corporate IT system replacement program
  2. Involves a new architectural paradigm and re-engineered business processes
  3. >200 full time resources on a multi-year development and roll-out cycle
  4. SDLC aspects use scaled up Scrum (> 20 Scrum teams)
  5. Here’s the kicker: due to particular constraints around the architecture and revised business processes, we needed to deploy a critical mass of new architecture and business processes together for the solution to be internally cohesive. In other words, as much as we hated to go this route from a risk point of view, the first push had to be a big bang. We did take smaller atomic subsets live earlier, but there was no getting around the big bang.

Coming into the start of the year, I volunteered to handle the commissioning phase planning and execution coordination. This included the planning for the build-out of the Production host environment that would host the solution as well as the system deployment coordination leading up to the go-live.

Challenge: Our Agile tooling was inadequate for the commissioning phase

In a purer agile model we would be taking small increments of solution delivery live relatively frequently, something like the picture below. Scrum demands that you deploy all the way to operational usage at the end of each sprint. That’s relatively easy to do at a small tech start-up and more complicated at a big corporate where there are various degrees of role separation between the developers and the deployers. Yes, it is an anti-pattern and, all going well, it will eventually be cured by a fully automated deployment pipeline – a subject for another post. For now I’ve drawn it as a three-sprint batch that is buffered as a release.

Small functional increments to Production

In truth, even with the three-sprint buffer, we would have been happy if this were actually the case. I did however mention in the introduction that we were forced to take a large chunk live just for the system to be functional. The real situation looked like this (ouch!):

One big batch needing to go live

The Scrum teams in the SDLC cycle had been continuously deploying to the integration testing environment (“Dev-Test”). The solution then languished there for an extended period of time because it would not be functionally cohesive until partnered with additional business processes to create an atomic whole in the new business paradigm.

With such a big piece of tech needing to be commissioned and a limited deployment window (we still had over a hundred thousand customers in the pilot country, so the allowed downtime was a window of a few hours), Sprint planning and post-it notes were not going to suffice as the mechanism to pull off this execution. We would not be able to see the constraints transparently: the team was too big and the tech too complicated for people to hold the dependencies in their heads.

Step 1: Create an execution model using predictive planning techniques (enter: Gantt)

Unlike the SDLC process which is creative and iterative, we needed the deployments to be utterly predictable: the business risk was high and the short deployment window was going to be unforgiving to execution glitches.

We needed to appropriately model the data migration and solution deployment sequence with full transparency of all dependencies. To optimize the execution time, we also needed transparency of the critical path (spending time tuning anything other than the critical path would be a waste!).

Personal disclaimer: I don’t have any default preference between project management methodologies. Like anything in engineering: choose the right tool for the job and if you understand many techniques then you can leverage different approaches to solve the problem at hand 🙂   

…So after a long break I blew the dust off MS-Project and got stuck in, working closely with all the propeller-heads to create an execution model that satisfied the various constraints that became apparent.

Deployment Gantt: Models dependencies and provides critical path visibility

Once we could see the critical path, we were able to check that the allowed deployment window was indeed feasible and find ways of tuning the execution time-wise. The final sequence, including once-off data migration activities, involved over a hundred atomic tasks that had to be executed across 30 or so people in a few hours (and yes, that’s after automation…).
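For the curious, the critical path mechanics are simple enough to sketch in a few lines of code. This isn’t our actual deployment model, just a minimal illustration with made-up task names and durations:

```python
# Minimal critical-path sketch: tasks with durations (minutes) and
# predecessor lists. Earliest finish times are computed recursively;
# the critical path is the longest dependency chain.
# Task names and durations are illustrative, not the real deployment plan.

def critical_path(tasks):
    """tasks: {name: (duration, [predecessors])} -> (total_minutes, path)."""
    earliest = {}          # memoized earliest finish time per task
    best_pred = {}         # predecessor on the longest chain into each task

    def finish(name):
        if name in earliest:
            return earliest[name]
        duration, preds = tasks[name]
        start = 0
        for p in preds:
            if finish(p) > start:
                start = finish(p)
                best_pred[name] = p
        earliest[name] = start + duration
        return earliest[name]

    end = max(tasks, key=finish)           # task that finishes last
    path = [end]
    while path[-1] in best_pred:           # walk the chain backwards
        path.append(best_pred[path[-1]])
    return earliest[end], list(reversed(path))

plan = {
    "stop services": (10, []),
    "db migration":  (90, ["stop services"]),
    "deploy app":    (45, ["stop services"]),
    "smoke tests":   (30, ["db migration", "deploy app"]),
}
total, path = critical_path(plan)
# 130 minutes via stop services -> db migration -> smoke tests
```

Tuning “deploy app” here would be wasted effort: only the db migration chain moves the finish time.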

Step 2: Create a view for the Team that abstracts away the planning complexity (enter: Trello)

In manufacturing operations, shop-floor workers are interested in three things:

  1. What to make
  2. When to make it
  3. How to make it

i.e. the people on the floor don’t need to be bothered with full supply chain visibility or plant flow constraints (that’s why you buy finite scheduling software). Taking this philosophy into the challenge at hand: I had the answers to the three questions above but I needed to present it to the team in a way that was:

easy to access, showed (1), (2) and (3), and provided real-time transparency of where we were in the sequence (we had teams in separate geographic locations). I’d been using Trello for some teams that we were coaching and was impressed with its slick online Kanban features.

Porting from MS-Project to Trello

There was an initial challenge of porting from MS-Project to Trello, but that turned out to be a breeze using Excel as a bridging mechanism. I would typically copy the “Task Name”, “Duration”, “Start” and “Finish” columns from MS-Project (they are all conveniently next to each other on the default Gantt view)…

Copying tasks from MS-Project

…and paste all tasks into a spreadsheet. Several things happen in the spreadsheet:

  1. Delete parent tasks
  2. Concatenate the start time, description and task duration to come up with a better Trello task name (a simple Excel formula). I also deleted leading spaces created by MS-Project’s indents.
  3. Reorder the tasks by start time (oldest to newest)

An excerpt is shown below. The “Trello name” is the generated name in the far right-hand column.

Excel bridging between MS-Project and Trello

Once the previous steps were done (they only take a few minutes), the entire “Trello name” column can be copied and pasted into Trello (see screen grab below).
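If you’d rather script the bridging than do it in Excel, the same three steps can be sketched in Python. The column names and the generated card-name format here are just my assumptions based on the manual process above:

```python
# Sketch of the Excel bridging step in code: take exported MS-Project rows,
# drop parent (summary) tasks, strip the indentation MS-Project adds,
# build a card name of the form "HH:MM  Task name (duration)", and sort
# by start time. The layout mirrors the manual process described above.

def to_trello_names(rows):
    """rows: list of dicts with 'name', 'duration', 'start' (HH:MM),
    'is_parent'. Returns Trello card names ordered by start time."""
    leaf_tasks = [r for r in rows if not r["is_parent"]]  # 1. delete parents
    leaf_tasks.sort(key=lambda r: r["start"])             # 3. reorder by start
    return [
        f'{r["start"]}  {r["name"].strip()} ({r["duration"]})'  # 2. concatenate
        for r in leaf_tasks
    ]

rows = [
    {"name": "Data migration",     "duration": "",        "start": "20:00", "is_parent": True},
    {"name": "  Export customers", "duration": "25 mins", "start": "20:00", "is_parent": False},
    {"name": "  Load customers",   "duration": "40 mins", "start": "20:30", "is_parent": False},
]
print(to_trello_names(rows))
# ['20:00  Export customers (25 mins)', '20:30  Load customers (40 mins)']
```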

Trello import

Trello allows you to import up to a hundred tasks at a time. As we had over a hundred, I had to do it twice. The entire process from MS-Project to Trello took about 10 minutes in total, including checking!

The parts I wasn’t able to automate in Trello were the color-coding of tasks (for better readability) and the assignment of people to tasks. Although it’s possible to write a parser and use the available Trello API, we didn’t have the time to go that route this time ’round 🙂
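For the record, a parser along those lines wouldn’t be much code. A sketch of the request building against the Trello REST API, as I understand its card endpoints; the card, label and member ids, key and token are all placeholders, and you’d send each request with something like requests.post(url, params=params):

```python
# Sketch only: builds the REST calls we would have used to automate the
# colour labels and member assignment on imported cards. All ids and
# credentials below are placeholders.

API = "https://api.trello.com/1/cards"

def label_request(card_id, label_id, key, token):
    """POST to /1/cards/{id}/idLabels attaches an existing board label."""
    return (f"{API}/{card_id}/idLabels",
            {"value": label_id, "key": key, "token": token})

def member_request(card_id, member_id, key, token):
    """POST to /1/cards/{id}/idMembers assigns a board member to the card."""
    return (f"{API}/{card_id}/idMembers",
            {"value": member_id, "key": key, "token": token})

url, params = label_request("CARD_ID", "LABEL_ID", "KEY", "TOKEN")
```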

Step 3: Empowering the Team with a ‘Live view’ of the go-live execution

The end result was a prepped deployment board that allowed everyone to see what they needed to do and when they were expected to do it, without needing to understand the intricacies of the planning process. They could also see which tasks were in motion and which had been completed (people moved their tasks as they would on a traditional Scrum board). This allowed people to be ready when it was their time to deliver on something. We aimed for near-zero communication loss between parties, and as Trello was a live view on the deployment, everyone was able to get themselves set up to act in advance of their ‘cue’!

The picture below shows the board in the latter stage of the execution (no tasks left in ‘Planned’). Unfortunately I didn’t have a screen capture from earlier in the day when the originally imported plan was intact.

Excerpt from the DEVOPS Trello board

Step 4: Weekly ‘dress rehearsals’ for practice and fine tuning the plan based on empirical performance data

At this point I had a comprehensive execution model on the Gantt and an elegant way of presenting those tasks for consumption by the deployment team (using Trello). As the Production environment was new (no consumers yet), we were in a position to use it as a staging environment until the actual go-live. We then decided to run through the full deployment execution as a dress rehearsal once a week. This would:

  1. Give everybody practice on the execution process
  2. Allow us to refactor the Gantt model based on empirical performance data (everyone captured the start and end times of their tasks during the dress rehearsals). This made the plan more accurate.
  3. Expose gaps in the Production environment as part of each dress rehearsal (the smoke tests had to go green each time and if they didn’t, the techies would troubleshoot)

Task duration capture

In effect we were running a “Plan Do Check Act” Deming Cycle:

  1. Plan the execution on the Gantt
  2. Present the plan for execution using Trello as an online Kanban board
  3. Gather the execution statistics and note Gaps found
  4. Refactor the task duration on the Gantt using the empirical performance data

…and repeat every week starting two months before Go-Live

Deming Cycle: Plan -> Do -> Check -> Act
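The ‘Act’ step of that cycle can even be sketched as code. A minimal illustration of refactoring task durations from rehearsal timings, here naively replacing the planned duration with the mean of the observations (in reality the adjustment was a judgment call made on the Gantt, and the numbers below are made up):

```python
# Sketch of the 'Act' step: refactor each task's planned duration using
# the durations observed during dress rehearsals. Tasks with no
# observations keep their original planned duration.
from statistics import mean

def refactor_durations(planned, observed):
    """planned: {task: minutes}; observed: {task: [minutes per rehearsal]}."""
    return {
        task: round(mean(observed[task])) if observed.get(task) else minutes
        for task, minutes in planned.items()
    }

planned = {"db migration": 90, "smoke tests": 30}
observed = {"db migration": [75, 70, 72]}        # three rehearsals' timings
print(refactor_durations(planned, observed))
# {'db migration': 72, 'smoke tests': 30}
```

Each weekly pass feeds the updated durations back into the plan, which is exactly what made the critical path estimate converge on reality.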

Execution Tip: Keeping everyone informed

We created a Trello task called “Deployment Commentary” which I used to keep all stakeholders informed as to where we were with the execution. As the coordinator of the deployment I was fully aware of the actual state (because I was monitoring the Trello board and orchestrating the actions), but in the background I was reconciling the critical path tasks against their planned start and end times on the Gantt chart. This meant that at any point in time I knew whether we were running ahead of or behind the planned execution, and by how many minutes. As each critical path task was done, I checked the Gantt, worked out the delay and posted on the commentary so that EVERYONE knew what was going on from a single source. Most people had configured alerting on that task so they would get the update pushed directly to their smartphones or e-mail. It also limited the number of people coming into the Command Center to ask “where are we at?”.

Excerpt of deployment commentary
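The ahead/behind arithmetic I was doing against the Gantt amounts to this (the times shown are illustrative):

```python
# Sketch of the commentary reconciliation: compare a critical path task's
# actual finish time to its planned finish and report minutes of drift.
from datetime import datetime

def schedule_delta(planned_finish, actual_finish, fmt="%H:%M"):
    """Positive = running behind plan, negative = ahead (in minutes)."""
    delta = (datetime.strptime(actual_finish, fmt)
             - datetime.strptime(planned_finish, fmt))
    return int(delta.total_seconds() // 60)

print(schedule_delta("21:30", "21:12"))  # -18: 18 minutes ahead of plan
```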

So how did it all pan out?

As the famous (South African) golfer, Gary Player was credited with saying: “The more I practice, the luckier I get”. The crew completed the solution deployment and data migration an hour ahead of time and handed over to the testing team for deployment confirmation. The system went live smoothly and was fully operational the next day and all was well with the world 🙂


Planning Poker and User Story Sizing

Planning poker cards

Many Scrum teams that I’ve coached use Planning Poker as a mechanism for sizing user stories on their Product Backlogs. Often the use of this technique came about as a mandated process aimed at standardizing Scrum practices across teams on a corporate program (as opposed to something cherry-picked by teams on their own journeys of process discovery). Developers are inclined to view non-coding time as non-productive time, so the story sizing session may come to be viewed with skepticism and ‘ceremony disdain’.

In my opinion, Planning Poker is really good at what it was intended to do! What follows here is a fun training exercise aimed at coaching teams around the answer to two questions:

  1. Why do we bother with user story sizing?
  2. Why do we use Planning Poker and its weird abstract scale?

Note that this exercise and the article content will benefit teams who are already practicing Scrum and have already attempted planning poker.

What you will need:

  • 4 to 6 players and a facilitator
  • Bag of marbles
  • Some cups
  • 4 small items of vastly varying weight.
  • A kitchen scale
  • Stopwatch or timer
  • 4 to 6 sets of planning poker cards (one for each participant)

The coaching is achieved in two parts: a lecture preceding a fun, practical exercise.

Part 1: Why do we bother with story sizing and planning poker?

The audience is typically composed of Scrum Pigs so the explanation is presented around indisputable truths (i.e. a logical argument).

Truth 1: The Project Sponsor needs an understanding of how long funding needs to continue on a particular initiative. This requirement is agnostic of the methodology being followed and is always present.

What sponsors want to know

Truth 2: We use Scrum when we have constraints around requirements stability and/or practice on the technology

The concept below was first explained to me on a Scrum Master course with Ken Schwaber in Miami in 2012. It’s a Stacey diagram and shows the type of situation where it makes sense to use Agile techniques (as opposed to predictive techniques like Waterfall). Put simply, when UNKNOWNS > KNOWNS, any attempt to come up with a long-term plan is likely to be fraught with frequent updates (replanning sessions). Under these circumstances, Agile techniques, which use past performance data to predict the future, make more sense. As a developer on an Agile project or program the corollary is also true: you are probably in a situation that has a lot of requirement flux or tricky delivery technology (or both).


In Scrum, there is typically one predictor mechanism that answers the question: “How much more time?”…and that is the trend of the Product Backlog burndown curve. Where that extrapolation intersects the horizontal ‘time’ axis is the most likely end date based on the current user story burn rate.
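As an aside, the extrapolation itself is just a straight-line fit. A sketch with made-up burndown numbers (remaining story points at the end of each sprint):

```python
# Sketch of the burndown extrapolation: fit a least-squares line to the
# remaining story points after each sprint and find where it crosses the
# horizontal 'time' axis, i.e. the most likely end sprint at the current
# burn rate. The sample data is made up for illustration.

def forecast_end_sprint(remaining):
    """remaining: story points left after sprints 1..n."""
    n = len(remaining)
    xs = range(1, n + 1)
    x_bar = sum(xs) / n
    y_bar = sum(remaining) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, remaining))
             / sum((x - x_bar) ** 2 for x in xs))
    intercept = y_bar - slope * x_bar
    return -intercept / slope      # sprint number where the line hits zero

print(forecast_end_sprint([200, 180, 161, 139, 120]))
# roughly sprint 11 at the current burn rate
```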

So, the answer to “why do we bother with story sizing?”:

  1. The team has consensus on the relative size of things effort-wise (it’s a ‘wideband delphi‘ mechanism for you inquisitive pedants). The exercise to follow exposes how uncomfortable things can get without a consensus tool!
  2. The discussion of story scope and what it entails is a valuable entrance to Sprint planning (we usually do Product Backlog grooming and story sizing in advance of task breakdown in Sprint planning)
  3. We have just enough data to be able to answer sponsorship’s question regarding project timelines (we use velocities and a Product Backlog burndown to predict the end of a project).

…ok but you’re still looking at me skeptically so the next part aims to do some practical convincing!

Part 2: Why do we use a funny abstract scale? Why not just use ‘man hours’?

Truth 3: Planning is a time-boxed affair. We stop planning before the accuracy starts leveling off (because of a lack of requirements stability and/or practice on the technology)

Plan accuracy is sensitive to:

  1. Level of understanding of user requirements
  2. Level of practice on the use of the delivery technology
  3. Enough time to process this knowledge and decompose the work into atomic planned tasks

It would be efficient to continue the task decomposition process only until you get to diminishing returns on accuracy. I have drawn the following picture (sometimes in anger…) on several occasions. Usually I do it to explain to someone in management that more time spent planning is not going to improve the accuracy of the plan because our constraints are knowledge (i.e. execution practice) and data!

Stop planning when accuracy starts to plateau

As I mentioned in Part 1, agile delivery mechanisms are selected because either the requirements are not well understood and/or the technology is new – to the extent that frequent new discoveries would result in frequent refactoring of the plan (this is like a tax, as it’s not productive work and involves contributions from the most experienced team members). In an agile planning exercise the philosophy goes along the lines of: “We concede that the processes and/or technology are a journey of discovery, so we will spend the barest minimum effort on planning before we get to diminishing returns on accuracy”.

Because it’s hard to tell when we are going to get into the space of diminishing returns accuracy-wise (much easier in retrospect), we use a ‘timebox’: a preset, agreed-upon duration of time which may not be exceeded.

In a time-limited planning session sensitive to unknowns in requirements and technology, the best we can do is to get down to a relative size of things and give it a rudimentary estimate:

Shoes < skateboard < bicycle < motorcycle < car < dump truck < combine harvester

“…on that scale, this story looks like a ‘skateboard’ and the other one is a ‘dump truck’ ”

The practical exercise that follows affirms the value of Planning Poker’s consensus mechanism and also shows why the alternative of measuring in hours (an absolute scale) is just not practical given the constraints of available data and time.


1) Four players sit around a table. On the table is a bag containing 4 small items of vastly differing weight, some polystyrene cups and a bag of marbles.

2) The facilitator takes one of the items (not the heaviest or lightest), presents it to the group and says that this is the ‘benchmark story’ with a value of “8” story points. The facilitator explains that in this exercise the objects represent user stories and that their weight is (metaphorically) equivalent to the amount of delivery effort.

3) The facilitator now pulls another item out of the bag and starts a timer. The team has strictly 8 minutes to do 3 planning poker estimates on the remaining three objects, which are supplied one at a time. The team can refactor their story points after all three objects have been supplied (provided they stay within the 8-minute timebox). At the end they will have a relative sense of the weights. It is assumed that these are Scrum pigs who are already familiar with planning poker, so they can facilitate the poker estimates themselves. The facilitator can give them updates as to how much time they have left (or do it on a kitchen timer that they can see). Typically team members pass the item around and feel the weight in their hands relative to other objects before voting.

4) The facilitator creates a table (example shown below) based on the team’s planning poker estimates:

Object          Story points
Calculator      5
Sampler         8
Impact Driver   20
Hammer          40

Of course, your objects will be different (these happened to be what was lying around my workshop when I concocted the exercise)! Note that the teams I’ve run this exercise with usually reach consensus on three planning poker estimates relative to a supplied benchmark inside of 8 minutes. Because they only have to do 3 estimates and the scale is quite crude, there aren’t too many debates.

5) Now the players are asked to estimate the equivalent weight in marbles of the four small items, one at a time, taking no more than 8 minutes for the estimate (i.e. exactly the same timebox). The facilitator explains that this is equivalent to doing an absolute estimate, such as estimating the stories in hours (as opposed to the initial relative estimate).

6) After each estimate is given, the facilitator uses an actual kitchen scale to verify the accuracy of the estimate (by weighing the marbles) and logs the variance against the true value (percentage error). I suggest that you weigh your four objects in advance so that you have their true weights. At this stage the facilitator does not allow the team to see their accuracies. Watch carefully for team members who say “Take a few marbles out” or “Add two or three marbles”*.

Object          Percentage error (estimate vs actual)
Calculator      29%
Sampler         -8%
Impact Driver   15%
Hammer          -32%

7) When the team is done, the facilitator plots their results on a distribution graph. The distribution will be fairly random and not very accurate (certainly not at the ‘take out two marbles’ level of accuracy).
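If you want to compute the errors for step 6 on the spot, it’s one line of arithmetic per object. A sketch using made-up weights in grams that happen to reproduce the example table above:

```python
# Sketch of the facilitator's scoring step: percentage error of each
# marble estimate against the true weight from the kitchen scale.
# All weights below (grams) are illustrative.

def percentage_error(estimate, actual):
    return round(100 * (estimate - actual) / actual)

true_weights = {"calculator": 210, "sampler": 350,
                "impact driver": 1400, "hammer": 620}
estimates    = {"calculator": 271, "sampler": 322,
                "impact driver": 1610, "hammer": 422}

errors = {obj: percentage_error(estimates[obj], w)
          for obj, w in true_weights.items()}
print(errors)
# {'calculator': 29, 'sampler': -8, 'impact driver': 15, 'hammer': -32}
```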

8) The facilitator points out that:

  • Spending more time on the exercise would probably not result in more accurate results
  • It’s not that easy to weigh something with just your hands…nor is it easy to predict how long something is going to take to build when you don’t know the ‘something’ nor how to build it (typical of an agile project). * Now would be a good point to make the observation that ‘adding (or removing) a few marbles’ gave you an illusion of estimation accuracy when in fact you were 40% inaccurate in reality (or whatever the case was – I’ve yet to encounter a team that had any level of consistent accuracy here).
  • The marble represents a commonly understood unit of measure (like ‘hours’) yet it’s very difficult for four people to agree on a common value with any reasonable level of accuracy given crude estimation mechanisms. In comparison there was very little debate or discomfort with the first part of the exercise using story points and planning poker.
  • Coming up with a relative value is far easier than an absolute value (in marbles or hours) for a timeboxed exercise.

Conclusion: the planning poker exercise exposes the estimating ‘crudeness’ for what it really is and forces people to agree on the barest minimum accuracy that is possible under the circumstances. 


Predictive Management versus Empirical Control…and why our sensors save us from chaos

This article tries to explain the differences in approach between predictive techniques in project management and agile/iterative techniques that use extrapolations of past performance measurements as a guide to future performance. I’ve tried to keep the explanation simple and used a very mechanical metaphor for a Scrum team’s operation!

What is a predictive project?

This is a type of project where there is enough known about the requirements and the work required to achieve those requirements in a predictable way. Using project management terms this means:

  1. The full scope can accurately be described at the start of the project and captured in a detailed work breakdown structure (WBS).
  2. Work packages from the WBS may be elaborated into a definitive series of tasks each of which can accurately be estimated duration-wise.
  3. The results of the task definition and estimation processes may be collated in a schedule which also shows the dependencies between the tasks. This is also known as big up-front planning, the result of which is typically a Gantt chart used to orchestrate the execution of the project. A Gantt chart looks like a series of waterfalls, which is why big up-front planning projects are referred to as ‘Waterfall’ executions.

A Gantt chart usually looks like a series of waterfalls

Good Waterfall projects can achieve up to 90% accuracy, but only when the scope is clear-cut and the execution risks are low (or the budget and execution timelines have been padded with adequate contingency allowances to counter the risks – workable, but not an efficient approach economically).

So what do you do when the scope is not known up-front or the activity durations cannot be estimated?

Agile methodologies are selected in environments where the scope of the project is hard to predict. This may be for several reasons, some of which are listed below:

  • There is a general lack of maturity about the requirements or delivery roadmap from the customer as it is a ‘new thing’
  • There may be considerable technical risk, perhaps it’s the first time that a technology was used by a team
  • The customer may not know how to proceed until they’ve seen a simple tangible solution.

In other cases the predictability problem is exacerbated by an inability to estimate task durations with any real confidence.

Ok, I know I haven’t answered the question yet…keep reading

Enter ‘empirical control’ – something we do all the time

In the real world where we live and operate there are many contexts for which the definitive relationship between parameter x and parameter y is unknown, i.e. it may exist as a formula but we don’t know it! At the same time we are able to get by based on what we have learned over the years (empirical data = what we have learned through observation).

As an example: “If I leave home at 8:35 am I am fairly confident that I will get to work just before 9 am”. It is plausible that some accurate predictive model exists that takes into account traffic patterns, the distance to work, the car’s performance, traffic light sequencing etc. It’s not usually worth the effort finding out, simply because the empirical data offers sufficient predictability: I drive to work every day and I’ve found that, with a fair degree of confidence, it typically takes around 25 minutes. In that case:

  • Parameter x: Time I left home
  • Parameter y: Time I arrived at work

We do this sort of thing regularly.

Empirical predictions are derived from experiment and observation rather than theory.

If there is no way of predicting the outcome of an initiative: run an experiment, gather lots of data frequently and use this data to predict future trends.
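The commute example above can be expressed as a tiny ‘experiment’ in code: no traffic model at all, just observed durations (the numbers are made up):

```python
# Sketch of empirical control in miniature: predict the commute from
# observed durations (minutes) rather than a theoretical traffic model.
# The mean gives the typical drive; a high percentile gives the
# 'fairly confident' figure to plan around.
from statistics import mean, quantiles

commutes = [24, 26, 23, 25, 31, 24, 27, 25, 22, 28]  # made-up observations

typical = mean(commutes)             # typical drive time
p90 = quantiles(commutes, n=10)[-1]  # 9th decile: allow this much to be safe
print(typical, p90)
```

More observations sharpen the prediction, which is exactly why agile methods gather performance data every sprint.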

A Scrum Team viewed as an experiment…

To make more sense of a project running with performance data gathering and related control, I’ve tried to illustrate the workings of a Scrum Team using a mechanical process as a metaphor. On the off chance that you don’t already know, Scrum is an agile delivery framework.

Scrum as a mechanical process experiment

The image shows:

  • A product backlog shown as a funnel of user stories or user requirements
  • The sprint is shown as a big drum where selected user requirements are converted to completed functionality (demonstrable) through an imaginary fixed-duration combustion process (I couldn’t resist the pun on a ‘burndown’…snigger).
  • The Sprint scope throttle valve controls how many user stories will be tackled in the Sprint. This is a control mechanism.
  • We can track the rate at which the product backlog is being depleted by monitoring the product backlog burn rate. This is a measurement sensor.
  • We can track the rate at which the Sprint backlog tasks are being burnt. This is a measurement sensor.
  • If the sprint burn rate sensor suggests that the work of the Sprint will not be completed by the end of the allocated time, the “oh dear, we bit off more than we can chew” return line allows Sprint scope to be returned to the Product Backlog. This is a control mechanism.
  • The actual effort is tracked, sometimes even at task granularity. This allows the team to understand their sprint planning accuracy as well as giving an indication of the budget burn rate. Note that single Scrum teams do not scale up very well* (you can’t add people easily and 7 is considered the ideal team size), so this is a measurement sensor (as opposed to a control mechanism). *I’ll handle the challenges of scaling Scrum in subsequent posts.
  • Finally the “Completed functionality buffer” allows for completed functionality to be staged as product releases which may or may not be at the end of each Sprint. This together with the “Finished product release valve” is a control mechanism.

So to wrap up, when there is flux around requirements or the delivery platform/technology is new (or both) then up front planning cannot be done with a great deal of confidence. Under these circumstances, it would make more sense to use an agile delivery mechanism that runs the project as a controlled experiment. We put various sensors into this ‘experiment’ to capture performance data. This data helps us to predict the future performance of the project as well as to alert us quickly to any corrective actions that may be required (saving us from chaos).

In the next post, I’ll explain how to predict the end of a flexible scope/agile project.


Update: Hamsters on overdrive

Ok, it’s been a long time since I’ve posted anything! The company is swamped with work, has more demand than supply and we’ve been hellishly busy. This is certainly healthier than the alternative but can also be a frustrating place. For quite some time I’ve been personally involved looking after clients’ initiatives, leaving little time for further recruitment (watch this space). The thing is, I really enjoy being in the trenches with badass project teams. I have to balance that with keeping a wary eye on the operational running of the business, its investment portfolio and joint ventures. Erm, I also enjoy sailing and am in the process of forming a new rock band. And I got a new son last year… 🙂

My original motivation for the blog was to capture some of my real world learnings as an enabler for coaching activities. That way it’s easier to point people in this direction and say “just hear me out for now, don’t worry about trying to make notes – I’ll mail you a link to the article”.


Having fun in the trenches

As it’s been a while since I posted, there’s been quite an accumulation of project management IP that can be packaged and put out here for critical consumption. I’ll give it a bash.


Crossbolt fully in the cloud!

It’s been a while since the last post and there’s been a lot going on (I’ll try and catch up over the course of a few posts). The other day, after seeing me tab through various browser windows at speed, our bookkeeper asked in jest if there was anything left in the company that wasn’t in the cloud. I had a good laugh and then went silent – to my amazement, I couldn’t think of anything!

Local not always lekker...

Let me backtrack a bit – Crossbolt has, since its inception, pushed the technology boundaries on business automation. Examples include being paperless from the beginning, being truly location agnostic for several years (that’s hard), and running WiFi and using mobile platforms since 2001. Generally, if there was a more efficient way to tune the back-office function, I was going to find it!

Over the years, as vendor SaaS models began to mature and bandwidth got cheaper and more reliable, I started moving pieces of the company’s application infrastructure ‘into the cloud’. The business case was simple: no more upgrades and back-ups to worry about, and the freedom to work off any platform!

Without further ado, here’s the list. It’s taken a while to settle on a system that worked and there were some failures along the way (it’s an evolution).

E-mail, Calendar and Tasks: Google Apps, with a domain redirect. The fact that Gmail is powering everything is invisible to customers. Cheap (the free version suffices for now), powerful and much better than anything Outlook-based. I still use Outlook on customer sites where I have to use a company-specific e-mail address, but the calendars sync up quietly in the background using the Google Calendar synchronization tool. So yes, Google knows everything there is to know about Crossbolt, but they know everything about everyone else too…

File server: Dropbox is the bee’s knees, and we have a paid-for 50GB subscription for all company-related data [$99/year]. This thing is invisible and does exactly what it promises, seamlessly making data available across multiple devices and allowing easy access by several employees and contractors. It’s much better than a LAN-based file server unless you are into huge media files (not a problem for us).

Timesheets: We use Harvest for this [$12/user/month] and it turned a tedious spreadsheet-based, once-a-month catch-up into something that happens seamlessly through the day using an iPhone application. It allows for rapid scaling as contractors come onto projects and roll off again later.

Workflow and task management: We use FogBugz as an internal workflow tool to assign tasks between staff. This is a gem of a product that I’ve been using for years at various companies. Its ease of use and accessibility make all case-management types of workflow a breeze to operate and scale. 20,000 other organisations think so too! [$25/user/month]

Accounting: We use Softline Pastel’s My Business Online product [R140/month for 2 user accounts]. Crossbolt has been with Pastel since inception; the move to the cloud-based product happened this year and it’s brilliant! Finally the bookkeeper can do everything remotely and we all have access to the books and management reports without having to mail files around. This is major progress from having a client-based application on one PC at the office that everybody had to use.

Payroll: We use Softline VIP’s Liquid Payroll, which demystifies everything related to payroll at R15/employee/month. This is well worth it unless you’re a masochist who enjoys managing somebody else who is also a bit confused by all the statutory rubbish a business has to wade through to stay compliant (medical aid credits, fringe benefits, contributions, deductions, EMP 201s, EMP 501s, IRP5s, blah blah blah). Bitchin’!

Statutory taxation: SARS e-filing [free, other than the tax you have to pay…], which, though clunky compared to some of the other products here, is light years ahead of the place from whence we came… and most of the rest of the world.

Marketing: LinkedIn and WordPress and Facebook to an extent too. I also use Twitter and Google Alerts occasionally for market research (particularly around investments). You already know about these.

Project Management: We use Scrumy for online Scrum boards to manage work internally and between geographically separated collaborators (for example, graphic designers in Durban and Toronto and a printer in Cape Town).

Banking and Investment: Nedbank Online Banking and Standard Bank Online Share Trading. The latter is brilliant (there were two predecessors), the former doesn’t warrant a link!

VoIP: Skype kills everything dead. We’ve been using it for intercontinental comms since 2003.

While many of these applications are household names, getting all the way there has taken some doing – particularly the accounting and payroll applications.

And the downside: well, without an internet connection, I might as well go sailing! Fortunately in the interests of uptime, we have several net connection mechanisms (ADSL, Vodacom 3G, Cell C broadband) and in fact can continue to run the business from the yacht. Like right now.

Have net connection, will work


Crossbolt sponsors CH2

Continuing the off-the-wall marketing mentioned in the last post, we decided to give these two (very) talented nutters a boost. As their website says:

Corneille & Leon recently returned from the USA after performing @ the 2011 Lee Ritenour Six String Theory International Guitar Competition World Finals. They rubbed shoulders with the likes of masters like Steve Lukather, Lee Ritenour, Scott Tennant, Joe Bonamassa and Doug Smith to name a few. Corneille & Leon won the World Classical / Flamenco category. They recorded their first DVD with a full symphony orchestra in September 2011.

CH2 and Crossbolt

CH2 get fantastic corporate exposure due to their (remarkably afrocentric) upbeat classical/jazz vibe. Good for them and great for us.

Sponsoring musicians does have its unique perks: I was at the coffee bar with some colleagues at a broadcasting client’s site recently when Corneille walked past and gave me a ‘hey bro’ high five. He was on his way to a TV shoot and I could nonchalantly claim – “oh, just some guys that we sponsor”… 🙂

Watch this space, they’re hardworking, immensely talented and have a growing cult following.


Crossbolt 10 year anniversary graphic

With a decade in business coming around, we decided to commission a graphic to celebrate the milestone! We wanted this to be slick with just a whiff of anti-establishment rebellion. There is a method to the madness: you probably can’t remember any project management company brands, can you? They’re typically staid old establishments. The only way to create a memorable brand here was to pitch it way out to the left. So we’ll be the punk rock IT project managers…

Being a petrolhead at heart, I figured that a hotrod would be a good theme, and I also had some retro 80s-era skateboarding-related ideas. Here was my original concept drawing:

The concept image

Enter the amazing Andy Wright (ex-frontman for Leek and The Slashdogs and owner of design company BOFA), quite possibly the only go-to guy for that esoteric combination of themes. He absolutely nailed it 🙂

10 Years of Smooth Rollin' Projects

Vroooom! Bring it on.


Scrum tip: Conduct better sprint retrospectives

Sprint retrospectives have in the past proved challenging for me as a Scrum Master:

  1. Some pigs take an inordinate amount of time to give their feedback. In a team of 8 pigs, the first few volunteers often got a lot of bandwidth, short-changing the folks at the back of the queue.
  2. Discussing every point that is brought up, without sensitivity to relevance, and then running short on time to give the really burning issues appropriate analysis. Often there is an underlying feeling that some points are less important than others, but out of respect for the pig originating an idea, fellow pigs feel inclined to avoid filtering the discussion.
  3. Some pigs have not done any introspection and try to come up with their feedback on the fly – doing a disservice to their fellow pigs who have taken the time to think things over.
  4. Scrum Masters leading rather than facilitating the process (guilty as charged)!

Based on these challenges, I introduced a few process changes to our last retrospective and it made a huge difference. We got rich feedback from all pigs, discussed only the relevant points and had far less Scrum Master interference in the process improvement solution space. Here’s what we did:

(A) Precursors [First 5 minutes]

Elect a Scribe: A scribe was nominated to take points down on the whiteboard. Rule: the scribe at the whiteboard cannot offer an opinion or do any filtering. He/she must simply take down what is being said in a succinct way (the scribe is allowed to ask questions purely for clarity). As far as possible the scribe must use the words of the person making the point – don’t unnecessarily paraphrase. The scribe can be the Scrum Master provided that the rules are followed.

Elect a Timekeeper: The timekeeper monitors the timeboxed activities and calls ‘Time’.

(B) Pig feedback: What went well, what could be improved [Next 45 minutes]

  1. Each pig was given exactly 5 minutes to say in their own words what went well and what could be improved. They could talk about whatever they wanted in these 5 minutes and other pigs were limited to questioning only for purposes of clarity. This also empowers/forces the pig to choose points that are especially relevant if they have a long list of items to go through.
  2. The scribe takes down the points in two adjacent columns on the whiteboard.
  3. No discussion is allowed on the topics at this point. Pigs were asked to respect their fellow pig’s right to an uninterrupted opinion.
  4. Note that the scribe, being a pig in the team, also gets a chance to give his/her feedback; someone else takes over scribe duty during this time.

(C) Discussion points voting [20 minutes]

Once the feedback points were captured, each pig was given a distinct color whiteboard marker. They were given 15 minutes to peruse the board and were allowed to make seven votes in total against points that they felt warranted further discussion (you need the distinct color to keep track of your vote count). The number “7” was arbitrarily chosen based on the total number of potential discussion points on the board. Pigs were not allowed to make more than one vote per point thus limiting the potential for any pig to force an issue into discussion (the jury is out on this part).

At the end of 15 minutes, once all votes were cast, we could sum up votes and select the pertinent points for discussion (5 minutes).

Previously we had no aggregated view of topic relevance. The voting mechanism addresses this shortcoming!
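The tallying step is simple enough to sketch in code. A toy example (the feedback points and vote counts here are made up for illustration):

```python
from collections import Counter

# Hypothetical feedback points as captured on the whiteboard: one list entry
# per marker tick, with at most one vote per pig per point.
votes = [
    "flaky test environment", "flaky test environment", "flaky test environment",
    "too many meetings", "too many meetings",
    "unclear acceptance criteria",
]

# Sum up the votes and order the discussion by reducing importance
# (most votes first), exactly as in step (D).
for point, count in Counter(votes).most_common():
    print(f"{count} vote(s): {point}")
```

In practice the whiteboard and coloured markers do this job perfectly well – the point is just that the voting turns a fuzzy "what matters most?" debate into a simple ranked tally.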

"What we can improve" - note the voting marks against the points!

(D) Discussion [60 minutes to 90 minutes]

In this timeboxed period, the pertinent points were explored in order of reducing importance. This ensured that items seen to be especially important were not short-changed on discussion time.

The scribe would take down any process improvements that came out of the exploration exercise.

(E) Retrospective – knowledge capture [5 minutes!]

Previously I tried to capture notes through the laissez-faire discussions that ensued and then retrospectively write these up in a document and distribute it. As I was often an active facilitator/participant in the discussions, I didn’t always have a chance to capture all the relevant points – bad. On a previous project I also tried recording the retrospectives, but it was a horrible task to have to sit through a 3-hour recording to capture notes afterwards.

This time was different. Because the scribe had neatly captured the points against “What went well”, “Areas for improvement” and “Process improvements”, capturing the knowledge was simply an exercise in photographing the whiteboards and e-mailing them to the pigs (awesome!).


Using short videos as a change management tool

Some time last year I started investigating whether videos are a good mechanism for driving change on a project. Project stakeholders typically don’t read long documents, and even if they do, it’s usually a cursory glance at the pictures. Roadshows and presentations can be effective, but I sometimes got the feeling that stakeholders begrudged the hour-long intrusion into their lives (especially when presence is enforced by a sponsor’s command). If the presentations are effective, then there is the other problem of being asked to repeat them ad nauseam to different parts of the functional organisation. This is fantastic but time-consuming, and at some point you still have to get back to the actual work of project delivery!

As a result of that investigation, in recent times I’ve been using short videos very successfully at one of Crossbolt’s clients. Here’s an example…

[Example pending client approval]

I was asked on at least four separate occasions today ‘how do you make those videos?‘ and this post is intended to share some pointers with folks looking to explore using video as a project change or training mechanism.

(1) Create a light-bulb moment: My primary inspiration came from the folks at Epipheo (a play on the words ‘epiphany video’). In particular I learned that the thing that makes a video go viral – is the creation of the epiphany in the mind of the viewer as shown in the video example here.

If you can trigger a genuine ‘light-bulb moment’ about the subject matter, the viewer will want to share it, thus taking care of your message distribution. People are more likely to watch something their friends recommend than something from some chap they don’t know.

(2) Innovate – you don’t need a fancy animation studio: At a business meeting focused on a viral marketing venture, I was pointed to “In Plain English”, a second source of inspiration. Take a look at one of Commoncraft’s examples here and watch how efficiently a message can be delivered in a low-tech way. Don’t be fooled into thinking this is light work though!

(3) You don’t need a fancy camera: For now anyway, just about any reasonable digital camera will do. To keep LAN/internet streaming speeds fast, I tend to prefer files less than 200MB.
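As a rough sanity check on that size budget, the arithmetic is simple: size times eight (megabytes to megabits) divided by duration gives the average bitrate you can afford. The 5-minute duration below is just an illustrative assumption:

```python
def max_avg_bitrate_mbps(size_mb: float, duration_s: float) -> float:
    """Average bitrate (megabits/second) that keeps a clip under size_mb."""
    return size_mb * 8 / duration_s

# A 5-minute clip capped at 200MB allows roughly 5.3 Mbit/s on average -
# comfortably enough for decent quality at modest resolutions.
print(round(max_avg_bitrate_mbps(200, 5 * 60), 1))
```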

(4) You do need a tripod and good lighting: These two are fundamentally important. I try to find locations with good, abundant natural light. Do not attempt this without a tripod – nobody wants to watch a shaky video. Any tripod will do and I often shoot with the camera on a cheap mini tripod.

(5) You must have video editing software: You need to be able to edit sound, add background audio, clip video, order clips and still shots, etc. Just shooting a video of a presentation and hoping that this will do the trick is optimistic (unless you’re a fabulous orator…). This also means that you need to know how to operate the software! Most tech-savvy folks will be rolling in a few hours.

(6) Videoing a whiteboard drawing is great: As we often use whiteboards to explain concepts, this mechanism translates very well into the video space.

Whiteboard videos are fun

It also has a more intimate feel, as there is a real person in the video (as opposed to animation techniques). Some tips:

  • Whiteboard lighting is tricky. Get some tips here and invest the time before you shoot video.
  • Make sure you mark the camera’s visible area on the whiteboard. You don’t want to shoot your video and then find out that you’ve strayed out of the camera view (happens to everyone the first time…).
  • Write the narrative in advance so that you know what you are going to draw and what message you intend getting across with the picture.
  • Go through a few trial runs before shooting. Be careful not to block the whiteboard with your body as you draw (this can be tricky).
  • Speed up the video. It’s boring watching someone draw in real time. Keep this in mind when shooting, as you can take more care and do decent drawings in several colors.
  • Add the narrative afterward otherwise you’ll sound like a chipmunk when you speed up the video.
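For the speed-up itself, most editing suites have a speed control, but a tool like ffmpeg can also do the job from the command line. A sketch, assuming ffmpeg is installed (the filenames are placeholders):

```shell
# Play every frame 4x faster by rescaling the presentation timestamps,
# and drop the original audio (-an) so the narrative can be added afterwards.
ffmpeg -y -i whiteboard_raw.mp4 -filter:v "setpts=PTS/4" -an whiteboard_fast.mp4
```

Change the divisor to taste – PTS/8 makes the drawing eight times faster, for instance.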

(7) Life is easier on a fast computer: Don’t let this be an obstacle to progress – it’s optional. Manipulating video is processor- and memory-heavy, and having a good machine means a whole lot less time sitting around watching the ‘hourglass’. I have a potent Quad Core iMac with a 27″ screen. I’ve been able to do effective videos on a 13″ business laptop too (with a bit more pain).

(8) Spend time with the audio track: More than half my time is consumed with narratives, background audio and sound effects. It makes all the difference.

(9) Don’t aim for perfection: The videos must be quick and easy to produce. If it’s good enough, let it go, glitches and all. This is a project tool, not a commercial production! Interestingly, an over-produced video becomes a barrier to you doing more videos on the project, as your stakeholders will expect that quality again (this is a hard lesson).

(10) Use a cheap trick to encourage viral distribution: This is easier said than done, but we often try to inject some skit or aspect into the video that inspires a good laugh or shock. In the example mentioned earlier in this blog, it was the Simpsons sofa spoof at the end.

There you go – happy shooting!


Eating my own dogfood…

In my personal capacity I’ve had an initiative that’s been open with just about no progress for a couple of months: I’m having to vacate my home office to make way for my daughter’s new room.

Great spot, but it has to go

A big reason for the lack of forward progress is that I can’t see a sustainable solution just yet. I’m often away at client sites during the day and tend to do office work in the evenings. External offices don’t work for me, as time spent there is time away from my family in the precious few hours I have before my daughter’s bedtime. The other option is to ‘MOVE TO A BIGGER PLACE’, as I am prompted regularly by all my friends. This brings other challenges into the picture: as an example, try to find a reasonably priced spot with four parking bays in our school district! Or consider that we have a killer bathroom that took me the better part of a year to get right. The thing is – we’re happy and we don’t want to move, so the solution has to be found within the available space constraints.

I’m also anxious about losing my place of refuge. The office is my quiet spot in a house shared with two females and several hundred cubic meters of (typically) joyful chatter. But I had promised my daughter her own room and that’s a sacred covenant. Tick-tock-Tick-tock…

I had an epiphany while driving several days ago: “I had the wrong methodology in mind!”. I was thinking of the initiative as a waterfall project: see the solution in its entirety and then do a big up-front plan for its execution. The fact that I couldn’t see the full sustainable solution stopped me from making ANY progress on this front. The realization was that I was running an unpredictable change management initiative. It demanded an agile delivery approach. I needed to make a few small changes and try those on for size, then use this experiential feedback to elaborate the solution in bite sized chunks.

Eat my own dogfood!

Take a step back to think about the irony here: (a) I own an agile management coaching practice and (b) I regularly run projects where the full solution is not known up-front.

I shopped around online for a virtual Scrum board so that I could quickly smash out some user stories and tasks. For the uninitiated, Scrum is a project management process that allows a solution to be advanced in small increments. The guys at Scrumy have done a truly wicked job.

  1. It took me 30 minutes to whack out a full plan of what was known.
  2. I made substantial progress in my first day following the plan.
  3. You folks all get a ringside seat right here:
  4. While I haven’t completely solved the problem yet, what I have already is working quite nicely

Satellite office 1

This blog entry was done in satellite office 1. I haven’t got my wicked Aeron chair yet, but the Big Mac and I have gotten off to a rocking start.
