"Insanity: doing the same thing over and over again and expecting different results." While there's some dispute over who said it (Benjamin Franklin or Albert Einstein), it seems to ring true.
And it's just as true in the world of the projects that commercial and public sector organisations undertake to move themselves forward in some way, such as business change programmes, cost reduction programmes, business process redesign, ERP (Enterprise Resource Planning) system implementations and complex IT projects.
We tend to go about them in the same way over and over again. We organise and carry out our projects in ways declared "Best Practice". We train our staff in "Best Practice" methods, and the consulting firms we engage tout their "Best Practice" credentials.
And the result, according to sources such as industry analysts, is that 60 to 80% of these projects are doomed to fail. That is, the business benefits expected by the organisations paying for the projects will not materialize.
My own observation of 30 or so client projects, within some of the world's largest corporations as well as not-so-large private and public sector organisations, is firstly that the private sector is no more successful than the public sector in this regard, and secondly that the real failure rates are probably higher than those reported by the analysts. And there is, I will suggest, good reason for that disturbing state of affairs.
But first let's remember what those project failures mean for the organisations involved.
The most obvious is the financial cost: millions in wasted funding. Then there is the disruption to business, and the time and cost of the ensuing damage limitation and rectification work. And there is the human cost: the wasted time and talent, the frustrations, the disillusionment with management leadership and the consequent distrust of future improvement initiatives.
That's not to forget the delays and lost opportunities in putting in place the performance improvements, cost reductions or increased competitiveness the organisation wanted in the first place.
After declaring that we will "learn the lessons" from these failed projects, the odd thing is that we then set about the next round using exactly the same "Best Practice" methods as before, but armed with the earnest hope that "this time, for some reason, it will all be different!"
Many authorities on projects and programmes list 10 to 12 factors they say lead to the high failure rates. Their lists range across factors such as: poor project management, lack of senior management support, failure to identify requirements, poor communication, and so on.
Certainly, those factors can be involved and be important. But a conclusion I've arrived at is that they are not fundamental to the failures.
We are all guilty
This is where I have to confess to being a sinner myself. I am by no means an innocent observer. My own rude awakening was as a Project Manager of an ERP system implementation some years ago.
At the time, I happened to read a column in the Computer Weekly newspaper. The columnist pithily asserted that "any project manager who refuses to quantify their objectives is either incompetent, a charlatan or just plain stupid".
Being a project manager who had not quantified my objectives, I somewhat took exception to this assault upon my affectionately held personal qualities.
"What could this idiot mean?" I raged. "Why did I need to quantify my objectives? My objective was clear: implement the new system. One, or maybe two, new systems. There, quantified! Job done!"
I should have noticed that being enraged meant a nerve had been touched. It took me a few weeks to realize that I was the idiot, not the columnist. The point being made was: why are we, as a business, implementing a new system?
What are we trying to achieve? What do we expect to get out of the expense, the technical work, the new ways of working, the staff training, the disruption and all the other paraphernalia of a large system implementation?
Okay, so I’m a slow thinker
After a bit more pondering (which admittedly took me several years), I realized that the whole point… [drum roll]… is to improve the business: to improve the performance, capabilities and costs of the business. There is no other purpose.
It's not about delivering "stuff"; the "stuff" is simply a means to an end. It's not enough to hope that delivering "stuff" will somehow make the organisation better. It's about business results. The project objectives are about some form of improved business performance.
Now, that columnist's point made sense. In fact, it made vital, crucial sense.
The real question was: how much improvement did we expect or want the project to make to the business? From what existing level of performance to what improved level of performance, and by when was it to be achieved?
Which aspects of performance did we want to improve, and how would we specify, quantify and measure those improvements? What numeric scales of measurement would we use? That gives a level of clarity rarely seen in project objectives.
As the project manager, I could try claiming (as so many project managers do), "my job is to implement the new system. It's not my job to improve the performance of the business". But improved performance is exactly what the business is really paying for. That's the purpose of the project. This is where we come back to 'Best Practice'.
Many corporations and organisations internally promote and follow a 'Best Practice' approach to business and IT projects. It's based on what is called a 'waterfall' model.
If your organisation is mid-to-large sized, it is probable you are following a waterfall model. In the UK, the dominant project management method in the private and public sectors is PRINCE2®. PRINCE2 is based on the waterfall model and is positioned as 'Best Practice' by the UK Government Cabinet Office.
The 'waterfall' model has a structure in which, typically, an idea for a project is proposed, a business case is written to justify funding, and the project is planned and structured as a series of stages. Stages such as analysis and design work are planned and carried out, followed by development work and maybe procurement, a pilot implementation and then full implementation. Given the failure rates, it's tempting to add 'Post Mortem' at the end, although that's not yet an officially recognized stage.
Between the stages, each of which can take several months, there is usually a management review in which managers typically review documents produced by the project team.
It's called 'waterfall' because each stage follows on from the previous in a step-by-step, linear, downhill manner; water not being noted for naturally travelling back uphill. Eventually, something (some form of deliverable, or "stuff") emerges at the end of the process.
Mr Royce tried to warn us
The waterfall model is a conceptually simple and logical process. Unfortunately, in all but the simplest of projects it usually doesn't work in real life. Mr Winston Royce, credited with originating the waterfall model, did warn about this characteristic; but to no avail.
Real life tends not to cooperate with step-by-step linear processes. It would be like driving from London to Edinburgh while determined to do so in a straight line. Try explaining that to the traffic police!
Real life, including driving from London to Edinburgh and staying on the road, demands sensing and responding to changing circumstances as they happen. It's a process of continual learning in the real world about what works and what doesn't; building on what does and correcting what doesn't.
The management reviews of paperwork in the waterfall model can't and don't substitute for validating the project's basic assumptions and actual effectiveness in the real world.
The waterfall structure is such that the "stuff" being delivered only gets its first exposure to the organisation's real-world realities towards the end of the project, when it's too late to change course.
On knowing all the answers at the start
It's also worth recalling that business change and large IT projects in complex organisations are themselves far too complex for it to be humanly possible to foretell with precision at the start exactly how the project requirements and the business environment will evolve during the course of the project. Yet the waterfall model assumes exactly that.
Back to fundamentals
I'm suggesting that rather than the 10 to 12 or so reasons for the distressingly high rate of project failures, there are really two fundamental mistakes, namely:
Mistake 1: we plan to deliver "stuff" rather than quantified levels of business improvement. And we usually compound that basic error by proposing the solution "stuff" before we've gained a clear understanding of the real problems or obstacles to performance improvement.
Mistake 2: we follow a 'waterfall' project management model which does not provide for testing, every few weeks or months, whether the project is actually addressing the real problems, or is even capable of making business improvement in the real world. We rely instead on untested assumptions and expectations.
So what can we do?
While there isn't space in this article to explain all the practical details (that's for another article), here are the key principles, starting with the launch of a project…
"Here's the solution; now, what's the problem?"
Most organisations decide the favoured solution first, and then set about justifying it.
Instead, let's ask the critical question: "what business improvements do we want to make for our organisation?" This means deciding the improvement objectives before deciding what the solution will be.
That's a key mindset change in itself.
And then further, ask: "expressed in terms of numbers, what level of improvement do we need? By how much and by when?"
What would be the minimum acceptable level of improvement? That is, what level of improvement must we attain in order to have succeeded?
What would be the target level? What level above the acceptable minimum do we plan to achieve?
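To make those questions concrete, here is a minimal sketch of how one such quantified objective might be written down. Every name and figure in it is invented purely for illustration; the point is that each field forces one of the questions above to be answered.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImprovementObjective:
    """One quantified business-improvement objective (illustrative only)."""
    name: str
    scale: str            # the numeric scale of measurement
    current_level: float  # measured performance today
    minimum_level: float  # level we must reach in order to have succeeded
    target_level: float   # level we plan to achieve, beyond the minimum
    deadline: date        # by when the improvement is to be achieved

# Hypothetical example: reducing average order-processing time
objective = ImprovementObjective(
    name="Order processing time",
    scale="average working days from order receipt to dispatch",
    current_level=9.0,
    minimum_level=5.0,
    target_level=3.0,
    deadline=date(2026, 6, 30),
)
```

Notice that a vague aim such as "speed up order processing" cannot be written in this form at all: the scale, the current level, the minimum and target levels, and the deadline must each be stated in numbers.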
Now we have a clearer idea of what improvements we want to achieve, we can get a clearer idea of what the obstacles are between where we are, and where we want to be.
We can gather potential ideas (solutions) for achieving the improvement aims, and meaningfully start evaluating those candidates in relation to their costs and their likely benefit contribution to achieving our improvement aims.
It can take some hard thinking to answer those questions. It's so much easier to think in terms of delivering "stuff", but as we've seen, avoiding the hard questions is costing organisations immensely in financial, business and human terms.
As a test, ask the following question about a proposed or 'in-flight' project in your own organisation. But please do so with caution. The answers will be illuminating, but can discomfit those who are wedded to their favourite solutions and would prefer the question to remain unasked!
The question is: "what performance improvement do we expect this project to make, and by how much (in numbers) do we expect that improvement to be?"
Even if you go no further, simply asking that question will immediately give you penetrating insights into the rationale of a project and its likely success. But you can go further.
Designing the project…
Structure the project to deliver, within a few weeks of starting, some useful and real improvements to the business, and then onwards in regular delivery cycles of, say, every 4 weeks.
Measure how well what was put in place actually worked compared to what you expected, and use what you learn to improve the next delivery cycles to the organisation. Rinse and repeat!
That is, structure the project as an iterative process of improvement cycles. Measure the actual improvement made in each cycle, and use that feedback to adjust the next delivery cycles to keep the project on track to delivering the overall needed levels of improvement. That way you enjoy early business results, and receive early warning of risks, newly emerging objectives or faulty expectations before any serious damage or project failure.
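The shape of that loop can be sketched in a few lines. This is purely illustrative: the function names are invented, and in a real project "deliver" means putting an actual change into the business and "measure" means measuring the real-world result, not running code.

```python
def run_improvement_cycles(target, deliver_increment, measure, max_cycles=6):
    """Run delivery cycles, measuring actual improvement after each one.

    Each cycle delivers a change, then measures what actually happened;
    the history of measurements feeds the planning of the next cycle.
    Stops early once the target level of improvement is reached.
    """
    history = []
    for _ in range(max_cycles):
        deliver_increment(history)   # put a real change into the business
        actual = measure()           # measure the actual result achieved
        history.append(actual)
        if actual >= target:         # target met: no need for more cycles
            break
    return history

# Toy usage: each 4-week cycle adds 2 units of improvement; target is 10
level = {"value": 0}
history = run_improvement_cycles(
    target=10,
    deliver_increment=lambda hist: level.update(value=level["value"] + 2),
    measure=lambda: level["value"],
)
# history holds the measured level after each cycle: [2, 4, 6, 8, 10]
```

The contrast with waterfall is in where the measurement sits: here it happens every cycle, in the real organisation, rather than once at the end when it is too late to change course.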
And there may be some schooled in 'Best Practice' who will protest that it's not possible to deliver projects this way. The question for them is: "are you saying it can't be done, or are you simply saying you don't know how?"
Or, we can keep doing the same thing over and over again, and just hope that unlike the last time, this time, for some reason, it will be different!
PS: to help you and your organisation introduce a business improvement-led projects approach (the simplicity of which, I admit, took me several years to grasp), there are various explanations of the techniques elsewhere on this site. Or, to discuss via email, contact us.