Step 1: Assess

Why Projects Fail

CRM projects fail because of the so-called "three Ps": people, process, and politics. Some CRM projects fail because of a poor tool choice, but it's actually quite rare that the tool itself is the root cause of the failure. True, the tool is a convenient scapegoat: it is almost always blamed for project failures, and often branded as their sole cause. The reality is that a properly run project should be able to conduct a successful tool evaluation that yields a solid, well-suited tool. And even if that selection turns out to have been ill-advised, a solid project team should be able to recover without going through a "failed" phase, although there would certainly be some unpleasant moments. Your job is to determine how people, process, politics, or the tool contributed to the failure, without automatically blaming the tool.

The Assessment Session

Whether you believe your own project is failing or you were brought in to rescue a failing project, start by holding an assessment session. This is very similar to the post-mortem analysis described in , although post-mortem analyses follow projects that went to completion. Bring the entire project team together and, as much as possible, organize a face-to-face meeting, since there are many emotional topics to be discussed. Make sure the executive sponsor attends. As with a post-mortem, conduct the assessment with the full team, although the key internal players usually reconvene afterwards, without the integrator and without the individual contributors, to review the results of the assessment and to make the business decision on whether to proceed or halt the project.

It's somewhat easier for an outsider to conduct the assessment session because an outsider has a more objective view and is better able to explore unpleasant areas. If you are managing the assessment session for your own project, keep your emotions in check and make every effort to be objective and thorough, while leveraging the knowledge you have gained from working on the project to probe the issues.

In any case, expect the team's morale to be low by the time a project is in failing mode. Failing projects are depressing for all team members, regardless of their individual levels of responsibility for the failure. Maintain a tone of reasonable optimism for the assessment session: after all, the project is failing, but it's not dead yet, and recovery can only follow an appropriately balanced assessment. This is not the time to despair, but don't allow fake optimism either: the team is in a serious situation that should be acknowledged as such.

The Good

Start the assessment by reviewing what went well with the project. It may seem crazy to start with the positives, since there won't be many (after all, the project is failing), but it's critical to identify the pieces that can be rescued if the project is restarted, which is a very likely outcome. Starting on the positive side also helps everyone's morale, which can be a big help at this difficult juncture.

Look at the three Ps in turn: people, process, and politics. On the people side: are there key individuals who are contributing to the success of the project despite the challenges? Identify both technical team members and business team members. On the process side: although the process has failed to produce a winning result, are there aspects of it that are working? For instance, is the communication process working? Is testing producing good results? Are end-users appropriately engaged in the project? Is the coordination between the technical team and the business team working properly?

On the political side, the critical question to ask is: is the executive sponsor active in the project and providing tangible support? If the answer is yes, then it's almost always worthwhile to press forward with the project, even if difficult choices must be made. There may be other positive political signs, in particular if the business functions are behind the project. Can end-users perceive benefits from the tool, even if only in a few areas, based on the functionality released so far? If the end-users demonstrate any level of enthusiasm during the assessment session, the project's prospects are good even if the outlook seems dim right now.

Finally, look at the tool. Tools typically get blamed for any and all problems, and certainly CRM tools are far from perfect, but try hard to isolate the features of the tool that are working well. After all, the team selected it in the first place, so there must have been something attractive about it.
Especially at the beginning of the session, conduct the assessment in brainstorming mode, accepting all suggestions and delaying discussions until everyone has had a chance to participate. If you allow participants to discuss the suggestions as they are made, you run the real risk of shutting off the dialog.

The Bad

Once you have a list of positives, move on to the negatives, looking again at the three Ps (people, process, and politics) and at the tool.

Are the right people on the project? Are there any areas of weakness? Consider weaknesses at all levels, from the project manager to the business owners and super-users, to the technical team, and to the integrator. People issues are difficult to confront in a group setting, so expect roundabout answers: for example, a suggestion to add someone with a specialized skill rather than a condemnation of a particular individual. Carefully note the nuances. On the other hand, if the level of interaction gets aggressive, you will know that the team is not communicating well, and that problem needs to go on your list of issues. People problems can also arise within the user community. Are the business owners or the super-users not participating appropriately? Are end-users refusing to use the tool? Can you determine whether the refusal is rooted in (lack of) functionality, communication issues, or motivational issues?

Moving on to process issues, it's common to find problems with the way the project itself is being conducted. Assessments often uncover that the requirements definition was done too hastily, that the project plan was overly ambitious, that users were not involved enough throughout the project (or not involved early enough), and that testing and QA are insufficient. Besides determining which processes have been weak, probe carefully on when the project went wrong, not just how, because if the project is restarted you need to know at what point to restart it. If it's just a matter of buggy programming, you can restart the coding phase (perhaps with different programmers!). But if the problem occurred at the requirements definition phase, you need to go back to that point, which means a great deal of additional time and resources will be needed.
Process issues can also come from the business side. Are the processes to be modeled in the tool inefficient, ineffective, or simply the wrong ones? If the project included formalizing processes for the first time, it's quite common to find that the newly formalized processes are "wrong" and cannot be used as they are: they looked fine on paper, but once automated in the tool the users realized they were simply not the right processes. If that's the case, go back to the process definition phase and use more effective validation techniques. The tool is often blamed when the wrong process was automated, so make sure that users can distinguish between a bad automation of a good process (a tool or customization problem) and a correct automation of a bad process (a process problem).

Process issues can also arise if the business model is changing, prompting process changes that are incompatible with the tool. If that's the case, the current configuration of the tool (assuming you have started customizing it) may not work anymore, but this doesn't automatically mean that a new tool is required. What you need to do is re-evaluate the tool against the new business model and processes before making a decision. You may be pleasantly surprised to find that the tool can be adapted to the changes, although the re-evaluation will take time and resources.

It's always tricky to probe political issues in a group meeting, so limit yourself to one simple test: is the executive sponsor attending? If not, you can politely thank everyone and spend your energy on something else, since the project cannot succeed without the sponsor. Political problems often take the form of active or passive resistance to the project from the business functions or from the IT group. What is the basis for the resistance?
A strong executive sponsor should be able to help resolve the problems, but only if he or she is aware of them in the first place. An out-of-touch executive sponsor points to either a weak project manager or a power-challenged sponsor, both issues that would need to be addressed if a rebirth is to be successful. Politics often get blamed for the lack of an appropriate budget, and indeed negative politics can derail a well-planned budget. However, do not allow politics (or the executive sponsor) to be blamed for failing to get large cost overruns approved. If project costs are out of control, it's the performance of the project team that must be scrutinized, both at the process level and at the individual level. Most likely you have problems with people or processes (or both) rather than political opposition.

Also include careful consideration of tool weaknesses in the assessment. Almost all projects run into some kind of tool problem, so the point is not so much to make a list of the tool problems you encountered, or even to consider its length, but rather to qualify the impact of the tool issues on the project. Tool issues need not be an automatic death sentence for the project. In my experience, the tool issues that can completely derail a project are either disastrous performance issues or critical functionality failures that stand in the way of meeting the basic project goals. If a tool just won't scale in your environment and the vendor cannot deliver or suggest appropriate fixes, the future is very grim for that tool: chances are that users just won't use it consistently. And if you are faced with functionality gaps that make it impossible to achieve critical project goals, continuing the project will be throwing good money after bad. I find it very helpful to invite the tool vendor to confirm the critical failures identified by the project team.
Vendors tend to be very optimistic when it comes to fixing even large problems, so if a vendor tells you something cannot be overcome, you can be sure there is no hope. Carefully evaluate their recommendations for fixes.

The assessment session will last several hours. Consider it a good investment whether or not you conclude that the project can be saved. If the project can be saved, you should be able to use each and every suggestion made in the assessment to make the second half of the project less painful and more successful. If the decision is to halt the project, the assessment brings certainty that it's the right decision, and the lessons can be carried over to future projects.

Can the Project Be Saved?

Many so-called failing projects are failing just a little bit: perhaps they were a little too ambitious, the team lacked some discipline, or some technical problems were not properly anticipated. Such projects should be saved and require only modest tweaking to turn into successes. But even a seriously failing CRM project can be saved as long as it has just two (admittedly demanding) characteristics:

  • A committed, appropriately powerful executive sponsor.
  • A tool that offers a reasonable fit with no critical gaps. This means that the tool can support the business processes as they are currently defined (which could be different from the ones that were used at the beginning of the project), that critical features are functioning as needed, and that the performance meets your requirements.

You should be able to confirm both items right in the assessment session, although making a final decision on the tool may require some additional detective work with the technical team and with the vendor. As mentioned earlier, vendors are known to be optimistic in their assessments of how likely and how quickly problems can be repaired or alleviated, so when they recommend fixes you must insist on a firm schedule. Table any further work on your part until the tool issues are resolved to your satisfaction, and maintain assertive, frequent communication with the vendor to ensure your issues are given priority.

If the vendor says the issues cannot be fixed, take that verdict at face value and negotiate a reasonable settlement. Contracts rarely have escape clauses for technical failures, but you should be able to get some concessions, even though litigation is not likely to bring any positive outcome. Vendors are very concerned about bad press, so exert some appropriate pressure. If the tool turns out to be a poor fit, the only alternative is to go back to square one: tool selection. You may be able to proceed a bit faster since you can reuse your requirements list, but it is essential to understand why that list yielded a poor tool choice in the first place; use the assessment to probe that point.

Assuming that you have both an executive sponsor and a reasonably well-fitted tool, you can proceed to step 2.