Success Factors

While there are many techniques that deliver a successful CRM project, they derive from just a few general principles or success factors: "Think small, dream big" (have a general plan, but take small steps), "stay in the box" (don't overreach), "get users involved" (user buy-in is key), and "measure success" (set a solid goal and prove you are succeeding).

Think Small, Dream Big

While some see CRM projects as multi-year projects that affect the entire organization and require massive change both to infrastructure and process, I believe such mega-projects are very risky and that many of the public failures we hear about are caused by overlong, over-complex projects. Huge projects have many weaknesses.

  • Changing requirements. The competitive environment in which your organization exists changes quickly, so that requirements defined today are likely to be inadequate two years from now, even if they were carefully crafted. Requirements defined for 90-day deliverables are much more likely to remain valid by the time the project rolls out.
  • Scope creep. Scope creep is the phenomenon through which additional features and requirements are added to a project after the initial requirements have been defined and agreed upon. It's pretty easy to banish scope creep from a short project, both because there's little time to generate additional requirements and also because there's simply no time to fit them in. With massive projects, on the other hand, change requests are often required since conditions do change over longer periods of time. And with an extended schedule it's tempting to accommodate the changes, with the hope that the time can be made up. So the already massive scope of long projects tends to get more massive over time.
  • Sponsorship stability. CRM projects need a strong sponsor at the executive level. The longer the project, the more likely it is that the sponsor will lose interest, change roles, or leave the company, dooming the project's chances.
  • Technical complexity. While short-term projects can be complex, they usually don't get as complex as longer ones. More importantly, they provide a discipline that encourages tackling the technical challenges one at a time, along with short-term milestones to assess success and to refocus the strategy as needed.
  • Technical obsolescence. Over the course of a long project, the technical components do not sit still. For instance, over a two-year project the CRM software itself is likely to go through eight or so maintenance releases and one or two major upgrades. And this is only the beginning: there will be upgrades and changes in the underlying network, servers, desktop equipment, browser software, wireless devices, and all the other technical pieces of the CRM project. Don't think you can simply ignore the upgrades. Vendors will stop supporting the older products after several years, so you will be forced to upgrade multiple times throughout the project. Ugly!
  • Project management complexity. The amount of coordination required by massive projects is staggering, and the complexity increases by an order of magnitude with the length of the timeline and the size of the staff involved. The more complex the project, the more likely breakdowns are to develop and remain undetected for long periods of time. Smaller-scope projects are incomparably easier to manage.
  • Cheerleader fatigue. Enthusiasm and optimism are important components of successful projects. It's much harder to sustain them for two years as compared to three months. If the scope of a project requires a multi-year schedule, proceeding with a series of three-month mini-projects ensures tangible positive outcomes each quarter that should help recharge everyone's batteries and sustain the team throughout the extended schedule.

Huge projects are risky. This is not to say that long-term plans are not useful. Without a long-term plan the organization can make shortsighted mistakes. For instance, a single-minded focus on solving the issue of handling incoming customer e-mail may bring a great e-mail processing system that fails to connect with any of the other systems in place, creating an unhelpful island of information. With a long-term plan, the organization can select a tool that has an integration component, even if the integration doesn't happen for a while. Take the time to create a high-level plan (the "dream") to coordinate the various CRM efforts. At the same time, avoid analysis paralysis: it does not make sense to create a detailed five-year plan since one can't even begin to imagine where CRM vendors will be then. Don't wait for the perfect tool to plug into your perfect dream.

Once you have a long-term strategy in place, think small when it comes time to define an actual project. It's much easier to shepherd small projects to a successful conclusion, and over time they are more effective than large projects. They are much easier to manage: there are fewer people involved, so there is less potential for communication breakdowns, and there's also less time to mess up. It's also a lot easier to adapt to changing circumstances when each step is small.

Small projects make it easier to meet the users' expectations. It's easier to define realistic expectations on small projects to begin with, and because of the short duration, expectations don't have much time to inflate or change significantly. As a consequence, small-scope projects are less likely to fail than larger projects. When something goes wrong the feedback comes quickly, before much damage is done. It's much easier to fix a short-term deliverable than one that was months in the making.

Small-scope projects do have drawbacks, although I think they are more than offset by the advantages described above.
Isn't it more expensive to work with multiple small deliverables compared to one big deliverable? It's true that there is a fixed minimum overhead associated with any deliverable, so a project with many small steps carries more overhead than one with fewer steps. However, large-scope projects have a much higher communication and coordination burden, so the difference is not that great in the end. Note that this optimistic assessment assumes the all-important point that you have created an overall strategy and are deploying against it. If you are taking small steps in an uncoordinated, haphazard way, you run the risk of having to redo some of the steps when you realize that they don't fit well together, and that would be a very great expense indeed.

Does it take more time to work through small steps? In a way, yes. If you could do everything perfectly with one large-scope project, you would probably do it faster than with multiple small steps. In reality, the likelihood that a single large-scope project will be done perfectly is very low to nil, even with strong project management, because what makes a CRM system really work is the tuning that can only happen once the users are actually using it. It's more realistic to implement small steps quickly and go through several tuning exercises than to implement one big step that needs no tuning.

And now for the big question: can small steps really be used for large organizations with complex needs, or is this a technique suitable only for small-scale deployments? Certainly, if your needs are complex you will find that even your small steps are bigger than those of a small organization with simple needs. However, I would argue that it's especially important for large deployments to be structured as smaller steps, each providing a complete solution to a particular issue, and each allowing a realistic validation by the end-users before proceeding.
Only the initial step in the project should be significantly longer for large deployments to allow for the development of the overall data model (what data is tracked in the system and how it is organized) and of the system architecture. So go ahead and have big (coherent) dreams, but implement them in small steps.

Stay In the Box

There's the old story of the people who want to repaint their kitchen and fall prey to the "while we are at it" syndrome:

  • While we are at it (repainting), we should redo the floor.
  • While we are at it (redoing the floor), we should change the stove.
  • While we are at it (changing the stove), we should get all new appliances.
  • While we are at it (getting new appliances), we should redo the cabinets.
  • While we are at it (redoing the cabinets), we should get a bigger window.

After many months and many, many times the cost of a fresh coat of paint, they get a new kitchen, the walls of which we can only hope are painted the right color. And they may keep going and decide to push a wall out (add two months), creating their own monster project. The same thing can happen to CRM projects. Here are some typical examples of the "while we are at it" syndrome.

  • While we are at it, we should customize the screens so they all conform to our internal web format standards.
  • While we are at it, we should extend the rollout to the channel sales group, not just direct sales. ("Rollout" or "deployment" is the last phase of the implementation during which the system is made available to end-users as their primary work environment as opposed to a test situation.)
  • While we are at it, we should integrate the CRM tool with the ACD (automatic call distributor) so we can do intelligent routing and automatic dialing.
  • While we are at it, we should review all the documents in the knowledge base and not roll out the system until the review is absolutely complete.
  • While we are at it, we need to train all staffers on a new sales methodology, or advanced support skills.

It's important to address new issues and ideas encountered during the project, but it's usually best to keep pretty much within the scope defined at the beginning of the project and to defer new items to a second phase. Let's analyze the examples we just saw.

  • On the issue of customizing screens to conform to a corporate standard, the answer is a clear "yes" for the customer interface. Customers should not be aware of jarring differences between CRM portal pages and other pages on your web site. This requirement should be identified right at the beginning of the project, and it should be relatively easy to achieve, at least with modern CRM tools that allow applying templates to the portal screens.

    It's a different story for internal users. While consistency is visually pleasing (your corporate standards are visually pleasing, right?), and while consistency makes for greater efficiency, changing dozens of screens to conform to an ever-evolving corporate standard is a waste of resources in my mind. By all means make the easy changes such as including your logo on the page or matching your color scheme, but focus your precious resources on making sure that the screens are efficient and uncluttered rather than matching the corporate standard. Find other ways to make friends with your Corporate Identity department.

  • Extending a rollout to another sales organization (from direct sales to channel sales) will probably require revising the workflow and making changes to some data fields and screens. Does it make sense to include the channel sales organization in the planning so there can be one combined rollout for both sales organizations? If the entire company is moving from direct sales to channel sales, then yes, you must change your plan. If that's the case, we are not talking about a "while we are at it" issue, but about a "just found out I must" issue, which has a completely different urgency.

    Barring such a drastic change, I recommend completing the rollout to the direct sales force as planned and considering the needs of the channel sales force in a second phase. Making the system usable for channel sales may (or may not) require changes to the data model as well as to the app itself, but such changes, and any impact they have on the direct sales team, can be handled in a later phase. Even if it's known from the start that both sales organizations need the CRM tool, it may still be a good strategy to tackle one organization, then the other. In that case I would definitely start by defining a data model that meets the needs of both organizations.

  • Extending a rollout to include an integration with another tool, in the example the ACD, is usually a kiss of death. Integrations are costly in both time and resources, they add significant risk to the project, and if you integrate the CRM tool with other tools before the end-users have had a chance to try it hands-on, you may well integrate the wrong functions entirely. Better to roll out the CRM tool without the integration first, work out the kinks, and then do the integration when it's very clear what functionality is needed.

    There are good reasons to include an integration in a CRM project if the decision is made upfront. One such situation is if the integration is the one tangible benefit of the project. If you are changing the CRM system specifically to allow integration with the phone system and there is no other major benefit to the project, then the integration is the project. Another reason to include an integration in a project is funding. It may be a lot easier to go to the well once, albeit with a much larger request. Even in this case you should seriously consider deploying in two phases, the first one without the integration and the second one being the integration, once you have some production experience. The CFO should be amenable to a two-step approach to minimize risk.

  • Delaying the rollout of a tool because of concerns about the quality of the documents in the knowledge base is almost always a mistake. The tool by itself cannot overcome weaknesses in your document creation and publication process, although it can be a big help in making the process function more efficiently.

    If the documents you are concerned about have been accessible all along in their imperfect and unreviewed state, why not roll out with them? In parallel, accelerate the review process, perhaps by giving the reviewers a clear incentive (read: a bonus) to complete the review quickly. If, on the other hand, the concern is that unreviewed documents will be exposed to customers who did not have access to them before, then you have a problem, but one that should be solved by withholding (through the tool's permission scheme) the potentially offending documents until they are reviewed. As before, expedite the review process through judicious incentives to avoid having to roll out with an empty knowledge base. Finally, seriously consider putting all the unreviewed documents in the big round file in the sky and starting over. I once spent close to a year paying overtime compensation to support staffers to review old documents we could not bear to throw away. In the end we salvaged only a handful of documents, definitely not our money's worth. Your experience is likely to be similar in fast-changing environments where any document older than two years is likely to be truly obsolete.

  • Is it wise to delay a rollout to train the users on new methodology? It's often the case that a CRM project uncovers gaps in the underlying business skills of the staffers who will use the tool. If the issue is that the users are not as competent as first thought on the existing sales methodology or the existing troubleshooting protocol, I would roll out the tool anyway and conduct the training when time and resources become available. The tool is not the solution to the entire issue of customer management so it should not be held hostage to the methodology training.

    On the other hand, if learning the new methodology is required to use the tool, then the training should be a part of the rollout, and that should have been identified early in the planning process. You should hold the methodology training before the rollout (and remember the problem for the post-mortem analysis). If you hold training on the methodology separate from the tool training, use the sessions to reinforce each other. If the methodology training comes first then the tool training should showcase how the tool helps the users follow the steps in the methodology. If the tool training comes first then the methodology training should show how the tool can help put it in practice.

These examples illustrate situations where new requirements arise during the actual project, but the same "stay in the box" rule applies to defining the initial requirements. For instance, if your CRM project is focused on adding an online sales tool but you also find that it would be good to improve the sales methodology, the marketing materials, and the color of the Palm Pilots, I very much recommend limiting the CRM project to implementing the online sales tool, at least if the sales methodology is clearly identified and agreed to before selecting the tool. Once the tool is launched (or in parallel, if you have enough resources), attack the other issues.

Don't torture the tool. If you're trying to do something that's simply alien to the way the tool was designed, the results will be 1) expensive and 2) never quite right. By conducting a reasonable selection process, as described in the upcoming chapters, you should end up with a tool that does most of what you need. Bring your current process with you when you evaluate systems and evaluate the demos through the lens of that process. Aim for a good overall fit between your process and the tool (without being obsessed by a complete, 100% fit: perfection is not of this world).

In the same vein, if your process does not fit well with the tool, consider changing your process rather than the tool. This is particularly true if your process doesn't seem to fit with any tool that you see. Yes, it's possible that you have discovered a secret way to do things that's better than everyone else's. On the other hand, you could be driving the wrong way on the freeway, which is why everyone else is going the other way. If all the tools do things a certain way, chances are it is a best practice and you should simply adopt it.

Keep customizations to less than 10%. This is an arbitrary number, for sure, but it illustrates that you should shop for a good overall fit and keep customizations to a minimum.
Customizations are expensive and do not port well to new releases, so they are very expensive in the long run. As long as you have a good overall fit between your process and the tool, first consider adapting your processes to the tool rather than automatically customizing the tool.

Get Users Involved

One of the key tests of the success of a CRM initiative is whether the users actually use the system. The problem is that, even when the new tool has clear advantages over the old one, it's difficult to switch to a new tool where familiar things are no longer familiar and even routine operations may require checking the handy cheat-sheet provided during the training sessions. By getting users involved early, you have a chance to build up enthusiasm and support for the benefits of the system that will help overcome the barriers to adoption.

Take care to avoid overselling the system. No, the new tool will probably not double the users' productivity (let's face it, it will probably lower productivity in the short term as users get used to it). No, the new tool will not allow 50% of customer requests to be fulfilled automatically (at least if the requests you get are reasonably complex). And no, the tool will not contain all the data the users ever need to do their job. Instead of overreaching claims, present a nuanced picture of realistic benefits: users will be able to see their pipeline at the touch of a button; they will be able to automatically attach relevant documents to customer e-mails; and so on.

Besides the advantages of psychological preparation, the other benefit of involving users early is that they are the ones who know how to do the job and what's needed to do it. End-users—not their managers, not some mythical "super-user," and certainly not ersatz process managers—need to be a part of the entire project cycle, from selection to implementation, to keep everyone honest. Certainly it's neither feasible nor desirable to involve everyone in each step. Have you ever tried to hold a demo for 500 people? And what about stopping all sales activities for two days to debate the tool workflow? You can't get everyone involved every step of the way, but you do need to involve actual end-users, and ideally some of the best-performing ones, at each crucial juncture.
You may find that end-users are not very interested in the project because, especially if they are top performers, they are busy selling to customers or helping them resolve service issues. Find the right mix of stick and carrot to get them to participate. In particular, understand that their time is valuable and make sure they are asked to participate only in high-value activities. For instance, if a vendor is doing a daylong dog-and-pony show, end-user input may be best spent on the demo portion. Even then, their participation may be required only if the tool is a real contender (did the CIO nix it because of vendor viability concerns?) and only if the demo can be interactive (we'll come back to techniques to get the vendor to deliver the information that you want in , "Shopping for a CRM System").

Another characteristic of top performers is that they are usually good readers. Don't spend 30 minutes giving them "background information" when they could read it in ten minutes at their convenience. They also like to be kept informed (again, not in a long and tedious meeting). Why was the tool they liked so much not selected? Why are we behind schedule and what is being done to catch up? They'll want to know.

In addition to top performers, it's well worthwhile to involve the group's informal leaders, who may not be top performers themselves but who have a lot of influence on the group and command respect and admiration. It's a challenge because they tend to be busy, but they will spread the word on the project very well, better than top performers, who may be more of the loner type.

There is tremendous value in getting individual contributors involved, not just managers. The nature of their jobs is different, and it's quite common that managers don't have a very good sense of what the individual contributors in their groups do all day.
If there are subgroups that are likely to use the tool differently (say, focused on different markets or different products), make sure you get representatives from all groups, at least until you prove to yourself that the requirements are similar. We'll come back to this topic in more detail when we discuss the project team in .

It's a mistake to assume that IT staffers or even process managers can substitute for actual end-users. Even when these individuals are well aware of the end-users' requirements, they simply cannot have the same level of awareness of what really matters on a daily basis that the actual end-users have.

Measure Success

Human beings like to have tangible feedback for their efforts. This is particularly true of long-term, large-scope projects where it's hard to tell whether one is really making a difference. Setting up a system to capture and communicate progress on the project will help sustain the enthusiasm of the participants. At the same time, since there is always a contingent of skeptics attached to any CRM project, the same information can be used to contain their criticisms, if not to generate their enthusiasm. Metrics are typically the responsibility of the project manager, although on larger projects this can be delegated. How does one measure the success of a CRM project? We'll see detailed metrics suggestions in , "Measuring Success," but here are some high-level points.

Start Early

Establishing metrics that are meaningful and acceptable to all parties before the start of the project makes them more credible in the long run, avoiding any suspicion of manipulation. It's fine to set only high-level targets when you start. You can then assign specific quantitative targets only when the implementation starts, once you have a better idea of what the tool will really be able to do.

Be Simple

Using 12 different calculations, or anything that requires knowledge of advanced statistics to understand, is counterproductive. Stick with three to five high-level measurements, each of them a simple arithmetic computation (you can use averages, but standard deviations are probably not required to make your point).
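To make "be simple" concrete, here is a small sketch of what three such high-level measurements might look like. All names and figures are invented for illustration (no real CRM product or data is assumed); the point is that each metric is a plain count, ratio, or average.

```python
# Hypothetical example: three simple, high-level CRM metrics.
# All figures are invented for illustration; only plain arithmetic
# (counts, ratios, averages) is used -- no advanced statistics.

cases_opened = [120, 135, 128, 140]      # cases opened per week
cases_closed = [115, 130, 131, 138]      # cases closed per week
resolution_days = [2.5, 2.1, 2.4, 1.9]   # avg days to resolve, per week

# Metric 1: average weekly case volume (a simple average).
avg_opened = sum(cases_opened) / len(cases_opened)

# Metric 2: close rate -- cases closed as a fraction of cases opened.
close_rate = sum(cases_closed) / sum(cases_opened)

# Metric 3: average resolution time across the period.
avg_resolution = sum(resolution_days) / len(resolution_days)

print(f"Avg weekly cases opened: {avg_opened:.1f}")
print(f"Close rate: {close_rate:.0%}")
print(f"Avg resolution time: {avg_resolution:.1f} days")
```

Anyone on the project can recompute these numbers by hand, which is exactly what makes them credible.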

Measure Results, not just Activities

Having reviewed 10 vendors (an activity) is nowhere near as important as having narrowed the list of candidates down to two (a result). During the implementation phase, having completed 80 test items (an activity) is an interesting achievement, but even better is having passed 78 of the test items (a result). In the same vein, measuring projects by elapsed time is nowhere near as useful as using milestones, which, if well defined, refer to actual results. Measuring for results implies the use of targets—i.e., how much did we accomplish versus what we were planning. Make sure that the results you measure are meaningful both for customers and for internal users.

Here are some examples of long-term (not project-oriented) business goals. For a marketing organization:

  • Reaching 2000 customers within 2 months is an activity goal.
  • Creating 20 qualified leads within 2 months is a better goal.
  • Creating leads that generate sales of $X within 2 months is a good result goal.

For a sales organization:

  • Entering all pending deals within 24 hours is an activity goal.
  • Creating complete forecasts daily is a better goal.
  • Increasing forecast accuracy to within 10% in the last week of the quarter is a good result goal.

For a support organization:

  • Entering 2000 documents into the knowledge base is an activity goal.
  • Increasing knowledge base usage by 50% is a better goal.
  • Decreasing the case/customer ratio by 20% is a good result goal.
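To make the results-versus-targets distinction concrete, here is a small sketch that computes two of the result goals above, forecast accuracy and the decrease in the case/customer ratio, against their targets. All figures are invented for illustration.

```python
# Hypothetical figures; the point is measuring results against targets.

# Sales goal: forecast accuracy within 10% in the last week of the quarter.
forecast = 1_000_000          # forecasted revenue ($)
actual = 1_080_000            # actual revenue ($)
forecast_error = abs(actual - forecast) / actual
print(f"Forecast error: {forecast_error:.0%}")        # target: within 10%
print("Sales target met:", forecast_error <= 0.10)

# Support goal: decrease the case/customer ratio by 20%.
baseline_ratio = 2.4   # cases per customer before the rollout
current_ratio = 1.8    # cases per customer after the rollout
decrease = (baseline_ratio - current_ratio) / baseline_ratio
print(f"Case/customer ratio decrease: {decrease:.0%}")  # target: 20%
print("Support target met:", decrease >= 0.20)
```

Notice that each computation ends with an explicit comparison against a pre-agreed target, which is what turns a raw measurement into a statement of success or failure.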

Report Good and Bad

One eerie characteristic of failing CRM projects is the incredible disconnect between the status reports and reality. While the status reports mention only vaguely worded, minor delays and difficulties, the implementation team is overwhelmed with problems, its members either screaming at each other or no longer communicating at all, and so frustrated they can barely drag themselves to the office in the morning. The disconnect can be sustained for amazingly long periods in larger organizations with many layers and many people involved; thankfully, that's not usually the case in smaller organizations.

To gain credibility, share both good and bad news. This includes areas of potential over-investment. For instance, if you planned for a testing period of four weeks and you're done in two, could it be that the planners were sandbagging? Is it possible that the testing that was accomplished was insufficient? Meeting the goal by a mile should raise questions, not only congratulations.

Be Transparent

Status reports are meant to be shared widely. A wide distribution is a great incentive to create accurate reports, as well as a vehicle to get them corrected quickly when needed. In particular, special reports to the executives can breed confusion and misinformation. If they are required because the regular status reports are too long, then they should also be shared downwards to benefit from the effects of a wide distribution.

While we will see many more practical pieces of advice as we work our way through the tutorial, the four success factors ("think small, dream big," "stay in the box," "get users involved," and "measure success") are our guides to CRM success.