Creating the Short List

Having asked the right questions and seen the right demos, you have the tools to create the short list. The short list should be really short, with only two or three vendors, and should consist only of tools you are happy with from a technical and functional perspective. In my experience you are often truly happy with only one vendor; even so, push the team to keep a backup so that you are not forced into an unhealthy financial surrender to a single vendor. On the other hand, if lots of vendors still meet your needs at this point, look a little harder at which ones are the best fit from a functional perspective.

It's a Gradual Process

Creating a short list is rarely a big-bang event. Gradually, as the evaluation unfolds, candidates may fall by the wayside, sometimes as early as the very first meeting when the key requirements are evaluated: too expensive, or not robust enough, and off they go. Don't be afraid to disqualify candidates as you discover major issues with them so you can focus on more promising ones. You can always go back to them later if the front-runners fail to fulfill other requirements.

Scoring the Requirements

Since there are so many factors in selecting a CRM tool, it's useful to have a logical process for analyzing your findings. Whether you are using an RFP process or a streamlined one, it helps to create some kind of rating matrix for the various candidates to organize the scoring.

Use the Requirements List

If you did your homework on the requirements list, you already have the essential element of a rating matrix: the requirements themselves. It's fine to add, delete, and change a few requirements as the evaluation progresses, but if you find yourself making significant changes you should go back and create a long list again.

Define Weights

Not all requirements are created equal, so it makes sense to assign weights to the various elements of the requirements list. Rather than spending hours on very precise weights, I suggest a simple 1/2/3 scale: start by giving each element a weight of 1 (normal, the lowest weight) and pick out the key requirements for a higher weight. For instance, the must-have requirements can have a weight of 3 and all others a weight of 1. Don't spend too much time fiddling with the weights. I've found that most evaluations end with remarkably few strong candidates, so the decision hinges on strategic considerations rather than a few points here and there, weights or no weights.
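To make the 1/2/3 scheme concrete, here is a minimal sketch; the requirement names are hypothetical, and the point is simply that everything starts at the default weight of 1 and only the must-haves are raised to 3.

    # Hypothetical requirements with the simple 1/2/3 weighting scheme:
    # everything starts at the default weight of 1, and only the
    # must-have items are raised to 3.
    weights = {
        "Supports 500 distributed users": 3,      # must-have
        "Integrates with the billing system": 3,  # must-have
        "Configurable dashboards": 1,
        "Multi-language user interface": 1,
    }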

Score Each Item

Here's the fun part. Go through the entire requirements list and score each vendor still in competition on each requirement. This needs to be done whether or not you use an RFP (in other words, don't just take the vendor's word that it meets a requirement). It's useful to compare vendors side by side to develop a robust scoring method. For instance, if you are scoring scalability and you need the tool to support 500 distributed users, you may want to give 10 points to vendors who have multiple production installations with more than 500 distributed users, versus 5 to vendors who have only one such installation. Scoring is often an iterative process. Staying with the scalability example, you may realize as you are scoring that you did not confirm the exact number of users for each reference and will have to go back to the references before completing the scoring. This is completely normal and should be planned for in the project schedule. I like to start the scoring process relatively early so I can spot and correct problems before the scheduled end of the selection phase.
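As a sketch of the scalability example above, a scoring rule might look like the following; the function name is made up, and the thresholds (10 points for multiple qualifying references, 5 for a single one) simply mirror the numbers in the text rather than being a recommendation.

    def score_scalability(reference_user_counts: list[int]) -> int:
        """Score a vendor on the 500-distributed-user requirement.

        reference_user_counts holds the distributed-user counts reported
        by the vendor's production references: 10 points for multiple
        references above 500 users, 5 for exactly one, 0 otherwise.
        """
        qualifying = [n for n in reference_user_counts if n > 500]
        if len(qualifying) >= 2:
            return 10
        if len(qualifying) == 1:
            return 5
        return 0

    print(score_scalability([1200, 650, 300]))  # 10
    print(score_scalability([800]))             # 5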

Add 'em Up

Unless you are spreadsheet-challenged, adding up the scores should be pretty easy. Once it's done, compare the scores. Typically the candidates the team thinks are best come out with the best scores (although not always in the order one would expect, as we will discuss below) and the others come out significantly behind. If you find large surprises, such as an underdog coming up with great scores, go back and analyze the areas that made the difference. It could be that the weights for the scoring system are not defined appropriately, in which case you can go back and fix them. Another reason for surprises is that the team's impression of a vendor is strongly colored, positively or negatively, by the relationship with the sales team. If that's the case, remember that the sales team will fade away once the purchase is completed. If the tool is poor you will be stuck with a poor tool anyway. If the tool is great but the sales team is difficult to deal with, make an effort to work with other individuals on the vendor's side, in particular the post-sales team: are they efficient and friendly? In the long term, that matters more than the performance of the sales team.
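Coming back to the tally itself: it is just a weighted sum per vendor, which any spreadsheet handles easily. Here is the same calculation as a minimal sketch, using the hypothetical weights from the earlier example and made-up vendor names and scores.

    # Same hypothetical 1/2/3 weights as in the earlier sketch.
    weights = {
        "Supports 500 distributed users": 3,
        "Integrates with the billing system": 3,
        "Configurable dashboards": 1,
        "Multi-language user interface": 1,
    }

    # Made-up per-requirement scores for the two remaining vendors.
    scores = {
        "Vendor A": {"Supports 500 distributed users": 10,
                     "Integrates with the billing system": 5,
                     "Configurable dashboards": 8,
                     "Multi-language user interface": 6},
        "Vendor B": {"Supports 500 distributed users": 5,
                     "Integrates with the billing system": 9,
                     "Configurable dashboards": 7,
                     "Multi-language user interface": 9},
    }

    # Weighted total per vendor: sum of weight x score across requirements.
    totals = {
        vendor: sum(weights[req] * score for req, score in vendor_scores.items())
        for vendor, vendor_scores in scores.items()
    }

    for vendor, total in sorted(totals.items(), key=lambda x: x[1], reverse=True):
        print(f"{vendor}: {total}")
    # Vendor A: 59
    # Vendor B: 58

Notice how close the made-up totals are: as mentioned above, that's a common outcome, and it's why the final call usually rests on strategic considerations rather than the weights.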

On or Off the Short List?

The whole point of scoring is to help you make a decision, but scores cannot and should not make the decision for you. Compare the scores, but also trust your intuition: if one candidate scores higher but the team truly likes another one better, it may well be that the preferred candidate is the better one for you. The scoring sheet is only a tool, and it may not be perfect, usually because of the choice of weights. If a candidate truly feels better than another, it's probably the better choice. Thank the vendors that did not make it to the short list (making sure you keep at least one backup to your preferred candidate). There's no need for them to expend more energy at this point, or for you to put effort into maintaining a relationship with them. As you say "no thanks" you may be surprised to receive some interesting financial proposals from the rejected vendors. If the only reason for rejecting a particular vendor was that the expected price tag far exceeded your budget and you are now offered something reasonable, by all means reverse your decision.



   