The purpose of scoring an RFP is to identify the supplier that most closely matches the buyer's needs. So the first step is to gather an accurate representation of the buyer's requirements - their evaluation criteria. This stage of the RFP process is addressed elsewhere on this site, in an article on writing good RFP questions and in FAQ articles on creating an RFP questionnaire.
Once the evaluation criteria have been defined, information is gathered from the suppliers. In SupplierSelect this means drafting an RFP questionnaire and issuing it online for suppliers to complete. Once the supplier data has been collated, it's time to score the RFP.
An RFP scoring methodology has several objectives to balance; the examples below work through them, from a simple scoring grid to hierarchical weightings.
The spreadsheet embedded below shows a very simple example scoring grid. The example RFP is to assess and select a vehicle. There are three evaluation criteria: Fuel Economy, Top Speed and Cost of Maintenance.
The simplest RFP scoring method is to assign a score to each supplier for each criterion:
This approach assumes the same scoring scale for each question - in this case 1 to 10.
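This simple method can be sketched in a few lines of Python. The supplier names and score values below are illustrative, not taken from the spreadsheet:

```python
# Simple unweighted scoring: every criterion uses the same 1-10 scale.
# Supplier names and scores are illustrative examples.
scores = {
    "Supplier A": {"Fuel Economy": 7, "Top Speed": 9, "Cost of Maintenance": 4},
    "Supplier B": {"Fuel Economy": 8, "Top Speed": 5, "Cost of Maintenance": 8},
}

# Each criterion contributes equally to the total.
totals = {supplier: sum(criteria.values()) for supplier, criteria in scores.items()}
print(totals)  # {'Supplier A': 20, 'Supplier B': 21}
```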
The problem with this method is that all criteria contribute equally to the total score, implying they are all of equal importance. In reality, this is rarely the case. Some evaluation criteria deserve a higher weighting than others, and this should be reflected in the scoring.
A quick way to reflect the varying importance of criteria is to score each question on a different scale, thus combining the supplier score and the buyer's weight. For an example of this approach, click the second tab in the spreadsheet, "Combined Weights + Scores".
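A sketch of this combined approach, assuming illustrative maximum scales and scores (not the spreadsheet's values): each criterion's maximum encodes the buyer's weighting, and scores are awarded directly on that scale.

```python
# Combined weights and scores: each criterion has its own maximum scale,
# so the scale itself encodes the buyer's weighting (illustrative values).
max_scale = {"Fuel Economy": 20, "Top Speed": 5, "Cost of Maintenance": 10}

# Scores are awarded directly on each criterion's own scale.
scores = {"Fuel Economy": 14, "Top Speed": 4, "Cost of Maintenance": 7}

total = sum(scores.values())
maximum = sum(max_scale.values())
print(total, "out of", maximum)  # 25 out of 35
```

Note the drawback the next paragraph describes: evaluators must remember a different maximum for every question.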
Whilst this RFP scoring method provides considerably greater insight into our evaluation, it is still limiting. For one thing, scores have to be allocated on a different scale for each question, which is confusing and error-prone. Secondly, the approach limits our flexibility when different parties, with different priorities, are involved in the evaluation and decision. In this case we want to be able to capture these varying priorities as distinct sets of weightings so that we can compare the resulting final scores.
To address these problems, weightings can be assigned separately from raw RFP scores (see the third spreadsheet tab, "Distinct Weights"). This approach uses a standard scoring scale for each criterion (e.g. 1 through 10), and then assigns a distinct weight value to each. The total score is calculated as SUM(score × weight) across all criteria.
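The SUM(score × weight) calculation is straightforward to express in code. Weights and scores here are illustrative:

```python
# Distinct weights: scores use a uniform 1-10 scale; weights are kept separate.
# Total = SUM(score x weight) over all criteria (illustrative values).
weights = {"Fuel Economy": 4, "Top Speed": 1, "Cost of Maintenance": 2}
scores = {"Fuel Economy": 7, "Top Speed": 9, "Cost of Maintenance": 4}

total = sum(scores[c] * weights[c] for c in weights)
print(total)  # 7*4 + 9*1 + 4*2 = 45
```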
Once RFP criteria weightings become independent from scoring, we gain considerable flexibility. It becomes very easy to create additional "Weighting Sets". For example, the Finance department (who want to keep vehicle costs down) will likely have a very different opinion from the Sales department (who want to whizz around to prospects in a fast car). Multiple weighting sets allow these differences to be highlighted very clearly.
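The Finance-versus-Sales example can be sketched as two weighting sets applied to the same supplier scores. All numbers below are illustrative; the point is that the same raw scores can produce different rankings under different priorities:

```python
# Multiple weighting sets: the same supplier scores ranked under each
# department's priorities (all weights and scores are illustrative).
scores = {
    "Supplier A": {"Fuel Economy": 7, "Top Speed": 9, "Cost of Maintenance": 4},
    "Supplier B": {"Fuel Economy": 8, "Top Speed": 5, "Cost of Maintenance": 8},
}
weighting_sets = {
    "Finance": {"Fuel Economy": 3, "Top Speed": 1, "Cost of Maintenance": 5},
    "Sales": {"Fuel Economy": 1, "Top Speed": 5, "Cost of Maintenance": 1},
}

# One set of totals per weighting set.
results = {}
for name, weights in weighting_sets.items():
    results[name] = {
        supplier: sum(sc[c] * weights[c] for c in weights)
        for supplier, sc in scores.items()
    }

for name, totals in results.items():
    print(name, totals)
# Finance ranks Supplier B first; Sales ranks Supplier A first.
```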
Strategically significant RFPs usually involve hundreds of evaluation criteria, which demands a more sophisticated scoring methodology. To assign accurate weightings to these criteria, and to facilitate clearer analysis, the criteria are grouped into sections, sub-sections and so forth.
Within a hierarchical questionnaire structure, it should be possible to assign weights at any level: section, sub-section or question. The weight assigned to a section should cap that section's contribution to the whole questionnaire; i.e. the weights assigned to its sub-sections should not change the parent's contribution to the whole. It is possible to model weightings like this in spreadsheets, but the formulae become complex and tend to be brittle as the spreadsheet is worked on.
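The section-capping rule can be sketched by normalising each question's weight to a fraction of its section, and each section's weight to a fraction of the questionnaire. The structure and numbers below are illustrative, not SupplierSelect's actual data model:

```python
# Hierarchical weighting sketch: a section's weight caps its contribution,
# so child weights are normalised to shares of their parent.
# Structure and values are illustrative.
questionnaire = {
    "Performance": {
        "weight": 3,
        "questions": {"Fuel Economy": (2, 7), "Top Speed": (1, 9)},  # (weight, score)
    },
    "Running Costs": {
        "weight": 1,
        "questions": {"Cost of Maintenance": (1, 4)},
    },
}

section_weight_total = sum(s["weight"] for s in questionnaire.values())
total = 0.0
for section in questionnaire.values():
    question_weight_total = sum(w for w, _ in section["questions"].values())
    for w, score in section["questions"].values():
        # A question's effective weight is its share of the section,
        # scaled by the section's share of the questionnaire. Changing
        # question weights re-divides the section but never exceeds its cap.
        total += (section["weight"] / section_weight_total) * (w / question_weight_total) * score

print(round(total, 2))  # 6.75
```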
Assigning weightings in this way is one of the key ingredients of the Analytic Hierarchy Process (AHP) decision-making algorithm.
SupplierSelect's RFP scoring methodology follows the hierarchical, distinct weightings approach. All the arithmetic for calculating derived weights and total scores is automated.