Myth-Information: Proposal Rating
Myth-Information: You have to rate proposals in a source selection.
When discussing the evaluation of competitive proposals with my students, I make a point of asking the following two questions (in order):
1. Are agencies required to evaluate proposals?
2. Are agencies required to rate proposals?
Usually, students respond affirmatively to question #1 and are able to support their answers by citing FAR 15.305(a), which states "An agency shall evaluate competitive proposals and then assess their relative qualities solely on the factors and subfactors specified in the solicitation." However, confusion sets in when I follow with question #2 and students read the very next sentence of FAR 15.305(a), which states "Evaluations may be conducted using any rating method or combination of methods, including color or adjectival ratings, numerical weights, and ordinal rankings." Clearly, the language regarding use of a rating method in conjunction with an evaluation is permissive, not mandatory.
"What's the difference?", "Why wouldn't you rate proposals?", "How do you decide who is the better value if you don't rate the proposals?" are typical student responses. These are all good questions.
Evaluation v. Rating
A good way to understand the difference between evaluation and rating is to look at a typical article in Consumer Reports (CR). Here's an example of a summary evaluation of a new car's "Driving Experience" (model name omitted):
The ride is steady and composed. It absorbs bumps smoothly but is firm. Road noise is reduced, but the tires still rumble noticeably and slap over pavement joints. Routine handling is responsive and fairly agile. Body lean is suppressed, and the quick steering has good weight and feedback. It displayed good grip and balance in emergency maneuvers, and its standard electronic stability control is well calibrated. The [car] posted a commendable speed in our avoidance maneuver. The smooth 166-hp, 2.4-liter, four-cylinder engine provides adequate acceleration. The five-speed automatic transmission is very smooth and responsive. We measured 21 mpg overall on regular fuel. The all-wheel-drive system sends power to the rear wheels when needed more quickly than in the previous [model]. The brakes provided short, straight stops on wet and dry pavement. Low-beam headlights reached only a fair distance, and high beams reached a good distance.
"Driving Experience" was one evaluation factor under the heading "Road Test." CR also evaluated "Reliability," "Safety," and "Owner Satisfaction," to name a few. According to the website, more than 50 different tests and evaluations were performed on the car. Presumably, this produced a mountain of data. However, typical car buyers do not have the time to peruse the data, nor would they fully understand it. As such, CR established a 100-point scale and a set of predetermined criteria to translate test and evaluation results into scores on the scale. In addition, they partitioned the scale into quintiles and assigned an adjective to each (Poor, Fair, Good, Very Good, and Excellent). Using this rating method, the car described above received a score of 74 and an adjectival rating of "Very Good." In this case, CR used a combination of rating methods (numerical scoring and adjectival rating) to translate complex evaluation results into an easily consumable format for its readers.
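To make the translation step concrete, here is a minimal sketch of how a score-to-adjective mapping like CR's might work. The even quintile cut-offs and the function name are assumptions for illustration; CR does not publish its actual criteria or thresholds in the article.

    def adjectival_rating(score: int) -> str:
        """Translate a 0-100 numerical score into a quintile adjective.

        The cut-offs below are an assumption: even quintiles of the
        100-point scale, not CR's actual published thresholds.
        """
        if not 0 <= score <= 100:
            raise ValueError("score must be between 0 and 100")
        bands = [
            (20, "Poor"),
            (40, "Fair"),
            (60, "Good"),
            (80, "Very Good"),
            (100, "Excellent"),
        ]
        for upper, adjective in bands:
            if score <= upper:
                return adjective

    print(adjectival_rating(74))  # prints "Very Good", matching the example above

Note what the function discards: the score of 74 says nothing about ride, handling, braking, or fuel economy. The rating is a lossy summary of the evaluation, which is exactly the distinction at issue.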
But Teach, Why Wouldn't You Rate Proposals?
First, it's not required. Besides that, the results of the evaluation may not be particularly complex. For example, let's say I used price and performance risk as my evaluation factors in a source selection. Performance risk had two subfactors: past performance and experience. In the solicitation, I instructed offerors to submit a one-page write-up and a customer point of contact for each of their relevant contracts. The evaluation of performance risk consisted of an assessment of the write-ups, as well as interviews with the customer points of contact to validate each offeror's claimed experience and to ascertain how well the offeror performed. The evaluators then wrote an evaluation of each offeror's performance risk, documenting the relative strengths and weaknesses of each. Why would it be necessary to translate this information into a rating? How would this aid my decision-making? I'm not going to be faced with volumes of information.
Another reason to avoid the use of ratings is when you are dealing with evaluators who don't understand them. In my experience, when ratings are used, ratings are all you get. I can recall receiving technical evaluations that contained nothing more than the word "Excellent" (when I used adjectival ratings) or the number "95" (when I used numerical scoring). I wanted an evaluation and I got a rating.
How do you decide who offers the better value if you don't rate the proposals?
The answer is the same way that you would if you did rate proposals: by performing a comparative assessment of the proposals against all of the source selection criteria in the solicitation. A source selection authority (SSA) who relies on ratings alone to make the source selection decision does so at their peril. See, for example, Si-Nor, Inc., B-282064, 25 May 1999, where the source selection authority based her decision to award to a higher-priced offeror on the fact that the offeror had a higher past performance rating. One of the reasons the protest was sustained was that the SSA did not describe the benefits associated with the additional costs, as required by FAR 15.308. "Because they had a higher rating" will typically fail to meet this requirement.
So we shouldn't use ratings?
Not necessarily. The point is that you have discretion to use or not use ratings. Most people don't know why they use ratings, other than the fact that it's traditional where they work. The decision to use (or not use) ratings should result from thoughtful deliberation, not a successful copy and paste from your office mate's old source selection plan. A wise man once said, "Tradition is the hobgoblin of mediocre minds."