Ranking Proposals


DODCO

Is it standard procedure within anyone's agency to rank all proposals when accomplishing a source selection using an evaluation rating method other than numerical? For example, suppose you follow the new DOD Source Selection Procedures and evaluate Past Performance, Technical Approach, and Price, assigning a performance confidence assessment rating to past performance and a combined color code and risk rating to each technical approach factor (or subfactor, if used). Does the Source Selection Authority make a tradeoff decision among all offerors to determine who wins, who is ranked second, who is ranked third, and so on? If so, what are the benefits/advantages of doing so? What are the disadvantages?

Our tech eval boards don't assign a ranking, just a rating to the tech proposal. Our pricing team assigns only an Acceptable/Not Acceptable rating to the cost proposals and documents their reasonableness/realism (depending on what the RFP said). Then the SSA (usually our KO) makes the determination of apparent winner, second, etc. The KO does a trade-off analysis, based on tech ratings, among all proposals that were rated acceptable or higher and that have a lower price than the apparent winner. If we get fewer than four proposals, the KO often doesn't establish a competitive range. If a competitive range is established, the trade-off analysis covers everyone in the competitive range. This process has worked well for us: our post-award protest rate is just under 4% and we've only had one sustain (and that dealt with OCI analysis).

Disadvantages: it takes time to educate the TEB on how to describe the value of a proposal's strengths without talking dollars.

I usually don't rank the second, third, and so forth -- I just select the best value awardee. There is no requirement at the FAR level to rank order the unsuccessful offerors, and I generally advise against it.

Advantages of rank-ordering the unsuccessful offerors: None that I can perceive.

Disadvantages: It takes far too much time. Why spend hours developing a rationale and documenting a write-up to explain why one unsuccessful offeror is ranked fourth and another unsuccessful offeror is ranked fifth?

My recommendation is to select the best value offeror and do your write-up to justify that selection.

Here is a contractor's perspective.

My impression, which could be wrong, is that some agencies do not rank proposals because if they did, FAR 15.506(d)(3) would require them to debrief "The overall ranking of all offerors, when any ranking was developed by the agency during the source selection...". It seemed to be part of a plan to keep debriefing information to the bare minimum and avoid the dangerous territory of documentation of a type that could be used in a protest.

My usual practice is to refamiliarize myself with FAR 15.506 before a debriefing. Most of the time this is the closest thing I can get to an agenda. During a debriefing it is always a little surprising that the Government debriefer appears to be less familiar with 15.506 than the debriefee, and seems to be guided more by local procedure or practice than directly by the FAR.

ji20874 - We currently do not rank proposals, but are considering doing so in order to be able to determine who an "interested party" would be should an acquisition be protested (a pre-emptive strike, so to speak). For example, if we did rank offerors and the offeror ranked third were to file a protest, we should be able to show that this offeror is not an interested party because it would not be in line for award, and therefore the protest should be dismissed. To do this, however, the Source Selection Authority would have to accomplish a trade-off among all offerors (winner against all non-winners, each non-winner against the other non-winners, etc.) to determine the final ranking of all offers. This seems like a lot of work to ward off a "potential" protest.

As CajunCharlie states, there is no requirement to rank proposals. Other than determining who an "interested party" would be, I don't see any other potential advantages. In addition, I'm not convinced that determining who an "interested party" would be would absolutely result in a protest being dismissed on those grounds. I believe that the GAO would review the protest to determine whether a procedural issue was the basis for the protest, or whether the protester simply disagreed with the results of the evaluation of its (or others') proposal(s). Obviously we cannot pre-determine every single potential protest issue, and we cannot know how the GAO will rule on a protest until we see its response. That being the case, I do not see the advantage in spending a great deal of time to rank proposals. I would appreciate feedback from other WIFCON members on this issue. Thanks!

Guest Vern Edwards

Just a comment: The notion of ranking offerors in order to determine who is an interested party may have been prompted by a GAO decision issued last December, NCI Information Systems, Inc., GAO Decision B-405745, 2011 CPD para. 280 (Dec. 11, 2011), in which GAO said in a note:

The agency contends that NCI is not an interested party to challenge the award to Harris because NCI is not next in line for award. Specifically, the agency argues that even if the Harris proposal were eliminated from the competition, and NCI received the additional credit it claims it deserves in the evaluation of its proposal, NCI still would not be eligible for award because another offeror had a higher technical score and a slightly lower price. First Supp. AR at 2. While the other offeror had a higher technical rating and lower price, the protester had a higher past performance score. We note that the agency did not rank offerors' proposals and that the source selection decision contained a comparison of the awardee's proposal to NCI's proposal and to the other offeror's proposal, but it did not contain any comparison of the other offeror's proposal to NCI's proposal. Determining which offeror is next in line for award would require a trade-off decision between technical/price and past performance. Therefore, we cannot conclude that NCI is not an interested party to pursue this protest.

In order to rank all offerors from best to worst one must make paired comparisons among all of them. Full ranking is not necessary in order to determine which proposal is the best value. In order to accomplish full ranking, an agency will have to make n(n - 1)/2 comparisons, where n is the number of proposals. If there are five proposals the agency will have to make and document ten paired comparisons. If there are ten offerors the agency will have to make and document 45 paired comparisons. That does seem like a lot of work just to determine whether an offeror is or is not an interested party in the event of a protest, and I would refuse to do it unless there is some other good reason to do so.
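The n(n - 1)/2 arithmetic above can be checked with a few lines of Python. This is purely illustrative; the offeror labels are made up:

```python
# Full ranking requires comparing every pair of proposals exactly once:
# n(n - 1)/2 paired comparisons, i.e. "n choose 2".
from itertools import combinations

def paired_comparisons(proposals):
    """Enumerate every pair of proposals a full ranking would require."""
    return list(combinations(proposals, 2))

offerors = ["A", "B", "C", "D", "E"]
pairs = paired_comparisons(offerors)
print(len(pairs))  # 5 * 4 / 2 = 10 comparisons for five proposals
print(len(paired_comparisons(list(range(10)))))  # 10 * 9 / 2 = 45 for ten
```

Each of those pairs represents a documented trade-off rationale, which is where the workload comes from.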

Thanks Vern - I appreciate your feedback, and I agree that unless there is some other good reason to rank proposals, I wouldn't want to spend the time required to make the paired comparisons. I have never personally ranked proposals in order of winner, second, third, etc. I have only accomplished trade-offs to determine the best value "winner" using an other-than-numerical evaluation approach. Based on the lack of response regarding any agency ranking proposals as a standard practice, it appears that it isn't occurring, or isn't occurring often. I wonder if ranking is only accomplished when one uses a numerical evaluation approach. Any thoughts?

Guest Vern Edwards

Ranking can be done with any scoring or rating system, or with no system at all. It's easier with numbers, but it's not otherwise impossible.

Be careful about drawing conclusions based on the number of responses you receive here. The vast majority of people who visit this site never contribute anything; they just look.

DODCO, one must consider price in the trade-off comparison, as Vern explained above.

How would one simply rank order proposals using a mechanical, numerical evaluation approach without going through some type of qualitative cost/technical trade-off comparison between the various proposals?

Would you score price and then add it to the quality points? What does that tell you? How would you correlate points per dollar to quality points?

Do you divide the price by the technical points (a "$/point" ratio)? That's a goofy method that our organization abandoned 20 years ago. In addition, we must comply with the AFARS, which has prohibited scoring price for years (5115.305(a)(1)) and has prohibited using numerical weights for non-price factors since 2004 (5115.304(b)(2)(d)).

Guest Vern Edwards

Joel:

I'm not sure what you were getting at in your last post, but if by that business about "mechanical, numerical evaluation approach" you were suggesting that numerical scoring and ranking techniques are unsound and that it is necessarily a bad idea to assign numerical scores to price, then I disagree with you. That is bull mouthed by the GAO, which was looking at the work of people who didn't know what they were doing.

It is entirely possible to make valid nonprice/price tradeoffs and rank proposals using numerical methods, including the numerical scoring of price or cost. The reason that the Army and other agencies have prohibited numerical scoring, numerical scoring of price, and the use of numerical weights is that their contracting officers, proposal evaluators, and decision makers have shown themselves to be incapable of using such techniques properly, despite the fact that such techniques have long been used successfully to aid decision-making. There is plenty of guidance about their use in print, e.g., http://home.ubalt.ed...artix.htm#rwida and http://www.edmblog.c...white_paper.pdf. Here is a 2009 article about the use of decision analysis in public procurement, which describes the use of the SMART (Simplified Multi-Attribute Rating Technique) http://uir.unisa.ac.....pdf?sequence=1. Here is an article about the use of decision analysis in vendor selection http://www.elsevier....EJOR_free17.pdf.
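For illustration only, here is a minimal Python sketch of a SMART-style weighted additive value model of the kind those references describe. The weights, rating scales, price normalization, and offeror figures below are all invented assumptions, not any agency's prescribed method:

```python
# Minimal SMART-style (weighted additive) scoring sketch -- illustrative only.
# Non-price attributes are rated 0-100; price is converted to a 0-100 value
# score by linear normalization (lowest price = 100, highest price = 0).

def smart_score(ratings, weights):
    """Weighted sum of attribute value scores (each on a 0-100 scale)."""
    return sum(weights[attr] * ratings[attr] for attr in weights)

def price_value(price, low, high):
    """Linear normalization of price onto a 0-100 value scale."""
    return 100.0 if high == low else 100.0 * (high - price) / (high - low)

# Hypothetical weights and offeror data.
weights = {"technical": 0.5, "past_performance": 0.2, "price": 0.3}
proposals = {
    "Offeror A": {"technical": 85, "past_performance": 90, "price_dollars": 1_200_000},
    "Offeror B": {"technical": 70, "past_performance": 95, "price_dollars": 1_000_000},
}

prices = [p["price_dollars"] for p in proposals.values()]
lo, hi = min(prices), max(prices)
for name, p in proposals.items():
    ratings = {"technical": p["technical"],
               "past_performance": p["past_performance"],
               "price": price_value(p["price_dollars"], lo, hi)}
    print(name, round(smart_score(ratings, weights), 1))
```

Because every proposal ends up with a single number, a full ranking falls out of one sort rather than dozens of documented paired comparisons; the judgment moves into choosing defensible weights and value scales.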

See also Clemen, Making Hard Decisions with Decision Tools Suite Update Edition (Southwestern College 2004); Goodwin & Wright, Decision Analysis for Management Judgment, 3d ed. (Wiley 2004); and Edwards and von Winterfeldt, Decision Analysis and Behavioral Research (Cambridge University Press 1986). There is even a quarterly journal, Decision Analysis, published by the Institute for Operations Research and the Management Sciences (INFORMS).

Numerical decision analysis techniques are aids to decision; they don't make decisions for you. In the hands of competent users they are much superior to verbal expressions and goofy schemes like color rating, which are relatively amateurish. Decision makers should of course consider many inputs, including their intuition, and they should not justify their decisions entirely on the basis of scores or ratings, whether numbers, adjectives, or colors.

Given the existence of outfits like the Air Force -- which apparently cannot buy airplanes without screwing up the proposal evaluation, even when assisted by "peer" reviews -- it is probably wise to prohibit the use of powerful tools by people who don't know what they are doing. (Of course, color rating has not made them any better at it.) But let's not attribute the defects of a certain class of users to methods that are otherwise sound and effective in the hands of competent people.

Vern, I just noticed that you have updated your post #10, above. I was primarily curious about how "DODCO", who I am assuming is a contracting officer at DOD level or at one of the Services, would rank order proposals using a numerical evaluation approach.

Joel - I am indeed a CO working for a DOD activity. We currently do not rank proposals, but if we were to begin, here is how I think we would do it: The SSA would accomplish a comparative assessment of all proposals remaining in the competitive range against all source selection criteria (to include cost/price considerations) and make the best value decision (identifying the winner) as required in the DOD Source Selection Procedures. Next, the SSA would have to accomplish another comparative assessment among all remaining proposals and determine the next best value offeror (the offeror ranked number two), and so on until all offerors had been ranked. Using this approach, cost/price considerations are weighed along with all other source selection evaluation criteria. As Vern mentioned above, and I agree, this could be a very time-consuming effort, and I just don't see the benefit to the Government of doing so.
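That successive-selection procedure can be sketched in a few lines of Python. This is purely illustrative: `best_value` stands in for the SSA's comparative tradeoff judgment (it is not a real algorithm), and the demo judgment rule and offeror data are invented:

```python
# Sketch of successive-selection ranking: pick the best value proposal,
# remove it, and repeat the comparative assessment on the remainder.

def rank_by_successive_selection(proposals, best_value):
    """Rank proposals by repeatedly applying a best-value judgment."""
    remaining = list(proposals)
    ranking = []
    while remaining:
        winner = best_value(remaining)  # SSA's tradeoff among what's left
        ranking.append(winner)
        remaining.remove(winner)
    return ranking

# Hypothetical stand-in judgment: prefer the highest technical rating,
# breaking ties in favor of the lower price.
def demo_judgment(pool):
    return max(pool, key=lambda p: (p["technical"], -p["price"]))

offers = [
    {"name": "A", "technical": 3, "price": 10},
    {"name": "B", "technical": 3, "price": 8},
    {"name": "C", "technical": 2, "price": 7},
]
order = rank_by_successive_selection(offers, demo_judgment)
print([p["name"] for p in order])  # B (same rating, lower price), then A, then C
```

The sketch makes the cost visible: each pass through the loop is another full comparative assessment of everything still in the pool, which is exactly the extra documentation burden being questioned.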

I agree that if one were to use a 100% numerical evaluation approach to ranking, not only would cost/price have to have a numerical value assigned, but so would Past Performance (if evaluated). I'm really not interested in this type of approach, but appreciate the discussion.

The question that I really want input on is whether agencies are ranking proposals, and if so, what benefit they perceive that approach provides them. For offerors, I would imagine that they would like it if the Government were to rank proposals, as they would then know that, say, out of a field of six offerors they came in second, or fifth, etc. I could see this potentially providing value to them, but I'm not sure what the value to the Government would be.

Thanks.

This topic is now closed to further replies.