Posted 06 March 2012 - 08:10 AM
Do agencies rank order the proposals they evaluate, i.e., identify which offeror is ranked second, who is ranked third, etc.? If so, what are the benefits/advantages of doing so? What are the disadvantages?
Posted 06 March 2012 - 08:32 AM
Disadvantages - it takes time to educate the TEB on how to describe the value of a proposal's strengths without talking dollars.
Posted 06 March 2012 - 04:57 PM
Advantages of rank-ordering the unsuccessful offerors: None that I can perceive.
Disadvantages: It takes far too much time -- why spend hours developing a rationale and documenting a write-up to explain why one unsuccessful offeror is rated fourth and another unsuccessful offeror is rated fifth?
My recommendation is to select the best value offeror and do your write up to justify that selection.
Posted 07 March 2012 - 07:39 AM
My impression, which could be wrong, is that some agencies do not rank proposals because if they did, FAR 15.506(d)(3) would require them to debrief "The overall ranking of all offerors, when any ranking was developed by the agency during the source selection...". It seemed to be part of a plan to keep debriefing information to the bare minimum and avoid the dangerous territory of documentation of a type that could be used in a protest.
My usual practice is to refamiliarize myself with FAR 15.506 before a debriefing. Most of the time it is the closest thing I can get to an agenda. During a debriefing it is always a little surprising that the Government debriefer appears to be less familiar with 15.506 than the debriefee, and seems to be guided more by local procedure or practice than directly by the FAR.
Posted 07 March 2012 - 09:47 AM
In order to rank all offerors from best to worst one must make paired comparisons among all of them. Full ranking is not necessary in order to determine which proposal is the best value. In order to accomplish full ranking, an agency will have to make n(n - 1)/2 comparisons, where n is the number of proposals. If there are five proposals the agency will have to make and document ten paired comparisons. If there are ten offerors the agency will have to make and document 45 paired comparisons. That does seem like a lot of work just to determine whether an offeror is or is not an interested party in the event of a protest, and I would refuse to do it unless there is some other good reason to do so.
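The arithmetic above can be sketched in a few lines. This is just an illustration of the comparison counts, with hypothetical function names; the point is that a full best-to-worst ranking requires documenting n(n - 1)/2 paired comparisons, while finding the single best-value proposal needs only n - 1 (compare a running "best so far" against each remaining proposal).

```python
def full_ranking_comparisons(n: int) -> int:
    """Paired comparisons needed to rank all n proposals best to worst."""
    return n * (n - 1) // 2

def best_value_only_comparisons(n: int) -> int:
    """Comparisons needed merely to identify the single best proposal."""
    return n - 1

for n in (5, 10):
    print(n, full_ranking_comparisons(n), best_value_only_comparisons(n))
# 5 proposals  -> 10 paired comparisons to rank fully, 4 to pick the best
# 10 proposals -> 45 paired comparisons to rank fully, 9 to pick the best
```

The documentation burden grows quadratically with the field size, which is why full ranking gets expensive quickly.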
Posted 08 March 2012 - 04:33 PM
Be careful about drawing conclusions based on the number of responses you receive here. The vast majority of people who visit this site never contribute anything; they just look.
Posted 08 March 2012 - 04:39 PM
How would one simply rank order proposals using a mechanical, numerical evaluation approach without going through some type of qualitative cost/technical trade-off comparison between the various proposals?
Would you score price then add to the quality points? What does that tell you? How would you correlate points per dollar to quality points?
Do you divide the price by the technical points (a "$/point" ratio)? That's a goofy method that our organization abandoned 20 years ago. In addition, we must comply with the AFARS, which has prohibited scoring price for years (5115.305(a)(1)) and has prohibited using numerical weights for non-price factors since 2004 (5115.304(b)(2)(D)).
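One concrete defect of the $/point ratio can be shown with a short sketch. All of the prices and scores below are invented for illustration; the point is that a ratio ranking depends on the arbitrary zero point of the technical scoring scale, so shifting every score by the same amount can flip the rank order even though nothing about the proposals' relative merit changed.

```python
# Hypothetical illustration: rank order under a "$/point" ratio is not
# invariant to where the technical scoring scale starts.

def rank_by_dollars_per_point(proposals):
    """Return proposal names ordered best (lowest $/point) first."""
    return sorted(proposals,
                  key=lambda name: proposals[name]["price"] / proposals[name]["score"])

proposals = {
    "A": {"price": 100_000, "score": 60},
    "B": {"price": 150_000, "score": 95},
}
print(rank_by_dollars_per_point(proposals))   # ['B', 'A'] -- B looks better

# Now shift every technical score up by 50 points (e.g., a scale that
# bottoms out at 50 instead of 0).  The proposals are unchanged relative
# to one another, yet the ranking flips.
shifted = {name: {"price": p["price"], "score": p["score"] + 50}
           for name, p in proposals.items()}
print(rank_by_dollars_per_point(shifted))     # ['A', 'B'] -- now A looks better
```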
Posted 08 March 2012 - 05:48 PM
I'm not sure what you were getting at in your last post, but if by that business about "mechanical, numerical evaluation approach" you were suggesting that numerical scoring and ranking techniques are unsound and that it is necessarily a bad idea to assign numerical scores to price, then I disagree with you. That is bull mouthed by the GAO, which was looking at the work of people who didn't know what they were doing.
It is entirely possible to make valid nonprice/price tradeoffs and rank proposals using numerical methods, including the numerical scoring of price or cost. The reason that the Army and other agencies have prohibited numerical scoring, numerical scoring of price, and the use of numerical weights is that their contracting officers, proposal evaluators, and decision makers have shown themselves to be incapable of using such techniques properly, despite the fact that such techniques have long been used successfully to aid decision-making. There is plenty of guidance about their use in print, e.g., http://home.ubalt.ed...artix.htm#rwida and http://www.edmblog.c...white_paper.pdf. Here is a 2009 article about the use of decision analysis in public procurement, which describes the use of SMART (Simple Multi-Attribute Rating Technique): http://uir.unisa.ac.....pdf?sequence=1. Here is an article about the use of decision analysis in vendor selection: http://www.elsevier....EJOR_free17.pdf.
See also Clemen, Making Hard Decisions with Decision Tools Suite Update Edition (Southwestern College 2004); Goodwin & Wright, Decision Analysis for Management Judgment, 3d ed. (Wiley 2004); and Edwards and von Winterfeldt, Decision Analysis and Behavioral Research (Cambridge University Press 1986). There is even a quarterly journal, Decision Analysis, published by the Institute for Operations Research and the Management Sciences (INFORMS).
Numerical decision analysis techniques are aids to decision-making; they don't make decisions for you. In the hands of competent users they are much superior to verbal expressions and goofy schemes like color rating, which are relatively amateurish. Decision makers should of course consider many inputs, including their intuition, and they should not justify their decisions entirely on the basis of scores or ratings, whether numbers, adjectives, or colors.
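For readers unfamiliar with SMART-style weighted additive scoring, here is a minimal sketch. The weights, proposals, and value scales are all invented for illustration: each factor, including price, is mapped onto a common 0-100 value scale (here, a linear scale where the cheapest price in the field gets 100 and the most expensive gets 0), weighted, and summed, and the totals serve as an aid to the decision maker rather than the decision itself.

```python
# Hypothetical SMART-style (weighted additive) evaluation sketch.
WEIGHTS = {"technical": 0.5, "past_performance": 0.2, "price": 0.3}

def price_value(price: float, low: float, high: float) -> float:
    """Linear value scale: cheapest price -> 100, most expensive -> 0."""
    if high == low:
        return 100.0
    return 100.0 * (high - price) / (high - low)

def smart_total(values: dict, weights: dict) -> float:
    """Weighted sum of 0-100 value scores across all factors."""
    return sum(weights[f] * values[f] for f in weights)

proposals = {
    "A": {"technical": 80, "past_performance": 90, "price": 100_000},
    "B": {"technical": 95, "past_performance": 70, "price": 130_000},
}
prices = [p["price"] for p in proposals.values()]
lo, hi = min(prices), max(prices)

totals = {
    name: smart_total({**p, "price": price_value(p["price"], lo, hi)}, WEIGHTS)
    for name, p in proposals.items()
}
ranking = sorted(totals, key=totals.get, reverse=True)
print(totals, ranking)   # A: 88.0, B: 61.5 -> ranked ['A', 'B']
```

Note that the output is a set of totals the decision maker can interrogate, not a substitute for the tradeoff judgment itself.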
Given the existence of outfits like the Air Force -- which apparently cannot buy airplanes without screwing up the proposal evaluation, even when assisted by "peer" reviews -- it is probably wise to prohibit the use of powerful tools by people who don't know what they are doing. (Of course, color rating has not made them any better at it.) But let's not attribute the defects of a certain class of users to methods that are otherwise sound and effective in the hands of competent people.
Posted 12 March 2012 - 07:00 AM
I agree that if one were to use a 100% numerical evaluation approach to ranking, not only would cost/price have to have a numerical value assigned, but so would Past Performance (if evaluated). I'm really not interested in this type of approach, but appreciate the discussion.
The question that I really want to get input on is whether agencies are ranking proposals, and if so, what benefit they perceive that approach provides them. For offerors, I would imagine that they would like it if the Government were to rank proposals, as they would know that, say, out of a field of six offerors they came in second, or fifth, etc. I could see this potentially providing value to them, but I am not sure what the value to the Government would be.