

I'm looking through course material from Source Selection taught by Management Concepts and they have the following definitions for "Marginal" and "Unacceptable."

Marginal - Fails to meet minimum evaluation standards; low probability of satisfying the requirement; has significant deficiencies, but they are correctable.

Unacceptable - Fails to meet a minimum requirement; deficiency requires a major revision to the proposal to make it correct.

This goes against everything I thought I knew. Has anyone ever rated a proposal that had "significant deficiencies" anything other than unacceptable?


Guest Vern Edwards

It does not matter whether anyone has ever rated a proposal one way or another. An agency may define its ratings any way it sees fit, whether it is consistent with what you knew or not.


Inasmuch as this is a course teaching source selection methodology (I assume), it's not a very good teaching point. How do they describe "acceptable," or whatever the next higher rating is?


Did the Wiz Kid teachers say how or when the weaknesses would be corrected? Through discussions? After award, through changes or technical direction, or by waiting for failure or poor performance and then directing correction?

Can they describe the difference between an aspect of an "acceptable" proposal that fully meets the requirements (doesn't exceed them) and has no objectionable ("we don't like it") weaknesses vs. one with "easily correctable weaknesses"?

What is the difference between an "acceptable" proposal, one with no deficiencies but with objectionable weaknesses, and one that has numerous weaknesses but no outright deficiencies?

I guess these would have to be matters for discussions, if conducted, or matters to consider in the trade-off analysis. Why rate something acceptable if it really isn't acceptable to the people who will have to live with the contractor or the end product, or if it requires discussions or a post-award change to make it truly acceptable to the government and/or end user?

To me the distinctions are real and important, especially when the requirements are stated as performance requirements, and especially in design-build or construction source selections.


Joel:

I don't understand your criticisms.

Vern,

1. There appears to be a gap between

"Acceptable - Meets evaluation standards; has good probability of satisfying the requirmeent; any weaknesses can be readily corrected" and

"Marginal - Fails to meet minimum evaluation standards; low probability of satisfying the requirement; has significant deficiencies, but they are correctable."

So the rating choices are 1) an acceptable proposal with some easily correctable weaknesses or 2) a proposal with "significant" but "correctable" deficiencies. What about 3) a proposal with a technical (minor) deficiency, or, more significantly, 4) a proposal that does meet the minimum evaluation standards but contains some objectionable weaknesses that may or may not be readily correctable, or 5) a proposal that contains several weaknesses that, when considered together, may pose a high risk of failure and might require a major rewrite or change in approach?

2. Is a proposal which would require discussions to allow the proposer to easily correct any weaknesses (including objectionable weaknesses that are unacceptable to the government or to the user) really "acceptable"? Is a proposal really acceptable if, when the government doesn't want to conduct discussions, the government would have to award and then issue technical direction, issue changes, encourage the contractor to "easily correct" the weaknesses, and/or wait until performance is unacceptable to direct the corrections?

Bottom line: In my opinion, acceptable proposals generally ought to be awardable without having to correct objectionable weaknesses (ones that the user or government can't live with) through discussions or through the post-award steps I outlined. A marginal proposal is one that needs some action to become acceptable, but it is distinguishable from a total loser, an unacceptable proposal that either is uncorrectable or would require a major rewrite or a totally new approach to become acceptable. A marginal proposal shouldn't necessarily have to have any "significant deficiencies," or ANY "deficiencies" for that matter, if it has objectionable weaknesses that should be addressed to make it acceptable to the government.

A marginal proposal might be one that doesn't clearly meet all the minimum evaluation criteria and/or contains some significant weaknesses, or a combination of weaknesses, that might significantly risk satisfactory performance, but that appears to be correctable. One could also add something to the effect of "and/or contains one or more deficiencies that are easily correctable without a major proposal rewrite."


In addition to the problem of their "acceptable" rating not really being very acceptable without further action and their very narrow definition of a "marginal" rating, I forgot to mention that these rating descriptions are not consistent with the mandatory rating descriptions in the DoD Source Selection Manual. Although the rest of the government doesn't have to use those, DoD is supposed to. Why teach something that is inconsistent with the mandatory ("consistency") rating criteria used by the largest procurement program in the government? I'm not saying that I totally agree with those definitions, but they are workable and, again, mandatory.

As an alternative, they could offer the DoD rating definitions as another example...


Guest Vern Edwards

Some thoughts:

1. As a proposal rating, "Acceptable" need not entail legal acceptability. It ought to, but need not. It can allude to nothing more than a degree of technical appeal.

2. You said:

Is a proposal which would require discussions to allow the proposer to easily correct any weaknesses (including objectionable weaknesses that are unacceptable to the government or to the user) really "acceptable"? Is a proposal really acceptable if, when the government doesn't want to conduct discussions, the government would have to award and then issue technical direction, issue changes, encourage the contractor to "easily correct" the weaknesses, and/or wait until performance is unacceptable to direct the corrections?

FAR 15.001 defines "weakness" as follows:

Weakness means a flaw in the proposal that increases the risk of unsuccessful contract performance. A "significant weakness" in the proposal is a flaw that appreciably increases the risk of unsuccessful contract performance.

A weakness, so defined, does not make a proposal legally unacceptable. A deficiency does, but not a weakness. See the definition of "deficiency" in FAR 15.001. Award may be made to a proposal with a weakness. The DOD official definition of Outstanding is:

Proposal meets requirements and indicates an exceptional approach and understanding of the requirements. Strengths far outweigh any weaknesses. Risk of unsuccessful performance is very low.

The DOD official definition for Acceptable is:

Proposal meets requirements and indicates an adequate approach and understanding of the requirements. Strengths and weaknesses are offsetting or will have little or no impact on contract performance. Risk of unsuccessful performance is no worse than moderate.

So even an outstanding or acceptable proposal can contain a weakness. To "outweigh" or "offset" is not to eliminate or remove.

3. There does appear to be a gap between Acceptable and Marginal as defined in the quotes provided by anonco, and it ought to be closed by something in between. But the gap need not be fatal, because the source selection decision should not be based on ratings but on the detailed evaluation findings. Ratings should be used only for preliminary comparisons.

4. I would not recommend the DOD rating scheme to anyone. It is marginal, at best.

Why don't you write a set of adjectival ratings and definitions and let us critique it.


Guest Vern Edwards

There are several things to consider when developing an adjectival proposal rating (or scoring) scheme. Here are just some of the things to think about:

First, who is going to use the ratings and what will he (she, they) use them for? (The answer to that is NOT obvious.)

Second, what do you want the ratings to communicate to the user? What do you want the user to know after they review the ratings that they didn't already know? (The answer to that is NOT obvious.)

Third, what are you rating?

The proposal’s legal status (i.e., assent to the terms of the RFP versus rejection of one or more terms)?

The proposal as a whole, or its performance on each of the specific factors?

The proposal’s “goodness,” aside from legal status? If that, then do you want the user to know simply whether a proposal is good or not (go/no go) or do you want the user to know where the proposal stands on a scale of goodness, i.e., the degree of goodness? If the latter, and since adjectival ratings are usually ordinal in nature, how many ordinal categories (excellent, good, fair, poor, etc.) do you want to establish? Categories are what enable you to make distinctions. Four? Five? More? The more categories you have, the more difficult it will be to write definitions that make clear distinctions.

The proposal's standing with reference to common standards, or the proposal's standing in comparison with other proposals? (Does "outstanding" mean (a) that it does well in terms of a set of standards against which all proposals are measured or (b) that it stands out among the proposals received? If (a), is "outstanding" the best word to use?)

Fourth, do you want to use the same terms and definitions to rate the proposal's performance on all factors, e.g., proposed technical approach, proposed organization, proposed key personnel, experience and past performance, or do you want to use terms and definitions tailored to specific factors?

Fifth, if you want to have a set of terms and definitions for each specific factor and then ratings for an overall assessment, how do you want the user to integrate results on diverse rating scales?

Sixth, do you want to rate price or leave price unrated? (Think about that before you answer.)

These are just a few of the things that thoughtful practitioners will consider.

Most practitioners use the rating system given to them, without having the opportunity or inclination to think things through. That will probably work, since that’s the way it’s been done in acquisition since contracting by negotiation came into widespread use in the mid-1980s. But that’s a professional failure.



After two hours of pure frustration, having twice wiped out my response with the backspace key, I need to move on to other things today, but here is my response.

The short(er) version:

I agree with Vern that an acceptable proposal may contain some weaknesses.

This rating scheme doesn't distinguish between "significant proposal weaknesses," which may pose a risk to successful performance and/or are unacceptable to the user or government, and "proposal weaknesses," or multiple proposal weaknesses that, when considered together, may pose a risk to successful performance. The acceptable rating should generally not encompass serious weaknesses, or so many weaknesses, that the proposal becomes objectionable or requires correction after award to avoid failure or poor performance.

This marginal rating doesn't consider proposal weaknesses at all. It doesn't consider that a single material deficiency renders a proposal unawardable.** It requires that the proposal fail to meet the minimum evaluation requirements and contain "significant deficiencies" (plural). It doesn't consider a proposal that may or may not comply with the requirements but that the Government cannot ascertain to be compliant. It does state that the features are correctable, versus an unacceptable rating, which would require a major rewrite. It is interesting to note that the unacceptable rating requires one deficiency (okay) while the marginal rating requires multiple "significant deficiencies."

The marginal rating should serve as a bridge between an unawardable, unacceptable proposal and an awardable, acceptable proposal. It can be a factor in deciding whether to include a proposal in the competitive range for discussions, if discussions are to be conducted. It should address significant weaknesses that would render the proposal objectionable and/or increase the risk of poor or unsuccessful performance. It should address multiple weaknesses that, taken together, are objectionable and/or increase the risk of poor or unsuccessful performance. It should address ONE or MORE deficiencies that require correction for the proposal to be awardable but that are susceptible to being made acceptable without affording the proposer the opportunity for a major rewrite.

This scheme does nothing to encourage the government to bargain for better performance before award through discussions where there are objectionable weaknesses and/or significant weaknesses. I had a long spiel, as this is a sore point with me, but it got wiped out in the backspace swipe. There are still pre-1997 FAR Part 15 rewrite KOs and other acquisition officials who remember the prohibition on "technical levelling," which is now gone. There was such fear of "technical levelling" that we were discouraged or prohibited from mentioning any feature in a proposal that was considered a weakness, could be improved to be more competitive or more desirable, or was just plain objectionable but met the minimum RFP requirement. These folks have passed this restriction on to others who still caution against or prohibit any such discussions as "technical levelling."

Indeed, I have been told in my Design-Build class and by KOs at Design-Build project after-action review conferences around USACE that they can't or don't ever discuss objectionable features of proposals that otherwise meet the minimum performance requirements, because that would be "technical levelling." GRRRRRRRRRRRR!!!!!!!!!!!

As a result, we have had Army-level officials and installation officials gripe to high Heaven about the looks or functionality of a completed design-build project. Every time, I would ask if the weakness or objection was known ("Yes") and if it was discussed with the proposer before award ("No"). Each time, I asked the contractor if they would have fixed the weakness had they been asked before award ("Yes, that could have been easily addressed"). I asked how much it would have added to the cost: "We would have included it had we known about it before award (at little or no cost)." This was due to the competitive nature of the source selection and the magnitude of the project.

On one project, we ended up with two new barracks or an L-shaped building (I don't remember which) with red vinyl siding that looked like a barn. Of course, everyone at the base level and at Army level has raised Hell about these barracks as eyesores and blights upon the landscape, surrounded as they are by brick-sided barracks. While the discussion was going on at the site visit during the after-action review, and the HQ Army dude was railing on and on, I asked the typical questions of the government and contractor project managers and got the typical answers: we knew it didn't match the surrounding brick barracks; no, it wasn't mentioned to the proposer before award; yes, we the contractor could have fixed it; no, we couldn't discuss it before award; yadayadah. I asked the contractor how much it would have cost after award to change the siding to brick veneer that would have matched the surrounding government buildings. The contractor said about $15,000 each, but nobody ever asked him. When asked in the subsequent meeting, the KO stood up and said it was their District policy not to discuss something if it met the minimum requirements. GRRRRRRRRR!!!!

FAR Part 15 now allows and encourages bargaining for better performance ("better value" to the government) when it is in the best interest of the government, even when the proposal meets the minimum requirements. Shay Assad did at least one good thing as Director of Defense Acquisition Policy when (I think it was him) he stated that discussions should be the norm and award without discussions should be the exception where we can obtain better value through TALKING with proposers. Oh, what a dirty word to many government employees...

**Matter of: AT&T File: B-250516.3 Date: March 30, 1993: In negotiated procurements, it is fundamental that any proposal that fails to conform to the material terms and conditions of the solicitation should be considered unacceptable and may not form the basis for award. See Martin Marietta Corp., 69 Comp.Gen. 214 (1990), 90-1 CPD Para. 132; Consulting and Program Mgmt., 66 Comp.Gen. 289 (1987), 87-1 CPD Para. 229.


As for drafting my own definitions for your critique, after three hours of great effort with little to show for it above, "I ain't saying a word - I ain't making a move - until SOMEbody pays me..." (one of my favorite lyrics, by Geoffrey Lewis and "Celestial Navigation").

People are paying Management Concepts to teach this stuff. My suggestion to them is to provide multiple examples, because what was described above ain't getting it for me. Plus, if they are going to use bad definitions, they could also include the "marginal, at best" DoD rating scheme that is supposed to be mandatory for DoD source selections.


Guest Vern Edwards

They don't pay you and me for anything we do here.

The purpose of a proposal rating scheme is to communicate in shorthand and summary fashion the detailed findings of the evaluators. Ratings are not used to make decisions. They are used to prepare for decision making. In order to make a sound decision the source selection authority must look at and compare the detailed evaluation findings.

Okay, then here is my cut at an overall proposal rating scheme. It took me 30 minutes.

Excellent. The contracting officer found that the proposal clearly shows that the offeror assents to the material terms of the RFP. The evaluators found that it indicates that the offeror's performance would always be satisfactory and would often beneficially exceed our expectations.

Good. The contracting officer found that the proposal clearly shows that the offeror assents to the material terms of the RFP. The evaluators found that it indicates that the offeror’s performance would always be satisfactory and would occasionally be better than that.

Acceptable. The contracting officer found that the proposal clearly shows that the offeror assents to the material terms of the RFP. The evaluators found that it indicates that the offeror’s performance would be generally satisfactory and would, at most, only occasionally require minor corrective action.

Marginal. The contracting officer found that the proposal does not clearly show that the offeror assents to all of the material terms of the RFP, but that it does not take express exception to any material term, or the evaluators found that it indicates that the offeror’s performance would occasionally be unsatisfactory in ways that would hamper Government operations.

Unacceptable. The contracting officer found that the proposal takes express exception to a material term of the RFP, or the evaluators found that it indicates that the offeror’s performance would be generally unsatisfactory in ways that would hamper Government operations.

Note that I make no mention of strengths, weaknesses, or deficiencies. Strength is not defined in FAR. Weakness is defined, but as defined it is a useless and confusing term. Deficiency is defined and is a useful term, but is technical and not essential in a proposal rating scheme.
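
To make the mechanics concrete, here is a minimal sketch (Python, purely illustrative; the input names and category labels are my own shorthand, not part of the scheme above) of the two-part logic those definitions imply, with the contracting officer's finding on assent and the evaluators' performance expectation as the inputs:

# Hypothetical illustration only: maps a contracting officer's finding on assent
# and the evaluators' expectation of performance to one of the five ratings above.
def overall_rating(assent, expected_performance):
    # assent: "clear", "not_clear" (but no express exception), or "express_exception"
    # expected_performance: "often_exceeds", "occasionally_better",
    #   "generally_satisfactory", "occasionally_unsatisfactory",
    #   or "generally_unsatisfactory"
    if assent == "express_exception" or expected_performance == "generally_unsatisfactory":
        return "Unacceptable"
    if assent == "not_clear" or expected_performance == "occasionally_unsatisfactory":
        return "Marginal"
    if expected_performance == "generally_satisfactory":
        return "Acceptable"
    if expected_performance == "occasionally_better":
        return "Good"
    return "Excellent"  # clear assent; always satisfactory, often beneficially exceeds expectations

For example, overall_rating("clear", "generally_satisfactory") returns "Acceptable", and any express exception to a material term drops the proposal to "Unacceptable" regardless of its technical appeal.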


If I were an SSA, I don't think I would use ratings at all. I would have the evaluation team evaluate the proposals against the evaluation factors stated in the solicitation, document the relative strengths, deficiencies, significant weaknesses, and risks (as required by FAR 15.305) and report back. I would then use those findings to make my decision. I don't see how having the evaluation team assign ratings would make my life easier.


Guest Vern Edwards

I'm with you, IF we're going to use only two or three evaluation factors and the factors don't have a lot of sub factors. For instance, if the only factors are experience, past performance, and price, then why have a rating scheme?



That may work for some source selections. At any rate, you raked me over the coals a few months ago for insisting that it is a needless waste of time for individual evaluators to assign ratings during their individual proposal reviews. I said that they should focus on documenting the strengths, weaknesses, noted deficiencies, unclear aspects, etc. I am glad to see you take a different, even bolder, viewpoint.


Guest Vern Edwards

I don't really remember what I said to you; at this point in my life, a few months is an eternity. But I do think that if an agency is using a rating system, then individual evaluators should assign ratings in preparation for the consensus session.

And hey, your Wifcon mailbox is full. I have some books for you if you want them. Empty your mailbox and send me a message.

Vern


Guest Vern Edwards

FAR does not require the use of a rating (or scoring) system. An agency can conduct a source selection without rating or scoring at any time for any acquisition, unless its own policies prevent it from doing so.

DOD's source selection policy (as described in the DPAP memo of March 4, 2011) appears to confuse evaluation with rating and appears to require the use of adjectival ratings. So I suppose that DOD must use ratings in all FAR Part 15 acquisitions, but not in acquisitions conducted under FAR Subpart 8.4, 13.5, or 16.5. I think some other agencies have similar policies.

For those who think it would be good to dispense with ratings, it is important to understand the distinction between evaluation and rating. Evaluation entails reading proposals and considering their content in light of stated evaluation factors (criteria), and then determining how well each proposal performs in terms of each of the factors. The determination should be written down in declarative sentences that (1) state the factor, (2) describe the proposal content that is relevant to that factor, and (3) state a conclusion about whether the proposal satisfies, fails to satisfy, or beneficially exceeds the factor.

Rating or scoring is used to (a) aggregate and (b) summarize the conclusions reached by evaluators in simple terms -- outstanding, good, acceptable, marginal, and unacceptable, etc. This is done in order to provide an array of the evaluation results that makes them immediately comprehensible and facilitates preliminary comparisons. However, aggregation, summarization, and simplification result in the loss of detail. For example, the DOD definition of "Outstanding" is:

Proposal meets requirements and indicates an exceptional approach and understanding of requirements. Strengths far outweigh any weaknesses. Risk of unsuccessful performance is very low.

That rating tells you that the approach is "exceptional," but it does not tell you what that means or the bases for that conclusion. The rating would enable an SSA to make preliminary comparisons among offerors, but not the kind of rational technical/price tradeoffs that will enable a decision to withstand critical scrutiny. While such ratings are helpful when there are several factors and subfactors, they are not necessary if there are only two or three factors, in which case an SSA can go directly to the write-ups.

Adjectival ratings are less useful for aggregating complex results than numbers, but many agencies prohibit the use of numbers because acquisition practitioners don't know how to use them properly.



I'm glad that DoD doesn't allow the use of numerical ratings. Numerical rating systems provide the illusion [EDIT: corrected from "allusion"] of precision in the score ratings and overemphasize small differences in "scores" between proposals. They also foster the notion that one can focus on the scores rather than the underlying basis for the ratings. It can be a very lazy method. In reviewing protests, one could tell that the evaluators and SSA were making selections and justifying them based upon scores.
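
To make that concrete, here is a small, entirely made-up illustration (a minimal sketch in Python; the factors, weights, and scores are invented for this example, not taken from any actual evaluation):

# Hypothetical weighted scoring -- three factors, weights sum to 1.0
weights = [0.5, 0.3, 0.2]
proposal_a = [90, 80, 70]   # one board's subjective point assignments
proposal_b = [85, 85, 85]

score_a = sum(w * s for w, s in zip(weights, proposal_a))   # 83.0
score_b = sum(w * s for w, s in zip(weights, proposal_b))   # 85.0

# The two-point spread looks precise, but it rests entirely on judgment calls:
# had the board scored proposal A's first factor 94 instead of 90, both proposals
# would total 85.0 -- the apparent difference is an artifact of the scoring.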

I don't know about the Navy, but the Air Force color system (equivalent to the use of adjectives) was around for a long time, while much of the Army used numbers until the 2004 AFARS revision banned the use of numerical weights for factors and numerical scoring for ratings. It was very painful and a mystery to those in our agency who had relied on the scores rather than on the detailed comparison of the actual underlying differences between proposals, which are the meat of the evaluation. It makes it more difficult to try to assign a rating and then back into it by providing the necessary strengths, weaknesses, deficiencies, etc.

[EDIT: "That's why" deleted] It is a useless waste of time for individuals to assign ratings ahead of the consensus. The individual evaluator's effort must focus on "reading proposals and considering their content in light of stated evaluation factors (criteria), and then determining how well each proposal performs in terms of each of the factors." The determination should be written down in declarative sentences [or bullet statements] that (1) state the factor, (2) describe the proposal content that is relevant to that factor, and (3) state whether the proposal satisfies, fails to satisfy, or provides benefit(s) that would exceed the minimum factor criteria.

When the team gets together for the consensus evaluation, each member of the team should contribute input from their notes to do the same thing in full detail, as Vern described above. Once all the comments are agreed to in a consensus fashion, the group looks at the rating criteria, and it is usually a very simple and quick step to pop out the resulting, appropriate rating for the factor or subfactor. ESSENTIALLY ALL THE FRUITFUL EFFORT WAS SPENT ON DETERMINING AND DOCUMENTING THE UNDERLYING BASIS OF THE RATING.
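
To illustrate how mechanical that last step can be, here is a rough, hypothetical sketch (the decision rules are my own invention, loosely patterned on the DoD adjectival definitions, and counting strengths against weaknesses is only a crude stand-in for the board's judgment):

def factor_rating(deficiencies, significant_weaknesses, weaknesses, strengths):
    # Each argument is a list of the documented consensus findings for one factor.
    if deficiencies:                       # a material deficiency makes the proposal unawardable as submitted
        return "Unacceptable"
    if significant_weaknesses:             # appreciably increased risk of unsuccessful performance
        return "Marginal"
    if len(strengths) > len(weaknesses):   # strengths outweigh the remaining weaknesses
        return "Good"
    return "Acceptable"                    # strengths and weaknesses roughly offsetting

The point is that all the real work is in the lists of findings, not in the lookup.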

Then, as Vern said, the group develops a summary matrix of the factor and subfactor ratings, which at best allows the determining official a high-level view of which factors are more important within a proposal and of the relative rating differences between proposals, along with the prices.

The official then makes a detailed comparison between proposals, having the detailed comment writeups for their use. Of course, the official must use their own judgement and can question or override the evaluation board's evaluation. In more complex DoD source selections, the official will have some assistance and advice from a source selection advisory council, which is supposed to "QC" the evaluation board and be able to advise the selection official.

This is a simplified outline of the process but I do prefer the use of adjectival systems, having used both numerical and adjectival systems.


Guest Vern Edwards

Joel:

Im glad that DoD doesn't allow the use of numerical ratings. Numerical ratings systems provide the allusion of precision in the score ratings and overemphasize small differences in "scores" between proposals. And it also fostered the notion that one can focus on the scores rather than the underlying basis for the ratings. It could be a very lazy method. In reviewing protests, one could tell that the evaluators and SSA were making selections and justifying them based upon scores.

With respect to the above comments on numerical scoring, you're attributing to an effective system of notation the faults properly attributable to incompetent users, just like the man who can't shoot blames his rifle or who can't ride blames his horse. Had you been trained by someone who knew what he was doing in source selection, read any of the numerous books on decision analysis, or taken a university course on decision-making, you'd know better. If agencies that prohibit the use of numbers were honest, they would say that they prohibit such use because their people don't know how to use numbers properly.

As for this:

I don't know about Navy but the Air Force color system (equivalent to the use of adjectives) was around for a long time, while much of the Army used numbers until the AFARS banned the use of numerical weights for factors and number scoring for ratings in the 2004 AFARS revision. It was very painful and a mystery to those in our agency who had relied on the scores rather than the detailed comparison of the actual underlying differences between proposals, which are the meat of the evaluation. It makes it more difficult to try to assign a rating then back into it by providing the necessary strerngths, weaknesses, deficiencies, etc.

That's why it is a useless waste of time for individuals to assign ratings ahead of the consensus.

The third sentence in that quote describes something that only incompetent people would do -- reach a conclusion before reasoning. Such behavior cannot be blamed on the use of numbers. The last sentence in the quote is a non sequitur. It does not follow from what you said in the three sentences before it. There is no logic in it.

As for this:

Numerical ratings systems provide the allusion of precision in the score ratings and overemphasize small differences in "scores" between proposals.

The "allusion" of precision? I assume you meant illusion. Well, untrained minds misuse their communication tools and misconstrue the information they receive. That doesn't make the tool or the information bad. You would blame the phone for the erroneous message. Let's put the blame where it belongs, on the user.

As for our difference of opinion about the utility of having individual evaluators assign ratings before the consensus meeting, I'll leave it at that -- a difference of opinion. Either way will work for competent people. But consider this: letting people form their own opinions about ratings and then discuss them with others might reduce the chance of groupthink. Otherwise, weak members of a group tend to go along with strong members. I'll let it go at that.


Vern, I didn't personally have a problem with numerical rating systems. I understood how the system should work and trained my evaluation board members and technical evaluators how to review and evaluate proposals, methodically documenting the basis for the rating that the board assigned after developing the consensus comments. I didn't allow my boards to back into the rating. Yes, my boss did; I learned from him how NOT to do evaluations, and after I assumed his role, the practice stopped completely in our District. However, I observed many other organizations outside our local USACE District that started with or focused on the numerical rating and then may have backed into it by developing some comments to support it. You could also see such practices by reading between the lines in GAO decisions. I met a contracting person shortly after the 2004 AFARS update who was the chairperson of the DA committee that developed the ban on numerical weights and scoring. She confirmed that the problems I described were occurring on a widespread basis throughout the Army.

I didn't have any problems adapting to adjectival ratings. After the AFARS banned numerical scoring, though, I had to explain to a lot of KOs and others I met in the USACE community how to evaluate and rate proposals using non-numerical, non-weighted rating systems.

Yes, I should have used "illusion" rather than "allusion" in post #21. Thanks.

Please remove the words "That's why" from the sentence in post #21 about it being a waste of time for individuals to assign ratings during their initial proposal reviews. I started a thought, then changed it mid-sentence. Forgot to proofread it. Thanks.

The other thread that I mentioned in post #17 concerned disappointed offerors unsuccessfully trying to make a big deal out of differences between individual rating notes and the ultimate consensus ratings in their protests.

Guess we will agree to disagree.

