
Interpreting what "Neutral" means in the eyes of GAO: Past Performance


FARmer


Guest Vern Edwards
24 minutes ago, ji20874 said:

Vern,

This thread is about past performance evaluations.  My point was that we don't always do past performance evaluations as a comparative assessment.  If we did always do past performance evaluations as a comparative assessment, rather than evaluating against a factor standard, and if we didn't define relevance so narrowly in our solicitations, then a lot of the angst about neutral ratings in past performance evaluations would disappear.

... The perception also exists that past performance evaluations have to be done against a factor standard, rather than as a comparative assessment -- but as I have been trying to point out, FAR 15.305( a )( 2 ) clearly calls for a comparative assessment for the past performance evaluation.  Notwithstanding all the "training" that occurs, many contracting officers do not know this.

ji20874:

I am confused, and I think others may be as well, about what you mean by "comparative assessment." In order to help me understand, please describe such an approach. How would it be done?


ji20874:

I would like to know more about what you are saying to see if it can be applied within DoD.

It sounds like you are calling for past performance evaluations to comparatively assess which offer has the most relevant past performance.

Right now, it seems as though you are advocating for COs to:

Step 1) gather past performance data on offerors

Step 2) review the data and determine which offeror has more relevant, or better, past performance as compared to the others, rather than against a factor standard

DoD acquisitions under the Source Selection Procedures are very structured and might not facilitate such a scheme for past performance ratings. In particular, I don't know that DoD's three-legged stool approach to evaluating past performance facilitates such evaluations.

DoD relevancy ratings consider scope, magnitude, and complexity. Offerors are required to have all three legs or the stool doesn't stand. Hypothetically, an offeror could be spot on in scope and complexity, but not have the magnitude, and end up with an unknown/neutral rating.
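
To make that concrete, here is a toy sketch of the three-legged test in Python. This is my own simplification, not the actual DoD scheme, which grades relevancy rather than treating each leg as a strict yes/no:

# Hypothetical simplification of the DoD "three-legged stool" relevancy test.
# The real scheme assigns graded ratings (very relevant, relevant, somewhat
# relevant, not relevant); this sketch treats each leg as pass/fail.
def relevancy(scope_ok: bool, magnitude_ok: bool, complexity_ok: bool) -> str:
    if scope_ok and magnitude_ok and complexity_ok:
        return "relevant"
    return "unknown/neutral"

# Spot on in scope and complexity, but short on magnitude -> unknown/neutral.
print(relevancy(scope_ok=True, magnitude_ok=False, complexity_ok=True))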

Absent a mechanism to waive one of the legs of the stool, I don't see how we can comparatively assess offerors without first evaluating them against a factor standard to ensure they meet the agency's requirements. Otherwise, it seems like you could be highly ranking the best available offeror without regard to whether it satisfies agency requirements.

I'm not limiting you to DoD SSP, just looking at it from my perspective.

 


I'm not constrained by the DoD source selection procedures guide.  As I understand the guide, it calls for evaluation against a factor standard and assignment of adjectival ratings, with supporting rationale.  Maybe everyone comes out SUBSTANTIAL CONFIDENCE, or maybe everyone comes out NO CONFIDENCE, because everyone is evaluated against the standard.

Para. 1.4.4.4.3 of the DoD Source Selection Procedures prohibits the evaluators from performing a comparative assessment for the past performance evaluation, contrary to my reading of FAR 15.305( a )( 2 ) which expressly calls for a comparative assessment.  The DoD procedures require the evaluators to simply evaluate against the standard and assign adjectival ratings -- the comparative assessment is done by the SSAC (if one is appointed) and/or the SSA after the evaluation is complete and is done at the proposal level considering all factors.

Yes, Jamaal, I think a past performance evaluation is more useful and more in keeping with FAR 15.305( a )( 2 ) when it is done on a comparative assessment basis rather than an evaluation against the standard basis.  I realize that most contracting officers are constrained by their agency supplements and guides, as well as the prevalent notion that all evaluations (including past performance evaluations) must be against a standard and cannot be on a comparative assessment basis.  When opportunities arise, I remind contracting officers that FAR 15.305( a )( 2 ) expressly calls for a comparative assessment for the past performance evaluation.  I cannot mandate a particular approach to doing the comparative assessment, and would not mandate a particular approach even if I was empowered to do so.  But I do want contracting officers to be aware of what the FAR says, and I want for myself the privilege of doing what the FAR says (doing the past performance evaluation on a comparative assessment basis).

Vern, turning to the technical evaluation, I read FAR 15.305( a )( 3 )( ii ) differently than you do -- to me, it calls for:

  • a summary of each technical proposal using the evaluation factors, along with appropriate supporting narrative;
  • a matrix of each technical proposal using the evaluation factors, along with appropriate supporting narrative; or
  • a quantitative ranking of each technical proposal using the evaluation factors, along with appropriate supporting narrative.

I think you read it as calling for:

  • a summary ranking of each technical proposal using the evaluation factors, along with appropriate supporting narrative;
  • a matrix ranking of each technical proposal using the evaluation factors, along with appropriate supporting narrative; or
  • a quantitative ranking of each technical proposal using the evaluation factors, along with appropriate supporting narrative.

I am not accustomed to seeing evaluation teams prepare rankings within the technical evaluation, and our attorneys don't ask for them.  Even when I was in DoD, I never saw rankings within a technical evaluation.  Of course, I didn't see everything.


Guest Vern Edwards

ji20874:

You told me in no uncertain terms that this thread is about past performance evaluations. And you have now told us three times how important you think it is to do a comparative evaluation of past performance without evaluation standards. So I asked you how you would conduct a comparative evaluation of past performance. Instead of answering, you now want to tell me how you interpret FAR 15.305(a)(3)(ii) about technical evaluations.

I frankly do not care how you interpret FAR 15.305(a)(3)(ii). I know what it says and what it means. I disagree with you, but I won't waste a moment trying to change your mind. Instead, I want to follow your lead about past performance evaluation and understand your thinking about how to conduct comparative evaluations of past performance without evaluation standards.

So are you going to describe how you would do that or are you not? If so, please proceed. I'm interested in your ideas about this.


 


On April 28, 2016 at 7:33 AM, ji20874 said:

Joel,

Is your quote from the DoD May 1999 "A Guide to Collection and Use of Past Performance Information"?

I was trying to edit on my iPad.  No, it is from the 2001 Version 2. I hate iPad.  I hate Apple.  Not for Dummies, I guess.


Guest Vern Edwards

I can only conclude that ji20874 does not know how he would do what he says ought to be done, which is to evaluate past performance comparatively, but without the use of evaluation standards. 


   

The following excerpt is from the 2001 "DOD Guide to the Collection and Use of Past Performance Information."  It explains the difference between a comparative evaluation and a pass/fail evaluation. In both methods one must evaluate past performance against some criteria. 

 

Quote

 

Past Performance versus Responsibility Determinations 

It is important to distinguish comparative
past performance evaluations used in the tradeoff process from pass/fail performance evaluations. 

Pre-award surveys and pass/fail evaluations in the lowest price technically acceptable process help you determine whether an offeror is responsible. Responsibility is a broad concept that addresses whether an offeror has the capability to perform a particular contract based upon an analysis of many areas including financial resources, operational controls, technical skills, quality assurance, and past performance. These surveys and evaluations provide a “yes/no,” “pass/fail,” or “go/no-go” answer to the question, “Can the offeror do the work?” and thus help you determine whether the offeror is responsible.

Referral to the Small Business Administration may be necessary if a small business is eliminated from the competitive range solely on the basis of past performance. SBA referral is not required as long as the use of past performance information requires a comparative assessment with other evaluation factors and not as a pass or fail decision. The comparative assessment of past performance information is separate from a responsibility determination required by the Federal Acquisition Regulation. 

Unlike a pass/fail responsibility determination, a comparative past performance evaluation conducted using the tradeoff process is a very specific endeavor that seeks to identify the degree of risk associated with each competing offeror. Rather than asking whether an offeror can do the work, you should ask whether it will do that work successfully. In short, the evaluation describes the degree of confidence the Government has in the offeror’s likelihood of success. If properly conducted, the comparative past performance evaluation and the responsibility determination will complement each other and provide you with a more complete picture of an offeror than either one could by itself.

 

I gave up trying to edit my previous post on iPad. Every time I tried to submit, someone else had posted, freezing the edit.  😡


I think it is important for all participants in a source selection to agree that essentially all past performance is relevant, but some past performance is more relevant than other past performance. If this is agreed to, then one offeror's past performance can be seen as more relevant than another offeror's past performance in a comparative way. That is a bedrock principle that allows a past performance evaluation team to approach its work in a way that allows it to produce a comparative assessment product or document that can be of use to a source selection authority.

I agree with the point of Joel's quotation. I disagree with the point asserted earlier in this thread that "Past performance is nothing more than an assessment of how well the contractor has done that work in the past." No, a past performance evaluation is supposed to assess the likelihood of the offeror's future success for the instant procurement, based on its past performance. I do not like past performance evaluations that are nothing more than an assessment of how well the offeror has done that work in the past -- I want past performance evaluations that give a professional, subjective, and comparative assessment looking to the future -- I want an assessment of the likelihood of each offeror's success based both on each offeror's own past performance but also in comparison with each other. Because of my feeling in this matter, I am taking steps towards giving meaning to the comparative assessment called for in FAR 15.305( a )( 2 ). I am challenging the prevalent notion that past performance evaluations must be done against a standard, with no comparisons of offers, and that a comparative assessment can only be done by the source selection authority.

And since this thread is about neutral past performance ratings, I am trying to raise awareness in our professional community that FAR 15.305( a )( 2 ) expressly calls for a comparative assessment for the past performance evaluation. I believe that if more contracting officers looked for ways to do past performance evaluations as a comparative assessment, then much of the angst regarding neutral ratings and past performance evaluations would disappear, and we would get more meaningful and more useful past performance evaluations. I hope this discussion will encourage contracting officers to read or re-read FAR 15.305( a )( 2 ), and to find the "comparative assessment" words for themselves, and to try to put meaning into those words. I don't want any reader to think that I am opposed to evaluation standards -- I'm not -- I just am not satisfied with past performance evaluations that are only evaluations against a standard or that are nothing more than an assessment of how well the offeror has done that work in the past, so I generally want to ask for more from my past performance evaluators.


Guest Vern Edwards

Ji20874:

How do you do a comparative past performance evaluation without standards? That's what you said should be done. Tell us how.


ji, It is my belief, at least from reading the DoD Guide, that a "comparative assessment" as used in 15.305(a)(2) meant not using pass/fail evaluation standards for PP in a BV trade-off.  Instead, use some type of comparative evaluation standards to evaluate EACH proposal and then make a comparative analysis between proposals.  Paragraph (a)(2) falls under 15.305:

Quote

 

15.305 Proposal evaluation.

(a) Proposal evaluation is an assessment of the proposal and the offeror’s ability to perform the prospective contract successfully. An agency shall evaluate competitive proposals and then assess their relative qualities solely on the factors and subfactors specified in the solicitation.

 

 


29 minutes ago, Vern Edwards said:

Ji20874:

How do you do a comparative past performance evaluation without standards? That's what you said should be done. Tell us how.

That's not what I said.

37 minutes ago, ji20874 said:

I don't want any reader to think that I am opposed to evaluation standards -- I'm not -- I just am not satisfied with past performance evaluations that are only evaluations against a standard or that are nothing more than an assessment of how well the offeror has done that work in the past, so I generally want to ask for more from my past performance evaluators.

Joel, I understand your point about comparative not being pass/fail, and I accept that as a reasonable approach.  However, I understand from the DoD Source Selection Procedures that a contracting officer can do a proposal evaluation under FAR 15.305 with past performance on a pass/fail basis (notwithstanding FAR 15.305( a )( 2 )'s call for a comparative assessment).  To me, FAR 15.305 doesn't allow for a past performance evaluation on a pass/fail basis -- this is in contrast to the technical evaluation text in 15.305( a )( 3 ) "When tradeoffs are performed..." -- no similar words appear in ( a )( 2 ).  Even so, I have never done past performance on a pass/fail basis as a matter of professional choice -- I don't think it makes any sense.


Guest PepeTheFrog

ji20874:

Agency Alpha conducts a past performance evaluation using formal, objective standards-- the method you don't like. Then, in the source selection decision, "The source selection authority’s (SSA) decision [is] based on a comparative assessment of proposals against all source selection criteria in the solicitation...the documentation shall include the rationale for any business judgments and tradeoffs made or relied on by the SSA, including benefits associated with additional costs" (FAR 15.308).

Agency Beta conducts a past performance evaluation using the method you prefer (whatever comparative analysis you're trying to convey in this thread). Then, in the source selection decision, "The source selection authority’s (SSA) decision [is] based on a comparative assessment of proposals against all source selection criteria in the solicitation...the documentation shall include the rationale for any business judgments and tradeoffs made or relied on by the SSA, including benefits associated with additional costs" (FAR 15.308).

Are there any important differences between Alpha and Beta? To be clear, PepeTheFrog is emphasizing two stages: stage (1) proposal evaluation (FAR 15.305) and stage (2) source selection decision, including a comparative assessment of the proposals against all source selection criteria (FAR 15.308).

The only thing that hops out at PepeTheFrog is that under the method you prefer, more past performance will meet the relevancy test, and therefore be evaluated, based on your statement here, and others:

26 minutes ago, ji20874 said:

I think it is important for all participants in a source selection to agree that essentially all past performance is relevant, but some past performance is more relevant than other past performance.

It sounds like you want to lower the standard for relevancy, and therefore spend more time and effort evaluating a greater amount of past performance. To accomplish that goal, you need not mention comparative assessment of past performance.

28 minutes ago, ji20874 said:

I want an assessment of the likelihood of each offeror's success based both on each offeror's own past performance but also in comparison with each other.

Comparative assessment of past performance will happen in stage (2), mentioned above.

What is PepeTheFrog missing? What is the harm in having the comparative assessment (of past performance) happen at the tradeoff analysis or source selection decision stage?


Guest Vern Edwards
On April 27, 2016 at 10:04 AM, ji20874 said:

Vern,

... If we did always do past performance evaluations as a comparative assessment, rather than evaluating against a factor standard, and if we didn't define relevance so narrowly in our solicitations, then a lot of the angst about neutral ratings in past performance evaluations would disappear.

*     *     *

I've been encouraging direct comparisons (rather than evaluation against a standard) for technical evaluations for a while.  I hope the practice spreads.  Part of the hindrance is a notion in the community, as a result of all the "training" that has occurred, that we absolutely must evaluate against the standard first, before the comparative assessment of the selecting official, even in the Subpart 8.4 world.  That notion isn't true, but the perception exists.  The perception also exists that past performance evaluations have to be done against a factor standard, rather than as a comparative assessment -- but as I have been trying to point out, FAR 15.305( a )( 2 ) clearly calls for a comparative assessment for the past performance evaluation.  Notwithstanding all the "training" that occurs, many contracting officers do not know this.

 

23 hours ago, ji20874 said:

Yes, Jamaal, I think a past performance evaluation is more useful and more in keeping with FAR 15.305( a )( 2 ) when it is done on a comparative assessment basis rather than an evaluation against the standard basis.

ji20874 has been confused. I think he knows it, and now he's in denial. 

Here is the straight skinny, but we have to study some history:

Traditionally, past performance was used as a pass/fail responsibility consideration. But the GAO stated on a few occasions that responsibility considerations could be used as evaluation factors in source selection, and when used as such on a comparative basis the SBA's certificate of competency (COC) program did not apply.

After enactment of CICA, agencies began to see an advantage in using responsibility considerations as source selection evaluation factors as a way to side-step the COC program. They thought that SBA was too generous with COCs and resented SBA forcing them to do business with firms that they did not think were qualified. GAO said it was okay as long as such factors were not evaluated on a pass/fail basis. If used as source selection factors, but evaluated on a pass/fail basis, then the COC procedure would still apply. See FitNet Purchasing Alliance, B-410263, 2014 CPD ¶ 344, Nov. 26, 2014.

Quote

Under the Small Business Act, agencies may not find a small business nonresponsible without referring the matter to the SBA, which has the ultimate authority to determine the responsibility of small businesses under its COC procedures. 15 U.S.C. § 637(b)(7); FAR subpart 19.6; Federal Support Corp., B–245573, Jan. 16, 1992, 92–1 CPD ¶ 81 at 4. Past performance traditionally is considered a responsibility factor, that is, a matter relating to the offeror's ability to perform the contract. See FAR § 9.104–1(c); Sanford & Sons Co., B–231607, Sept. 20, 1988, 88–2 CPD ¶ 266 at 2. Traditional responsibility factors may be used as technical evaluation factors in a negotiated procurement, but only when a comparative evaluation of those areas is to be made. See, e.g., Medical Info. Servs., B–287824, July 10, 2001, 2001 CPD ¶ 122 at 5; Nomura Enter., Inc., B–277768, Nov. 19, 1997, 97–2 CPD ¶ 148 at 3. Comparative evaluation in this context means that competing proposals will be rated on a scale, relative to each other, as opposed to a pass/fail basis. Docusort, Inc., B–254852, Jan. 25, 1994, 94–1 CPD ¶ 38 at 6. We have cautioned that an agency may not disqualify a small business under the guise of a relative assessment of responsibility-based technical factors in an attempt to avoid referral to the SBA. Federal Support Corp., supra, at 4; Sanford & Sons Co., supra, at 3.

See also Cibinic, Nash, and Yukins, Formation of Government Contracts 4th, pp. 702-713.

In 1994, OFPP proposed a program for the use of past performance information in source selections. See 59 FR 18168, April 15, 1994. The proposal reflected the GAO case law, and thus stated, in pertinent part:

Quote

Responsibility Determinations vs. Source Selection Decisions. Some confusion exists because past performance is used in making responsibility determinations as well as contract award decisions. Federal Acquisition Regulation (FAR) 9.104-3(c) requires a satisfactory performance record before a contractor can be determined to be responsible. Similarly, FAR 9.104-1(e) requires that the contractor “have the necessary organization, experience, accounting and operational controls . . .,” in order to be determined responsible. If an agency determines that a small business is “not responsible” (i.e., not capable of performing) that determination may be appealed to the Small Business Administration (SBA), under “Certificate of Competency” (COC) procedures.

Contract award or evaluation decisions are (in comparison to responsibility determinations) made pursuant to evaluation criteria stated in the RFP. When treated as an evaluation factor, past performance information is used to make comparisons among competing firms to determine relative ratings or rankings; e.g., firm A is better (presents less risk to the government) than firm B; firm B is better than firm C, etc. Evaluation decisions assess the relative capability of firms. They are not “go/no go” decisions and are not subject to the COC process or referral to the SBA.

The program would eventually become the basis for coverage in FAR Parts 15 and 42.

What about the "neutral" problem? According to OFPP:

Quote

“New” Firms. One of the most frequently asked questions about using past performance information in source selections is, “How are new firms to be treated?” In OFPP's view, new firms should be neither rewarded nor penalized as a result of their lack of performance history. If, for example, past performance is to be rated on a scale of one to ten, a new firm should be given the average score of the other competing offerors. Unless the RFP contains a specific requirement for prior performance based on safety, health, national security, or mission essential considerations, agencies should “neutralize” the past performance factor and evaluate the merits of proposals received from new firms in accordance with other stated evaluation criteria.
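
To make OFPP's one-to-ten example concrete, here is a minimal sketch in Python. It is my own illustration of the averaging rule, not anything taken from the proposal itself:

def neutralize_new_firms(scores, new_firms):
    # scores: past performance scores (1-10) for firms that have a history.
    # Each new firm receives the average score of the rated competitors,
    # per the OFPP example quoted above.
    average = sum(scores.values()) / len(scores)
    return {**scores, **{firm: average for firm in new_firms}}

rated = {"A": 8, "B": 6, "C": 7}
print(neutralize_new_firms(rated, ["D"]))  # D gets the average: 7.0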

It was the use of past performance as a comparative evaluation factor instead of a pass/fail responsibility consideration that gave rise to the "neutral" problem, which is why ji20874 was off base in saying:

Quote

If we did always do past performance evaluations as a comparative assessment, rather than evaluating against a factor standard, and if we didn't define relevance so narrowly in our solicitations, then a lot of the angst about neutral ratings in past performance evaluations would disappear.

That's nonsense. It was comparative assessment that caused the angst. A "neutral" rule was thought to be necessary so as not to be unfair to new firms.

ji20874 confuses the idea of evaluating against standards with comparative evaluation. As the quotes above show, he believes (or believed) that evaluation against standards is inconsistent with comparative evaluation. He's wrong. He simply doesn't understand that evaluation against standards is usually Step 1 of evaluation and comparison (and ranking) is Step 2.

However, ji20874 is right that evaluation against standards is not essential. Steps 1 and 2 can be done simultaneously by comparing proposals in a series of pairs and going directly to ranking without application of standards. How could that be done? First, you look at the information about Offeror A's past performance. You do not assign a rating or score. You then look at the information about Offeror B's past performance. You then decide which you think is better, A or B, and rank them 1st and 2nd without assigning a rating. Suppose that you decide that B is better than A. You then compare Offeror B to Offeror C. If you decide that C is better than B it stands to reason that C is also better than A, and you now have a ranking 1st, 2nd, and 3rd. But if C is not as good as B, you then compare C to A to determine which of them is better. You have to do that in order to make tradeoffs later in the process. You continue in that way until you have a complete ranking of all offerors. Since you do not assign ratings, you do not need rating standards. The OFPP proposal addressed this, but somewhat confusedly:

Quote

Should Past Performance Be Scored or Not Scored? The evaluation of past performance is a subjective assessment based on specific facts and circumstances. Past performance evaluations are not generally based on absolute standards of acceptable performance and there is no requirement that such assessments be scored. The decision of whether to score past performance or to use other assessment methods such as color codings, adjectival descriptions or rankings is a determination that must be made by each procuring agency.

Comparison without ratings or scores is easy to do with a quantitative factor, but hard to do, and impracticable in most cases, with something like past performance. That's because of the complexity of the information on which comparisons are to be based. Keep in mind that the sole function of ratings and scores is to simplify more complex information in order to facilitate comparison.
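
For illustration, here is a minimal sketch of that pairwise procedure in Python. The prefer() function is a hypothetical stand-in for the evaluator's subjective head-to-head judgment, and the sketch assumes those judgments are transitive, as in the example above:

def rank_by_pairwise_comparison(offerors, prefer):
    # prefer(x, y) -> True if x's past performance is judged better than y's.
    # No ratings and no standards: each new offeror is compared against the
    # already-ranked offerors until its place in the ranking is found.
    ranking = []
    for offeror in offerors:
        for i, ranked in enumerate(ranking):
            if prefer(offeror, ranked):
                ranking.insert(i, offeror)  # offeror outranks this one
                break
        else:
            ranking.append(offeror)  # worse than everyone ranked so far
    return ranking

With offerors A, B, and C, and judgments that B is better than A and C is better than B, the function returns the ranking C, B, A without any rating ever being assigned.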

What about risk? Evaluation of past performance and evaluation of risk are separate matters. Presumably, past performance has predictive value. But risk is a more complex idea than predictive value. That's why DOD uses its "confidence assessment" scheme. I proposed such a scheme more than 20 years ago in a monograph for The George Washington University Law School: How to Evaluate Past Performance: A Best-Value Approach (1994), which went into a second edition (my copy of which I can't find at the moment). I called the scheme the Level of Confidence Assessment Rating or LOCAR, which has been used successfully. See AdapTech General Scientific, LLC, B-293867, 2004 CPD ¶ 126, June 4, 2004, and Colmek Systems Engineering, B-291931.2, 2003 CPD ¶ 123, June 9, 2003. However, my LOCAR scheme used numbers, and most agencies are not comfortable with the use of numbers. Adjectival or color rating is the norm.

There's a lot more history of the use of past performance as an evaluation factor, but a forum post is not the place to provide all of it. Perhaps I'll write a blog post some day.

Finally, contrary to ji20874's claim, FAR does not require the use of a comparative approach. In devising approaches to source selection within the best value continuum, past performance can be evaluated using either a comparative or a pass/fail approach, or not evaluated at all. But if evaluated on a pass/fail basis, the COC procedure will apply if an offeror is found to be unacceptable due to poor past performance. See FAR 15.101-2(b)(1). That's true of all traditional responsibility-type considerations.


FAR 15.305( a )( 2 ), Past Performance Evaluation, calls for a "comparative assessment of past performance information."

FAR 15.308, Source Selection Decision, calls for a "comparative assessment of proposals against all source selection criteria."

I am suggesting that these are two separate comparative assessments -- the first done as part of the past performance evaluation (hence its mention in FAR 15.305( a )( 2 )) and the second done as part of the selection decision (hence its mention in FAR 15.308).  Others seem to believe that these are the same.

Vern, I absolutely disagree with your notion expressed earlier in this thread that past performance is nothing more than an assessment of how well the contractor has done that work in the past.  No amount of mocking will get me to change my mind on this.  I hope for intellectual honesty in this forum -- I'm okay with different opinions, but I hope for intellectual honesty.  I have already pointed out the following--

8 hours ago, ji20874 said:

I don't want any reader to think that I am opposed to evaluation standards -- I'm not -- I just am not satisfied with past performance evaluations that are only evaluations against a standard or that are nothing more than an assessment of how well the offeror has done that work in the past, so I generally want to ask for more from my past performance evaluators.

I don't see anything wrong with this statement.  For those here who are satisfied with past performance evaluations that are only evaluations against a standard or that are nothing more than an assessment of how well the offeror has done the work in the past, more power to you.  But I want more from my past performance evaluators.


Guest Vern Edwards

ji20874:

Don't talk to me about intellectual honesty. You're conveniently skipping over the other comments you made about evaluating against standards, which I quoted, and emphasizing something you said late in the game, after you realized that you'd spoken too soon and said too much. You're not opposed to evaluation standards? Wasn't it you who said:

Quote

I've been encouraging direct comparisons (rather than evaluation against a standard) for technical evaluations for a while.  I hope the practice spreads... The perception also exists that past performance evaluations have to be done against a factor standard, rather than as a comparative assessment -- but as I have been trying to point out, FAR 15.305( a )( 2 ) clearly calls for a comparative assessment for the past performance evaluation... I think a past performance evaluation is more useful and more in keeping with FAR 15.305( a )( 2 ) when it is done on a comparative assessment basis rather than an evaluation against the standard basis. 

Was that written by someone who stole your identity?  And what's with this new thing about "two separate comparative assessments"? Man, you've been all over the map. Get a grip. Read a book: Competitive Negotiation: The Source Selection Process 3d, by Nash, Cibinic, and O'Brien (2011).


Just a note: other observers are interested in how ji20874 is executing past performance evaluation without standards in his office. A method was mentioned in another post, but I think it is worth further discussion. Thanks.


Guest Vern Edwards

In light of JMG’s request, it might be helpful to understand what “standards” are and why agencies use them.

Standards are criteria for judgment. Suppose that you are evaluating candidate office chairs and one of your criteria is comfort. The more comfort the better. Well, comfort is a highly subjective criterion. Different people will likely have different ideas about what is and is not comfortable in a given context. Thus, if you are going to appoint a panel to evaluate the relative comfort of competing chairs you will have either (a) to aggregate what may be widely differing opinions or (b) to set a common standard for comfort in order to ensure (1) that everyone is evaluating comfort the same way and (2) that the competing chairs will be evaluated on the same basis. Thus, you might establish standards for ratings such as:

  • Comfortable
  • Uncomfortable

or

  • Extremely comfortable
  • Very comfortable
  • Acceptably comfortable
  • Somewhat uncomfortable
  • Extremely uncomfortable

Et cetera. Each standard should describe what the evaluators must think or experience in order to assign a particular comfort rating.

Standards are generally set before receipt of proposals in order to avoid complaints that personal biases affected the development of standards after evaluators learned or suspected the identities of the competitors. Evaluators are told to evaluate strictly on the basis of the standards and to avoid direct comparisons in order to (1) ensure that minimum requirements are met and (2) avoid protests that personal preferences affected the evaluation outcome.

However, neither statute nor FAR requires the development of evaluation standards (some agencies require them), and evaluation by direct comparison is not generally illegal. Each panel member could sit in each chair and personally rank the chairs from most to least comfortable. The panel members could then share and discuss their rankings and either reach a consensus or vote on a ranking.

If an evaluation factor is simple enough, direct comparison might work perfectly well. If a factor is multi-faceted or otherwise complex, it generally becomes harder to reach a consensus among evaluators, and a majority-rule vote may be necessary. Consensus is generally preferred to voting, because majority-rule decisions are not considered the best way to determine best value, but voting is not illegal.
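
To illustrate how a panel's individual rankings might be aggregated when consensus fails, here is a toy Borda-style point count in Python. This is one possible voting scheme among many; nothing in statute or FAR prescribes it:

from collections import defaultdict

def aggregate_rankings(panel_rankings):
    # Each ranking lists the chairs best-first. A chair earns more points
    # the higher each evaluator places it (a Borda-style count), and the
    # final ranking sorts chairs by total points.
    points = defaultdict(int)
    for ranking in panel_rankings:
        n = len(ranking)
        for place, chair in enumerate(ranking):
            points[chair] += n - 1 - place
    return sorted(points, key=points.get, reverse=True)

panel = [["B", "A", "C"], ["B", "C", "A"], ["A", "B", "C"]]
print(aggregate_rankings(panel))  # ['B', 'A', 'C']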

One problem with setting standards before receipt of proposals is the possibility of short-sightedness or the emergence of an unanticipated proposal feature that is not encompassed by the standard. Such an event could give rise to an issue as to what to do: proceed on the basis of the original standard or change the standard after the fact.

The GAO has indicated that evaluation standards must be disclosed to prospective offerors. See RJO Enterprises, Inc., B-260126, 95-2 CPD ¶ 93, footnote 13.

Now we can all wait for ji20874’s response to JMG’s request.


So this topic took off like no tomorrow. Based on my original posting, my scenario involves a past performance evaluation factor that puts experience and performance together but lacks a confidence assessment scheme like the one in the DoD SSP.  Because of this, the past performance evaluation factor is less effective than it could be.


Thanks Vern. Would another example of standards, taken from the DOD Source Selection Procedures, be relevancy of past performance (p. 27)? The standards for relevancy being:

Very relevant; relevant; somewhat relevant; not relevant

(setting aside your opinion of the document)

Ref: https://acc.dau.mil/docs/DoDSSP/Source%20Selection%20Guide%20and%20Memo%201%20April%202016%20ljm.pdf

 


Guest Vern Edwards

JMG:

Yes, those are evaluation standards, e.g.:

Quote

Very relevant: Present/past performance effort involved essentially the same scope and magnitude of effort and complexities this solicitation requires.

I can't help but say that those standards are dubious. But they're definitely standards.


This topic is now closed to further replies.
