
Is it necessary to use the terms "weakness," "significant weakness" and "deficiency" when communicating during FAR Part 15 discussions?


Vern Edwards

Recommended Posts

Since FAR 15.001 defines "deficiency" separately from "weakness," I would think the CO must differentiate them accordingly. A "deficiency" is a material failure of a proposal to meet a Government requirement or a combination of significant weaknesses in a proposal that increases the risk of unsuccessful contract performance to an unacceptable level. A "weakness" is a flaw in the proposal that increases the risk of unsuccessful contract performance. A "significant weakness" in the proposal is a flaw that appreciably increases the risk of unsuccessful contract performance.


I believe one may describe the problems without labeling them. This belief isn't grounded in substantive research. Rather, without doing research outside of the FAR, I simply believe the "labeling" is shorthand for the full definitions at FAR 15.001, an optional convenience. I am not aware of anything within the FAR that mandates the use of the terms.

Here, under certain conditions the contracting officer must indicate to, or discuss with, each offeror still being considered for award, deficiencies and significant weaknesses to which the offeror has not yet had an opportunity to respond. FAR 15.306(d)(3). Now, using the terms would likely increase clarity and brevity, but I don't read their use as required.

I won’t be surprised if this question is already settled elsewhere (e.g., in a FAR supplement, policy, or case law).


22 hours ago, Vern Edwards said:

When conducting source selection discussions pursuant to FAR 15.306(d) and telling offerors about problems in their proposals, must COs label the problems using the terms weakness, significant weakness, or deficiency, as applicable, or may they simply describe the problems without labeling them?

I do not believe they need to be labeled pursuant to this reference - https://www.gao.gov/products/b-409187%2Cb-409187.2%2Cb-409187.3 


3 hours ago, C Culham said:

I do not believe they need to be labeled pursuant to this reference - https://www.gao.gov/products/b-409187%2Cb-409187.2%2Cb-409187.3 

“…nor was it required to specifically label its concern as a “significant weakness,” as Wolf Creek claims.[15] See Grunley Constr. Co., Inc., B‑407900, Apr. 3, 2013, 2013 CPD ¶ 182 at 8.”


@C Culham I'll say one thing for you, Carl—you are a good researcher!

I talked with some people last week who fall under the Department of Homeland Security and who said that they were told they had to use the labels weakness, significant weakness, and deficiency when conducting discussions. The case I gave them was General Dynamics Information Technology, Inc., GAO B-420282, January 19, 2022:

Quote

 

Finally, GDIT asserts that the agency’s discussions were less than meaningful because the agency failed to label each of its requests for additional information as relating to relative weaknesses, significant weaknesses, or deficiencies. Protest at 42-47. In this context, GDIT acknowledges that the agency’s discussion letter “asked numerous questions,” but complains that the agency “neglected to advise GDIT whether any of these areas were significant weaknesses (or even relative weaknesses) to allow GDIT to better focus on its final proposal.” Id. In “neglecting” to specifically identify the level of concern associated with each area, GDIT asserts that the agency “misled” GDIT. Id.

The agency responds that its discussions with GDIT were proper. Specifically, the agency points out that it meaningfully directed GDIT’s attention to all of the areas in its proposal that contained significant weaknesses or deficiencies, reasonably leading GDIT into the areas of its proposal where more information was required.

* * *

Here, we find no merit in GDIT’s assertions that the agency was obligated to label each of its discussion questions as to the level of the agency’s concern. Where an agency is assessing an offeror’s understanding of the requirements, identifying the area of a proposal that creates concern is more than sufficient; indeed, in assessing an offeror’s relative understanding...

 

I also gave them Dolphin Park, LLC v. U.S., COFC No. 21-1693, 15 September 2022:

Quote

 

FAR § 15.306(d)(3) puts forth the relevant regulatory standard for discussing deficiencies:

At a minimum, the contracting officer must . . . indicate to, or discuss with, each offeror still being considered for award, deficiencies, significant weaknesses, and adverse past performance information to which the offeror has not yet had an opportunity to respond. The contracting officer also is encouraged to discuss other aspects of the offeror’s proposal that could, in the opinion of the contracting officer, be altered or explained to enhance materially the proposal’s potential for award. However, the contracting officer is not required to discuss every area where the proposal could be improved. The scope and extent of discussions are a matter of contracting officer judgment.

FAR § 15.306(d)(3). Discussions are deemed meaningful when “they generally lead offerors into the areas of their proposals requiring amplification or correction[.]” WorldTravelService, 49 Fed. Cl. at 439 (quoting Advanced Data Concepts, Inc. v. United States, 43 Fed. Cl. 410, 422 (1999), aff’d, 216 F.3d 1054 (Fed. Cir. 2000)) (internal quotations omitted). The meaningful discussion requirement “does not mean that an agency must ‘spoon-feed’ an offeror as to each and every item that must be revised, added, or otherwise addressed to improve a proposal.” Id. at 439–40 (citation omitted). The government is not required to use “the word ‘deficiency’ or ‘weakness’” for discussions to be meaningful. CACI Field Servs., Inc., 13 Cl. Ct. at 734.

 

There are other decisions from both tribunals that say agencies don't have to use the labels strength, weakness, significant weakness, or deficiency.

All COs need do is point out shortcomings in proposals that offerors should address in their final proposal revisions.

The labels don't hurt, but the problem with using them is that they can lead to needless disagreements and wasted time arguing about which label should be applied to a particular problem in a proposal. ("That's not just a weakness! That's a significant weakness!" or "You rated me acceptable, but you should have given me a strength and rated me as very good!")

I blame the FAR for putting too much emphasis on the labels. I have had other people tell me that COs must use the labels.

See: "Source Selection Decisions," The Nash & Cibinic Report (June, 2018):

Quote

What is the purpose of identifying parts of a proposal as strengths, weaknesses, or deficiencies? It strikes us that if the evaluation factors are well established, as described above, then it is unnecessary to add the intermediate step of labeling particular statements or sets of statements as “strengths,” “weaknesses,” or “deficiencies.” Such ratings—while perhaps helpful to a Source Selection Authority in setting the stage for detailed consideration of the proposals based on the proposal facts and evaluation findings—become one more thing to explain, justify, document, and argue about if cited as bases for the source selection decision. And we think that the bid protest decisions we cited above, and the many others like it, prove our point.

Why be inefficient? Why get bogged down in intermediate ratings that are not essential to decisionmaking? Why not just reason from evaluation factor definitions and standards (major premises), through particular proposal facts (minor premises), to evaluation findings about proposal value? We realize that FAR 15.305(a) states: “The relative strengths, deficiencies, significant weaknesses, and risks supporting proposal evaluation shall be documented in the contract file.” But we do not read that sentence as requiring the rating of proposal content as strengths, deficiencies, and weaknesses. Our interpretation of that sentence is that if a source selection team chooses to apply such ratings it must explain (“document”) them.

* * *

The assignment of the intermediate ratings of strength, weakness, and deficiency may be helpful, but it is not necessary and is therefore inefficient, or potentially so, especially if evaluators, or an agency and an unsuccessful offeror, get bogged down in disagreement about whether such and such proposal content was or was not a strength, weakness, or deficiency. It is also risky if evaluators get lost while sailing in the proposal ocean and drift into rocks and shoals. Nevertheless, it does no harm to refer to strengths, weaknesses, and deficiencies during internal discussions, as long as the Source Selection Authority remembers that they are merely ratings and makes no mention of them in the source selection decision document. That document should refer to proposal facts, to evaluation findings about how well each proposal performs in terms of each of the evaluation factors for award, to the differences among the proposals in terms of those factors, and to the resultant differences in the value that they provide.

Nevertheless, government personnel must comply with their agency's policies.

 


I have never used these labels in my exchanges with offerors.  I regret that some DHS contracting officers think they must.

I'll go even further -- it is not necessary to use these labels even in the internal technical evaluation report.  I have led evaluations where we scrupulously avoided using those labels.


4 hours ago, ji20874 said:

I have never used these labels in my exchanges with offerors.  I regret that some DHS contracting officers think they must.

I'll go even further -- it is not necessary to use these labels even in the internal technical evaluation report.  I have led evaluations where we scrupulously avoided using those labels.

Agree.


The revised DOD Source Selection Procedures define significant strength as follows, on page 38:

Quote

Significant Strength is an aspect of an Offeror’s proposal with appreciable merit or will exceed specified performance or capability requirements to the considerable advantage of the Government during contract performance.


On 10/15/2022 at 2:26 PM, ji20874 said:

I have never used these labels in my exchanges with offerors.  I regret that some DHS contracting officers think they must.

I'll go even further -- it is not necessary to use these labels even in the internal technical evaluation report.  I have led evaluations where we scrupulously avoided using those labels.

 

On 10/15/2022 at 6:52 PM, Vern Edwards said:

Agree.

I'm not saying I disagree, just wanted to clarify if y'all are suggesting labels shouldn't be used or just that labels are not required to be used? 


14 minutes ago, dsmith101abn said:

I'm not saying I disagree, just wanted to clarify if y'all are suggesting labels shouldn't be used or just that labels are not required to be used? 

In my opinion they are not required to be used, and they should not be used because I think they are a needless, and generally useless, complication.


The DoD source selection procedures, whether good or not so good, are intended to provide some continuity and consistency in conducting DoD source selections.

There are situations where standardization in evaluation criteria and rating terminology is useful or necessary, as opposed to everyone making up their own systems or otherwise giving industry no rating guidelines.


2 hours ago, Vern Edwards said:

@joel hoffman I am the OP for this thread, and I would like to know what your last post has to do with the question I asked.

What is your point? That uniformity of practice is a good thing?

Yes, that it can be. DoD has decided that a certain amount of uniformity is their policy. That was my opening point. 

Then I explained that industry told the USACE that there was little uniformity across the various USACE Districts in their source selections, including rating criteria for the same project types. It was a hindrance to their widespread participation in a huge, critical new program.

Firms told us that, for instance, the same feature or capability (same project type, essentially the same performance-based design or performance-based design criteria) would be highly rated by one District and lower or negatively rated by another District. There was little continuity.

You made a general statement of your opinion: 

3 hours ago, Vern Edwards said:

In my opinion they are not required to be used, and they should not be used because I think they are a needless, and generally useless, complication

Design, construction and design-build firms need to know what the customer is looking for, if they are going to spend $100k or more to compete in each Design-Build competition.

The Army stated its specific, challenging goals and objectives for the MILCON Transformation Program. The Army and USACE had to significantly improve and streamline the delivery processes to achieve full scope and high quality, within the budget and within a much shorter acquisition cycle.

In my opinion, using standardized, understandable rating criteria for the thousands of new, standardized facilities was essential.

In the one fiscal year for which I was aware of performance metrics for the Army MILCON program, we achieved award of 99% of the total MILCON program's projects within 100% of the total FY budget. That was vastly better than normal FY metrics. That year, the Army MILCON program was six times larger than in the previous several FYs (about $12 billion vs. $2 billion).

And we had data showing, for a huge program at Fort Bragg, for example, that contract completion times were 40% faster than on previous design-build projects for the same types of facilities.


@joel hoffman

16 minutes ago, joel hoffman said:

In my opinion, using standardized, understandable rating criteria for the thousands of new, standardized facilities was essential.

You apparently have no idea what we've been talking about. We have not been talking about rating criteria. We have been talking about the use of ratings. What you have said proves my point—that the use of ratings is poor practice.


1 hour ago, joel hoffman said:

Using labels to identify significant strengths, strengths, weaknesses, deficiencies, etc. is [1] directly related to the rating criteria and [2] to the relative differences between proposals. 

[1] is tautological. [2] is false under the DOD scheme. Think about it.

The labels get in the way of the proposal facts, which are what should matter. Ratings like strength, weakness, significant weakness, and deficiency draw attention away from proposal facts and become a distraction and a source of needless debate and dispute. Moreover, they are strictly nominal. Two things can both be strengths, but one can be "stronger" than the other, something the rating does not reveal unless you elaborate with something stupid like +, ++, +++, etc. Ralph Nash says the only ratings that should be used are "Good things" and "Bad things." Or you could say, "Things we like" and "Things we don't like."

I have said that ratings can be useful. But in my opinion adjectival ratings are a needless and not particularly useful distraction. Most source selections are too simple to require rating schemes.

But if you have a large number of evaluation factors (as when evaluating competing aircraft, spacecraft, or ground combat vehicle designs) and want to use ratings to simplify complex findings for presentation purposes, it's hard to aggregate adjectives. In such a case you should use numerical ratings on an interval scale. But that is beyond the know-how of most contracting officers. The paranoia about numerical ratings is a long-standing embarrassment that could be cleared up with adequate education and training in the methods of decision analysis.

Quote

The fundamental principle might be called numerical subjectivity, the idea that subjective judgments are often most useful if expressed as numbers. For reasons we do not fully understand, numerical subjectivity can produce considerable discomfort and resistance among those not used to it. We suspect this is because people are taught in school that numbers are precise, know from experience that judgments are rarely precise, and so hesitate to express judgments in a way that carries an aura of spurious precision. Judgments indeed are seldom precise—but the precision of numbers is illusory. Almost all numbers that describe the physical world, as well as those that describe judgments, are imprecise to some degree. When it is important to do so, one can describe the extent of that imprecision by using more numbers. Very often, quite imprecise numbers can lead to firm and unequivocal conclusions. The advantage of numerical subjectivity is simply that expressing judgments in numerical form makes it easy to  aggregate them. The aggregation of various kinds of judgments is the essential step in every meaningful decision.

Von Winterfeldt and Edwards, Decision Analysis and Behavioral Research (1986), p. 20, funded by Navy contract N00014-79-C-0529.
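
To make the aggregation point above concrete, here is a minimal, purely illustrative sketch in Python of a simple additive (weighted-sum) value model over interval-scale factor scores. The factor names, weights, and scores are hypothetical examples made up for illustration; they are not drawn from the Von Winterfeldt and Edwards text, the DOD procedures, or any actual source selection.

# A minimal sketch of numerical aggregation: a weighted-sum (additive) value
# model over evaluation factors scored on a 0-100 interval scale.
# All factor names, weights, and scores below are hypothetical.

def weighted_value(scores, weights):
    """Aggregate interval-scale factor scores into a single overall value."""
    total_weight = sum(weights.values())
    return sum(weights[f] * scores[f] for f in weights) / total_weight

# Hypothetical evaluation factors and relative weights (sum to 1.0).
weights = {"technical_approach": 0.5, "management": 0.3, "past_performance": 0.2}

# Hypothetical scores for two competing proposals.
proposal_a = {"technical_approach": 85, "management": 70, "past_performance": 90}
proposal_b = {"technical_approach": 78, "management": 88, "past_performance": 80}

print(weighted_value(proposal_a, weights))  # 81.5
print(weighted_value(proposal_b, weights))  # 81.4

The point of the numbers is not precision; it is that, unlike adjectival labels, they can be combined and compared directly, and the weights make the tradeoffs explicit.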


On 10/14/2022 at 9:48 AM, Vern Edwards said:

When conducting source selection discussions pursuant to FAR 15.306(d) and telling offerors about problems in their proposals, must COs label the problems using the terms weakness, significant weakness, or deficiency, as applicable, or may they simply describe the problems without labeling them?

While it may not be mandatory to use labels, I think that the government representative(s) must clearly identify and distinguish between those "problems" or aspects of the proposal that would preclude award and those that could or should be improved to materially enhance the proposal's chances for award. This would likely include weaknesses, significant weaknesses, and deficiencies.

But of course, effective discussions in a tradeoff process often require, or may involve, more than describing "problems."

 


This topic is now closed to further replies.