
Justifying award to higher priced


Freyr


To what degree are agencies required to justify paying a premium for a better technical solution if the solicitation says the technical factors are significantly more important than price? I understand that each additional dollar doesn't need to be accounted for (e.g., Feature A is worth $20,000, Feature B is worth $50,000, etc.), but how in-depth should the analysis for that tradeoff be?

I've read this article and the cited GAO cases but I'm not sure I fully grasp what it means to provide meaningful consideration of price without quantifying features of a proposal that may not be quantifiable. 


If you are using a tradeoff process, as opposed to, say, a higher technically rated with a fair and reasonable price process, then you must explain what good things you are getting from the higher-priced offeror that you are not getting from the others, or what bad things you are getting from the others that you are not getting from the higher priced offeror, and why the marginal difference in nonprice value is worth the difference in price.

In other words, if you are paying a price premium, you must explain what you are getting for it and why it's worth it.


Vern's answer is right.  But I will add that the justification need not be dollarized or objectified.  See FAR 15.308:  "The source selection decision shall be documented, and the documentation shall include the rationale for any business judgments and tradeoffs made or relied on by the SSA, including benefits associated with additional costs. Although the rationale for the selection decision must be documented, that documentation need not quantify the tradeoffs that led to the decision."

Example--

Pillow A -- $10 -- Soft
Pillow B -- $12 -- Softer

acceptable:  I select Pillow A as the best value.  It is not as soft as Pillow B, but it is still sleepable.  And I am not able to justify paying 20% extra for B's additional softness. /s/SSA

  • Metro Productions Government Services, LLC B-416203, B-416203.2, Jul 6, 2018:
  • While the SSA noted that “Metro’s approaches have appreciable merit,” he determined that the “level of technical superiority of Metro’s proposal [did] not justify a price premium of approximately 18% higher” price when compared to DCG’s proposal, which was assigned a good rating under the technical factor. Id. In making his final decision, the SSA took into account the substantial confidence ratings both proposals received for past performance, and decided that price was the significant discriminating factor in Metro not receiving award of the contract. Accordingly, we deny this protest allegation because our review of the record confirms that the Army reasonably selected a lower‑rated, lower‑priced offer after concluding that the price premium involved in selecting Metro’s higher‑rated proposal was not justified in light of DCG’s good technical competence that was available at a lower price.

also acceptable:  I select Pillow B as the best value.  It is softer than Pillow A, and that softness will contribute to better sleep and better other outcomes.  The better sleep and other outcomes are worth the 20% price premium.  /s/SSA

  • Allied Technology Group, Inc., B-282739, August 19, 1999:
  • In making the tradeoff decision resulting in an award to an offeror with a higher technically rated, higher priced proposal, there is no requirement that the agency provide an exact quantification of the dollar value to the agency of the proposal's technical superiority.

 

  • MVM, Inc., B-407779, B-407779.2, Feb 21, 2013:
  • Where a cost/technical tradeoff is made, the source selection decision must be documented, and the documentation must include the rationale for any tradeoffs made, including the benefits associated with additional costs. However, there is no need for extensive documentation of every consideration factored into a tradeoff decision, nor is there a requirement to quantify the specific cost or price value difference when selecting a higher-priced higher-rated proposal for award.

Before the Brooks ADP Act was repealed in 1996, the GSBCA (General Services Board of Contract Appeals) had jurisdiction over protests of all ADP equipment and resources contracts, including what constituted Federal Information Processing (FIP) resources. One prominent protest was a July 1994 decision, B3H Corp. v. Department of Air Force, GSBCA No. 12813-P, 94-3 B.C.A. (CCH) ¶ 27,068, 1994 WL 372020 (1994), granting the protest of B3H Corporation (B3H) in a best value procurement. The GSBCA overturned the Air Force's determination that a higher-priced, higher-rated offer was worth paying the difference in price. The GSBCA held that the Air Force hadn't quantified the additional benefits (including all tangible and intangible advantages) needed to justify its best value decision.

In addition, under an outlandish interpretation (by the GSBCA?), the GSBCA's protest jurisdiction was extended to any construction contract that included fire protection systems, HVAC systems, and other building systems with microprocessors (FIP) in the monitoring sensors and control systems.

In 1995, the GSBCA was requiring agencies (including DoD) to quantify virtually all advantages or benefits, whether tangible or intangible, to justify best value award decisions, including on construction contracts. My office was caught up in one of those protests, in which the B3H decision was cited as precedent. We had to go back, re-evaluate proposals, and try to quantify every advantage of a first-rate offer over a mediocre proposal for a major hospital utility, mechanical, electrical, and HVAC system monitoring and control system ("UMCS") upgrade.

Fortunately, the Air Force eventually appealed the B3H decision to the U.S. Court of Appeals for the Federal Circuit (75 F.3d 1577 (Fed. Cir. 1996)). The Circuit Court reversed the GSBCA's position that the best value determination had to be quantified to justify paying more than the lowest-priced technically acceptable offer or any other technically acceptable, lower-priced offer. See: https://law.justia.com/cases/federal/appellate-courts/F3/75/1577/475281/

Also fortunately, the DoD authorization bill for fiscal 1996 repealed the Brooks Act and, with it, the GSBCA's protest jurisdiction. Thank God!


6 hours ago, Don Mansfield said:

The only thing I'd add is that when he decides whether to pay the extra $2 or not, what he said in his solicitation about the relative importance of price and softness is irrelevant.

I think I know what you mean, but given the OP's request for explanation it might help if you explained. I can provide quotes from plenty of GAO decisions that would seem to contradict you. For example:

Quote

Here, the record demonstrates that the SSA's best-value tradeoff decision was based on a detailed qualitative comparison of the proposals, consistent with the stated evaluation scheme, and identified discriminators between the proposals under each factor. AR, exh. 14, BCM at 48-50. As part of the SSA's tradeoff decision, the SSA acknowledged that LinTech received the highest past performance rating. Id. at 50. However, the SSA found that given the solicitation's stated relative importance of evaluation factors-- technical approach was more important than past performance--LinTech's superior past performance record was not substantial enough to outweigh the superiority of Nexagen's technical approach. As a result, the SSA concluded that Nexagen offered the highest-rated overall proposal under the non-price factors.

Protest denied. LinTech Global, Inc., GAO B-419107, 2021 CPD ¶ 5, Dec. 10, 2020. That kind of thing does not seem to indicate that statements of relative importance are "irrelevant."


18 hours ago, Vern Edwards said:

@joel hoffmanA tale from the crypt. 😳

And your point is...? 

The FAR 15.308 "Source selection decision" language that doesn't require dollar quantification of the additional benefits of selecting a higher-priced proposal was added by FAC 97-2 in September 1997 (the "FAR 15 Rewrite").

Before that, FAR 15.611 "Best and final offers" (d) merely said: "Following evaluation of the best and final offers, the contracting officer (or other designated source selection authority) shall select the source whose best and final offer is most advantageous to the government, considering only price and the other factors in the solicitation (but see 15.608(b))" [which described conditions for rejecting all offers, revising the solicitation, and resoliciting new offers].

The pre-FAC 97-2 source selection procedures were not well written or very clear. I would read any references I could find, especially GAO protest decisions, for guidance on determining what constituted the "best value." I was pretty confident that we didn't have to quantify every difference between proposals in dollars to make that determination.

Then we were punched in the stomach by a GSBCA protest on our 1995 Utility Monitoring and Control System overhaul of a large Army hospital, where the GSA Board cited the 1994 GSBCA protest decision in B3H vs. the Air Force. I'd never heard of that decision before, or of the GSBCA's jurisdiction over DoD acquisitions. This was our first UMCS project for the Army MedCom; another USACE office had been handling those projects. We learned later, the hard way, that every single source selection competition for such systems was protested to the GSBCA, and MedCom would shop around other Districts. In this case, the loser protested within an hour or two of receiving the selection notification with a canned protest. There was no time for a debriefing, and the protest claimed errors that it could never have known about, and which weren't true anyway. The only basis the GSBCA really had was the B3H decision.

 The GSBCA directed us to replace the SSA (Contracting Officer) and the entire source selection Board and re-evaluate proposals. The follow-on Board came to the very same conclusions that we did and the loser appealed again.

This time HDQTRS USACE appealed the next goofy GSBCA Protest decision to a court, and was able to overcome GSBCA jurisdiction by getting the Court to agree that the type of data processors used in typical building systems was not “FIP” for purposes of the Brooks ADP Act.  Thus, GSBCA had no jurisdiction over our DoD acquisition. And the Court agreed that the SS Decision was reasonable.

About the same time in 1996, Congress repealed the Brooks ADP act, which stripped GSBCA of its protest jurisdiction. And the Air Force successfully appealed the B3H GSBCA decision.

Then the FAR language was updated in 1997 to make it clear that no quantification of every benefit is required. Given the chronology of events, I'm fairly certain that the language was added, or at least the need for it was reinforced, by the GSBCA's incorrect interpretation of how to determine best value.

This thread caught my attention and re-opened some bad memories and disgust with the GSBCA protest machinations.

 


@joel hoffman

17 minutes ago, joel hoffman said:

This thread caught my attention and re-opened some bad memories and disgust with the GSBCA protest machinations.

They were occasionally good for entertainment though. I'll never forget this quote from SMS Data Products Group, Inc., 87-1 BCA 19496, GSBCA 8985-P, Dec. 3, 1986:

Quote

As we have earlier noted, Federal Acquisition Regulation 15.609(c) requires that vendors be notified “at the earliest practicable time” after it is determined that their proposals are no longer in the competitive range. In reply to any assertion that the contracting officer should have ejected SMS from the competitive range in early February 1986, after he had received the reports from the mandatory requirements and greatest value panels on SMS's revised technical proposal of January 6, 1986, Federal Data and the respondent regurgitate the same swill that we were offered by the contracting officer. We find this offering no more palatable on the second serving than it was on the first. 

 


If you think about it, it should be obvious that one couldn't even evaluate and compare many aspects or relative merits of offerors' performance capability, or even their technical approaches, if each additional dollar of cost had to have a quantitative benefit value associated with it.

In my opinion, the GSBCA was out of its league, or at least out of its mind, in getting into the protest arena, especially for DoD acquisitions. $/?&@##% agitators!!!

I don’t know why the Air Force waited two years to appeal the B3H decision. I begged our HDQTRS a year earlier to urge the USAF to appeal… Of course, I was simply an outside the Beltway employee.


2 minutes ago, Vern Edwards said:

@joel hoffman

They were occasionally good for entertainment though. I'll never forget this quote from SMS Data Products Group, Inc., 87-1 BCA 19496, GSBCA 8985-P, Dec. 3, 1986:

 

Voila!! 


5 minutes ago, joel hoffman said:

If you think about it, it should be obvious that one couldn’t even evaluate and compare many aspects or relative merits of the performance capability of proposers or even their technical approaches, if each additional dollar cost had to have a quantitative value associated with it. 

You might be overstating your case. You can use numbers to analyze anything. COs just are not trained to do it. The problem is not that it cannot be done. The problem is that you cannot expect COs to be able to do it.


1 hour ago, Vern Edwards said:

You might be overstating your case. You can use numbers to analyze anything. COs just are not trained to do it. The problem is not that it cannot be done. The problem is that you cannot expect COs to be able to do it.

Actually, the old $/point ratio that my boss used to make and justify the selection recommendation is a (crude) form of quantification. It just didn't make any sense to me. I began getting rid of it by first relegating it to an "indicator" of relative value when he assigned me to conduct some, then many, of the acquisitions.
When he moved to another USACE office, I cleared it with our Chief of Contracting (the primary SSA) and we dumped it.

I was actually surprised that the contractor community simply accepted the $/point basis for the selection in debriefings. Probably because they were all engineers or business majors, used to numbers and formulas. Plus the whole thing was a mystery to them anyway and they respected my boss. 🤪

That was in the 1989-1991 time frame. 
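For readers who never saw the old $/point method in action, here is a minimal sketch of the arithmetic as described above. The offeror names, prices, and point scores are invented for illustration, not taken from any actual procurement:

```python
# Hypothetical $/point comparison: divide each offeror's evaluated
# price by its total technical point score; the lowest ratio "wins".
# All names and numbers below are invented for illustration.
offers = {
    "Offeror A": {"price": 1_200_000, "points": 80},
    "Offeror B": {"price": 1_350_000, "points": 95},
}

# Dollars paid per technical point, per offeror.
ratios = {name: o["price"] / o["points"] for name, o in offers.items()}

# The method selects the lowest dollars-per-point ratio.
best = min(ratios, key=ratios.get)

for name, ratio in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ratio:,.0f} per point")
print("Lowest $/point:", best)
```

One reason the ratio "didn't make any sense": it implicitly assumes every technical point is worth the same number of dollars across all offers, which is exactly the kind of quantification the tradeoff decision need not, and often cannot, make.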


Yes, you can do almost anything with numbers. When we were using point scoring, my boss would set up 100 points and divvy them up among factors and subfactors. However, sometimes there would be only a few points' difference in ratings between proposers, so it looked like we were nitpicking.

I anticipated "dissatisfaction" among the unsuccessful offerors. So I set up 1,000 points, which made the distinctions appear ten times greater. It's all in the eye of the beholder: it's easier to perceive a ten-point difference than a one-point difference.
 

I am glad that the Army banned point scoring. 


@joel hoffman

On 7/16/2021 at 6:07 AM, joel hoffman said:

I am glad that the Army banned point scoring. 

The prejudice against numerical rating methods in source selection is strong and, as a practical matter, insurmountable. However, if you read any textbook or article about decision-making the authors will likely use a 0 to 100 point rating system. (NASA uses a 1,000 point system, and you can hardly call them idiots.)

Two highly distinguished authors of one of the greatest of all textbooks on decision-making said this about the prejudice against numerical methods:

Quote

The fundamental principle might be called numerical subjectivity, the idea that subjective judgments are often most useful if expressed as numbers. For reasons we do not fully understand, numerical subjectivity can produce considerable discomfort and resistance among those not used to it. We suspect this is because people are taught in school that numbers are precise, know from experience that judgments are rarely precise, and so hesitate to express judgments in a way that carries an aura of spurious precision. Judgments indeed are seldom precise—but the precision of numbers is illusory. Almost all numbers that describe the physical world, as well as those that describe judgments, are imprecise to some degree. When it is important to do so, one can describe the extent of that imprecision by using more numbers. Very often, quite imprecise numbers can lead to firm and unequivocal conclusions. The advantage of numerical subjectivity is simply that expressing judgments in numerical form makes it easy to use arithmetical tools to aggregate them. The aggregation of various kinds of judgments is the essential step in every meaningful decision.

[Emphasis added.]

I find adjectival/color rating systems to be seriously dumb and, as a practical matter, almost useless. But they've been with us since the late 1970s and are not going away, given the ignorance of decision analysis among policymakers and the workforce. It's funny how people who oppose the use of well-established numerical methods are willing to live with and base important decisions upon vague concepts such as "strengths," "weaknesses," and "significant weaknesses."

It's very hard to overcome a prejudice among those who won't learn. I usually don't try.

But I think you and I have discussed this before.

There is a new book out entitled, Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else, by Jordan Ellenberg. The first chapter tells how Abraham Lincoln taught himself the art of logical persuasion by studying Euclid's Elements. I read about it in The Wall Street Journal, and I'm reading it now.


Yep, we have discussed it previously. I didn't have problems working with points, other than the $/point method, but obviously many others did.

7 hours ago, Vern Edwards said:

people are taught in school that numbers are precise, know from experience that judgments are rarely precise, and so hesitate to express judgments in a way that carries an aura of spurious precision. Judgments indeed are seldom precise—but the precision of numbers is illusory. Almost all numbers that describe the physical world, as well as those that describe judgments, are imprecise to some degree.

My biggest problem with numerical rating levels based upon a range or specific points is how others in the Army, and particularly the USACE, seemed to rely on an illusory sense of precision in the rating system.

In fact, they often would assign a "point score" to fit a desired rating level and then back into justifying the rating, rather than first documenting and describing the strengths, weaknesses, uncertainties, deficiencies, etc., and then assigning the rating, which is relatively easy to do for either numerical or adjectival rating systems.

In addition to handling the source selections for my District, I taught the Corps-wide life cycle acquisition process for design-build construction, as well as contract administration and various other contracting courses, for the USACE over the course of 30 years. My KO fellow instructors did not seem to understand the simplicity of first identifying, listing, and coming to a consensus on the underlying evaluation comments, then letting the score or adjectival rating fall out per the rating definitions. And then they often simply tended to compare the factor roll-up ratings rather than the relative differences.

We can just agree to disagree, then.  Just my opinion from observing the seeming confusion and Protests over about 30 years. 


I am not a fan of numerical scoring in government procurements, but not because of ignorance.  My primary reason is that the process is unnecessarily complex for the vast majority of government procurements.  But I acknowledge that there is a place for numerical scoring in very complex acquisitions.


On 7/16/2021 at 9:25 AM, ji20874 said:

I'm not a fan because the process is unnecessarily complex for the vast majority of government procurements as my primary reason. 

@ji20874I'd like to address that.

FAR requires that agencies evaluate offerors and their offers on the basis of stated evaluation factors.

Following the concepts and principles of decision analysis, I say that "evaluation factors" are attributes (characteristics, features, properties, qualities, etc.) of the things to be evaluated. They are qualities that will contribute to the achievement of the government's objectives, either by their presence or absence.

A description of an evaluation factor should include a description of the object of evaluation—the thing to be evaluated—and the attribute of interest. If the evaluation factor is "soundness of proposed approach," then the object of evaluation is the proposed approach and the attribute of interest is soundness.

The purpose of evaluation is to determine whether and to what extent various facets of offerors and their offers possess specified attributes. The findings of the evaluators in that regard are the principal products of evaluation. If the evaluation factor is soundness of approach, the evaluators will read the offerors' descriptions of their proposed approaches and determine whether and to what extent they possess the quality called "soundness." The evaluation findings will be statements of the extent to which each of the proposed approaches is or is not "sound."

Both the GAO and the COFC have condemned the practice of basing a source selection decision on ratings alone. See, e.g., Braseth Trucking, LLC v. U.S., 124 Fed. Cl. 498, Dec. 4, 2015:

Quote

The government's argument to the contrary—that Braseth and Corwin had no substantial chance because they would have received the same “satisfactory” rating even if the CO had not imputed Connie's past performance to them—is unavailing, for “proposals awarded the same adjectival ratings are not necessarily equal in quality.” Blackwater Lodge & Training Ctr., Inc. v. United States, 86 Fed.Cl. 488, 514 (2009). Indeed, adjectival ratings are simply “useful as guides to decision-making” and are not intended to be outcome-determinative. See Redstone Tech. Servs., B–259222, 1995 WL 153633, at (Comp.Gen. Mar. 17, 1995) (rejecting a CO's “mechanical” application of adjectival ratings).

And see Manhattan Strategy Group, LLC, GAO B-419040.3, 2021 CPD ¶ 216, May 21, 2021:

Quote

[T]he evaluation of proposals and the assignment of adjectival ratings should not generally be based upon a simple count of strengths and weaknesses, but upon a qualitative assessment of the proposals consistent with the evaluation scheme; it is well established that adjectival descriptions and ratings serve only as a guide to, and not a substitute for, intelligent decision-making. Environmental Chem. Corp., B–416166.3 et al., June 12, 2019, 2019 CPD ¶217 at 12. 

Since FAR does not require ratings, and since the GAO and COFC will not permit agencies to base their decisions upon them, why use them? The reason is to reduce complexity and facilitate rationality.

The finding associated with each of the various evaluation factors is stated in different terms than the others. That's because each factor is a different attribute, with a different description and a different scale of measurement or assessment. The terms used to describe the findings about "soundness of approach" will not be the same as those used to describe, say, "depth of experience," because those are attributes of different natures measured or assessed on different scales. Taking all factors into consideration at once in order to determine each offeror's overall value can be very hard if there are more than two or three factors.

Ratings are used in order to simplify complex factor findings by converting findings expressed in different terms and on different scales to common terms and scales.

In the quote I provided above, the authors stated:

Quote

The aggregation of various kinds of judgments is the essential step in every meaningful decision.

Each finding is like a fraction with a unique denominator. You cannot aggregate them until you find the lowest common denominator (LCD). And that's where rating comes in. Ratings, whether they be numbers, adjectives, colors, emojis, or whatever, are like LCDs. A rating system converts judgments expressed in various terms to common terms like 85 points, "Good," or Green. However, you still cannot aggregate adjectives or colors.

I agree that if you have a simple acquisition, by which I mean one that entails the evaluation of only two or three factors, then rating is unnecessary and perhaps a needless complication. Taking two or three factors into consideration at once is not too hard. But if you have an acquisition in which there are 5 or even 100 factors, then rating is useful and even necessary.

Now, if rating is a useful or necessary means of simplifying complex information for decisional purposes, then a numerical system is superior to an adjectival system, because, quoting the authors again, "The advantage of numerical subjectivity is simply that expressing judgments in numerical form makes it easy to use arithmetical tools to aggregate them." 

If one has had the proper training, the process of using numerical rating is not inherently complex. There are several ways to do it (e.g., Simple Additive Weighting) and many books that describe them.  But to one who has not had the proper training, writing a simple English declarative sentence is complex, and so is long division. And goodness knows, the contracting workforce does not receive proper training in formal decision-making techniques.
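As a rough illustration of Simple Additive Weighting, the method named above, and not of any agency's actual procedure, here is a minimal sketch. The factor names, weights, and raw scores are invented for illustration:

```python
# Simple Additive Weighting (SAW), sketched with invented data:
# 1. normalize each factor's raw scores to a common 0-to-1 scale,
# 2. multiply by the factor weight,
# 3. sum to get one overall score per offeror.
weights = {"technical": 0.5, "past_performance": 0.3, "price": 0.2}

# Raw scores per offeror; the price entries are evaluated prices.
raw = {
    "Offeror A": {"technical": 70, "past_performance": 90, "price": 1_000_000},
    "Offeror B": {"technical": 95, "past_performance": 80, "price": 1_200_000},
}

def normalized(factor, value):
    """Map a raw score to 0-1 so different factor scales can be summed."""
    values = [scores[factor] for scores in raw.values()]
    if factor == "price":               # cost factor: lower is better
        return min(values) / value
    return value / max(values)          # benefit factor: higher is better

overall = {
    name: sum(w * normalized(f, scores[f]) for f, w in weights.items())
    for name, scores in raw.items()
}

for name, score in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

The design choice to note is the normalization step: cost factors are scored lowest-is-best while benefit factors are scored highest-is-best, so every factor lands on the same 0-to-1 scale before the weights are applied and the judgments are aggregated.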

I will agree in advance to disagree. I expect disagreement. I long ago gave up trying to convert anti-numericals. I am not trying to change anyone's mind here. I am not seeking anyone's agreement.

I was once a convinced anti-numerical myself, back in the late 1970s, until properly taught by knowledgeable persons.

I have been able to convert a few, but not many, either because I have not been convincing or because they were beyond the reach of my ability to inform and persuade. Based on my training and wide reading, I believe that government source selection is in the stone age, but the government manages to choose contractors, usually, eventually, so what the heck.

All I'm doing here is thinking out loud and talking to myself.


The primary value of using numbers, for me, was to convey the relative importance and magnitude of the hierarchy of the various factors and subfactors, for perspective, to both industry and government selection teams.

Many design-build projects have numerous features of interest and importance to evaluate in competitions, because we had to award FFP contracts. A meeting of the minds on scope, functionality, and quality is vital before selection and award, due to the constraints and limitations of the allowable processes.


Off topic but somewhat humorous.  I worked in ADP acquisitions at GSA at the peak of Brooks Act influence. GSA wrote a source selection manual for ADP for governmentwide use and often conducted acquisitions for other agencies; agencies had to request a delegation to conduct their own procurement, or GSA did it for them.

The policy required everything, including price/cost, to be point scored.  The lowest-priced offeror got the maximum and higher-priced offers received proportionally fewer points.  The winner was the source with the most points!
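That proportional price-scoring rule reduces to a one-line formula: price points = maximum points times (lowest price divided by this offer's price). A minimal sketch, with invented prices and an assumed 40-point maximum:

```python
# Proportional price scoring as described: the lowest-priced offeror
# gets the maximum price points; higher prices get proportionally fewer.
# The prices and the 40-point maximum are invented for illustration.
MAX_PRICE_POINTS = 40
prices = {"Offeror A": 900_000, "Offeror B": 1_000_000, "Offeror C": 1_200_000}

low = min(prices.values())
price_points = {name: MAX_PRICE_POINTS * low / p for name, p in prices.items()}

for name, pts in price_points.items():
    print(f"{name}: {pts:.1f} price points")
```

One quirk of this rule worth noticing: each offeror's price score depends only on its own price and the lowest price, so adding or dropping a high-priced offeror changes no one else's score, while a new low price rescales everyone's.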


13 hours ago, Vern Edwards said:

I think I know what you mean, but given the OP's request for explanation it might help if you explained.

Fair enough. Assume ji's pillow solicitation stated that softness was significantly more important than price, but he decides Pillow A is a better value. He trades off extra softness for a lower price. The offeror of Pillow B protests because their pillow was softer, the solicitation said softness was significantly more important than price, and the difference in price is only $2.

In deciding such protests, the GAO has historically focused on whether the tradeoff decision is rational--not whether enough emphasis was placed on a particular factor in making the tradeoff decision. 

In "Relative Importance of Evaluation Factors: Much Ado About Nothing," 18 N&CR ¶ 30, Professor Nash analyzed several protest decisions in which the protester unsuccessfully tried to make an issue of the statement of relative importance being inconsistent with the tradeoff decision. I can't find the article, but here's a response the authors provided to a letter they received criticizing their conclusion:

Quote

We should have limited our commentary to the role of relative importance in making the ultimate trade-off decision between price and the nonprice evaluation factors. It is there that the statement of relative importance is of little importance because that decision should always be made based on a determination of whether the difference in the nonprice factors is worth the difference in price. After all, that is the essence of best value.

The OP's question suggests they think that the statement of relative importance matters more in a tradeoff decision than it actually does.


@Don Mansfield, reading what you just wrote makes logical sense, but why then do COs always put "is significantly more important" when what it really comes down to is the basic question of whether the technical superiority is worth the price? Is it just to comply with the FAR? Is it just a clumsy way of saying "we'll pay more for a better technical solution if it's worth it"?

Every evaluation I've seen starts, as the FAR requires, with an evaluation against stated criteria. Why doesn't the FAR allow us to just cut to the chase and compare each proposal to the others?


38 minutes ago, Freyr said:

Why doesn't the FAR allow us to just cut the chase and compare each proposal to one another?

Does the FAR prohibit this?  Where?

Comparative evaluations are expressly allowed for simplified acquisitions, and the guidelines for fair opportunity comparisons say that scoring of offers is not required.

For source selections, some people say FAR 15.303(b)(4)'s statement that source selection authority (SSA) shall "ensure that proposals are evaluated based solely on the factors and subfactors contained in the solicitation" means the agency is required to evaluate each proposal against factors and subfactors and is a prohibition on comparison.  I beg to differ with those people.  For me, an evaluation involving a direct comparison of offers based solely on the factors and subfactors is okay for a source selection.  For a source selection, the FAR does not require the use of either numerical scoring or adjectival ratings -- although I admit that most practitioners seem to think that scoring or rating is required.  Indeed, FAR 15.305(a) expressly allows for ordinal rankings, and, well, the only way to do ordinal rankings is to directly compare.

