Highest Technically-Rated, Reasonable Price/Offer (HTRO) and Responsibility Determination


govt2310


I have to suppose that the PIL's "not recommended for use under FAR part 15" label for the direct offeror-to-offeror comparison was applied not because the PIL believed the technique was prohibited -- rather, I suppose the PIL simply didn't want to engage in those arguments for formal source selections and tactically chose to encourage the technique in simplified acquisitions (including commercial items up to the FAR 13.501 threshold) and ordering situations instead.  I believe the wording was chosen very carefully.  The PIL Workbook does not require the use of ratings or scores for Part 15 procurements, and it does not prohibit comparison of offers for FAR Part 15 procurements.

The Workbook's cover says, "Everything here is intended to be helpful; nothing here is intended as policy."  The cover also says, "This document is a training aid to support the PIL Boot Camp all-day experience for the DHS acquisition community. This is not a stand-alone document."

I appreciate the ongoing efforts of the PIL to loosen things up a little and to throw off excess baggage.


4 hours ago, ji20874 said:

This comparison is inapt. 

Do you mean the example I provided of the assertion/guidance found in the PIL Workbook? Why is it inapt? The Workbook's description of the technique doesn't qualify its assertion or provide a rationale for it. Moreover, nothing in the FAR system, that I'm aware of, requires ratings, so the technique is not a novel idea fit only for a limited application where the limitation would clearly make sense. Comparative evaluations are not limited to direct comparisons without ratings.

5 hours ago, ji20874 said:

What we're talking about in this thread is not that

I’m not sure what you have limited your discussion to, but comparisons were first mentioned in the eighth post, and the original poster mentioned comparison language in the ninth post. The comparisons mentioned related to qualitative evaluation of responsibility-type factors (i.e., not pass/fail or acceptable/unacceptable). I assumed, seemingly correctly, that the original poster believed that comparative evaluation [of factors] wasn’t allowed under FAR part 15. The original poster’s reference to the PIL Workbook is not surprising even if you believe it’s misapplied.


1 hour ago, Vern Edwards said:

you have seen that unsophisticated readers can only wonder about that given the way they put things.

I wonder who the PIL’s target audience is within DHS. I know a lot of people outside of DHS who reference the PIL’s work when making arguments for or against a practice.


The original poster erred in asserting that evaluations and selections under FAR Part 15 cannot be made on a comparative basis.  Up to that point in this thread, a comparative basis was used in contrast with a pass/fail basis.  The original poster's pointing to the PIL Workbook to save his or her assertion fails, because the comparative technique described there is not identical to the comparative basis discussed throughout this thread.  Yes, the word "comparative" appears in both, but the contexts are different.

It is true that the PIL Workbook does not recommend head-to-head comparisons without adjectival ratings for Part 15 procurements, but "not recommended" does not mean "prohibited."

I am glad people are reading the PIL Workbook.  If it is generating professional dialogue, that is good.  If it is helping people shake off baggage, that is good.  I'm not aware that there really is anything else out there doing anything helpful.  But the PIL Workbook does not present itself as policy or as a textbook; rather, it presents itself as a hopefully helpful tool for those who are willing to help make the acquisition process more responsive.  For those practitioners, I think the PIL Workbook is one of the best things out there.  If there are other useful aids in other places, I hope our readers will make mention of them here.

I hope this discussion has been helpful to the original poster.  There is good advice in this thread.


1 hour ago, ji20874 said:

I am glad people are reading the PIL Workbook... I'm not aware that there really is anything else out there doing anything helpful.

@ji20874 If you are not aware of "anything else" doing anything helpful, maybe you're not looking in the right places.

I will see if I can arrange for Bob to be able to post some helpful things.


Another perhaps unnoticed benefit of the PIL Workbook is that it shows that doing something a certain way is all right.  It’s sometimes tough for an 1102 contract specialist to try something different and get approval from the policy people and contract law attorneys who review actions without proof that it’s an acceptable practice.


1 hour ago, formerfed said:

Another perhaps unnoticed benefit of the PIL Workbook is that it shows that doing something a certain way is all right.  It’s sometimes tough for an 1102 contract specialist to try something different and get approval from the policy people and contract law attorneys who review actions without proof that it’s an acceptable practice.

The PIL Workbook discussion of comparative evaluation is not proof of anything. That's not to say that it's wrong about anything, but that it is short on explanation. That's my main complaint about it. It is incomplete. Look at page 22, which presents "Innovation Technique 5, Comparative Evaluation."

WHY is comparative evaluation "Not recommended for use under FAR Part 15"? That's a natural question, which the Workbook should answer. It should explain.

In what way does comparative evaluation provide "ultimate subjectivity/flexibility"? Explain. Is subjectivity better than objectivity?

More fundamentally, what the heck is comparative evaluation and how does it work? The Workbook says it's "comparing offers to each other." Don't we always compare offers to each other when making a source selection decision? Isn't that the very essence of competition? What I think they mean is to compare offerors directly to each other, not against a standard or rating scale. They could have explained that in a few sentences. Maybe they could have used some of the space they gave to the two big and not especially useful text boxes that take up three-fourths of the page.
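To put the distinction in concrete terms, here is a minimal sketch in Python. The proposals, the narratives, and the evaluator's ordering are all invented for illustration; this is not the PIL's procedure or anyone's prescribed method, just the difference between rating each proposal against a scale and ranking proposals by direct comparison:

```python
from functools import cmp_to_key

# Hypothetical evaluation narratives, one per offeror (invented data).
proposals = {
    "Offeror A": "Sound technical approach; weak key personnel.",
    "Offeror B": "Strong technical approach and strong key personnel.",
    "Offeror C": "Acceptable approach; no discriminators either way.",
}

def adjectival_rating(narrative: str) -> str:
    """Rating-scale style: each proposal is mapped, in isolation,
    to a point on a predefined scale."""
    if "Strong" in narrative:
        return "Outstanding"
    if "weak" in narrative:
        return "Marginal"
    return "Acceptable"

# Direct-comparison style: the evaluator judges which of two proposals
# is better, head to head, without translating either one to a scale.
# A stand-in ordering represents that judgment here.
evaluator_preference = ["Offeror B", "Offeror A", "Offeror C"]

def head_to_head(a: str, b: str) -> int:
    return evaluator_preference.index(a) - evaluator_preference.index(b)

# 1) Rating approach: every proposal gets a label from the scale.
for name, narrative in proposals.items():
    print(name, "->", adjectival_rating(narrative))

# 2) Comparative approach: proposals are ranked against each other.
ranking = sorted(proposals, key=cmp_to_key(head_to_head))
print("Comparative ranking:", " > ".join(ranking))
```

Notice that the comparative ranking never touches the scale at all; the narratives and the pairwise judgments carry the whole decision, which is why the detailed documentation mentioned later in this thread matters so much.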

It refers readers to the "GAO Guide," but when you get there the only reference to comparative evaluation is a citation to a single protest decision about a task order competition. That 14-page decision does describe direct comparative evaluation, in two short paragraphs on page 3, but some explanation in the PIL Workbook about what to look for in that decision would have helped. (The protest did not challenge evaluation by direct comparison.)

Ideation without explanation is practically useless, unless the people you are communicating with already know and understand the underlying concepts and principles. 

The Procurement Innovation Lab is a good thing, and my objective in writing this is not to trash them. But they need to do better for a workforce whose leadership has failed to provide them with essential professional education and training and essential reference tools of their trade. The PIL Workbook is interesting and pretty, but it's not enough.


Okay, so Vern doesn't like the PIL Workbook.  He seems to want an exhaustive, comprehensive textbook.  Okay.  The Workbook's cover says, "Everything here is intended to be helpful; nothing here is intended as policy."  The cover also says, "This document is a training aid to support the PIL Boot Camp all-day experience for the DHS acquisition community. This is not a stand-alone document."  One errs in judging it as a stand-alone policy document outside the Boot Camp setting.  But still, if it is prompting professional dialogue, that is good. 

I am glad that many DHS practitioners (contracting officers and procurement attorneys), and maybe practitioners in other agencies, are successfully using comparative evaluations without assigning adjectival ratings in simplified acquisitions and ordering situations.  That, I think, was the PIL's goal, and that is an honorable goal.  It seems the PIL Workbook is fulfilling its intended purpose.  It would be unfair to expect the PIL Workbook to fulfill other purposes beyond its intended purpose.

Vern hopes to get some other helpful things posted here -- that's good.  I have never heard of anyone other than the PIL providing help to practitioners in doing comparative evaluations without assigning adjectival ratings, but I will be gratified to learn that others are also teaching this approach. 


2 hours ago, ji20874 said:

I have never heard of anyone other than the PIL providing help to practitioners in doing comparative evaluations without assigning adjectival ratings, but I will be gratified to learn that others are also teaching this approach. 

I didn't know whether to laugh or cry when I read that comment.

The PIL did not "help" practitioners do comparative evaluations without assigning adjectival ratings. They just told them about the possibility of doing it in certain kinds of acquisitions. They did not describe the process or the pros and cons. Aside from one GAO protest decision cited on another page, they did not refer practitioners to a source of practical information, like Hammond, Keeney, and Raiffa, Smart Choices: A Practical Guide to Making Better Decisions, Harvard Business Review Press (1999), Chapter 5, or Goodwin and Wright, Decision Analysis for Management Judgment, 5th ed., Wiley (2014), p. 40, "Direct rating." Neither book teaches the use of adjectival ratings.

In 1993, Ralph Nash and John Cibinic wrote this in Competitive Negotiation: The Source Selection Process (The George Washington University), p. 350:

Quote

Ranking [by direct comparison, without scoring] runs the risk of not identifying the key deficiencies in the proposals unless it is accompanied by detailed narratives. Nonetheless, the direct ranking of proposals has been upheld, Development Assocs., Inc., Comp. Gen. Dec. B-205380, 82-2 CPD ¶ 37. The Comptroller has even stated that "ranking proposals may be a more direct and meaningful method" than numerical scoring, Maximus, Comp. Gen. Dec. B-195806, 81-1 CPD ¶ 285.

See Development Associates, Inc., B-205380, July 12, 1982:

Quote

The protester also contests the failure to use scoring or rating procedures in the final evaluation. The RFP did not indicate, however, that proposals would be evaluated on the basis of numerical point scores. Moreover, although a point scoring system may be useful as a guide to intelligent decision-making, numerical scores merely reflect the disparate judgments of the evaluators and, as such, do not transform the technical evaluation into a more objective process. Ranking proposals directly, that is, without scores, may be a more meaningful method if ranking permits the contracting activity to gain a clearer understanding of the relative merits of the proposals. See MAXIMUS, B–195806, April 15, 1981, 81–1 CPD 285. Thus, we find nothing inherently improper with ranking proposals without the aid of numerical scores.

(Use of adjectival ratings was first mandated by the Air Force in the early 1980s because there were too many foul-ups using numerical scores, and because they thought it would foster more subjectivity.)

According to Westlaw, I have written something like 50 published articles in which I discussed scoring or rating. Fifteen years ago, in 2006, I wrote an article entitled "Scoring or Rating in Source Selection: A Continuing Source of Confusion," which Bob will post at Wifcon as soon as the publisher grants permission. In that article I wrote this:

Quote

It is essential that everyone involved in contractor selection understands the distinction between evaluating offerors and their offers and scoring or rating them. Evaluation is the process of determining the relative value of a thing. Scoring or rating is the use of words or symbols to express evaluation findings in simple terms. The Federal Acquisition Regulation requires agencies to evaluate offerors and their offers, but it does not require that they score or rate them. FAR 15.305(a) is misleading in saying that evaluations “may be conducted using any rating method or combination of methods, including color or adjective ratings, numerical weights, and ordinal rankings,” because rating, in the sense of the assignment of adjectives or symbols, is not a method of conducting an evaluation, it is a method of expressing the results of an evaluation.

I wrote this in 2009:

Quote

If an agency is considering only one or two factors, say, engine thrust and price, then it does not make much sense to bother with converting evaluation findings to scores or ratings. The evaluators can simply report their findings to the decisionmaker, who should find it easy to make nonprice/price tradeoffs and decide which competitor offers the best value. But if there are multiple evaluation factors, and if each factor is measured on a different scale, then the SSA will face a rather complex tradeoff analysis problem. The greater the number of evaluation factors and the greater the number of offerors the more complex the problem. A preliminary and conditional ranking based on scores or ratings might help an SSA get oriented. But if an agency is going to use scores or ratings in that way, it should want to develop a rational scheme. In order to be valid and useful, the scores or ratings must preserve the differences among competitors and make them discernible.
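For anyone who wants to see what such a preliminary, conditional ranking might look like, here is a rough sketch. The competitors, factor values, and weights are invented, and min-max normalization is just one of several ways to put factors measured on different scales onto a common footing while preserving the differences among competitors:

```python
# Invented competitors with factors on different scales:
# engine thrust in pounds, reliability in mean time between
# failures (hours), and price in dollars (lower is better).
competitors = {
    "A": {"thrust": 29000, "mtbf": 400, "price": 9_500_000},
    "B": {"thrust": 31000, "mtbf": 350, "price": 10_200_000},
    "C": {"thrust": 30000, "mtbf": 420, "price": 9_900_000},
}
weights = {"thrust": 0.4, "mtbf": 0.3, "price": 0.3}  # notional weights

def normalized(factor: str, value: float, higher_is_better: bool = True) -> float:
    """Rescale a raw value to 0..1 across the field of competitors,
    preserving the differences among them."""
    values = [c[factor] for c in competitors.values()]
    lo, hi = min(values), max(values)
    score = (value - lo) / (hi - lo)
    return score if higher_is_better else 1.0 - score

def preliminary_score(c: dict) -> float:
    return (weights["thrust"] * normalized("thrust", c["thrust"])
            + weights["mtbf"] * normalized("mtbf", c["mtbf"])
            + weights["price"] * normalized("price", c["price"], False))

# A preliminary, conditional ranking to orient the decisionmaker;
# the selection decision still rests on the underlying findings.
for name in sorted(competitors, key=lambda n: -preliminary_score(competitors[n])):
    print(f"{name}: {preliminary_score(competitors[name]):.3f}")
```

The point of the passage stands either way: such a scheme is an orientation aid, not a substitute for judgment based on the substantive findings.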

In a lengthy analysis of a protest against agency proposal scoring, Prof. Nash wrote this in 2010:

Quote

This takes us back to our premise in 23 N&CR ¶ 61--that scoring systems tend to do more harm than good. Wackenhut is the perfect example because there the agency lost months litigating the accuracy of scoring and further damaging the procurement process by inducing the court to demand an excessive amount of documentation supporting the accuracy of the scoring system. Dropping scoring from the process thus has two benefits. First, it forces source selection officials to base their decision on the real substantive data resulting from the evaluation process. Second, it keeps losing offerors from filing protests addressing the accuracy of the scoring. That's a double benefit that seems compelling to us. 

PIL did not invent the wheel. There is nothing new about evaluation by direct comparison without using ratings or scores.

If you have never heard, then maybe you should review your reading program. 


That's fair, Vern.

The PIL modus operandi is to assign a coach to an interested acquisition team that wants to try a new technique with a coach's assistance -- ideally, the coach will be up to date on the latest state of the practice for a particular technique.  That information is relayed orally.  


@ji20874 That's a good idea!

The PIL has done and is doing good things. But written explanatory material is of enormous value, especially when widely disseminated.

The power of ideation to drive innovation, when combined with clear and targeted explanation, is virtually limitless.

One of the greatest examples of the 20th century: S. S. Stevens, "On the Theory of Scales of Measurement" (1946). Four pages.

https://psychology.okstate.edu/faculty/jgrice/psyc3214/Stevens_FourScales_1946.pdf

Every one of my source selection students gets a copy.
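For readers who have not seen the paper: Stevens distinguishes four scales of measurement and the operations each permits. Here is a thumbnail; the source-selection examples in it are my gloss, not Stevens's:

```python
# Stevens (1946): the four scales of measurement. The "example"
# entries map them to source selection artifacts; that mapping
# is a gloss for this thread, not part of the paper.
scales = [
    ("nominal",  "offeror names, colors as labels", "classification, counting, mode"),
    ("ordinal",  "adjectival ratings, rankings",    "ordering, median, percentiles"),
    ("interval", "calendar dates, Fahrenheit",      "differences, arithmetic mean"),
    ("ratio",    "price, engine thrust",            "ratios and all arithmetic"),
]
for name, example, permits in scales:
    print(f"{name:8s} | e.g., {example:30s} | permits: {permits}")
```

The moral for this thread, at least as I read Stevens: adjectival ratings are ordinal at best, so doing arithmetic on them, such as averaging, has no defensible meaning; ordering them does.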


This topic is now closed to further replies.