
Required to Roll-up Adjectival Ratings?


Lwall


Are agencies required to roll up adjectival ratings to a single rating for each offeror?  I have found cases discussing roll-ups but nothing from the GAO saying that agencies must establish an overall rating.  Any help/citations would be much appreciated! 


Guest Vern Edwards

The FAR contains no requirement to aggregate ("roll up") adjectival, color, or numerical ratings. There may be such a requirement in some agency FAR supplement or policy document.


Since you referenced the GAO but not the FAR, I'm not sure whether you had seen this, so I'm simply providing the following as a reference in case you had not:

FAR 15.305(a)(3) -

(3) Technical evaluation. When tradeoffs are performed (see 15.101-1), the source selection records shall include --

(i) An assessment of each offeror’s ability to accomplish the technical requirements; and

(ii) A summary, matrix, or quantitative ranking, along with appropriate supporting narrative, of each technical proposal using the evaluation factors.


Vern - Agreed. I had no intention of implying that it did. Sorry you took my post to imply otherwise; it was intended exactly as stated, simply to bring to light the FAR reference on record-keeping requirements for proposal evaluation, since the OP's post could be read to suggest that the FAR was not part of the OP's research, only the GAO.

For the good of the order, I reviewed all Department/Agency supplements to the FAR to see whether a "roll-up" requirement might exist, and I found none.



Some agencies do require a 'rolled-up' rating for each factor. This is straight from the Army Source Selection Supplement, ASM3.

Step Six: Assign Ratings for Non-Cost Evaluation Factors When Using the Tradeoff Process -- At this point, the evaluators may or may not individually assign ratings to each evaluation factor or subfactor for which they are responsible. At a minimum, each evaluation group must convene to discuss the offeror’s proposal. The purpose of the discussion is to share their views on the offeror’s strengths, weaknesses, deficiencies, risks and uncertainties related to their assigned evaluation factor(s)/ subfactor(s) and to reach a final rating for each factor and subfactor using the Adjectival Rating(s) identified in the SSP. In exceptional cases where the evaluators are unable to reach an agreement without unreasonably delaying the source selection process, the evaluation report shall include the majority conclusion and the dissenting view(s), in the form of a minority opinion, with supporting rationale which must be briefed to the SSA. Consensus requires a meeting of the minds on the assigned rating and associated deficiencies, strengths, weaknesses, uncertainties and risks. A simple averaging of the individual evaluation results does not constitute consensus.

This is from the DoD Source Selection Guide.

3.2.1 SSEB Initial Evaluation. Following the initial round of evaluations, the SSEB Chairperson will consolidate the inputs from each of the evaluation teams into an SSEB report for presentation to the SSA. The PCO and the SSEB Chairperson shall ensure that proposals are evaluated solely against the criteria contained in the solicitation and no comparative analysis of proposals was conducted by SSEB members unless clearly stated in the SSP or otherwise directed by the SSA. All evaluation records and narratives shall be reviewed by the PCO, Legal Counsel, and the SSEB Chairperson for completeness and compliance with the solicitation. In the event that the SSEB members are not able to come to a consensus opinion on the evaluation of a particular proposal, the SSEB Chairperson will document the basis of any disagreement and raise it to the SSAC Chairperson, or if no SSAC, to the SSA to resolve.

We also used this method at NASA; however, I do not recall any NASA regulations that stated we had to do 'roll-ups'.


I read the opening post too quickly. He is asking about rolling up every factor into one overall rating; I have never seen an agency regulation requiring that.

 


For the topic in this thread, I see no benefit, only possible pitfalls, in requiring an overall "roll-up" adjectival rating. It could encourage simplistic decision making. It could deter some officials from performing effective tradeoff analyses.

When we (an Army Command) used numerical rating systems, we generally totaled the point scores of all the factors.

As I recall, the Army banned the use of numerical rating systems in 2004 in an AFARS revision, because many Army organizations were making simplistic selection decisions, relying on overall scores rather than comparing the underlying strengths and weaknesses, relative advantages and disadvantages, etc., between proposals under the individual factors and subfactors.

When you view a roll-up of individual factor ratings, you aren't seeing the ratings of the individual factors. Individual factors are often not equally weighted (under numerical ratings) or equally significant (under adjectival ratings). Two or more proposals may have the same or similar overall ratings yet dissimilar individual factor ratings. What is the value of requiring, or even just using, a roll-up rating?
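
To put that in concrete terms, here is a minimal, purely hypothetical sketch in Python; the factor names, weights, and scores are invented for illustration and are not drawn from any regulation or actual source selection:

```python
# Hypothetical illustration: two proposals end up with the identical weighted
# "roll-up" score even though their individual factor ratings are quite
# different. Factor names, weights, and scores are invented.

weights = {"technical": 0.50, "management": 0.30, "past_performance": 0.20}

proposal_a = {"technical": 90, "management": 60, "past_performance": 70}
proposal_b = {"technical": 70, "management": 90, "past_performance": 75}

def rolled_up(scores):
    """Weighted total -- the single number a roll-up reduces a proposal to."""
    return sum(weights[factor] * scores[factor] for factor in weights)

print(rolled_up(proposal_a))  # 77.0
print(rolled_up(proposal_b))  # 77.0 -- same roll-up, dissimilar factor ratings
```

The single number hides exactly the information a tradeoff decision depends on: which factors each offeror is strong or weak in.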

Some organizations were making selections using oversimplified techniques such as dollars-per-point ratios. Our organization was one of those back in the late Eighties and early Nineties.

State Highway Departments were still citing use of mathematical $/pt. ratios and similar techniques in their Design-Build Institute of America National Transportation Conference presentations fifteen years later. Best-value, design-build acquisition methods are relatively recent developments in many State Transportation Departments. My impression is that many highway construction contractors, being engineers and such, didn't really understand the limitations of such methods. They appeared to be straightforward and easy to understand. Our contractors didn't complain either. "Best value trade-offs" were mysterious, and the simple way to determine the winner, lowest price per point, seemed to make sense to them. :)
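
For readers unfamiliar with the dollars-per-point technique, here is a minimal hypothetical sketch in Python of how the ratio is computed and why it can mislead; the prices and point scores are invented for illustration:

```python
# Purely hypothetical illustration of a dollars-per-point selection.
# Prices and point scores are invented. The offeror with the lower cost per
# point "wins" under this technique, even though a real tradeoff analysis
# might conclude that the other offeror's extra technical merit is worth far
# more than the price difference.

offerors = {
    "A": {"price": 10_000_000, "points": 92},  # stronger proposal, higher price
    "B": {"price": 8_000_000, "points": 75},   # weaker proposal, lower price
}

for name, data in offerors.items():
    dollars_per_point = data["price"] / data["points"]
    print(f"Offeror {name}: ${dollars_per_point:,.0f} per point")

# Offeror A: $108,696 per point
# Offeror B: $106,667 per point -> B "wins" the ratio despite 17 fewer points
```

Nothing in the ratio captures whether the 17-point difference matters to the government, which is exactly the judgment a tradeoff decision is supposed to document.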

However, I used to read GAO protest decisions on Army source selections in which the selection decision primarily relied on the numerical scores. I was able to steer our District away from those simplistic methods in the early '90s, after my boss transferred.

We did still use point scoring until the 2004 AFARS ban on numerical ratings and numerical weighting of the factors and subfactors.

I once taught a design-build class with the chairwoman of the AFARS committee that made the decision to move to adjectival rating systems. She confirmed that the reasons for banning numerical scoring in Army source selections were pretty much what I described above.

I have taught numerous KOs and evaluation personnel in USACE training classes how to focus on documenting the strengths and weaknesses, etc., and then use them to assign the adjectival factor or subfactor ratings during the consensus evaluation. The actual ratings usually "fell out," based upon the rating definitions. Many of those people had been on panels that assigned the score or rating to a factor first, then found reasons to back up the ratings. That is the reverse of the correct way to do it.

There is no reason, in my estimation, why one would need to roll up all adjectival factor ratings into one overall rating.


For the benefit of the original poster: when undue focus is placed on roll-ups in summary tables and narratives, rather than on the underlying differences between proposals in strengths, weaknesses, advantages, disadvantages, etc., the selection officials, their advisory panels, and evaluation boards are missing the point. The decisions can be vulnerable to successful protests for inadequate recommendations, inadequate trade-off analyses, and lack of documentation and resulting justification for the selection decision.

From my experience, multi-discipline advisory boards, usually composed of higher-graded, supervisory employees, were less likely to delve into details. They gravitated to summaries and generalizations, such as roll-up ratings. My higher-level boards required a lot of hand-holding. My role was often that of their "professional advisor."

By the way, I personally did not have a problem using a numerical rating system, nor did I have any problems transitioning to adjectival systems.


This topic is now closed to further replies.