The Wifcon Forums and Blogs

Neurotic

Cost/Price Risk


One of our COs wants to issue a solicitation (cost-type contract) that would assess cost risk and assign a risk rating to cost/price as a stand-alone factor. Below are the risk ratings/criteria proposed for the solicitation. In my opinion, the risk should be assessed on the non-price factors (the degree to which those factors are present or absent), because they are the cost drivers. Beyond that, I am struggling to articulate in an email why assigning a risk rating to the cost factor would be a bad idea (if it really is). Any thoughts on this?

High Risk:

Likely to cause significant decreases in performance or increases in schedule or cost, even with increased contractor emphasis and increased Government monitoring.

Moderate Risk:

Could potentially cause some decreases in performance or increases in schedule or cost. However, increased contractor emphasis and increased Government monitoring may be able to overcome difficulties.

Low Risk:

Limited potential to cause decreases in performance or increases in schedule or cost. Normal contractor effort and normal Government monitoring will probably be sufficient to overcome difficulties.

 


I'm a big supporter of the guiding principles in FAR 1.102 et seq., so my first thought is that if a contracting officer wants to do it, I want to be supportive.

What's the harm in assigning a risk rating to cost? The contracting officer has to do a cost realism analysis and develop a probable cost for each offeror anyway (FAR 15.404-1(d)) -- what's the harm in assigning an adjectival rating to the cost risk? After all, cost risk is real.

Maybe you can recommend better wording -- the proposed wording seems more appropriate for a technical evaluation factor. How about--

  • HIGH RISK
    There is a high risk the contractor will overrun the estimated cost, even with increased contractor emphasis and Government monitoring.
  • SOME RISK
    There is some risk the contractor will overrun the estimated cost; however, increased contractor emphasis and increased Government monitoring may be able to overcome difficulties.
  • LOW RISK
    There is low risk the contractor will overrun the estimated cost.  Normal contractor effort and normal Government monitoring will probably be sufficient to overcome difficulties.


The issue that you run into when you assess risk using these types of categories is in how to fold that into scoring to come up with a fair award.  Typically, it's possible to quantify risk (cost, schedule, and technical) in terms of dollars one way or another, which results in a more objective award summary, or at least one that can be discussed in objective terms.

Pushing forward with the above approach is fine, but you'll get better results by pairing it with a corresponding quantified adjustment to the offered price.


Doesn't the cost realism assessment tell you how much risk there is, measured as the difference between the proposed estimated cost and the most probable cost? Why adopt an adjectival description of risk?

If contractor A proposed an estimated cost of $1,000,000 and the most probable cost is $1,500,000, what does a rating of "High Risk" do for you that the dollar figures don't? I think it's a dumb idea.


Neurotic,

I don't get it. What problem does your CO think they are solving by using an adjectival rating for cost risk? To add to Vern's question, if contractor B also proposed an estimated cost of $1,000,000 and the most probable cost is $1,500,000, would it be possible for contractors A & B to get different cost risk ratings?


Offeror A

  • Tech Rating 98/100
  • Past Performance: Good
  • Estimated/Most Probable Cost: $1,000,000/$1,500,000

Offeror B

  • Tech Rating 90/100
  • Past Performance: Superior
  • Estimated/Most Probable Cost: $1,200,000/$1,200,000

Now what would adding "High" after Offeror A's cost and "Low" after Offeror B's cost do for you, other than to force you to write a justification for the adjective rating? What additional information would it provide? How would it help you choose between A and B?
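The contrast can be made concrete with a short sketch (Python used purely as illustration, and the growth-percentage cutoffs are invented, since the proposed scale defines none). It compares reporting cost risk directly in dollars against squeezing the same information into an adjectival bucket:

```python
# Illustrative sketch only -- not from the thread. Contrasts cost risk
# stated in dollars (the realism adjustment) with the same information
# reduced to an adjectival bucket. The thresholds are hypothetical.

def cost_risk_dollars(estimated, probable):
    """Cost risk stated directly: the probable-cost adjustment in dollars."""
    return probable - estimated

def cost_risk_adjective(estimated, probable):
    """The same information reduced to an ordinal bucket (invented cutoffs)."""
    growth = (probable - estimated) / estimated
    if growth > 0.25:
        return "High"
    if growth > 0.10:
        return "Moderate"
    return "Low"

offeror_a = (1_000_000, 1_500_000)  # Offeror A: estimated, most probable
offeror_b = (1_200_000, 1_200_000)  # Offeror B

print(cost_risk_dollars(*offeror_a), cost_risk_adjective(*offeror_a))  # 500000 High
print(cost_risk_dollars(*offeror_b), cost_risk_adjective(*offeror_b))  # 0 Low
```

The dollar figure tells the SSA exactly how large the realism adjustment is; the adjective tells the SSA only which invented bucket the adjustment fell into.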

Quote

Other than that I am struggling to articulate in an email why assigning a risk rating to the cost factor would be a bad idea (if it really is). Any thoughts on this?  

"Because the cost risk is most accurately described by the Estimated/Most Probable Cost ratio."

(Puts the ball back in your CO's court as to why adjectives could better describe the risk.)


From a policy perspective, there is no prohibition on using adjectival ratings for a cost realism assessment.  There is also no harm.  I would be willing to let the contracting officer try it.  

In the on-going interactions between contracting officers and reviewing procurement analysts, I want to enable contracting officers to be innovative.  In this case, I hope the procurement analyst merely gives advice to the contracting officer, rather than disapproving the approach.  I have never used adjectival ratings for a cost realism analysis because the probable cost speaks for itself, and don't intend to start, but I don't like for reviewers to disapprove without a really firm basis.  


ji20874:

It doesn't matter that it's not prohibited or that you would be willing to let the contracting officer do it. What matters is whether it's a good idea, and it isn't. It would entail work that would yield no benefit. That's dumb. Source selection is enough work without adding more that won't make the outcome better and easier to reach.

When soliciting proposals for a cost-reimbursement contract, the CO must conduct a cost realism analysis. The result of that analysis tells the SSA the cost risk associated with each proposal, in clear, understandable terms. What would those three adjectives add? All they would do is prompt questions that would have to be answered. The descriptions that accompany those adjectives are simplistic and miss the mark. (They have been around for a long time.)

The goal of innovation is to produce improvement, not needless work. Not every innovation is a good idea. I have a lot of ideas, but most of them aren't workable or good, and some are bad, and some are just dumb. When I offer an idea to my offline colleagues, I want and expect criticism. As a reviewer, I would insist that a CO proposing an innovation explain why his innovation is a GOOD idea. I would ask him what benefit it adds to the process. I would ask him why it's in the best interest of the government. He'd have to make a good argument. 

Sooner or later, every idea must face a critical test. An uncritical, anything-goes approach is contrary to the goals of FAR 1.102(d) and 1.102-4(e).


Vern,

Let's disagree, but remain agreeable.  I understand your point.  However, the culture of some of our Government contracting offices is oppressive to contracting officers who want to add value -- think of previous postings about the guiding principles.  I want to help change the culture so that some change can creep in.  At present, in some offices, nothing can happen unless everyone agrees, but I believe staff reviewers should not have absolute "NO" authority -- the chief of the office, the HCA, and such can have "NO" authority over the contracting officer, but not staff reviewers.  So when a staff reviewer notes something, I want him or her to provide a citation for the comment and offer it as advice to the decision-maker (such as the contracting officer, chief of the contracting office, or HCA) -- advice, not commandment.

Then, the decision-maker considers the advice, and makes his or her own decision.  Thus, in this case and assuming the original poster is a staff reviewer, I would want the burden to be on him or her to prove that the idea is a bad idea -- he or she might persuade the contracting officer.  But if the contracting officer is the decision-maker, I want him or her to be able to make a decision, even if it is not a perfect decision.  Learning will occur, and that is the goal.  If the original poster is a decision-maker, then he or she can choose either to persuade or to command.

Yes, contracting officers should be able to persuade -- but staff reviewers cannot have absolute "NO" authority -- here, let the staff reviewer persuade the contracting officer, because there is no policy prohibition and there is no real harm.  


Great ideas can come out of failed attempts. I'd argue most success starts with failure. I generally feel discretion and latitude should be wide if it's not going to have long-lasting or significant negative effects, excessively waste resources, or jeopardize life, limb, or sight. This isn't a blanket waiver to allow any and everything and quite often it's prudent to request thorough explanations on why a particular course of action or request should be granted.

I think the goal is to develop talent and let others know that you are 1) open-minded; 2) not always going to be around, so they had better become good decision-makers; and 3) a believer that experience can be a great teacher and that there can be success in failure, so never stop trying, and challenge the status quo.

Who knows what can happen or be discovered in the process of trying something. If Alexander Fleming did business as usual we might not have penicillin. Thomas Edison failed a thousand times before getting the lightbulb right. I think there is greater harm in stifling creativity (thought) than allowing someone to do something we think is dumb. It's definitely a balancing act.


Jamaal:

You are confusing science and invention with administrative procedure. Fleming was looking for things, and Edison was trying to develop something that didn't exist. Administrative innovation in contracting is not scientific discovery or engineering development. Science discovers and engineering proceeds through hypothesization, experimentation, and observation. The problem discussed in the first post, the use of an adjectival scale, is not the same kind of problem as discovering penicillin or inventing the light bulb. You're off the mark.

If I were your boss and you came to me with the idea proposed in the opening post, my first question would be: Why should we do that? And you'd better come back with an answer that demonstrates how using any adjectival scale, and that particular adjectival scale, would make that particular source selection more efficient or produce a better outcome. I wanted to try a lot of new things when I was learning contracting, and each time I proposed something that I thought hadn't been done before, my boss told me to sit down and tell him or her why we should do that. If my explanation was decent, they'd then start asking deep questions, which usually sent me back to the drawing board. That was and is the best way to develop talent. If you want to develop people, teach them how to think and argue.

Go back and read my last blog entry.

 


Jamaal:

P.S. Here is the scale from the opening post--the one that was to be used to describe "cost risk":

High Risk:

Likely to cause significant decreases in performance or increases in schedule or cost, even with increased contractor emphasis and increased Government monitoring.

Moderate Risk:

Could potentially cause some decreases in performance or increases in schedule or cost. However, increased contractor emphasis and increased Government monitoring may be able to overcome difficulties.

Low Risk:

Limited potential to cause decreases in performance or increases in schedule or cost. Normal contractor effort and normal Government monitoring will probably be sufficient to overcome difficulties.

 

Study it carefully, every word. Then Google: sorites paradox. By the way: What type of scale is that? What's the problem with using that type of rating scale in source selection?

Vern

 


Vern:

I think the scale is a bad fit for what I believe is its intended use. I don't understand how cost alone can represent a risk. A cost realism analysis appears to be a better way to figure out what the cost risk to the Government is (i.e., the most probable cost of each offer).

The original poster indicated they want to evaluate risk on the cost factor alone, but it seems cost or price has to be considered with a non-price factor in order to represent something other than an abstract number. Maybe my lack of experience and working knowledge of cost-type contracts has me off base.

I think of price as largely a responsibility-type factor -- can the offeror successfully perform at a given price? As such, we have to consider that what makes a price risky weighs heavily on the technical approach and the offeror behind it. It seems cost should be treated similarly, and that's done through cost realism. In doing a cost realism analysis you are assessing risk, albeit not through a fixed adjectival scale that you'll have to fit offers into.

Strangely enough, I was recently reading about the sorites paradox in preparation for price fair-and-reasonableness training. Admittedly, I don't completely understand it, and it's hard for me to connect it with anything in this thread. Can I get a hint as to what I should be focusing on?

Quote

I don't understand how cost alone can represent a risk.

The risk is not in cost, per se, but in the proposed cost estimate. And we're talking about estimated cost, not price. Cost realism in cost-reimbursement contracting is not a matter of responsibility.

If you proceed with an undertaking on the basis of an unsound cost estimate, you may find in the future that you have sunk cost into a pursuit that was not achievable within the expected cost. You will either have to spend more than you planned or abandon the effort because it is beyond your financial means.

As for your question about the sorites paradox, it is also known as the paradox of vague predicates. If you say that something cannot be too heavy, the question is: What is "heavy" and what is "too" heavy? When is something "heavy"? When is it "too" heavy? (When does a collection of grains of sand become a "heap" of sand?)

So, look at the scale. Are the distinctions between "high," "moderate," and "low" entirely clear? If not, then they are vague. What are the sources of their vagueness? And if they're vague, how useful will they be to the SSA? How would we be helping the SSA by giving him or her a set of vague ratings? How much more useful will they be than, say, REA'n maker's estimated-to-probable cost ratio, or just the dollar amounts? If what you're worried about is the amount of risk associated with various cost estimates, and if the categories of risk on the scale do not have clear boundaries, why use them? And by the way--cost realism analysis is not, itself, an assessment of cost risk, but it provides information that enables the assessment of cost risk. Even if you could clearly define each category, would the results be more useful than the dollar amounts?

Please stay on track. What we're discussing is how to handle proposals for administrative innovation. What I'm arguing is that all proposals for administrative innovation must be assessed and criticized. They must not be uncritically approved in the name of not stifling innovation; they must be subjected to close scrutiny. The burden of persuasion is on the would-be innovator. An "innovator" who proposes to do something in a source selection, but cannot argue persuasively on behalf of the proposal, is useless. What I'm doing with you is demonstrating what "close scrutiny" means. And don't tell me that this kind of scrutiny frightens would-be innovators and stifles innovation. It didn't stifle me or my colleagues when we were young. Knocked down wasn't knocked out.

Now, how about answering my questions: What kind of scale is the one in the original post? What's the problem with using it for cost risk?


As always, thank you for investing the time. I missed the two questions during my original post or else I would have attempted to answer them.

In terms of measurement scales, I classify it as an ordinal scale. The scale attempts to convert subjective evaluation findings into a common scale for comparison by decision-makers. The problem is that ordinal scales cannot quantify the difference between ratings and are subject to the sorites paradox.

As for source selection in general and cost risk specifically, this scale is of limited value because it is ambiguous. All the scale really tells us is that, in some vague way, an ambiguous aspect of cost risk was unquantifiably differentiated. Nothing, at least in the scale itself, establishes the basis for the ratings. This would likely lead to a mechanical comparison rather than a rational evaluation or tradeoff analysis.

I would want to know at what cost an offeror moves from low, to moderate, to high risk.


Ordinal scale. Correct. But subjectivity is not the issue, as we know from Edwards and von Winterfeldt. The issue is that the distinctions between low and moderate and between moderate and high are not clear. Moreover: (1) even if the distinctions between low and moderate and between moderate and high were clear, two offerors could fall into one category on the scale, e.g., "moderate," but one might be more or less risky than the other in more than a merely nominal way without being differentiable on the scale, and (2) the scale cannot tell you how much more risky "high" is than "moderate" in any particular comparison--a high risk offeror and a moderate risk offeror might be quite close.
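Both failure modes can be shown with a toy sketch (Python purely for illustration; the numeric cutoffs are invented, since the scale under discussion defines none):

```python
# Hypothetical bucketing of cost growth into the three adjectives.
# The cutoffs (10% and 25%) are invented for this sketch.

def rate(growth):
    """Map a cost-growth fraction to an adjectival rating."""
    if growth > 0.25:
        return "High"
    if growth > 0.10:
        return "Moderate"
    return "Low"

# (1) Same bucket, materially different risk: 11% and 24% growth both
#     rate "Moderate", so the scale cannot distinguish them.
print(rate(0.11), rate(0.24))    # Moderate Moderate

# (2) Adjacent buckets, nearly identical risk: 24.9% and 25.1% growth
#     get different ratings even though they are practically the same.
print(rate(0.249), rate(0.251))  # Moderate High
```

The bucketing discards the within-category differences and manufactures sharp distinctions at the arbitrary boundaries, which is exactly the information the SSA would need for a tradeoff.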

Now:

  • Suppose that you review source selection plans (SSP) for the contracting staff.
  • And suppose further that a program office CO has submitted an SSP to you that includes the use of the cost risk scale we've been talking about, touted as an innovation and accompanied by no additional explanation or justification.
  • And suppose yet further that the CO wants you to sign the SSP cover sheet prior to submission to the source selection authority (a technical office SES or two-star military officer, with no contracting expertise) for approval, indicating by your signature that the SSP conforms to FAR.

Would you sign it?

If so or if not, would you address any comment to the CO or to the SSA about the use of the scale?

If so, what would you say?


Would I sign it? No, not without further information.

I would address the CO -- if necessary, the SSA -- to ensure they understood the scale from a source selection perspective. If I consider it from that perspective and put myself in the SSA's shoes, it doesn't seem worthwhile to create a scale that is largely useless and then have to train the SSA to ensure they know the scale is of limited use in making and more importantly supporting their decision. I would expect to be considered a fool for wasting the SSA's time and complicating the process.

I would relay this to the CO and ask if they still wanted to proceed or take corrective action. If they wanted to proceed, I would want a written explanation of what pros and cons were considered, as well as answers to any specific questions I had. A few questions I would have include:

1- How was it determined that this strategy leads to more effective, value-added contracting than an alternative method, such as: not evaluating cost risk as a stand-alone factor, using a cost realism analysis and risk assessment, or determining an estimated/probable cost ratio?

2- How is the strategy in the best interest of the Government? 

3- How is the CO going to manage the risk associated with this strategy?

 


The CO responds:

Your job on the contracting staff is to say whether or not the SSP conforms to FAR. It's my job to advise the SSA on procedure, and it's up to the SSA to accept or reject my advice. I have to explain and justify to the SSA. I don't have to explain or justify to you. You are using your review function and withholding your signature in order to force me to do things your way. That's improper. It's just the kind of thing that stifles innovation. Read the FAR guiding principles. Also read FAR 15.305(a), which says:

Quote

Evaluations may be conducted using any rating method or combination of methods, including color or adjectival ratings, numerical weights, and ordinal rankings.

Would use of the scale violate FAR in any way? If so, how? If not, on what basis are you refusing to sign off? 


My three questions come directly from FAR principles in Part 1 and are consistent with FAR 4.801. Likewise, I would simply remind the contracting officer that in order to continue with their innovation the contract file should already have answers to my questions.

I don't know of anything in the FAR that explicitly prohibits the strategy, but FAR repeatedly states that the Government shall exercise discretion and use sound business judgment. 

Discretion isn't a license to do anything and is tempered by the requirement to use sound judgment. Contracting officers should expect to be questioned, or given a rationality test, when exercising discretion. If the contracting officer provides a reasonable basis for the exercise of discretion, he or she should get clearance. As the reviewer, I'm not requiring agreement, but I am expecting adherence to the known rules. I don't think this is stifling, and I'm perfectly okay with minority reports when appropriate.

As members of the acquisition team, contracting officers and clearance approval authorities have a duty to ensure actions are reasonable, sound, consistent, and in the best interest of the Government. They do this as a team -- hopefully, a team with clearly defined responsibilities and authorities. Contracting officers' responsibilities, per FAR, specifically include complying with clearance approvals, ensuring performance of all necessary actions for effective contracting, and safeguarding the interests of the United States in its contractual relationships. A reviewer's job is to ensure these responsibilities are executed properly.

My overall feeling is that whoever owns the responsibility should maintain the authority in 'agree to disagree' situations. It's viable to withhold the signature initially and issue the contracting officer your comments. Presuming it's the contracting officer's decision to make, if we couldn't come to terms and I felt strongly about potential negative effects, I would attach a minority report, sign, and push the package forward.


That was a lecture, not an answer.

As reviewer, my response to the SSP submission would be as follows:

Use of the scale does not violate anything in FAR. FAR says nothing of substance about rating systems and expressly provides for the use of adjectival and ordinal ratings.

So I've signed off on the SSP, with the following comments:

I would not use the adjectival scale for cost risk, for the following reasons:

  1. Upon close (i.e., lawyerly) scrutiny, the ratings definitions are vague and do not make clear distinctions between low and moderate and between moderate and high cost risk. Upon challenge, some rating assignments might turn out to be so subjective and hard to justify as to seem arbitrary.
  2. The scale will not reveal close-to-boundary ratings--i.e., moderate-but-close-to-high and high-but-close-to-moderate. Thus, it will be useless for making tradeoffs.
  3. The scale will not reveal distinctions between offerors within each of the three categories, which will also make it useless for making tradeoffs. Two proposals might both entail "moderate" risk, but one might be more moderate than the other to a significant degree. We cannot know that based on the scale.

The scale is hardly an innovation. It featured in a protest more than twenty years ago. See RJO Enterprises, Inc., B-260126, 95-2 CPD ¶ 23, July 20, 1995. It can be traced back to 1988 and to Air Force Regulations 70-15 and 70-30. I do not know if it has ever been applied to cost risk.

It provides for nothing more than broad characterizations. The SSA will not be able to rely on it when making a decision without first looking into the specific findings of the cost-realism analysis and referring to those specific findings in the decision document. The scale ratings will be superfluous. There is nothing to be gained by its use other than a gross and useless summation of evaluator opinions. Yet its use will require writeups to explain and to justify its application to each offeror, and each writeup must track to the specific cost realism findings or be challengeable. Given the low utility of the ratings, that will be needless work.

In short, use of the scale violates the KISS principle. But since dumb is not illegal, be my guest.


One of my takeaways from the scale is that we want to maximize the use of plain, clearly defined, and enforceable language in contract-related documents. Limiting our use of adjectives (not including empirical adjectives) and adverbs could clarify our writing. Adjectives and adverbs often tell us about a thing but fail to show it. That introduces subjectivity and potentially unwanted flexibility, because readers have to rely on opinions and interpretations.

 

This topic is now closed to further replies.