
DoD Source Selection Procedures (March 31, 2016)



Guest PepeTheFrog

DPAP released the March 31, 2016 version of the Department of Defense Source Selection Procedures (DoD SSP).

http://www.acq.osd.mil/dpap/policy/policyvault/USA004370-14-DPAP.pdf

 

Section 3.1.3.3 clears up the pervasive misunderstanding and misapplication of FAR 15.305(a)(2)(iv): "Although the SSEB may not rate an offeror that lacks recent, relevant past performance favorably or unfavorably with regard to past performance, the SSAC may recommend and the SSA may determine, that a "Substantial Confidence" or "Satisfactory Confidence" past performance rating is worth more than a "Neutral Confidence" past performance rating in a best value tradeoff as long as the determination is consistent with stated solicitation criteria." Having no recent, relevant past performance means the offeror may not be rated favorably or unfavorably on the past performance factor per FAR 15.305(a)(2)(iv). Some think this means the offeror can't be "penalized" in the best value tradeoff analysis and decision for lacking recent, relevant past performance, which is not true. The DoD SSP makes this explicit. 

 

Value Adjusted Total Evaluated Price (VATEP) is a way to objectify the subjective tradeoff process by assigning specific dollar values, stated in the solicitation, to superior performance. VATEP "handcuffs" the Government into predefined tradeoff metrics, which is good for industry and transparency but reduces Government flexibility. PepeTheFrog thinks this method was already available before this revision of the DoD SSP. Some in industry might say VATEP is helpful, providing predictability that will improve proposals and performance. Some in Government might say, "Why would I want to give away the subjectivity of tradeoff?"
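As PepeTheFrog understands the mechanics, the solicitation states a dollar value for each enhanced performance characteristic, and an offeror's proposed price is adjusted by the value of each enhancement it offers; award comparisons then use the adjusted figure. A minimal sketch (the characteristic names and dollar figures below are invented for illustration, not taken from Appendix B):

```python
# Hypothetical VATEP sketch. The solicitation pre-states a monetized value for
# each enhanced characteristic; the evaluated price is the proposed price minus
# the value of the enhancements the offeror commits to. Illustrative only.

STATED_VALUES = {
    "range_objective_met": 500_000,
    "payload_objective_met": 250_000,
}

def value_adjusted_price(proposed_price, enhancements):
    """Return the Value Adjusted Total Evaluated Price."""
    adjustment = sum(STATED_VALUES[e] for e in enhancements)
    return proposed_price - adjustment

# Offeror A: lower proposed price, threshold performance only.
a = value_adjusted_price(10_000_000, [])
# Offeror B: higher proposed price, but offers both objective levels.
b = value_adjusted_price(10_600_000, ["range_objective_met",
                                      "payload_objective_met"])
print(a, b)  # 10000000 9850000
```

The point is simply that a higher proposed price can still yield the lower evaluated price once the stated values are applied, which is exactly the predefined tradeoff the solicitation commits the Government to.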

 

Appendix C-1 provides circular and self-referential guidance for choosing LPTA: "The LPTA process is appropriate when best value is expected to result from selection of the technically acceptable proposal with the lowest evaluated price." (Use LPTA when you expect the best value from using LPTA.) Although C-1 explains that LPTA is best used for clearly defined requirements, it also sneaks in the word "non-complex" as a description of items to procure using LPTA, which has been discussed in this forum and elsewhere. Why not procure complex items (that are clearly and objectively defined) using LPTA? PepeTheFrog is dismayed whenever someone declares that "complexity" means that tradeoff should be used instead of LPTA. The important consideration is how well the requirement is defined, so that technical acceptability can be objectively determined. Computers are quite complex, but LPTA works fine if the Government knows the minimum specifications.

 

PepeTheFrog is sure that others have something to say about the revisions to the DoD SSP.

 

 

 



The new guidelines expressly admit that the purpose of VATEP is "taking some of the subjectivity out of the best value evaluation" (see para. 1.3.1.4).

The VATEP approach is an attempt to quantify or dollarize the tradeoff decision, and to take away subjective discretion from the selecting official -- FAR 15.308 specifically says the selection documentation "need not quantify the tradeoffs that led to the decision," and the GAO routinely reinforces this principle.  However, maybe some selecting officials in DoD don't want to make subjective best value decisions?  Or someone at a higher level doesn't trust them?  Regardless, use of VATEP seems to be optional.


Use of VATEP is optional so I don't buy the concern regarding distrust at the higher levels.  Furthermore, according to the guidance, the purpose is not merely to "[take] some of the subjectivity out of the best value evaluation" but also (according to the preceding sentence) to "[provide] the offeror information to determine if the additional cost of offering better past performance will put the offeror in a better position in the source selection" (1.3.1.4).  This method is just another tool in the toolbox - if the acquisition team decides that giving away subjective tradeoff flexibility is in the best interest of the Government so that offerors are more informed regarding the value of higher technical performance, I don't see how that is inherently bad.


You're right -- it's not inherently bad -- and in a disciplined acquisition with knowledgeable and empowered participants, the VATEP process might actually work.  But I do know that there are some decision-makers who do not want to make difficult and subjective decisions, and some reviewers who don't want decision-makers to make difficult and subjective decisions -- they are the wrong people for the VATEP process.  I hope VATEP is only used where it makes sense.


It looks like performance and price tradeoff with a pass/fail technical factor is no longer an option. This should be corrected.

Considering risk for all evaluations involving technical, with the exception of LPTA, doesn't always make sense and can complicate simple technical factors such as experience. This should be corrected.

I'm heavily considering leaving the government and returning to industry. I don't understand how we expect to retain top talent with all the dysfunction within the career field. It's maddening at times.

Maybe they'll ask the GAO to draft the next version - it might be more helpful.


Guest PepeTheFrog
1 hour ago, Jamaal Valentine said:

It looks like performance and price tradeoff with a pass/fail technical factor is no longer an option. This should be corrected.

Jamaal Valentine, do you think you can describe and execute (the functional equivalent of) a performance price tradeoff with a pass/fail technical factor within the existing, current rules of FAR Part 15 and the DoD SSP? PepeTheFrog is not asking whether a risk-averse, internal reviewer who needs explicit, written direction to feel safe will approve it. Assume your entire chain of command has framed copies of FAR 1.102(d) on the wall.

Along these lines of discussion, PepeTheFrog thinks that VATEP (or its functional equivalent) was available before this revision of the DoD SSP.

 


2 hours ago, Jamaal Valentine said:

It looks like performance and price tradeoff with a pass/fail technical factor is no longer an option. This should be corrected.

Considering risk for all evaluations involving technical, with the exception of LPTA, doesn't always make sense and can complicate simple technical factors such as experience. This should be corrected.

I've read through the updated DoD SSPs (admittedly only once, so I may have missed it), but I don't see any language prohibiting the use of performance and price tradeoff with a pass/fail technical factor.  Could you reference the section/paragraph that does?

In fact, I think the language under section 1.3  (pg. 2-3) still allows the use of the technique (emphasis added below):

Quote

This document describes source selection processes and some techniques that may be used to design competitive acquisition strategies suitable for the specific circumstances of the acquisition including: Value Adjusted Total Evaluated Price (VATEP) tradeoff source selection process with monetized adjustments included in the evaluated price for specific enhanced characteristics; tradeoff source selection process with subjective tradeoffs; and lowest price technically acceptable (LPTA) source selection process.  These are not the only source selection processes available on the best value continuum.  SSTs should carefully consider and use the approach that is most appropriate for their acquisition.

In other words, even though the DoD SSPs are silent on that particular technique, barring any language within them prohibiting its use, it appears to be permissible.


I don't think they explicitly prohibited it per se. Rather, I think we would have to reconcile the shall statement in section 2.3.4.2.1, Technical Risk (page 20).

I'm unable to copy and paste the specific section from my phone; I get an HTML error on the post.

Admittedly, I've only read it once, but that was my first take from the passage. Additionally, I believe there are standard evaluation rating tables under the "one of two ways" methodologies when including technical factors other than LPTA (see pages 24-26).

*Edited to add quote (emphasis added)

Quote

All evaluations that include a technical evaluation factor shall also consider risk, separately or in conjunction with technical factors, with the exception of LPTA where the technical proposal is evaluated only for acceptability based on stated criteria. Risk can be evaluated in one of two ways:

•    As a separate risk rating assigned at the technical factor or subfactor level (see paragraph 3.1.2.1).

•    As one aspect of the technical evaluation, inherent in the technical evaluation factor or subfactor ratings (see paragraph 3.1.2.2).

 


I don't have my earlier edition of the DoD Source Selection Procedures with me in Atlanta (and I don't feel like looking it up tonight on the Internet on the iPhone), but I'm pretty certain that it treated offerors with no record of past performance similarly to this latest version. In addition to what has been said above about not covering every possible situation, one should read the applicability provisions of the procedures. Per paragraph 1.2, they are applicable to negotiated Part 15 procedures exceeding $10 million. They don't apply to commercial item acquisitions under Part 12 below $10 million, to commercial item acquisitions above that threshold that don't use Part 15.3 source selection procedures, to acquisitions under Parts 8.4, 13, and 14, to fair opportunity procedures under 16.505, or to other exceptions.


Guest Vern Edwards
Quote

Per paragraph 1.2, they are applicable to negotiated Part 15 procedures exceeding $10 million.

Read the memo again, Joel. See 1.2, Applicability and Waivers, first sentence:

Quote

These procedures are applicable to all acquisitions conducted as part of a major system acquisition program, as defined in Federal Acquisition Regulation (FAR) 2.101, and all competitively negotiated acquisitions with an estimated value greater than $10 million.

Emphasis added.

The plain language says that the procedures apply (a) to "all" acquisitions conducted "as part of" a major system acquisition program and (b) to negotiated acquisitions with an estimated value greater than $10 million. It doesn't say that they apply to all acquisitions conducted as part of a major system acquisition program that have an estimated value greater than $10 million.

A major acquisition program consists of many individual acquisitions, large and small, ranging from $500,000 (and smaller) study contracts and experiments to $1 billion system development contracts.


Guest Vern Edwards

I have read it.

It is a sad piece of work--a rambling, 71-page mess that is needlessly wordy, often confusing, and sometimes incoherent. It shows that the authors don't understand fundamental and crucially important ideas. (Don't they read books?) And it fails to provide helpful guidance about topics for which guidance is badly needed. (Don't they read protest decisions?) It's not just flawed. It's an embarrassment.

I wish I could say that it's a disappointment, but the sad truth is that while I had hoped for better, I didn't expect it. I've talked to and corresponded with others who are horrified, even outraged, but resigned.

When I think of some of the astonishingly good work American bureaucracy has produced in the past, first rate stuff, I can only feel deep regret over how far our bureaucracy has fallen. It's very sad. Why not the best?

What a chance missed.


9 hours ago, Vern Edwards said:

Read the memo again, Joel. See 1.2, Applicability and Waivers, first sentence:

Emphasis added.

The plain language says that the procedures apply (a) to "all" acquisitions conducted "as part of" a major system acquisition program and (b) to negotiated acquisitions that are valued at $10 million or more. It doesn't say that they apply to all acquisitions conducted as part of a major system acquisition program that have an estimated value greater than $10 million.

A major acquisition program consists of many individual acquisitions, large and small, including $500,000 (and smaller) study contracts and experiments to $1 billion system development contracts.

In reading the plain language, it would seem that the exceptions in 1.2 still apply to acquisitions under major systems acquisitions.  

The various DOD Activities usually have further implementing instructions. 


Guest Vern Edwards

WARNING: The description and explanation of the VATEP method in Appendix B are complex and obscure. The method strikes me as complicated. I fear that its use will create a protest-rich environment if not done very, very carefully. Don't allow yourself to be seduced by any notion that it will be less likely to prompt a protest because it's an "objective" method. 


Guest Vern Edwards
23 minutes ago, joel hoffman said:

In reading the plain language, it would seem that the exceptions in 1.2 still apply to acquisitions under major systems acquisitions.  

The various DOD Activities usually have further implementing instructions. 

I agree about the exceptions. As I read the first sentence of 1.2, there are two alternative applicability criteria for competitive negotiated acquisitions: (1) whether the acquisition is "part of" a major system program, regardless of dollar value and (2) whether the acquisition is worth more than $10 million, whether part of a major system program or not. The procedures do not apply to any acquisition that falls under any of the exceptions.

The decision logic might work as follows:

1. Do any of the exceptions listed in 1.2.1 apply? If yes, then the source selection procedures do not apply. If no, go to 2.

2. Is the acquisition "part of" a major system program? If yes, then the source selection procedures apply. If no, go to 3.

3. Is the acquisition worth more than $10 million? If yes, then the source selection procedures apply. If no, then the procedures do not apply.

There is provision for waivers.

That's what the memo says. They might have meant that the procedures apply only above $10 million, but if that had been the case, it would not have been necessary to mention major system programs.
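That three-step logic can be written out as a toy sketch (hedging: the paragraph 1.2.1 exception check and the waiver provision are reduced to simple booleans here; only the $10 million threshold comes from the memo):

```python
# Toy sketch of the DoD SSP paragraph 1.2 applicability logic as read above.
# The 1.2.1 exception list and waiver handling are simplified to booleans.

def ssp_applies(has_exception: bool, part_of_major_program: bool,
                estimated_value: float) -> bool:
    if has_exception:                # Step 1: a listed 1.2.1 exception applies
        return False
    if part_of_major_program:        # Step 2: "part of" a major system
        return True                  #         program, regardless of value
    return estimated_value > 10_000_000  # Step 3: greater than $10 million

# A $500,000 study contract under a major program: procedures apply.
print(ssp_applies(False, True, 500_000))      # True
# A $9 million standalone negotiated acquisition: procedures do not apply.
print(ssp_applies(False, False, 9_000_000))   # False
```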


I agree.

I tried to edit my last post to cite paragraph 1.2.1 for the exceptions but was unable to post the edit on my iPad.

Quote

1.2.1 These procedures are applicable to all competitively negotiated procurements meeting the requirements in paragraph 1.2, except those using...

 


Guest PepeTheFrog
2 minutes ago, Vern Edwards said:

WARNING: The description and explanation of the VATEP method in Appendix B are complex and obscure. The method strikes me as complicated. I fear that its use will create a protest-rich environment if not done very, very carefully. Don't allow yourself to be seduced by any notion that it will be less likely to prompt a protest because it's an "objective" method. 

PepeTheFrog agrees, and sees the same danger as reliance on numerical scoring (rather than underlying strengths and weaknesses from adjectival ratings). Like numerical scoring, VATEP provides a veneer of objectivity that might not hold up in protest scrutiny: "Subjective and unreasonable decision without adequate documentation?!? Just look at these fancy VATEP graphs!"

VATEP will also create demand for consultants, instructors, gurus, etc. to explain the mystical intricacies of VATEP to the crowds.

PepeTheFrog thinks that DoD and other large bureaucracies make similar mistakes, as exemplified by the DoD SSP. It seems like the DoD SSP provides too much guidance; it's overkill. It could have been more concise, and provided a longer leash for contracting professionals. But identifying, evaluating, and correcting mistakes and bad actors takes time and effort on the part of leadership and management. Instead, bureaucracies create blanket, reactive, constrictive, and burdensome "solutions" that apply to everyone, including those who can perform well in the absence of the "solution."

 


Oh Oh, Pepe the Frog, prepare to be "gigged". You mentioned numerical scoring systems...


Guest PepeTheFrog
42 minutes ago, joel hoffman said:

Oh Oh, Pepe the Frog, prepare to be "gigged". You mentioned numerical scoring systems...

PepeTheFrog just learned what "gigged" means and is hopping away in horror!

https://en.wikipedia.org/wiki/Gigging

24 minutes ago, Vern Edwards said:

"a veneer of objectivity" Sigh. :rolleyes: No one reads books.

Vern Edwards, did you not like the phrase, or did you not like its application to numerical scoring? Will you explain?

 


Guest Vern Edwards

Hi Pepe:

Love the avatar and moniker.

The long-standing knock on numerical scoring is that it falsely indicates objectivity. Numerical scoring doesn't do that. Rather, some people who interpret numerical scores interpret them that way. When used by people in the know, numerical scoring is a powerful analytical tool that is far superior to adjectives and colors. I will quote from a very great book, written under contract to the U.S. Navy by two very great decision scientists, Detlof von Winterfeldt and the late Ward Edwards. The book is Decision Analysis and Behavioral Research (Cambridge University Press, 1986). The quote that follows is from page 20:

Quote

Numerical subjectivity

The SEU [subjectively expected utility] model [of decision-making] embodies a fundamental principle. Both the utilities and the probabilities that enter into it are numbers, but they are inherently subjective in the sense that you and your colleagues might disagree about them and would have no way of resolving any such disagreement. The fundamental principle might be called numerical subjectivity, the idea that subjective judgments are often most useful if expressed as numbers. For reasons we do not fully understand, numerical subjectivity can produce considerable discomfort and resistance among those not used to it. We suspect this is because people are taught in school that numbers are precise, know from experience that judgments are rarely precise, and so hesitate to express judgments in a way that carries an aura of spurious precision. Judgments indeed are seldom precise--but the precision of numbers is illusory. Almost all numbers that describe the physical world, as well as those that describe judgments, are imprecise to some degree. When it is important to do so, one can describe the extent of that imprecision by using more numbers. Very often, quite imprecise numbers can lead to firm and unequivocal conclusions. The advantage of numerical subjectivity is simply that expressing judgments in numerical form makes it easy to use arithmetic tools to aggregate them. The aggregation of various kinds of judgments is the essential step in every meaningful decision.

Emphasis added.
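The "arithmetic tools to aggregate" judgments that the authors describe can be as simple as a weighted sum of subjective scores. A generic sketch (the factors, weights, and 0-10 scores below are invented for illustration; this is not a method prescribed by the book or by the DoD SSP):

```python
# Generic weighted-sum aggregation of subjective scores (illustrative only).
# The factor weights and an evaluator's 0-10 scores are both subjective
# judgments expressed as numbers, so they can be combined arithmetically.

weights = {"technical": 0.5, "past_performance": 0.3, "management": 0.2}

def weighted_score(scores):
    """Aggregate factor scores into one number using the stated weights."""
    return sum(weights[f] * scores[f] for f in weights)

offeror_a = {"technical": 8, "past_performance": 6, "management": 7}
offeror_b = {"technical": 6, "past_performance": 9, "management": 8}

print(round(weighted_score(offeror_a), 2))  # 7.2
print(round(weighted_score(offeror_b), 2))  # 7.3
```

The numbers remain subjective judgments; the arithmetic just makes the aggregation explicit and auditable, which adjectives and colors do not.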

Back in the 1970s and early 1980s, the GAO encountered faulty use of numerical scoring by source selection personnel in a number of protest cases and complained that the method lent a false air of objectivity and precision to what is a matter of subjective judgment. But numbers didn't do that; poorly educated and trained human beings did that. Still, the GAO has not said that numerical scoring cannot or should not be used. See IAP World Services, Inc.; Jones Lang LaSalle Americas, Inc., B-411659.2, 2015 CPD ¶ 302:

Quote

IAP argues that the agency used the artificial precision of a weighted numerical score to gloss over what would otherwise be a complex weighing of proposals' strengths and weaknesses. IAP Protest at 17–19; IAP Comments (July 27, 2015) at 31. The record shows that although the SSA's best value tradeoff discussion makes multiple references to the offerors' numerical scores, the SSA was well aware of the strengths and weaknesses underlying the evaluations. See AR, Tab 33, Source Selection Statement, at 20 (“I conducted my deliberations by first looking at the overall evaluated scores for Mission Suitability and the findings that led to those scores....”) While numerical scoring cannot substitute for a nuanced evaluation of proposals, here, the selection official had a basis to conclude that the scores were an accurate representation of the relative strengths and weaknesses of the proposal.

There we have a case of an SSA who knew what he or she was doing. However, some agencies (e.g., DOD) have prohibited the use of numerical scoring rather than properly train their people in the use of that extremely effective tool. Ironically, DOD's contractors use numbers when conducting systems engineering tradeoff analyses, and no one thinks anything of it.

Sigh.

There was a time when I used to try to explain all this to contracting people, but ignorance is so deeply rooted among us that I no longer waste my time. I have learned that many people in our business simply will not read books and will not study and prepare themselves for the tasks ahead unless their boss sends them to some government-sponsored class or they can hop on Wifcon for a quick and easy answer. They have allowed their ignorance to deny them the use of a powerful tool for the conduct of complex source selections. In explaining now I am making an exception, because I like your style and because I sense a keen intelligence behind the humor.

I will not discuss this matter further with anyone.

Vern


Vern, I know you said you wouldn't discuss this matter further with anyone, but I'm interested in whether the use of numerical scoring (back in the 1970s/1980s, or even today) prompts protests during the solicitation phase, with potential offerors arguing that the numerical scoring system chosen "stacks the deck" against them (or in favor of another offeror). I'm not asking to imply that a potential increase in solicitation-phase protests would be a reason to forgo numerical scoring out of fear of protests; I'm merely curious. Also, thanks for the reference to scholarly material on numerical scoring and decision making -- I hope you'll make an exception to your declaration not to discuss this topic anymore.

If you're not willing to discuss numerical scoring (or even if you are), would you be willing to discuss your vision for Source Selection Procedures that provide helpful guidance for the DoD contracting workforce?


A current challenge is centralizing the rules (physically, where to find the rules to the game). It seems we create our own write-ups by failing to follow an agency procedure or policy more often than we fail to comply with statutes or regulations. The DoD SSP will lead to agency FAR supplement changes, several more layers of informational guidance and mandatory procedures, etc. I wish a lot of the superfluous additions would include sunset provisions or mandatory review periods.

But let's not waste an opportunity. There should be some public comment periods for FAR supplement changes, and we can voice our opinions. I think DAU and leaders should allocate resources to teaching practitioners about the Federal Register process. A lot of people are unhappy with the rules but never comment when given an opportunity. It's like complaining about politicians but never voting.

Heck, did the DoD SSP go out for public comment? If not, should it have?


Guest Vern Edwards

Matthew:

Quote

Vern, I know you said you wouldn't discuss this matter further with anyone, but I'm interested whether or not the use of numerical scoring (back in the 1970s/1980s or even today) prompts protests during the solicitation phase with potential offerors arguing that the numerical scoring system chosen "stacks the deck" against them (or in favor of another offeror)?

I don't recall having ever seen a protest against the use of a numerical scoring scheme during the solicitation phase. That doesn't mean that there has not been one, just that I don't recall having seen one. Protests involving numerical scoring have generally arisen in reaction to the source selection outcome. Complaints seem mainly to have been about the way scores were assigned, or about tradeoffs distorted by being based solely on numbers without consideration of actual strengths and weaknesses.


Guest Vern Edwards
15 minutes ago, Jamaal Valentine said:

Heck, did the DoD SSP go out for public comment? If not, should it have?

I think the DOD source selection procedures are strictly internal. As such, they need not be published in the Federal Register.


I was thinking the same, but based on some of Don's latest posts I'm unsure. Curious to know how they reconcile the FAR 1.401(e), 1.401(f), and 1.404 requirements.

I think a reasonable argument can be made that the DoD SSP has a significant effect beyond the internal operating procedures of the agency or has a significant cost or administrative impact on contractors or offerors. 


This topic is now closed to further replies.
