
Proposed Army Source Selection Supplement - Evaluation Factors and Subfactors


airborne373


I am relatively new to federal contracting and became a procurement analyst several weeks ago.

I seek some advice regarding use of the DoD's two methodologies for evaluation of technical approach and related risk.

The DoD Source Selection Procedures (see pg. 14) provide two distinct technical rating evaluation processes. DoD methodology 1 includes risk associated with the technical approach in a single rating and DoD methodology 2 provides for separate technical and risk ratings. I understand that Navy organizations and the Air Force as a service assess risk separately from the adjectival technical rating (similar to DoD methodology 2) for FAR Part 15 negotiations.

The proposed Army Source Selection Supplement (pg. 17) states that the Army prefers the combined single rating of methodology 1, but that the separate risk ratings may be applicable in the R&D area.

I wonder if the Army's preference of the combined single rating of methodology 1 may impact evaluation matters if a proposal may not fall within the express language of the single rating descriptions. If anyone could share their experience regarding the advantages of use of the combined single rating of methodology 1 or explain why the combined methodology is preferred by the Army, it would be appreciated.

Thanks very much!



For construction and design-build contracts, the combined technical and proposal risk rating factor works fine. I think having yet another rating system just complicates the source selection process even more for those types of acquisitions. As it is, we use a separate performance risk rating scheme for past performance. And now, DoD requires a two-step rating process for past performance: the first step is a degree-of-relevancy assessment, with the performance risk assessment in step two.


I asked that question of DASA(P) in my review of the draft Army Source Selection Supplement. We'll see if they respond. If a program is well established, and a superior technical approach can only be accompanied by low risk, methodology 1 makes sense. But if we're seeking any kind of innovation or creativity, it doesn't make sense to force us to combine approach with risk. And even though the AS3 only "prefers" methodology 1, knowing how things go, we'll be forced to write some type of justification for not using methodology 1, creating more unnecessary work for overburdened contracting officers.


I'm not DoD, but we identify the risks for each factor right along with the strengths and weaknesses. Risks are identified as significant or not, as mitigated or not by something in the proposal, and by the Government's view of the probability of their occurring. Significant, unmitigated, and/or high-probability risks can reduce the factor rating; it's all up to the evaluation team to decide. So while the Army's combined rating might identify a proposal as Good/High, our system could result in an Acceptable (using an Excellent/Good/Acceptable/Unacceptable scale).

I work with the Army a lot and agree with KeSer that some local installations (if not big Army) will write a policy that requires a D&F or other supplemental documentation to support the evaluation method chosen. :rolleyes:


That was our concern also.

It was noted on our responses to AS3 as a concern.

It will force the KO to write the Determination to use another methodology, and have the fun discussion with legal as why it is the appropriate method to utilize.

Thanks for the feedback.


I hope all these source selection officials aren't just picking based upon ratings. To me, if everyone is doing their jobs properly, each proposal is analyzed along with its respective merits and issues. The ratings and associated strengths, weaknesses, risks, etc., all get discussed. It shouldn't make any difference whether risk is bundled into a single rating or assigned separately.


Guest Vern Edwards

airborne373:

When I read your opening post I did not respond because I was not sure what kind of response you were looking for. I think I detect an interesting topic, but I'm not sure what the topic is. You started out with:

I seek some advice regarding use of the DoD's two methodologies for evaluation of technical approach and related risk.

But then you ended up with:

If anyone could share their experience regarding the advantages of use of the combined single rating of methodology 1 or explain why the combined methodology is preferred by the Army, it would be appreciated.

If you are still interested in a response, please clarify what it is you want to know or what you would like to discuss.

If you decide that you still want to discuss this, please clarify this from your opening post:

I wonder if the Army's preference of the combined single rating of methodology 1 may impact evaluation matters if a proposal may not fall within the express language of the single rating descriptions.


Vern,

I guess the bottom line is that DoD has proposed two different methodologies in the source selection manual, and the Army has determined that the combined evaluation/risk methodology is the preferred method unless you are in an R&D arena.

To me, forcing us to go to the combined method will limit our ability to evaluate a proposal. If an offeror has a creative proposal that is outstanding technically, but the risk is moderate, we are forced to rate them as acceptable overall for that factor.

Now granted, I have limited experience with Gov't contracting, but with many years in private-sector business, I cannot see the advantages of a combined evaluation. To me, risk should be separated out and evaluated separately in order to do a tradeoff.

So the question should have been to the group: are there any advantages to the combined evaluation ratings, and what are they?

Hope this clarifies.


Guest Vern Edwards

I don't think there are any evaluative advantages to the combined method, and I agree with you that it's needlessly limiting. I can't think why the Army chose to make it the preferred method. It may simply be a case of the Army staff not being any smarter than the DPAP staff that put out the DOD memo. I'm sorry that you're stuck with it, but that's what happens when some genius (not) at the top decides that one size should fit all.



Please read formerfed's reply:

I hope all these source selection officials aren't just picking based upon ratings. To me, if everyone is doing their jobs properly, each proposal is analyzed along with its respective merits and issues. The ratings and associated strengths, weaknesses, risks, etc., all get discussed. It shouldn't make any difference whether risk is bundled into a single rating or assigned separately.

The source selection decision is made after you compare the substance of the non-price portion of each offer, including risk (whether as a separate factor or as an element of another factor), and you trade off the advantages and disadvantages identified in the comparison against cost or price differentials.


Guest Vern Edwards

While I agree with formerfed and napolik, one cannot deny the psychological effect of a rating system. Evaluators and decision makers will assume that ratings are supposed to mean something and that they are supposed to influence the decision maker. The combined technical/risk rating system put in place by DOD creates little boxes into which offerors are presumably to be put. The result could be that evaluators will write their findings to match the descriptions on the boxes, or that decision makers will interpret findings in light of the descriptions on the boxes.

In my opinion, conclusions about (1) the merits of what an offeror proposes and (2) the risks associated with that proposal are related, but separate. I think that a decision maker should consider each in the light of the other, but I doubt the wisdom of creating combined technical/risk categories like the ones DOD has established for its Method 1.

As a decision maker, I would prefer to be told that Offeror A has proposed an approach that will yield a result that on a scale of 0 to 1 would be a 1, but that there is a 50/50 chance of success, and that Offeror B has proposed an approach that will yield a result that on a scale of 0 to 1 would be a .6, and that there is a 70/30 chance of success, than to be told that Offeror A is Green and Offeror B is Purple.

But a smart decision maker can make either system work.
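To make the comparison above concrete, here is a rough sketch (my own illustration, not anything drawn from the DoD procedures or the AS3) of how a decision maker could weigh those two hypothetical offerors when merit and probability of success are reported separately. Multiplying the two into an expected outcome is just one crude model, but it shows information that a single combined color rating would hide:

```python
# Crude model: treat the technical score as the outcome if the approach
# succeeds, and weight it by the probability of success. The numbers are
# the hypothetical ones from the example above, not real evaluation data.

def expected_outcome(merit: float, p_success: float) -> float:
    """Expected technical outcome on a 0-to-1 scale."""
    return merit * p_success

# Offeror A: top-rated approach (1.0) with a 50/50 chance of success.
offeror_a = expected_outcome(1.0, 0.50)
# Offeror B: a .6 approach with a 70/30 chance of success.
offeror_b = expected_outcome(0.6, 0.70)

print(f"Offeror A: {offeror_a:.2f}")  # 0.50
print(f"Offeror B: {offeror_b:.2f}")  # 0.42
```

Under this particular model, Offeror A's riskier but stronger approach still edges out Offeror B, a distinction that "A is Green, B is Purple" would obscure. The point is not that the multiplication is the right model, only that separate merit and risk data let the decision maker apply whatever weighting the acquisition calls for.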



A smart decision maker can make either system work, but (while not knocking our decision makers) they are also pressed for time and dependent on the write-ups of the teams. If the team sees that the best an offeror can get is acceptable, then there is the psychological avenue to consider. Will they do their due diligence in providing enough information in the write-up for the SSA to make a decision? Or, since it is acceptable, will they do merely an acceptable job on the write-up?

Interesting discussions, and I agree that with the right teams it can work. But from what I can tell, it would not be preferred, as no advantages have been identified.

Fortunately the Army Source Selection Supplement is still in draft, and I hope enough comments were brought up about the combined rating scheme.


airborne,

Another problem I see is decisions made without an oral presentation of the findings. If the decision is made just on a written paper as you suggest, your agency will have problems. Effective decisions are made with briefings, slides, and lots of dialogue.


Airborne373 has stated...

The proposed Army Source Selection Supplement (pg. 17) states that the Army prefers the combined single rating of methodology 1, but that the separate risk ratings may be applicable in the R&D area.

Besides what is currently available, AFARS Appendix AA, dated February 26, 2009, Airborne373 has referenced a "proposed" Source Selection Supplement. Has new guidance been issued?


This topic is now closed to further replies.