Immediate Comparative Analysis for Fair Opportunity Selection


Guardian


On 1/16/2020 at 4:26 PM, ji20874 said:

https://www.dhs.gov/sites/default/files/publications/pil_boot_camp_workbook.pdf

Technique 5 in the DHS Procurement Innovation Lab Boot Camp Workbook (link above) is on comparative evaluation -- and Technique 4 is on down-select.  You can do a down-select on a single non-price factor (or a couple of non-price factors), and you can make your down-select decision on a comparative evaluation basis (no adjectival ratings).

JI, Innovation Technique 5, Comparative Analysis, states, "Probably more suited to acquisitions with a few quotes and a few evaluation factors."  The point of Phase One, a downselect, is to narrow a larger pool of offerors down to a more manageable pool, i.e., the best qualified.  Based on our market research, I can conjecture how many task order proposals we might receive, but I cannot say with any certainty.  I have to presuppose that all contractors under the socio-economic category for which we are setting our FOPR aside might submit task order proposals.  It should be noted that there will be a page limit placed on their responses to Phase One.  Given that we will evaluate only one non-price factor, while also considering price, we have met the second condition of the statement above: "Probably more suited to acquisitions with...a few evaluation factors."

During Phase Two, we will also apply a comparative analysis.  In this subsequent phase, both criteria recommended above will be met, that is, fewer offerors (only the most qualified) and few evaluation factors (we will evaluate only technical approach in Phase Two).  Per the PIL's recommendation, an immediate comparative analysis seems more appropriate in Phase Two, when we are guaranteed fewer offerors to evaluate.  However, we are proposing to apply an immediate comparative analysis in both Phases One and Two.

Given that we do not know how many offerors will submit responses to Phase One, might there be a better approach than immediate comparative analysis to incorporate into the downselect phase, Phase One?

 

On 1/18/2020 at 12:54 AM, C Culham said:

First, we'll evaluate for experience. We'll compare A's description of its experience to B's and decide, subjectively, which has the better experience by taking note of asserted facts, identifying differences, determining their significance to us, and documenting our conclusions. Let's say we decide that A is better than B. We'll then compare A to C. This time we think C is better than A. Since C is better than A and A is better than B, we assume that C is also better than B. We'll then compare C to D. We decide that D is better than C. Since D is better than C and C is better than A and B, D is also better than A and B. So D is best on experience. Since there were four offerors, and since D is best, we'll give D four points. Since C is better than A and B we'll give C three points. Since A is better than B we'll give A two points. Finally, we'll give B one point.

Experience: D = 4, C = 3, A = 2, and B = 1.

Second, we'll evaluate for past performance, using the same procedure as we did for experience. This time the result is as follows:

Past performance: D = 4, A = 3, B = 2, and C = 1.

Third, we now compare the four quoters' prices. This is easy. The lowest price gets four points and the highest gets one point. The result is:

Price: B = 4, A = 3, C = 2, and D = 1.

Fourth, we total the points.

A = 8, B = 7, C = 6, D = 9.

Fifth, D is best overall, so we award to D.  

It is in the Government's best interest to apply a tradeoff approach for my agency's requirement.  The approach described above would not work well for our requirement because it does not apply the weighting found in a tradeoff.  Instead, it represents every difference between offerors, be it marginal or wide, by a single point.  For example, say Company D's experience is only slightly better than Company C's, while Company C's experience is significantly better than Company A's.  In both comparisons, we assign only a one-point difference between the two.  The same shortcoming would manifest in past performance or any other non-price factor.  The sequential-points approach plays out with even greater imbalance when we factor in our scores for price.  One company might receive a "4" while the next lowest-priced company receives a "3", yet the difference between their prices might be less than $100, a nearly inconsequential amount in a multi-million dollar acquisition.  In fact, Offerors B, A, and C could all be neck and neck on pricing while D's price is 100% more than the next lowest, yet, as the hypothetical scenario above shows, D is selected for award.  The net result might be that the Government ends up paying significantly more for an offeror that is only slightly better on the other factors.
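To illustrate the distortion, here is a minimal sketch in Python of the sequential-points scheme described above; the prices are hypothetical, invented only for this example:

# Hypothetical illustration of the sequential-points scheme described above.
# The prices are invented; the non-price rank points mirror the example.

# Non-price rank points from the pairwise comparisons (best = most points).
experience = {"D": 4, "C": 3, "A": 2, "B": 1}
past_performance = {"D": 4, "A": 3, "B": 2, "C": 1}

# Hypothetical prices: B, A, and C are nearly tied; D is twice the lowest.
prices = {"B": 1_000_000, "A": 1_000_050, "C": 1_000_090, "D": 2_000_000}

# Rank points for price: the lowest price earns the most points,
# regardless of how small the dollar gaps actually are.
ordered = sorted(prices, key=prices.get)  # cheapest first
price_points = {firm: len(ordered) - i for i, firm in enumerate(ordered)}

totals = {f: experience[f] + past_performance[f] + price_points[f] for f in prices}
winner = max(totals, key=totals.get)

print(price_points)  # {'B': 4, 'A': 3, 'C': 2, 'D': 1} despite a $90 spread among B, A, C
print(totals)        # {'B': 7, 'A': 8, 'C': 6, 'D': 9}
print(winner)        # D, at roughly double the lowest price

A $50 gap and a $1,000,000 gap each cost exactly one point, which is precisely the weighting problem described above.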


1 hour ago, Guardian said:

...we are proposing to apply an immediate comparative analysis in both phases one and two...

That’s fine.  The process may become cumbersome with too many offers or factors, but there is no defined cut-off.  I hope it works for you.

I wouldn’t try to use numerical points — just subjective comparative assessments with a tradeoff inherent in the process.  But that’s me.


On 1/17/2020 at 8:35 PM, formerfed said:

Might an evaluator's confidence rating be an example of what you're talking about?  I can see that subjectivity plays into assessing how well an offeror understands the requirement, the degree of soundness in a technical approach, and the likelihood of success.

Yes.  Really, most of our evaluations and selections are more subjective than objective, and I am okay with that.  A point scheme may seem objective (85 is better than 75, right?), but it might be just a veneer.  If a company got 40 out of 50 points for factor one, 30 out of 30 points for factor two, and 15 out of 20 points for factor three, for a total of 85 points, well, assigning 40 instead of 38 or 42 was probably a subjective process in the first place.   Truly objective evaluations and selections can be done, but I think most major acquisitions are really subjective (even if there is an objective veneer).  Really, I don’t like veneers — I much prefer honesty, so I am okay with admitting that there is real subjectivity in our evaluation and selection processes.  Subjectivity is not inherently bad — it can be professional, legitimate, honorable, and so forth.  Subjectivity is a positive value that acquisition professionals bring to the table.  


8 minutes ago, Guardian said:

Given that we do not know how many offerors will submit responses to Phase One, might there be a better approach than immediate comparative analysis to incorporate into the downselect phase, Phase One?

Probably yes.  Phase One's purpose is to quickly screen for proposals that are worth evaluating in more detail during Phase Two.  So we are looking for:

  • Criteria that are pretty good at indicating high value.
  • Criteria that are relatively easy to evaluate: objective, needing minimal interpretation, which usually means quantitative data.
  • Proposal information that is relatively low-cost for offerors to provide.
  • Avoiding FAR 15 procedures, or the appearance of using FAR 15 procedures.

Pretty Good Screening Criteria I Have Actually Used At Least Once, And Did Not Regret Using

  • Samples.   I had a requirement that involved, basically, scrubbing and QCing some messy and complicated data.   Our scientists made some fake data, with some subtle errors in it.  An evaluation factor was 'analyze this fake data and tell us what you found.'  If they don't find the errors - Fail.  If they find the errors - Meh.  If they find the errors and explain them - Good.   This took less than 10 minutes to evaluate per proposal.  Everyone involved (Program personnel and Offerors) liked it.
  • Quasi-Past-Performance.  Criteria about offeror's experience and performance on similar work, but not FAR 15 Past Performance exactly.  PIL has something about this. 

 

  • Offerors have some essential credentials, warrants, licenses, etc.  CMMI Level II.  FedRAMP.  SCIF.  Has appropriate State licenses, etc.  Pretty close to responsibility determination though, so be careful.

 

  • Challenge Questions / Short Q&As.  Offerors give brief (like really brief - 1/2 page, a few slides, 10 min. presentation) pitches for how the offeror proposes to do the work.  This is much more subjective, but if done right, this can be very productive.  In my area, IT, we would probably ask about proposed architecture, platforms, software development methodology, etc.  Often our technical folks will immediately know they do or do not like a particular approach based on these answers.

 

  • Summary "Technical Approach."Give us the short version.  1 page.

 

  • Key Personnel.  Sometimes in my area (IT), at the end of the day we are buying smart people who can code.  The rest is window dressing.  You may consider just asking for KP info at first.  But be careful with this approach; it has some serious downsides and, generally, is appropriate only for select circumstances (small and highly technical).  A rough sketch of how such a Phase One screen might be recorded follows this list.
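Here is a minimal sketch in Python of how a Phase One screen built from criteria like those above might be recorded and applied; the criteria names, ratings, and offerors are all invented for illustration:

# Hypothetical sketch of a Phase One down-select screen. The criteria,
# ratings, and offerors are invented; they are not from the thread.

from enum import Enum

class Rating(Enum):
    FAIL = 0   # e.g., did not find the seeded errors in the sample data
    MEH = 1    # found the errors but did not explain them
    GOOD = 2   # found and explained the errors

# Each quote gets a rating per screening criterion. The judgments
# themselves are human; they are only recorded here as data.
screen = {
    "Offeror A": {"sample": Rating.GOOD, "credentials": True},
    "Offeror B": {"sample": Rating.MEH,  "credentials": True},
    "Offeror C": {"sample": Rating.FAIL, "credentials": True},
    "Offeror D": {"sample": Rating.GOOD, "credentials": False},
}

# Down-select rule: advance only offerors that pass the credentials gate
# and rate at least MEH on the sample exercise.
advance = [name for name, r in screen.items()
           if r["credentials"] and r["sample"] is not Rating.FAIL]

print(advance)  # ['Offeror A', 'Offeror B']

The point is only that the down-select rule stays simple and auditable; the evaluative judgments behind each rating remain subjective.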

 

  


8 hours ago, Ibn Battuta said:

What do you mean by subjective and objective?

I would draw your attention to FAR 1.108(a), Words and terms, which states, "Definitions in part 2 apply to the entire regulation unless specifically defined in another part, subpart, section, provision, or clause. Words or terms defined in a specific part, subpart, section, provision, or clause have that meaning when used in that part, subpart, section, provision, or clause. Undefined words retain their common dictionary meaning."

Moreover, I would point you to the link JI included, specifically the cited GAO case, AlliantCorps, LLC; B-417126; B-417126.3; B-417126.4; February 27, 2019 (see page 31 under the GAO Guide for Comparative Evaluations).  The GAO has rarely held contracting officers to a standard of being "beyond reproach" or of "perfection in how we document our selection decisions."  In fact, such standards are antithetical to streamlining as contemplated by FASA and initiatives such as the DHS PIL.  I'm all for semantics and a good hearty debate, but rhetorical interchanges over the meaning of commonly used words border on an exercise in "picking fly feces out of pepper with boxing gloves."  That is an exercise in which I, for one, wish not to engage.

The GAO has set a standard affording KOs generous latitude.  In documenting a decision, we are to meet the minimum requirements set forth in the solicitation's language (read AlliantCorps, LLC, cited above).  Sometimes less is more.  I have seen instances in which our attorneys ask us to create multiple layers documenting our evaluation decisions under fair opportunity.  Since they are equally worried about including something in a subsequent layer that was not in the previous one, it becomes an unnecessary and redundant exercise of cutting and pasting.  We are all human.  We do the best we can, and if we read GAO precedent, then we know we worry far too much at the expense of efficiency and innovation.  Maybe it's our agency attorneys and others giving rise to such worry.  When an attorney gives his or her opinion, we should not be afraid to request the legal basis of that opinion.  Sometimes I get a reply of, "That is the way we have always done it."  I tend to shudder at such a response.  Every selection decision is going to be tainted with at least a teaspoon of subjectivity.  More often than not, the subjectivity comes from the evaluation team, and we as KOs do our best to minimize it.  I have spent a long time thinking about the perfect evaluation criteria.  To date, no one, regardless of age, experience, or intellect, has been able to gift them to me.  I am not confident that day will ever come.  We must also bear in mind that it is never a one-size-fits-all approach.  The GAO focuses on points of protest that would make a substantive difference in the results of an evaluation, i.e., that which might tip the scales in favor of another offeror.  That ought to continue to be our focus.


ibn battuta,

Hmmm, a few days ago, you refused to discuss definitions.  I'm glad you're more open now.  Below are some quotes from GAO bid protest decisions where GAO uses the word "subjective."  I think my use of the word is the same as the GAO's.  I hope these are helpful to you in your continuous learning...

  • The determination of the relative merits of proposals is the responsibility of the agency that solicited them, and requires weighing competing subjective considerations and exercising sound discretion...
  • For the most part, the evaluative conclusions to which Price Waterhouse objects are precisely the type of subjective judgments reserved to contracting officials, not our Office...
  • The evaluation of past performance, by its very nature, is subjective...
  • The protester primarily challenges the differences in the findings between the SSEB and SSAC and contends that the SSA should have adopted the findings of the SSEB with regard to the evaluation of its proposal, rather than the SSAC’s evaluation, which the protester argued was “subjective” and unreasonable.
  • Finally, the solicitation advised offerors that the agency intended to make the source selection decision without conducting discussions and noted that "the best value evaluation is, in and of itself, a subjective assessment by the Government."
  • When conducting a best-value tradeoff analysis . . . an agency may not simply rely on the assigned adjectival ratings to determine which proposal offers the best value because evaluation scores–whether they are numerical scores, colors, or adjectival ratings–are merely guides to intelligent decision-making and often reflect the disparate, subjective judgments of the evaluators...

GAO testimony before a House subcommittee...

  • Procuring agencies are obligated to conduct proposal evaluations in accordance with the evaluation scheme set forth in the solicitation. Such proposal evaluation judgments are by their nature often subjective; nevertheless, the exercise of these judgments in the evaluation of proposals must be reasonable and must bear a rational relationship to the announced criteria upon which the successful competitor is to be selected...

24 minutes ago, Guardian said:

I would draw your attention to FAR 1.108(a), Words and terms, which states, "Definitions in part 2 apply to the entire regulation unless specifically defined in another part, subpart, section, provision, or clause. Words or terms defined in a specific part, subpart, section, provision, or clause have that meaning when used in that part, subpart, section, provision, or clause. Undefined words retain their common dictionary meaning."


Guardian,

That convention would apply to the use of words or terms in the FAR. Ibn is asking ji what he means when he uses those words. I don't think ji has explained what he means--he merely provided examples of the use of "subjective". But his answer may satisfy Ibn. 


26 minutes ago, Don Mansfield said:

Guardian,

That convention would apply to the use of words or terms in the FAR. Ibn is asking ji what he means when he uses those words. I don't think ji has explained what he means--he merely provided examples of the use of "subjective". But his answer may satisfy Ibn. 

Fair enough.


21 minutes ago, Ibn Battuta said:

Nor did I ask for your silly lecture or your unwarranted and vulgar criticism.

Ibn Battuta,

My dear father was a sergeant in the Army.  I believe this is a term he brought back from his enlistment, which my brother and I heard countless times growing up, among many of his other favorite sayings and aphorisms from the service.  I am sorry; no harm or insult was intended.  I chuckle every time I think of that saying, which in this case I replaced with a euphemism.  As you can guess, it means being focused on minor (perhaps trivial) details.  I never meant to come across as vulgar, nor to subject you to a lecture.  This forum can get energetic and mildly sarky, as I'm sure you are aware, but it was not my goal to make you feel anything less than someone with whom I would want to share insights and take advice.


15 hours ago, Ibn Battuta said:

@ji20874

  1. What do you mean by subjective and objective?
  2. What makes an evaluation subjective versus objective?
  3. What are the respective characteristics of subjective and objective evaluations?

A short answer from Jeffrey Glen at Businessdictionary.com:

”Subjective refers to personal perspectives, feelings, or opinions entering the decision making process.[*]

Objective refers to the elimination of subjective perspectives and a process that is purely based on hard facts.”

*Some other characteristics of subjective thinking may include being influenced by perceptions, a "gut feel," intuition, or prejudices.

Another characteristic could be basing an opinion of something or somebody upon your past experiences, regardless of how a firm may have changed, addressed prior performance, improved, etc.; or relying upon prior great experiences without regard to degradation of a person's or company's qualifications or recent less-than-stellar performance; or "reading something into" a proposal.

Not all subjectivity is bad, but it may be challenged and would then require justification that would survive the challenge.  If one has current, ongoing experience with a person or company, irrespective of what is or isn't specifically stated in a proposal (e.g., "past performance" and "current performance"), one might well use subjective reasoning to form a degree of confidence in that person or firm.

Not all persons or firms are great proposal writers but may be very reliable performers.

The opposite is also true. There are companies with flowery or great proposal writers that sometimes or often don't or can't deliver. As has been said before, proposal writers write proposals to win jobs, while often others, who may not have even read the proposal or been briefed on it, execute the work.

Plus, my experience with government performance evaluations is that they are often poorly or hastily written as an obligation rather than a reliable, valuable source of factual information.

I'm tired of writing, so I won't go into a treatise on objective thinking or the mechanics of subjective and objective "evaluations" as they relate to the acquisition process.

Suffice it to say that there are dangers and possible pitfalls to relying solely upon objective or subjective thinking processes during evaluations of proposal submissions or any major decision making.

That’s why I like to negotiate,  to request clarifications, and to conduct discussions, either face to face or at least telephonically (with someone I know or have met, face to face).

Subjectively, I "know" that firms "often" don't provide their best pricing in initial competitive proposals, especially for design-build and construction contracts. Firms have told me that.

I can often bargain for better performance where the initial proposal meets the minimum Government requirements but is less than optimal or is not favorable to the stakeholders. We have also helped firms better understand aspects of the job, which helps them improve efficiency, note mistakes or omissions, or improve performance to positively affect their bottom-line earnings. Win-win is the goal!


20 hours ago, General.Zhukov said:

Probably yes.  Phase One's purpose is to quickly screen for proposals that are worth evaluating in more detail during Phase Two...

General, you offer a lot of good suggestions above for evaluation criteria one might use to conduct a downselect.  My original question is focused on how to support a selection decision through documentation after the Government has determined which criteria it will use to evaluate.  Our criteria for evaluation must always be stated in our solicitation.  However, the way we document our evaluation and selection does not necessarily have to be included in the solicitation language.  I have found that many decision authorities prefer to tell contractors a lot about how they will document their decisions.  Whether this is a good idea is arguable.  When was the last time anybody went on a job interview and the hiring manager and panel disclosed to the candidate exactly how they were going to document her evaluation behind the scenes?  Most of the time, she'd be lucky to get a call back.

In my office, we tend to take the stated criteria (for example, any of the criteria you suggested above) and apply them to each individual contractor, performing a "thorough" evaluation.  The evaluation team then documents its findings for each contractor relative to those criteria.  We do that for each contractor, be it two or twenty-two.  We then take all that information and perform a comparative analysis.  Generally, General, this involves comparing each contractor that will move on to Phase Two to each contractor that will not.  We tend to cut and paste our individual findings for each contractor and string those statements (findings) together using comparative language.  The following serves as a truncated example:

Phase 1 - Individual Evaluations of Experience

Contractor A

Contractor A has recent experience in three contracts (within the past three years) performing work largely comparable to that described in the SOW.

Contractor B

Contractor B has recent experience in one contract (within the past three years) performing work largely comparable to that described in the SOW.

Phase 2 - Comparative Evaluations of Experience

Contractor A is superior to Contractor B in the area of Experience, as Contractor A has recent experience in three contracts (within the past three years) performing work largely comparable to that described in the SOW, versus Contractor B, which has recent experience in one contract (within the past three years) performing work largely comparable to that described in the SOW.

Granted, this is an overly general example short on specifics.  But it is an example of the model my office seems to favor.

The individual evaluations can run to double-digit page counts (in a fair opportunity selection), depending on how many offerors we are required to evaluate.  The comparative evaluations, which tend to regurgitate the same information found in the individual evaluations, can also run to double digits.
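To make the pattern concrete, here is a minimal sketch in Python (with hypothetical contractor names and findings) of how the comparative statement is assembled by restating the individual findings with comparative language:

# Hypothetical sketch of the documentation pattern described above:
# individual findings are written once per contractor, then strung
# together with comparative language. Names and findings are invented.

findings = {
    "Contractor A": ("has recent experience in three contracts (within the "
                     "past three years) performing work largely comparable "
                     "to that described in the SOW"),
    "Contractor B": ("has recent experience in one contract (within the "
                     "past three years) performing work largely comparable "
                     "to that described in the SOW"),
}

def comparative_statement(better: str, worse: str, factor: str = "Experience") -> str:
    """Strings two individual findings together with comparative language."""
    return (f"{better} is superior to {worse} in the area of {factor}, as "
            f"{better} {findings[better]}, whereas {worse} {findings[worse]}.")

print(comparative_statement("Contractor A", "Contractor B"))

As the output shows, the comparative write-up repeats both individual findings verbatim, which is exactly the redundancy noted above.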

The template in the PIL Boot Camp Workbook, specifically Technique 5, consolidates the above information into a single document, which it refers to as a "Comparative Evaluation."  On its face, I prefer this approach, as it provides a model and rhythm for moving through evaluations of multiple offerors with greater efficiency and less documentation.

General, based on the criteria you suggested above, how would you document your evaluations? Would you not use a comparative analysis to downselect? How then might you otherwise document your downselect decision?


5 minutes ago, Guardian said:

General, you offer a lot of good suggestions above for evaluation criteria one might use to conduct a downselect.  My original question is focused on how to support a selection decision through documentation after the Government has determined which criteria it will use to evaluate...

Just addressing here one aspect of the criteria stated above:

A firm may have more experience (what they did)  than another.  But how WELL did they perform - how successful were they in meeting the customers’ requirements and expectations?

Recent, relevant experience and past performance are related but separate aspects of evaluating and developing a confidence rating. 


@Guardian

I waded through all the responses after my latest post a couple of days ago.  Understand that the single approach I offered, just one of possibly tens of ways, was posted in response to @ji's challenge to me, not necessarily to provide a process that met your needs.  In reading the rest of your posts, I completely understand your thoughts and concerns.  My gut reaction to this entire thread is that everyone gets too wrapped around the axle about what the comparative process needs to be.  They read the language and specific words of FAR 16.505 as if it were promoting FAR 15.3 procedures.  I take the other view, as 16.505 says it is not.  Noting this, I then seriously pose......

Why couldn't one use my suggested "Fair Opportunity Placement Procedures Clause," get responses, lay them out on a table, read through them to compare them, and then sit down and write out an award decision rationale, signed by the CO, that is made on a best-value basis, states the basis for award and the relative importance of quality (non-cost factor(s)) and price or cost factors, and describes the tradeoffs made without quantifying them?

After all, the only thing 16.505 requires is that I give everyone a fair opportunity to be considered; how I estimate, measure, or note the similarity or dissimilarity between the responses I receive is completely within my discretion.  I believe we do this in a personal way every day with every decision we make to buy a car, paint a house, get a lawyer, pick a doctor, etc., and if someone were to require me to justify why I bought the car, I believe I could simply document or express the measure of my tradeoff decision without having to have some dang rating system to quantify it.

 


22 hours ago, Ibn Battuta said:

 

If someone had asked me what I think the difference is between subjective and objective evaluation, I would have said that an objective evaluation is one that turns on the observable and measurable attributes of the thing being evaluated--the object of the evaluation--and not on the opinions of the person conducting the evaluation--the subject.

 

 

FWIW, I think that is a great and usable definition, which can easily be mapped to acquisition scenarios.

Subjective = I'm looking for intangibles; AKA, the "I'll know it when I see it" philosophy, which assumes that the proposal/quote will educate me on things or approaches of which I was not previously aware. 

Objective =  I know what constitutes superior performance before I ever look at a single proposal.

One of my concerns is that I have seen COs state that they are being "innovative" by having the prospective vendors provide oral presentations in response to silly on-the-spot challenges which have nothing to do with the scope of the effort ("contractor team will have 3 hours to develop a response to an example XXX scenario") .  Might as well have them engage in Feats of Strength.  Too often 1102s get wrapped up in everything except what actually matters.  "Lazy" and "innovative" are sometimes too close for comfort.


2 hours ago, Ibn Battuta said:

source selection.

At the hazard of starting another big debate, I can see the value of those readings as they apply to the selection of contractors for the multiple award, but as they apply to the application of fair opportunity (and this thread), I do not agree.  Anything beyond comparing the responses and writing up the selection decision without quantifying the tradeoff is again akin to FAR 15.3 processes.


35 minutes ago, C Culham said:

At the hazard of starting another big debate, I can see the value of those readings as they apply to the selection of contractors for the multiple award, but as they apply to the application of fair opportunity (and this thread), I do not agree.  Anything beyond comparing the responses and writing up the selection decision without quantifying the tradeoff is again akin to FAR 15.3 processes.

Amen. And competitive  task order procedures are NOT source selections. 


1 hour ago, C Culham said:

At the hazard of starting another big debate, I can see the value of those readings as they apply to the selection of contractors for the multiple award...

 

41 minutes ago, joel hoffman said:

Amen. And competitive  task order procedures are NOT source selections. 

Exactly.  Two things bother me about multiple award contracts.  This ordering process is one.  The other is awarding contracts to many, many sources.  IMO, the number of awards shouldn't exceed what can reliably yield around three competitive TO responses for each order.  There are exceptions, of course.


22 minutes ago, Ibn Battuta said:

pessimistic

Yes

 

22 minutes ago, Ibn Battuta said:

realistic.

Nope, not on my watch, because I have lived to see many things that were called "pipe dreams" become reality.  I for one believe change is in the hands of us all, and we can make it happen.  It has been proven that "pipe dreams" can become reality.


53 minutes ago, C Culham said:

Nope, not on my watch, because I have lived to see many things that were called "pipe dreams" become reality.  I for one believe change is in the hands of us all, and we can make it happen.  It has been proven that "pipe dreams" can become reality.

I hope you’re right as well.  What makes change happen in the government is rewarding those that try new things.  Recognize and set examples of the kind of behavior that’s wanted.  Let the rank and file see risk takers and innovators are those that get ahead.


Ibn:

You wrote:


You write a lot. Why not write an article about "fair opportunity" for NCMA's Contract Management magazine? They are eager for articles from 1102s, and you'll reach an audience that you might not reach here. It can just be an opinion piece.

I've just counted the reading statistics for the articles posted on Wifcon.com's Articles page for 2019.  Articles written nearly 2 decades ago and posted here are still being read thousands of times each year--including in 2019.


This topic is now closed to further replies.