Everything posted by Me_BOX_Me

  1. This is very true on multi-year or multiple-year contracts. Understanding how it evolved (and why) would inform the acquisition team as they craft a follow-on or other multi-year/multiple-year contracts. Gotta do that retrospective, though.
  2. That's the conclusion my agency arrived at 5 years ago. Now we either do 1) "management approaches" that are so high level they might as well not exist, 2) a basic technical approach to contract transition, or 3) what I did on my last SEB, which was to evaluate only Key Personnel and Past Performance. It has worked very well for us so far, and I try to strip out the technical/management approach on each SEB I run. Most of the contracts are follow-ons anyway, so there's nothing groundbreaking the agency wants or needs beyond the BS dog and pony show. Still, there is valuable data in those evaluations, and we would do well to mine it for what we can. CPAR evaluators rarely know what was in the proposal; they barely know what's in the PWS, especially as it changes over time. Since you need evidence that technical and management approaches are not correlated with outcomes, you have to do the comparative assessment. Otherwise, senior leaders and source selection officials will continue to lean on their biases and insist on using their "expertise" to evaluate technical approaches.
  3. No. I was referring to this part: "At the end of performance no one compares what was described in the winning proposal to what was actually received and writes a comparative assessment." Just saying that what Vern has proposed hasn't been attempted, certainly not at scale. So socializing what that comparative assessment might look like would be helpful to people who are interested in improving outcomes or pursuing reform along the lines Vern suggests. None of what I quoted talked about how to write better evaluations; it was about what we do with those documents post-award, if anything.
  4. I cannot describe how frustrated the engineers on my SEBs get when we have to make decisions based on these wholly qualitative terms. It takes months and months of writing and re-writing to ensure all the analysis is consistent. The implementation varies from SEB to SEB. It's easily protestable, because the offerors have a very different perspective on what they wrote down. At the same time, it's difficult to win those protests, because judges defer to SEBs on those qualitative assessments as long as they are consistent and followed the written RFP. Thanks for the essay link, btw. Hadn't read that one yet. This seems like the most actionable part of the dialogue. It could easily be part of an integrated project team's work as it starts market research and acquisition planning. Even better would be collecting this data throughout the contract; that would be true continuous improvement. Vern et al - it would be helpful for Wifcon to facilitate sharing knowledge and tools to help professionals establish these initiatives within their agencies. Unfortunately, for senior managers to sign on, something would have to be very well developed already. My group likely has the capacity to take something like this on in the coming years; we have the centralized data, the expertise, and the willingness to try. That isn't the main focus of this discussion, but it's the one we can affect at the deckplate level. Everything else is for policy wonks, hopefully supported by professionals through data calls and surveys.
  5. Also - it seems like the proposal (as constructed) would encourage much more bidding, with offerors hoping to recoup solicitation costs on a technicality. So it would save the government money on the protest end, but it would also draw out evaluation times.
  6. Joining this parade a bit late. Whatever happened to the NDAA proposal that would force large unsuccessful protesters to compensate agencies for time spent defending source selection decisions? That seemed like a logical first step, and it vanished into the ether almost as soon as it was proposed. When we can't even agree to hold large unsuccessful offerors financially accountable for their frivolous bid protests, how can we expect to revamp the entire system? I agree with other posters that the COFC should remain and that ADR should be emphasized in some cases. I will say that my current agency has a robust quality assurance process in place for source selections, and it has dramatically improved our documentation. Despite this, the protest rate is still very high, especially from unsuccessful incumbents who just want to milk a few more months' fee from their contracts.
  7. I'm with DOE. We had selection procedures that included confidence assessments but recently changed those in favor of simple strengths and weaknesses. We take care of recency by requesting/evaluating only the past X years of past performance information, and we sum up with an adjectival rating for the offeror that combines relevancy and strengths/weaknesses.
  8. Thank you for your insight, Vern. Much better put than the various legal articles and cases I've been reading. I think my agency is trying to have its cake and eat it too. A few years ago it nixed "Experience" as a separate evaluation factor in favor of a stand-alone Past Performance factor comprising both assessments you reference. So it becomes a multi-step process where we first determine "relevance," and then determine whether there are Strengths or Weaknesses associated with that relevant performance. So the first step is (A) experience, and the second step is (B) past performance, even though Step (B) only happens when a contract is rated well in Step (A). What I'm hearing from you is a suggestion that complexity should be evaluated hand in hand with scope, and that space should be provided for offerors to describe those unpredictable conditions or events, along with their responses/corrective actions, to demonstrate how those conditions might be similar to the proposed scope of work/environment.
  9. In my agency, standard practice for past performance (PP) evaluation involves first determining the relevancy of past performance information to the proposed scope of work. We do this by evaluating the size of the PP contract, its scope (by reviewing proposal information or finding the PWS), and the PP contract's "complexity". Size and scope have their own definitions, but I want to focus on complexity. Complexity is generally defined in the RFP as "performance challenges", which vary from RFP to RFP and could include subcontractor management, management of large complex contracts in highly regulated industries, cost efficiencies, etc. This results in separate documentation in our evaluation report speaking specifically to how the scope may or may not be relevant, and then how the complexity may or may not be relevant.

     Background: My agency requests that offerors provide a few pages of information on two or three selected past performance (or "reference") contracts to aid in evaluation. This includes discussion of scope relevance and complexity relevance, which are separate entries on the provided form. We then evaluate other past performance information that is available to us.

     My question to the forum is whether other agencies regularly evaluate the complexity of past performance contracts separately from scope (from my readings of GAO and COFC decisions, it seems to be a fairly standard practice), and whether or not evaluating complexity adds value. In my experience:

     Lesson learned: Complexity is generally not a discriminator in relevancy determinations.

     Root cause: The RFP definition of complexity as "performance challenges" nearly always comes in the context of scope, making it difficult to distinguish between the two. Offerors can struggle with the concept of complexity and with addressing it on the reference contract information forms when it is separated from a discussion of how a given contract is relevant to the scope of the proposed PWS. Complexity is difficult and time-consuming to evaluate when reviewing non-reference contracts. The acquisition community has used the phrase "size, scope, and complexity" for decades without carefully considering what the terms mean and how each aids evaluation.

     Recommendation: Study the potential effects of removing complexity as an independently evaluated item in the past performance relevancy evaluation. SEBs can still include complexity as part of the scope relevancy evaluation. For example: "Contract relevance will be determined based on size and scope, including complexity."

     Conclusion: SEBs can safely rely on size and scope alone to determine relevancy. RFPs can still solicit examples of performance challenges in the scope description. Eliminating separate complexity determinations would streamline the relevancy evaluation process.

     Is "size, scope, and complexity" standard language in your RFPs? How do you approach complexity? Do you agree or disagree with my points, and why? Thank you in advance for your thoughts. Note that I am not an 1102, but I can pick my way around the FAR when I need to. Please be gentle. :)