
page limitations on proposals


contractor100


Guest Vern Edwards

I wrote the following in 1994. It was published in a professional newsletter in October of that year.

Quote

Almost everyone involved with Government contracting can tell a horror story about a “best value” source selection that involved the development of a lengthy and costly proposal, about a source selection that took two years to complete, and about a protest that delayed an important project and increased its costs. Legislators, policymakers, and acquisition managers are currently looking for ways to “streamline” the source-selection process. I would suggest that the single most effective thing acquisition managers can do to streamline the best value source-selection process is to improve their choices of evaluation factors for award.

Typical RFPs--An Essay-Writing Contest

Consider the typical Request for Proposals (RFP). It includes a statement of work or specification that describes the Government’s requirements as well as the Government’s preferred contract terms and conditions. It also includes evaluation factors for award like “understanding of the problem,” “soundness of approach,” or “quality of the management plan.” Such RFPs usually instruct offerors to read the statement of work or specification and then to propose a plan or “approach” to doing the work, but caution offerors that approaches that depart from the terms of the RFP may result in rejection of the proposal.

For example, a current RFP for task order “advisory and assistance services” includes a statement of work that describes about 14 generic tasks and subtasks, such as “[a]dvise and assist the [agency’s] Executive Committee on the updating and/or development and implementation of a new [total quality management (TQM)] plan,” and “[d]esign and facilitate a wide variety of structured activities. . .for the purpose of organizational improvement.” The RFP states that technical factors will comprise 80% of the total evaluation weight and lists four equally important technical factors: (1) company background and experience, (2) technical and management approach, (3) personnel qualifications, and (4) plans for use of outside resources. The RFP goes on to state that technical proposals must contain “detailed explanations of proposed approaches to performing and accomplishing the work. . .and a specific outline of the actual tasks proposed to be performed in order to complete the work.” The RFP warns: “Repeating the work statement without elaborating on the specific tasks to be performed is unacceptable.”

In response, offerors will probably submit lengthy technical proposals because of the weight assigned to technical factors, the vagueness of the tasks in the statement of work, and the warning that offerors must submit an elaborate response or risk rejection. (The RFP did not include a proposal page limitation.) The proposals will be expensive to produce, to read, and to evaluate. How much of this is necessary?

RFPs typically include two broad categories of “technical” or “management” evaluation factors. The first category relates to factual matters about offeror capability and includes such factors as experience, past performance, key personnel qualifications, capability of facilities, and product specifications. The second category relates to offerors’ descriptions, promises, or predictions about what they will do, achieve, or deliver in the future. It includes descriptions of their plans, procedures, “design concepts,” and promises or predictions about their performance or the performance of their products. This category includes such factors as “understanding the problem,” “soundness of approach,” and “merits of the proposed design.”

It is in response to the second category of factors that offerors must write lengthy proposals, especially when an RFP combines those factors with instructions to provide detailed explanations and warnings not to repeat the work statement without elaboration. These factors relate to assertions about future events, which are not verifiable. To respond to the first category of factors, an offeror should report facts. But to respond to the second category of factors, the offeror must invent something to say.

When an RFP includes a complete description of the Government’s requirements and terms and conditions, the evaluation factors constitute little more than a test of the offerors’ knowledge and rhetorical skill. The offerors’ technical and management proposals will play little or no role in contract formation. They will simply provide information for the Government to use in evaluating and comparing the offerors. The Government assumes that the technical and management proposals will indicate the offerors’ relative capabilities and prospects for success. Thus, RFPs that include such factors effectively compel offerors to compete in essay-writing contests. Source-selection decisions based on these criteria too often reflect the ability of an offeror to write a good essay rather than its ability to do the work. (For a classic example, see SMS Data Products Group, Inc., GSBCA 8589-P, 87-1 BCA ¶ 19496, 1986 BPD ¶ 206.) The reaction of competent contractors to such requirements is entirely predictable. See an article in the proceedings of the fifth annual conference of the Association of Proposal Management Professionals, which was held in Washington, D.C. this past May, suggesting that companies develop reusable “plans” and “TQM blurbs” as a means of winning these essay-writing contests.

An offeror’s assertions--predictions, plans, and promises--about the future in a proposal are not reliable bases upon which to judge its capability and prospects for success for several reasons. First, the proposal may reflect the knowledge and writing skill of consultants and proposal writers rather than the performance capability of the offeror. Second, even if the offeror did write the proposal, the ability to promise, plan, or predict does not demonstrate the ability to execute. Third, a plan is merely a statement of intentions, made without perfect knowledge about the future. The future is rarely as we hope or expect it to be, and most projects do not proceed as planned. The ability to write a good plan does not demonstrate the ability to respond effectively to unanticipated contingencies. Fourth, the Government may lack the expertise to evaluate proposed plans and designs. Indeed, as users rather than designers, Government personnel may not be able reliably to predict project success based on a performance plan or predict product or system mission performance based on a conceptual design. Requirements to write proposals in response to factors in the second category increase the cost and time of source selection. But the claim that such proposals are essential to sound source-selection decisionmaking is dubious. Government agencies can streamline the source-selection process by omitting this second category of factors.

Illusory Benefits Of BAFOs

Agencies further increase the cost and time of source selection when they conduct discussions to tell offerors what they do not like (deficiencies) about their assertions, plans, conceptual designs, or predictions and then ask them to revise and resubmit them as best and final offers (BAFOs). (Agencies are telling offerors: “You flunked the first essay exam, but we’ll give you a chance to take it again.”) The BAFO procedure may add little of value because assertions, plans, predictions, and conceptual designs may not mean much. Moreover, the procedure may encourage offerors to reduce prices or costs and fees excessively in a desperate attempt to win the contract. The procedure’s benefits may thus be doubly illusory. The BAFO may not only add nothing of value, it may plant the seeds of conflict and poor performance.

RFPs For Research And Development

What about large research and development contracts (or design-build construction contracts)? Many would argue that solicitations for such programs should request proposals that include detailed--albeit preliminary or “conceptual”--design information (e.g., problem definition, feasibility, functional flow, requirements allocation, tradeoff, and other analyses), systems engineering plans, configuration management plans, and so forth. Such proposals often contain thousands of pages, cost millions of dollars to produce, and require months to study and evaluate. Certainly, tradition weighs in heavily on the side of those who argue the need for such proposals.

But one must wonder how many of the large systems that have been fielded by the National Aeronautics and Space Administration, the Department of Defense, and other agencies are significantly different from the design concept in the winning proposal. The relationship between the winning concept and the final product is often distant because the initial statement of Government requirements tends to be vague or ambiguous, incomplete, and tentative and because the factors that drive and constrain those requirements are often highly dynamic. The plans and proposed designs solicited by RFPs are, in fact, sales presentations. They are designed to persuade the Government that the offeror has the capability to succeed. They are the answers to essay questions.

Here are three propositions, offered in response to those who argue for the need for detailed technical proposals for research and development programs:

(1) In selecting a system development contractor, the Government is entering into a partnership that is established on the basis of a developmental capability and an expectation of success.

(2) Proposed project plans and preliminary designs will count for little in the long run because initial statements of requirements are usually vague or ambiguous, incomplete, and tentative, and because the factors that drive and constrain requirements will change during the course of development. Therefore, they are not the most reliable bases for predicting success in research and development procurements.

(3) Contractor experience, past performance, and key personnel qualifications are far more reliable indicators of developmental capability and predictors of success than factors such as “soundness of approach” or “merits of proposed design.”

The larger and more complex the system and the longer the period of development, the greater the force of these propositions.

Rethinking Evaluation Factors

The most reliable indicators of an offeror’s capability and prospects for success are its record of experience, its reputation for past performance, and the qualifications of its key personnel. Add price or estimated cost and fee, and you have all the best value evaluation factors most agencies will ever need to select a contractor. For supply contracts, the Government may need to include a factor relating to verification of product specifications. Facility capability and capacity also may be important. Omitting essay contest factors and their associated proposal preparation instructions will eliminate the cost and time associated with the preparation, evaluation, discussion, and revision of lengthy technical and management proposals. (For an excellent example of an effective, streamlined procurement, see CORVAC, Inc., Comp. Gen. Dec. B-244766, 91-2 CPD ¶ 454, a protest involving a procurement for hazardous waste processing in which the only evaluation factors were price and past performance.)

By rethinking their ideas about evaluation factors for award, acquisition managers can reduce the time and cost associated with best value source selection. Better evaluation factors will improve the quality of competition in Government contracting and benefit both Government and industry. Beneficial change can come without new legislation, new policy, or rewriting the Federal Acquisition Regulation.

In December 1994 the newsletter published two letters, which it received in response to my article:

Quote

We received the following letter from Bryan Wilkinson, Director, Compliance Guidelines, Teledyne, Inc., commenting on Vern Edwards' guest column:

"Vernon J. Edwards' column in your October 1994 issue was so clear and sensible that I doubt it will have any impact on the procurement process.

When I was doing my doctoral work in psychology and statistics 45 years ago, the validity and reliability of ratings and rankings were a major concern, both in psychology and statistics. There was much literature published back then on the problem. There are some techniques to improve the process. Factor analysis of the rating elements is one that comes to mind. The conclusion reached by many, including myself, was that ratings or rankings will come out the way the rater wants them to, no matter the rating scheme.

When I got into the Government contracting area, I was astounded and “horrified” at the evaluation schemes used to award Government contracts. The extent of the overruns and product inadequacies should make it obvious that the selection process does not work properly.

Mr. Edwards has it right (from both a statistical and psychological standpoint) when he says use the offerors' records of experience, reputations for past performance, the qualifications of their key people, and price or cost and fee. (Though the first three factors are probably highly correlated.) These appear to provide rating factors that will result in valid and reliable ratings.

As a child of the Depression, I hate to see my tax dollars go for essay contests."

We also received the following letter from Steven Kelman, Administrator of the Office of Federal Procurement Policy:

"I read with great interest the Vern Edwards article in the October issue of the REPORT regarding streamlining evaluation factors. I strongly agree with Vern that most of our source selection process takes on the character of an essay writing contest. I also strongly agree with Vern that evaluation criteria should be centered around price/cost and past performance. I am very pleased Vern took the initiative to write this article for you, and I look forward to its arousing great interest in discussions within the procurement community--and to watching some agencies try out his suggested approach."

In December of 1998, Profs. Ralph Nash and John Cibinic wrote this about proposal page limitations:

Quote

[T]he imposition of proposal page limits is not the way to go. The purpose of a procurement should be to select the offer that is most advantageous to the Government, not to have a contest to determine who can shoehorn the most information into the allotted pages. A well-written proposal may be a thing of beauty, but it doesn't perform the work. There is a win-win alternative, however. If agencies don't require technical proposals, they won't have to impose page limitations. This approach would have the added advantage of saving contractor bid and proposal expense and speeding up the procurement process. The sooner that Government agencies get rid of the idea that technical proposals are essential elements of best value procurements, the better off we all will be.

And now, 23 years after my 1994 article, think of the conversation that you all have been having in this thread. What does it tell you about your profession?


I didn't need to use page limitations in my solicitations for construction, services, or design-build contracts.

In the interest of brevity, consistency between proposals, and focus on the specific information we wanted to evaluate, I developed forms for relevant experience and related past performance, including an owner rating and a contact reference we could use for verification if necessary (not past performance questionnaires for offerors to get owners' references to fill out and return). We used the forms for the prime, the designer, and specific key trade subcontractors; for the prime's and the D-B designer's key personnel (only specifically identified positions); and for identifying the work the prime would self-perform. We also included an outline price breakdown (not a cost breakdown) and reserved the right to ask for it in the event that we needed it for price analysis or confirmation of their understanding of the work.

For D-B, we focused on certain specific concept design information and on the level of quality of certain proposed equipment and materials. For D-B drawings, we limited the info to some specific preliminary information.

I asked for certain organizational and management approach info but not detailed plans or detailed planned approaches.  

We did not have a problem with excessively long or wordy proposals, or with the typical lists of every project a firm had ever completed.

The competing firms liked seeing the same forms and a similar format for every construction and D-B solicitation.

 


Vern's and Nash and Cibinic's 1994-era articles were spot on and probably served as catalysts for the "streamlining" efforts and for the 1996-1997 FAR 15 rewrite. If you think FAR 15 is bad now, you should have seen it before that.

Unfortunately, many of those acquisition personnel either didn't understand the intent or didn't grasp the distinctions, so they taught newer personnel certain pre-1997 FAR approaches and limitations that remain misunderstood and are passed on to this day. Frustrating.


Guest Vern Edwards

Agencies set page limitations (and font style, size, and spacing limitations, and page margin requirements, etc.) because they wrote vague instructions for the preparation of "technical" and "management" proposals. Offerors, not sure what agencies were looking for, resorted to what I call "recon by fire," an old infantry tactic. They wrote as much as they could about everything they could, hoping to score enough points to get into discussions, where they hoped to find out what the dummies wanted. Agencies never understood that they were to blame for proposal bloviation. Then agencies resorted to ridiculously small page limitations, like the one described in the opening post, too stupid to realize that they didn't need technical ("narrative") proposals at all.  

I wrote this in 2013:

Quote

[O]n April 24, 2013, the Under Secretary of Defense, Acquisition, Technology and Logistics issued another “Better Buying Power 2.0” memorandum in which he stated: “The first responsibility of the acquisition workforce is to think.” Available at http://www.acq.osd.mil/docs/USD(AT&L)%20BBP%202.0%20Implementation%20Directive%20(24%20April%202013).pdf. See 55 GC ¶ 147. Okay, here is a thought from 20 years ago: Dispense with the “technical” essay exams. When capability is the key factor in contractor selection, rely instead on experience, past performance, and, in some cases, key personnel qualifications. Mr. Under Secretary, here is an even better thought: Talk to Congress about changing the law to permit the use of an architect-engineer-type process for selecting contractors to design and develop major systems (weapons, space, and information technology) and long-term, complex services. Eliminate head-to-head proposal-based competition because it encourages offerors to make unrealistic promises, takes too long, and costs too much. Get rid of the clarifications, communications, competitive range, and discussions apparatus of FAR 15.306. Instead, select contractors for such work based on their qualifications and the inventiveness and ingenuity shown in their independent research and development, and then negotiate one on one with the selectee to arrive at a realistic and fair business deal. Alternatively, you can keep doing what you've been doing, even though it is inefficient and counterproductive.

Acquisition personnel are often thoughtless about their work. It's maddening. They do the things they do because that's the way things have always been done, and they infect their trainees with cut and paste disease. Brain death is rampant in the ranks, from political appointees, through the ranks of the SES, down to the journeymen. All I can hope for is ruthless insurgency by the young.

Okay, back to "Rick and Morty." TINY RICK!!!

 


49 minutes ago, Vern Edwards said:

 

Acquisition personnel are often thoughtless about their work. It's maddening. They do the things they do because that's the way things have always been done, and they infect their trainees with cut and paste disease. Brain death is rampant in the ranks, from political appointees, through the ranks of the SES, down to the journeymen. All I can hope for is ruthless insurgency by the young.

 

 

To add to this, Contract Specialists I'd work with were constantly in multi-week training seminars. Yet I'd absolutely never see any changes in outcomes, because training all gets forgotten when they get back to whatever procurement culture their agency has fostered over the decades.


21 minutes ago, kevlar51 said:

To add to this, Contract Specialists I'd work with were constantly in multi-week training seminars. Yet I'd absolutely never see any changes in outcomes, because training all gets forgotten when they get back to whatever procurement culture their agency has fostered over the decades.

Things are much different now. When contract specialists return to the workplace after multi-week classroom training, acquisition outcomes improve by an average of 53.9% in terms of cost, quality, and delivery. These gains are directly attributable to multi-week classroom training.


Guest Vern Edwards
7 minutes ago, Don Mansfield said:

acquisition outcomes improve by an average of 53.9% in terms of cost, quality, and delivery

Don:

How are such determinations made? Are they made on the basis of objective assessments of recorded measurable empirical data or are they based on subjective, anecdotal "feedback"? If objective, who validates the measurements and assessments? How are causal determinations made? Are the data and verification reports published?

Vern


Hi, Vern,

Such determinations are based on an assessment of what others are likely to believe without becoming suspicious. Having said that, I'm going to revise my number to 33.85%. If "objective assessments of empirical data" include cherry-picking, then the answer to your question is "yes." Although, really, an "objective assessment of empirical data" and a "subjective, anecdotal assessment" amount to a distinction without a difference.


1 hour ago, Vern Edwards said:

Don:

How are such determinations made? Are they made on the basis of objective assessments of recorded measurable empirical data or are they based on subjective, anecdotal "feedback"? If objective, who validates the measurements and assessments? How are causal determinations made? Are the data and verification reports published?

Vern

Ok, Mr. Facts and Data,

All the support you need is in our Annual Report. You can save yourself some time by just searching for the term "acquisition outcome". After reading it, you may think that 33.85% is too conservative.


"Offerors, not sure what agencies were looking for, resorted to what I call "recon by fire," an old infantry tactic. They wrote as much as they could about everything they could, hoping to score enough points to get into discussions, ..."

 

This is so right. Happy to see more OASIS-style procurements.


Guest Vern Edwards
17 hours ago, Don Mansfield said:

All the support you need is in our Annual Report. You can save yourself some time by just searching for the term "acquisition outcome". After reading it, you may think that 33.85% is too conservative.

I looked at the Report, and I get it. Thanks.

