Alice Wonderly has read the article given to her by Mr. Ewing. She shared it with other trainees in her program and they’ve discussed it, but they have questions. So Alice telephones Mr. Ewing to set up a meeting.
Alice: Hi Mr. Ewing. I read the article you gave me and I wonder if you’d have some time to talk with me about it. I wrote down some questions. And, well, I shared the article with some other trainees in my program office, and they’d like to know if they can come, too. They have questions, too. And I shared my notes of our last conversation with them, the one about evaluation factors.
Ewing: Okay. Good. That’s fine. Patrick told me you’d be calling. How about tomorrow at 11 a.m.? We can work through lunch, so bring brown-bag lunches. Patrick will be here, too, with some other trainees in our program.
A: That sounds great. I’ll see you then.
E: Okay. And bring your copies of the article and your reading notes.
A: Will do.
Wednesday, 11 a.m., Mr. Ewing’s conference room.
A: Hi Mr. Ewing.
E: Alice. Come in.
A small troop of seven trainees enters the room and takes seats at the conference table.
E: Introduce your colleagues.
A: Mr. Ewing, this is Denise Clare, a second-year intern. This is Jack Dixon, a first-year intern, like me. And this is Jane Mera, also a first-year intern.
Patrick: And Alice, I’ve brought Carol Spicer and Sotero Dominquez, both first-year interns.
E: Hi all. And you’ve all read the article?
E: And did you make reading notes?
E: And do you all have questions?
A: We consolidated our questions and made sure they were clear as to subject, query, and presuppositions.
E: Where did you learn about those elements of questions?
A: From the reading materials you gave to Patrick.
E: Good. Very good. How was the article? Did you find it difficult?
Denise: A little. Lots of academic language and a technical style. But we got some things sorted out through a little internet research and discussion. For instance, we found some illustrations and explanations of “value trees” that helped a lot. And we found some stuff about “swing weighting.”
E: Very good. Before you ask your questions, let me give you some background that might make my answers easier to understand. Sound okay? You might want to take some notes.
All nod and take out notebooks and pens.
E: Economists have long tried to understand how people make important choices. After World War II, economists, psychologists, systems analysts, and engineers began intensive study and theorizing about how people do and should make important decisions. They were especially interested in how people make important and complex policy and business decisions when there is uncertainty about the consequences of any particular decision. They wanted to figure out the best way to make such decisions.
By the 1950s, researchers at Harvard, University of Michigan, Stanford, Rand Corporation, Johns Hopkins and other institutions were very active in the study of decision making. They read, theorized, conducted experiments, and proposed procedures, and they wrote technical papers and books about their findings. Much of their research was funded by the Department of Defense, which was very interested in systematic, rational decision making.
By the mid-1960s this field of study had become known as “decision analysis,” and its prescriptions were being applied in all kinds of government and business endeavors, including acquisition. Another name for the field is Multiple Attribute Utility Theory, or MAUT.
The key problem in decision analysis was to determine the best way to go about making a decision when the decision maker has multiple and conflicting objectives and must consider multiple evaluation factors and make tradeoffs. How do you do it consistently and rationally rather than by instinct, which can be hard to explain and defend?
As you know, the article you read was by Dr. Ward Edwards, a psychologist and a pioneer and leading figure in the science of decision analysis. He died in 2005.
He had taught at Johns Hopkins University, the University of Michigan, and the University of Southern California. He did research in decision making for the Advanced Research Projects Agency, the Air Force, the Navy, and Rand, and wrote a very large number of scholarly papers, around 100, and a couple of important books. He was the co-author of the 1986 magnum opus on decision analysis, Decision Analysis and Behavioral Research. The research in that book was sponsored in part by the Navy’s Office of Naval Research.
Shows them a thick, well-worn volume.
Dr. Edwards wrote a good background paper and summary of the early research in 1954, “The Theory of Decision Making.” That paper was funded by the Office of Naval Research and Johns Hopkins University. He wrote an update in 1961, entitled, “Behavioral Decision Theory,” which was funded by the Air Force. In 2001 he wrote “Decision Technology,” about the impact of computers and the web on decision making. All of those papers are available on the internet.
In the 1950s and 60s we were in the middle of the Cold War with the Soviet Union. We faced many technical and economic challenges. What kinds of systems did we need to counter the threat? What kinds of technologies should we develop and use? What design concepts should we choose? What economic choices should we make?
In 1961, Charles Hitch, the Assistant Secretary of Defense (Comptroller), and Roland McKean, a Rand research economist, discussed decision making challenges in a famous book, The Economics of Defense in the Nuclear Age. They discussed the problems in detail in Part II. Chapter 9, “The Criteria Problem,” is especially interesting. Actually, if you read that book you’ll see that some of the authors’ points still apply today.
Shows them another thick, well-worn volume.
Among DoD’s challenges was figuring out how to evaluate proposals for the complex new weapon and space systems we needed and how to select a concept and a contractor. DoD wanted to know how to choose the right evaluation factors, how to rank the factors in importance, how to evaluate proposals and document evaluation findings, and how to make and justify selection decisions. And when you track the development of the source selection process you can detect the influence of decision theorists and systems engineers.
You can see the influence of decision analysis in source selection in the description of the Air Force source selection for the Tactical Fighter Experimental, or TFX, conducted in 1961 and 1962. The TFX became the F-111 fighter-bomber. After evaluating proposals, the Air Force chose Boeing, but Secretary of Defense McNamara rejected the Air Force’s decision and gave the contract to General Dynamics, instead.
That was extremely controversial and distressed certain members of the U.S. Senate, which held lengthy hearings in 1963 about the Secretary’s action. In testimony before the Senate, Colonel Charles Gayle, who chaired the evaluation panel, described the proposal evaluation process in detail.
Shows them a volume of the Senate hearing testimony.
Over the course of several days Colonel Gayle provided what is probably the most interesting insider description of a major source selection that’s ever been given. What he described was an instance of the application of decision analysis methods, though not the specific method Dr. Edwards described in the paper you read.
He described the evaluation panel (about 200 people). He described aspects of the operational requirement and the work statement. He described the evaluation criteria, and in that description you would recognize a “value tree.” He described some of the evaluation findings – proposed costs, most realistic costs, and nonprice strengths, deficiencies, and weaknesses. He explained raw scores, importance weights, and weighted scores, and he explained the panel’s recommendation.
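The arithmetic behind raw scores, importance weights, and weighted scores is simple. Here is a minimal sketch in the style of a 0-to-10 numerical scoring system; the factors, weights, scores, and offeror names are all invented for illustration and are not from Colonel Gayle’s testimony.

```python
# Hypothetical illustration of weighted scoring. All factors,
# weights, and raw scores below are made up for this example.

factors = ["technical approach", "management", "cost realism"]
weights = [0.5, 0.3, 0.2]  # importance weights, summing to 1.0

# Raw scores on a 0-to-10 scale for two hypothetical offerors.
raw_scores = {
    "Offeror A": [8, 6, 7],
    "Offeror B": [7, 9, 6],
}

# Weighted score = sum over factors of (weight x raw score).
for offeror, scores in raw_scores.items():
    weighted = sum(w * s for w, s in zip(weights, scores))
    print(offeror, round(weighted, 2))
```

Note that the ranking can turn on the weights: here Offeror B edges out Offeror A despite a lower technical score, because of B’s strength in the second and third factors. That sensitivity to weights is exactly why the choice and justification of importance weights mattered so much in source selections like the TFX.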
The procedure used by the Air Force in 1961-62 was very similar to the procedure it used in 2008 to select the air tanker contractor. One difference was that in the TFX source selection the Air Force used numerical scoring, while in the air tanker source selection it used the current DoD color rating system.
Here’s an old official document, “Guide for Proposal Evaluation and Source Selection,” dated July 1966, 52 pages long, and published by the Systems Development Division of the U.S. Army Aviation Materiel Laboratories. It includes a sample evaluation plan. It’s right out of the TFX source selection playbook, complete with the same 0 to 10 point numerical scoring system. It’s an early decision analysis approach, but it never explicitly mentions decision analysis, and neither did Colonel Gayle in his Senate testimony.
Shows them the guide.
My pitch is this: In order to truly understand the source selection proposal evaluation process, explain the process to others, and lead a team in planning and executing a major source selection, you need to understand something about decision analysis concepts and processes.
In 1991, two professors at the Air Force Academy wrote a paper entitled, “The Application of Decision Analysis Methods to Source Selection in the United States Air Force.” They cited Edwards’s 1986 book Decision Analysis and Behavioral Research and his 1977 paper describing the SMART method, which was cited in the paper that you read. The authors claimed that source selection could be improved by greater application of decision analysis, specifically, the Analytic Hierarchy Process, which is a decision analysis method. By the way, they were critical of the color rating method.
In 2006, an Air Force officer submitted a master’s thesis to the Graduate School of Engineering and Management at the Air Force Institute of Technology entitled, “A Decision Analysis Tool For The Source Selection Process.”
There are at least one hundred papers catalogued by the Defense Technical Information Center, DTIC, that discuss the application of decision analysis in source selection, dating from the 1970s to the 2000s.
Decision analysis methods are useful not only for source selection, but for all kinds of nontrivial acquisition decision making, and COs are decision makers or, at least, they’re supposed to be. Here are some papers that provide examples:
Decision analysis can be used in business case development. Here is a handbook, Better Business Cases: Guide to Developing the Detailed Business Case, 28 February 2014, written by the Treasury in the United Kingdom.
You can even use decision analysis in negotiation and in claim decision making. Here's an article from a 2004 issue of Marquette Law Review, entitled, "Decision Analysis in Negotiation." Here's a chapter from The Handbook of Dispute Resolution (2005), entitled, "Finding Settlement with Numbers, Maps, and Trees." And here's a 1996 article from the Harvard Negotiation Law Review entitled, "Decision Analysis As A Mediator's Tool."
The paper that I gave you to read, which was published in 1994, is a relatively recent contribution to decision analysis. Contributions are still coming in.
There are many academic journals today that are devoted to decision making, such as Journal of Multi-Criteria Decision Analysis, International Journal of Management and Decision Making, Journal of Behavioral Decision Making, Judgment and Decision Making, Decision, and Decision Analysis. The current issue of Decision Analysis has an article entitled, “Search Before Tradeoffs Are Known.” I haven’t read it yet, but the abstract says [reaching and thumbing through a stack of papers, choosing one, and reading aloud]:
That sounds interesting, since it mentions vendor selection.
Now, look -- all the stuff in those journals is academic, technical, and hard for non-specialists to understand. The journals are pricey. I don’t subscribe to them, and I don’t think you need to read them. But you can find nontechnical articles about decision making in publications like the Harvard Business Review and the MIT Sloan Management Review.
The May 2015 issue of HBR has four articles about decision making, and the HBR website has a recent digital article entitled, “How You Make Decisions Is as Important as What You Decide.”
The MIT Sloan Management Review website has posted many articles about decision making. A Spring 2015 article is entitled, “When Consensus Hurts the Company.” Those are the kinds of things you might want to read.
An especially good article for contracting folk is “Even Swaps: A Rational Method for Making Trade-offs,” which was in the March-April 1998 issue of HBR. It was written by three prominent decision analysts.
So, why did I give you Ward Edwards’s technical article? Because you have a career choice to make. Contracting is a combination of professional, administrative, and clerical work. The most interesting part of it is the professional work. The administrative and clerical work is necessary and unavoidable, but boring. If that’s all there was to contracting, I would have changed careers long ago.
One of the jobs of a contracting pro is to explain professional issues and methods to laypersons like engineers and other requirements personnel. Source selection is a contracting task, which is why the FAR makes the contracting officer the default source selection authority. That means that one of the jobs of the contracting officer is to be able to explain the proposal evaluation process to those who must do it. In order to be able to do that, the contracting officer must have what I call “deep conceptual understanding.” And that kind of understanding comes only after a struggle. You have to read, think, read some more, and think some more.
Official guidance, such as the DoD source selection procedures, explains procedures, but not the reasons for them or the concepts that underlie them. It’s likely that the people who write such manuals don’t know about the reasons and concepts. You have to read other things, things like Dr. Edwards’s paper, in order to see and understand the deep stuff. And, as you now know, reading that kind of thing is real work.
Now, look -- you can do good work in contracting without knowing the deep stuff. You can study the FAR and GAO decisions and learn the rules, you can cut and paste, and you can get promoted. But you’ll never be a leader and innovator. You’ll never be the person that they won’t start the meeting without, unless all they want to hear about is rules. And you won’t be able to teach the next generation, except by talking to them about rules and giving them something to cut and paste.
Anyone can learn to read the FAR, read GAO decisions, and cut and paste a source selection plan. But you can’t learn underlying concepts that way, or how to apply those concepts in different types of acquisitions. What you learn by cutting and pasting won’t help you brief and explain proposal evaluation and tradeoff analysis to a trainee or an inexperienced source selection team. Cutting and pasting won’t teach you how to be an expert professional advisor to a program manager and how to be an acquisition leader and innovator. Cutters and pasters can only be advisors and leaders in the art of cutting and pasting. They can only be expert followers.
By reading the article I gave you and coming here today, you proved to me that you’re willing to do the hard professional work in pursuit of deep understanding.
So far, at least.
Okay. End of the introductory briefing and sermon. Take out your copy of the article and your reading notes.
Having read the article, you know that Dr. Edwards’ SMARTS and SMARTER methods of decision analysis entail: identifying the purpose of the decision and the alternatives; structuring the objectives into a value tree of evaluation attributes; scoring each alternative on each attribute; ranking the attributes in order of importance and assigning swing weights (or, in SMARTER, weights computed from the attribute ranks alone); and computing an overall weighted score for each alternative.
There are several variations on this method, such as the Analytic Hierarchy Process. But they are generally based on the same kind of “additive weighting” approach.
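To make the additive weighting idea concrete, here is a minimal sketch. The rank-order centroid weights shown are the device SMARTER uses to derive weights from attribute ranks alone; the attributes, proposals, and scores are invented for illustration and are not from Dr. Edwards’s paper.

```python
# Minimal sketch of additive weighting, the approach behind SMARTS/SMARTER.
# All attributes, proposals, and scores below are made up for illustration.

def roc_weights(k):
    """Rank-order centroid weights for k attributes ranked 1 (most
    important) through k. The weights are positive and sum to 1."""
    return [sum(1.0 / j for j in range(i, k + 1)) / k for i in range(1, k + 1)]

def additive_value(scores, weights):
    """Overall value of an alternative: sum of weight x attribute score."""
    return sum(w * s for w, s in zip(weights, scores))

# Three attributes, ranked in importance: performance, schedule, cost realism.
weights = roc_weights(3)  # approximately [0.611, 0.278, 0.111]

# Normalized 0-to-1 single-attribute scores for two hypothetical proposals.
proposal_a = [0.9, 0.6, 0.5]
proposal_b = [0.7, 0.8, 0.9]

print(round(additive_value(proposal_a, weights), 3))
print(round(additive_value(proposal_b, weights), 3))
```

Notice that Proposal A wins here even though it scores lower on two of the three attributes, because the top-ranked attribute carries most of the weight. That is the kind of tradeoff result an additive weighting model makes explicit and defensible.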
Now, what are your questions?
To be continued…