The Wifcon Forums and Blogs
Matthew Fleharty

Is Artificial Intelligence a Solution to Contracting Problems?

Guest Vern Edwards

I think government contracting is a trivial concern when it comes to AI. But see the current issue (June 2018) of The Atlantic. It contains an interesting article about artificial intelligence by Henry Kissinger, "How the Enlightenment Ends," which is a little unnerving and might give one pause.

In the context of this thread, consider this quote and its implications for source selection decision making:


AI may reach intended goals, but be unable to explain the rationale for its conclusions. In certain fields—pattern recognition, big-data analysis, gaming—AI’s capacities already may exceed those of humans. If its computational power continues to compound rapidly, AI may soon be able to optimize situations in ways that are at least marginally different, and probably significantly different, from how humans would optimize them. But at that point, will AI be able to explain, in a way that humans can understand, why its actions are optimal? Or will AI’s decision making surpass the explanatory powers of human language and reason? Through all human history, civilizations have created ways to explain the world around them—in the Middle Ages, religion; in the Enlightenment, reason; in the 19th century, history; in the 20th century, ideology. The most difficult yet important question about the world into which we are headed is this: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?

How is consciousness to be defined in a world of machines that reduce human experience to mathematical data, interpreted by their own memories? Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?



Mark Zuckerberg made hoped-for advances in AI the centerpiece of his deflections while appearing before Congress last month. When pressed on Facebook's own enforcement of community standards, his invocation of AI as a prospective cure seemed to me at best a transparent attempt at political alchemy and at worst an inversion of rational standards of trust. I hope that such superficial blandishments are not a harbinger of things to come. Issues over commonplace AI technology do not strike me as trivial; they call for great care in deciding what we cede, knowingly or not, to AI. We may not get a redo in our lifetimes. The feedback loop is necessarily the most important part of any iterative system. Just what will happen should AI become "self-interested" while already playing a role in many feedback processes? We hold to a chain abounding in weak links. Will we catch them all before any one of them breaks, and who is doing the watching?
