The Wifcon Forums and Blogs - 27 Years Online


According to the Harvard Business Review, AI produces "Workslop"

Featured Replies

From the Harvard Business Review Insider (which, unfortunately, is for subscribers only):

"Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers. On social media, which is increasingly clogged with low-quality AI-generated posts, this content is often referred to as “AI slop.” In the context of work, we refer to this phenomenon as “workslop.” We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task."

***

"As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver."

***

"When asked about their experience with workslop, one individual contributor in finance described the impact of receiving work that was AI-generated: 'It created a situation where I had to decide whether I would rewrite it myself, make him rewrite it, or just call it good enough. It is furthering the agenda of creating a mentally lazy, slow-thinking society that will become wholly dependant [sic] upon outside forces.'”

1 hour ago, Vern Edwards said:

slop

Thanks Vern. I struggled to find a place to offer this up when reading past WIFCON posts, but now I can. Back around December 15, 2025, this was announced: "Merriam-Webster names 'slop' as its 2025 word of the year." The "AI Overview" -

"Merriam-Webster's Word of the Year for 2025 is "slop," chosen for its surge in usage to describe low-quality, often AI-generated digital content like fake news, absurd videos, and propaganda flooding social media feeds. The word reflects a cultural moment where people are increasingly encountering vast amounts of mediocre, manipulative digital material, leading to significant interest and searches on Merriam-Webster's dictionary. "

On 12/29/2025 at 11:16 AM, Vern Edwards said:

"When asked about their experience with workslop, one individual contributor in finance described the impact of receiving work that was AI-generated: 'It created a situation where I had to decide whether I would rewrite it myself, make him rewrite it, or just call it good enough. It is furthering the agenda of creating a mentally lazy, slow-thinking society that will become wholly dependant [sic] upon outside forces.'”

What's really scary is knowing that the individual quoted here will only live a finite life and then his replacement will have grown up consuming and sharing workslop.

I don't like leaving such depressing one-liners here. What can be done now by the current workforce to extend the life of corporate knowledge before we hand tasks requiring that prerequisite knowledge over to machines? I'm asking how to leave a legacy of thinking to people who will not need to think.

  • Author
2 hours ago, Voyager said:

What can be done now by the current workforce to extend the life of corporate knowledge before we hand tasks requiring that prerequisite knowledge over to machines?

Organizational knowledge is the cumulative knowledge of the individual members of the organization. Managers must impress upon their members that they are individually responsible for learning their jobs and striving for expertise. They should evaluate their members on their knowledge and performance. The members cannot wait for their employers to somehow provide it. Read (100 professional pages a week, minimum). Observe. Think. Learn. Adapt. Think again.

Congress should enact fewer laws, and agencies should write fewer regulations. Appoint fewer COs and select candidates on the basis of a rigorous assessment of their ethics, knowledge, fidelity, judgment, and output quality. No exceptions. No handouts.

Make every CO appointment excepted service.

Assign each CO to manage a team of contract specialists.

Have high expectations.

Rigorously audit and reconsider each CO appointment annually.

Consider COs to be the acquisition equivalent of elite military special operators: people you know you can count on to pursue America's best interests honestly and fairly.

Is our federal government capable of establishing and maintaining a program like that???

(Should I send this to Pete Hegseth?)

2 hours ago, Voyager said:

I don't like leaving such depressing one-liners here. What can be done now by the current workforce to extend the life of corporate knowledge before we hand tasks requiring that prerequisite knowledge over to machines? I'm asking how to leave a legacy of thinking to people who will not need to think.

I understand the existential dread over AI, but I think one of the fundamental purposes of technology is to replace or augment human abilities. Each generation looks back and laments the experiences that subsequent generations are missing out on. I don't know your age, but surely you can see how this happened with the advent of the internet age in the 90s and with the smartphone in 2007. Can those of us who've basically grown up with these inventions "think"? Depends on who you ask. The point is, whether these advances are fundamentally helpful or harmful, I see AI as more of the same trend.

I'll leave you with a hopeful nugget I heard on a podcast yesterday: Apparently, when the ATM was invented, it was common wisdom that the bank teller profession would eventually be wiped out. Yet today, 60 years later, there are more bank tellers than ever before. Their scope of responsibilities just became much more sophisticated.

  • Author
8 minutes ago, FrankJon said:

Yet today, 60 years later, there are more bank tellers than ever before.

If you are basing that on Eric Schmidt's claims, you might want to investigate.

The reason ATMs led to more bank teller jobs is that ATMs allowed banks to open more branches, since each branch could be run with fewer tellers, which also meant banks could hire more tellers overall.

But now the number of branches is on the decline, “because of industry consolidation and technological change,” according to the Bureau of Labor Statistics. The federal agency predicts the number of bank teller jobs will decline to 480,500 by 2024, down from 520,500 in 2014.

https://www.vox.com/2017/5/8/15584268/eric-schmidt-alphabet-automation-atm-bank-teller

And ATMs are not AI as it is being developed today. Watch for robot tellers.

8 minutes ago, Vern Edwards said:

If you are basing that on Eric Schmidt's claims, you might want to investigate.

https://www.vox.com/2017/5/8/15584268/eric-schmidt-alphabet-automation-atm-bank-teller

Yep. And based on a Google search, even your Vox article (written in 2017) significantly underestimated the amount of decline through today. Apparently it's primarily due to online banking. Makes sense.

Oh well, @Voyager . I tried!

42 minutes ago, Vern Edwards said:

Appoint fewer COs and select candidates on the basis of rigorous assessment of their ethics, knowledge, fidelity, judgment, and output quality. No exceptions. No handouts.

So then, select managerial candidates based on their ability to discern B.S. in employees. By "B.S.", I mean:

On 12/29/2025 at 11:16 AM, Vern Edwards said:

content that is actually unhelpful, incomplete, or missing crucial context about the project at hand...[shifting] the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work.

Or else the manager couldn't do the job of selecting CO candidates.

  • Author
43 minutes ago, Voyager said:

So then, select managerial candidates based on their ability to discern B.S. in employees.

I would let managers nominate candidates to be CO, but require that selection be made by a panel of senior executives (SES).

I would evaluate managers in part based on the quality of their nominees.

Why that approach? Because I want the CO position to be one of high prestige, authority, discretion, and responsibility.

Oh, and I would redesign the Certificate of Appointment.

I think a great deal of AI output can be pretty solid, even during these early stages of this technological "revolution." Keep in mind that the results rely on quality input from the user or requestor. The garbage in/garbage out mantra still applies here in every sense. If the requestor isn't specific and fairly detailed, then the results will be lacking, to say the least. Lazy AI users will receive lazy results a great deal of the time. What is waiting for us down the line is a set of systems that keeps relearning and retooling, which takes slop and turns it into gold if allowed to do so. At the moment, a good system needs "good" users.

Various AI tools are used at my agency, at least for daily admin tasks. Frankly, management has encouraged the use of our AI platform. I have mentioned here before that I sat through various demos of procurement-related AI tools and was pretty much blown away. I think we are going to see a huge shift to AI in the next 2 to 3 years in the federal procurement arena. We are going to see contract "writing" systems process most of the work that 1102s currently perform. Yes, there will be slop, but that can only last for so long. These tools tend to correct themselves and relearn much faster than humans can and do.

  • Author
1 hour ago, Motorcity said:

We are going to see contract "writing" systems process most of the work that 1102s currently perform.

@Motorcity What do you mean by contract "writing"?

Most contract text is in governmentwide or agency-specific standard clauses in regulations. Those have been selected via automation for years, with varying results. AI could improve the speed and accuracy of those selections with the right input.

Then there are fill-in-the-blanks on standard forms. Ho-hum.

I guess you could say that developing contract line items is "writing." That involves some creativity. Have you seen a demonstration of AI doing that?

There is also contract-specific administrative/instructional text, although much of that is boilerplate or cut-and-paste.

Otherwise, the only real contract "writing" is the writing of acquisition-unique statements of work, performance work statements, and hardware and software specifications. But those are typically written by requirements personnel. That's a classic matter of technical/legal drafting. Not many people like writing and even fewer are good at it, so it's likely that requirements personnel will try AI, with results of varying quality and acceptability.

If you're talking about solicitations (RFPs, IFBs, RFQs), there are instructions and descriptions of evaluation factors. I suspect that those are mostly cut-and-pasties, but AI might do that.

What do you say? If contracting personnel are not competent at writing, would they be competent at reviewing and editing?

3 hours ago, Motorcity said:

The garbage in/garbage out mantra still applies here in every sense.

My concern is what the garbage in is based on. For example, my mind quickly races to the RFO and, to date, the varying rates at which agencies have adopted it by deviation, along with the final FAR replacement in total. I struggle with how a requester can be specific and fully detailed in a request when considering the FAR, the RFO, the FAR supplements, and the agency policy that would need to be included in a solicitation/contract. Humans are not perfect at doing it, so I struggle to believe that AI, a tool created by humans, can do it any better.

I struggle as well with the hidden impact. My life's view has been forever transformed by the multitude of windmills that line the ridges surrounding my home and the behemoth data centers now my neighbors that gulp water from the watershed that surrounds my home.

Connected? In my view, yes, because it is all part of the cost/benefit analysis, very simply stated. In the context of this discussion: should the investment to make federal acquisition better, or even best, be more AI or an elite special contracting officer corps?

4 hours ago, Vern Edwards said:

What do you say? If contracting personnel are not competent at writing, would they be competent at reviewing and editing?

I think that contracting personnel are going to get very comfortable with whatever tool that can expedite processes. The processes can be almost anything, from clause selection to drafting various documents and even solicitations. That being said, writing, reviewing, and editing are all one package, are they not?

  • Author
Just now, Motorcity said:

That being said, writing, reviewing, and editing are all one package, are they not?

I write for legal publications. I have written books and for periodicals. Several hundred publications. And writing, reviewing, and editing are not all "one package," if by that you mean the author has the final say on all three.

  • Author
4 hours ago, Vern Edwards said:

If contracting personnel are not competent at writing, would they be competent at reviewing and editing?

My question for you was: "If contracting personnel are not competent at writing, would they be competent at reviewing and editing?"

Or should they just publish whatever AI gives them?

  • Author
Just now, Motorcity said:

I think that contracting personnel are going to get very comfortable with whatever tool that can expedite processes.

@Motorcity I hope that's not true. Because if it is, it doesn't say good things about "contracting personnel".

Expediting contractor selection and contract formation processes can lead to this:

https://www.gao.gov/products/b-423785

When a protester and agency disagree over the meaning of solicitation language, we will resolve the matter by reading the solicitation as a whole and in a manner that gives effect to all of its provisions. HumanTouch, LLC, B-419880 et al., Aug. 16, 2021, 2021 CPD ¶ 283 at 6. An interpretation is not reasonable if it fails to give meaning to all of a solicitation’s provisions, renders any part of the solicitation absurd or surplus, or creates conflicts. CACI, Inc.--Fed., supra at 9. Here, the Navy’s interpretation of the solicitation is not reasonable as it fails to give meaning to the solicitation’s provision expressly permitting offerors to propose TBD personnel.

Protest sustained. December 18, 2025. Merry Christmas! 🎄

What happened? The agency didn't understand the legal meaning of a key sentence in its own solicitation.

Look, Motorcity--AI is coming, whether we want it or not. Don Mansfield convinced me of that years ago. Used by the right people in the right way it may provide benefits. Probably will. But that remains to be seen, and we won't know for years, either the good or the bad.

I hope the incompetent know they are incompetent, but don't expect AI to be the cure for what ails them. I hope they work to fix themselves. They can do it if they try.

At its current stage of development, AI is most useful for assisting with the preparation of items like SOWs. It doesn't produce a complete document by any means, but it provides examples, instructions, and the means to finish through iterative prompts.

I just played around with a SOW for help desk services. ChatGPT provided a simple template, which is good for conveying what's needed. Gemini, Google's model, is more informative and referred to examples from HHS and others. Claude gave the closest to a finished product.

If I were a CO and a program office official looked for help in drafting a SOW, AI could help. Of course, I would need to work closely as the document went through to final. The nice aspect is that it avoids copying from other documents that may not be relevant.

  • Author
20 minutes ago, formerfed said:

I would need to work closely as the document went through to final.

@formerfed I don't understand. What does "work closely" mean? With whom or what would you work closely? Work to accomplish what?

@Vern Edwards I would work closely with the program office official who needed help in preparing the SOW. I always liked collaborating with program officials in preparing the SOW and other documents.

I have done some experiments with ChatGPT on SOWs and solicitations. It does okay with SOWs--they still need a fair bit of tailoring, but I'd say it can produce a decent starting point. It cannot produce a reasonable solicitation yet, but there may be some series of prompts you could use to get a better output for the solicitation (I haven't explored it enough). So I think it will eventually be useful for those documents, but my rule of thumb is that the more I relied on the AI tool, the more editing and oversight the document needed. It may not result in a reduction in effort, but rather a shift in what you spend your effort on. I also wonder whether this shift is even worth it...I guess we'll see.

That said, I have heard colleagues talk about using AI for evaluation, and that's where I get concerned. A government of the people needs people making decisions on behalf of the people. An idea I've been working on over the past few weeks is that we use contracting processes (and other government processes) to obscure the discretion given to deciders, mostly (probably? I'm still working through it) because deciders are afraid of criticism or don't want to decide.* To delegate decision making to AI is a further dereliction of duty.

*This works when you need the lowest priced #2 pencil, but treating the evaluation of a complex professional service as if it is a math problem to be solved isn't how qualitative decisions are made.

  • Author
Just now, KeithB18 said:

This works when you need the lowest priced #2 pencil, but treating the evaluation of a complex professional service as if it is a math problem to be solved isn't how qualitative decisions are made.

What do you mean by "qualitative decisions"?

How should such decisions be made?

27 minutes ago, Vern Edwards said:

What do you mean by "qualitative decisions"?

How should such decisions be made?

This answer isn't going to be satisfying, because it is not satisfying to me. It's something I'm still working on.

I've been influenced lately by Michael Polanyi, who wrote a lot about implicit and tacit knowledge. He wrote, "You know more than you can say." His works are kind of difficult, at least for me. I'm working in a basic and applied R&D context right now, and that line resonates with me when you have very experienced, top-of-their-field scientists reviewing proposals. They may, quite rightly imo, rely on their intuition in selecting projects. Intuition doesn't necessarily lend itself to writing strengths and weaknesses.

Qualitative decisions are non-numeric judgments, and a lot of judgments are based on values. Values often operate in the background, implicitly. (There's a study on "Terror Management Theory" showing that judges rule more harshly against people who violate the judges' values when the judges are reminded of their own mortality beforehand: https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.57.4.681. Which is just to point out that values can operate in unseen and strange ways.)

I'm not sure how to translate this into a normative rule. What I'm trying to do is what Oliver Wendell Holmes was doing in "The Path of the Law." Get the dragon out into the daylight, so that "you can count his teeth and claws, and see just what is his strength." I'm trying to understand what's really going on when procurement decisions are made, and adjust accordingly. Like, is there any other decision making context, in the entirety of human experience, that works like Federal procurement decisions do?

  • Author
Just now, KeithB18 said:

Qualitative decisions are non-numeric judgments, and a lot of judgments are based on values.

@KeithB18 I don't know what you mean by that. Please explain.

Are you saying that qualitative value judgments cannot be described with numbers?

Archived

This topic is now archived and is closed to further replies.
