The Slippery Slop
How Treating Creative Works as "Content" Anticipates AI Slop
I have never been a fan of the term “content” as it’s used to discuss media.1 I work in books, and “content” has always struck me as a sausage-making, inside-baseball, jargony term, even as “content creator” has become a desirable profession.
I recognize the reason the term has gained momentum. With the different multimedia opportunities available to “creatives” and “influencers” (two other smarmy terms), “content” becomes an easy catch-all to describe whatever that person produces, the ethereal “message” enfleshed in different media. I get it. But I also think that treating creative efforts as “content” is at least in part what put us on a slippery slope to “slop,” the artificial fillers of our AI age.
One reason for this is the very indeterminacy that makes the term useful in the first place. “Content” can stand in for anything, and it is a small step from indeterminate to indiscriminate. In fact, this is exactly how the algorithms strike me. Those who own the algorithms, display what gets seen, and sell ads to go alongside it don’t care what draws people to the feed, as long as they keep scrolling. It doesn’t matter whether the “content” is good; very little matters as long as it is popular. And to gain popularity, it has to hook instantly and be easily digestible, lacking the kind of nuance that more complex and more patient forms of media require. (This is a big part of why I hate books being talked about in “content” terms. More on this later.)
For content creators, the goal is to produce the kind of media the algorithm rewards. And the impetus is to keep creating, regardless of what they create. The Algorithm rewards consistency more than quality (perhaps due to the parasocial nature of influencing). As long as there is something filling the feed, it matters much less what that something is. And if you aren’t filling it, someone else will.
Treating creative efforts as “content,” then, feels less like creating something meaningful and more like making up the filler of whatever it is you’re trying to stuff.
This kind of makes sense. “Content,” at least the kind feeding The Algorithm, isn’t designed for sustained attention. “Content” is designed for multitasking, where attention is already diluted. “Content” is meant to be a distraction from which we can then be easily distracted by ads, ad nauseam.
In a recent interview, Matt Damon described what it’s like making an action movie for Netflix. While a usual action movie has “three set pieces,” with the biggest one at the end, Netflix requested “a big one in the first five minutes” so people wouldn’t get bored and navigate away. “And it wouldn’t be terrible if you reiterated the plot three or four times in the dialogue because people are on their phones while they’re watching.” People in a movie theater may be a captive audience, but for something releasing straight to a streaming platform, the expectation is that Netflix is one screen, one option, among many.
In his book The Life We’re Looking For, Andy Crouch talks about how technology often reshapes the world to be more hospitable to machines (and to what makes feedback easier for them) than to human flourishing. Facebook’s “Like” button, for example, was not designed with human emotion in mind but with what makes engagement easiest to track (a binary choice). He writes, “Rather than actually creating machines that understand the infinitely creative and complex world of human culture, we will find that it is far easier to create attenuated cultural environments that treat humans like machines.”2
Algorithm-driven media, I think, are among these “attenuated cultural environments.” They do not serve up what is good for human beings (as anyone who has read about the increased polarization of American culture or the mental health of America’s youth can testify); they serve up what is good for investors and shareholders, driven by what is most frictionless for machines. Treating creative work as “content” suits algorithms, which only want attention long enough to sell something. If human beings are creating what attracts eyeballs, so be it. But if content can be generated by machines, for machines, cutting out another sharer in the profits, so much the better. (Spotify would be so much smoother without the pesky artists demanding to be paid…)
In a Substack post titled “Creative Work in an Age of Digital Production,” Nicholas Carr puts it this way: “In automated systems, human beings are placeholders for future machines. Until recently, we assumed that creative types who produce content for media systems were exceptions to that rule. We’re now going to test that assumption.” That’s an accurate description of where we find ourselves today, but the first step on the road to this reality was getting human creators to contort the round peg of creativity into the square hole labeled “content.” Once the formula is in place—the “attenuated cultural environment”—a good pattern-recognizing machine can take it from here.
The Year of Our Lord 2025 was a time when we were told, again and again, in ever-louder voices (with dissenters shouted down if necessary), that AI is inevitable. We have a duty to integrate AI into every aspect of our lives and work, we are told, because, hey, it’s going to happen eventually anyway, and you don’t want to be the last one on the bandwagon, right? You can get so much more done, and thus you have an obligation to do more. But it’s useful to ask: Who benefits from saying this?
There’s a wonderful sketch from A Bit of Fry & Laurie where a man “with a stethoscope and a plausible manner” prescribes the patient “a herbal remedy” for his tightness of chest. The herbal remedy is cigarettes—starting at twenty a day “and ideally rising to thirty or forty.” The patient is startled by the prescription, asking wary questions about lung cancer and overuse, trying to discern whether the prescription is a legitimate treatment. When the patient finally accepts the remedy, saying, “You’re the doctor!” the mask finally drops. The prescriber says, “What on earth gives you that idea?…I’m a tobacconist! Isn’t it obvious?”
I can’t help but get the same feeling about the current press to adopt artificial intelligence tools. Like the tobacconist prescribing cigarettes, the people heavily promoting AI are the same ones directly benefiting from its wide adoption. “Begin using it for menial tasks once or twice a day, ideally rising to everything that requires conscious thought. Oh, and be sure to watch the ads.”
And similar to the tobacconist, these people are not the ones who will be bearing the brunt of the costs of the messes they create. They imagine themselves preserved from any eventual disruptions in labor or any shortages, finding as they have a loophole to most of life’s ills (i.e., money).3 At best, they are like King Hezekiah, who says of the eventual Babylonian incursion that his reckless boasting prompted, “At least there will be peace and security during my lifetime.”4 Never mind that his descendants will be carted off into exile.
The genie of AI is very much out of the bottle. Slop is here, it will soon be everywhere, and it’s not going anywhere. But I think there are still some off-ramps, some ways to make the slope, if not less slippery, at least less of a precipice.
A very simple way to reclaim our lives from AI slop is to stop referring to creative work as “content.” When we treat created goods as interchangeable—as if anything someone makes is as good as any other thing—we shouldn’t be surprised that purveyors of slop are testing that thesis to see if, just as automation cut out the manual laborer, artificial intelligence can cut out the creative worker.
Instead, by referencing things by what they are—this is a book, this is a podcast, this is an essay, this is a short story, this is an infomercial, this is a paid advertisement, this is a lifestyle endorsement—we immediately enter a realm of greater clarity. When we label things as they are, worthless things have a harder time hiding behind what has true value, harder at least than when everything is put under the too-vast umbrella of “content.”
I’ve struggled with “content” in the book world because great books do not always translate into other media, and vice versa. You have probably read books that started as a blog or Instagram post and should have remained there. (I know I have.) Many books that will endure are not suited for Bookstagram endorsements or executive summaries. Marshall McLuhan’s famous statement “the medium is the message” militates against the Gnostic vision of ethereal “content” untethered to specific media. Not every medium is hospitable to every message, nor should it be. And just as John encouraged his church to “test the spirits,” so we should test the media. And not everything will pass the test, which is why it behooves us to avoid using one word to describe all creative output, treating it as all equally worthy.
The other—and more difficult—way to upset the trough is to demand more of the creative works we engage. Slop thrives because we so often settle for the human-created output that resembles it. It thrives because we are looking for a distraction. It thrives in environments of formulaic junk.
I’m not worried that AI will create the next Brothers Karamazov.5 I do worry that the next Brothers Karamazov either won’t be published or will be so honed down to a felt need that what makes it extraordinary will be lost. The worrying thing about the recent Shy Girl dust-up (a book canceled because it shows strong signs of being written by AI) isn’t so much that it made it past industry gatekeepers (although that is worrisome too) but that it released into an environment where it was comparable enough to what was already on the market that it didn’t seem to make much of a difference.
The best defense is a good offense. The best defense against AI slop is making something new, making something human. It is an inexhaustible source of awe that, with the same twenty-six characters of the English alphabet, we can get AI-generated hallucinations, workmanlike emails, and the beautifully drawn characters of George Eliot, the sparse prose of Muriel Spark, and the hilarious sentences of P. G. Wodehouse. Again, it is incredible to think it’s possible (perhaps even likely) to write a sentence no one has written before, given that everyone is cooking with the same ingredients.
I had a conversation a while ago with a group of book editors, when ChatGPT was just coming on the scene. We were lamenting the bestsellers that already felt like they had been written by machines. But our conversation quickly turned to thinking about the best books we had read recently that couldn’t have been written by AI. And in our moment, that’s a useful exercise. What book, movie, song, poem, essay, love note, email, joke bears the stamp of a human intelligence? And how can you celebrate that thing and its maker? How can you contribute to a world more hospitable to that and less hospitable for machines?
Reading
On the topic of books that could not have been written by AI, I read The Body of This Death by Ross McCullough this month. The rave review in Christianity Today hailed it as a “Screwtape for our times” that channels Chesterton, Pascal, Kierkegaard, Dostoevsky, and Lewis. (How was I supposed to turn that down?) The book is an epistolary novel, kind of, except that the letters are arranged in the framing device of a French scholar from the “Year 20 of the New Common Era” who has unearthed and reset the letters for the reader, with some explanatory notes and commentary. The letters are from sometime in the future—close enough to be intelligible but also several steps removed from today. They are the correspondence of “the last archbishop of Lancaster,” and they explore…well, a sweeping range of issues. The archbishop keeps up correspondence with one of his priests, a nun, a Muslim woman, a skeptic, and an old friend. I found myself fascinated by this book in a way similar to Walter Miller’s A Canticle for Leibowitz. Both books take place in the distant future, after some precipitating catastrophe, and both are about the lived witness of the church in this new era. The letters in The Body of This Death address topics of new technologies (“immersive reality”), new governmental orders, and how the church survives in its world. It is also a dense and, in many ways, inscrutable book. I used my Kindle’s dictionary feature liberally, and even so, there were times the word I was looking for was too specialized to look up. (I learned there’s a glossary at the end—the physical book would have been helpful here!) McCullough makes frequent reference to Borges, whose writings often call to mind labyrinths, and as I read, I couldn’t help but think that I was caught in one myself. This book exceeded, in many ways, my capacity to understand. But I was also compelled by it, and it’s the kind of book I want to understand.
In one of the last letters, the archbishop asks, “Are there only labyrinths that we explore, or are there also labyrinths that explore us?” This book felt like the latter.
Playing
I love playing board games that take big risks—that swing for the fences with fresh ideas. (That is, they couldn’t have been generated by AI.) And the freshest game I played this month is called Here Lies. Here Lies is a cooperative mystery-solving game. Each game, one player is the lead investigator, who has access to the casebook and knows all the particulars of the dead body in front of the players (represented by a skeleton on a cloth mat—it’s not gruesome) and the mystery of how they died. The other players are investigators who are trying to discover six pieces of information about the case: the victim’s secret, the cause of death, the murder weapon, the murderer, their motive, and so on. The lead investigator is only allowed to communicate when investigators ask questions. The questions, however, must be asked in the form of the cards in the players’ hands, things like “Two materials the victim was touching when they died” or “Circle up to three words on the back of this card that pertain to the case” or “Name up to two beings that might have witnessed the crime.” I’ve played other murder-mystery games that are serious and involved (perhaps the best of these is Sherlock Holmes: Consulting Detective), but what I love about Here Lies is how interactive it is and how it manages to keep an almost party-like atmosphere. The lead investigator gives clues in the form of words, drawings, anagrams, and so on, so even though they are essentially a game master, they are also creatively involved in making the story that the other players are interpreting. It’s tense to hear investigators going down the wrong track; it’s exhilarating to hear when they’ve cracked the case wide open. This is a truly delightful, very human game.
Clicking
“America says goodbye to the mass-market paperback.”
Alan Jacobs on rethinking the humanities classroom in light of AI. A small excerpt: “By depriving students of constant AI use—or, to put it more accurately, by allowing them some respite from the tyranny of the chatbots over their lives—we actually enable them to exercise their minds in unfamiliar, and for some unprecedented, ways.”
After I had already drafted a lot of this post, I stumbled on this post by Nicholas Carr saying…a pretty similar thing. An excerpt: “AI-generated slop marks the triumph of machine formalism. The machine establishes the pattern, and the machine fills the pattern with its own creation.”
Matthew Milliner on AI as “the perfect mirror.”
David Oks on why ATMs didn’t replace bank tellers but the smartphone did.
I’ve been enjoying Kate Bowler’s reflections on Lent on her Substack, and through that, I came upon her excellent conversation with Malcolm Guite on poetry and prayer. This led me to Guite’s moving poem “The Christian Plummet.”
I’m on the side of the angels on this one, or at least on the side of Emma Thompson, which is the same thing, right?
Andy Crouch, The Life We’re Looking For: Reclaiming Relationship in a Technological World (Convergent, 2022), 97.
It has been a while since I’ve read it, but Kurt Vonnegut’s Player Piano seems especially prescient for today, and the would-be hawkers of AI might want to take note.
I chose The Brothers Karamazov because it is an astonishing book that I’m surprised was ever written at all, wholly original. You can read this for more on why it continues to confound and delight. My friend is reading it now and keeps texting me about how strange and yet compelling it is, which…yep.

