One of my hidden talents is knowing how to phrase tricky work emails. I rarely have use for it anymore, but when friends ask me how to put something to their boss, I’ve been known to dictate their reply to the letter. I used to think of this skill as a rare bit of overlap between my forgotten HR career and my current work as a writer. It wasn’t until I saw a recent TikTok of a woman flaunting this same ability—to “professionalize” any given sentiment—that I understood it as something sinister.
In the clip, which appears to be a segment from a talk show called “The Social,” the hosts toss out blunt phrases like “I’m not staying late to deal with that” or “stay in your lane” for a woman named Laura to rephrase into passable work jargon. Laura, slick low bun and silky button-down, translates quickly to claps and cheers. The last one is my favorite: “Okay, how do you professionally say, ‘That sounds like a “you” problem’?” the interviewer asks, grinning at the studio audience. Laura’s reply is serene: “I understand that falls within your scope of responsibility, but I’m happy to support where it makes sense.” The crowd goes wild.
Watching Laura work like a live Google Translate for corporate America reminded me of OpenAI’s ChatGPT, and all the other AI chatbots that have set the internet on fire in recent months. A lot has been written about the extent to which robots now sound like people, and what that might mean, but less about the extent to which people now sound like robots. There is nothing particularly soulful or human about knowing how to say “stay in your lane” without risking your work reputation (for the curious: “Thanks for your input. I’ll keep that in mind”). In fact, what’s required is basically just data: a generalized knowledge of corporate norms and the depressing list of preferred vocab that comes with it. A jargon chatbot could easily imitate the skill, and probably already exists in some form. Does this make AI eerily human, or does it just reveal that we, ourselves, are capable of being eerily inhuman?
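To give a sense of how little is required, here is a minimal sketch of such a bot, assuming OpenAI’s official Python client and an API key in your environment; the prompt wording and model choice are my own, purely illustrative:

```python
# A minimal sketch of a "jargon chatbot." Assumes the official OpenAI
# Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
# The system prompt below is illustrative, not any real product's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def professionalize(blunt_phrase: str) -> str:
    """Translate a blunt sentiment into passable work jargon."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's blunt workplace sentiment as "
                    "polite, diplomatic corporate speak. Preserve the "
                    "meaning; soften only the delivery."
                ),
            },
            {"role": "user", "content": blunt_phrase},
        ],
    )
    return response.choices[0].message.content

print(professionalize("That sounds like a 'you' problem."))
```

All of Laura’s expertise, compressed into a system prompt.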
As we know, and despite their apparent cleverness, large language models (LLMs) like ChatGPT are simply remixing things people have already said, predicting the most plausible next word with no deeper understanding. What they offer is, by definition, surface-level and formulaic. So what they excel at has a way of highlighting what in our modern world best engenders those qualities: five-paragraph grade-school essays, corporate apology emails, cover letters, content-farmed articles, book summaries, biographies, meal or trip plans, etc. The pearl-clutching over students, prospective employees, and media outlets now using bots to write their essays, cover letters, or articles seems, to me, to miss an insight. If these formats are so easy to bullshit that a robot can ace them, maybe they were never the vehicles for expression we pretended they were. Maybe we need to think of more creative, humanistic ways to teach, hire, and communicate.
In “You Are Not a Parrot,” a great Intelligencer piece about the human-AI divide, Elizabeth Weil compares the perspectives of computational linguist Emily M. Bender—who takes pains to distinguish between humans and LLMs—and OpenAI CEO Sam Altman, who believes in “singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse.” Four days after launching ChatGPT, Altman tweeted: “i am a stochastic parrot, and so r u.” As Weil explains, the term “stochastic parrot” was actually coined by Bender and her co-authors the year prior, in a paper about the dangers of LLMs, specifically the way they parrot humans and their biases without any deeper sense of meaning or morality. Reading the piece, I found Bender’s perspective on AI resonant. But on a certain level, I agree with Altman: we have, in some ways, become stochastic parrots. Where we probably disagree is on whether that’s alarming.
The mechanical creep of human communication has been an enduring topic of fascination for the chronically online. I will never forget that form text about “being at emotional capacity” that we were told to send to our friends in need, now officially a meme about the depravity of trying to codify human communication. And yet it’s happening despite our best efforts: When I first saw TikToker @corporayshid’s videos, in which he performs unsettlingly predictable conversations—like what people say when leaving a job, or tasting wine at a restaurant, or talking to someone at a wedding reception—I followed him right away, and I’ve regretted it ever since. Watching his videos gives me a terrible feeling, as if globalization has us all barreling towards a single unified personality. There’s comfort in the fact that he’s making fun of a particular group, primarily the American professional-managerial class, rather than everyone. Less comforting is the fact that ChatGPT is modeled overwhelmingly on those types.
It’s hard to ignore the ways technology has enabled a kind of slow-burning assimilation. My friend Mallory was recently telling me how grossed out she was when Gmail first rolled out predictive text, only later realizing that what was actually gross was how formulaic her email replies already were. “Bumping this, thanks!” echoing off the walls of our digital confinement. Is Gmail passing the Turing test, in this case, or are we failing it? These tools—email, Slack, social media, SEO—and the fluency with which we employ them invite a certain consistency of expression, regardless of who’s using them. To an extent we embrace this, policing each other’s behavior online. Forever trying to pin down the final and correct way to speak or act or feel, we funnel ourselves towards supposed enlightenment, finding something a little dead-eyed instead.
It’s not just technology, though. Postmodernism seems an equally potent force in the acceleration of sameness, sending us on repetitive, derivative loops of nostalgia and reference. In his Dazed essay about Kim Kardashian and the end of history, Thom Waite describes a culture obsessed with “iteration rather than innovation, betraying a dire lack of imagination.” We’re a post-industrial society trapped in a hyperrealistic ouroboros, failing to see the variety of paths ahead of us, assuming instead there’s only one way this can end. Similarly, ChatGPT is trained only on what has been, unable to imagine what could be.
A less cynical way to think about the postmodern tech revolution is that it hasn’t completely subsumed the real, brilliant, chaotic world, but merely emphasized it as vital and precious. Artificial intelligence, in my view, continues this tradition. The only singularity it can truly achieve is one that understands humanity at its most inhuman (and inhumane): data-driven, nihilistic, calculating, and backward-looking. This of course has its uses. Where it best mimics us, though, can be instructive and revealing—not because everything formulaic is bad, but because predictive models necessarily lack the pulsing vitality of life lived off-script.
My favorite thing I read last week was “Nothing Special,” an excerpt of Nicole Flattery’s new novel of the same name, published by Granta. Last Friday’s 15 things also included my latest foray into TikTok baking, musings on nudity, a suddenly relevant article I completely forgot I wrote, and more. The Rec of the Week was creative breakfast ideas because I’m so sick of mine—huge.
For the podcast this week, I invited Harling and Mallory back on to discuss the viral Ozempic articles published by The Cut last week. Ep drops Tuesday 9am.
Hope you have a nice Sunday!
Haley
photo © eleonora galli