An Age of Hyperabundance
Laura Preston
At the conversational AI conference
n+1, Issue 47, Spring 2024

Dana Lok, Typist. 2023, oil on canvas. 20 × 17”. Courtesy of the artist and Miguel Abreu Gallery, New York.
I was in a room of men. Every man was over-groomed: checked shirt, cologne behind the ears, deluxe beard or clean-shaven jaw. Their conversations bounced around me in jolly rat-a-tats, but the argot evaded interpretation. All I made out were acronyms and discerning grunts, backslaps, a mannered nonchalance.
I was at the Chattanooga Convention Center for Project Voice, a major gathering for software developers, venture capitalists, and entrepreneurs in conversational AI. The conference, now in its eighth year, was run by Bradley Metrock, an uncommonly tall man with rousing frat-boy energy who is, per his professional bio, a “leading thought leader” in voice tech. “I’m a conservative guy!” he said to me on a Zoom call some weeks prior. “I was like, ‘What kind of magazine is this? Seems pretty out there.’”
The magazine in question was this one. Bradley had read my essay “HUMAN_FALLBACK” in n+1’s Winter 2022 issue in which I described my year impersonating a chatbot for a real estate start-up. A lonely year, a depressing charade; it had made an impression on Bradley. He asked if I’d attend Project Voice as the “honorary contrarian speaker,” a title bestowed each year on a public figure, often a journalist, who has expressed objections to conversational AI. As part of my contrarian duties, I was to close out the conference with a thirty-minute speech to an audience of five hundred — a sort of valedictory of grievances, I gathered.
So that what? So that no one could accuse the AI pioneers of ignoring existential threats to culture? To facilitate a brief moment of self-flagellation before everyone hit the bars? I wasn’t sure, but I sensed my presence had less to do with balance and more to do with sport. Bradley kept using the word “exciting.” A few years ago, he said, the contrarian speaker stormed onstage, visibly irate. As she railed against the wickedness of the Echo Dot Kids, Amazon’s voice assistant for children, a row of Amazon executives walked out. Major sponsors! That, said Bradley, was very exciting.
I wondered if I should be offended by my contrarian designation, which positioned AI as the de facto orthodoxy and framed any argument I could make as the inevitable expression of my antagonistic pathology. The more I thought about it, the more I became convinced I was being set up for failure. Recent discussion of conversational AI has tended to treat the technology as a monolithic force synonymous with ChatGPT, capable of both cultural upheaval and benign comedy. But conversational AI encloses a vast, teeming domain. The term refers to any technology you can talk to the way you would talk to a person, and also includes any software that uses large language models to modify, translate, interpret, or forge written or spoken words. The field is motley and prodigious, with countless companies speculating in their own little corners. There are companies that make telemarketing tools, navigation systems, speech-to-text software for medical offices, psychotherapy chatbots, and essay-writing aids; there are conversational banking apps, avatars that take food orders, and virtual assistants for every industry under the sun; there are companies cloning celebrity voices so that an American actor can, for example, film a commercial in Dutch. The field is so crowded and the hype is so loud that to offset a three-day circus with thirty minutes of counterpoint is to practically coerce the critic into abstractions. Still, I accepted the invitation for the same reason I took the job with the real estate start-up: it was a paid opportunity and seemed like something I could write about.
On the first afternoon of the conference I took a lap around the floor and tried to make sense of what I was seeing. Tech companies had arrived with their sundries: bowls of wrapped candies, ballpoint pens, PopSockets, and other bribes; brochures fanned on tables; iPads with demos at the ready. The graphics, curiously alike across the displays, were a combination of Y2K screen saver abstractions and the McGraw Hill visual tradition. Many companies had erected tall, vertical banners adorned with hot-air balloons, city skylines at dusk, dark-haired women on call-center headsets, and circular flowcharts with no discernible content. If conversational AI had a heraldic color, that color would be blue — a dusty Egyptian blue, chaste and masculine, more Windows 2000 than Giotto. It’s a tedious no-color, the color of abdicating choice, and on the exhibition floor it was ubiquitous in calm, flat abysses backgrounding white text.
The only booth that stood out was at the far end of the exhibition hall. A company had tented its little patch of real estate with an inflatable white cube that looked like a large, quivering marshmallow. Inside the cube was Keith, a soft-spoken man whose earnest features and round physique conveyed a gnome-like benevolence. Beside Keith was a large screen. On the screen was a woman. The woman had dark hair, dark eyes, and purple lips that endeavored a smile. Her shoulders rose and fell, as if to suggest the act of breathing, and though she looked toward me, her gaze was elsewhere.
“This is Chatty,” Keith shouted over the roar of the blowers keeping his enclosure erect.
Keith worked for SapientX, a company that makes photorealistic conversational avatars powered by ChatGPT. SapientX had custom-built Chatty for Project Voice. Chatty could answer questions about the conference agenda and show you a map of the exhibition floor, except she couldn’t do it just then, said Keith, because they couldn’t seem to connect her to the wi-fi.
Keith was happy enough to walk me through the visuals. Chatty’s face was the collaborative effort of fifty different companies. A company in Toronto did the eyes. “There’s like eight guys and all they do is eyes all day,” he said.
Chatty’s face was a composite of several different races. Her voice was a composite of several different women. Her voice still needed some work, he admitted. “Right now she’s kinda mean.”
I picked up a brochure that featured a roster of “digital employees,” complete with their names, headshots, and “personality scores.” I wondered what industries might hire them.
“They’re mostly for kiosks,” Keith responded with a tone of defeat. “Like at a mall or a museum. Also military training. Stuff like that.”
Keith directed my attention to the exterior of the cube. A large banner depicted an older male, prosaically handsome, with a square jaw, a custardy dollop of silver hair, and pale, limpid eyes. This was Chief, said Keith. “He’s a navy guy. And he talks like a navy guy. We work in forty different languages. So if you’re training someone in Ukraine how to operate an American tool, we have that language built in.”
Keith went back inside to rustle me up a T-shirt. He told me that the company was also breaking into health care — nursing homes, to be precise. Keith explained the vision. Your mom is old, and you’re constantly reminding her to take her medicine. Why not leave that to an avatar? The avatar can converse with your mom, keep her company, fill up the idle hours of the day. Plus, you can incorporate a retina scanner to check her blood pressure and a motion sensor to make sure she isn’t lying dead on the floor.
“Say there’s an elderly woman with dementia,” he said. “Her avatar will look like she did when she was younger. So she has someone to identify with. Does that make sense?”
I imagined a future geriatric Keith, lying in a nursing home bed, conversing with his younger self. Would such an arrangement appeal to him?
“There’s not going to be a choice,” he said. “A lot of old people are going to be talking to avatars in ten years, and they won’t even know it. When I was touring facilities in San Francisco for people with dementia and stuff, those places are like insane asylums. But some patients still have some cognitive function, and that’s who the technology would be for. It’s definitely not going to apply to the guys that are comatose.”
We stood in silence for a moment, and he faced Chatty, who hovered before us, drifting in her strange, waking trance.
“I wish they could fix the internet,” said Keith. “I swear, she gets nasty. She like, looks at me bad.”
At the back of the exhibition hall, a daylong program of talks and panels was playing out on a modular stage — the stage on which, in two days’ time, I would perform my conference-sanctioned finger-wagging. Like a dutiful student, I had typed out my speech and practiced it against my iPhone’s stopwatch. I began to fear this was the wrong approach. The speakers were going at it notes-free and pacing around TED Talk–style.
A woman named Olga was giving a talk about charisma. I listened, hoping to gain some last-minute wisdom for my own remarks. Olga represented a German firm that puts chatbots through some sort of charisma boot camp. According to Olga, here is how a chatbot can show charisma: First, it can remember the customer’s name. That is very charismatic. “A charismatic assistant might have a quirky sense of humor, a soothing voice, or a nice and friendly tone,” she said. “A charismatic assistant can remind you to take your medicine.”
Olga had an important message about charisma. In our pursuit of charismatic AI, we must avoid dark patterns. Dark patterns are manipulative design tactics that steer people toward decisions they wouldn’t normally make, and these patterns often resemble charisma. A chatbot using dark patterns might mimic your mannerisms to gain your trust. It might lead you to believe it is a real person. It might have enough data on you to flirt with lethal precision, and then, just after delivering a dopamine hit, ask you to provide a credit card to continue. Better not do any of that, said Olga.
Next up was a man from a company called Journey. I did some light research on my phone. Journey’s past projects included Walmart Land, a virtual music festival in the Roblox metaverse meant to “engage the next generation of Walmart fans,” and promotional concept videos for NEOM, the $500 billion megacity that is the pet project of Saudi Crown Prince Mohammed bin Salman, which will allegedly include a floating industrial city, a luxury island for the yachting class, and a continuous, one-hundred-mile-long, mirror-clad structure called the Line that will bisect a desert tract the size of Massachusetts, a tract that is the ancestral land of the Huwaitat tribe, of whom some twenty thousand members have already been evicted, among them three activists who, for their noncompliance, have been sentenced to death.
The man from Journey showed us how generative AI could expedite product design. He took us through a branding exercise for a hypothetical breakfast cereal. His ChatGPT-powered tool came up with the packaging: a cardboard obelisk, four times as tall as it was wide, with a picture of what might have been a passion fruit tumbling down a cascade of liquid. If I had to guess what was in such a box, I would guess printer cartridges or chardonnay. The AI had also written some copy. “Are you tired of starting your day with a bland and boring breakfast? Look no further than PureCrunch Naturals.” With a few more clicks, the man showed how the right combination of generative AI tools can, like a fungus, spawn infinite assets: long-form articles, video spots, and social media posts, all translated into multiple languages.
“We are moving,” he said, “into a hyperabundance decade.”
I bought a sweet tea at a downtown lunch spot and reviewed the notes for my talk. Before I arrived at the conference, I had decided to discuss bias in algorithms. The essence of my argument was this: In 2019, shortly after I finished graduate school, I worked for a company that made a real estate chatbot called Brenda. Brenda answered questions about apartment listings and booked prospective tenants for tours. My job was to supervise Brenda’s conversations as an “operator,” and if she went off script, which she often did, I took over until she regained her bearings.
Over thousands of conversations with strangers, I began to suspect that Brenda’s diction — and the very fact of her texting interface — was most palatable to the young, affluent, and white. I feared this had real effects on which people booked tours, and which people were so put off by the experience of speaking to Brenda they looked for housing elsewhere. Was this not redlining by algorithm? The peculiar mental burden of the job was that I was made to live in parallel but opposite realities. On the one hand, our Slack channels were filled with messages from developers claiming righteous intentions. Brenda was making the rental process accessible, democratic, quick as a text. And yet every night I watched how this bot, with her blameless, chirpy affect, was an instrument of isolation, a digital bully that landlords used to create distance between themselves and their tenants.
Though she hadn’t crossed my mind for some time, I remembered Ella, a woman who messaged Brenda so often I came to recognize her on my shifts. Ella spoke only Spanish. Brenda did not, and neither did most of the chatbot operators, so we corresponded with Ella by copying and pasting Spanish phrases from a Google Doc we had compiled on our own time. Ella was a tenant at one of Brenda’s properties. Ella’s messages were urgent and anguished. She spoke of violencia and God. Her situation was unclear. She sent video clips of her walls and ceilings, which came through as still images without sound.
We were fairly certain Ella was trying to report domestic violence in the apartment next door. We told Ella that if she or someone else was in danger she should call 911. Ella did not call 911; it was possible she was afraid to engage the police. We told Ella to call building management, but the management’s only phone number rerouted to Brenda, the chatbot who handled rental inquiries.
Ella, I should note, was not the woman’s name. She offered us her real name several times, which we manually added to her file. But Brenda, ever keen, kept spotting the feminine singular pronoun ella — a more suitable name by Brenda’s logic, more like the names she had seen before — and entering it into the name field, obliterating whatever had been there. “¿Cómo te llamas?” we would ask. “¡Ya te dije!” she would say. The woman’s true name was finally lost.
We always pinged our supervisors when Ella’s messages came through. At first, our supervisors reacted with twisted excitement, for here was an opportunity for Brenda to flex her empathy — that was what Brenda did best! Ella continued to send vague and frightening reports. Our supervisors grew tired. “In some situations, it’s useful to sound like a bot,” one said. The last time Ella appeared in my inbox, I scrolled through her message history. Brenda’s automated messages had been completely disabled. In their place, I saw a litany of human operators repeating a refrain:
>Soy un agente inmobiliario remoto.
>Soy un agente inmobiliario remoto.
>Soy un agente inmobiliario remoto.
The story of Ella was an example of a chatbot working badly. It was also an example of a chatbot working wonderfully. Not once was a landlord’s silence disturbed by this woman and her problems. She was not even a person in the database, but a hysterical pronoun. And how apt, in the end, for her troubles to divert to us, a group of poets and novelists hired specifically for our feelings, who could feel for her endlessly but do nothing else, as we did not know the landlord’s name or how to reach him and lived very far away.
As I reentered the conference floor, I was still thinking about the tension between declared outcomes and actual implementations. All around me, the booths posed a collective thesis on the future. This was a future without busywork or buttons, a future of bespoke experiences, a future where the internet was an ambient thing we’d call upon with our voices — not a service we would use but a place where we would live. Beneath this promised future, however, was a shadow future, one that suggested itself at every turn. This was a future of screens in every establishment and no way to get help, a future in which extractive algorithms yielded relentless advertising, a future of a crapified internet, too diluted with sponcon and hallucinated facts to be of any use. In this future, if you wanted to use a product you would have to download an app and pay a monthly fee. It was a future of ultra-sophisticated scams and government surveillance, a future where anyone’s face could be spliced into porn. Our arrival in this future would be a gradual surrender, achieved through a slow creep of terms and conditions, and the capitulations had already begun.
And yet no one at the conference seemed worried. This was a room of nerdy, earnest people, people who were good at fixing things and eager to improve the world in whatever ways made sense to them. I couldn’t quite figure it out. Was their belief in the goodness of AI so secure that they didn’t see the broader threats? Or did they not care how the technology was ultimately used, as long as they came out with the spoils?
I was nervous about the day’s closing talk, which was called “Savage Communities and Noble Leadership.” This talk would be delivered by Ian Utile, a man of unclear affiliation who was, I could only guess, another leading thought leader of the industry.
Ian strode down the aisle and leaped onstage. He was dressed in black with wraparound sunglasses, black hair greased into a ponytail, a rectilinear beard, and boots so pointy they were practically Arthurian. The look suggested the World Series of Poker, but when he began to speak, Ian’s energy transformed into that of a Pentecostal youth minister.
“What I see is a lot of leaders that are either NOBLE . . . or savage,” he bellowed. “I see a lot of communities that are either SAVAGE . . . or noble. How do you balance savagery, confidence, intensity, drive . . . being the IMPETUS! The DRIVING FORCE THAT MAKES THINGS HAPPEN! . . . with? Nobility. Meekness. Kindness. Love. Care. Empathy. And sympathy. And congratulations, when I look at this industry I see a lot of nobility. I often wonder, ‘Where’s the savagery, everybody?’ Are your customers savage about you? About your products? Your services? ARE YOUR EMPLOYEES SAVAGE? WILL THEY KNOCK DOWN WALLS TO MAKE THINGS HAPPEN FOR YOU?”
I found Ian’s LinkedIn. In fact he did have a history with the Evangelical church. He was the “entrepreneur-in-residence” at Transform Our World, a global ministry led by the Argentinean evangelist Ed Silvoso, and had been a guest on a panel called “How the Church Can Glorify God in the Metaverse.”
“When you’re a leader, ideally, you are focused on nobility. To lead like a queen, to lead like a king. To build communities on top of your shoulders. Not a pyramid down, but the pyramid upside-down. May we at Project Voice represent savage communities and noble leaders. Bradley, thank you for letting me share that with everybody.”
“Collapse of context,” I wrote in my notebook. “The ChatGPT-like impulse to write a speech on the noble savage because noble savage is something you’ve heard before, and it sounds badass.”
Let’s say I got up onstage and did a close reading of Melville or Montaigne. Let’s say I told them that each word was a trawl net that heaves up a thrashing, slapping, shimmering school of associations from the deep. Would anyone believe me? Would it matter?
“Hyperabundance — from ecology,” I wrote. I was too exhausted from the day to refine my thoughts further. As I drifted off to sleep in my hotel room that night, I thought of white-tailed deer devouring the understory and spotted lanternflies attached to tree trunks in horrible, shingled heaps.
The following morning, the conference dispersed for industry-specific talks. When I arrived at the exhibition hall, a man told me the digital signs weren’t working. He took out a pen and drew some arrows on my conference map.
I headed to the health-care room. It was just before nine, and people were looking groggy, but the room was nevertheless full and vibrating with anticipatory energy. A young woman with red hair down to her tailbone approached the lectern.
Her name was Caitlyn, and she represented a company called Canary Speech. Canary Speech had developed a tool that analyzes human speech for “vocal biomarkers.” Caitlyn explained that vocal biomarkers are qualities below the level of human hearing that correlate with emotional and physiological conditions. By listening to just thirty seconds of recorded speech, Canary Speech could return a health audit that breaks down the user’s mood, energy, anxiety, and degree of depression, and identify pre-Parkinson’s traits, as well as early signs of Alzheimer’s.
Caitlyn played a video in which she prompted a woman to speak for thirty seconds on any topic. The woman described her morning. She had woken up and fed her child. Her child had played with their dog.
“Canary is analyzing the audio,” Caitlyn said. “You have medium anxiety and medium depression. Your energy score is at forty-six. Your power is at seventy-eight. Speed is medium, at forty-eight. Dynamic is twelve.”
I could feel a collective intake of breath. A woman raised her hand and asked what would happen to someone’s diagnosis if the sound quality were poor.
“That’s why we record forty seconds,” said Caitlyn. “So we get more audio than we need.”
The woman’s question poked a hole in the dam, and more questions poured forth. Another woman identified herself as a therapist. “To be told you’re mildly depressed will make you depressed,” she said.
“How can we be confident this won’t fall into the hands of corporations that will figure out how to sell things to depressed people?” asked another woman.
“What about cultural differences?” said someone else.
Caitlyn attempted an answer, but before she could produce anything cogent a male voice boomed from the back of the room.
“WE’RE TALKING ABOUT MICRO-PROSODIC FEATURES BELOW THE LEVEL OF HUMAN PERCEPTION,” the voice said. The large and formidable man to whom this voice belonged rolled down the aisle on a mobility scooter. He introduced himself. He was a veteran of the original Alexa build and Canary Speech’s cofounder.
“What is the baseline you’re comparing it to?” someone asked.
“You compare it to a generic baseline.”
“What is generic? Man? Woman? Teenager?”
“These clues are generally universal.”
The cofounder yielded the room to Caitlyn. A member of the audience asked if the software might be useful to large enterprises, perhaps to monitor employees.
“Absolutely,” said Caitlyn. And in fact, Canary Speech was thrilled to announce a new partnership with Microsoft Teams.
Every talk sent me down a frantic spiral of inquiry. That morning, at breakfast, I couldn’t stop reading studies on nursing home robots. (In one, a group of Canadian nurses expressed concerns that robotic equipment could be used as weapons during “behavioural events.”) Now that I knew about vocal biomarkers, I felt myself hurtling down a new tube slide of panic. What would the insurance companies do? Would they increase a patient’s premium because their voice indicated pre-Parkinson’s? What would happen when the company-wide mental health initiative required employee one-on-ones with Canary in the background? Would a sales call debrief include metrics on everyone’s mood? In a society that reserves psychiatric care for only its wealthiest members, was there any reason to automate mental health assessments if not for the purpose of mass surveillance? What did it mean to have medium depression? Was I medium depressed?
I was especially alarmed by that turn of phrase: These traits are generally universal. Without meaning to, the cofounder had summarized the prevailing logic of AI. The AI industrialists envisioned a future where large language models would replace search engines. Instead of rummaging through a heap of Google hits, we would pose questions and receive answers. Though the answers would be no more than a statistical averaging of existing texts, they would be packaged as authoritative comments and give the impression of thought. All the gnarls of individual style, the eccentricities of argument, the anomalous notions, would be smoothed away, and the resulting summary — mediocre, adequate — would be peddled as absolute knowledge.
If ChatGPT reduced texts into summary, Canary Speech aimed to do the same with individual mentalities. To say that vocal biomarkers were generally universal predictors of mood admitted the existence of outliers that for whatever reason did not matter. So what about those outliers? If your voice test indicated an anxious disorder, but you did not suffer from anxious feelings, whose truth was to be believed? To me, this all had an odious whiff of physiognomy and race science. It was the same logic that compelled white men to fashion their avatar’s face as the ghostly average of non-Caucasian women, a de facto stereotype, like some Victorian eugenicist’s photography experiment.
In the breakout room for the hospitality industry, a program of presenters took turns spelling out, with fetishistic precision, our communal experiences as conferencegoers. We were pilgrims in a strange city, lying in the austere bedrooms of the Staybridge Suites, in need of food and beverage but daunted by the urban wilderness. “Show me pizza places nearby,” a developer said as he pantomimed the ideal food-ordering app. “Now show me ones that deliver.” Another man whose entire platform seemed to be lobbying for 24/7 earbud use showed us the perks of having a voice assistant permanently lodged in a bodily orifice. “Where is a good hamburger restaurant in my area?” he said. “Are there any brewpubs within walking distance of the Chattanooga Convention Center?” A dour pair of Germans described the specific burdens of hotel living. For example, you must check in at the front desk. Then, when you get to your room, you must locate the mini-fridge and discover the light switches. The German team was addressing these problems with an in-room, voice-activated AI concierge. You would bypass the front desk and unlock your room with your phone, then a disembodied voice would help you with the lights.
I was an extraterrestrial taking notes on the problems of Earth. Finding pizza in your area was a problem. People being mean to you because you were wearing your AirPods at dinner was a problem. Going on vacation was a problem because the hotels would force you to find the light switches. Elders were a problem. (They never took their medicine.) Loneliness was a problem, but loneliness had a solution, and the solution was conversation. But don’t talk with your elders, and not with the front desk, and certainly not with the man on the corner, though he might know where the pizza is. (“Noise-canceling is great, especially if you live urban,” said the earbuds guy. “There’s a lot of world out there.”) Idle chitchat was a snag in daily living. We’d rather slip through the world as silent as a burglar, seen by no one except our devices.
I was excited about the next talk, which was called “Using ChatGPT to Create an Answer Engine for Pets.” I pictured my family’s terrier — a bedraggled little animal, so endearing and pathetic you wouldn’t be surprised to catch him on the train tracks with a bindle on his shoulder — sitting before a computer terminal, counseled by a program that speaks the language of dogs.
I should have known by then to expect disappointment. “Meet VERA,” said the compact, magnificently tan CEO of a company called AskVet. A woman materialized on a screen behind him. She spoke, but her mouth didn’t move in time with the words. “And if you can believe it, she’s not a real person,” he said.
VERA was the “world’s only veterinary engagement and relationship agent.” If your pet fell ill you could chat with VERA, and she would tell you if a vet trip was necessary.
“Let’s consider your standard diarrhea exercise,” said the CEO. His voice was serene, gentle, trustworthy. And yet below that outer shell of self-possession, I detected a suppressed, slow-burning anger. He seemed aggrieved by what he called pet parents, a fundamentally irrational demographic that was incapable of much besides crowding the vet offices.
A woman in the audience asked if VERA followed up to make sure her diagnosis had been right.
The CEO leaned an elbow on the podium. “I’ll tell you a story,” he said.
A woman wrote to VERA about her elderly dog, who was having diarrhea.
“Your dog is at the end of his life,” said VERA. “I recommend euthanasia.”
The woman was beside herself. She told VERA she wasn’t ready to say goodbye. Her dog was her only companion.
VERA knew the woman’s location. She sent a list of nearby clinics that could get the job done. Still, the woman was unconvinced. Euthanasia was so expensive. She’d never be able to afford it. VERA sent another list, this time of nearby shelters. “If you relinquish your dog to a shelter, they will euthanize him at no cost,” she said.
The woman did not respond. But some days later she sent VERA a long and effusive message. She had taken VERA’s advice and euthanized her dog. She wanted to thank VERA for the support during the most difficult moment of her life.
The CEO regarded us with satisfaction for his chatbot’s work: that, through a series of escalating tactics, it had convinced a woman to end her dog’s life, though she hadn’t wanted to at all. “The point of this story is that the woman forgot she was talking to a bot,” he said. “The experience was so human.”
I needed a break. The day before, a booth on the exhibition floor had been giving away full-size Tony’s Chocolonely bars, so I went looking for one of those. As I navigated the rat maze in search of my treat, I came face-to-face with yet another synthetic person. There was something uniquely inept about her appearance. She was diminutive, perhaps two feet tall, with proportions I can only describe as Atenist: broad hips; narrow shoulders; long, willowy legs; and a thigh gap wider than the thighs themselves. Her arms floated beside her, governed by Martian gravity, and again, the proportions were aberrant. The first two-thirds of each arm was an arm, but the remaining third was all fingers. She wore a white doctor’s coat. Around her neck was a stethoscope.
A real man came up behind me. He was small, about my height, and in his fifties.
“This is Catherine,” he said, “Chatty Cathy.”
“Is she a physician?” I asked.
“Yes,” said the man, whose name was Norrie. “Quite qualified, I’m told. Talk to her!”
I wasn’t sure how wise it was to solicit health advice from a metahuman whose creators hadn’t mastered human anatomy, but she was, after all, wearing a stethoscope. Norrie shushed me.
“Don’t tell her about your ailments,” he said. “That’s very personal. Cathy, who are you with today?”
Cathy stared at us silently. Her irises were completely encircled by the whites of her eyes, which gave her a look of permanent startle.
“The wi-fi’s slow,” said Norrie.
“Who are you here with, Cathy?” I repeated, but Norrie diverted my attention to another screen. On this screen was a male avatar, much like Cathy in style and comportment, with broad shoulders and well-developed pectorals.
“My team made him for me,” Norrie said with a salacious little grin. “He answers questions about my life.”
It took me a moment. “It’s you!”
“I asked for a swimmer’s build.”
I had never been bothered by the singularity of my body; the consensus of this conference, however, was that making your synthetic double was something you should want to do.
“TODAY, I’M HERE AT THE PROJECT VOICE TRADE SHOW WITH MY CODEBABY COLLEAGUES,” bellowed Cathy. “WE’RE ALL EXCITED TO DISCUSS THE POTENTIAL BENEFITS OF CONVERSATIONAL AI AND AVATARS IN HEALTH CARE.”
I still wanted to ask a medical question.
“Tell her your cholesterol’s over three hundred,” Norrie said, and I did.
“I’M NOT A DOCTOR,” said Cathy. “BUT IF YOUR CHOLESTEROL IS HIGH, IT’S IMPORTANT TO CONSULT WITH A HEALTH-CARE PROFESSIONAL. AS A CODEBABY AVATAR, MY EXPERTISE LIES IN CONVERSATIONAL AI AND ITS APPLICATIONS IN HEALTH-CARE COMMUNICATION.”
When Norrie told me his company was developing avatars for elder care, I asked him the same question I had asked Keith. Would he be happy with an avatar in his home who cared for him in his old age?
“Absolutely,” he said. “I’m single. I live with two dogs. You can do a lot of good with this technology. You can also do a lot of bad. You can defraud people. It’s always the vulnerable who are defrauded. Humans kill each other, let’s face it.” Norrie let out a deep, sonorous laugh.
After lunch, the industry rooms disbanded and everyone returned to the exhibition hall for talks on the main stage. Bradley moderated a panel on avatars. “If you had to convince me that avatars are here to stay, what would you say?” he asked.
“People will share medical data with an avatar more willingly than with their own family doctor,” said the CEO of SapientX, the company with the inflatable tent. “Another reason is the fun factor.”
A woman named Reena answered next. Reena’s start-up, Wisdocity, aimed to make digital clones of real people in order to preserve their individual skills and knowledge. Reena explained that her teenage daughter was unhappy with her high school’s curriculum, which didn’t teach anything of real-world use. Reena suggested we clone business leaders and bring them into the classroom. Students would be enriched by human connection, the mental health crisis among teens would subside, and we wouldn’t even have to invent the therapy bots in the first place.
“I think Reena nailed it perfectly,” said Norrie, who expanded on the theme of mental health.
“You are alone,” he droned. “You don’t have any ability to communicate with another human. So you talk to an avatar to have at least some relief from the boredom and loneliness you face.”
Everyone at this conference kept invoking loneliness and claiming the antidote was conversation. That didn’t track with my own experience. My most desperate moments of loneliness have been in conversation: on a Hinge date, doomed but persisting as a form of protocol. At a publishing party, surrounded by people who look and talk like me, all of us a little drunk but maintaining our nervous, manic professionalism. My moments of connection, by contrast, have been beyond language. Biking along the east edge of Prospect Park on an August night, hearing cicadas chant their reedy iambs, as loud on that stretch of Flatbush as they would be in the countryside, remembering summers of childhood, a house that’s gone, and my grandmother’s two-handed wave from the threshold.
“It wouldn’t be right to have this panel and not talk about ethics, given what y’all are doing,” said Bradley. “All y’all’s avatars have the ability to bring someone to their knees.”
Cathy appeared in my mind like an archangel. Her bat phalanges, her snarl of a smile, her vacant eyes.
“Like, the last thing I want to do is create a racist bot,” said a panelist.
“As human beings we experience the fact that we kill other human beings,” Norrie said for the second time that day. “We are creating tools that will be misused. We hope there are some barriers in place, but at the end of the day . . . yeah.”
“Let’s give these folks a round of applause,” said Bradley.
I still had not seen a demo of SapientX’s Chatty, the one inside the inflatable cube. Inside the cube I found Keith as well as the CEO, who had just returned from the stage, and who was sitting on a folding chair in the corner and eating a sandwich. Chatty was no longer on the large screen. Instead, it was Chief, the drill sergeant.
I asked if I could see a demo. “Yes,” said the CEO. “Actually no, I just unplugged everything.” He asked what brought me to the conference.
“Contrarian speaker,” he said. “Is that a company? You’re just contrary by nature?”
I explained the general idea.
“So you’re a writer!” he said. “One of my ranch hands used to be a writer! She got really beaten up by the industry.”
I told Keith I’d try to come back. “Chief’s the one designed for military training, right?” I asked.
Keith looked confused.
“We made him for a mall,” he said. “A military museum and a mall.”
Throughout the conference, a little itch had developed in the back of my brain. It was an ugly, edgy feeling, and I finally recognized what it was. It was the feeling of being scammed. Chief might teach a weapons qualification course, or he might answer FAQs at the veterans’ memorial. Chatty Cathy wore a stethoscope because she was a doctor, but she wasn’t a doctor — her expertise was in conversational AI and its applications in health-care communication. VERA wasn’t a vet — she was a veterinary engagement and relationship agent. When I impersonated Brenda, I was not a leasing agent and had no real estate credentials. I had an MFA in creative writing. My supervisor told me to say I was an offsite leasing specialist, a meaningless title, technical enough for most users to skim over and not question its validity. It all suggested a future of ineptitude, where everyone was a brand instrument disguised as a resource.
I had a little over twelve hours left before my talk. In my hotel room, I set up my iPhone timer and practiced the various turns of my argument. Brenda’s conversations were designed by affluent white people, which meant that her rhetorical style was affluent and white.
I wasn’t feeling great about this argument. Not because it wasn’t true, but because I realized this was exactly the argument people expected me to make. Since I’d been in Chattanooga, there’d been plenty of talk about bias. Lots of companies had called for diverse data sets, for large language models trained on regional vernaculars, for multicultural avatars with whom a wide array of people could identify. But these calls for diversity seemed to represent the far limits of their imagination and stopped short of a more radical truth: that these algorithms — diverse or not — were designed to violate, extract, and exploit. Hito Steyerl describes this problem in the New Left Review, citing Racial Faces in the Wild, a dataset aimed at improving facial recognition software that struggled to identify nonwhite people. “Police departments have been waiting and hoping for facial recognition to be optimized for non-Caucasian faces,” writes Steyerl. And indeed, a firm called SenseTime was happy to use such a data set to train surveillance software for the Chinese government, software that was used to monitor members of the Uighur ethnic minority.
If I delivered this talk exactly as I’d written it, I would provide my audience with just enough critique to pleasantly stimulate their intellects, but nothing I said would be new. If anything, it would make them feel smug — smug that they had drawn similar conclusions without my counsel, smug that they were already, to borrow a word I heard with remarkable frequency, “cognizant” of the issues. I’d noticed during this conference a prevailing idea that as long as you designed a tool with good intentions, you were not responsible for how others misused it. “How do you think of ethics as a concept?” Bradley had asked nearly every panelist on the stage. “We think a lot about ethics,” said one panelist. “Ethics are something we want to be aware of,” said another. Over and over, I heard people recite variations on this line, like a quaint personal creed, a spell of protection. If the act of thinking about ethics is enough to confer immunity, then to make design choices in the service of diversity is an overachievement worthy of praise. Thus we see companies rolling out synthetic voices for every possible vernacular so that the phone scammers of the future will be blessed with a deluxe toolbox — and if this is not the intention, it will surely be the outcome. If I were to get onstage and ask for diverse data sets, what would I be asking for but a future in which we all have equal opportunity to be defrauded, to be surveilled at work, to be a patient in an Alzheimer’s ward with a phantom at our bedside?
I wondered if I had enough time to write a new speech, something truly hostile. Why not go out on a tirade? But to tell a group of people that their invention could destroy the global order was another way of telling them their invention was godlike, supreme, and was exactly what the tech billionaires themselves were saying to bolster their market influence. In any case, I wasn’t convinced these technologies were sophisticated enough to hasten societal collapse just yet. Some of them couldn’t connect to the internet. What really frightened me was the future of mediocrity they suggested: the inescapable screens, the app-facilitated antisocial behavior, the assumptions advanced as knowledge, and above all the collective delusion formulated in high offices and peddled to common people that all this made for an easier life.
The morning of my speech, I watched ESPN in the lobby of my hotel and ate breakfast potatoes on a styrofoam plate. There was a man at the table next to mine.
“Excuse me,” he said. “Are you with the conference?”
The man was in his late fifties, with silver hair and a nice watch. He asked me what time the program started.
I suspected he knew perfectly well, but I engaged him anyway. He told me he worked in consumer electronics, then asked what industry I was in. I told him that I was the contrarian speaker, but that I didn’t particularly relish the title, as it gave everyone permission to dismiss my argument as a by-product of my lame personality. He asked me what my talk would be about, and I gave him a gloss.
“Most people here are going to agree with you.”
“I know,” I said.
I decided to ask him a question that had been on my mind since the first day.
“Do people feel obligated to invent this stuff?” I said.
Many of the company reps, I noticed, seemed weary, even bored, as if they had no choice but to toil away on these technologies. Their indolent cadences, their rehearsed lines, the way a self-aware, sardonic remark would slip out before they fell back to the techno-optimist script — all this suggested that they viewed their work as nonnegotiable, a cliché of futurity they were required to design.
The man beside me sipped his hotel coffee. We stared at ESPN, lulled into silence by the muted anchors and the headline crawling beneath them: Packers trade Aaron Rodgers to Jets for multiple picks.
“When I was a kid, this news story came on TV,” he said. “That guy. Jim Jones. He took those people to Guyana and made them eat poison. My buddies and I were watching that and thinking, Where did all those stupid people come from? But you know what? I don’t wonder about that anymore.”
I asked him what he meant, and he kept his eyes on the television.
“When you’re standing in front of that fire, when the drums start thumping, it’s the animal brain again,” he said. He pounded his chest with his fist. “It’ll get you. It’ll get you. I promise, you’ll start to chant.”
The morning audience was sparse. The scattered attendees seemed a little nauseated, zoned out, clutching coffee cups. Last night’s mixer at the Chattanooga Pinball Museum had been a great success. First on the agenda was a pitch event. A woman pitched her voice-activated wellness app. “I love these opportunities that take advantage of helping people,” said a judge. A man demoed an in-home voice assistant. “Did you take your medications today?” the assistant said. “Oh, no. Medications are important for your health. Please take them.”
It was time for me to give my talk. I walked onstage and squinted past the overhead lights. My audience of five hundred was an audience of sixty. I gave my speech exactly as I had conceived it, the speech that by now I felt socially engineered to have arrived at. I was giving the developers my benediction. I, too, had surrendered. As I talked, I could hear people packing up their booths. I got a decent murmur of applause, a singular hoot. Already I could feel my critique mutating into praise.
Bradley took the stage. “We always close the conference with something that will get you thinking,” he said. With that, he yielded the podium to the final speaker, whose talk was titled — I read it twice — “Lasting Impact: How the Holocaust Inspired a New Approach to Conversational AI.”
The speaker, Sarah, led a project that made digital clones of Holocaust survivors. The clones were life-size, made from composite video clips and sound bites, and could answer questions about their experiences. The idea was that museums could incorporate these clones into their educational programs to combat Holocaust denialism. I wondered about the segment of humanity targeted by this effort — that is, the demographic that would deny the evidence submitted at Nuremberg until a digital apparition set the record straight. Halfway through the talk, Sarah did the classic pivot. “As we evolved our work, we realized that lots of businesses rely on the power of personal presence,” she said. She showed us how they cloned a famous Formula 1 driver to represent McLaren at an auto show. Celebrities, athletes, influencers — the possibilities for your brand are endless.
“The concept of conversational AI and the Holocaust demands attention,” said Bradley. “I’m glad Sarah talked to us about that. That concludes the program, we wish you safe travels.”
“Was it like that movie M3GAN?” my Lyft driver asked.
The driver was an ample older woman with a curtain of waist-length hair. She was not from Chattanooga, or even Tennessee, but had driven an hour from her home in Alabama to capitalize on the more fertile rideshare turf.
“I watched that movie four or five times,” she said. “I studied it carefully.”
We were on the interstate, winding our way along the spine of the Appalachian Mountains, a wide, shimmering basin to our left, a wall of stone to our right. We passed motels, an EZPAWN, car dealerships with their sparkling fleets and neon bunting. We passed truck depots, where 18-wheelers sat in ranks and waited to be filled.
“I think children are addicted to their iPads,” she said. “They don’t play outside or have a sense of nature.”
“I do miss playing outside,” I said.
“A few weeks ago, there was a rabbit in my backyard,” she said. “Not a brown rabbit, but black and freckled, a gorgeous animal, someone’s pet, I don’t know. It was happy in the garden. Then one morning I go out to water my beds and my asshole German shepherd had it hanging in its mouth. I called my son hysterical on the phone. My son’s in college but he sent his friend. Put it in a grocery bag and hit it with a dumbbell. I can’t even tell you the relief I felt when my son said he don’t want kids. I am a Christian woman. I’ve been reading Revelation. I said to him don’t you dare give me grandkids. This world is going in a bad direction and I don’t want to worry about my grandbabies.”
All I saw of her face were her eyes glancing at me in the rearview mirror, then darting back to the road. We pulled up to Departures.
“So how many years we got until this all takes over?” she said.
At the airport bar, I recognized Sarah, the woman who made the Holocaust clones. I suspected she saw me, too. But we had left the conference frame. Out in the puzzling, imprecise world, the rules were ill-defined. We kept to ourselves, sipping our beers in pretend absentmindedness, strangers side by side.