How do we work our way towards utopia?

Hi, it’s Charley, and this is Untangled, a newsletter about technology, people, and power.
Can’t afford a subscription and value Untangled’s offerings? Let me know! You only need to reply to this email, and I’ll add you to the paid subscription list, no questions asked.
In July, I wrote an essay about what humans can do that AI systems cannot. Then I interviewed one of the scholars whose work motivated the piece: Vaughn Tan. I also synthesized the biggest tech stories of the month, and then (wait for it!) published my first book, AI Untangled.
This month is all about the power of utopian thinking — it’s how we got into the current ‘AI’ mess, and a very different version of it will get us out of it. Let’s dig in.
In the summer of 1956, a group of 47 AI pioneers was invited to Dartmouth College to discuss the future of AI. All were men. All were white. And the vast majority were experts in math and computers, not the social systems their technologies would ultimately become entangled in. We’re now living in a world partially inspired by their imaginative inquiry — their version of utopian thinking got us into this mess, and getting out of it will require a very different approach.
The current obsession with predicting the future, optimizing the present, and determining whether or not computers think, know, or learn like humans can be traced back to the Dartmouth workshop. The attendees were heavily inspired by the thinking and theories of Alan Turing, Warren McCulloch, and John von Neumann.
Turing kicked off the computational theory of the mind (which, I’ve argued, is a bad metaphor for understanding AI). He was fascinated by whether machines could think like humans and worked from an — in my opinion, incorrect — starting point that assumed humans were not unlike computers. Amy Webb writes in her book The Big Nine that according to Turing, “We too are containers (our bodies), programs (autonomous cellular functions), and data (our DNA combined with indirect and direct sensory information).” This initial work led Turing to research ‘neural networks,’ which underpin modern-day AI models. Neurophysiologist Warren McCulloch and logician Walter Pitts then published “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which built on Turing’s faulty proposition and “described a new kind of system modeling biological neurons into simple neural network architecture for intelligence,” as Webb recounts.
Then John von Neumann theorized “the foundation of all economic decisions” using applied math. In the subsequent decades, behavioral economists and social scientists would muddy von Neumann’s assumptions, but nevertheless, according to Webb, “This marked the transition from the first era of computing (tabulation) to a new era of programmable systems.” This thinking set the agenda for the Dartmouth workshop, which worked on the assumption that “If it was possible to describe every feature of human intelligence, then a machine could be taught to simulate it.”
The initial focus on ‘thinking machines’ and decision modeling pervades today’s AI discourse. We’re living in the world that the Dartmouth attendees imagined — and that’s a big problem. We have to continually push back against the faulty premises guiding their research: e.g., that humans can be modeled like computers; that humans are rational economic actors; that we can predict the outcomes of decisions; and so on. And, crucially, no one was thinking about how the technology would interact with social systems. They started with the technology and then imagined what the world might look like with that technology in it. If we’re going to change the path that we’re on, we have to do the opposite — envision the world that we want and then map backward to determine whether and how to include AI at all.
I’ve come to believe imagination is one of the most powerful conceptual tools — and I think it’s the first step to getting us out of this AI mess. The path being imagined for us is clear. As Shannon Vallor elucidates in her great new book, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking:
“In such a future, we read poems, songs, and novels written by machines that have a powerful way with words, but not a thing inside waiting to be said. We get mental health ‘care’ from an artificial chatbot that hasn’t known a single moment of doubt or despair. We receive ‘love’ from a companion that can’t willingly refuse us, deny us, or choose us. We gaze at art made by a device that hasn’t ever had breath to be taken away by beauty, or skin to shiver at the sublime. And we can no longer even tell the difference. In such a future, many more humans than today may labor for piecemeal wages to feed our mirrors the ‘ground truth’ labels of a reality that we are no longer trusted to shape. Instead, our politics are decided by systems whose efficient predictions and optimizing policies push the dominant patterns of the past and present relentlessly into our futures, carrying forward the stories we have already written, the wars already fought, the injustices already committed. It’s a curious kind of innovation in which our past eats the future.”
Breaking from this path requires us to understand our sociotechnical context, and how power dynamics and social systems structure that context. Those who met at Dartmouth didn’t have a theory of how power functioned in sociotechnical systems, which was, and remains, part of the problem. They saw their questions in technical, mathematical terms — the ‘socio’ was nowhere to be found.
Once we take seriously that any technology shapes and is shaped by social systems, it becomes clear that imagining radically different futures isn’t just a fun li’l exercise — it’s a politically radical project because it threatens to disrupt the status quo. As anthropologist Wade Davis put it:
“The world into which you were born does not exist in an absolute sense but is just one model of reality — the consequence of one particular set of intellectual and adaptive choices that your own ancestors made, however successfully, many generations ago.”
Think of it this way: the only way to get to the future is through the present moment. We’re creating our future each day through our decisions, actions, choices, etc. This path has been grooved by those who came before us — the relationships they formed, the institutions they built, and the norms, values, and beliefs they propagated. The well-worn grooves are nudging you in a particular direction, but if we privilege different beliefs and decisions, build different institutions, and form new relationships — well, then, the path starts to veer in a different direction. Or maybe it becomes a new path altogether! The point is, the past is alive in the present, posing constraints and shaping what we perceive as possible — but it’s not determinative.
These grooves are subtle, but they reflect and then instantiate power relations. Who has power, and who doesn’t? Who benefits from this version of reality? Who has something to lose if we changed society’s default settings and charted a radically different path? If you start with questions like these, you’re likelier to see the grooves and figure out how they were created in the first place.
Taking the time to imagine good and bright futures is key to changing the paths we’re on for the better. When you ask yourself what kind of future you want to see, and you feel that what you envision is a little naïve, you’re doing it right. Part of the problem is that we associate ‘utopian’ with unrealistic. As Karl Mannheim wrote in Ideology and Utopia, “The representatives of a given order will label as utopian all conceptions of existence which from their point of view can in principle never be realized.” Right, the existing connotation of ‘utopian’ is evidence that we’re living in a world built according to someone else’s interests and logics.
That’s also why pessimism or dystopian thinking is so problematic — it doesn’t just accept the terms of the status quo, it reinforces those terms. It tells you that the future is already set in concrete, inevitable; that you can’t change it, and that, in fact, it’s only going to get worse from here. Utopian thinking, then, isn’t pie-in-the-sky naivety; it’s a strategic practice rooted in hope. As feminist historian Rebecca Solnit writes, “To hope is to give yourself to the future,” and “that commitment to the future is what makes the present inhabitable.”

📝 It’s time for an exercise!

Let’s imagine your own li’l sociotechnical utopia, shall we? If enough folks complete the following exercise and share a short vignette (1-2 paragraphs) describing their imagined future by August 31, I’ll share them in a subsequent post. You don’t need to share your answers to each question below — the questions are just tools for imagining your sociotechnical utopia.
As you start to engage in a bit of utopian thinking, here are a few principles to guide the exercise:
  • Start with outcomes (e.g. social, interpersonal, economic, political, etc.) and work backward as you consider the role of technology.
  • There are no ‘technological solutions,’ only sociotechnical interventions. Focus on the power dynamics, beliefs, cultural norms, practices, and values that would need to be different to enact systems change.
  • If you decide that AI is — in some way — important to the future you’ve described, then, as Vallor writes, “let us embrace them not as transcendence or liberation from our frail humanity, but as tools to enlarge our freedom for it.” In other words, make sure those outcomes are an expression of our humanity, not an escape from it!
Okay, the nuts and bolts:
Step 1: Pick a specific context — your work, your family, your community, etc. — and describe what it currently looks and feels like. How does it function? What role does technology play? How do race, gender, and power shape your current context? How do your current actions, practices, and beliefs — and those of the people around you — contribute to this reality (and your perception of it)?
Step 2: Stick with the same context, but put the present description aside, and instead imagine an impossibly good alternative future. Ask yourself questions like:
  • What would be occurring in that future?
  • What would you be doing differently?
  • What would it feel like?
  • What would those around you be doing?
  • What would the results or outcomes of those decisions and actions look like?
  • How might AI systems support the future outcomes you articulated, if at all?
If this were an in-person workshop, we’d start working backward from the future outcomes to the present context. As Dave Snowden and others have shown, mapping backward breaks linear cause-and-effect thinking. And we’d keep mapping backward until we reached actions you can take and decisions you can make that align with your sociotechnical utopia.
Every action you take towards your imagined future generates demand for that future — it is a vote to have more of it in the world. That’s the ultimate takeaway from the Dartmouth workshop — their vision for what AI could be motivated years of research, shaped the ‘problems’ companies are now trying to solve, and framed AI in the public consciousness. The past is alive in the present, and the only way to chart an alternative path is to imagine it, and start walking.
As always, thank you to Georgia Iacovou, my editor.