Reflections on a Month in Silicon Valley

It’s hard to believe I’ve been in the San Francisco Bay Area for a month. The time has flown by.
I won’t forget this visit anytime soon. My conversations with clients and friends have been a turning point for me about where we are going with artificial intelligence, trust and safety, and news. I thought for this newsletter, I’d share more about those reflections.
The overarching theme I’m giving this visit is to “Think Different.” The old Apple ads from the 80s and 90s popped into my head this morning as I tried to describe the last few weeks. Silicon Valley is in a very different place than DC, Brussels, and the news industry. I realized that I need to look at these problems and opportunities through a new lens.
Then, the 1984 Apple ad came to mind. The ad's visuals are striking, with everyone sitting transfixed in an audience until the spell is broken. It was meant to have the Macintosh computer be seen “as a tool for combating conformity and asserting originality.
notion image
Apple’s 1984 ad
I’ve gotten exhausted at the same conversations about mis/disinformation, foreign interference, transparency, etc. It feels like we’re just going through the motions in some ways. We need a wake-up call and realize how quickly things are changing. I’m calling this future the “yes and” problems. This means the old problems still exist, but we have new ones too.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.
This brings me to my first trend of how the trust and safety environment/work is changing with artificial intelligence. We must greatly expand our thinking on where trust and safety efforts are needed. Right now, most of the focus outside of Silicon Valley is on content moderation, with some discussion on the design of platforms and the need for transparency.
Here in Silicon Valley, it’s much more complex. I’ve made this graphic to try to help explain the various areas where safety measures need to be implemented, but also where humans are involved versus machines. This graphic is far from perfect. I’ve gotten some feedback from folks on how to improve it, and I would welcome additional thoughts. That said, it’s a start.
notion image
Let’s go through these quickly.
  1. Human has an idea - What happens after a human has an idea of a piece of content they want to create or a question they want an answer to?
  1. AI is used - consciously or unconsciously - to achieve goals - Here are our first two new surfaces for AI and safety. The first is to understand the types of AI tools available. The Center for American Progress has a great piece on this. You have developers such as OpenAI and Meta’s Llama2 who have their own tools but also provide an API for deployers to create their own tools. So you need API moderation to hold deployers accountable, then prompt moderation to determine what prompts or questions will return results. Many AI platforms have said they won’t allow people to use their tools for political campaigning, and nothing will be generated when you put in that prompt. That’s prompt moderation. (BTW, I’m making up that term.)
  1. Distribution of content - Tools such as OpenAI, Midjourney, Anthropic, etc are just that - tools. They are not distribution platforms. For that, you still have your Facebook, Instagram, Google Search, YouTube, TikTok, etc. You also have television, radio, print, and podcasts. Here, AI and other algorithms are used to help determine what content you are shown. We'll get to this part of model training in a bit.
  1. Actor, Behavior, and Content Moderation - Most conversations today are about what content is or is not allowed and what the penalties should be for breaking the rules. However, talk to anyone in trust and safety, and they’ll tell you rules around actors (such as pretending to be someone you aren’t) or behavior (spamming the same link around) are just as important.
  1. Delivery of content - Until AI, the newsfeed or search results page was the main mechanism for content delivery. With AI, we can now have help summarizing that content for us. It can be done via text, audio, or video. Ricky Sutton talks about using Microsoft Copilot to tell him the top five tech stories in the New York Times technology section. How companies choose to do these summaries is yet another type - what I’m calling aggregation moderation. (I’ve also seen this listed as instruction tuning - “a specialized form of fine-tuning in which a model is trained using pairs of input-output instructions, enabling it to learn specific tasks guided by these instructions.”) It’s machines playing the role of curator, editor, and analyst.
  1. Humans consume content - This brings us full circle to every one of us consuming content how we want to.
Now, we have two overarching pieces to this work as well. Those are:
  1. Model training - At every step of the way, artificial intelligence, algorithms, and other models must be trained from someone creating a piece of content to someone consuming it. This means humans at companies have made decisions about what signals (such as clicks, watches, likes, etc) or existing content (articles, books, videos, etc.) should be considered when the machine creates the output. The model design needs input and tuning from safety and other subject matter experts at the front end as well.
  1. Red teaming - Red teaming is trying to get these models to do “bad” things. People will try to get them to break the rules to close those loopholes. OpenAI has been very forthcoming about its use of red teaming to test out products before they go out into the real world. This is also done by humans inside and outside the companies.
As you can see, for companies and trust and safety professionals - we need to think about these problems and how we approach them beyond just the moderation of content. We must consider how models are trained, what questions people can ask, how content is summarized/aggregated, and how we stress-test it.
The second trend I’m noticing is that it’s not enough for companies to say what they will do; people are increasingly asking them how they will do this. Whether it’s people asking Threads about how they define political content or how platforms like OpenAI will stop their tools from being used by campaigns - people want more detail. I talked last week about how people want platforms to show their work. Platforms won’t want to be more transparent about how the sausage is made because it’s messy, and no one will agree. However, people will rightfully demand it, and I think companies will be forced to be more transparent - voluntarily or through regulation - to build up trust.
The third trend is that news isn’t dying; it’s changing. Jeff Jarvis has written about this forever. Kara Swisher talks about it in her new book. I talked about it in my paradoxes of technology piece.
’s latest piece helped crystalize my thinking on this. I almost want to reprint the whole section, but I won’t. Read it. If you don’t want to, here are the main quotes:
  1. “Publishing isn’t failing because people don’t want the news. The data shows people want news more than ever, but the painful truth is that consumers are getting it from search and social.”
  1. “AI redraws the landscape, but publishers are distracted by its ability to create new content. It’s alluring and cheap, but it’s not where the game changing money will be. All my searching is now done by AI and through voice. I ask Copilot in plain language what I want and do it without typing into my phone or browser.”
  1. “Publishers should also put their minds to how content will be delivered in an AI future. My search responses are spoken back to me now.”
  1. “A new AI-enabled web can then emerge based on [Microsoft’s] Copilot and other AIs that integrate into every device, which is paid for and premium. Governments and antitrust regulators would get what they want. A digital future where content creators, publishers, consumers, and advertisers, have more choice.”
  1. “A premium web needs premium publishers, meaning Microsoft is motivated to support the news industry.”
Let’s stop thinking we will put this genie back in the bottle and instead recognize the future happening in front of our faces. Journalism schools need to rethink their curriculum; newsrooms need to re-train their reporters, reporters need to retrain themselves. A new model is being formulated - now is the time to influence it. Personally, I’m starting to take some prompt generation courses from Coursera to get smarter.
As you know, I’ve been thinking a lot about the past twenty years. After going through a few of these twists and turns in the industry, I’m starting to recognize the small windows of time when people can truly impact where a new technology or aspect of the industry is going. In the early 2000s, it was social media; in the early 2010s, it was mobile; post-2016, it was integrity/trust and safety solutions; today, it’s artificial intelligence. The old problems are still there but multiplying across different platforms and surfaces. Those who wish to hold tech companies accountable must adapt quickly, or the opportunity to be a part of the solution will be harder.
Please support the curation and analysis I’m doing with this newsletter. As a paid subscriber, you make it possible for me to bring you in-depth analyses of the most pressing issues in tech and politics.