8:30 AM, Friday. Welcome back to Professor Farahany’s Advanced Topics in AI Law and Policy class. Hopefully, you’ve caught up on Week 1. If you haven’t, start there: classes 1, 1.2, and 1.3.
We are now in Week 2, and this is class 2.3. If you haven’t taken classes 2 and 2.2 yet, do that first. This semester, each week breaks into three parts: Mondays, Wednesdays, and Fridays. And I’m hoping you’ve finished your attention audit before joining us today.
We’ve covered a lot of ground this week.
On Monday, we talked about the Attention Audit. Whether you did it yourself or followed along with your in-person classmates, you encountered the muscle memory, the boredom-then-peace paradox, the “stimulating but blurry” Day 3.
On Wednesday, we looked at the science. And, well, it’s ... complicated. Mixed evidence. “It depends.” No solid proof of brain rewiring, but also no clean bill of health. And industry has every incentive to keep capturing your attention regardless of what researchers eventually conclude.
So today, we take another foray into the question: can and does the law help? I say another because last week we looked at one attempt to protect your mind: the growing number of neural data laws.
Today, we’re going to look at a different approach, one in France, one proposed in California, designed to address the “always on” problem.

First: A Quick Audit Check-In
Whatever your answer, you can engage with today’s material. But if you did the audit, I’m going to ask you to hold your experience in mind throughout, because the gap between what you experienced and what the law addresses is exactly the kind of “desirable difficulty” and experiential learning that helps you really “get” the content.
The Right to Disconnect
France passed the first “right to disconnect” law in 2016. It requires companies with 50 or more employees to negotiate policies around when employees can ignore work communications.
The stated goal is to “ensure the respect of rest periods and breaks, as well as personal and domestic life.”
In practice, some French companies implemented it with:
- Automatic email server shutdown between 6:15 PM and 7:00 AM (you literally cannot receive work email during those hours)
- One half-day per month designated as email-free for the whole company
- Email signatures reminding recipients: “I respect your right to disconnect and do not expect a response outside working hours”
California considered a similar law in 2024 (AB 2751). The key language stated:
“’Right to disconnect’ means that, except as provided in subdivision (d), an employee has the right to ignore communications from the employer during nonworking hours.”
The California version would have let employees file complaints for a “pattern of violation,” defined as three or more documented instances of the employer requiring a response during off-hours, with civil penalties of at least $100.
What Problem Is This Actually Solving?
Before we evaluate these laws, let’s make sure we understand the problem they’re targeting.
Think about it this way. You’re at home at 9pm. You’ve had dinner and you’re watching a show, reading, or just existing. Your phone buzzes: an email from your boss about tomorrow’s meeting. Or a Slack message asking for a file, or a text from a client who “just has a quick question.”
Now you have a choice:
- Respond (establishing that you’re available at 9pm, making it more likely to happen again)
- Don’t respond (and spend the rest of the evening wondering if your boss is annoyed, or if you’re missing something important, or if this will affect your performance review)
Multiply this by dozens of messages across evenings and weekends. The boundary between work and not-work dissolves. You’re never fully off, and can’t truly relax because your phone might buzz with something that demands attention.
The “right to disconnect” says that you can ignore that 9pm email without professional consequences. Your employer cannot punish you for being unreachable during personal time. The law protects your boundary.
That’s a real problem. The erosion of work-life boundaries is documented and consequential.
But here’s my question for you:
Is that the problem you experienced during the Attention Audit?
Let’s Get Specific
If you did the Attention Audit: What were your top apps by screen time on Day 3?
In our live class, the top apps were: TikTok. Instagram. YouTube. Reddit. Rednote.
One student spent 5 hours on Rednote on Day 3. Another described Instagram usage that “exploded” when notifications came back on. Someone was “just refreshing apps to see if anything had changed.”
How much of that screen time came from work communications? For most of you, the answer is probably “almost none.” The “always on” feeling from the Attention Audit had little to do with a boss emailing at 10pm and much more to do with the algorithmic feed.
The right to disconnect frames the problem as: Other people are demanding your attention. Your employer, clients, colleagues who won’t stop Slacking at midnight. The problem is external pressure, where someone else expects you to be available, and you can’t say no without consequences.
The solution: Give you legal protection to refuse those external demands.
But the Attention Audit revealed a different problem.
Think about your Day 3 (or what your classmates described). Was anyone demanding your attention? Did you receive messages saying “you must check Instagram now”? Did TikTok send a notification threatening consequences if you didn’t scroll?
No. Your attention was captured, not demanded: captured by design, by algorithms, by variable reward schedules.
One student put it perfectly: “I found myself clicking open the apps automatically.”
Nobody asked her to. Nobody required it. She just... did it. Before she consciously decided to.
Here’s the gap:
RIGHT TO DISCONNECT
- Problem: External demands on your attention (other people requiring you to be available)
- Solution: Legal right to refuse those demands

ATTENTION AUDIT EXPERIENCE
- Problem: Internal capture of your attention (design that makes you want to engage even when you'd prefer not to)
- Solution: ???
The right to disconnect assumes you want to disconnect and someone else is preventing you.
But what if the problem is that part of you doesn’t want to disconnect because the design has trained you to keep engaging?
What Are “Variable Reward Schedules”?
I keep mentioning this term, so let’s make sure we understand it, because it’s central to understanding why attention capture works.
A variable reward schedule is a pattern where you get rewards at unpredictable intervals.
Think about a slot machine. You pull the lever. Sometimes nothing. Sometimes a small payout. Occasionally something bigger. You never know when the next reward is coming. (I did this last week in Vegas after my talk at NANS. I lost $3).
This turns out to be the most effective way to create compulsive behavior. More effective than rewarding every time (which is predictable and eventually boring). More effective than rewarding on a fixed schedule (every 10th pull). Unpredictable rewards create the strongest habits—and the strongest compulsions.
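Since the contrast between fixed and variable schedules is doing real work here, a toy simulation may help. This is a sketch only; the function names and the 10% payout rate are illustrative choices, not anything drawn from the behavioral literature:

```python
import random

random.seed(42)  # make the "slot machine" reproducible

def fixed_ratio(n_pulls, every=10):
    """Reward on every `every`-th pull: perfectly predictable."""
    return [1 if (i + 1) % every == 0 else 0 for i in range(n_pulls)]

def variable_ratio(n_pulls, p=0.1):
    """Reward each pull independently with probability p: the same
    average payout rate as fixed_ratio(every=10), but unpredictable."""
    return [1 if random.random() < p else 0 for _ in range(n_pulls)]

def gaps(schedule):
    """Number of pulls between consecutive rewards."""
    hits = [i for i, r in enumerate(schedule) if r]
    return [b - a for a, b in zip(hits, hits[1:])]

fixed = fixed_ratio(100)
variable = variable_ratio(100)

# Both schedules pay out at roughly the same overall rate...
print("fixed payouts:", sum(fixed), "| variable payouts:", sum(variable))

# ...but the fixed gaps are all exactly 10, while the variable gaps
# range widely. The unpredictable gap is what drives "one more pull."
print("fixed gaps:", gaps(fixed))
print("variable gaps:", gaps(variable))
```

Run it a few times with different seeds: the average payout barely changes, but on the variable schedule the next reward could always be one pull away, which is precisely why it’s so hard to stop pulling.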
Now think about your Instagram feed. You scroll. Most posts are meh. But occasionally there’s something that hits like a friend’s big announcement, a video that makes you laugh, drama unfolding in real time, something beautiful or shocking or outrageous. You never know when the next “good” one is coming.
Pull the lever. Scroll the feed.
Or think about notifications. Most are meaningless, such as a like, a promotional email, an app begging you to return. But occasionally there’s something that matters like a text from someone you care about, news you’ve been waiting for, something you actually need to see. You don’t know which it’ll be until you check.
Your phone is a slot machine in your pocket. And the algorithm is optimizing to make you pull the lever more often.
If you did the Attention Audit: Remember the student who was “just refreshing apps to see if anything had changed”? She’d checked 45 seconds ago. Nothing had changed. But she checked again. That’s the variable reward schedule in action—the possibility of something new is enough to drive the behavior.
This is what I mean by “internal capture.” The design creates the compulsion. You’re not being forced to check—nobody is making you. You want to check. But you want to because the variable reward schedule has trained you to want it.
“So Why Don’t We Have Laws That Address This?”
Excellent question. This brings us to Packingham v. North Carolina.
North Carolina passed a law making it a felony for registered sex offenders to access social networking sites where minors could create profiles. A man named Lester Packingham, a convicted sex offender, posted on Facebook celebrating a dismissed traffic ticket: “Man God is Good!”
He was charged with a felony. For a Facebook post about a traffic ticket.
The Supreme Court struck down the law 8-0. Every justice, liberal and conservative, agreed the law was unconstitutional.
Justice Kennedy’s majority opinion included this language:
“While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace—the ‘vast democratic forums of the Internet’ in general, and social media in particular.”

“To foreclose access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”
Translation: Social media is where public discourse happens now. It’s the modern public square, the town hall, the place where citizens engage with each other and their democracy. Banning someone from social media is like banning them from public life. That triggers First Amendment protection.
The Constitutional Problem
Now think about what Packingham means for regulating attention capture.
If social media is a constitutionally protected “public square,” then restricting access raises serious First Amendment concerns.
And here’s where it gets tricky: What counts as “restricting access”?
Consider some possible regulations:
- Ban infinite scroll: Is that restricting access to the public square?
- Require chronological feeds instead of algorithmic ones: Is that dictating how the platform “speaks”?
- Limit how the algorithm can target content: Is that overriding editorial judgment?
Platforms make this argument explicitly: How we present content is our speech. Our algorithm is our editorial voice, like how a newspaper decides what goes on the front page. The First Amendment protects our right to make those editorial decisions.
This argument is being tested right now. The NetChoice cases working through the federal courts ask exactly this: does the First Amendment protect algorithmic curation?
If yes, if algorithmic curation is constitutionally protected speech, then many regulations targeting attention capture might be unconstitutional.
This is why France and California can regulate employer communications without First Amendment problems. Employment isn’t speech; it’s a labor relationship. Telling employers “you can’t demand responses at midnight” doesn’t implicate expression at all.
But telling TikTok “you must offer chronological feeds” or “you can’t use this engagement-maximizing algorithm”? That’s constitutionally much harder.
Justice Alito’s Warning
Justice Alito agreed that the North Carolina law should be struck down, but he warned about the majority’s sweeping rhetoric:
“The majority seems to equate the entirety of the internet with public streets and parks... There are important differences between cyberspace and the physical world.”
He’s right. Public parks don’t track your movements and sell that data to advertisers. Streets don’t have algorithms optimizing to keep you walking in circles.
Social media is at the same time:
- A public forum (where people exchange ideas)
- A commercial enterprise (designed to maximize profit)
- A surveillance system (collecting data on everything you do)
- A behavior modification machine (using that data to shape what you see and do)
The “public square” framing captures the first function and ignores the others. That makes it hard to address attention capture—because attention capture happens through the commercial/surveillance/behavior-modification functions, not the public-forum function.
Tallying Up
So here’s where we are:
- You experienced something during the Attention Audit (or heard about it)
- The science can’t definitively prove cognitive harm
- The business model incentivizes attention capture regardless
- The laws that exist (right to disconnect) address the wrong problem—external demands, not internal capture
- The Constitution may protect the design features that capture attention
Is there any path forward?
A Different Frame: Autonomy Instead of Harm
Here’s where I want to try something.
The harm frame asks: “Does attention capture damage you? Does it impair your cognition? Does it cause measurable psychological harm?”
The science says: Maybe. It depends. We can’t prove it definitively.
But listen to how students described their experience during the Attention Audit:
“I felt under-stimulated initially, which eventually settled into feeling more in control.”

“I didn’t feel in control at all.”

“Opening these apps is more of a habit than an actual necessity.”

“I found myself clicking open the apps automatically.”
Control. Habit. Automatic.
These aren’t harm words. They’re autonomy words.
So … can we target interference with autonomy instead?
What Is Autonomy, Again?
Remember our framework from Week 1:
Authenticity: Are your preferences genuinely yours? Do you want what you want because you decided to want it—or because something shaped you to want it?
Agency: Do you have the capacity to act on your preferences? Can you do what you decide to do—or is something interfering?
Autonomy requires both. You need authentic preferences (knowing what you actually want) and agency (being able to act on what you want).
Now think about attention capture through this lens—not “is this harming me?” but “is this undermining my self-direction?”
The Autonomy Threat
Even if attention capture doesn’t damage your brain and even if the science never proves cognitive harm, attention capture might threaten your autonomy. Here’s how:
1. It shapes what you attend to.
Your attention is the gateway to everything else. What you attend to determines what you think about, what you know, what you care about, what you want.
If an algorithm is deciding what you attend to—if it’s selecting your inputs based on what maximizes engagement—it’s shaping your outputs. Your preferences, your beliefs, your desires are downstream of your attention.
2. It creates preferences you wouldn’t endorse on reflection.
Remember the student who said: “It’s not TikTok that my brain needs when it’s under-stimulated—it just needs entertainment, and a book or a puzzle is more than enough.”
She’s describing two levels of wanting:
- First-order preference: “I want to scroll TikTok right now” (what she felt in the moment)
- Second-order preference: “I don’t want to be someone who spends hours on TikTok” (what she believes on reflection)
She wanted TikTok in the moment. But she recognized that this want wasn’t really hers. It was manufactured by the variable reward schedule. Given actual alternatives, she’d have been equally satisfied—maybe more.
When your first-order preferences (what you want) conflict with your second-order preferences (what you want to want), that’s an authenticity problem.
If you did the Attention Audit: Did you experience this gap? Did you want to scroll on Day 3 while simultaneously not wanting to want that?
3. It consumes time you’d otherwise direct toward your own goals.
The student who read four chapters and finished a puzzle on Day 2—she didn’t gain time. She reclaimed it. The time existed all along. It was being captured.
When people talk about “losing” an hour on Instagram, they’re describing time they didn’t choose to spend. They opened the app for one reason and emerged an hour later unable to account for what happened.
That’s not just wasted time. That’s an agency problem—your capacity to direct your own life is being compromised.
4. It increases your susceptibility to further capture.
One student tried to “catch up” on content she’d missed during Day 2.
Catch up on what? What actually happened on Instagram while she was away that required her attention? Nothing, really. But the algorithm had created the feeling that she was missing something—that she was behind, that she needed to get current.
This feeling isn’t accidental. It’s designed. And it creates a cycle: attention capture → habituation → increased susceptibility to attention capture.
The Feedback Loop
Remember Week 1? Inference threatens authenticity (are your preferences really yours?). Offloading threatens agency (can you act independently?). Each reinforces the other.
Now add attention capture:
Attention Capture
→ Exposure to Algorithmic Content → Preference Shaping (authenticity threat)
→ Time Displacement (less time for self-directed activity) → Reduced Agency (capacity threat)

Both branches converge in Habituation (increased susceptibility), which feeds back into Attention Capture, and the loop repeats.
One student noticed something important: “Watching a high-quality show definitely makes me feel better about how I spend my time than scrolling through short-form content.”
She’s distinguishing between preferences she endorses and preferences she doesn’t. Both activities consumed time. Only one felt like hers.
What This Might Mean for Law
If the problem is autonomy rather than harm, we might need different legal tools.
Harm-based regulation asks: Is this hurting people? Prove it, and we’ll act.
Autonomy-based regulation might ask: Is this undermining people’s capacity for self-direction—even if they “chose” it?
We already regulate things on autonomy grounds:
- Fraud: Even when the victim willingly hands over money, we say the choice was compromised
- Undue influence in contracts: Some pressure makes “consent” not really consent
- High-pressure sales tactics: Cooling-off periods exist because the “choice” to buy at 2am from a door-to-door salesman isn’t really free
- Deceptive advertising: Even if you chose to buy, manipulation makes the choice less than autonomous
The common thread: someone made a “choice,” but something compromised the choice’s authenticity or their agency in making it.
Could attention capture fit this framework? Could we regulate not because it harms you, but because it undermines your capacity to direct your own attention—and therefore your own life?
That’s not an obvious answer. But it’s a different question than “prove brain damage.”
The Question for the Semester
The science being uncertain doesn’t mean there’s no problem. It means we have to think carefully about what kind of problem it is.
Maybe the attention economy isn’t primarily a harm problem. Maybe it’s an autonomy problem. And maybe that opens legal approaches we haven’t fully explored yet.
If you can’t trust your own preferences, and the science is uncertain, what should law do?
We’ll carry this question through the semester.
Next Week: Dark Patterns
The attention economy captures your attention. But that’s step one.
Once platforms have your attention, they want you to do things. Sign up. Subscribe. Share data. Buy something. Keep scrolling. Definitely don’t cancel.
And they’ve gotten very good at designing interfaces that push you toward those actions even when you’d prefer not to take them. Sometimes especially when you’d prefer not to.
These manipulative design choices are called dark patterns. They’re the subject of increasing regulatory attention. And next week, you’re going to experience them firsthand.
Your assignment:
Pick a subscription service you have (or sign up for a free trial of something—anything that promises “cancel anytime”).
Step 1: Try to cancel.
- Document every screen you encounter
- Note the language used at each step (“Are you sure?” “You’ll lose these benefits!” “We’d hate to see you go!”)
- Count the clicks required
- Time how long it takes
Step 2: Try to resubscribe.
- Document the same things
- Count clicks
- Time it
Step 3: Compare.
- How many clicks to cancel vs. sign up?
- What was different about the language and design?
- How did each process make you feel?
Class dismissed. See you Monday.
The entire class lecture is above, but if you’d like to support my work or go deeper in your learning, please upgrade to a paid subscription.
Paid subscribers also get access to class reading packs, discussion questions, bonus content, full archives, virtual chat-based office hours, additional readings, and one live Zoom-based class session per semester.