Consentful Systems

A way to think of systems and design with consent at the heart.
This is the project website for "Yes: Affirmative Consent as a Theoretical Framework for Understanding and Imagining Social Platforms," a paper accepted to CHI 2021, a top-tier conference in Human-Computer Interaction. The paper won a Best Paper Honorable Mention 🏅, an award given to the top 5% of submitted papers. The link to the paper is here. You can also check out a 5-minute video of the work here. If you have any questions, please reach out to Jane Im. :)
This work builds on, and could not have existed without, the Consentful Tech Project, founded by the incredible Una Lee. ✨ Una, who is also a co-author of this work, introduced the term "consentful technology," which has inspired many people.
How can we design a social internet where people's consent boundaries are protected? Non-consensual interactions, such as online harassment and revenge porn, are pervasive in online spaces. In this work, we use a theoretical framework of affirmative consent ("Yes means yes!") to understand these problematic phenomena and to generate new design ideas to tackle them. This website highlights 1) the principles of affirmative consent and 2) the design insights those principles generate for building consentful platforms.

1. Principles of affirmative consent

Affirmative consent is the idea that someone must ask for, and earn, enthusiastic approval before interacting with someone else. For decades, feminist activists and scholars have used affirmative consent to theorize about and prevent sexual assault. Here, we introduce five principles of affirmative consent, derived from prior work in feminist literature, legal scholarship, and HCI. If you are curious about the prior research that informed these principles, please check out our paper! Our principles also build on Una Lee's wonderful zine on digital consent.
1) Affirmative consent is voluntary.
Consent is an agreement that is 1) freely given and 2) enthusiastic.
2) Affirmative consent is informed.
People can only consent to an interaction after being given correct information about it—in an accessible way.
3) Affirmative consent is revertible.
Consent is an ongoing negotiation and can be revoked at any time.
4) Affirmative consent is specific.
People should be able to consent to a particular action (or a particular person), rather than to a series of actions or people.
5) Affirmative consent is unburdensome.
The costs associated with giving consent should not be so high that a person gives in and says "yes" when they would rather say "no."

2. Affirmative consent for generating new system ideas

In this section, we first describe the sociotechnical building blocks generated by the principles above, and then introduce the concrete interaction features grounded in those building blocks.

1) Sociotechnical Building Blocks

1. Building blocks for voluntary.
Systems periodically ask the end-user (rather than assuming) whether they want an interaction to take place. For instance, a system asks a person whether they want to enter a group chat room they were invited to, instead of automatically adding them.
System allows granular levels of visibility of personal information for different friends. While some social platforms provide this, many are limited to differentiating “friends” and “non-friends.” For example, users could have agency over their visibility based on strength of ties.
Systems permit limits on how far a post can be shared. For instance, a person can allow people to only directly share their post (hops = 1), helping the author control the degree of visibility and interaction.
Systems allow users to accept a friend request but isolate it, sending the request sender to a separate queue. Users can apply customized social rules to the accounts in the queue. This is in contrast to the current platforms’ rigid options regarding relationships (e.g., accept vs. decline), supporting deeper social rules.
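To make the sharing-hops building block concrete, here is a minimal sketch in Python. All names (`Post`, `try_share`, the field names) are hypothetical illustrations, not an implementation from the paper; the idea is simply that each shared copy carries its distance from the original, and sharing is refused once the author's limit is reached.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author: str
    max_hops: int      # author-chosen limit; 1 = only direct shares allowed
    hops: int = 0      # how many shares away from the original this copy is

def try_share(post: Post) -> Optional[Post]:
    """Return a shared copy one hop further out, or None if sharing
    would exceed the author's hop limit."""
    if post.hops >= post.max_hops:
        return None
    return Post(post.author, post.max_hops, post.hops + 1)

original = Post(author="ana", max_hops=1)
direct = try_share(original)    # allowed: the first hop
indirect = try_share(direct)    # blocked: would be a second hop
```

A real platform would enforce this server-side so that clients cannot forge the hop counter.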
2. Building blocks for informed.
Using algorithms, systems synthesize account-level behavioral data. Of course, every user needs to be aware this could be happening (otherwise it violates the informed principle). For example, a system could show whether an account a user is about to interact with has consistently used toxic language in the past.
Systems provide feedback as soon as the real audience diverges from the likely imagined audience. For example, a system might notify a user if their post is shared within a new network neighborhood using community detection algorithms.
3. Building blocks for revertible.
Systems efficiently allow users to completely delete all types of information—tags, posts, comments, friendships, etc. For example, when someone unfriends another person, the platform might ask, "Would you like to remove past tags of this person, as well as related posts?"
Systems completely delete past shares/copies if the original data (e.g., a post) is deleted. For example, on a centralized system like Twitter, retweets disappear if the original post is deleted by the poster; on a decentralized system like Mastodon, a protocol could enforce revertibility, with punishments for defections.
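The cascading-deletion building block can be sketched as follows. This is a hypothetical in-memory model (a real platform would do this in its database, and a decentralized one at the protocol level): deleting an original post also removes every recorded share of it.

```python
# Hypothetical in-memory store.
posts = {"p1"}                       # live original posts
shares_of = {"p1": ["s1", "s2"]}     # original post id -> ids of its shares
live_shares = {"s1", "s2"}           # shares currently visible anywhere

def delete_post(post_id: str) -> None:
    """Delete a post and cascade the deletion to every share of it."""
    posts.discard(post_id)
    for copy_id in shares_of.pop(post_id, []):
        live_shares.discard(copy_id)

delete_post("p1")   # the original and both retweet-like copies disappear
```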
4. Building blocks for specific.
Using computation on interaction data, systems can scaffold classifying relationships into groups, or “social circles.” This might be accomplished with community detection algorithms, for example.
Using computation over textual and image data, systems can scaffold classifying content into high-level categories.
Once these circles and topics are created with computational scaffolding, systems can let users articulate more specific group-level policies for messaging, content feeds, etc. For example, a user might choose to only allow comments on a post from people who have commented (and not been blocked) before.
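A minimal sketch of group-level policies keyed on social circles, with hypothetical names throughout: each circle grants a set of actions, and accounts in no circle get no permissions by default.

```python
# Hypothetical circle-to-permission mapping chosen by the user.
policies = {
    "close_friends": {"comment", "share", "tag"},
    "acquaintances": {"comment"},
}
circle_of = {"ben": "close_friends", "dee": "acquaintances"}

def allowed(user: str, action: str) -> bool:
    """Permit an action only if the user's circle grants it; accounts
    in no circle are denied by default."""
    circle = circle_of.get(user)
    return circle is not None and action in policies.get(circle, set())

allowed("ben", "tag")      # True: close friends may tag
allowed("dee", "share")    # False: acquaintances may only comment
allowed("eve", "comment")  # False: not in any circle
```

In practice the circles themselves could be seeded by the community-detection scaffolding described above, then edited by the user.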
5. Building blocks for unburdensome.
Systems can place customized time limits on interactions. While ephemeral content is one example of this, we argue timeboxing can be applied to a wide range of interactions beyond posting (e.g., disallowing sharing after one week).
Using computation, systems learn about consent boundaries. Users can annotate posts/comments to articulate their preferences (e.g., annotate posts on content feed as triggering).
Systems limit volumes of comments, mentions, etc. based on end-users’ preferences. For example, a user may decide to only allow up to five comments to a post that is on a sensitive subject.
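The per-post rate limit can be sketched in a few lines; the names here are hypothetical. Once the author's chosen limit is reached, further comments are rejected (a real system might queue them for optional review instead).

```python
def add_comment(comments: list, limit: int, text: str) -> bool:
    """Append a comment only while the author's per-post limit allows it."""
    if len(comments) >= limit:
        return False   # rejected: the author set a cap on this post
    comments.append(text)
    return True

thread = []
results = [add_comment(thread, 5, f"comment {i}") for i in range(6)]
# the first five succeed; the sixth is rejected
```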

2) Sociotechnical Interaction Features

Using the building blocks above, we next present proposals for new designs based on affirmative consent. We take the five principles of affirmative consent and use them as design axes to generate sociotechnical interaction features. In some sense, they are "primitives"—core interaction ideas that could be repurposed across a variety of social platforms in flexible ways. Each entry below presents an interaction primitive. We also sketch three of these entries in more detail in the next subsection.
Each row below takes a platform feature and lists one interaction primitive per principle (voluntary, informed, revertible, specific, unburdensome), with the underlying building block(s) noted in brackets.

DM + group chat
- Voluntary: Users are asked if they want to join when invited to a group chat. [Periodic checks]
- Informed: The platform visualizes topics discussed in a group chat before a person decides to enter. [Topic inference]
- Revertible: Users can revert a message's read status to unread.
- Specific: Different online status by group: "would love to chat" for friends; "online, but busy" for others. [Granular visibility; Group-level policies]
- Unburdensome: Classify DMs from strangers using the sender's content and behavior. [Account summarization]

Profile
- Voluntary: Users can control profile visibility by audience: only show selfies to friends and friends' friends. [Granular visibility]
- Informed: The platform shows how many of the people that viewed the profile are strangers. [Audience intel]
- Revertible: Users can query and delete, en masse, tags and comments on their profile related to an account (e.g., an ex-partner). [Efficient expressivity]
- Specific: Some profile fields are only shown to accounts that have been friends for more than t time. [Group-level policies]
- Unburdensome: The platform periodically reminds the user how their profile looks to other people: "This is how your profile looks to Jake." [Periodic checks]

Friend + follow
- Voluntary: Users can accept a friend request but isolate it, sending it to a separate queue (e.g., if acceptance is coerced). [Request isolation]
- Informed: The platform alerts the user if a friend request comes from an account with a history of posting toxic content. [Account summarization]
- Revertible: Requests from people previously unfriended are sent to a queue, keeping the unfriending revertible. [Request isolation]
- Specific: Assign people to "circles" at follow time, with rules such as: no tags from this circle. [Social circles; Group-level policies]
- Unburdensome: Periodic reviews of followers/friends with new risk scores (e.g., toxicity level). [Periodic checks; Account summarization]

Post + comment
- Voluntary: Most platforms already support voluntary posting and commenting.
- Informed: Users receive reports of how many post viewers are strangers. [Audience intel]
- Revertible: Users can query and delete posts/comments at large scale. [Efficient expressivity]
- Specific: Users can apply audience rules to hashtags: e.g., a creator can restrict who can use one. [Group-level policies]
- Unburdensome: Users can rate limit comments per post. [Individual rate limit]

Feed
- Voluntary: The feed asks what users want to see today (or this week). [Periodic checks]
- Informed: The content feed makes its algorithms visible and salient.
- Revertible: Users can bookmark feed settings to easily revert to prior settings.
- Specific: Users can set different types of content feeds per social circle (similar to Mastodon's local timelines). [Group-level policies]
- Unburdensome: Users can annotate posts in the feed, from which the system learns what posts the person wants to see (or not see). [Annotation for system-learning]

Tag
- Voluntary: By default, the platform always asks the user if they consent to being tagged when another user initiates tagging. [Periodic checks]
- Informed: The platform provides a high-level summary of the audience, outside friends, that sees a tagged post. [Audience intel]
- Revertible: If a user unfriends someone, the system asks if they also want to delete tags of that person. [Efficient expressivity]
- Specific: Users set tagging rules by content type: disallow tags in photos of people. [Topic inference]
- Unburdensome: Users can timebox tag frequency: Jake can only tag once a month. [Timeboxing]

Share + retweet
- Voluntary: Users can limit how many hops shares are allowed to travel. [Sharing hops]
- Informed: Users are notified if a post is shared to a new network "neighborhood." [Audience intel]
- Revertible: When a user deactivates a post's sharing, or deletes the post, existing shares disappear (Twitter partially implements this). [Cascading & normative revert]
- Specific: Leveraging data on past interactions, users can decide who can share each post: only people whom I have messaged five times can share. [Social circles]
- Unburdensome: The platform alerts the user if their post starts being shared rapidly by strangers. [Audience intel]

3) Examples

Here we provide tangible mockups that illustrate three of the examples suggested above. The first and second illustrations were designed by Katherine Mustelier, and the third by Jane Im.
1. Voluntary Content Feeds: Feeds that ask what you want to see today/this week/this month
[Image: When Lucy opens Socious, they are greeted by a prompt within the content feed asking what they want to see this week.]
[Image: Once Lucy selects the topics they want (or do not want) to see, the changes are immediately reflected in the feed.]
Current content feeds do not ask what a user wants to see; they typically infer it from platform data. As a result, many users encounter unwanted posts in their feeds, sometimes even after investing great effort to avoid them. A content feed built around the voluntary principle of affirmative consent would instead periodically ask what the user wants to see.
Imagine that Lucy logs onto a new platform called Socious, and the platform greets them by asking "What do you want to see this week?" Lucy sees keywords that Socious recommends based on topic modeling, like "Flower Tending", "Animation", and "Dance". Lucy decides they would like to see more of flowers, dance, and animation. Lucy also notices they can specify topics they do not want to see, including tags for well-known triggering topics. Lucy selects "Self Harm", "Alt Right," and "Race" for exclusion from their feed. As Lucy scrolls down the feed, they see the new preferences immediately reflected. After a week, Socious asks Lucy again for topic preferences, though Lucy can change the frequency of these check-ins at any time.
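The topic include/exclude step in this scenario can be sketched as a simple filter-and-rank pass. This is a hypothetical illustration (function and variable names invented here), assuming each post already carries a set of inferred topics from a topic-modeling pass:

```python
def build_feed(posts, include, exclude):
    """posts: list of (post_id, topic_set). Drop any post touching an
    excluded topic, then rank posts matching more chosen topics first."""
    kept = [(pid, topics) for pid, topics in posts if not topics & exclude]
    kept.sort(key=lambda item: -len(item[1] & include))
    return [pid for pid, _ in kept]

posts = [("a", {"dance"}), ("b", {"alt right"}), ("c", {"news"})]
feed = build_feed(posts, include={"dance"}, exclude={"alt right"})
# feed == ["a", "c"]: the excluded post is gone, the chosen topic ranks first
```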
2. Revertible Profile Pages: Revert posts, comments, and tags efficiently
[Image: Jon's profile page on WebCon.]
[Image: Jon queries for posts containing tagged photos of Emily, or ones that Emily left comments on or liked. Jon decides to delete all of them.]
[Image: Jon goes back to his profile page and sees the queried posts removed from his profile.]
Our social networks constantly change offline: we sometimes distance ourselves from people who were once close friends, go through break-ups, or lose loved ones. However, the rigidity of current platforms makes it hard to reflect these changes. For instance, Facebook's Memories feature resurfaces content you shared in the past, in some cases showing memories a person may not want to recall, such as photos of a recently deceased family member.
Imagine Jon logs into WebCon, a new social platform. Jon recently went through a break-up and wants to remove all data related to his ex-partner, Emily. He goes to the dashboard and queries for his posts that Emily liked, is tagged in, or left comments on, as well as Emily's posts that he liked, is tagged in, or left comments on. He decides to delete all of his posts that are related to Emily. He also chooses to remove his likes, comments, and tags in/on Emily's posts. Back on his profile page, Jon sees these posts removed. Jon also deletes all of Emily's comments on his remaining posts. In contrast, Jon cannot delete Emily's posts about him, as those posts are Emily's.
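The query step of this scenario amounts to one pass over the user's posts. The sketch below is a hypothetical illustration (the schema and names are invented), assuming each post records who is tagged in it, who commented, and who liked it:

```python
def related_post_ids(posts: dict, person: str) -> set:
    """posts: post_id -> {'tagged': set, 'commenters': set, 'likers': set}.
    Return ids of the user's posts that involve `person` in any way."""
    return {
        pid for pid, p in posts.items()
        if person in p["tagged"] | p["commenters"] | p["likers"]
    }

jons_posts = {
    "p1": {"tagged": {"Emily"}, "commenters": set(), "likers": set()},
    "p2": {"tagged": set(), "commenters": {"Emily"}, "likers": set()},
    "p3": {"tagged": set(), "commenters": set(), "likers": {"Sam"}},
}
to_delete = related_post_ids(jons_posts, "Emily")   # {"p1", "p2"}
for pid in to_delete:
    del jons_posts[pid]                             # only "p3" remains
```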
3. Unburdensome Messaging: Leverage network data to control chats
[Image: Sannvi sees many unwanted messages when she opens CoMedia.]
[Image: Sannvi uses network rules to control who can message her.]
[Image: Sannvi has the majority of her new messages sent to a separate queue. She also sees new messages from friends' friends, Sharon and Preeti.]
On most current platforms, when a person sets their account to public, strangers or spam accounts can DM them with unsolicited content. For instance, about half of American women ages 18 to 29 have received explicit images they never asked for. At internet scale, it becomes very difficult to exercise control over messages; some people abandon platforms altogether for this reason.
Imagine Sannvi has been receiving many unwanted messages on CoMedia. The messages often include compliments about her looks, which she finds uncomfortable. Sannvi decides she does not want to see such messages and goes to "Control Panel," applying network-centric rules such as "Only allow people that my friends have messaged to message me." Now, if a stranger messages Sannvi on CoMedia, the system first looks up whether the sender has ever interacted with Sannvi or any of her friends on the platform. If not, CoMedia sends the stranger's message to a separate queue, which Sannvi can later review if she wants.
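The routing rule in this scenario can be sketched as a lookup over the friendship graph and the message history. This is a hypothetical illustration, not CoMedia's implementation; the message log is modeled as a set of (sender, receiver) pairs, and "interacted" is treated as either direction:

```python
def route_message(sender, recipient, friends, message_log):
    """Deliver to the inbox only if the sender has previously interacted
    with the recipient or one of the recipient's friends; else queue it."""
    trusted_contacts = {recipient} | friends.get(recipient, set())
    has_history = any(
        (sender, c) in message_log or (c, sender) in message_log
        for c in trusted_contacts
    )
    return "inbox" if has_history else "queue"

friends = {"sannvi": {"sharon", "preeti"}}
log = {("mutual", "sharon")}   # this account once messaged Sharon
route_message("mutual", "sannvi", friends, log)     # "inbox"
route_message("stranger", "sannvi", friends, log)   # "queue"
```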

Research Paper and Talk @ CHI 2021