Safety by Design as a Gameplay Practice

Designing gameplay that is engaging, inclusive, fun, and immersive is challenging for any development team. Designing gameplay where players inherently feel safe and welcome to express themselves without fear of abuse, harassment, or worse is even harder. However, with the rising tide of challenges in the gaming industry, safety is more important now than ever. Safety as a gameplay practice means that you, as a developer, designer, or safety professional, are empowered to build safety considerations into the inherent design of any feature. This “safety by design” mindset facilitates gameplay that is fun, engaging, and welcoming for all players.

Safety by Discovery versus Safety by Design

The worst thing that can happen in a space designed for fun is the exploitation of gameplay to harm, abuse, or take advantage of other players, especially younger players. Events like this hurt the player experience, degrading the trust online communities have in a brand. This can be detrimental to the success of a game and, in the worst case, cause real harm to players. Even in games tailored for older players, studies continue to show that minors find ways to use applications designed for adult audiences.
Studies of this usage consistently surface startling statistics about exposure to harassing (1 in 2), sexual (1 in 3), pornographic (1 in 7), depressive (1 in 4), and violent (2 in 3) behavior. Efforts by players to hide their online activities from trusted adults (1 in 2) or to hide exposure to sexual interactions (1 in 6) contribute to a growing culture of silence and shame. A key goal for any team is to move from a world where you discover these statistics post-release (safety by discovery) to one where you prevent them up front (safety by design).
Designing for and around disruptive challenges in online environments is difficult, and it’s becoming more difficult as time goes on. Factors like the need for increased transparency reporting and accountability, quickly developing threats like generative artificial intelligence, the rise of massive online multiplayer networks, the ease of online anonymity, and the lack of social consequences all contribute to increased complexity in safety solutions. That is why building protections from the start is increasingly important.
So the question is, how do you build those protections? Safety by discovery and safety by design are opposite ends of a single path, and every team falls somewhere along it. The goal is to fall more on the side of safety by design, and the first step toward shifting in that direction is understanding risk.
Building safety by design as a gameplay practice begins with considering risk at every turn. The first step, always, is to consult a safety team (if there is one). Even if at first glance there does not appear to be any safety risk, collaboration with the dedicated team will not only confirm or challenge that assessment, it will also build the muscle memory to include safety concerns from day zero. Having a safety team, however, does not alleviate your responsibility to think about safety as a design practice; if anything, the team acts as a resource to best empower proactive evaluation at the design stage.
When evaluating safety risk, there are four considerations that are fundamentally important:
  1. The potential for user generated content (UGC) — UGC is essentially anything a player can create that can be exposed to another player. Think chat, images, voice, persistent text, skins, etc.
  2. Social or multiplayer features — These features do not necessarily have UGC, but they are usually tailored to improve the socialization of players. This can include features that allow players regular access to other players (like party systems) or offer profile information (like usual time online), as well as invite codes that can lure players to other environments.
  3. The possibility for exploitation — Exploitation mainly relates to the abuse of a feature to attack the system or the players on it. Think bots, fake reporting, automated systems, DDoSing, cheating, doxing, etc.
  4. Regulatory obligations — Legal requirements the game must follow. These are a good launching point for a conversation about what is minimally necessary to implement.
Considering these four threat vectors goes a long way toward starting the safety-by-design journey.
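One lightweight way to make this review concrete is to capture the four vectors in a feature-review record that travels with the design doc. Below is a minimal sketch in TypeScript; the type and field names are illustrative assumptions, not an established standard:

```typescript
// Illustrative feature-review record; names are assumptions, not a standard.
interface FeatureRiskAssessment {
  feature: string;
  hasUGC: boolean;                 // can players create content others can see?
  isSocial: boolean;               // does it connect players or expose profile info?
  exploitationRisks: string[];     // e.g., ["fake reporting", "bot abuse"]
  regulatoryObligations: string[]; // legal minimums to meet (and exceed)
}

// Example: assessing a hypothetical party-invite feature.
const partyInvites: FeatureRiskAssessment = {
  feature: "Party invites",
  hasUGC: false, // invites carry no player-authored content in this design
  isSocial: true,
  exploitationRisks: ["invite spam", "luring players to unmoderated spaces"],
  regulatoryObligations: ["age-appropriate design requirements"],
};
```

Even a record this small forces the conversation with the safety team to happen per feature, at design time.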
When it comes to UGC, some consistent areas of consideration include:
  • Detection and prevention mechanisms involve filtering (or removing) content that violates policy. This means you need a way to scan content as it is being created, block violating content up front, and consider messaging the player to communicate that a violation has been made. UGC should always be scanned by a content moderation platform, and the results of the scan can then be enforced in game through technical integrations.
  • Acting and reporting mechanisms involve automatically or manually acting on content and providing a means of reporting something a scanner might have missed. This means you need to build in the technical pathways to remove persisted content (and potentially ephemeral content, depending on its retention), take account action (suspension or termination), and report created content.
Scanning UGC, when paired with a policy that is enforced by the accompanying technical protocols, typically covers your bases for detecting and preventing harms. The technical implications of the above are far reaching (and ways to implement them are expansive), but the ultimate goal is the same. Numerous resources (including a plethora of regulatory guidelines) exist to help direct the specific implementation details. At a high level, though, these are the safety design technical considerations: you need the ability to scan it, filter or block it, escalate it, report it, and act on it (one possible flow is sketched below).
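As one illustration of how those pieces fit together, here is a minimal sketch of a chat-scanning flow. It assumes a hypothetical moderation service (scanContent) and stubbed game-side helpers; a real integration would call your moderation platform’s actual API:

```typescript
// Hypothetical verdicts a moderation scan might return.
type ScanVerdict = "allow" | "block" | "escalate";

interface ScanResult {
  verdict: ScanVerdict;
  categories: string[]; // e.g., ["harassment", "hate_speech"]
}

// Stand-in for a real content moderation platform; replace with your vendor's API.
async function scanContent(text: string): Promise<ScanResult> {
  return { verdict: "allow", categories: [] };
}

// Stubbed game-side helpers, assumed for illustration.
function deliverMessage(playerId: string, text: string): void {}
function notifyPlayer(playerId: string, message: string): void {}
function queueForModeratorReview(playerId: string, text: string, categories: string[]): void {}

async function handleChatMessage(playerId: string, text: string): Promise<void> {
  const result = await scanContent(text);
  switch (result.verdict) {
    case "allow":
      deliverMessage(playerId, text);
      break;
    case "block":
      // Block up front and tell the player a violation occurred.
      notifyPlayer(playerId, "Your message was blocked for violating community policy.");
      break;
    case "escalate":
      // Ambiguous content: hold or deliver per policy, and queue for human review.
      queueForModeratorReview(playerId, text, result.categories);
      break;
  }
}
```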
Social and multiplayer features are much more nuanced. There are fewer consistent rules for them, since the design is often incredibly specific to the game. Nonetheless, a good question to ask regarding any feature (social or otherwise) that can be flagged as a safety risk is: “Can the information offered by my feature be used to harm or harass another player?”
For example, if your feature offers a new section of the player profile that denotes the typical times a player is online, does that make it easier for a groomer to target that player regularly (i.e., can someone infer a player’s age by drawing a line between typical times online and a school schedule)? If your feature allows inviting other players to a party, can a stranger invite (what could be) a child to a more intimate setting?
Affirmative answers to these questions are not necessarily bad things. The key to safety, which will be explored later on, is not saying “no” to features. It’s saying “yes, and.” Yes, my feature offers typical online times, but it’s extrapolated to the day and not the hour. Yes, my feature allows strangers to join up with other strangers, but not for child accounts. In addition, the in-party chat is monitored by the content scanning system. Very rarely do the risks outweigh the value of the features.
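Those “yes, and” mitigations often reduce to small, testable rules. The sketch below expresses the two examples above in code, assuming a hypothetical isChildAccount field that your account system may model differently:

```typescript
// Hypothetical account shape; the field names are assumptions for illustration.
interface Account {
  id: string;
  isChildAccount: boolean;
}

// "Yes, strangers can join up with other strangers, but not for child accounts."
function canReceivePartyInvite(target: Account, inviterIsFriend: boolean): boolean {
  return inviterIsFriend || !target.isChildAccount;
}

// "Yes, the profile shows typical online times, but extrapolated to the day,
// not the hour," so routines like a school schedule can't be inferred.
function coarsenPresence(lastOnline: Date): string {
  return lastOnline.toISOString().slice(0, 10); // "YYYY-MM-DD"
}
```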
Unfortunately, safety isn’t just about protecting well-meaning players. It’s also about protecting the system and its efficacy. There will always be players who seek to exploit gameplay mechanisms to cheat and manipulate their way through a game. This category stretches beyond the responsibility of the safety team; security and stability concerns are also at play. Some questions to kick off the conversation:
  • What protections need to go into effect to prevent someone from falsifying reports or report evidence?
  • Can the feature be cheated or leveraged to allow people to cheat?
  • If traffic flooding happens, what circuits are in place to protect the technical systems and the players on those systems?
The focus of safety sometimes bleeds from facilitating prosocial behavior to protecting the people and mechanisms encouraging that prosocial behavior.
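The last question above points toward circuit breakers and rate limits. Below is a minimal sliding-window rate limiter for the report pipeline, a sketch in which the thresholds and names are illustrative assumptions:

```typescript
// Minimal sliding-window rate limiter: one example of a "circuit" that
// protects the report pipeline from flooding or falsified mass reports.
class ReportRateLimiter {
  private timestamps = new Map<string, number[]>();

  constructor(
    private maxReports: number, // e.g., 5 reports...
    private windowMs: number,   // ...per 10-minute window
  ) {}

  allow(reporterId: string): boolean {
    const now = Date.now();
    const recent = (this.timestamps.get(reporterId) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    if (recent.length >= this.maxReports) return false; // trip the circuit
    recent.push(now);
    this.timestamps.set(reporterId, recent);
    return true;
  }
}

// Usage: reports over the limit are held for abuse review, not acted on.
const limiter = new ReportRateLimiter(5, 10 * 60 * 1000);
const accepted = limiter.allow("player-123");
```

The same pattern generalizes: any endpoint a bad actor can hammer (reports, invites, chat) deserves a circuit that fails safely for the players behind it.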
It’s important to note that safety requirements and legislative requirements are not synonymous, but they are entangled. Often, the safety narrative specific to your game will differ wildly from what is required by law, which is why this area won’t be deeply explored here.
As a good rule, though, regulatory obligations are the first steps on the journey toward maximizing safety considerations in features. When accounting for regulatory obligations in a design, it is often worthwhile to go above and beyond what the legislation calls for. For example, if compliance requires the reporting categories “Sexually Inappropriate” and “Hate Speech,” simply meeting this requirement would mean having only those two reporting categories in the game. However, most teams would want to add topics like “Harassment” or “Cheating.” This is how you shift from what is required to what is best for your players.
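In configuration terms, that shift can be as simple as extending the category list past the compliance minimum. A sketch, using the example categories above:

```typescript
// The first two categories satisfy the (assumed) compliance minimum;
// the rest go beyond it for the players' sake.
const REPORT_CATEGORIES = [
  "Sexually Inappropriate", // required by regulation
  "Hate Speech",            // required by regulation
  "Harassment",             // added: best for your players
  "Cheating",               // added: best for your players
] as const;

type ReportCategory = (typeof REPORT_CATEGORIES)[number];
```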
Safety is more than understanding and mitigating risk. Safety as a design practice includes being intentional in setting a standard for behavior in your game. This involves putting systems in place to be sure you are delivering excellence to your players:
  • Feedback loops between the content moderators and engineering teams are imperative to the iterative success of safety features — Development teams should have a solid feedback loop with the safety or moderation teams to make sure features remain relevant and effective. Moderators experience firsthand the behavior that is reported and escalated from in game. When they see opportunities for growth and improvement, they should have a pathway to express them to ensure features remain current and functional. This goes both ways: receiving regular updates on new features and gameplay empowers moderators to better understand the context and applications of socialization in game, better enabling them to do their jobs.
  • Quality assurance is essential — As with all other gameplay features, when you design for safety, you have to make sure that it works as intended. This is especially true for safety because the ramifications of a feature failing are greater than for other features. For example, if you tell your players cursing is allowed in your game and then accidentally release a feature that blocks that language, you immediately degrade the one essential success metric for any safety team: trust. If players don’t trust the features, they will not use them. Quality assurance specific to safety vectors is essential for making sure features work as intended, just like any gameplay feature.
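As a minimal sketch of what a safety-specific QA check could look like, the test below assumes a hypothetical isBlocked entry point into the chat filter; the real assertion would run against your actual filter:

```typescript
import assert from "node:assert";

// Stand-in for the real chat filter's entry point.
function isBlocked(text: string): boolean {
  return /forbidden_term/i.test(text);
}

// If policy says mild cursing is allowed, the filter must not block it;
// a regression here erodes player trust in every safety feature.
assert.strictEqual(
  isBlocked("damn, nice shot"),
  false,
  "policy-permitted language must not be blocked",
);
```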
Ultimately, the safety considerations for gameplay mentioned here come down to a frame of mind. Just as game design accounts for risk, accessibility, and usability, safety is simply an added component. Before getting to solution generation, it’s important to consider angles like player reception, success metrics, and what content moderation policies make sense for your space. These design considerations for safety features are important to understand in depth before you go to the drawing board.
Once you understand certain requirements (e.g., maximize player reception, create success metrics, build trust in your features), the next challenge is understanding how to design with these considerations in mind. That’s where safety and the user experience come into play. This is where you close the gaps listed above and use the opportunity to encourage prosocial interactions while setting a precedent for behavior in your game.
Safety and the User Experience
The final component here is bringing it all together. Advocating for safety won’t mean much if no one listens to you, so one of the challenges you might face is emphasizing the importance of investing in safety considerations.
As a safety advocate or team, it might take a while to establish yourself within a studio, or among partner teams, as the one to come to when new features are being developed. As an individual designer or developer, it can be hard to advocate for the additional design considerations needed for safe gameplay. All of this knowledge won’t mean much if it can’t be applied to what actually ships to players. This makes visibility within a game studio a unique problem for safety teams, since feature design without safety considerations can have some pretty undesirable ramifications.
So how do you set yourself up for success within a studio? The answer is to show up everywhere you can. All hands, show and share, hackathons, bug bashes, feature tests, shiprooms — anywhere gameplay features are being talked about or other teams are listening in. Ask the questions posed here. The more visible you are, the more other teams will know what you are there for.
When you can be present with fellow teams, you can learn how to best empower gameplay features in the safest way possible. This is where you apply the core design considerations for safety and the user experience. You will create a reputation for improving and elevating player experiences. Some tips for meeting with design teams:
  • Explain the potential consequences (relevant to their areas) if safety measures aren’t taken.
  • Introduce relevant statistics and applications to the game.
  • Be a good neighbor and help implement safety features with and for teams where you can.
  • Prove the value of safety features with data (this is where the importance of KPIs comes into effect) to establish your credibility with other teams.
In this way, you develop an understanding of partner teams’ goals, needs, and priorities. Empowered by that knowledge, you build the trust to actually influence design and then, eventually, do it yourself.
In the beginning, you will find out about features (potentially even post-release) that need safety protections, and it will likely be a game of chasing down problems after they have already arisen. (This is the safety-by-discovery phase mentioned earlier.) But, through developing relationships with other teams and building the expertise to proactively address safety risk in the game, you will grow into influencing design to include safety considerations pre-release (the safety-by-design phase).
As more teams are exposed to the work that you do, more teams will know how to effectively advocate for safety in their features. This is the ideal state, where safety is at the table before a line of code is even written.
Safety is a global problem, from the smallest studio to the largest franchise — we don’t live in a world where we can wait and see. A proactive approach is the best way to build a foundation for success for any game.
Safety journeys are not easy, nor are they linear. But they are important, and the journey of building gameplay that is safe by design starts here. Of course, there will always be more to consider, including things out of scope of what has been discussed. The key to safety as a gameplay practice isn’t to be right 100% of the time; it’s to be inclusive in your mindset at the design phase of any project.
As soon as you start thinking about what could go wrong, you can begin to understand how to do it right. The point here is to think about it. To consider safety every step of the way. It is your responsibility to ethically and respectfully build a healthy game environment. Safety is a small word, but in game design it is a massive undertaking and has a huge impact on player communities.
The work here is more than just a design practice, it’s a mentality and a framing to view the work that you do. Healthy and thriving multiplayer design is the epitome of safety practice. When you aim to deliver that, only then can you fully succeed at building gameplay that is engaging, inclusive, fun, and immersive for all players.
Lauren Henderson is the Trust and Safety Technical Program Manager at Minecraft. With a primary background in cybersecurity and computer science, as well as a degree in Computer Engineering from Villanova University, her focus has always been on building inclusive, accessible, and innovative tech. A certified cybersecurity professional, she has worked on everything from authentication stacks to server farm cooling systems to biometrics to her current focus: gaming. Drawing on that highly technical background, she actively advocates for bridging the gap between engineering complexity and the human-centered focus of the trust-and-safety space, without compromising the deeply technical leanings needed for success.