
Transcend’s Ron De Jesus says transparent AI policies help foster consumer trust and safeguard users.
Lack of common-sense AI regulation opens the door to real harm and risks squandering a potential $10.3 trillion opportunity to develop and use generative AI. We need guidelines that encourage innovators to explore new technologies without risking public trust.
Efforts to ensure responsible AI innovation have veered off course. I’ve spent my career creating privacy programs to protect consumer data and give users meaningful control over its use, and I believe regulations such as the EU AI Act are crucial to unlocking AI’s most powerful use cases.
All the innovations we’ve seen in AI so far are a direct result of collecting and using data for model training. When users feel protected through transparent data policies and consent measures, they’re more likely to contribute the high-quality data necessary to advance AI.
But if companies neglect to safeguard users, they will run out of the data that fuels any further advances.
Are Guardrails Excessive?
The global AI community has made progress over the past several years on how to govern AI technology so that innovation doesn’t sacrifice user trust and safety.
The Institute of Electrical and Electronics Engineers, a professional association, set the stage in 2016 with its ethically aligned design principles, and the Organization for Economic Cooperation and Development adopted its own AI principles three years later. The White House’s 2022 AI Bill of Rights was followed by former President Joe Biden’s executive order on AI in 2023. Last March, the EU passed its watershed AI Act.
But remarks from Vice President JD Vance at last month’s Paris AI Summit that “excessive regulation of the AI sector could kill a transformative industry” pit AI opportunity against safety. Why should we have to choose between the two? User-first privacy programs build lasting trust between consumers and companies.
Products and services that prioritize consumer trust and empower user choice can accelerate innovation rather than block it. Take biometric data and smart devices. From facial recognition and fingerprint scanning to always-on smart assistants and connected cars, these once-emerging technologies continue to raise privacy questions.
In response, regulations such as Illinois’ Biometric Information Privacy Act, the EU’s General Data Protection Regulation, and California’s IoT Security Law have stepped in to establish clearer rules around data collection, storage, and sharing. This has helped reassure the public, leading to greater adoption of technologies such as digital identity verification, wearable health devices, and smart home assistants.
Regulating Known Harms
Under-regulating technology has troubling consequences. Self-driving cars from Chinese companies have traversed millions of miles on US roads, gathering untold amounts of information on US citizens, because the US lacks laws specifically governing data collection by technologies originating from adversarial countries.
In contrast, the EU AI Act places necessary guardrails around AI technologies with known harms, such as biometric surveillance in public spaces, predictive policing, or emotion-recognition systems in the workplace or schools.
The risk of racial discrimination by facial recognition technology is well-documented. In 2020, Detroit police arrested Robert Williams, a Black man, after facial recognition software wrongfully identified him as a theft suspect. Three years later, the same police force relied on another flawed match, wrongly accusing Porcha Woodruff, a pregnant Black woman, of carjacking.
Rather than stifling innovation, regulations push companies to continue improving their products. Privacy laws have forced companies to rethink data collection and usage and to innovate in areas such as encryption, data minimization, and user consent management, leading to stronger security, better consumer trust, and new business models.
For example, Apple introduced Advanced Data Protection to extend end-to-end encryption to iCloud data categories beyond passwords and health data. User consent regulations have spurred new technologies, from startups to major enterprises such as IBM, that govern the entire lifecycle of user permissions: from initial capture and storage to handling granular data access requests and deletion.
Innovation and Trust
Regulations such as the EU AI Act that seek to address the most critical risks of emerging technology aren’t examples of government overreach. No regulation is perfect, but these laws can serve as blueprints for meaningful US legislation to promote public trust and AI adoption (something Colorado has already done through its own AI Act).
The US didn’t sign on to statements at the Paris Summit outlining joint approaches to AI risk mitigation and ethical AI development. This is concerning, given the current administration’s deregulatory stance and its revocation of Biden’s executive order on safe AI use and development.
The future of US AI leadership hinges on forging a path where innovation and responsible governance coexist. History shows that trust, built through transparent data practices and practical guardrails, is the currency of progress. Frameworks that prioritize safety will reassure users and empower innovators to experiment.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Ron De Jesus is field chief privacy officer at data privacy platform Transcend. Formerly the chief privacy officer of Grindr, he has experience helming privacy programs at companies such as Match Group and Coach.
To contact the editors responsible for this story: Melanie Cohen at mcohen@bloombergindustry.com; Rebecca Baker at rbaker@bloombergindustry.com