The Structural Failures Thwarting Responsible AI Development

By Deb Donig, All Tech Is Human’s Siegel Research Fellow
Over the past four years, I have been tracking the evolution of what we might call the "Responsible Tech workforce"—the people hired to make technology serve human values rather than just maximize engagement and profit. The data reveals a field that has largely evaporated even as public concerns about AI bias have grown more urgent.
In 2021, positions focused on ethical technology—roles like AI ethicists, algorithmic auditors, and diversity specialists—represented 58% of all jobs in the Responsible Tech ecosystem. By 2025, that number had collapsed to just 8%. Meanwhile, technical implementation roles focused on compliance and regulatory requirements exploded to dominate 66% of the market.
What This Shift Represents
This shift represents more than changing job titles. It signals a fundamental retreat from asking "what kind of AI systems should we build, and why?" toward simply implementing predetermined requirements. What we are witnessing is the premature ossification of certain assumptions and frameworks into technical systems that will operate at massive scale across social, political, cultural, and economic life, all without the institutional capacity to deliberate about whether those embedded assumptions serve any coherent vision of human flourishing.