The gap between AI ethics principles and operational AI governance is well documented. Organizations publish principles — fairness, transparency, accountability, privacy — and then struggle to translate them into engineering practices, audit procedures, and organizational routines. Three recent frameworks attempt to bridge this gap, each from a different angle: comprehensive system design, decentralized organizational governance, and structured innovation processes.
The Comprehensive Framework
Herrera-Poyatos et al. (2025) propose a framework for responsible AI systems that spans the entire lifecycle from domain definition through design, auditability, accountability, and governance. Their contribution is architectural: rather than adding ethics as a layer on top of existing AI development processes, they integrate responsibility requirements into each stage of system design.
The framework's domain definition phase is particularly practical — it requires teams to specify the societal context in which the AI will operate before any technical work begins. Who will be affected? What power asymmetries exist? What failure modes would cause harm? These questions, typically relegated to ethics review boards that meet after the system is built, are placed at the start of the development process where they can actually influence design decisions.
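To make the phase concrete, a domain definition can be captured as a structured artifact that blocks design review until every question has an answer. The sketch below is illustrative only; the field names (affected_groups, power_asymmetries, harmful_failure_modes) are assumptions chosen to mirror the three questions, not terminology from Herrera-Poyatos et al.

```python
from dataclasses import dataclass, field

@dataclass
class DomainDefinition:
    """A structured record of the societal context for a proposed AI system.

    Each field mirrors one of the questions the domain definition phase
    asks before any technical work begins. Field names are illustrative.
    """
    system_name: str
    affected_groups: list[str] = field(default_factory=list)        # Who will be affected?
    power_asymmetries: list[str] = field(default_factory=list)      # What power asymmetries exist?
    harmful_failure_modes: list[str] = field(default_factory=list)  # What failure modes would cause harm?

    def is_complete(self) -> bool:
        """Design review cannot proceed until every question is answered."""
        return all([self.affected_groups,
                    self.power_asymmetries,
                    self.harmful_failure_modes])

# Hypothetical example: a credit-screening system.
loan_screener = DomainDefinition(
    system_name="loan-application-screener",
    affected_groups=["applicants", "loan officers"],
    power_asymmetries=["applicants cannot inspect or contest the model"],
    harmful_failure_modes=["systematic denial of credit to a protected group"],
)
assert loan_screener.is_complete()
```

Because the record is filled in before design begins, its answers are available to shape architectural choices rather than to critique them after the fact.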
Adaptive Governance for Decentralized Organizations
Meimandi et al. (2025), presenting at AIES, address a different challenge: how to implement responsible AI governance in organizations that are decentralized by design. Startups, open-source communities, DAOs, and distributed teams cannot rely on centralized ethics boards or top-down compliance structures. They need governance frameworks that are both adaptive (capable of evolving as the technology and its applications change) and distributed (implementable without a central authority).
Their framework uses modular governance components that can be assembled and configured for different organizational contexts. A small startup might implement lightweight risk assessment and bias monitoring. A large distributed organization might add formal audit trails and external review mechanisms. The modularity principle recognizes that responsible AI governance is not one-size-fits-all but must scale with organizational complexity and risk exposure.
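One way to read the modularity principle is as composition over a shared interface: each governance component implements the same review contract, and an organization assembles only the components its risk exposure warrants. The sketch below is a minimal illustration under that reading; the component names (RiskAssessment, AuditTrail) and the review interface are assumptions, not the API of Meimandi et al.'s framework.

```python
from typing import Protocol

class GovernanceModule(Protocol):
    """Shared contract every governance component implements, so modules
    can be assembled freely for a given organizational context."""
    name: str
    def review(self, model_card: dict) -> list[str]:
        """Return a list of findings; an empty list means the check passed."""
        ...

class RiskAssessment:
    name = "risk-assessment"
    def review(self, model_card: dict) -> list[str]:
        return [] if model_card.get("risk_tier") else ["no risk tier assigned"]

class AuditTrail:
    name = "audit-trail"
    def review(self, model_card: dict) -> list[str]:
        return [] if model_card.get("decision_log") else ["decisions are not logged"]

# A small startup assembles a lightweight pipeline; a larger distributed
# organization appends further modules (formal audits, external review).
startup_pipeline: list[GovernanceModule] = [RiskAssessment()]
enterprise_pipeline: list[GovernanceModule] = [RiskAssessment(), AuditTrail()]

def run(pipeline: list[GovernanceModule], model_card: dict) -> dict[str, list[str]]:
    """Run every assembled module and collect findings per module."""
    return {module.name: module.review(model_card) for module in pipeline}

print(run(enterprise_pipeline, {"risk_tier": "high"}))
# {'risk-assessment': [], 'audit-trail': ['decisions are not logged']}
```

The design choice that matters here is the shared contract: because every module exposes the same review interface, adding or removing a check changes the pipeline's composition, not its plumbing.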
Structured Innovation Process
Torkestani and Mansouri (2025) contribute SCOR — a framework that embeds responsible AI considerations into the innovation process itself. Rather than treating responsibility as a constraint on innovation (something that slows down or limits what can be built), SCOR positions it as a dimension of innovation quality (something that makes the innovation more valuable and more durable).
The framework structures the innovation process into stages where responsibility considerations are evaluated alongside technical feasibility and market viability. At each stage, the team assesses not just "can we build this?" and "will anyone use it?" but "should we build this, and what safeguards are needed?" This integration prevents the common pattern where ethics review occurs only at the end, when significant resources have already been committed and organizational momentum makes course corrections difficult.
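A minimal sketch of such a stage gate follows. The three questions come directly from the text; the gate logic, stage names, and field names are illustrative assumptions rather than SCOR's actual mechanics.

```python
from dataclasses import dataclass

@dataclass
class StageAssessment:
    """One stage's evaluation of a proposed innovation."""
    stage: str
    feasible: bool          # "Can we build this?"
    viable: bool            # "Will anyone use it?"
    responsible: bool       # "Should we build this, and what safeguards are needed?"
    safeguards: list[str]   # Safeguards identified at this stage, if any

def gate(assessment: StageAssessment) -> bool:
    """Advance only when all three dimensions pass. Because responsibility
    is evaluated at the same gate as feasibility and viability, an ethics
    failure is caught before further resources are committed."""
    return assessment.feasible and assessment.viable and assessment.responsible

prototype = StageAssessment(
    stage="prototype",
    feasible=True,
    viable=True,
    responsible=False,  # safeguards not yet defined
    safeguards=[],
)
print(gate(prototype))  # False: the proposal cannot advance past this stage
```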
The convergence of these three approaches suggests that the field is moving beyond the principles-practice gap toward implementable governance architectures. The remaining challenge is adoption: frameworks exist, but most organizations still treat responsible AI as a compliance exercise rather than a design discipline.
Why Adoption Lags
The gap between framework availability and organizational adoption has several causes. Responsible AI governance requires cross-functional coordination among engineering, legal, ethics, product management, and executive leadership, a degree of coordination most organizations are not structured to provide. It requires expertise in both AI systems and ethical reasoning that few individuals possess. And it requires a willingness to slow down development processes that are optimized for speed.
The most effective adoption pattern observed in practice is not comprehensive framework implementation but incremental integration. Organizations start with a single high-risk application, implement governance practices for that specific context, learn from the experience, and gradually extend governance to other applications. This approach is less theoretically elegant than comprehensive frameworks but more practically achievable given organizational constraints.
Regulation may prove the strongest driver of adoption. The EU AI Act, by imposing legal requirements for high-risk AI governance, is creating market demand for the frameworks that academic researchers have been developing. When responsible AI governance moves from optional to legally required, organizational adoption accelerates. The frameworks reviewed here are positioned to meet this demand.
The measurement of governance effectiveness presents its own challenge. How do we know whether a responsible AI framework is actually working? Output metrics (number of bias audits conducted, percentage of models reviewed) measure process compliance, not substantive impact. Outcome metrics (reduction in discriminatory decisions, improvement in user trust) are harder to measure but more meaningful. The frameworks reviewed here would benefit from built-in evaluation mechanisms that go beyond process compliance to assess whether they are achieving their stated goals of reducing harm and increasing trustworthiness.
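A built-in evaluation mechanism could start by recording process and outcome metrics side by side and defining effectiveness in terms of the latter. The sketch below illustrates that distinction; the specific metric names and the comparison logic are assumptions, not drawn from any of the frameworks reviewed.

```python
from dataclasses import dataclass

@dataclass
class GovernanceMetrics:
    # Process (output) metrics: easy to count, but they measure compliance only.
    bias_audits_conducted: int
    models_reviewed_pct: float
    # Outcome metrics: harder to measure, but closer to the stated goals.
    adverse_decision_rate_gap: float  # gap between demographic groups; ideally near 0
    user_trust_score: float           # e.g. from periodic surveys, scaled 0-1

    def substantive_progress(self, baseline: "GovernanceMetrics") -> bool:
        """Effectiveness means the outcomes improved, not merely that the
        process metrics went up."""
        return (self.adverse_decision_rate_gap < baseline.adverse_decision_rate_gap
                and self.user_trust_score > baseline.user_trust_score)

# Hypothetical quarterly snapshots.
q1 = GovernanceMetrics(4, 0.60, adverse_decision_rate_gap=0.08, user_trust_score=0.55)
q2 = GovernanceMetrics(9, 0.90, adverse_decision_rate_gap=0.07, user_trust_score=0.61)
print(q2.substantive_progress(q1))  # True: outcomes moved, not only audit counts
```

The point of separating the two families of metrics is that a framework can look successful on the first while stagnating on the second, which is precisely the failure mode a built-in evaluation mechanism should surface.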