Building the essential framework for responsible governance of advanced AI systems
The Agentic AI Safety Community of Practice brings together leading experts from diverse fields to establish comprehensive safety guidelines for AI systems capable of independent action and decision-making.
As artificial intelligence evolves toward greater autonomy, our mission becomes increasingly urgent: to develop robust, implementable frameworks that ensure agentic AI systems remain aligned with human values and operate safely in every context where they are deployed.
Agentic AI represents an important intermediate category between narrow AI and artificial general intelligence (AGI). These systems can autonomously pursue goals, adapt to new situations, and reason flexibly about the world while operating within defined domains.
The key characteristic of agentic AI is its capacity for independent initiative: the ability to take sequences of actions in complex environments to achieve objectives. This includes decomposing high-level goals into subtasks, exploring open-endedly, and adapting creatively to novel challenges.
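To make the idea of goal decomposition and sequenced action concrete, here is a minimal illustrative sketch of an agentic control loop. It assumes a hypothetical Agent class with invented decompose and act methods; it is not drawn from any system described in our guidelines.

```python
# Hypothetical sketch of an agentic control loop: a high-level goal is
# decomposed into subtasks, each pursued through a sequence of actions.
# All names and behaviours here are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    completed: list = field(default_factory=list)

    def decompose(self, goal: str) -> list[str]:
        # A real planner would reason about the environment; here we
        # simply return a fixed, illustrative breakdown of the goal.
        return [f"{goal}: step {i}" for i in range(1, 4)]

    def act(self, subtask: str) -> str:
        # Placeholder for taking an action and observing its outcome.
        return f"done({subtask})"

    def run(self) -> list:
        # Pursue the goal as a sequence of subtask actions.
        for subtask in self.decompose(self.goal):
            self.completed.append(self.act(subtask))
        return self.completed

print(Agent(goal="summarise incident reports").run())
```

Even in this toy form, the loop highlights where safety questions arise: how goals are decomposed, which actions are permitted, and when the loop should stop.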
Nell Watson is a respected expert in AI ethics and safety, with a longstanding focus on aligning emerging technologies to human values.
As Chair of our initiative, she applies her deep interdisciplinary background—spanning engineering, philosophy, and social sciences—to shape responsible innovation strategies.
Nell has contributed to multiple international standards efforts, including the IEEE 7000 series, and regularly advises organizations on trustworthy AI development and policy.
Ali Hessami is a leading authority in systems engineering and risk management.
Serving as our Process Architect, he draws on decades of experience in safety engineering, assurance, and certification to ensure robust governance frameworks for advanced AI.
Ali has played a key role in global standardization initiatives, helping to create transparent, secure, and ethically informed processes for technology adoption.
Led by Chair Nell Watson and Process Architect Ali Hessami, our community unites specialists from AI, technology, ethics, law, social sciences, and beyond.
Together, we focus on designing future-ready systems that uphold ethical principles and practical safety measures in real-world deployments.
Our experts have significantly influenced internationally recognized standards and frameworks—such as the IEEE 7000 series and ECPAIS Transparency Certification—while also advancing new AI ethics initiatives. By combining academic insight with industry know-how, we help organizations navigate the complex interplay between technological innovation and responsible stewardship.
Goal Alignment
Ensuring robust alignment between operational goals and human values
Transparency
Creating clear, interpretable rationales for AI reasoning processes
Value Alignment
Identifying, codifying, and maintaining human values in AI systems
Goal Termination
Implementing proper protocols for task completion and system sunsetting
Safe Operations
Maintaining safe operation throughout the system lifecycle
Security
Implementing comprehensive protection against threats and vulnerabilities
Epistemic Hygiene
Maintaining cognitive clarity and accurate information management
Contextual Understanding
Ensuring systems accurately interpret their operational contexts and maintain appropriate controls within them
Get Involved
Join our growing community of practitioners committed to ensuring the safe and beneficial development of agentic AI systems.
In March 2025, our Working Group of 25 experts released Volume 2 of the “Safer Agentic AI Foundations” guidelines—a comprehensive framework addressing the drivers and inhibitors of safety in agentic systems.
Using our innovative Weighted Factors Analysis (WeFA) process, we’ve identified and mapped key factors that can either promote or hinder safety in agentic AI systems.
This methodology has previously been used to develop numerous global standards, certifications, and guidelines for improving the ethical qualities of AI systems.
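The WeFA process itself is defined in the guidelines; the sketch below is only a hypothetical illustration of how weighted drivers and inhibitors might be aggregated into an overall score. All factor names, weights, and assessed strengths are invented and are not taken from the Safer Agentic AI Foundations.

```python
# Hypothetical weighted-factors aggregation: safety "drivers" carry
# positive weights, "inhibitors" negative ones, and the overall score is
# the weighted sum of each factor's assessed strength (0.0 to 1.0).
# Factor names, weights, and strengths are illustrative placeholders.

factors = {
    # name: (weight, assessed strength in the system under review)
    "interpretable reasoning traces": (+0.8, 0.6),   # driver
    "goal termination protocols":     (+0.7, 0.4),   # driver
    "unmonitored self-modification":  (-0.9, 0.2),   # inhibitor
    "opaque third-party tool use":    (-0.5, 0.7),   # inhibitor
}

overall = sum(weight * strength for weight, strength in factors.values())
print(f"Illustrative weighted safety score: {overall:+.2f}")
for name, (weight, strength) in factors.items():
    print(f"  {name:32s} contribution {weight * strength:+.2f}")
```

A positive total suggests drivers outweigh inhibitors for the system under review; the value of the exercise lies less in the number itself than in making each factor and its weighting explicit and open to challenge.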
APPLY our guidelines
Ensure the safe and beneficial development of agentic AI systems in your organization.
Coming: January 2026
Safer Agentic AI: Principles and Practice for Responsible Governance of Advanced AI
This essential guide, authored by Eleanor Watson and Ali Hessami, builds upon our framework to provide practical strategies for implementing safety measures and aligning AI with human values.
The book offers cutting-edge insights into the unique challenges posed by agentic AI, along with actionable guidelines for policymakers, business leaders, developers, and concerned citizens navigating this complex landscape.
SUBSCRIBE to newsletter
Guidelines licensed under the Creative Commons Attribution-NoDerivatives 4.0 International License (CC BY-ND 4.0)
© 2025 Agentic AI Safety Community of Practice