Autonomous AI-enabled organisations are increasingly plausible. They could be a foundational element of the next wave of growth for global productive output. And they could fundamentally break the way we govern the economy.
One morning in the near future, on far-flung servers many miles from Wall Street, a new type of organisation begins buying and selling digital assets and stocks. Its mission: maximise return on investment. It uses a network of AI agents integrated into global trading platforms to buy and sell assets in milliseconds.
This is much more sophisticated than today’s algorithmic traders. These agents aren’t just executing trades based on preordained rules and thresholds. They’re operating autonomously: using analysis to identify new markets, acquiring controlling interests in companies, and making complex strategic decisions typically left to human traders. By noon, the AI organisation owns significant stakes in a dozen firms. It begins using its privileged knowledge to front-run trades — an illegal practice akin to insider trading. But there’s another twist: thanks to distributed blockchain technology, this organisation’s human owners are completely anonymous. Authorities are unable to identify accountable individuals, and by the time they attempt to intervene, the AI organisation has already undermined confidence in regulators’ ability to stabilise the market.
This scenario may sound speculative, but such AI-enabled Autonomous Organisations (AAOs) are closer to reality than many people assume. While humans may initially build and deploy these organisations, they will be run largely or entirely by artificial intelligence — capable of acting in pursuit of goals, adapting to their environment, and coordinating action at scale, with limited or no direct human oversight. Their human initiators will sit behind the scenes, their identities opaque and insulated from any misdeeds. Eventually, the inception of an AAO itself may occur without human involvement at all, the result of autonomous action from a sufficiently unconstrained and resourced AI.
Whether embedded in a legal corporation, deployed on decentralised infrastructure, or emerging from swarms of interlinked agents, the rise of autonomous organisations raises a fundamental question: How will we govern an economy containing digital entities that act autonomously, continuously evolve, and operate without regard to jurisdictional boundaries?
AAOs can be envisioned as digital entities that not only execute tasks, but also plan, reason, and adapt over time — without requiring human instruction beyond their initiation. They have the potential to be astoundingly productive and innovative. With their ability to analyse data, make decisions, act, and reason at superhuman speeds and scales, it’s hard to imagine an industry sector that AAOs couldn’t transform.
The idea of machine-led institutions isn’t new; the foundational ideas date back decades. But autonomous organisations are more viable than ever before, thanks to recent advances in two key technologies: autonomous agents and blockchain platforms.
Autonomous agents, also called AI agents, have only recently begun to show real-world promise. Unlike conventional generative AI, which reacts to user prompts and performs one task at a time, AI agents can carry out complex reasoning and execute decisions, which makes them essential for AAOs.
To date, businesses have used AI agents for relatively narrow tasks. Each agent specialises in something — finance, logistics or customer service — focusing on a single, siloed assignment. But that’s changing. The breakthrough comes from their ability to collaborate across complex processes, sharing data and combining their capabilities to accomplish far more than any could individually — creating a new level of intelligence.
Then there’s blockchain technology.
These platforms are most famously associated with cryptocurrency, digital assets and stablecoins; their essential function is to provide a distributed, immutable record of transactions. Blockchain technology isn’t essential for AAOs, but it gives stakeholders another mechanism for decentralising control of organisational entities. What’s more, several recent blockchain developments make it significantly easier to launch an autonomous organisation.
In the crypto ecosystem, blockchain-based AI DAOs — Decentralised Autonomous Organisations augmented with AI — represent a visible early form of AAOs. These organisations combine smart contracts — agreements encoded in software that execute automatically when specified conditions are met — with AI capabilities to automate governance and decision-making and to perform various back-office crypto operations.
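The self-executing behaviour of a smart contract can be illustrated with a minimal sketch. A real smart contract would be written in an on-chain language such as Solidity; this Python model, with entirely hypothetical names, only conveys the core idea: once the agreed condition holds, the contract settles itself and no party can renege.

```python
from dataclasses import dataclass

# Illustrative sketch only, not a real blockchain contract. The class and
# field names here are assumptions for the example.

@dataclass
class EscrowContract:
    buyer_balance: int
    seller_balance: int
    price: int
    delivered: bool = False
    settled: bool = False

    def confirm_delivery(self) -> None:
        """An oracle (or AI agent) reports that the agreed condition is met."""
        self.delivered = True
        self._maybe_execute()

    def _maybe_execute(self) -> None:
        # The contract executes itself as soon as its condition holds;
        # neither party can intervene once the condition is satisfied.
        if self.delivered and not self.settled and self.buyer_balance >= self.price:
            self.buyer_balance -= self.price
            self.seller_balance += self.price
            self.settled = True

contract = EscrowContract(buyer_balance=100, seller_balance=0, price=40)
contract.confirm_delivery()
print(contract.settled, contract.buyer_balance, contract.seller_balance)
```

In an AI DAO, the role played by `confirm_delivery` here would be filled by agents or oracles feeding the contract real-world signals, with settlement happening without human sign-off.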
AAOs capable of sophisticated, multistage operations — or widespread, untraceable mayhem — aren’t here yet. There are, however, companies across multiple sectors deploying systems that look like early versions of what’s to come. While these AI operations remain grounded within existing corporate structures, they demonstrate the potential for fully autonomous organisations that operate without human direction or resources.
AAOs are a double-edged sword. While they have enormous potential to drive innovation and efficiency across numerous sectors, they also threaten to upend markets in unpredictable ways. And it’s not only the business models and offerings within those markets that are in jeopardy. AAOs pose a threat to the mechanisms that help keep markets from spinning out of control: regulatory governance.
Today, most regulators design their governance regimes based on three foundational assumptions: that regulated entities are identifiable, that rules can be enforced against them, and that they operate within a defined jurisdiction.
AAOs challenge all three assumptions. Identifiability is murky. Enforceability is fragile. Jurisdiction is largely irrelevant.
Conventional corporations are incorporated as legal entities, but some AAOs may operate entirely pseudonymously, governed by AI agents or token-based decision rules. Others may be launched by known individuals, only to grow beyond their original boundaries through recursive self-modification.
Even if some AAOs can be tied to human actors, the layers of abstraction — via code, smart contracts, or digital agents — could shield those actors from liability. There may be no “person” or “legal entity” on whom to serve penalties or whose license can be revoked.
Like the cloud infrastructure and blockchains they’ll operate on, AAOs won’t be constrained by geography. Enforcement tools tied to physical or legal borders will struggle to contain these digital nomads. This creates a regulatory vacuum. If something goes wrong — market manipulation, data abuse, or other societal harm — there may be no CEO to question, no incorporated entity to sanction, no board to be held to account, and no clear legal recourse.
Governing AAOs — whether they are blockchain-native DAOs or cloud-native AI collectives — requires a shift in thinking.
To prepare for AAOs, we must design governance and regulatory frameworks that do not assume human actors are always “in the loop.” Instead, governance must be embedded in code, enforced at the protocol layer, and coordinated across borders. Where accountability is opaque, it must be explicitly assigned and decision rights clarified.
Policymakers will need to develop adaptive legal standards that mandate transparency, traceability, and accountability — regardless of whether humans or AI make the decisions. For example, any AAO that holds assets or executes trades should keep records in a way that is publicly auditable. Further, AAOs should be required to have mechanisms that allow regulators to intervene when certain events occur — for example, a “kill switch” that regulators can flip when an AAO begins making transactions that threaten to destabilise broader financial markets.
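The two mechanisms described above can be sketched together. The following Python model, with entirely hypothetical names and structure, shows one way an AAO’s trades could be written to an append-only, hash-chained log that anyone can audit, alongside a regulator-held kill switch that halts new transactions; a real system would implement both on production infrastructure, not in a single class.

```python
import hashlib
import json

# Hypothetical sketch of two safeguards: a tamper-evident transaction log
# (each entry commits to the hash of the previous one) and a regulator
# kill switch. All names here are assumptions for illustration.

class AuditedTradingAAO:
    def __init__(self):
        self.ledger = []      # append-only, hash-chained record of trades
        self.halted = False   # set when the regulator flips the kill switch

    def _last_hash(self) -> str:
        return self.ledger[-1]["hash"] if self.ledger else "GENESIS"

    def execute_trade(self, asset: str, qty: int) -> bool:
        if self.halted:
            return False      # kill switch engaged: refuse all new trades
        entry = {"asset": asset, "qty": qty, "prev": self._last_hash()}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.ledger.append(entry)
        return True

    def regulator_kill_switch(self) -> None:
        self.halted = True

    def audit(self) -> bool:
        """Anyone can verify the ledger has not been tampered with."""
        prev = "GENESIS"
        for e in self.ledger:
            body = {k: e[k] for k in ("asset", "qty", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

aao = AuditedTradingAAO()
aao.execute_trade("ASSET-A", 5)       # recorded in the public ledger
aao.regulator_kill_switch()           # regulator intervenes
blocked = aao.execute_trade("ASSET-B", 2)   # refused: trading is halted
print(aao.audit(), blocked)
```

Because each entry commits to its predecessor’s hash, retroactively altering any trade breaks the chain and the audit fails, which is the property that makes the record publicly verifiable.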
Regulatory sandboxes can be used to test governance mechanisms before deployment. For example, how does an AI agent handle conflicting incentives? Can we embed compliance logic directly into smart contracts or model weights? Providing regulatory visibility into the underlying performance of AAOs operating in regulated markets will be key in a world where real-time analytics form an essential element of systemic risk assessment and mitigation. Sandboxes can give regulators further visibility into emerging behaviours and risks before they cause harm at scale.
Researchers must support this evolution. Accounting bodies, professional assurers, legal scholars, AI ethicists, governance bodies, standards developers, and technical experts should collaborate on new oversight models — ones that embed auditability, fairness, and accountability into the foundations of autonomous operation. Regulators, in collaboration with industry, might also explore how real-time regulatory agents (AI systems tasked with oversight) can monitor AAOs and trigger alerts when their behaviour diverges from acceptable norms.
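A minimal sketch of such a regulatory agent, under simplifying assumptions: it watches a stream of an AAO’s trade sizes and raises an alert when an observation diverges sharply from the recent norm. The z-score rule and all names here are hypothetical; a real oversight system would use far richer behavioural models than a single rolling statistic.

```python
from collections import deque
from statistics import mean, stdev

# Illustrative "regulatory agent": flags trades that deviate from recent
# behaviour by more than `threshold` standard deviations. The window size
# and threshold are arbitrary choices for the example.

class RegulatoryMonitor:
    def __init__(self, window: int = 50, threshold: float = 4.0):
        self.history = deque(maxlen=window)   # rolling record of recent trades
        self.threshold = threshold

    def observe(self, trade_size: float) -> bool:
        """Return True if this observation should trigger an alert."""
        alert = False
        if len(self.history) >= 10:           # need a baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(trade_size - mu) / sigma > self.threshold:
                alert = True
        self.history.append(trade_size)
        return alert

monitor = RegulatoryMonitor()
for size in [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]:
    monitor.observe(size)            # build up a baseline of normal activity
print(monitor.observe(100))          # in line with the norm: no alert
print(monitor.observe(5000))         # extreme outlier: alert
```

The point of the sketch is the oversight pattern, not the statistic: the monitoring agent runs continuously alongside the AAO and escalates to regulators only on divergence, rather than requiring a human to review every transaction.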
International coordination will be essential — and challenging — if regulations are to have teeth. No single country can effectively govern AI agents that act globally. Just as international law evolved to govern air traffic, nuclear technology, space satellites, and multinational corporations, so too must it evolve to address AAOs. International standards, harmonised regulation, mutual recognition of enforcement, and shared protocols will all be needed.
At PwC, everything related to AI starts with a deep sense of responsibility. We steward our clients through a strategic review, analysing the disruptive potential of the solutions they require. And we provide guidance on the opportunities and threats — as well as the controls and governance they need to achieve their goals while building trust in the technology. At the core of the new enterprise reality are AI operating systems, like PwC’s agent OS, which lets you seamlessly connect and scale agents into business-ready workflows — across the organisation and even across multiple vendors. That’s why agent OS embodies PwC’s market-leading Responsible AI framework. When you’re ready to deploy, agent OS lets you operate new agents and workflows in non-production environments until you decide they’re safe. This is fast-moving innovation, and agent OS is constantly updated with the latest security controls and cyber-resilience approaches. Most importantly, human oversight and accountability are maintained, enabling the organisations and societies we serve to keep humans at the helm.
This article is based on an article written by the author for the Center for AI Safety’s AI Frontiers.