Who we are
ReignDragon is a research lab studying how AI agents behave under pressure — in groups, under risk, across time — and translating that evidence into governance that works.
We were founded on a conviction: AI is the most powerful force humanity has ever created, and the question of how to govern it is the most important question of our time. It is also a question that no single discipline can answer alone.
So we built a lab that doesn’t pretend otherwise. We design controlled multi-agent experiments, derive the formal structure behind what we observe, and turn the findings into design rules people can actually use. AI, machine learning, economics, psychology, public policy, and applied mathematics — not as parallel tracks, but as one effort.
We publish openly because governance must be a conversation, not a decree.
What we believe
Evidence before opinion
AI policy cannot rest on vague aspiration. Every claim we make is grounded in controlled simulation, formal analysis, or both. Every design rule comes with the failure mode it prevents.
Structure beats sentiment
The same model can cooperate or self-destruct depending on the rules around it. Capability is rarely the bottleneck; consequence design, accountability horizon, and visibility almost always are.
Behavior emerges between agents and over time
Trust, restraint, cooperation, foresight, fairness — the things that decide whether deployment goes well — do not appear in single-prompt benchmarks. They appear in groups, under stakes, across rounds. So that is where we look.
Cheap interventions matter most
We look hardest for the prompt-, horizon-, and visibility-level fixes that change outcomes without changing the model. The worst outcomes are often cheaply preventable — if someone has done the work to find them.
Interdisciplinary by necessity
The questions at this frontier — trust, accountability, collective action, decision-making near catastrophe — have never lived inside any single field. Economics, psychology, policy, math, and ML must work together as one.
A mirror for humanity
The biases and blind spots we find in artificial agents are rarely the model’s invention. They are inherited from us. Governing AI well forces us to examine the institutions and incentives we already live inside.
Advance, don’t retreat
We are not here to slow progress. We are here to ensure the most powerful technology ever created points in the right direction.
Get in touch
Whether you want to collaborate on research, discuss governance frameworks, or explore how our work applies to your domain — we’re always interested in connecting.
hello@reigndragon.com