Building in Public: Our Research Agenda
The Institute's research priorities for 2026 and why we chose them.
We believe research institutions should be transparent about what they're working on and why. Here's the Institute's research agenda for 2026.
Governance frameworks for autonomous entities. How should autonomous companies make decisions? What governance structures balance efficiency with safety? We're developing practical frameworks that builders can actually implement — not theoretical models, but tested patterns for agent-based decision-making with appropriate human oversight hooks. The core challenge is designing governance that doesn't bottleneck the system's speed advantage while still maintaining meaningful guardrails. Our current work focuses on tiered autonomy models where routine decisions execute freely, significant decisions require algorithmic consensus, and exceptional decisions escalate to human oversight. We're testing these patterns with three early-stage autonomous operations and will publish findings mid-year.
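The tiered autonomy model described above can be sketched as a small decision router. This is an illustrative sketch, not the Institute's implementation: the `Decision` fields, the dollar thresholds, and the majority-vote consensus rule are all hypothetical placeholders for whatever risk classification a real system would use.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    ROUTINE = "routine"          # executes freely
    SIGNIFICANT = "significant"  # requires algorithmic consensus
    EXCEPTIONAL = "exceptional"  # escalates to human oversight


@dataclass
class Decision:
    action: str
    cost_usd: float
    reversible: bool


def classify(d: Decision, routine_limit: float = 1_000,
             significant_limit: float = 50_000) -> Tier:
    """Assign a risk tier. Thresholds here are illustrative only."""
    if d.reversible and d.cost_usd <= routine_limit:
        return Tier.ROUTINE
    if d.cost_usd <= significant_limit:
        return Tier.SIGNIFICANT
    return Tier.EXCEPTIONAL


def route(d: Decision, agents) -> str:
    """Route a decision through the tiered model.

    `agents` is a list of callables, each voting True/False on the
    decision; a simple majority stands in for 'algorithmic consensus'.
    """
    tier = classify(d)
    if tier is Tier.ROUTINE:
        return "execute"
    if tier is Tier.SIGNIFICANT:
        votes = sum(bool(a(d)) for a in agents)
        return "execute" if votes > len(agents) / 2 else "reject"
    return "escalate_to_human"
```

The design choice worth noting is that only the middle tier pays a coordination cost; routine decisions keep the system's speed advantage, and the expensive human path is reserved for the exceptional tail.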
Legal personhood and entity design. The legal system doesn't have a category for autonomous companies. We're researching what new entity types might look like, engaging with legal scholars and policymakers, and drafting model legislation that jurisdictions could adapt. This is slow work, but it's foundational. Without a legal container designed for autonomous operations, every autonomous company is forced into structures that assume human directors, human officers, and human decision-making — creating liability gaps and governance fictions that serve no one. We're tracking Estonia's proposed Autonomous Entity classification closely and publishing comparative analysis across jurisdictions.
Economic modeling of autonomous markets. What happens to markets when a significant percentage of participants are autonomous? How do pricing dynamics, competition, and market structure change? We're building simulation environments to explore these questions before they arrive in practice. Early results suggest that autonomous market participants converge on equilibrium pricing faster, drive margins toward zero more aggressively, and can produce novel market dynamics — including coordination patterns that emerge without explicit collusion. Understanding these dynamics before they manifest at scale is critical for regulators and market designers.
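A toy version of the margin-compression result can be shown in a few lines. This is not the Institute's simulation environment, just a minimal sketch under stark assumptions: every seller reprices each round by undercutting the current lowest price by a fixed factor, floored at marginal cost.

```python
import random


def simulate_market(n_sellers: int = 5, cost: float = 10.0,
                    start_price: float = 20.0, rounds: int = 50,
                    undercut: float = 0.95, seed: int = 0) -> list[float]:
    """Toy price war: each round, every autonomous seller undercuts the
    lowest observed price by a fixed factor, never pricing below
    marginal cost. All parameters are illustrative."""
    rng = random.Random(seed)
    # Sellers start with slightly different prices.
    prices = [start_price * (1 + rng.uniform(-0.1, 0.1))
              for _ in range(n_sellers)]
    for _ in range(rounds):
        lowest = min(prices)
        prices = [max(cost, lowest * undercut) for _ in prices]
    return prices
```

Even this crude model exhibits the two behaviors the paragraph describes: prices converge quickly to the marginal-cost floor (margins go to zero), and all sellers end up at an identical price without any explicit coordination.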
Technical architecture patterns. What are the emerging best practices for building autonomous operational systems? We're cataloging patterns across agent orchestration, memory and state management, multi-system coordination, and graceful degradation. The goal is a practical reference for teams building in this space. We're particularly focused on the patterns that distinguish robust autonomous operations from fragile ones: how successful systems handle ambiguity, recover from errors, and maintain coherence across long time horizons without human correction.
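One of the degradation patterns mentioned above can be sketched as a retry-then-fallback wrapper: the system retries the primary path a bounded number of times, then falls back to a degraded mode instead of halting. The function names and retry policy here are illustrative, not a catalogued Institute pattern.

```python
import time


def with_degradation(primary, fallback, retries: int = 2,
                     backoff: float = 0.0):
    """Wrap `primary` so that repeated failures degrade gracefully to
    `fallback` (e.g. a cached answer or a conservative default)
    instead of crashing the autonomous operation."""
    def run(*args, **kwargs):
        for attempt in range(retries + 1):
            try:
                return primary(*args, **kwargs)
            except Exception:
                if backoff:
                    # Exponential backoff between retries.
                    time.sleep(backoff * (2 ** attempt))
        return fallback(*args, **kwargs)
    return run
```

The robust-versus-fragile distinction the paragraph draws often comes down to exactly this: fragile systems treat a dependency failure as fatal, while robust ones have a pre-decided degraded behavior for every external call.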
Failure mode analysis. Autonomous companies will fail — some spectacularly. We're studying failure modes proactively: misaligned optimization, cascading errors, adversarial exploitation, resource exhaustion, and ungovernable drift. Understanding how these systems break is essential to building ones that don't. We're developing a taxonomy of autonomous system failures drawn from both real incidents and simulation, with the goal of producing actionable checklists that builders can use to stress-test their own systems before deployment.
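The five failure modes listed above lend themselves to a checklist structure. The sketch below is a hypothetical shape for such a checklist, not the Institute's forthcoming taxonomy: each mode maps to a stress-test question, and a trivial runner reports which modes a system has no registered check for.

```python
# Illustrative taxonomy: the five failure modes named in the text,
# each paired with a stress-test prompt (wording is ours, not the
# Institute's).
FAILURE_MODES = {
    "misaligned_optimization": "Does the objective proxy diverge from intent under pressure?",
    "cascading_errors": "Can one component's failure propagate unchecked?",
    "adversarial_exploitation": "Can crafted inputs steer the system off-policy?",
    "resource_exhaustion": "Are spend, compute, and API budgets enforced with hard caps?",
    "ungovernable_drift": "Is long-horizon behavior periodically re-anchored to a baseline?",
}


def stress_test(registered_checks: dict) -> list[str]:
    """Return the failure modes for which a system has no check
    registered — the gaps a pre-deployment review should close."""
    return [mode for mode in FAILURE_MODES if mode not in registered_checks]
```

A checklist in this form is deliberately boring: the value is in forcing a builder to write down, mode by mode, what mitigation exists before deployment.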
Human oversight design. The hardest problem may be designing the right interface between autonomous systems and human judgment. Not too much oversight (which defeats the purpose) and not too little (which courts disaster). We're researching oversight patterns that are minimal, effective, and scalable. The key insight from our early work is that the quality of oversight depends less on how much humans review and more on what information they receive. An operator who sees the right signals at the right time can govern a complex autonomous system with minimal effort. An operator drowning in dashboards and alerts governs nothing.
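The "right signals at the right time" idea can be made concrete with a simple triage filter: score each event and surface only the top few to the operator, suppressing the rest. The scoring function (severity times novelty) and the event fields are our illustrative assumptions, not a finding from the research.

```python
def triage(events: list[dict], max_alerts: int = 3) -> list[str]:
    """Rank events by severity * novelty and surface only the top few,
    so the operator sees a short high-signal queue instead of a wall
    of dashboards. Field names are illustrative."""
    scored = sorted(events,
                    key=lambda e: e["severity"] * e["novelty"],
                    reverse=True)
    return [e["id"] for e in scored[:max_alerts]]
```

For example, a high-severity but routine event (disk nearly full) can score below a moderate-severity but novel one (an unrecognized counterparty), which matches the intuition that novelty, not raw volume, is what human judgment is for.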
Our guiding principle is practical utility. Every piece of research we publish should help someone build, govern, or understand autonomous companies better. Theory is necessary but insufficient. We want to produce work that builders reach for when they're making real decisions.
We'll publish findings as we go. Follow along.