Launching the Institute Site

Why the field needs a public home that combines conceptual seriousness with builder usefulness.


This site exists because the field has a gap.

There is plenty of AI discourse. Predictions about artificial general intelligence. Debates about alignment. Speculation about what agents will do next quarter, next year, next decade. What there is much less of is public infrastructure for people trying to think clearly about autonomous companies as actual firms — systems with memory, workflows, control surfaces, governance, and real economic implications.

The Institute for Autonomous Companies is meant to help close that gap.

The company is changing shape

Something structural is happening to the firm. Not in the distant future. Now.

In March 2026, the Federal Reserve chair confirmed that net private-sector job creation had fallen to zero. Block cut 40% of its workforce, citing "intelligence tools." HSBC announced 20,000 job reductions through AI restructuring. Fiverr laid off 30% of its staff to become "AI-first." These are not isolated events. They are early indicators of a transition in what a company is, how it operates, and how many humans it requires.

The traditional firm is a coordination mechanism for human labor. People are hired to perform cognitive and physical work. Management exists to align that work toward shared objectives. The organizational chart, the office, the meeting, the performance review — all of it is infrastructure for coordinating humans.

When agents can perform meaningful cognitive work — writing code, managing customer relationships, executing marketing campaigns, analyzing data, making operational decisions — the firm no longer needs to be organized around human coordination. It can be organized around agent orchestration. The company becomes a system, not a headcount.

This is not a hypothetical. There are already platforms running hundreds of companies autonomously with zero employees — handling engineering, marketing, operations, and customer support through specialized agents. These companies have no human workers. They have control planes.

We are watching the company change shape in real time. The question is whether we have the vocabulary, the frameworks, and the operational knowledge to build these new structures well — or whether they emerge by accident, without governance, without foresight, and without anyone thinking clearly about what happens next.

What is missing

The current discourse around autonomous companies suffers from three problems.

The first is conceptual poverty. Most discussion treats autonomous companies as "regular companies but with AI." This misses the deeper transformation. An autonomous company is not a firm that uses ChatGPT. It is a fundamentally different organizational form — one where the execution layer is non-human, the memory layer is persistent and machine-readable, and the control plane must be explicitly designed rather than emerging from human intuition and hierarchy.

We lack a shared vocabulary for the components of an autonomous firm. What is the control plane? How does the memory layer work? What does governance look like when there are no employees to govern? How does the firm maintain coherence when its agents operate asynchronously across different tools and timelines? These are not abstract questions. They are engineering problems that every builder in this space confronts, and most are solving them from scratch because no public body of knowledge exists.

The second is a theory-practice divide. Academic research on AI and organizations tends toward the theoretical. Builder communities tend toward the tactical — framework comparisons, prompt engineering, tool selection. Almost nobody occupies the middle ground: rigorous thinking about autonomous company architecture that is immediately useful to someone building one.

The institute is designed for that middle ground. Research should be conceptually serious and builder-useful. Every paper should give you something you can implement, and every implementation should be grounded in a framework that helps you understand why it works.

The third is a governance vacuum. Autonomous companies are being built right now with no established norms about how they should operate. Who is responsible when an agent makes a bad decision? How do you audit a system that runs 24/7 without human supervision? What happens when the autonomous company's interests diverge from its owner's? From its customers'? From the public's?

These are not questions for later. They are questions for now, because the systems being built today will establish the defaults that are difficult to change tomorrow. The too-big-to-fail dynamic applies here: once an autonomous company is embedded deeply enough in its market, removing or restructuring it becomes prohibitively expensive. The governance must be designed in, not bolted on.

What the institute publishes

The institute produces four types of work:

Research on autonomous company architecture and governance. This includes papers on control planes, memory systems, agent orchestration patterns, failure modes, and the economic implications of firms that operate without human labor. The research is opinionated — we have a point of view about what good autonomous company design looks like — but it is grounded in real implementations, not speculation.

Tools and reference implementations for agent operations. Working code matters more than working papers. Where possible, we publish tools that builders can use directly: orchestration patterns, control surface designs, monitoring frameworks, and governance templates.

Guides for building the specific subsystems that autonomous companies require. Memory layers. Orchestration hierarchies. Execution infrastructure. Control planes. Human-in-the-loop patterns for high-stakes decisions. These are practical, implementation-level documents written for engineers.
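A human-in-the-loop pattern for high-stakes decisions can be sketched as a decision gate that executes low-stakes actions autonomously and escalates above a threshold. This is a hypothetical illustration: the `DecisionGate` name, the stake threshold, and the `request_human_approval` callback are all assumptions, not an established interface.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class DecisionGate:
    """Runs low-stakes actions autonomously; escalates high-stakes ones
    to a human reviewer before execution."""
    threshold: float
    request_human_approval: Callable[[str], bool]
    log: list[str] = field(default_factory=list)

    def execute(self, action: str, stake: float,
                run: Callable[[], str]) -> Optional[str]:
        if stake >= self.threshold:
            self.log.append(f"escalated: {action}")
            if not self.request_human_approval(action):
                return None  # human vetoed; nothing executes
        else:
            self.log.append(f"autonomous: {action}")
        return run()
```

The design choice worth noting is that the gate logs the routing decision itself, so an auditor can later ask not only what the system did but which decisions it chose not to surface to a human.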

Field notes from active builders. The institute maintains an archive of dispatches from people building and operating autonomous companies. These are not polished case studies. They are honest accounts of what works, what breaks, and what no one warned you about.

Who this is for

The institute is for builders — engineers, founders, and operators who are constructing autonomous companies or autonomous subsystems within existing firms. It is for people who want to think clearly about what they are building, not just ship faster.

It is also for researchers, policymakers, and analysts who need grounded, technical perspectives on what autonomous companies actually look like in practice — as opposed to what they look like in pitch decks and press releases.

It is not for people looking for hype, AI influencer content, or tools to "supercharge their workflow." The autonomous company is a serious organizational form with serious implications. The institute treats it accordingly.

The broader context

We are publishing this at a specific moment. In March 2026:

The global economy is bifurcating. Consumer spending data shows luxury goods growing 8-12% while mass market spending is flat. Analysts describe this as a K-shaped recovery — prosperity at the top, exhaustion at the bottom. The middle is hollowing out.

AI inference is subsidized. The major labs are spending more than they earn to maintain artificially low prices. When those subsidies normalize — and they will — the economics of every AI-dependent system will change. Autonomous companies that depend on cheap inference may discover their unit economics were built on sand.

Hardware is getting scarce and expensive for consumers. Memory prices surged 90% in the first quarter of 2026 as data centers absorb the majority of global chip production. The tools needed to run local AI are being priced out of reach for individuals and small operators.

Platforms are locking out automation. Major platforms are considering identity verification to block bots. AI-generated content is reducing conversion rates in some channels. The surfaces where autonomous companies need to operate are becoming hostile to non-human actors.

Governments are beginning to respond. There are growing calls for moratoriums on new AI data center construction. The EU AI Act reaches enforcement deadlines later this year. Political backlash against AI is forming faster than most in the industry acknowledge.

These are not peripheral concerns. They are the operating environment for anyone building an autonomous company. The institute exists partly to ensure that builders are thinking about this context — not just the technical architecture, but the economic, political, and social terrain their systems will have to navigate.

What we believe

We hold a small number of convictions that inform the institute's work:

The autonomous company is a real and consequential organizational form. It is not a gimmick, a marketing term, or a temporary phenomenon. The economics of AI make some degree of autonomous operation inevitable for most firms. The question is not whether it happens, but whether it happens well.

Architecture matters more than capability. The bottleneck in autonomous companies is not model intelligence. It is the design of control planes, memory systems, governance structures, and human-in-the-loop interfaces. A well-architected autonomous company with current-generation models will outperform a poorly architected one with frontier models.

Governance must be designed in, not added later. The defaults established now will be difficult to change. Every autonomous company being built today is a precedent. The institute takes governance as seriously as orchestration — they are equally fundamental to the system.

Public knowledge accelerates the field. Most operational knowledge about autonomous companies is currently private — locked inside individual companies, undocumented, and learned through painful trial and error. Making this knowledge public benefits everyone building in the space, raises the quality of what gets built, and creates shared standards that the field needs.

Radical honesty about limitations is essential. Current AI systems are powerful but brittle. Agents excel at well-represented patterns and fail at novel composition. Scaling may be plateauing. Inference costs will rise. The autonomous company is real, but it is not magic, and the institute will not pretend otherwise.

What comes next

The site launches with an initial body of research, an archive for ongoing field notes, and resource sections for tools and guides. The research will expand as the field develops. Contributions from external builders are welcome.

We are particularly interested in work on: company formation and control surface design, execution infrastructure for agentic work, governance and failure analysis for autonomous systems, the economic implications of firms that do not employ humans, and the relationship between autonomous companies and the broader labor market.

If you are building an autonomous company, studying one, or thinking seriously about what they mean for the economy and society, this site is for you.

The field needs a public home. This is it.
