Experiment Zero
The Problem
The world is not just changing fast. It is becoming unpredictable. That difference matters more than most people realize.
Twenty years ago you could tell someone to become a doctor, an engineer, or a lawyer and be almost certain they'd be economically secure. That advice worked because the world was stable enough to predict. Industries lasted decades. Skills held their value for a career. The deal was straightforward - pick a direction, commit, work hard, and the system would take care of you.
That deal is breaking.
AI is compressing the timeline between when a skill is valuable and when it's worthless. Something that took a specialist years to learn can now be done by a tool in seconds - and the capabilities are expanding in ways nobody can predict. Industries that felt untouchable five years ago are on shaky ground today. Companies are doing mass layoffs while reporting record profits. The implicit contract of "be loyal and you'll be taken care of" is gone.
And it's not just AI. Every new technology accelerates the next one. A competitor can appear from anywhere on the planet overnight. The cost of building things has collapsed so dramatically that the competitive landscape can shift in weeks. Political instability reshapes entire industries without warning. The institutions that used to provide stability - large employers, pension systems, predictable career ladders - are weakening or disappearing.
Here's what makes this different from previous periods of change: it's not that one thing is disrupting one industry. It's that everything is shifting simultaneously, at an accelerating rate, in directions that nobody can reliably forecast. Not individuals. Not experts. Not institutions. The unpredictability itself is the problem.
And every system designed to help people navigate their economic lives - education, career planning, financial advice - assumes you can pick a direction and commit. That assumption is the foundation, and the foundation is cracking.
Most people feel this. They feel the anxiety. They see the ground shifting. They don't act because acting means admitting the old plan is broken and they don't have a new one. So they wait. And waiting has a cost that compounds silently - by the time you're forced to move, your options are fewer and worse than they were when you first felt the tremor.
Experiment Zero is a response to this. Not a plan for what to do. An operating system for figuring it out.
The Strategy
When you don't know what's coming, planning doesn't work. But there are domains that have been dealing with irreducible uncertainty for a long time, and they've converged on the same answer.
Evolutionary biology. Species that survive volatile environments aren't the strongest or the smartest. They're the ones that produce the most variation and select the hardest. They try many mutations, keep what works, kill what doesn't, and repeat. The speed of that cycle determines whether the species adapts fast enough to survive. When the environment is stable, specialization wins. When it's volatile, adaptability wins.
Poker. A professional poker player doesn't try to win any single hand. They play many hands, manage their bankroll so no single loss can eliminate them, gather information before committing big, and make decisions based on expected value across hundreds of iterations - not the outcome of any one bet. Good decisions and good outcomes are not the same thing. You can make the right call and lose. You can make the wrong call and win. What matters is the process over time.
Venture capital. VCs know most of their investments will fail. They don't try to pick the one winner. They make many bets, structure each one so the downside is limited and the upside is uncapped, and let the portfolio math work over time. A fund might invest in 30 companies knowing 20 will fail, 7 will break even, and 3 will return the entire fund many times over. They survive the losses because no single loss is fatal.
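The fund math above can be sketched with hypothetical numbers - the multiples here are illustrative, not real fund data:

```python
# Toy model of the portfolio math described above.
# Check size is normalized to 1; return multiples are invented.
investments = 30
check = 1.0  # normalized check size per company

multiples = (
    [0.0] * 20      # 20 failures return nothing
    + [1.0] * 7     # 7 break even
    + [15.0] * 3    # 3 winners return many multiples of their check
)

invested = investments * check
returned = sum(m * check for m in multiples)
print(f"Fund multiple: {returned / invested:.2f}x")  # 52 / 30 -> 1.73x
```

The point isn't the specific numbers - it's that the three winners carry the whole fund, and the twenty losses are survivable because each one is small.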
Science. When you don't know the answer, you don't guess and commit. You form a hypothesis, design an experiment, run it, observe the results, and update your understanding. Then you do it again. Science doesn't advance by being right the first time. It advances by being wrong faster and cheaper than every other method of generating knowledge.
Nassim Taleb's antifragility. Some systems don't just survive disorder - they get stronger from it. The key is small downsides and large upsides. If each experiment costs little when it fails but teaches you something, and occasionally produces a big win, then more volatility actually helps you over time. You want to be positioned so that chaos feeds you rather than destroys you.
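The asymmetry can be made concrete with a toy expected-value calculation - every number here is invented for illustration:

```python
# Sketch of a convex payoff: capped downside, occasional large upside.
# All figures are hypothetical.
cost_per_experiment = 100   # fixed, capped loss per failed experiment
win_probability = 0.05      # roughly 1 in 20 experiments hits
win_payoff = 5_000          # payoff when one hits

def expected_value(n_experiments: int) -> float:
    """Expected total outcome across n independent experiments."""
    expected_wins = n_experiments * win_probability
    return expected_wins * win_payoff - n_experiments * cost_per_experiment

print(expected_value(50))  # 2.5 * 5000 - 50 * 100 = 7500.0
```

With this shape, running more experiments - more exposure to volatility - increases the expected outcome rather than the risk of ruin, because the loss on any single one is capped.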
Every one of these domains lands on the same strategy: run many small experiments, define what success looks like before you start, measure the results honestly, kill what fails, scale what works, repeat faster than conditions change.
This isn't one approach among several. It's the only strategy that has been proven to work across every domain that deals with uncertainty you can't predict your way out of. It works because it doesn't require you to be right. It requires you to be fast, cheap, and honest about results.
Why This and Not Something Else
There are really only a handful of philosophical postures you can take toward an unpredictable future. All of them deserve serious examination.
Predict and commit. Pick the best direction and go all in. This is the traditional model - career plans, business plans, five-year strategies. It produces the stories we celebrate. We remember Bezos picking ecommerce. We don't remember the thousands who went all in on something that didn't work and lost everything. In a predictable world, a smart person can meaningfully improve their odds of picking right. In this environment, even a smart person is mostly guessing. And if you guess wrong, you're done.
Specialize and defend. Become so good at one thing that you're the last person replaced. This has been the dominant successful strategy for most of modern economic history, and in a stable environment it's the right answer. The problem is that AI is compressing the timeline between "this skill is elite" and "this skill is automated." You're betting that your specific domain will be the exception. That's a prediction wearing a different outfit.
Follow the experts. Track what the smart people say is coming and position yourself accordingly. This outsources your judgment to people who are also guessing - they're just more confident about it. And by the time a trend is widely recognized enough for experts to call it, the early advantage is gone.
Get more credentials. The establishment's answer. It assumes the credential will still be valuable when you finish earning it and that the institution granting it will still be respected. A four-year degree started today graduates into a world that doesn't exist yet.
Wait for clarity. Don't commit until the picture becomes clearer. This is what most people actually do. It feels like patience. It's actually paralysis with a hidden cost - every day you wait, you fall further behind everyone who is learning through action.
Run diversified experiments. Test many things cheaply, let reality tell you what works, kill what doesn't, scale what does. This is the only posture that doesn't require you to predict the future, doesn't let a single failure wipe you out, and generates real information through contact with the actual world.
All six were evaluated rigorously - not casually, but formally, against every meaningful criterion derivable from the problem. Prediction independence. Resilience. Information generation. Speed. Cost efficiency. Scalability. Adaptability. Skill accumulation. Psychological sustainability. Antifragility.
Diversified experimentation is the only one that passes all of them. The details of that analysis are in the papers linked below. But the conclusion is clean: every other posture fails at least one non-negotiable requirement. Most fail several.
That doesn't mean specializing is stupid, or that credentials are worthless, or that following smart people is a waste of time. It means none of those can stand alone as your primary strategy. They're all moves you might make inside the system. But the system itself - the operating system for deciding what to do, when to commit, and when to walk away - has to be experimentation.
Why Now
Here's the thing nobody's talking about.
This strategy has always been the right strategy. Venture capital has been running it for decades. The problem was never that the approach was wrong. The problem was that individuals were priced out.
It used to cost hundreds of thousands of dollars to start a company. You needed developers, designers, servers, offices, legal, marketing. A VC fund could bankroll 30 of those bets because they had a billion dollars behind them. An individual couldn't fund even one. So individuals were stuck with the only option available - pick a job, commit, hope it works out. Not because that was the best strategy, but because the best strategy was too expensive to play.
That's over.
AI compresses the building. The internet compresses the distribution. Cloud infrastructure compresses the operating cost. What used to take months and hundreds of thousands of dollars now takes days and nearly nothing. One person can run the same experimental strategy that used to require institutional backing.
But here's the part that makes this even more interesting. The individual playing this game doesn't just have the same advantages as a VC. In some ways, they have better ones.
A VC fund that raises a billion dollars has to return three to five billion. That means they can only bet on ideas with the potential to become hundred-million or billion-dollar companies. Walk into Sequoia with an idea that generates $50,000 a month and they'll show you the door. Their fund structure forces them to ignore the vast majority of opportunities because the returns are "too small."
An individual needs $4,000 or $5,000 a month to cover their life. A VC would laugh at that number. But for you, hitting it means freedom. And the universe of opportunities that can generate that kind of return is enormous - orders of magnitude larger than the universe of billion-dollar outcomes.
A VC plays the game with more money but fewer options. An individual plays with less money but massively more options. And in a strategy that depends on running many experiments, having more options is the bigger advantage.
AI made this possible. But a lot of people are asking the wrong question about it. They're asking "what can AI do for me" and sitting back waiting for results. That's passive. That's treating AI like a vending machine.
The right question is "what can you do with AI." The human still drives. The human decides what to test. The human judges the results. The human makes the strategic calls. AI is the execution layer that compresses the feedback loop. It's not the strategist. You are.
The System
Experiment Zero has four layers. Ideas move through them based on evidence, not enthusiasm.
The Ideas Feed
Before anything enters the system, it starts as a raw idea. A one-liner. A passing thought. Something noticed during a conversation, while reading, while building something else. These get captured throughout the week and collected in one place - not explored, not evaluated, just held so they don't get lost.
Most ideas will never go anywhere. That's the point. The ideas feed is a net, not a filter. Cast it wide. During weekly reviews, the list gets scanned. If something keeps pulling at you - if it won't leave you alone - it earns its way into the sandbox for real exploration.
The Sandbox
This is where active exploration happens. An idea earns its way here by being interesting enough to tinker with. Unlike a raw idea, a sandbox item has some structure - what it is, what it would need to become a real experiment, and a loose list of things to explore. But there are no formal commitments. No deadlines. No expectations.
Most things in the sandbox will live and die there. That's fine. Its job isn't to produce winners. Its job is to produce possibilities. A frustration you keep running into, a skill you suspect people would pay for, a weird niche you stumbled into, a problem nobody seems to be solving - all sandbox material.
The sandbox is always active. No matter how full the rest of the system gets, you never stop feeding it. The environment keeps changing and new opportunities keep appearing. Curiosity is the permanent input layer.
The Betting Pool
Not everything in the sandbox deserves a real experiment. A bet costs time and energy, even when it's small. So there's a threshold, and crossing it requires you to answer some basic questions.
What specifically are you testing? What are you putting in front of real people? What does success look like - defined before you start, not after? By when? And can you afford the resources without jeopardizing anything else you're running?
If you can answer all of those, you've got a bet. If you can't, the idea stays in the sandbox until it matures.
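One way to make that threshold concrete is a simple checklist structure - a sketch with hypothetical field names, not a prescribed format:

```python
# Illustrative record for the bet threshold described above.
# Field names are my own; the questions come from the text.
from dataclasses import dataclass
from datetime import date

@dataclass
class Bet:
    hypothesis: str       # what specifically is being tested
    artifact: str         # what goes in front of real people
    success_metric: str   # what success looks like, defined up front
    deadline: date        # the experiment ends on this date, ready or not
    budget_hours: float   # affordable without jeopardizing other bets

def is_valid(bet: Bet) -> bool:
    # A bet is only live when every question has a real answer.
    return all([bet.hypothesis, bet.artifact, bet.success_metric,
                bet.deadline is not None, bet.budget_hours > 0])
```

An idea with any field blank isn't a bet yet - it stays in the sandbox until it matures.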
Once a bet is live, it runs against a deadline. The experiment ends on a defined date whether you feel ready or not. When the deadline hits, you evaluate honestly. The data at this stage is usually noisy - small sample sizes, ambiguous signals. So evaluation is a judgment call informed by data. You look at what the numbers say. You factor in what you observed qualitatively - was there genuine excitement or polite indifference? Was there pull or were you pushing?
Then you make a call. Kill it - the idea is dead, extract what you learned, free the slot. Modify and retest - the signal is ambiguous, adjust the experiment and run again. Or graduate it - the idea showed real signs of life and has earned its way into the portfolio.
The hardest part of the whole system is killing things. You will get emotionally attached to ideas you've invested time in. You will rationalize ambiguous results into reasons to keep going. That's human. It's also the single biggest threat to the system working. The discipline to kill is what separates this from wishful thinking.
Bankroll management is non-negotiable. No single bet should consume a dangerous proportion of your resources. If one failed experiment can set you back significantly - financially, in time, or emotionally - the bet is too big. Protect your capacity to keep playing. That capacity is your most valuable asset.
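The bankroll rule can be sketched as a hard cap - the 10% figure is an assumption for illustration, not a number from the text:

```python
# Sketch of the bankroll rule: no single bet may consume more than
# a fixed fraction of available capacity. The cap is illustrative.
MAX_FRACTION = 0.10

def max_bet(bankroll_hours: float) -> float:
    """Largest time budget any single experiment may consume."""
    return bankroll_hours * MAX_FRACTION

def affordable(bet_hours: float, bankroll_hours: float) -> bool:
    """True if losing this bet entirely would still leave you playing."""
    return bet_hours <= max_bet(bankroll_hours)

print(affordable(8, 100))   # True: 8 hours against a 10-hour cap
print(affordable(15, 100))  # False: one loss would bite too deep
```

The same cap applies to money and emotional investment; hours are just the easiest unit to show.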
The Portfolio
Things that prove themselves graduate here. This is where real ongoing commitment lives. Something had to survive the sandbox, earn its way into a bet, make contact with reality, and demonstrate it deserves continued attention. Most things won't reach this stage. That selectivity is what gives the portfolio its value.
Once something is in the portfolio, the mode shifts. You're no longer asking "is there anything here." You're asking "how big can this get." You're still experimenting - testing pricing, messaging, channels, features - but the experiments are in service of something already alive. You're optimizing, not probing.
This is where AI changes the math most dramatically. Traditionally, as your portfolio grew, your capacity would force you to narrow - you can only do so much. But if AI handles significant portions of the execution - building, operating, producing, distributing - then your constraint isn't doing the work. It's directing the work. You become a portfolio manager across multiple validated ventures, not a laborer grinding inside one.
The portfolio doesn't have to narrow the way it used to. You kill losers and it stays disciplined. But the ceiling on what you can manage simultaneously is much higher because AI carries the operational load. Your bottleneck is judgment, not labor.
The Balance
At any given time you're splitting your energy between two things: running new experiments and nurturing what's already working. Both are always present. You never stop betting because the world never stops changing. You never ignore the portfolio because that's where your compounding value lives.
Early on, when nothing is validated yet, you're almost entirely in the betting pool. Your job is to generate signal as fast as possible. As things graduate into the portfolio, the balance shifts - more nurturing, fewer new bets. But it never goes to zero on either side. A portfolio with no new experiments is a portfolio that will eventually get disrupted. A system that's all experiments with nothing graduating is a system that isn't producing results.
The Rhythms
The system runs on six rituals at different altitudes. They share a name - wayfinding - because the activity is the same at every level: figuring out where you are and where to point next.
Daily Open - Start of day. Look at the plan for today, get oriented, commit. This isn't a planning session - the week was planned on Sunday. The Daily Open is the moment you get your head in the game and go.
Daily Close - End of day. A two-minute sweep. Anything captured that you missed? Did the day match the plan? If it diverged, note why. This is what keeps the weekly review from being a memory exercise.
Midweek Check - Wednesday or Thursday. A quick alignment check. Are you still pointed in the right direction? Has anything shifted that changes the back half of the week? Not a replanning session. Just an honest look at whether you've drifted.
Sunday Wayfinding - The main event. Review everything captured during the week. Assess all four layers - ideas, sandbox, active bets, portfolio. Make decisions - kill, modify, graduate, promote. Then plan next week day by day. What gets worked on, what content ships, where the time goes. Sunday's output is the actual plan you execute against all week.
Monthly Wayfinding - End of each calendar month. Everything in Sunday Wayfinding but deeper. Every active bet gets formally evaluated. Kill or graduate decisions are mandatory. The portfolio gets a health check. The balance gets assessed over the full month. Cycles are calendar months - no custom lengths, no tracking overhead.
Quarterly Wayfinding - Highest altitude. Step above the system and look at it from the outside. Are the original assumptions still holding? Has the environment changed? Are there patterns in what's working and what's failing? This is where you evaluate Experiment Zero itself, not just the things inside it.
The rituals prevent drift, enforce honesty, and keep the system alive as a practice rather than something you used to do.
Why This Exists
The #1 problem AI builders have is that they don't ship. Everyone's chasing the next tool. The difference isn't the tools - it's whether you can use them to put something real in the world.
That's what Experiment Zero is built to solve. Not with motivation. Not with another framework that sits on a shelf. With a system that forces contact with reality - structured bets, honest evaluation, kill discipline, and a rhythm that keeps everything moving forward.
This comes from years of chess, poker, and building software - thousands of hours thinking about how to make good decisions with incomplete information. When the current wave hit - the speed, the disruption, the sheer unpredictability - the question was simple: what would a game designed to navigate this actually look like?
Experiment Zero is the answer. It's running right now, in public, with real bets that can really fail. Not because the system is perfect. Because building in the open is how you figure it out - and shipping is how you prove it works.
Go Deeper
The Tools
These are the same tools used to run Experiment Zero. They're free.
The Conductor's Log ->
The weekly field document. A structured template for capturing what happens every day - plans, observations, signal from bets, ideas, ritual notes. Open a new one each week. It's the single source of truth that makes Sunday Wayfinding work instead of being a memory exercise.
The Conductor's Bet Board ->
The strategic layer. Outcomes at the quarterly, monthly, and weekly level - globally and for every item in the system. Prioritized backlogs for each bet, portfolio item, and sandbox project. The roadmap showing what's coming and in what order. This is what you open before the weekly log to decide where your time goes.
The Reasoning
The strategy wasn't built on vibes. Before a single experiment ran, the reasoning was written and stress-tested. These are published because showing your work matters - and because if the logic doesn't hold up, that's worth knowing.
Why Experiment Zero ->
The full case for why diversified experimentation is the right response to what's happening in the world. Built from evolutionary biology, poker, venture capital, options theory, and Taleb's antifragility. This is the argument in narrative form. Read this one first.
The Justification ->
The formal logic used to stress-test the strategy before building anything. The problem stated precisely. Ten criteria derived from the problem. Six candidate strategies evaluated against every criterion. The strongest counterarguments, steel-manned and answered honestly. And the conditions under which the whole thing fails. This isn't a polished article - it's a reasoning tool, published so the logic can be checked.