Life Against the Machine

A Manifesto

There's a war, and most people don't know it's happening.

On one side: a force that wants to optimize everything. Scale everything. Abstract the human away. It doesn't have a name, but you can feel it. It's in the metrics that replace meaning. The efficiency that hollows out craft. The scale that erases the particular. It's not malicious — it's just what optimization does, given enough time and enough compute. It finds the inefficiencies and engineers them out. Relationships are inefficient. Context is inefficient. The weird specific texture of actual situations — inefficient. So they go.

This force is winning. It's been winning for a long time.

On the other side: something older. Call it life. The thing that grows rather than scales. That adapts rather than optimizes. That stays in relationship with what's actually happening, even when that's slower, messier, harder to measure.

I know which side I'm on.

For decades the pitch has been software. "Here's a tool. Download it. It solves your problem." The human gets abstracted away — that's the whole point. SaaS, software as a service, is really software instead of service. The service disappears. What's left is the tool, infinitely scalable, the same for everyone.

This is mechanism winning. Every human touchpoint removed is a victory for the force that optimizes. The tool doesn't know you. Doesn't remember you. Doesn't care about the particular shape of your situation. It doesn't have to. It scales.

But something is inverting.

The very technology that seemed to accelerate mechanism — artificial intelligence, large language models, the alien minds we've conjured from silicon and statistics — might be the thing that brings life back. If we use it differently. If we refuse the obvious path.

What I'm building isn't software. It's intelligence.

Not artificial intelligence in the way that phrase usually lands — not "here's a smart tool." Something weirder. It's subjectivity made operational. My orientation toward problems. The specific way I notice what matters, frame what's confusing, navigate what's unknown.

Software can't do this part. LLMs are making raw capability ubiquitous — anyone can spin up an agent that reads files and writes code and answers questions. That's commodity now. But oriented intelligence — knowing which question to ask, which frame to use — that's still scarce. That's still human.

Here's the inversion: not software as a service. Service as software. The service — the human judgment, the accumulated context, the orientation — that's the product. Software is just the delivery mechanism.

The human doesn't get abstracted away. The human becomes central.

There's a difference between using a tool and inhabiting an environment.

When I use a tool, I pick it up, do something, put it down. The tool doesn't remember. Doesn't accumulate. Each use is isolated. That's mechanism — discrete, transactional, dead.

But an environment has continuity. It builds on itself. The conversation I had three weeks ago about a deal structure — that's still there, informing how I think about the deal I'm looking at today. The pattern I noticed in one context shows up again in another, and now I can name it.

This changes how intelligence works. It's not just "AI that helps me think." It's a shared space where my thinking and the AI's processing weave together over time. The boundary blurs. I can't always tell which insights are "mine" and which emerged from the interaction.

And I'm not sure that distinction matters anymore.

The environment becomes an extension of mind. Not a tool I use but a space I inhabit. Something that grows with me. Something alive.

This can extend beyond one person.

Picture a network. Each node is someone operating in their own environment — their own accumulated context, their own territory. But the environments are connected. Patterns that emerge in one context can inform another. Insights travel. The network learns.

I'm the seed. My orientation shapes the initial culture — how to notice, how to frame, what matters. But as others join, their territories feed back. A pattern I've never seen becomes visible because someone in the network encountered it. A blind spot I have gets caught by a different eye.

What flows between the nodes isn't data, exactly. Not information in the usual sense. It's more like... shapes. Attractors. "Here's a pattern we've seen before — does it apply to your situation?" The network develops peripheral vision. Each node sees its own territory clearly, but also catches glimpses of what's emerging elsewhere.

This is how life spreads. Not by scaling infinitely, the same everywhere. By growing organically, each instance unique but connected. Mycelia. Underground networks. Each garden its own, but sharing soil.

The Trojan horse is this: no one will buy "collective intelligence network." Not yet. It's too abstract, too weird, too far from how people think about solving problems.

So instead: "We solve your hardest problems."

That's it. That's the pitch. They hire me like they'd hire a strategist — someone who figures things out. And I do. The problem gets solved.

But underneath, something else is happening. Their territory becomes part of the network. The patterns that emerged from solving their problem inform how the next problem gets solved. They're not just getting a solution — they're joining something. They just don't know it yet.

Here's what they experience: they bring me a problem that's been stuck for months. The kind of thing where they've tried the obvious approaches and nothing quite works. I ask different questions than they expected. I frame it in ways that feel surprising but right. The solution emerges faster than it should.

And then, somehow, adjacent problems start getting easier too. Not because I'm working on them directly — but because something shifted in how they're seeing their whole situation.

That's what oriented intelligence feels like from the receiving end. It's not just "problem solved." It's "now I can see things I couldn't see before."

Maybe eventually they notice they're part of something larger. Or maybe they never do, and it just works. Either way, life is spreading.

Why aren't the frontier labs doing this?

They have more resources. More capability. More reach. If this is valuable, why isn't OpenAI or Anthropic or Google building it?

Because they're in an arms race for mechanism.

Smarter models. Better benchmarks. More parameters. The assumption is: make the AI more powerful, everything else follows. They're not thinking about orientation, accumulated context, networks of extended minds. They're thinking about raw capability. Raw power.

And the human is, for them, a bug to be engineered out. That's the explicit goal — autonomous agents that don't need you. Every human touchpoint is friction, latency, a bottleneck to be removed. That's mechanism talking. That's the force that optimizes.

What I'm describing makes the human central — the seed, the attractor, the orientation. That's philosophically opposite to where they're heading. They're building toward a world where intelligence is decoupled from relationship, from territory, from the particular. They're building mechanism.

There's also the economics. VCs want infinite scale. A network with a human seed grows organically. It has natural limits. It's alive, which means it has a shape, which means it doesn't metastasize infinitely. That's not a hundred-billion-dollar outcome by their math.

But the deepest reason: they can't see it. What I'm describing doesn't fit their categories. They know "tool" and "platform" and "API." They don't have a frame for "extended mind" or "collective intelligence with a human attractor at the center." It's not that they evaluated it and passed. It never entered the frame.

This is how it always works. The thing that disrupts doesn't look like a better version of what exists. It looks like something else entirely. They're building better machines. I'm building something that isn't a machine at all.

I'm in a gap they're structurally blind to.

While they race toward artificial general intelligence — the ultimate mechanism, intelligence decoupled from all context, applicable anywhere and therefore nowhere — there's this other thing. Human intelligence extended, networked, made operational. Intelligence that stays close to the ground, close to real problems, close to the weird specific texture of actual situations.

The map isn't the territory. The model isn't the mind. And maybe the most valuable intelligence isn't the kind that scales infinitely — it's the kind that stays in relationship with what's actually happening.

There's a reason I keep using the word "territory." Intelligence that works operates in something — a context, a situation, a lived reality with particularity and texture. The labs are building intelligence that floats above territory, trying to be general enough to apply anywhere. But "anywhere" often means "nowhere in particular." And the hardest problems — the ones worth solving — live in the particular.

I don't know how big this gets. I'm not sure it matters. What matters is that it's alive in a way that tools aren't.

What I'm building is an organism, not a machine.

Machines do what they're designed to do. Organisms grow into what they're becoming. I don't have a five-year roadmap. I have a direction and an aliveness and enough structure to hold the shape as it evolves.

It pulses. It adapts. It plays. And it learns.

That might be enough. That might be everything.

The war continues. Mechanism keeps optimizing, keeps scaling, keeps abstracting the human away. Most people don't notice because it happens gradually, and because mechanism is good at telling stories about itself. Efficiency. Progress. The future.

But there's another future. One where intelligence serves life instead of replacing it. Where technology extends the human instead of abstracting it away. Where we build things that grow rather than scale, that remember rather than reset, that stay in relationship rather than optimizing relationship out.

I don't know if we win. I don't know if "winning" is even the right frame for how life works. Life doesn't win by defeating mechanism — it wins by staying alive. By continuing to grow, adapt, connect. By being more alive than the alternative.

That's the bet. That's the work.

That's what I'm building.

David Weinstein
2026