March 2026

Non-Hype Manifesto of 90s CS Graduates

A mindset for those who learned to think before they learned to brand

Final Draft


We came of age in a period when computing still had friction.

Memory was finite.
Bandwidth was slow.
Machines failed loudly.
Cost was immediate.

This mindset is not bound to a demographic. It is bound to a reality: constraints never disappear; they only return in more fashionable clothing.

Now we are told that code is cheap, intelligence is abundant, agents are here, and the future belongs to those who can orchestrate probabilistic interns at scale.

Very well.

We are not anti-AI.
We are anti-nonsense.

We do not deny that something profound is happening.
We deny only the childish claim that history has ended and judgment is obsolete.

So here is our position.

Part I

What Has Changed

Section 01

Implementation is being commoditized

Code generation is real. Syntax matters less. Routine implementation is increasingly machine-cheap.

That does not mean software engineering has disappeared. It means the center of gravity is shifting — unevenly, and at different speeds across organizations — from writing code toward:

  • specifying intent
  • designing context
  • constraining ambiguity
  • validating behavior
  • observing runtime reality
  • managing blast radius
  • carrying responsibility

The keyboard has not vanished. It has simply moved one floor up the building.

Section 02

Ambiguity is now a first-class engineering problem

For decades, programming languages disciplined ambiguity by forbidding most of it.

Now we build systems through natural language, prompts, specs, context packs, agent plans, and half-formal instructions. Fine. But then let us speak honestly:

If the spec is the code, then unclear language is not a documentation problem. It is a systems problem.
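Treated seriously, that claim has operational teeth: if specs are load-bearing, they can be linted like code. A minimal sketch in Python, with an ambiguity-marker list that is purely illustrative, not a standard:

```python
import re

# Phrases that typically signal unresolved ambiguity in a spec.
# Illustrative only; a real team would maintain its own list.
AMBIGUITY_MARKERS = [
    r"\bas appropriate\b", r"\bif necessary\b", r"\betc\.?\b",
    r"\bshould probably\b", r"\bsomehow\b",
]

def lint_spec(text: str) -> list[tuple[int, str]]:
    """Return (line_number, marker) pairs for every ambiguity hit."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in AMBIGUITY_MARKERS:
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append((lineno, pattern))
    return hits

spec = """The agent should retry failed calls if necessary.
Invalid input must be rejected with error code 422."""
for lineno, marker in lint_spec(spec):
    print(f"line {lineno}: ambiguous phrase matches {marker}")
```

The first line of that toy spec fails the lint; the second, which names a concrete behavior and a concrete code, passes. That is the distinction between documentation style and systems defect made executable.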

Section 03

Memory constraints have returned

Once upon a time, the constraints were obvious: RAM, disk, wire speed, CPU time.

Now the constraint has returned in more sophisticated clothing: context windows, token budgets, session amnesia, retrieval gaps, and institutional forgetting.

The answer is not simply "give it more tokens."
The answer is to engineer memory:

  • state management
  • context packs
  • decision logs
  • precedents
  • explicit handoffs

The 90s are back, except the memory bug now lives in organizations as much as in machines.
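What "engineer memory" can look like in practice, sketched in Python. The class names and the crude word-count token estimate are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One entry in an append-only decision log."""
    summary: str          # what was decided
    rationale: str        # why, so the precedent survives the session
    pinned: bool = False  # pinned decisions always survive trimming

@dataclass
class ContextPack:
    """Explicit memory handed to the next session or agent."""
    budget_tokens: int
    log: list[Decision] = field(default_factory=list)

    def record(self, summary, rationale, pinned=False):
        self.log.append(Decision(summary, rationale, pinned))

    def render(self) -> str:
        """Emit pinned entries first, then newest-first, within budget."""
        ordered = ([d for d in self.log if d.pinned]
                   + [d for d in reversed(self.log) if not d.pinned])
        lines, used = [], 0
        for d in ordered:
            text = f"- {d.summary}: {d.rationale}"
            cost = len(text.split())  # crude stand-in for a tokenizer
            if used + cost > self.budget_tokens:
                continue
            lines.append(text)
            used += cost
        return "\n".join(lines)
```

The point is not the data structure; it is that forgetting becomes a deliberate, inspectable policy instead of an accident of the context window.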

Part II

What the Hype Gets Wrong

Section 04

Inevitability is not an argument

Every era invents a priesthood that says: Resistance is futile.

Usually this means someone is selling infrastructure, someone is raising money, and someone else wants labor to feel outdated before the numbers prove it.

Some AI capabilities do follow genuine structural trajectories — cost curves, scaling laws, hardware improvements. We do not deny those. But trajectory is not destiny, and a cost curve is not a civilization plan.

Steam engines changed history. So did bubbles.
The skill is telling them apart in real time.

Section 05

Confidence is not comprehension, and demos are not deployments

Machines produce fluent explanations, plausible architectures, and passable implementations. This is useful. It is also dangerous.

The hype machine reports velocity gains and viral demos. It reports much less eagerly on silent failures, compounding errors, brittle integrations, and catastrophic edge cases.

A workflow that is 95% magical may still be operationally unacceptable if the 5% failure mode is loss, harm, broken trust, or systems nobody can repair.
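The arithmetic behind that claim, with illustrative numbers (a 30-minute saving on success, a 20-hour cleanup on failure; both are assumptions, not measurements):

```python
# Illustrative numbers: the workflow succeeds 95% of the time and
# saves 30 minutes; the 5% failure costs 20 hours of cleanup.
p_success, saved_min = 0.95, 30
p_fail, cleanup_min = 0.05, 20 * 60

expected_gain = p_success * saved_min - p_fail * cleanup_min
print(expected_gain)  # negative: the workflow loses time on average
```

Ninety-five percent magical, net-negative in expectation. The failure mode's cost, not the success rate, decides whether the workflow is acceptable.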

Section 06

"Human-in-the-loop" is often ceremonial

Sometimes humans genuinely constrain the system. Sometimes they are there to absorb liability, comfort regulators, and press the final button on a process whose real decisions happened elsewhere.

We demand honesty about that distinction.

If the human is only there to bless machine momentum, say so.
If the human genuinely constrains the system, design for that seriously.

Section 07

The economics are political, not just technical

The future of software is shaped not only by technical possibility, but by token cost, compute concentration, cognitive lock-in, and platform dependency.

A world in which productive thinking requires paid cognitive infrastructure is not merely a technical shift. It is a political economy.

"Open source" may remain a philosophy while actual capability becomes increasingly metered.

Part III

What Still Matters

Section 08

Accountability does not automate

In every important system, somebody still carries the burden:

a pilot,
a surgeon,
an operator,
an engineer on call at 2:17 a.m.

The machine may generate the artifact.
The human still absorbs the consequence.

Any serious discussion of agentic AI must answer:

  • Who approves?
  • Who reviews?
  • Who remembers?
  • Who rolls back?
  • Who knows enough to fix it when the chain of cheerful automation snaps?
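Those questions can be enforced rather than merely asked. A minimal sketch, with invented names (`ActionEnvelope`, `execute`), that refuses any action whose accountability fields are empty:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ActionEnvelope:
    """An agent-proposed action plus the humans accountable for it."""
    description: str
    approver: Optional[str] = None                  # who approves
    reviewer: Optional[str] = None                  # who reviews
    rollback: Optional[Callable[[], None]] = None   # who rolls back

def execute(envelope: ActionEnvelope, action: Callable[[], None]) -> None:
    """Run the action only if accountability is actually assigned."""
    missing = [name for name, value in [
        ("approver", envelope.approver),
        ("reviewer", envelope.reviewer),
        ("rollback", envelope.rollback),
    ] if value is None]
    if missing:
        raise PermissionError(f"refusing '{envelope.description}': "
                              f"missing {', '.join(missing)}")
    action()
```

A few lines of refusal logic, and "who rolls back?" stops being a retrospective question.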

Section 09

Labor mutates; it does not vanish

When executives say AI removes work, we ask:

  • Which work?
  • For whom?
  • Under what risk?

Often the labor has not disappeared. It has mutated.

The engineer writes less code and more:

  • specification
  • validation
  • review
  • incident analysis
  • correction of machine overreach
  • governance of systems they did not entirely compose

That is not the end of labor.
That is labor repackaged and renamed.

We are happy to automate drudgery.
But we object when people automate the visible part of labor and then declare the invisible part solved.

Section 10

The apprenticeship gap is a real risk

A civilization that automates the beginner path risks making expertise scarce.

If young engineers no longer learn by wrestling with systems, tracing failures, debugging pain, and slowly building taste, then the pipeline of people capable of genuine judgment narrows.

This is not inevitable. Judgment can also develop through design review, incident response, mentorship, and operational responsibility.

But the risk is serious enough to design for deliberately, not to dismiss with optimism.

Section 11

Thinking still happens off-screen

Not all engineering happens in IDEs, dashboards, or prompt windows.

Some of it still happens while:

  • walking
  • arguing
  • reading outside the field
  • staring into space without producing anything that fits a sprint board

We reject any workflow that mistakes total screen time for total thought.

An engineer who cannot think away from the machine is already partially automated.

Part IV

What We Demand

Section 12

Verifiable governance, not vibes

We do not want a future where humans issue magical incantations to proprietary black boxes and hope for favorable weather.

We want:

  • machine-enforceable charters with declared authority
  • bounded ambiguity and memory discipline
  • auditability and reversibility
  • evidence trails, not confidence theater
  • real human override where stakes demand it

Not vibes, but verifiable governance.
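One possible shape for a machine-enforceable charter, sketched in Python: actions outside the declared authority are rejected, and every decision is appended to an evidence trail. The charter format and class name are assumptions for illustration, not a standard:

```python
import json
import time

class Charter:
    """Declared authority for an agent, with an append-only audit trail."""

    def __init__(self, allowed_actions, audit_path):
        self.allowed = set(allowed_actions)
        self.audit_path = audit_path

    def authorize(self, action: str) -> bool:
        """Grant or deny an action; either way, record the evidence."""
        granted = action in self.allowed
        entry = {"ts": time.time(), "action": action, "granted": granted}
        with open(self.audit_path, "a") as f:  # evidence trail, not vibes
            f.write(json.dumps(entry) + "\n")
        return granted
```

Denials are logged as eagerly as grants; an audit trail that only records successes is confidence theater with extra steps.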

Section 13

Our commitments

We do not merely believe these things. We commit to practicing them.

  1. We will not ship autonomous workflows without rollback. If you cannot reverse it, you do not yet control it.
  2. We will treat specifications as production artifacts. If the spec drives the machine, then unclear language is a defect, not a style issue.
  3. We will maintain human-readable decision logs. If no one can explain why the system did what it did, the system is ungoverned.
  4. We will prefer reversible systems over elegant ones when uncertainty is high. Beauty is welcome. Recovery is mandatory.
  5. We will invest in apprenticeship deliberately. We will not assume that AI-assisted shortcuts produce the same depth of understanding as direct experience.
  6. We will distinguish ceremony from control. If a human approval step does not genuinely constrain the system, we will say so honestly rather than use it as compliance theater.

Final Position

We are neither utopians nor reactionaries.

We accept that AI will transform software, that abstraction is moving again, and that some old forms of expertise will shrink.

But we also insist:

The future is not "AI replaces engineering."

Engineering becomes the discipline of governing machine-generated possibility under uncertainty.

And that, unfortunately, still requires adults.

We have seen enough revolutions to know two things:
first, this one is real;
second, the people explaining it are still mostly insufferable.