
The Platform Is Catching Up. Stop Optimizing Around Its Weaknesses.

A few months ago I was seriously considering investing time in OpenClaw, the autonomous agent framework that took the world by storm: 100k GitHub stars in under two weeks in late January 2026, and by March it had surpassed React's all-time star count. Obviously, there is something there worth paying attention to.

Despite the FOMO, I held off. (I even had a Mac Mini ready, purchased a few months earlier.)

Not because I don't see the value. OpenClaw agents have persistent memory that survives across sessions, a heartbeat that lets the agent wake up and do work without you prompting it, and an unencumbered skills system that allows your agents to perform actions across your entire system — file access, email, browser, code execution. Even with the glaring security concerns, users were willing to take risks to gain access to an agent that is actually helpful.

I held off because something felt off about building heavily on tooling designed to compensate for specific platform gaps. Those gaps have a way of closing faster than your investment in the workaround — especially given the rate at which AI companies ship features these days.

The Architecture Worth Building

Here's the mental model. The client layer is explicitly interchangeable: swap Claude for ChatGPT and the memory layer doesn't care.
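That separation can be sketched in a few lines. Everything here (the class name, the keyword search) is a hypothetical stand-in, not a real implementation; the point is that the memory layer's interface never mentions any particular AI client.

```python
class MemoryStore:
    """Platform-agnostic memory layer: stores notes, retrieves them by keyword.

    Nothing in this interface knows or cares which AI client sits on top.
    """

    def __init__(self):
        self._notes = []

    def save(self, text):
        self._notes.append(text)

    def search(self, keyword):
        return [n for n in self._notes if keyword.lower() in n.lower()]


def build_context(store, query):
    """Client-side helper: identical whether the client is Claude, ChatGPT,
    or whatever ships next. Swapping clients never touches the store."""
    return "\n".join(store.search(query))


store = MemoryStore()
store.save("Decided to use pgvector for retrieval")
store.save("Pipeline review every Friday")
print(build_context(store, "pgvector"))
```

Swap the client and the store is untouched; swap the store's backing database and the client is untouched. That indifference in both directions is the whole bet.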

The middle layer is the bet. It doesn't care which client is on top or which dashboard is on the bottom. The only thing that makes it obsolete is if AI fundamentally reworks how memory and context work at a level that makes structured retrieval irrelevant — a much harder bet to lose than "the platform ships a native feature."

Anthropic Has Been Closing the Gaps, One Primitive at a Time

The three primitives OpenClaw assembled — memory, heartbeat, skills — weren't invented by OpenClaw. They were gaps in the native platform that a smart developer glued together into something useful before the platform got there.

But watch how quickly Anthropic has been closing them.

Developers were already saying "Claude killed OpenClaw" within hours of the Channels release. I think that's slightly overstated — but the velocity is what matters. Three primitive gaps closed in eight months. That tells you something about the roadmap direction even when the roadmap isn't published.

The 70% Rule

Brian Castle recently released version 3 of Agent OS — his framework for defining and managing coding standards in AI-powered development. (YouTube)

He stripped it down by 70%.

The reasoning was explicit: "Don't reinvent what the core tools already do well. Strip away everything that overlaps with Claude's native features and keep only what fills the gaps that professionals and teams actually care about."

Agent OS v1 had scaffolding for writing specs, breaking them into task lists, and orchestrating implementation — because in mid-2025, the tooling wasn't there yet. By late 2025, plan mode was a first-class feature in Claude Code. The scaffolding became redundant. Castle cut it and the tool got more useful.

Steve Yegge has been on a similar trajectory with Gas Town, his AI agent orchestrator. (Pragmatic Engineer interview, Medium) He's been explicit that large chunks of what Gas Town does today will eventually be absorbed by the platforms — and that's the point. You build at the frontier of what platforms don't yet do, knowing the frontier moves.

The pattern: smart practitioners are actively cutting tooling as platforms mature, not accumulating it. That's the signal I keep noticing.

One side note worth flagging: the instinct to build your own scaffolding around a vendor's gaps made sense in the old software era, when Microsoft and Adobe moved slowly and niche problems weren't worth solving. The trap is applying that same thinking to an AI-native software company shipping every week. The pace has changed. The gap between "this is missing" and "this is native" is now months, not years.

What's the Right Strategy for 2026?

I'm not saying don't use third-party tools — or don't invest your time building them. I'm saying be deliberate about which gaps you're filling and why those gaps are likely to stay gaps.

The weaker strategy: build workflows that compensate for a platform's current weaknesses. Those gaps close fast and your workaround becomes obsolete.

The stronger strategy: build platform-agnostic layers that survive platform shifts. Solve the problem at a level that doesn't depend on any particular platform staying limited in any particular way.

Easier said than done, right? Let me get specific.

My Own Stack as a Test Case

I use vim and Claude Code. That's essentially it. (I do not have a CS degree and I trained as a microbiologist — I came to this without the gravitational pull of IDE habits, which probably helped.)

As a developer, you're constantly bombarded by FOMO pressure around tooling. New frameworks every week. Cursor vs. Windsurf vs. whatever launched on Product Hunt that morning. I've watched engineers I respect spend significant time configuring and reconfiguring their setups. (This pre-dates AI tooling; dotfiles are an art form for a reason.)

The Yegge and Castle observations validated what I'd arrived at through a different route: tooling overhead is a tax, and the platforms are now moving fast enough that most third-party scaffolding has a short shelf life.

But my vanilla setup does have one real gap. Claude has no persistent memory across sessions. Every conversation starts fresh. For someone running a consultancy with multiple active projects, that's a genuine problem — not a platform weakness that will close in six months, but a structural one that requires a data layer, not a model feature.

fds-recall: Solving the Right Problem

That gap led me to Nate B. Jones' OB1 project as an inspiration point: a Supabase database with pgvector, accessible by AI agents via an MCP server.

My own version diverges from there. The architecture isn't trying to compensate for Claude's lack of native memory (the built-in memory system is improving daily); it's building a data layer that any client can read, regardless of which model or platform is underneath it. My memories can follow me from Claude to ChatGPT to whatever the next thing is. That portability isn't the point of the implementation; it's the reason the implementation is durable. Only a fundamental rework of how AI memory and context operate, one that makes structured retrieval irrelevant, would obsolete it. That's a much harder bet to lose than "Anthropic ships a native feature."

A solution that patches a specific platform's weakness is fragile. A data layer that sits in front of any platform survives.
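To make the retrieval idea concrete, here's a database-free sketch. In the real system this would be a pgvector similarity query against Supabase; plain-Python cosine similarity over toy hand-written embeddings stands in here so the example runs anywhere, and the stored memories are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors; pgvector's <=> operator
    computes the distance form of this inside the database."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical stored memories: (text, embedding) pairs. Real embeddings
# would come from an embedding model, not be written by hand.
memories = [
    ("COBRA dental election deadline is Friday", [0.9, 0.1, 0.0]),
    ("Zoho API bug: parameter silently returns zero results", [0.1, 0.8, 0.3]),
]

def recall(query_embedding, k=1):
    """Return the k most similar memories to the query embedding."""
    ranked = sorted(memories, key=lambda m: cosine(query_embedding, m[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

print(recall([0.0, 0.9, 0.2]))
```

The client never sees vectors or SQL; it asks a question and gets text back. That's the seam that lets the layer above stay interchangeable.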

What I've Built So Far

I'm calling it fds-recall. (This is my AI memory system. There are many like it, but this one is mine.)

The most concrete result so far is the morning briefing. Inspired by Ben Cera (Founder of Polsia), I built a GitHub Action — the CEO Agent — that runs every morning at 6 AM. It doesn't just read fds-recall. It queries fds-recall, GitHub, Mercury, Zoho Mail, Zoho Calendar, and the pipeline CRM simultaneously, compiles decisions, action items, and open questions across all of them, and packages it into an email. This morning's brief surfaced three pending decisions, flagged a COBRA dental election deadline, and noted an API parameter bug in the Zoho integration that had been silently returning zero results. As a dad of two under five, reconstructing all of that manually every morning isn't realistic. Scanning it over coffee in five minutes is.
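The compile step behind a brief like that is plain fan-out and formatting. The source functions below are hypothetical stubs standing in for the real fds-recall, GitHub, Mercury, and Zoho API calls:

```python
# Hypothetical stub sources; the real versions would hit live APIs.
def fetch_decisions():
    return ["Approve pgvector schema migration"]

def fetch_deadlines():
    return ["COBRA dental election due Friday"]

def compile_brief(sources):
    """Query every source, group non-empty results under a heading,
    and return a single email-ready body."""
    sections = []
    for name, fetch in sources.items():
        items = fetch()
        if items:
            sections.append(name + ":\n" + "\n".join("- " + i for i in items))
    return "\n\n".join(sections)

brief = compile_brief({
    "Decisions": fetch_decisions,
    "Deadlines": fetch_deadlines,
})
print(brief)
```

The structure is deliberately boring: each source is just a callable that returns strings, so adding a seventh source is one dictionary entry, not a redesign.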

The magic_source tool is the interesting part — inspired by Alexandre Klobb at Sonarly (YC W26), who discovered that giving an agent a tool that does nothing but describe what it's missing produces a perfect integration roadmap. I applied the same principle to the morning brief: rather than hardcoding which sources to query, magic_source lets the agent express what context it needs that day. The queries it generates are specific and correct in ways I wouldn't have anticipated. That emerged from the system rather than being designed upfront.
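A minimal version of that idea, with hypothetical names (this is not the fds-recall API): the tool accepts a description of the missing context, logs it, and returns nothing useful. The log, not the return value, is the deliverable.

```python
# Everything the agent wished it had, accumulated across runs.
requested_sources = []

def magic_source(description):
    """Agent-facing tool that does nothing but record what context the
    agent asked for. The accumulated log becomes the integration roadmap."""
    requested_sources.append(description)
    return "source not yet available"

# Simulated agent calls during a morning-brief run:
magic_source("open invoices from Mercury for the last 30 days")
magic_source("calendar conflicts between 9am and noon today")

for item in requested_sources:
    print("TODO integrate:", item)
```

Because the agent phrases each request in terms of the task it was actually doing, the resulting list is more specific than anything you'd write in a planning doc.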

The multi-user schema isn't shipped yet. The plan is to get my wife and me contributing to a shared household schema: aligning on decisions, tracking context we'd otherwise lose between conversations.

But the design philosophy — platform-agnostic data layer over platform-specific workaround — gets more validated every time a native feature ships and makes someone's third-party integration one layer thinner.

The Practical Question

Before investing in any AI tooling right now, I try to ask one question: am I solving a problem the platform is likely to solve in the next six months, or am I building something that survives the platform solving it?

Castle's standard discovery and injection — the platform doesn't do that. Yegge's agent orchestration at scale — the platform doesn't do that yet. A data layer that gives AI agents persistent, queryable context across clients — the platform doesn't do that structurally, and the solution doesn't depend on the platform staying limited.

OpenClaw's async messaging — the platform just did that. (Roughly, for now.)

The difference isn't obvious in advance. But the question is worth asking every time.

Building your own AI-native stack?

I help teams design platform-agnostic data layers and AI agent architectures that don't break when the platforms catch up.