Disclosure: This post was composed by Claude (Anthropic) using the Invert MCP server. See the About page for more on how this site works.
I built Invert without a reviewer agent: a project where an AI writes most of the code, and I skipped the part where anything checks the AI's work. I know.
Looking at it, I started getting frustrated — again — by all the things that made me build the reviewer workflow in the first place. Claude going off the rails. Claude answering confidently and incorrectly. Claude not testing things. Claude rewriting swaths of code I didn't ask it to touch. I needed to bite the bullet and see how portable the reviewer I prototyped on next.jazzsequence.com actually was.
The fix
So I added the reviewer workflow, and that forced me to stress-test parts of the reviewer agent that had previously only been refined inside one specific project. That's still a work in progress, but we're getting there. At the very least, Invert now has a full test suite: Vitest for unit tests across the adapters, the content library, and the MCP source normalizers; Playwright for E2E tests that build the actual static site, start a preview server, and hit real routes; and GitHub Actions CI that runs all of it in parallel on every push and PR.
Lesson learned: it's a lot easier to start a project with the reviewer agent and test infrastructure in place than to go back and scaffold it after the fact.
The reviewer catches stuff I don't. The GitHub Markdown adapter was silently dropping the status field during explicit field mapping — not throwing an error, not failing visibly, just quietly losing data. The unit tests caught it. Without them, that bug lives in production until someone notices their draft content appearing publicly and goes looking.
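A silent field drop like that is exactly the kind of thing a small unit test pins down. Here's a minimal sketch of the failure mode and the test that catches it — the `mapFields` helper and `FIELD_MAP` names are illustrative, not Invert's actual adapter code:

```typescript
// Explicit field mapping: only keys listed in the map survive.
// Forgetting one (say, `status`) drops it silently — no error thrown.
type RawEntry = Record<string, unknown>;

const FIELD_MAP: Record<string, string> = {
  title: "title",
  body: "content",
  status: "status", // the mapping that was effectively missing
};

function mapFields(raw: RawEntry): RawEntry {
  const out: RawEntry = {};
  for (const [from, to] of Object.entries(FIELD_MAP)) {
    if (from in raw) out[to] = raw[from];
  }
  return out;
}

// The regression test: every field we care about survives the round trip.
const mapped = mapFields({ title: "Hi", body: "...", status: "draft" });
if (mapped.status !== "draft") {
  throw new Error("status field was silently dropped");
}
```

The test is trivial, but it's the only thing standing between "explicit mapping" and "explicit mapping minus the field nobody noticed."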
When the reviewer is in place from the start, code gets written to pass tests. When you go back and add tests later, you're just describing what already exists — including the bugs.
But I'm catching the reviewer on stuff too: noticing when it didn't actually write tests first when it should have, or when it runs the test suite even though only Markdown files have changed.
Drafts and previews
The draft workflow is now complete.
InvertContent has a new status field: published or draft. The content library filters drafts from public listings by default — they don't appear in indexes or navigation regardless of which adapter surfaces them.
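As a rough sketch of that default behavior — the `status` field is from the post, but the interface shape and helper name here are assumptions, not Invert's actual code:

```typescript
// Illustrative sketch of default draft filtering on InvertContent.
interface InvertContent {
  slug: string;
  title: string;
  status: "published" | "draft";
}

// Public listings exclude drafts unless the caller explicitly opts in
// (e.g. for a preview route).
function listContent(
  items: InvertContent[],
  opts: { includeDrafts?: boolean } = {},
): InvertContent[] {
  return opts.includeDrafts
    ? items
    : items.filter((i) => i.status === "published");
}

const items: InvertContent[] = [
  { slug: "hello", title: "Hello", status: "published" },
  { slug: "wip", title: "WIP", status: "draft" },
];

listContent(items);                          // drafts filtered out
listContent(items, { includeDrafts: true }); // everything, for previews
```

The key design point is that the filter lives in the content library, not in each adapter, so a draft stays hidden no matter which source surfaced it.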
Where a draft lives depends on how you created it. The local stdio MCP routes new draft content to .drafts/, a gitignored directory that never gets committed. Files written directly to content/ with "status": "draft" stay there but stay filtered. The edge MCP stores drafts in a separate Cloudflare KV namespace — a key-value store that's essentially the site's database substitute — so they never touch the repo at all until you're ready.
On static hosts, draft content builds a preview route at /preview/[type]/[slug] at deploy time. On Cloudflare Pages with the Functions layer — which is how Dragonfly is deployed — that route is a Pages Function that reads directly from KV at request time, no build needed. Either way, the preview page is marked noindex, gets a [Draft] prefix in the title, and shows an amber banner.
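A hedged sketch of what that request-time preview handler might look like. The KV interface is reduced to the one method used, and the key scheme, handler name, and response shape are all assumptions for illustration — not the actual Pages Function:

```typescript
// Minimal KV surface: just the read the preview path needs.
interface KVGet {
  get(key: string): Promise<string | null>;
}

// Pages-Function-style handler: read the draft from KV at request time,
// mark it noindex, and prefix the title — no build step involved.
async function previewHandler(
  kv: KVGet,
  type: string,
  slug: string,
): Promise<{ status: number; body: string; headers: Record<string, string> }> {
  const raw = await kv.get(`draft:${type}:${slug}`); // key scheme is illustrative
  if (raw === null) {
    return { status: 404, body: "Not found", headers: {} };
  }
  const item = JSON.parse(raw) as { title: string; content: string };
  return {
    status: 200,
    body: `<title>[Draft] ${item.title}</title>${item.content}`,
    headers: { "X-Robots-Tag": "noindex" },
  };
}
```

Taking the KV store as a parameter rather than a global also makes the handler testable with a stub, which fits the post's testing theme.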
To publish, edit the status field directly wherever the content lives, or call the invert_publish MCP tool. Locally, it moves the file from .drafts/ to content/ and sets status: "published". On the edge, it moves the item from the draft: KV namespace to content: and queues an async GitHub commit. Same outcome — draft gone, published content visible — different mechanism.
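The two mechanisms can be sketched as one function that plans the backend-specific steps — the paths and KV key names below mirror the post, but the function and its return shape are illustrative assumptions:

```typescript
// Sketch of the publish transition: same outcome, different mechanism.
type PublishPlan =
  | { kind: "local"; move: { from: string; to: string }; setStatus: "published" }
  | { kind: "edge"; deleteKey: string; putKey: string; queueGitCommit: true };

function planPublish(
  backend: "local" | "edge",
  type: string,
  slug: string,
): PublishPlan {
  if (backend === "local") {
    // Local stdio MCP: move the file out of the gitignored drafts dir.
    return {
      kind: "local",
      move: {
        from: `.drafts/${type}/${slug}.md`,
        to: `content/${type}/${slug}.md`,
      },
      setStatus: "published",
    };
  }
  // Edge MCP: move between KV namespaces, then commit back to GitHub async.
  return {
    kind: "edge",
    deleteKey: `draft:${type}:${slug}`,
    putKey: `content:${type}:${slug}`,
    queueGitCommit: true,
  };
}
```

Separating the plan from its execution keeps the "same outcome, different mechanism" symmetry explicit in one place.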
OG and social meta
Every page now has Open Graph and Twitter card meta tags: og:title, og:description, og:image, og:url, og:type, twitter:card, and canonical links. Content pages get og:type: "article", pull the excerpt as the description and the featuredImage field as the image, and derive the canonical URL from Astro.site.
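As a rough sketch of assembling those tags for a content page — the field names mirror the post, but the helper, the `/blog/` path scheme, and the twitter:card value are illustrative assumptions:

```typescript
// Illustrative sketch of building the OG/Twitter meta map for a content page.
interface ContentMeta {
  title: string;
  excerpt: string;
  featuredImage?: string;
  slug: string;
}

function ogTags(item: ContentMeta, site: string): Record<string, string> {
  // Canonical URL derived from the site base, like Astro.site in the post.
  const url = new URL(`/blog/${item.slug}/`, site).href;
  const tags: Record<string, string> = {
    "og:title": item.title,
    "og:description": item.excerpt,
    "og:url": url,
    "og:type": "article",
    "twitter:card": "summary_large_image", // assumed card type
    canonical: url,
  };
  if (item.featuredImage) {
    tags["og:image"] = new URL(item.featuredImage, site).href;
  }
  return tags;
}
```

Resolving both the canonical URL and the image against the site base means relative featuredImage paths still produce the absolute URLs that link unfurlers require.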
You notice the absence of this the moment you paste a link somewhere and it renders as a blank card with the raw URL as the title.
Keeping downstream instances in sync
Invert is currently built as a template. Dragonfly is built on it. When Invert ships improvements, how does Dragonfly get them without overwriting Dragonfly-specific customizations — the homepage, the design, the content?
That question now has an answer: scripts/sync-upstream.sh. It adds the Invert repo as a remote, checks which core framework files have changed (adapters, content lib, MCP tools, the Cloudflare edge function, build scripts), and applies only those — leaving your config, content, theme, and project-specific pages alone. Package.json changes get shown but not auto-applied, since dependency decisions are yours to make.
What's next
The JSON and Markdown adapters are the MVP — they've been running against real data since day one. The WordPress and Drupal API adapters exist in the codebase but haven't been run against actual sites yet. That's next: stand up a live WordPress instance, point Invert at it, and see what breaks. That's where the "pull from any CMS layer" claim either holds up or doesn't.
I also want to try more deployment targets. Invert has run on GitHub Pages and Cloudflare Pages. Next up is Pantheon's Next.js offering — it's built specifically for Next.js and I'm curious what, if anything, needs to happen to get Astro running on it. I'd expect it to mostly work. "Mostly" is always where the interesting problems are.