Parsed: An SEO Tool That Writes to Your Codebase

Most SEO tools end with a report. Parsed starts there — it writes schema, publishes articles, and submits URLs directly into your codebase.

Ilya Gindin

Most SEO tools end with a report. Parsed starts there.

You get a list of missing schema, keyword gaps, unindexed URLs. Then you close the tab and forget about it. The work hasn’t happened — you’ve just been told what the work is.

I built Parsed to skip that step.

The problem with SEO tooling

Every audit tool I’ve used has the same shape: it scans your site, shows you what’s wrong, and leaves you to fix it yourself. That made sense when the work was manual. It makes less sense when agents exist.

The gap isn’t information. I know I need JSON-LD schema on my product pages. I know I have keyword gaps in my blog. I know half my sitemap isn’t indexed yet. I’ve known these things for months. The constraint is execution — sitting down and doing each thing.

Parsed treats that as the actual problem to solve.

What Parsed does

Parsed is a local super-agent that runs alongside your monorepo. It connects to your codebase directly, not through a web UI that hands you a snippet to copy.

Three core agents:

Autonomous optimization agent — scans your pages, identifies missing or weak structured data, generates JSON-LD schema components, and writes them directly into your project. Next deploy, the schema is live. No copy-paste.
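To make the output concrete: the post doesn't show the generated components, but a schema component for a Next.js project plausibly boils down to building a schema.org object and serializing it into a JSON-LD script tag. The `Product` fields below are illustrative, not taken from a real page:

```typescript
// Hypothetical shape of a component the optimization agent might write to disk.
type ProductSchemaProps = {
  name: string;
  description: string;
  url: string;
};

// Build the schema.org object. "@context" and "@type" are the two keys
// every JSON-LD block needs; the rest mirror the page's visible content.
function buildProductJsonLd({ name, description, url }: ProductSchemaProps) {
  return {
    "@context": "https://schema.org",
    "@type": "Product",
    name,
    description,
    url,
  };
}

// Serialize into the <script> tag that ends up in the page <head>.
function renderJsonLdScript(props: ProductSchemaProps): string {
  return `<script type="application/ld+json">${JSON.stringify(
    buildProductJsonLd(props)
  )}</script>`;
}
```

In a React page the same object would typically be rendered via `dangerouslySetInnerHTML`, but the data-building step is the part the agent has to get right.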

Publishing agent — finds keyword gaps in your content, generates SEO articles, runs them through a human-score check, and writes .md files into your content folder. The article exists on disk. You review it, commit it, done.

Quick index agent — fetches your sitemap, submits every URL to IndexNow. Google, Bing, and Yandex get pinged immediately. No waiting for a crawler to discover a page you published three weeks ago.
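The IndexNow side is the simplest of the three: the protocol is a single JSON POST. A minimal sketch, assuming the URL list has already been pulled from the sitemap (the endpoint and payload fields are IndexNow's; the key value is a placeholder for your verification key):

```typescript
const INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow";

// IndexNow payload per the protocol spec: host, key, keyLocation, urlList.
function buildIndexNowPayload(host: string, key: string, urls: string[]) {
  return {
    host,
    key,
    keyLocation: `https://${host}/${key}.txt`, // key file served at the site root
    urlList: urls,
  };
}

// Submit every URL in one request; participating engines (Bing, Yandex, etc.)
// share submissions with each other.
async function submitUrls(host: string, key: string, urls: string[]): Promise<number> {
  const res = await fetch(INDEXNOW_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json; charset=utf-8" },
    body: JSON.stringify(buildIndexNowPayload(host, key, urls)),
  });
  return res.status; // 200 or 202 means the batch was accepted
}
```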

There’s also citation tracking — monitoring how often your site gets cited by AI search engines. Real API calls, not mock data.

The stack is Next.js 14, Zustand, OpenRouter routing between GPT-4o and Claude 3.5 Sonnet depending on the task, and IndexNow for the submission layer.
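The routing piece can be as small as a task-to-model map in front of OpenRouter's OpenAI-compatible endpoint. The task names and which model handles which task are my assumptions about the split; the model slugs and endpoint are OpenRouter's:

```typescript
type Task = "schema" | "article" | "humanize";

// Illustrative routing: structured output to GPT-4o, long-form prose to Sonnet.
function pickModel(task: Task): string {
  switch (task) {
    case "schema":
      return "openai/gpt-4o";
    case "article":
    case "humanize":
      return "anthropic/claude-3.5-sonnet";
  }
}

// One chat completion through OpenRouter, routed by task.
async function complete(task: Task, prompt: string, apiKey: string) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: pickModel(task),
      messages: [{ role: "user", content: prompt }],
    }),
  });
  return res.json();
}
```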

AEO: the thing most SEO tools ignore

Google is not the only search engine that matters anymore.

ChatGPT, Perplexity, and Claude are pulling significant traffic — and they cite sources. When someone asks Perplexity “what’s the best tool for X,” it surfaces a few sites with inline citations. Those citations drive real clicks.

The optimization logic is different from traditional SEO. Google ranks pages based on links, authority, technical signals. AI engines cite based on content clarity, schema richness, topical authority, and how well your content answers a specific question directly.

Parsed tracks both. The schema work feeds Google. The content work feeds AI citation. They overlap but aren’t identical, and treating them the same is leaving coverage on the table.

I’ve started thinking about this as AEO — answer engine optimization. It’s not a rebrand of SEO, it’s an additional layer. Your content needs to rank in traditional search and be citable by AI. Those are related but distinct goals.

The humanizer loop

Here’s the uncomfortable part.

I’m generating articles with an AI, then running them through an AI detector, then rewriting the flagged sections until the human score hits 87% or higher, then writing the file to disk.

The paradox: using AI to write content, then using AI to check if the content reads like it wasn’t written by AI, then using AI to fix the parts that do.

It works. The articles pass. Google doesn’t have a reliable detector at the content level — it looks at signals like links, authority, user behavior. But I’d rather ship content that reads like a human wrote it regardless of whether the detector matters. It’s just better writing.

The humanizer loop is part of the publishing agent’s pipeline. It’s not optional. Every article that comes out of Parsed has gone through it.
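Structurally the loop is a bounded retry. A sketch with the detector and rewriter injected, since the post doesn't show either; the 87 threshold comes from the text above, while `maxRounds` is my own safety valve to keep the loop from spinning forever:

```typescript
type Scorer = (text: string) => number;   // 0-100 "human" score from the detector
type Rewriter = (text: string) => string; // rewrites the flagged sections

function humanize(
  draft: string,
  score: Scorer,
  rewrite: Rewriter,
  maxRounds = 5
): string {
  let text = draft;
  for (let round = 0; round < maxRounds; round++) {
    if (score(text) >= 87) return text; // passes the human-score gate
    text = rewrite(text);               // fix the parts that read as AI
  }
  return text; // best effort after maxRounds; worth flagging for manual review
}
```

Injecting `score` and `rewrite` also makes the gate testable without burning API calls.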

Current limitation

Parsed runs locally.

It has direct filesystem access because it needs to write to your project. That’s what makes it useful and what makes it hard to productize. A hosted version would require a repo integration — GitHub app, write permissions, PR workflow. That’s a real product, not a side project.

Right now it’s a tool I run against my own sites. igindin.com is the primary test case. The schema agent has written structured data for every major page. The publishing agent has generated and committed several articles. The index agent runs whenever I ship new content.

The path to SaaS is a GitHub integration that creates a branch, writes the files, opens a PR for review. You get the autonomous execution without giving a third-party tool direct write access to main. That’s the right architecture. I haven’t built it yet.
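That workflow maps onto three GitHub REST calls: create a ref, commit the file, open the pull request. A sketch assuming a token with contents and pull-request scopes; the endpoint paths are GitHub's, while the branch naming and commit message are illustrative:

```typescript
// Branch names like "parsed/schema-2025-01-02" keep agent runs traceable.
function branchName(agent: string, date: Date): string {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `parsed/${agent}-${stamp}`;
}

async function openSchemaPr(opts: {
  token: string; owner: string; repo: string;
  baseSha: string; path: string; content: string;
}) {
  const gh = (url: string, init: RequestInit) =>
    fetch(`https://api.github.com/repos/${opts.owner}/${opts.repo}${url}`, {
      ...init,
      headers: {
        Authorization: `Bearer ${opts.token}`,
        Accept: "application/vnd.github+json",
      },
    });

  const branch = branchName("schema", new Date());
  // 1. Branch off the current head of the default branch.
  await gh("/git/refs", {
    method: "POST",
    body: JSON.stringify({ ref: `refs/heads/${branch}`, sha: opts.baseSha }),
  });
  // 2. Commit the generated file to that branch (content must be base64).
  await gh(`/contents/${opts.path}`, {
    method: "PUT",
    body: JSON.stringify({
      message: "chore(seo): add generated JSON-LD schema",
      content: Buffer.from(opts.content).toString("base64"),
      branch,
    }),
  });
  // 3. Open the PR for human review -- the agent never touches main.
  await gh("/pulls", {
    method: "POST",
    body: JSON.stringify({ title: "Parsed: schema update", head: branch, base: "main" }),
  });
}
```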

What’s next

Short term: tighter feedback loop between citation tracking and content generation. If a specific article is getting cited by Perplexity, I want to know which section is being pulled and write more content around that framing.

Medium term: the GitHub integration. Parsed as a service that runs on a schedule, opens PRs, and lets you approve the work without touching a local environment.

Longer term: I’m not sure. SEO tooling is a crowded space. But “writes to your codebase” is a different category than “shows you a report.” The value is in the execution gap, and that gap is real.

The tool exists. It runs. It’s produced measurable output. That’s enough to keep building.
