A Paradox
The most awkward thing about being an SEO agency: ship a client site, then look at your own site a year later and realize it can't rank for its own name.
NexAgent's main business is rebuilding SMB websites and doing SEO + GEO optimization for Vancouver businesses. So we made ourselves a rule: nextagent.ca has to run better than anything we ship to clients.
This post is about what that means in practice — a daily AI loop that runs the site for us.
The Problem: Websites Rot After Launch
The standard agency playbook: sign the contract, do the design, ship the code, go live, pop the champagne. Then walk away.
Six months later:
- Last blog post is from forever ago
- Landing pages are stale, keyword density has drifted
- GSC is full of "Discovered – currently not indexed" URLs nobody is fixing
- Sitemap last submitted three weeks ago
- Industry news nobody wrote up
Without eyes on it, the site rusts. But hiring a full-time SEO lead isn't economic for an SMB either.
The Fix: Hand the Site to an AI Agent
Our line of thinking: automation is literally what we sell, so the site should run itself.
nextagent.ca runs this daily loop today:
1. Daily SEO + GEO Scoring
Every blog post and landing page carries a composite score (DB column `combined_score`), built from three sub-scores:
- SEO: meta description, canonical, h1-h3 structure, keyword density, internal links
- GEO: AI-search readability — FAQ schema, entity mentions, citation density
- Traffic: live pulls from GA and GSC (7-day impressions, clicks, avg position)
`blog_fetcher/scorer.py` runs nightly and writes back `last_scored_at`. Anything below threshold goes to the next step.
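The combination itself is just a weighted sum. A minimal TypeScript sketch of the idea — the weights and threshold below are illustrative stand-ins, not the actual values in `scorer.py`:

```typescript
interface SubScores {
  seo: number;     // 0-100: meta, canonical, heading structure, keyword density, internal links
  geo: number;     // 0-100: FAQ schema, entity mentions, citation density
  traffic: number; // 0-100: normalized 7-day impressions, clicks, avg position
}

// Illustrative weights and threshold: the real scorer.py values aren't published here.
const WEIGHTS = { seo: 0.5, geo: 0.25, traffic: 0.25 };
const REWRITE_THRESHOLD = 60;

function combinedScore(s: SubScores): number {
  return s.seo * WEIGHTS.seo + s.geo * WEIGHTS.geo + s.traffic * WEIGHTS.traffic;
}

// Anything under threshold gets queued for the rewrite step.
function needsRewrite(s: SubScores): boolean {
  return combinedScore(s) < REWRITE_THRESHOLD;
}
```

The traffic sub-score keeps a page from being "optimized" into oblivion: a page that ranks and converts stays above threshold even if its markup is imperfect.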
2. Gemini-Driven Auto-Rewrite
Low-scoring posts land in `blog_fetcher/post_rewriter.py`, where Gemini 3 Flash runs them through a pipeline of small sub-tasks:
- Analyze the original for SEO weaknesses
- Rewrite the Chinese body addressing those weaknesses
- Translate to English preserving all outbound links
- Validate canonical / hreflang / FAQ schema
- Write back to DB (defaults to a review queue, not live publish)
The critical bit: we preserve original intent. The goal is tightening schema, deepening keywords, and adding FAQ blocks — not turning human writing into AI marketing slop.
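The pipeline shape can be sketched with a generic `llm()` stand-in. The prompts and names below are illustrative, not the actual `post_rewriter.py` internals — the point is the split into small, checkable steps and the hard rule that output lands in a review queue:

```typescript
type LLM = (prompt: string) => Promise<string>;

interface RewriteResult {
  analysis: string;
  zhBody: string;
  enBody: string;
  status: "pending_review"; // never auto-published
}

// Illustrative pipeline: each sub-task gets one focused prompt.
async function rewritePost(llm: LLM, originalZh: string): Promise<RewriteResult> {
  // 1. Analyze the original for SEO weaknesses
  const analysis = await llm(`List the SEO weaknesses of this post:\n${originalZh}`);
  // 2. Rewrite the Chinese body, preserving the author's intent
  const zhBody = await llm(`Rewrite, fixing these weaknesses:\n${analysis}\n---\n${originalZh}`);
  // 3. Translate to English, keeping every outbound link unchanged
  const enBody = await llm(`Translate to English; keep all outbound links:\n${zhBody}`);
  // 4. Validate canonical / hreflang / FAQ schema before queueing
  await llm(`Check canonical, hreflang and FAQ schema:\n${enBody}`);
  return { analysis, zhBody, enBody, status: "pending_review" };
}
```

Small sub-tasks are cheaper and easier to debug than one mega-prompt: when the English version drops a link, you know exactly which step to blame.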
3. Google Search Console API Inspection
We call the official GSC `urlInspection` API for every URL in the sitemap. Script: `website/scripts/gsc-report.ts`. Each run:
- Pulls every URL in the current sitemap (70 today)
- 8-way concurrent `urlInspection` calls
- Aggregates by coverage state (INDEXED / DISCOVERED_NOT_INDEXED / DUPLICATE / unknown)
- Writes dated JSON + CSV reports
Our 2026-04-19 run looked like this:
| State | Count |
|---|---|
| URL is unknown to Google | 52 |
| Discovered – currently not indexed | 15 |
| Submitted and indexed | 2 |
| Duplicate, Google chose different canonical | 1 |
52 URLs Google hasn't even seen? Triggers step 4.
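The interesting parts of the script are the bounded concurrency and the bucketing by coverage state. A simplified sketch — `inspect` is injected so this runs without credentials, and the real `gsc-report.ts` may differ in detail:

```typescript
type Bucket = "INDEXED" | "DISCOVERED_NOT_INDEXED" | "DUPLICATE" | "UNKNOWN";

// coverageState comes back as free text; these buckets are our own grouping.
function bucketOf(coverageState: string): Bucket {
  if (/submitted and indexed/i.test(coverageState)) return "INDEXED";
  if (/discovered/i.test(coverageState)) return "DISCOVERED_NOT_INDEXED";
  if (/duplicate/i.test(coverageState)) return "DUPLICATE";
  return "UNKNOWN";
}

// Run `inspect` over every URL with at most `limit` requests in flight.
async function inspectAll(
  urls: string[],
  inspect: (url: string) => Promise<string>, // resolves to a coverageState string
  limit = 8,
): Promise<Record<Bucket, number>> {
  const counts: Record<Bucket, number> = {
    INDEXED: 0,
    DISCOVERED_NOT_INDEXED: 0,
    DUPLICATE: 0,
    UNKNOWN: 0,
  };
  let next = 0;
  const worker = async () => {
    while (next < urls.length) {
      const url = urls[next++]; // safe: JS is single-threaded, read-then-increment is atomic
      counts[bucketOf(await inspect(url))]++;
    }
  };
  await Promise.all(Array.from({ length: Math.min(limit, urls.length) }, worker));
  return counts;
}
```

Eight workers pulling from a shared cursor keeps the run under GSC's per-minute quota while still finishing 70 URLs in well under a minute.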
4. Auto Sitemap Resubmit
`website/scripts/gsc-submit-sitemap.ts` runs on every deploy. It calls GSC `sitemaps.submit` to actively ask Google to crawl, instead of waiting for Google's own schedule (the last sitemap download was three weeks old before we built this).
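Under the hood, `sitemaps.submit` is just an authenticated PUT against the Search Console API v3 REST endpoint. A hedged sketch — obtaining the OAuth token (e.g. from a service account) is out of scope here:

```typescript
// sitemaps.submit is a PUT with both the site and the sitemap URL
// percent-encoded into the path (Search Console API v3).
function sitemapSubmitUrl(siteUrl: string, sitemapUrl: string): string {
  const base = "https://www.googleapis.com/webmasters/v3/sites";
  return `${base}/${encodeURIComponent(siteUrl)}/sitemaps/${encodeURIComponent(sitemapUrl)}`;
}

// Sketch only: token acquisition and retry handling are omitted.
async function submitSitemap(
  siteUrl: string,
  sitemapUrl: string,
  accessToken: string,
): Promise<boolean> {
  const res = await fetch(sitemapSubmitUrl(siteUrl, sitemapUrl), {
    method: "PUT",
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return res.ok;
}
```

The percent-encoding matters: a domain property like `sc-domain:nextagent.ca` contains a colon, and an unencoded sitemap URL would break the path.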
5. OpenClaw Agent Orchestration
The four steps above are orchestrated by OpenClaw — our open-source AI agent framework. Agents handle:
- Cron-triggered execution of each step
- Writing score snapshots back to DB
- Pushing low-score reports to Discord for human review
- Week-over-week report diffs
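The week-over-week diff is the simplest piece to show. A sketch of the kind of comparison the agent posts to Discord — the names here are illustrative, not OpenClaw's actual API:

```typescript
// Week-over-week delta of coverage counts; only changed keys survive,
// so a quiet week produces an empty (and skippable) report.
type Counts = Record<string, number>;

function diffReports(prev: Counts, curr: Counts): Counts {
  const delta: Counts = {};
  for (const key of new Set([...Object.keys(prev), ...Object.keys(curr)])) {
    const d = (curr[key] ?? 0) - (prev[key] ?? 0);
    if (d !== 0) delta[key] = d;
  }
  return delta;
}
```

Reporting deltas instead of totals is the whole trick: a human reviewing the Discord channel only sees what moved.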
Roadmap
Scoring + rewrite + GSC inspect + sitemap resubmit are live. Up next:
- News-aware refresh: a daily Tavily / Exa / agent-reach signal flags posts touching breaking stories and triggers a refresh with current citations
- Self-review closed loop: after rewrite, the agent re-scores and iterates until the target score is hit
- Cross-platform syndication: one post → auto-syndicated to TikTok, XiaoHongShu, Twitter, LinkedIn (format-aware)
- Multi-modal output: text post → auto-generated short video + audio podcast version per publish
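The self-review closed loop in particular is easy to sketch, even though it isn't built yet. Everything below is hypothetical — `score` and `rewrite` are injected stand-ins for the scorer and rewriter from steps 1 and 2:

```typescript
// Roadmap sketch (not yet built): rewrite, re-score, and repeat until the
// target score is hit or we run out of rounds, then stop and queue for review.
async function reviseUntil(
  body: string,
  score: (b: string) => Promise<number>,
  rewrite: (b: string) => Promise<string>,
  target = 80,
  maxRounds = 3,
): Promise<{ body: string; score: number; rounds: number }> {
  let current = body;
  let s = await score(current);
  let rounds = 0;
  while (s < target && rounds < maxRounds) {
    current = await rewrite(current);
    s = await score(current);
    rounds++;
  }
  return { body: current, score: s, rounds };
}
```

The `maxRounds` cap is load-bearing: without it, a post the model can't improve would burn tokens forever.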
Open Inspection
This isn't a slide deck. You can:
- Open GSC for `sc-domain:nextagent.ca` and see the coverage report
- Ask us to live-inspect any URL
- See any blog post's `combined_score` and `last_rewritten_at` timestamps
We don't sell PPTs. If we can't make our own site self-heal, we have no business asking you to trust us with yours.
FAQ
Q1: Can you bolt this onto my existing site? Modern framework (Next.js, Remix, Astro, SvelteKit) + API-accessible CMS: 2–4 weeks. WordPress / Wix: typically rebuild first — the Core Web Vitals and schema floor matters.
Q2: What if the AI rewrites something badly? Rewrites default to a review queue, not live. You approve. Once trust is built for a content type, you can switch to auto-publish. Pricing and legal pages stay manual by design.
Q3: How much? Full rebuild + 6 months of autonomous maintenance: $6,000–$15,000 one-time plus $400–$1,200/month. Maintenance-only subscription on an existing modern site: from $400/month.
Q4: How is this different from hiring a freelance SEO? A freelancer bills monthly for human edits. We deliver a system that runs daily — reports are diffs, edits are git commits, you subscribe to a capability, not person-hours.
Next Step
Want to see if your site fits this loop? See the Self-Evolving Website service →