AI Impact Scanner - Analyze sites AI search readiness tool - Progress 29 Nov 2025

Published: November 30, 2025 · 3 min read
#AI Search Optimization · #SEO · #Build-in-public · #SaaS

Unlocking SEO Magic with llms.txt — Day 29 of AImpactScanner MVP

TL;DR: Launched the first sprint of llms.txt integration, letting Growth and Scale users generate SEO-optimized files right from their AI analysis — plus tackled some tricky bugs to get it all running smoothly.

🎯 Today's Focus

Today was all about bringing llms.txt to life inside AImpactScanner. I wrapped up the core backend and frontend pieces and deployed everything to both staging and production. This means users on paid tiers can now generate these SEO-friendly files directly from their results — a big step toward making AI insights actionable.

✨ Key Wins

I dove deep into integrating the LLMtxtMastery API, which powers the generation of llms.txt files optimized for search engines. This wasn’t just plug-and-play; I had to craft a custom Edge Function with smart routing to handle different actions like analyzing, generating, and downloading these files. Why does this matter? Because llms.txt files help sites communicate with AI crawlers better, boosting visibility — a real value-add for users serious about AI-driven SEO.
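The routing idea can be sketched as a simple dispatch. This is a minimal sketch, not the deployed function: the action names come from the post, but the request shape, field names, and handler bodies are my assumptions.

```typescript
// Hypothetical shape of the Edge Function's action routing.
type Action = "analyze" | "generate" | "download";

interface LlmsTxtRequest {
  action: Action;
  url?: string; // site to analyze (assumed field name)
}

// Each action maps to one handler; anything unrecognized gets a 400.
function routeAction(req: LlmsTxtRequest): { status: number; body: string } {
  switch (req.action) {
    case "analyze":
      return { status: 200, body: `analysis started for ${req.url ?? "unknown"}` };
    case "generate":
      return { status: 200, body: "llms.txt generated" };
    case "download":
      return { status: 200, body: "download ready" };
    default:
      return { status: 400, body: "unknown action" };
  }
}
```

In the real function this dispatch would live inside the request handler, after auth; the point is that one function serves all three actions instead of three separate deployments.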

On the frontend, I built the LLMsTxtPanel — a sleek new dashboard widget that shows usage stats, progress bars, and download buttons, gated by user tier. It’s satisfying to see the feature come together visually and functionally, making it clear who gets access and how they’re using it. Plus, integrating it into the existing SimpleResultsDashboard with an upgrade flow means users get nudged naturally toward higher tiers.
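The tier gate behind the panel boils down to a tiny check. A minimal sketch, assuming the tier values from the post ("free", Growth, Scale); the helper name is mine, not the actual code.

```typescript
// Only paid tiers (Growth, Scale) may generate llms.txt files.
type Tier = "free" | "growth" | "scale";

function canGenerateLlmsTxt(tier: Tier): boolean {
  return tier === "growth" || tier === "scale";
}
```

The panel would render the progress bars and download buttons when this returns true, and the upgrade nudge otherwise.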

The last piece was the database schema: I added a new llmstxt_generations table with row-level security policies and a helper function to track monthly usage. This ensures data stays secure and lets me monitor how much users are leveraging the feature — vital for future scaling.
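The schema might look roughly like this in a Supabase (Postgres) migration. This is a hypothetical sketch: the table name comes from the post, but every column, the policy, and the helper function's name and signature are my assumptions.

```sql
-- Assumed shape of the generations table (columns are illustrative).
create table llmstxt_generations (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references auth.users,
  created_at timestamptz not null default now()
);

-- RLS: users can only see their own rows.
alter table llmstxt_generations enable row level security;

create policy "own rows only" on llmstxt_generations
  for select using (auth.uid() = user_id);

-- Helper to count a user's generations in the current month.
create function monthly_generation_count(p_user uuid)
returns bigint language sql stable as $$
  select count(*) from llmstxt_generations
  where user_id = p_user
    and created_at >= date_trunc('month', now());
$$;
```

Keeping the monthly count in a SQL function means both the Edge Function and any future dashboard query enforce the same usage definition.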

💡 What I Learned

Edge Functions are powerful but have quirks — like not handling query parameters the way I initially expected. Passing the ‘action’ via request body instead of URL params fixed a frustrating 400 error. Also, when authenticating users in Edge Functions, using supabaseAdmin.auth.getUser(jwt) directly feels much more reliable than spinning up new clients with headers. These little nuances are the kind of things you only pick up by getting hands-on and debugging in real-time.
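Both lessons can be sketched briefly. The fix for the 400 was reading `action` from the JSON body rather than the query string; the helper below is illustrative, and the auth snippet in the comment assumes a standard supabase-js admin client and a bearer token, which the post does not spell out.

```typescript
// (1) Pull the action out of an already-parsed JSON request body.
function extractAction(body: unknown): string | null {
  if (typeof body === "object" && body !== null && "action" in body) {
    const action = (body as { action: unknown }).action;
    return typeof action === "string" ? action : null;
  }
  return null; // missing or malformed → caller can return a 400
}

// (2) Authenticating inside the Edge Function (illustrative shape):
//
//   const jwt = req.headers.get("Authorization")?.replace("Bearer ", "");
//   const { data: { user }, error } = await supabaseAdmin.auth.getUser(jwt);
//   if (error || !user) return new Response("Unauthorized", { status: 401 });
```

Verifying the JWT with the existing admin client avoids constructing a fresh client per request just to forward headers.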

🔧 Challenge of the Day

The biggest headache was user tier detection showing as “free” no matter what. Turns out, the root cause was surprisingly simple: the top-level App.jsx wasn’t passing down the user prop into the dashboard component, so the tier was always undefined and defaulted to free. After some head-scratching, adding user={{ tier: userTier }} fixed it instantly. It was a good reminder that even small prop chain breaks can ripple into big UX confusion.
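The failure mode is easy to reproduce in isolation. A minimal sketch of the defaulting logic; the component names match the post, but the fallback-to-"free" shape is my assumption about how the dashboard behaves.

```javascript
// If the user prop never arrives, the dashboard's tier falls back to "free".
function resolveTier(user) {
  return user?.tier ?? "free";
}

// Before (App.jsx): <SimpleResultsDashboard />
//   → user is undefined → resolveTier(undefined) → "free" for everyone
// After: <SimpleResultsDashboard user={{ tier: userTier }} />
//   → the real tier flows through
```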

There were also some tricky deployment gotchas — like staging not updating because I pushed to the wrong branch. Merging main into develop solved it, reinforcing the importance of clear branch discipline.

📊 Progress Snapshot

  • Completed: 6 major tasks
  • Momentum: 🚀 High

🔮 Tomorrow's Mission

I’ll focus on smoke testing the new llms.txt feature with a Growth tier user on staging, running full end-to-end generation tests. From there, it’s about squashing any final bugs before the production rollout and starting Phase 6: monitoring and documentation.


Part of my build-in-public journey with AImpactScanner MVP. Follow along for daily updates!
