The Lovable + SEO Problem
Things you should know if you're considering building with Lovable
If you’ve built your site with Lovable, there’s a very good chance that every page you’ve carefully crafted is serving search engines an empty shell — a blank <div id="root"></div> with zero content. Your headings, your calls to action, your internal links, your meta descriptions… none of it exists as far as Google is concerned.
I found this out the hard way while building Forma PM. On the surface, everything looked great. Lovable even assured me repeatedly that my SEO score was over 90%. But after seeing a steady stream of complaints from others on Reddit, I decided to run a crawl test and saw what Googlebot was actually receiving: an empty HTML page with a JavaScript bundle.
Here’s what’s happening, why Lovable builds it this way, and exactly how to fix it.
Lovable uses React with Vite to build single-page applications. That means every page on your site works the same way: the server sends a minimal HTML file (basically an empty container), then JavaScript runs in the browser to render all the actual content.
For humans, this works perfectly. Your browser executes the JavaScript, the page appears, and you never notice the difference.
For search engine bots, it’s a disaster. Most crawlers do a simple HTTP request and read the HTML they get back. They don’t reliably execute JavaScript. So what they see is this:
<!DOCTYPE html>
<html>
<head><title>My Site</title></head>
<body>
<div id="root"></div>
<script src="/assets/index-abc123.js"></script>
</body>
</html>

This means there’s literally no content at all. Google has nothing to index, so your pages either don’t appear in search results or rank terribly because there’s no content signal.
But wait… there’s more!
It gets worse. Even if Google could somehow render your JavaScript, there’s another issue lurking in Lovable projects: invisible navigation links.
Lovable’s AI tends to build buttons and CTAs using React’s programmatic navigation pattern:
<Button onClick={() => navigate('/pricing')}>See Pricing</Button>

This renders as a <button> element in the HTML — not an <a> tag. Search engine crawlers discover new pages by following links, and they can only follow real <a href="..."> anchor tags. A button with an onClick handler is completely invisible to them.
When I audited my site, I found 20 instances across 11 public-facing files where Lovable had used this pattern instead of proper links. That included the main CTA on my homepage, every product page link on the “How it works” page, and every “Start free trial” button across all use case pages. My entire internal link structure was invisible to crawlers.
Understanding Lovable’s constraints
Before I get to the fix, you need to understand the constraints. Lovable controls the hosting and build pipeline. That means you cannot:
- Add custom build scripts (like react-snap or prerender-spa-plugin) to generate static HTML at build time
- Modify the web server to add middleware that detects bot user agents
- Intercept incoming requests to serve different content to crawlers vs. humans
- Switch to a server-side rendering framework like Next.js
Traditional prerendering approaches are off the table, so you have to work within Lovable's architecture.
How to fix the Lovable SEO issue
After several rounds of iteration, here’s the approach that works within Lovable’s constraints. I’ve included the master prompt below, but first, here’s what each piece does and why it matters.
1. Fix every Navigate Call (Immediate SEO Impact)
The single highest-impact change is converting every onClick={() => navigate(...)} on public pages to a proper <Link> component. In React Router, <Link to="/pricing"> renders as a real <a href="/pricing"> in the DOM — crawlable by bots and still handles client-side navigation for users.
This doesn’t require any infrastructure, it’s a find-and-replace across your components, and it immediately makes your internal link structure visible to crawlers.
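As a sketch, the before/after looks like this. The Button component and the asChild pattern are assumptions based on the shadcn/ui setup Lovable typically generates; adjust the import paths to match your project:

```tsx
import { Link, useNavigate } from "react-router-dom";
import { Button } from "@/components/ui/button"; // assumed shadcn/ui path

// Before: renders a <button> with an onClick handler — invisible to crawlers
function CtaBefore() {
  const navigate = useNavigate();
  return <Button onClick={() => navigate("/pricing")}>See Pricing</Button>;
}

// After: renders a real <a href="/pricing"> in the DOM (crawlable),
// while React Router still handles the navigation client-side
function CtaAfter() {
  return (
    <Button asChild>
      <Link to="/pricing">See Pricing</Link>
    </Button>
  );
}
```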
2. Build a centralized Route Registry
Create a single file that lists every public, indexable route on your site with its path, title, meta description, last modified date, and sitemap metadata. This becomes the single source of truth for your sitemap, your SEO debug tools, and your prerender cache.
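A minimal sketch of what that file might look like. The field names here are my own and should be adapted to whatever your sitemap generator and debug tools expect:

```typescript
// src/data/siteRoutes.ts — single source of truth for public, indexable routes.
// Field names are illustrative, not a fixed schema.
export interface SiteRoute {
  path: string;            // e.g. "/pricing"
  title: string;           // expected <title>
  description: string;     // expected meta description
  h1: string;              // expected H1 text
  lastmod: string;         // ISO date for the sitemap <lastmod>
  priority: number;        // sitemap <priority>, 0.0 to 1.0
  changefreq: "daily" | "weekly" | "monthly";
}

export const siteRoutes: SiteRoute[] = [
  {
    path: "/",
    title: "Home | Your Site",          // placeholder values
    description: "What the site does, in one sentence.",
    h1: "Your main headline",
    lastmod: "2024-01-01",
    priority: 1.0,
    changefreq: "weekly",
  },
  // ...one entry per public page
];
```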
3. Generate a Dynamic Sitemap
Build a Supabase edge function that reads your route registry and generates a proper XML sitemap with <lastmod> dates. Point your robots.txt to it. This tells Google exactly which pages exist and when they were last updated.
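The core of that edge function is just string templating over the registry. A sketch of the generation logic, with the entry shape assumed to mirror the route registry:

```typescript
// Sketch of sitemap generation for a Supabase edge function.
// The entry shape is an assumption; adapt it to your route registry.
interface SitemapEntry {
  path: string;
  lastmod: string;     // ISO date, e.g. "2024-01-01"
  priority: number;
  changefreq: string;
}

export function buildSitemap(baseUrl: string, entries: SitemapEntry[]): string {
  const urls = entries
    .map(
      (e) => `  <url>
    <loc>${baseUrl}${e.path}</loc>
    <lastmod>${e.lastmod}</lastmod>
    <changefreq>${e.changefreq}</changefreq>
    <priority>${e.priority.toFixed(1)}</priority>
  </url>`
    )
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urls}
</urlset>`;
}

// Inside the edge function handler, you would return something like:
// new Response(buildSitemap("https://yoursite.com", siteRoutes),
//   { headers: { "Content-Type": "application/xml" } });
```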
4. Add a Noscript Fallback
Add a <noscript> block to your index.html with your site name, a brief description, and links to your key pages. This gives crawlers that don’t execute JavaScript at least some content and links to follow. It’s not a substitute for proper prerendering, but it’s a meaningful safety net.
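For example (site name, description, and paths are placeholders):

```html
<noscript>
  <h1>Your Site Name</h1>
  <p>One-sentence description of what the site does.</p>
  <nav>
    <a href="/pricing">Pricing</a>
    <a href="/how-it-works">How it works</a>
    <a href="/blog">Blog</a>
  </nav>
</noscript>
```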
5. Build a prerender cache and SEO Debug Dashboard
Create an edge function that can scan any page on your site, extract SEO elements (title, meta description, H1, H2s, internal links), and cache the results. Then build an admin page that lets you inspect every route, see what’s present and what’s missing, and monitor the health of your SEO across the entire site.
This won’t serve prerendered HTML to bots on its own — you need a CDN layer for that (more below). But it gives you complete visibility into your SEO status and a cache that the CDN layer can draw from.
6. Set Up CDN-Level prerendering
The final piece is routing bot traffic through a service that serves fully rendered HTML. Your two best options:
Cloudflare Workers (recommended): Move your DNS to Cloudflare’s free tier, create a Worker that checks the user agent for bot signatures, and route those requests to your prerender edge function. Humans get the normal SPA. Bots get complete HTML.
Prerender.io: A managed service that handles bot detection and rendering for you. The free trial is 30 days, which is enough to get your pages initially indexed. But you’ll need a permanent solution after that, which is why Cloudflare Workers is the better long-term bet.
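A minimal Worker sketch of the Cloudflare option. The bot list and the prerender endpoint URL are assumptions to tune for your setup:

```typescript
// Minimal Cloudflare Worker sketch: route known crawlers to a prerender
// endpoint, everyone else to the normal SPA. The bot pattern and the
// Supabase endpoint URL are illustrative assumptions.
const BOT_PATTERN =
  /googlebot|bingbot|duckduckbot|baiduspider|yandex|twitterbot|facebookexternalhit|linkedinbot/i;

export function isBot(userAgent: string): boolean {
  return BOT_PATTERN.test(userAgent);
}

export default {
  async fetch(request: Request): Promise<Response> {
    const ua = request.headers.get("user-agent") ?? "";
    if (isBot(ua)) {
      const url = new URL(request.url);
      // Hypothetical prerender edge function backed by your cache
      const prerender =
        "https://<project>.supabase.co/functions/v1/prerender?path=" +
        encodeURIComponent(url.pathname);
      return fetch(prerender);
    }
    // Humans get the normal SPA from the origin
    return fetch(request);
  },
};
```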
Do you actually need CDN-Level prerendering?
Maybe not — at least not right away. The changes in steps 1 through 5 do a lot of the heavy lifting on their own. Converting navigation to real <a> tags means crawlers can now discover and follow your internal links. A proper sitemap tells Google exactly what pages exist. The noscript fallback provides baseline content in the raw HTML. And Google’s own renderer does attempt to execute JavaScript for indexing. It’s not 100% reliable, but combined with proper links and a sitemap, it picks up a surprising amount of SPA content.
My recommendation: ship the fixes, submit your sitemap in Google Search Console, and use the URL Inspection tool to test your key pages over the next couple of weeks. Monitor how much Google indexes on its own. If coverage looks good for your 30-50 page marketing site, you might not need the CDN layer at all. If you’re still seeing pages missing or showing thin content after a few weeks, that’s when you set up Cloudflare Workers, and the prerender cache you’ve already built slots right in.
Things that will go wrong (and how to handle them)
I went through multiple rounds of iteration getting this right. Here’s what to watch out for:
Lovable + Rendertron
When you ask Lovable to implement prerendering, it will very likely suggest using Google’s Rendertron as the rendering backend. This is wrong: Rendertron has been deprecated and its public hosted instance is gone. If Lovable builds your prerender function around a Rendertron endpoint, it will fail silently and your cache will be empty.
Tell Lovable explicitly to use a direct HTTP fetch with regex parsing, falling back to route registry data. No external rendering service needed.
The SEO debug dashboard will lie to you
The first version of my debug dashboard showed every page as green/healthy. The health check logic was marking pages as “success” just because the HTTP fetch returned a 200 status code, even though the fetched HTML was an empty SPA shell with no content.
Make sure the health scoring is based on what was actually extracted (non-empty H1, non-empty meta description, at least one internal link), not on whether the HTTP request succeeded. And make sure it distinguishes between content that was found in the actual HTML vs. content that was pulled from the route registry as a fallback.
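As a sketch, the scoring logic should look only at what was extracted, never at the HTTP status alone. Field names here are illustrative:

```typescript
// Health is derived from extracted content, not from the HTTP status.
// A 200 response with an empty SPA shell is still a failure.
interface Extracted {
  httpOk: boolean;
  title: string;
  metaDescription: string;
  h1: string;
  internalLinks: string[];
  source: "fetched" | "registry_fallback";
}

export function healthOf(e: Extracted): "green" | "yellow" | "red" {
  const hasTitle = e.title.trim().length > 0;
  const hasDesc = e.metaDescription.trim().length > 0;
  const hasH1 = e.h1.trim().length > 0;
  const hasLinks = e.internalLinks.length > 0;

  // Red: the request failed, or the page has neither a title nor an H1
  if (!e.httpOk || (!hasH1 && !hasTitle)) return "red";
  // Green only when everything is present AND it came from the real HTML,
  // not from the registry fallback
  if (hasTitle && hasDesc && hasH1 && hasLinks && e.source === "fetched")
    return "green";
  return "yellow";
}
```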
Converting links will break navigation
When you convert <a href> tags to React Router <Link> components (or vice versa), you might introduce full-page reloads where there should be smooth SPA transitions. Standard <a href> tags trigger a full page load. React Router <Link> components handle navigation client-side.
I had a round where the mobile nav and dropdown links all got converted to plain <a> tags, which meant every click caused a full page reload with a white flash. The fix is making sure everything uses <Link to="..."> which renders as a real <a> tag in the DOM (so crawlers can see it) but handles navigation without a page reload (so users get instant transitions).
Lazy loading will cause flash
Lovable uses React.lazy() and Suspense to code-split every page. This means navigating between pages shows a loading spinner while the new page’s JavaScript chunk downloads. After converting navigation to proper <Link> components, this flash becomes much more noticeable because transitions happen instantly on click instead of having a small delay from the old onClick handler.
The fix:
Remove lazy loading for all public marketing pages and only keep it for authenticated app pages (dashboard, tools, settings).
Security implications
Adding edge functions, database tables, and admin pages introduces attack surface. After implementing the SEO changes, run a security audit. Specific things to check:
- SSRF on the prerender function: The function accepts a URL path parameter. If it doesn’t validate the input, an attacker could make it fetch arbitrary URLs. Ensure the path must start with /, and cannot contain @, //, \, or protocol schemes.
- CORS: Lovable preview URLs change. If you hardcode allowed origins, your debug page will break when the preview URL rotates. Use pattern-based matching.
- Admin access: Make sure the SEO debug page checks for an admin role, not just an authenticated user.
- robots.txt: Remember to block /admin/ paths.
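For the SSRF check specifically, a small validator along these lines works (a sketch, not the exact function Lovable will generate):

```typescript
// Sketch of SSRF validation for the prerender function's ?path= parameter.
// Reject anything that isn't a plain, root-relative path.
export function isSafePath(path: string): boolean {
  if (!path.startsWith("/")) return false; // must be root-relative
  if (path.includes("//")) return false;   // blocks "//evil.com" and embedded "://"
  if (path.includes("@")) return false;    // userinfo trick: "/@evil.com"
  if (path.includes("\\")) return false;   // backslash variants: "/\evil.com"
  if (path.includes(":")) return false;    // conservatively block any scheme
  return true;
}
```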
The Master Prompt
Here’s the prompt you can paste directly into Lovable to implement the full fix. It’s been refined through multiple iterations to avoid the pitfalls above:
Our site is a client-side rendered React SPA. Search engine bots are receiving an empty <div id="root"></div> shell instead of rendered content. We need to fix crawlability without switching frameworks or requiring a full technical rebuild.

Important constraints: We cannot add custom build scripts, modify the web server, or add middleware. We need to work within the existing Vite + React + Supabase architecture.
Please implement the following:
1. Centralized Route Registry
Create src/data/siteRoutes.ts — a single source of truth for all public, indexable routes. Each entry should include: path, expected page title, meta description, expected H1 text, expected internal link paths, last modified date, sitemap priority, and change frequency. This file will be used by the sitemap generator, the SEO debug page, and the prerender cache.

2. Project-Wide Navigate Audit and Fix
Search every component and page file for onClick handlers that use navigate() for internal page links. For every instance on a public-facing page (not behind auth), convert from onClick={() => navigate(...)} to <Button asChild><Link to="..."> so crawlers see real <a href> tags. Do NOT convert navigate calls inside authenticated/dashboard pages — those are blocked by robots.txt and don’t need fixing. Show me the full list of what you found and what you converted.

3. Prerender Edge Function
Create supabase/functions/prerender/index.ts that:
- Accepts ?path=/some-page and optional ?refresh=true and ?bulk=true
- Fetches the published site URL + path via HTTP GET
- Parses the returned HTML using string/regex parsing to extract: title, meta description, H1, H2s, internal links
- Since a simple fetch of an SPA returns the JS shell, falls back to populating the cache from the route registry metadata
- Caches results in a prerender_cache database table
- Computes a health status: green (title + description + H1 + links all present), yellow (some missing), red (critical elements missing)
- Tracks the source of each field (“fetched” vs “registry_fallback”)
- Includes extraction_notes explaining what was found and where
- Requires admin authentication
- Validates the path parameter to prevent SSRF (must start with /, no @, //, \, or protocol schemes)
- Has rate limiting (5 bulk scans/hour, 60 single scans/hour)
Do NOT use Rendertron or any external rendering service. Rendertron’s public instance has been deprecated.
4. Dynamic Sitemap Edge Function
Create supabase/functions/generate-sitemap/index.ts that generates valid XML from the route registry with <lastmod>, <priority>, and <changefreq>. Requires admin auth. Keep robots.txt pointing to the static /sitemap.xml — the edge function is a generation tool, not the live endpoint.

5. Database Table
Create prerender_cache with columns: path (PK), html_snapshot, title, meta_description, h1, h2s (jsonb), internal_links (jsonb), internal_link_count, status, rendered_at, source, health, extraction_notes (jsonb), registry_complete (boolean). RLS: service role can read/write, admins can read, block anonymous access.

6. SEO Debug Admin Page
Create /admin/seo-debug (admin-only, with role check — not just authenticated) with:

- Single page inspector: enter a path, scan it, see extracted title/description/H1/H2s/links with health badge and extraction notes
- Crawl coverage dashboard: table of all routes showing health status, what’s present/missing, last scanned date, with per-row refresh and bulk scan
- Sitemap regeneration button
7. Enhanced index.html
Add a <noscript> block with site name, description, and key internal links as real <a> tags.

8. App.tsx and robots.txt
Add the admin route. Ensure robots.txt blocks /admin/. Do not remove any existing disallow rules.

After implementation, also:
- Remove React.lazy() for all public marketing pages. Only keep lazy loading for authenticated pages (dashboard, tools, settings). Public page navigation should have zero loading flash.
- Confirm all CORS allowed origins cover both .lovable.app and .lovableproject.com preview domains plus the production domain.
I would suggest using Lovable’s “plan” feature so this is done in phases and the AI doesn’t get confused.
What comes next
Don’t trust that everything is fixed just because Lovable says it’s done. Here’s how to actually check.
View Page Source (not Inspect Element).
Right-click any page and choose “View Page Source.” This shows you the raw HTML the server sends — the same thing a crawler gets. You should see your <title>, <meta name="description">, Open Graph tags, structured data, and the <noscript> block with your H1, site description, and internal links as real <a> tags. If you only see <div id="root"></div> and script tags outside of the noscript block, that’s expected for an SPA — the noscript content is your crawlable fallback.
Check from the command line.
Run curl -s https://yoursite.com/ | grep '<a href' to count the links a basic crawler would find in the raw HTML. Then try curl -s https://yoursite.com/ | grep -i '<h1' to check for headings. You should see links in the noscript block at minimum.
Count your internal links in the DOM.
On your homepage, open dev tools, go to the console, and run document.querySelectorAll('a[href^="/"]').length. This tells you how many internal anchor links exist on the rendered page. Do the same on a few content-heavy pages. If the numbers are healthy (10+ on pages with navigation and CTAs), your link structure is working.
Inspect the CTA buttons.
Right-click any call-to-action button on your public pages and Inspect Element. You should see a real <a href="/your-path"> in the DOM, not a <button> with an onClick handler. If it’s an <a> tag, crawlers can see it.
Check your sitemap.
Visit yoursite.com/sitemap.xml directly. Make sure every public page is listed and the URLs are correct.
Use Google’s Rich Results Test.
Go to search.google.com/test/rich-results and enter your URL. It renders the page using Google’s actual rendering engine and shows you the rendered HTML and a screenshot. This is the closest thing to seeing what Googlebot actually sees.
Monitor in Google Search Console.
Submit your sitemap, then use the URL Inspection tool to test 5-10 key pages. Check what Google sees in its rendered preview. Under Pages, watch the "Not indexed" reasons: "Discovered - currently not indexed" means Google found the page but hasn't processed it yet (give it time), while "Crawled - currently not indexed" means Google fetched it but found insufficient content (that's a problem).
If pages move from "not indexed" to indexed over the coming days, the fixes are working. (This will take a while, mine is still running, but I’m hopeful as previously it immediately showed nothing.)
Meanwhile…
The debug dashboard is your ongoing monitoring tool. Check it periodically to make sure new pages you add have complete metadata in the route registry and that nothing has regressed.
And that is our nerdy roundup of the week. Thanks for sticking around!
See you next time,
A

