For SEO reasons, I want to make sure the permalinks (the URLs ending in moovweb.io that are created for every deploy) do not get indexed by search engines like Google, Bing, etc., while still allowing the production URL of my site to be indexed.
You can solve this by using CDN-as-JavaScript to send the `X-Robots-Tag: noindex` header whenever the request's host ends in moovweb.io. To do this, put something like the following as one of the first statements in your router:
```js
.match(
  {
    headers: {
      // Escape the dots so they match literal "." characters,
      // and anchor with $ so only *.moovweb.io hosts match.
      host: /.*\.moovweb\.io$/,
    },
  },
  ({ setResponseHeader }) => {
    setResponseHeader('X-Robots-Tag', 'noindex')
  }
)
```
Here’s a full router example:
```js
const { Router } = require('@xdn/core/router')
const { nextRoutes } = require('@xdn/next')
const { API, SSR, cacheResponse } = require('./cache')
const prerenderRequests = require('./xdn/prerenderRequests')

module.exports = new Router()
  .prerender(prerenderRequests)
  .match(
    {
      headers: {
        host: /.*\.moovweb\.io$/,
      },
    },
    ({ setResponseHeader }) => {
      setResponseHeader('X-Robots-Tag', 'noindex')
    }
  )
  .match('/service-worker.js', ({ serviceWorker }) => {
    serviceWorker('.next/static/service-worker.js')
  })
  .match('/', cacheResponse(SSR))
  .match('/api', cacheResponse(API))
  .match('/s/:categorySlug*', cacheResponse(SSR))
  .match('/api/s/:categorySlug*', cacheResponse(API))
  .match('/p/:productId', cacheResponse(SSR))
  .match('/api/p/:productId', cacheResponse(API))
  .use(nextRoutes)
  .fallback(({ proxy }) => proxy('legacy'))
```
Would a robots.txt file function similarly to noindex? https://yemlihatoker.com/robots.txt
They are similar but subtly different: robots.txt tells search engines which pages they may crawl, while noindex tells them which pages they should not index. I'd recommend Googling "noindex vs robots.txt" for more details.
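As a concrete illustration (with a hypothetical path), a robots.txt rule like the one below only stops crawlers from fetching matching URLs; a URL that search engines already know about can still appear in results. The `X-Robots-Tag: noindex` header set in the router above is what actually keeps a page out of the index:

```
# robots.txt — controls crawling only, not indexing
User-agent: *
Disallow: /drafts/
```

Note that the two can even conflict: if robots.txt blocks a page, crawlers never fetch it and therefore never see its noindex signal.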