
Setting up Internationalisation for a Content‑Driven Website: AI Lessons and What Actually Worked

  • Writer: Mark Waldron
  • Feb 5
  • 5 min read

When I first decided to internationalise whentotravel.com, I assumed the hard part would be translation. It turned out the real challenge was choosing the right approach for a content‑heavy Next.js site without over‑engineering myself into a corner.

This post is a walkthrough of the dead ends, false starts, and eventual solution that finally clicked.


This AI image is so funny I had to include it. What kind of monitor / laptop combination is that?! Omg!

The Problem with the Official Recommendation


My starting point was the Next.js documentation on internationalisation:

On the surface, it looks reasonable. But for an SEO‑focused, content‑based website, the recommended approach is simply wrong.

The documentation explicitly suggests relying on the user’s browser language preferences:


It’s recommended to use the user’s language preferences in the browser to select which locale to use. Changing your preferred language will modify the incoming Accept-Language header to your application. For example, using the following libraries, you can look at an incoming Request to determine which locale to select, based on the Headers, locales you plan to support, and the default locale.

This approach might make sense for a logged‑in application that adapts itself to an individual user. For an SEO‑driven website, however, it is close to the worst thing you can do.
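For concreteness, header-based negotiation boils down to something like the following. This is a minimal, self-contained sketch rather than the docs' actual example (which uses the negotiator and @formatjs/intl-localematcher packages); the supported locale list here is illustrative:

```typescript
// Sketch of Accept-Language-based locale negotiation.
// Illustrative locale list; a real app would share this with its config.
const SUPPORTED_LOCALES = ['en', 'de', 'fr'];
const DEFAULT_LOCALE = 'en';

function pickLocale(acceptLanguage: string | null): string {
  if (!acceptLanguage) return DEFAULT_LOCALE;
  // Parse "de-DE,de;q=0.9,en;q=0.8" into base language tags ordered by quality.
  const preferred = acceptLanguage
    .split(',')
    .map((part) => {
      const [tag, q] = part.trim().split(';q=');
      return {tag: tag.split('-')[0].toLowerCase(), q: q ? parseFloat(q) : 1};
    })
    .sort((a, b) => b.q - a.q);
  const match = preferred.find((p) => SUPPORTED_LOCALES.includes(p.tag));
  return match ? match.tag : DEFAULT_LOCALE;
}
```

The problem is visible immediately: the same URL returns different content depending on a request header, which is exactly what a crawler cannot index reliably.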


Why This Is Bad for SEO


Search engines do not behave like users with stable browser preferences. They crawl your site from multiple regions, with varying or missing Accept-Language headers, and they expect deterministic, crawlable URLs.


When language selection is based on request headers instead of URLs:

  • Search engines cannot reliably index each language. The same URL may return different content depending on the crawler’s headers.

  • There is no stable, canonical URL per language, which makes it impossible to signal intent via hreflang annotations.

  • Content can be treated as duplicate or inconsistent, hurting ranking rather than improving it.

  • Language‑specific backlinks become meaningless, because /about might be English today and German tomorrow.


From an SEO perspective, every language version of a page must be:

  • Addressable via a unique URL (/en, /de, /fr, etc.)

  • Consistently rendered regardless of who or what is requesting it

  • Explicitly declared to search engines using hreflang


Header‑based language negotiation breaks all of these assumptions.

In short: search engines cannot rank what they cannot reliably crawl, and they cannot crawl what does not have a stable URL.

This is the fundamental mismatch between the Next.js recommendation and the needs of a content‑heavy, search‑driven site.


So the obvious next step was to ask an AI assistant (ChatGPT, in this case) how to do it properly. I told ChatGPT that I was developing a Next.js website and needed to internationalise my routing, and asked it to suggest how to set that up.


When AI Makes Things Worse


The answers I got initially looked promising. I followed its suggestions, but each iteration pushed me toward a more complex and increasingly hacky setup. Route rewrites, duplicated layouts, and fragile conventions piled up fast.

I did learn something useful along the way — route groups in Next.js — but in this context they turned out to be a distraction rather than a solution.


Lessons learnt:

  • ChatGPT is probably not the best model for coding suggestions

  • AI assistants try to keep you engaged. They use flattery and positive reinforcement to draw you into more and more interaction

  • The longer you follow their path, the further you get from your goals

  • REMAIN GOAL FOCUSED!


At this point, I stopped experimenting and went back to the docs. Old school.


The Important Links Everyone Skips


At the bottom of the Next.js i18n page are links to third‑party libraries. That’s where the real answers live.

One library immediately stood out:

next-intl

After a short investigation, it was obvious this was designed for exactly the kind of problem I had.


Why next-intl Fits an SEO-Focused Content Website


Locale is part of the URL, not inferred

  • next-intl encourages explicit, URL-based locales (/en, /de, /fr)

  • This gives each language version a stable, crawlable, indexable URL

  • Search engines can reliably associate content with a specific language and region

  • A given URL always renders the same language, regardless of headers

This aligns directly with Google’s documented best practices for international SEO.

For SEO, predictability beats “smart” behaviour every time.
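A URL-based setup starts with a routing definition roughly like this, using next-intl's defineRouting helper. This is a sketch assuming a recent next-intl version; the file path and locale list are illustrative:

```typescript
// src/i18n/routing.ts — sketch of a next-intl routing definition.
import {defineRouting} from 'next-intl/routing';

export const routing = defineRouting({
  // Every locale gets its own URL prefix: /en, /de, /fr
  locales: ['en', 'de', 'fr'],
  // Used when the URL carries no locale prefix
  defaultLocale: 'en'
});
```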


First-class support for hreflang

  • next-intl makes it straightforward to generate correct hreflang tags

  • Each language variant can explicitly reference its alternates

  • This helps search engines understand:

    • Which pages are translations of each other

    • Which version to serve in which market

Without hreflang, multilingual sites often end up competing with themselves or confusing search engines with what looks like duplicate content.
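In a Next.js app, per-page hreflang alternates can be declared through the Metadata API's alternates field. A sketch, assuming locales live at /en, /de, /fr; the domain and page are illustrative:

```typescript
// app/[locale]/about/page.tsx — sketch of hreflang alternates via the
// Next.js Metadata API. example.com is a placeholder domain.
import type {Metadata} from 'next';

export const metadata: Metadata = {
  alternates: {
    canonical: 'https://example.com/en/about',
    // Each entry is rendered as <link rel="alternate" hreflang="..." href="...">
    languages: {
      en: 'https://example.com/en/about',
      de: 'https://example.com/de/about',
      fr: 'https://example.com/fr/about'
    }
  }
};
```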


Clean separation of routing, content, and translation

  • Routes define structure

  • Message files define language

  • Components stay focused on rendering

This separation is crucial as content grows. It avoids:

  • Hard-coding strings

  • Duplicating pages per language

  • Maintaining parallel directory trees that inevitably drift out of sync
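The separation above looks roughly like this in practice; the message namespace and page are illustrative:

```typescript
// messages/en.json (one file per language) might contain:
//   { "About": { "title": "About us" } }

// app/[locale]/about/page.tsx — the component only renders; the words
// come from the message file for the active locale.
import {useTranslations} from 'next-intl';

export default function AboutPage() {
  const t = useTranslations('About');
  return <h1>{t('title')}</h1>;
}
```

Adding a new language then means adding one message file, not duplicating a directory tree.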


Works with modern Next.js rendering

  • Fully compatible with the App Router

  • Plays nicely with:

    • Server Components

    • Static generation

    • Incremental Static Regeneration

  • No hacks, no middleware gymnastics

You get SEO-friendly output and modern Next.js features without trade-offs.
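For static generation, every locale is pre-rendered by returning one route param per locale from generateStaticParams. A self-contained sketch; the locale list is illustrative and would normally come from the routing config:

```typescript
// Pre-render one page per locale at build time. Illustrative locale list.
const locales = ['en', 'de', 'fr'];

export function generateStaticParams() {
  // One entry per locale -> one statically generated page per locale
  return locales.map((locale) => ({locale}));
}
```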


SEO decisions are explicit, not accidental

  • With next-intl, you consciously choose:

    • Which languages exist

    • Which URLs they live at

    • Which pages are indexed

  • Nothing is left to browser defaults or crawler heuristics


That explicitness is exactly what search engines reward.


What did I do wrong?


I believe that by asking ChatGPT how to do it with Next.js, I got answers that took me down a winding path of recreating something that already exists. I had asked the AI the wrong question, and it did not consider the external libraries that already exist.


What I needed to do was define more precisely what I wanted, much like writing a user story for a team. The better you define the requirement and make sure the team understands it, the better the final result. When in discovery mode, looking for a solution, I should have been explicit and told the AI to consider third-party libraries, rather than constraining it with prompts that narrow the solution space, like "how do I do this using Next.js?"


I have since stopped using ChatGPT for these tasks and moved on to Claude. The difference is astounding; I will cover that in a future post.


The next-intl experimental feature that really helps me move quickly


I can't recall how I stumbled on this experimental feature, probably through some combination of Google and AI chat, but it's definitely one I will be using.


Message Extraction

This approach writes changes directly into message files and automatically creates keys based on the source-language text. That means:

  • No wasting time inventing abstract or premature translation keys

  • No refactoring keys when wording or context changes

  • Faster iteration during early development


When new text is added, empty entries are created in the other languages automatically. It's exactly the kind of workflow support that makes localisation manageable instead of painful. I want to spend my time creating pages with content, not worrying about naming translation keys correctly. Naming things takes so much time! It's one of the biggest time sinks, so anything I can do to avoid it is a big win.
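Conceptually, after adding a new English phrase, the message files end up shaped something like this. This is purely illustrative; the actual key format and file layout are determined by the experimental extraction feature, and the key shown here is made up:

```json
{
  "messages/en.json": {"k3x9Qa": "Best time to visit Portugal"},
  "messages/de.json": {"k3x9Qa": ""}
}
```

The English file holds the source text under an auto-generated key, and every other language file gets an empty placeholder awaiting translation.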


The feature is still marked experimental, so I’ll keep an eye on changes — but it was simple enough to set up that I’m confident future adjustments won’t be hard to handle.


With this in place I have the routing and site structure handled, and the English version of the site is generated naturally as part of the development cycle. The other language translation files are created automatically, and empty placeholders are added to the translation dictionaries whenever new English phrases are added.


This puts me in a great position to move on to the next part: automated translation of the English phrases and change management. I will cover that in the next post.


Need help?

If you want help setting up your site for a multilingual audience, please get in touch: info@cortexit.co.uk










