Automating Translations with AI: A Practical Solution for Cost and Ease of Development
- Mark Waldron
- Feb 17
- 5 min read
The Problem
I'm building whentotravel, a multilingual travel site with 14 languages. Every time I update English content, I need translations for Arabic, Danish, German, Spanish, French, Italian, Japanese, Korean, Dutch, Norwegian, Portuguese, Swedish, and Chinese (both simplified and traditional). Manually managing this is tedious and error-prone.
Could AI automation handle this efficiently without breaking the bank and without adding additional layers of development overhead?

The Solution
I started looking into this in a previous post, and the solution came out of that initial exploration:
- Use next-intl's experimental useExtracted() feature to manage message translation files during development.
- Combine this with scripts that call the Anthropic API to translate the untranslated text.
- Add a test suite that ensures all visible text in the app goes through next-intl's translation features and follows the patterns required for auto-translation to work as expected.
- Add a pre-commit hook that runs the test suite and flags missing translations.
- Add a pre-push hook that automatically runs the translation scripts if that step has somehow been missed during the current development cycle.
Key decision: Pre-push vs CI
I considered running translations in the CI pipeline but chose pre-push instead:
- Translations are committed atomically with code changes
- Zero CI cost for translations
- Immediate feedback if something fails
- No incomplete translations in production
Next-Intl useExtracted() and Turbopack
This is a great new feature in next-intl. It's experimental at the moment, but I really hope they keep it (or even improve it). It makes this whole approach possible and removes a lot of the pain points.
1. Automatic Key Generation
Instead of manually defining keys in a JSON file (e.g., t('home.title')), you use the useExtracted() hook and pass the actual source text.
```tsx
const t = useExtracted();
return <h1>{t('Hello world')}</h1>;
```
Result: During development, Turbopack (or Webpack) uses a specialized loader to "peek" into your files. It assigns a unique, minified key to that string and automatically updates your message catalogs (like en.json).
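The resulting catalog looks roughly like this. The hashed keys shown here are illustrative; the exact key format next-intl generates is an implementation detail.

```json
{
  "Ab3xK9": "Hello world",
  "Qw7mP2": "Plan your next trip"
}
```

Target-locale catalogs get the same keys with empty-string values until they are translated.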
2. Turbopack Integration (Next.js 16+)
Turbopack significantly optimizes this extraction process through several mechanisms:
Fast Peeking: Next.js 16 introduces an optimization that allows Turbopack to check if a file uses useExtracted() before fully processing it, keeping the dev server fast.
Instant HMR: When you change a string inside t('...'), Turbopack's Hot Module Replacement updates the emitted translation catalog in memory. Your browser reflects the new text immediately without a page reload.
Rust-based Transformation: The SWC compiler handles the heavy lifting, transforming your useExtracted() calls into standard useTranslations() calls behind the scenes for the production bundle.
3. Catalog Syncing
If you modify a message, the loader not only updates the source locale but also keeps your target locales in sync by adding empty entries or placeholders for the new strings.
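The syncing behaviour described above can be sketched as a pure function, assuming flat key-to-string catalogs (next-intl's loader does this for you; `syncCatalog` is just my name for the idea):

```typescript
// Keep a target catalog aligned with the source catalog:
// new keys appear as empty placeholders, stale keys are dropped,
// and existing translations are preserved.
export function syncCatalog(
  source: Record<string, string>,
  target: Record<string, string>,
): Record<string, string> {
  const synced: Record<string, string> = {};
  for (const key of Object.keys(source)) {
    synced[key] = key in target ? target[key] : "";
  }
  return synced;
}
```

The empty strings left behind are exactly what the translation script later looks for.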
All of this means that the developer (or AI agent) can write in their native language without worrying about naming translation keys. I personally hate having to spend a long time thinking about naming: it takes far too much time and introduces a refactor hotspot in the code, so removing it is a joy. The message catalogs are all kept in sync, and retranslation requirements are identified quickly and automatically by resetting the non-source-language messages to empty strings. This makes identifying what needs retranslating very simple.
The Claude AI Translator
I worked with Claude and Gemini to generate this script. I quite like doing the initial planning with Gemini to keep the cost down. I had to battle Claude a little at times to get it into shape and stop it producing some obviously wild code, but in the end I am happy with this version. It could do with a little more tweaking by the time I reuse it in more than one more project, but at this point it's solid enough for what I need from it.
Here are some of its key benefits:
Prompt Caching for Bulk Processing: Leverages Anthropic's prompt caching to store heavy system instructions and context, reducing costs by up to 90% for repetitive translation tasks. (Yes, the prompt needs to be longer to actually trigger caching, but it will get there as I iterate.)
Tiered Intelligence Strategy: Routes shorter UI strings (buttons, labels) to Claude Haiku for instant, low-cost results, while sending long-form content (descriptions, blogs) to Claude Sonnet for superior linguistic nuance.
Smart Batching: Grouping translations into batches of 10–25 strings reduces API overhead and avoids hitting rate limits while maintaining a fast processing pace.
Incremental "Dirty" Checking: The script identifies only untranslated or empty strings, meaning you only pay to translate new content rather than re-processing the entire library.
Composite Key Lookup: In the .po files I noticed that next-intl can sometimes generate the same msgid for different messages when the msgctxt differs (a different page). So to uniquely identify the message that needs to be updated, you have to key it on a composite of both msgid and msgctxt.
Context-Aware Translation: Injects specific travel-industry personas and brand guidelines (e.g., "never translate the brand name") directly into the AI system prompt.
Native-Level Phrasing: Moves beyond literal word-for-word translation by instructing the LLM to focus on "web-friendly" and "natural native-speaker" phrasing.
Format Preservation: Automatically handles and protects complex string elements like placeholders, HTML tags, and special characters during the translation process.
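Three of the ideas above fit in a short sketch: the composite msgid/msgctxt key, the incremental "dirty" check, and batching. The `Entry` shape and function names are my assumptions for illustration; the real script works over .po files.

```typescript
// Assumed shape of a translatable entry, mirroring .po semantics.
interface Entry {
  msgid: string;
  msgctxt: string; // e.g. the page/component the string came from
  msgstr: string;  // empty until translated
}

// msgid alone can collide across contexts, so key on both parts.
export const compositeKey = (e: Entry): string =>
  `${e.msgctxt}\u0000${e.msgid}`;

// Only untranslated entries are sent to the API, so unchanged
// content is never paid for twice.
export function dirtyEntries(entries: Entry[]): Entry[] {
  return entries.filter((e) => e.msgstr.trim() === "");
}

// Group work into fixed-size batches (10–25 strings in practice)
// to cut per-request overhead and stay under rate limits.
export function batch<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Each batch of dirty entries is then sent to Haiku or Sonnet depending on string length, per the tiered strategy above.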
Over time I can improve this. Some of the requirements are currently only requested in the prompt, not enforced, so some stronger guards might be needed here. From a customer and SEO perspective I really want my translations to refer to cities, locations, and countries using the names commonly known to the person reading the article. I am not 100% sure that is the case at the moment; I need to test and amend the prompt if needed. I also want to develop a strong, or at least consistent, voice for the site: a persona which is recognisable. I will need to start thinking like a brand manager for that, so that refinement calls for a different hat.
It has great foundations though. And it's cheap: a full translation of my homepage into 14 languages cost ~$0.02. I think that's great value considering the benefits it provides, benefits that will only get better the more effort I put into its refinement.
When combined with the tests and workflow elements that I developed to support this I think we have a strong base.
Contact me if you want to talk about these subjects or are interested in finding out more about the configuration and tests that go alongside this to build it into the development workflow.