Control Shift: How LLMs Are Changing Localization in a Way Machine Translation Never Could

Since OpenAI released ChatGPT to the world, and with tech companies racing to train and release better, faster, and more specialized language models, LLMs and AI have become household names.

With the debate on human vs. machine once again at the forefront of the language industry, we find ourselves having the same discussions as when machine translation was introduced, covering topics such as:

  • Control
  • Use cases
  • Specialized expertise and skillsets
  • Naming and self-identification
  • Pricing
  • Processes and standards

But here’s what I think has indeed changed this time round.

Control Has Changed Sides

The most impactful change brought about by LLMs, and the primary driver of disruption, is that control has shifted from the language service provider (LSP) side to the buyer side. OpenAI’s decision to release ChatGPT to the public handed buyers a sense of power and autonomy that had previously resided with service providers.

Use Cases Are Crowdsourced, Not Imposed

Unlike machine translation, which was cloaked in science and came with a pre-defined use case (embedded in its very name), LLMs were launched without a specific use case attached. No sandbox environment, no constraints, just a playground for free experimentation. As a result, their potential applications were effectively crowdsourced to millions of users across industries, making the launch one of the largest user research projects ever conducted.

Technology Can No Longer Be Separated from the Offering

Like many service providers across industries, LSPs have traditionally outsourced their technology in a transactional rather than strategic manner, relying almost entirely on tech providers’ roadmaps. When these roadmaps were forward-looking, LSPs benefited; when they stagnated, LSPs’ capabilities were limited. The focus has largely been on internal efficiencies rather than on building a customer-centric infrastructure.

LLMs and AI have abruptly made us realize that technology cannot be an afterthought, something layered on top of your service offering. Instead, technology must be the spinal cord that drives a competitive, future-proof service. Technology should not only support operations; it should define and measure how services are rendered and how they evolve, enabling greater speed, quality, and scalability.

Expertise Is Still Being Acquired, and the Matching Skill Set Is Still Being Defined

As new use cases emerge, we witness a familiar pattern: buyers drive change, while suppliers first struggle to justify their value and then scramble to adapt their solutions to the new reality. Understanding the new advanced tools comes more easily to those with a technical background, so both buyer and supplier teams are seeking engineering expertise, while non-technical professionals are either upskilling or resisting change. The players best positioned to capture outsized value from the latest technology early on are tech providers, both within and outside the language industry.

Naming as a Self-Identification Stronghold Is No Longer Justified

New job titles are emerging as new teams form and new skills are required. Yet, translators continue to safeguard their title as their last stronghold of self-identification. Historically, job titles were tied to academic degrees, but that has not been the case in many industries for years. Translators have long performed various tasks—editing, proofreading, post-editing, research, terminology management, localization, culturalization, trans-adaptation, layout checks, linguistic testing—but these shifts never sparked debates on self-identification. So why has the rise of LLMs triggered such discussions? Even if translators need to rethink their title, their value can now extend to new areas: terminology validation, register adjustment, fact-checking, source validation, and relevance assessment.

Discussion Should Be About Offerings, Not Just Pricing

Buyers, LSPs, and freelance linguists are once again negotiating pricing, but what’s the point of discussing the price of an obsolete offering? Raw translation is already effectively free. And if translation is effectively free, any bundle that previously included it needs to be re-evaluated, along with the associated pricing models.

Take the legacy TEP (Translation, Editing, Proofreading) workflow, for example. The proofreading step has already become obsolete for most content types other than document-based content, yet we still refer to TEP instead of TE. Traditionally, translation (T) accounted for 70–80% of the rate, while editing (E) made up 20–30%. Now that LLMs provide the translation step at effectively no cost, the real value in the service offering shifts from translation to editing. It follows that editing LLM-generated content should carry its own independent price, potentially equivalent to what was previously charged as the “new word rate” for the full TEP service. New product, new price.
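To make the arithmetic concrete, here is a minimal sketch with invented numbers; the per-word rate and the exact 75/25 split are hypothetical, chosen from within the ranges above.

```python
# Hypothetical illustration only: the article cites ranges (T 70-80%, E 20-30%);
# the per-word rate and the 75/25 split below are invented for the arithmetic.

old_tep_rate = 0.10                # legacy per-word rate for full TEP, in USD
t_share, e_share = 0.75, 0.25      # translation vs. editing share of that rate

legacy_t = old_tep_rate * t_share  # 0.075 per word: the step LLMs now give away
legacy_e = old_tep_rate * e_share  # 0.025 per word: the step that still carries value

print(f"Legacy split per word: T = {legacy_t:.3f}, E = {legacy_e:.3f}")
# If T drops to roughly zero and the old E slice is kept as the new price, the whole
# service would be billed at 0.025 per word, even though editing LLM output is now
# the entire offering rather than a 25% add-on. Hence: new product, new price.
```

The specific figures do not matter; the point is that repricing must start from the value of editing LLM output as a standalone service, not from the leftover slice of a legacy bundle.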

Current Processes and Standards Are Hardly Relevant

Just as naming conventions, service offerings, and pricing models are being challenged, so too are industry processes and standards.

LSPs have long justified their value through vendor management and project management functions. But now, what kind of talent should vendor managers source or train if not traditional ‘translators’? How will localization project managers run projects involving LLMs across various tasks? Are ISO standards for quality, translation workflows, and machine translation post-editing (ISO 9001, 17100, 18587) still relevant when LLM-generated content is replacing machine translation, or when workflows demand different skill sets? Management teams now need to make significant new strategic decisions.




A Win-Win Future: Maximizing the Potential of LLMs

While localization professionals can leverage LLMs for tasks such as quality control, terminology management, and workflow automation, the broader industry must consider deeper structural shifts.
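Before turning to those structural shifts, here is one small, hedged illustration of the day-to-day angle: a sketch that asks an LLM to flag glossary terms a translated segment fails to use. It assumes the OpenAI Python SDK and an API key in the environment; the model name, glossary, and prompt wording are placeholders, not a recommended setup.

```python
# Minimal terminology-QA sketch. Assumptions: OpenAI Python SDK installed and
# OPENAI_API_KEY set; the model name, glossary, and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

glossary = {"dashboard": "tableau de bord", "settings": "paramètres"}
source = "Open the dashboard and adjust the settings."
target = "Ouvrez le tableau et ajustez les paramètres."

prompt = (
    "You are a terminology QA reviewer. Using the glossary, check whether the "
    "French target uses the approved term for each English source term. "
    "List any violations, or reply 'OK'.\n"
    f"Glossary: {glossary}\nSource: {source}\nTarget: {target}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # should flag the missing 'tableau de bord'
```

A human linguist still owns the decision; the model only surfaces candidates for review.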

Key areas of focus include:

  • Talent Mobility—Greater movement between the provider and buyer sides can unlock value, allowing professionals to shift into roles related to AI governance, content strategy, asset qualification, and data-informed decision-making.
  • Limitations of Current LLMs—Existing LLMs are either too broad, offering large general-purpose models with limited predictability for specific use cases, or too narrow, prioritizing English over true multilingual enablement. The industry must advocate for AI models that better serve global linguistic diversity, an area where localization professionals can contribute real value.
  • Interdisciplinary Work Groups—To keep pace with AI advancements, industry stakeholders should establish interdisciplinary work groups to redefine standards, workflows, and quality expectations in line with evolving AI capabilities.
  • Academia as a Bridge—Universities and research institutions can play a greater role in bridging the gap between legacy market standards and the latest technological advancements. By updating their curricula and fostering collaboration between academia and industry, they can help train a future-ready workforce with the relevant skills needed to navigate this evolving landscape.

Some of these ideas are not new, and yet here we are, once again pondering them in the realm of theory, because it’s hard to move from idea to implementation, because it’s hard to be the first to change, and because the status quo is really good at sticking around.

So, where do we start? Share your thoughts in the comments!

Roberto Ganzerli

LSP Growth | Language Industry Expert | M&A Advisor | Advisory and Corporate Board Member

1mo

Great insights, Vasso! Thank you for your article.

Tayseer Hamwi

Head of Business Development at LocHere

1mo

Even the post-MT era is now finished. Take a quick look at locaitra.com, the first AI-dedicated translation platform, and enjoy 10K FREE translated words in any language pair you like.


"Great insights! The industry is evolving fast, and it’s exciting to see new possibilities for both humans and AI. I’m also part of this journey. Team dtplabs.com

Hector B.

Localization Strategist - Solutions Architect

1mo

Very informative and useful tips. To thrive in the changing localization landscape, organizations should assess their current workflows and integrate advanced AI tools like LLMs for enhanced efficiency. Redefining roles within localization teams and providing relevant training is essential. Developing customized solutions tailored to client needs and fostering interdisciplinary collaboration will ensure continuous improvement. Monitoring performance and committing to ethical practices will further strengthen brand reputation. Engaging with industry partners and academic institutions will keep organizations ahead of trends, positioning them to leverage both AI and human expertise effectively.

Manuel Herranz

On a mission towards an AI that’s more multilingual, accurate and responsible. We gather, process, prepare and structure ethical data for AI. I’m a technology analyst, frequent speaker at industry events.

1mo

We ARE moving to a flat-rate service, or one where the unit of cost is not the word. That is, one in which buyers pay for the software architecture providing the service (customisation, personalisation, adaptation, MTQE, agents, etc.), but the actual MT is flat-rate, in a pure SaaS model. I have heard Renato Beninatto talk about a shift similar to what happened in telephone services years ago. Watch the film “BlackBerry” and you’ll see what happened to a world that paid 0.10 per SMS. Just in case someone here does not pay a flat rate for telephone & data services, or to use the internet at home or to watch 1 or 100 movies a month: the “per unit” cost model gave way to data-driven services.
