As artificial intelligence (AI) becomes more commonplace, Polish has emerged as the most effective language for prompting, outperforming English, Spanish, French, and Chinese, according to a new study from the University of Maryland and Microsoft.
Key Takeaways:
- Polish achieved the highest prompt accuracy of 88%.
- English ranked 6th, despite being most common in AI training data.
- Chinese ranked near the bottom, 4th-lowest out of 26 languages tested.
Polish Takes the Top Spot in AI Prompting Efficiency
Researchers evaluated responses from leading AI systems using identical prompts across 26 languages and found that Polish delivered the highest accuracy rate, while English placed a surprising sixth, according to the study ‘One ruler to measure them all: Benchmarking multilingual long-context language models.’
Specifically, across the tested models, Polish demonstrated an 88% accuracy rate in completing tasks based on identical prompts. French, Italian, Spanish, and Russian also scored highly, while English landed just below them. As the team explained:
“Our experiments yield several surprising and counterintuitive results. For one, English is not the highest-performing language across all models; in fact, it is the sixth-best language out of the 26 when evaluated at long-context lengths (64k & 128k), while Polish takes the top spot.”
Indeed, the findings challenge a widespread belief that English, the dominant language of AI development and web content, is the most efficient tool for interacting with advanced language models. Instead, researchers found that AI interprets Polish, historically known as one of the hardest languages to learn, particularly well.

Perhaps even more surprising is that training data volume cannot explain the performance gap. English and Chinese dominate the AI training corpus, yet Chinese placed near the bottom, ranking fourth from last in accuracy.
As such, the results point to deeper linguistic and model-training factors that researchers plan to explore further. Possibilities include grammatical structure, token efficiency, and model optimization techniques that may inadvertently benefit languages with more rigorous syntax.