Starcom: Nexus has about 65,000 words of text, roughly the length of a short novel. How does an indie game dev know which languages, if any, to translate into?
Translation costs vary, but I’ve seen estimates ranging from $0.02 to $0.08 per word if you use a freelancer or $0.10 to $0.30 per word via an agency, depending on the language. Assuming a lower-end average cost of $0.04 per word, that’s $2600 per language.
Let’s take Russian, for example. From my research, our genre (top-down space action-adventure) is relatively popular with Russian speakers: multiple titles were developed in Russian first and then translated to English, such as Space Rangers HD. On the other hand, using Steam’s default regional prices, the game would sell for roughly one-third of the US dollar amount. A Russian-translated version would need to sell nearly 1,000 additional copies to break even after Valve’s percentage, VAT, etc.
I really wasn’t sure how well the game would sell: I had spent considerable time trying to forecast it, but there was a lot of uncertainty. Wishlists from the Russian Federation accounted for less than 5% of total wishlists.
So going into Early Access, I didn’t feel I could justify professional localization. Instead, I decided to run an experiment: I would translate the bulk of the game’s text with Google Translate. The game would be advertised as English-only, but once launched it would have “experimental” language options.
If a player switches to an experimental language, the game shows a pop-up explaining that they are seeing an automatic translation and that they can suggest improvements to most text in the game by right-clicking it while holding the Alt key. This opens a dialog with the original English text and an editable text area for the translation. Once submitted, the suggestion is sent to a web server that stores it in a database table. After I’ve verified that someone isn’t spamming vandalized text, I can mark it as an approved replacement.
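A rough sketch of how that suggestion pipeline might look. Everything here is hypothetical — the class names, fields, and in-memory store are mine, standing in for the game’s actual server and database table:

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    symbol_id: str   # which chunk of translatable text
    language: str    # e.g. "ru", "zh-Hans"
    original: str    # English source text shown in the dialog
    suggested: str   # player's proposed translation
    approved: bool = False


class SuggestionStore:
    """Stands in for the web server's database table."""

    def __init__(self):
        self._rows: list[Suggestion] = []

    def submit(self, s: Suggestion) -> None:
        self._rows.append(s)

    def approve(self, index: int) -> None:
        # the manual review step: mark as an approved replacement
        self._rows[index].approved = True

    def replacements(self, language: str) -> dict[str, str]:
        # approved suggestions override the machine translation
        return {s.symbol_id: s.suggested
                for s in self._rows
                if s.approved and s.language == language}


store = SuggestionStore()
store.submit(Suggestion("hud.date", "ru", "Date", "Дата"))
store.approve(0)
print(store.replacements("ru"))  # {'hud.date': 'Дата'}
```

At load time, the game would merge the approved replacements over the Google Translate baseline for the player’s chosen language.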
The objective was to render the game playable for non-English speakers until the translation was sufficiently improved by either player submissions, dedicated volunteers, or professional translators in the event the game did well enough.
Stardate: Netflix and Chill
I don’t really speak any language fluently besides English, but I have a half-forgotten high school understanding of French. So the first language I tried was French. My French isn’t good enough to know whether or not the machine learning translation was always grammatically correct, but I immediately recognized a problem.
The HUD displays the current in-game date. After switching to French, it read “Rendez-vous amoureux,” which literally means “romantic meeting.” Without context, Google Translate didn’t know whether the word referred to a calendar date or an OkCupid kind of date, and it guessed wrong.
“Hail” was another problem. Early testers playing in German reported that Google had chosen the “falling ice” meaning instead of the intended “contact another vessel” meaning.
One of the game’s alpha testers pointed out that the game’s use of forced capitalization in some UI components was producing incorrect text in his native language of Turkish. In Turkish, a dotted “i” is a different letter from a dotless one and there is both a dotted and non-dotted capital “I.” Most of the time this just looks unprofessional and wrong, although on rare occasions it can cause confusion and murder.
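The root of the Turkish problem is that the default Unicode uppercase mapping is locale-insensitive: a naive uppercase turns dotted lowercase “i” into dotless capital “I,” which is a different letter in Turkish. A minimal Python illustration — the `turkish_upper` helper is my own sketch, not anything from the game:

```python
def turkish_upper(s: str) -> str:
    # Handle the two Turkish special cases before the generic mapping:
    # dotted i uppercases to dotted İ, dotless ı uppercases to dotless I.
    return s.replace("i", "İ").replace("ı", "I").upper()


print("istanbul".upper())         # ISTANBUL -- fine for English, wrong for Turkish
print(turkish_upper("istanbul"))  # İSTANBUL -- correct Turkish capitalization
```

A real fix would use a locale-aware casing API rather than hand-written replacements, but the sketch shows why blanket forced capitalization can’t be correct for every language at once.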
Overall, the response from testers was that the text was obviously flawed but usually comprehensible in context.
The biggest problem after translation quality was testing. Not being able to read any language besides English fluently, I couldn’t easily do a full playthrough to identify which text was missing a translation, failing to switch fonts, using the wrong translation ID, or overflowing its UI element because the translated text took up more space than the English equivalent.
It occurred to me that it would be helpful if Google could translate the text into something like Pirate speak: a language not quite English, but legible to an English speaker. Unfortunately, it could not. I figured there must exist some solution to this problem, but during the hectic period of Early Access I never took the time to research it.
Once the game was in full release, I revisited the problem and discovered a new term: pseudolocalization.
Pseudolocalization is an automatic process that takes a piece of text and substitutes its characters with different but similar-looking ones, while doubling some letters to increase the text’s length. Exactly the technique I’d been looking for.
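A minimal sketch of the idea. The character map here is illustrative — it is not the mapping the game actually uses — but it shows both halves of the technique: lookalike substitution and length expansion:

```python
# Swap lowercase Latin letters for similar-looking Greek/Cyrillic/accented
# glyphs; uppercase letters pass through untouched in this simplified version.
CHAR_MAP = {
    "a": "α", "e": "є", "i": "í", "o": "ø", "u": "ú",
    "n": "п", "r": "я", "s": "ѕ", "t": "т", "y": "ý",
}
WIDE = set("aeiou")  # letters to double, simulating text expansion


def pseudolocalize(text: str) -> str:
    out = []
    for ch in text:
        repl = CHAR_MAP.get(ch, ch)
        if ch in WIDE:
            repl *= 2  # translated strings often run longer than English
        out.append(repl)
    # Brackets make it obvious when the UI truncates a string.
    return "[" + "".join(out) + "]"


print(pseudolocalize("date"))  # [dααтєє]
```

Any string that still shows up in plain English, or as tofu squares, was never routed through the localization system at all — which is exactly what this makes visible.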
I modified an existing solution I found to create a new “language” called Progent-A. (In the game, the Progent are a mysterious, long-vanished civilization the player periodically uncovers evidence of; Progent-A is a play on Linear A, a famous undeciphered script from ancient Crete.) It substitutes regular Latin characters with characters from extended Latin, Greek, and Cyrillic while expanding the most common two- and three-letter pairings in English. This creates text that at a glance looks like an alien tongue but on close inspection is relatively easy to read, particularly if you are already familiar with what it says.
Here it is in action:
It’s immediately obvious that the title “Planet Anomaly” and the words “Research Points” have not been translated. The text identifying the two other resources has been translated, but not assigned the right font. This leads to the text being displayed as little squares known as “tofu.” (Incidentally the fonts used for non-English text come from Google’s free “Noto” family, which is short for “no tofu”.)
The Experiment’s Results
After a year in Early Access and a few weeks of full release, players had submitted translation suggestions for 1800 language-symbol pairs (a symbol being a chunk of text that can be translated as a unit, whether it’s a single word or several paragraphs). That’s really great, but the game has over 3200 symbols and a dozen “experimental” languages, for nearly 40,000 possible language-symbol pairs. Even the language with the most coverage (Russian) only had submissions for 900 symbols, less than a third. Clearly not enough to advertise that the game supported Russian.
But players were playing the game in a variety of languages. From the anonymous analytics I’d implemented, I knew that there had been over 1000 games started with the language set as Russian and over 800 with simplified Chinese.
This was something of a mixed blessing: the game is not advertised as supporting any language besides English. It has sold fewer than 200 units in China (which does not necessarily equate to units sold to Chinese-speaking players) and received 7 Chinese-language reviews, 3 of which are negative. That’s a 57% positive rating, compared to the game’s overall ~92% rating across all languages. Put another way, China accounts for less than 1% of the game’s revenue but 10% of its negative reviews. Three is not a huge sample, so it is unclear how the game would have been received with no translation at all, but it is possible that by including a Google Translate baseline for simplified Chinese I had lowered the game’s overall score for a trivial increase in revenue. On the other hand, all 13 Russian-language reviews were positive. Was this difference the result of the quality of the automatic translation, a difference in game preferences, or just a statistical fluke? I didn’t know.
Evaluating Languages for Professional Translation
After graduating from Early Access, the game sold better than expected. Not Stardew Valley or FTL numbers, but suddenly it seemed much more likely that expanding into some languages could justify the cost, even though I’d missed the launch exposure.
The question was, which languages?
As a starting point, I made a spreadsheet listing the unit sales, revenue and outstanding wishlists from the top-selling 30 countries. From this I calculated the average regional unit revenue and the ratio of wishlists to units sold for each country.
Regional Unit Revenue = Regional Revenue / Regional Units Sold
Regional Wishlists per Unit (RWU) = Regional Wishlists / Regional Units Sold
For English-speaking countries, RWU averaged around 2.2. For most non-English-speaking countries it was higher. Dividing a country’s RWU by 2.2 gave an estimate of the factor by which that language underperforms English in wishlist conversion.
Subtracting 1 from this number gave a value I’ll call the Language Uncaptured Return (LUR). That is (at least in my theory) the percent of sales lost in each language due to lack of localization.
I also estimated the expected future global sales of the game assuming no localization (I used 4x first-week gross revenue as estimated lifetime net revenue) for each country. Sum that over all the countries that speak a language and multiply by the LUR to get the expected marginal return on translation. If it’s greater than the cost of translation, then it’s worthwhile, at least from a financial standpoint.
Example: There were 2,300 outstanding wishlists for the Russian Federation, compared to 500 units sold in the region. That’s an RWU of 4.6. Compared to the US baseline of 2.2, I get about half as many sales per wishlist in the Russian Federation.
If the main reason for the difference is localization, then localizing to Russian would be expected to roughly double future sales in that region. If lifetime future sales in the Russian Federation with no translation are expected to be $2,900, then translating to Russian would have a ballpark return of around $3,000.
I’m making quite a few unproven assumptions here, the most significant being the LUR calculation. It makes sense that lack of localization is part of the reason for lower wishlist conversion in other countries, but there may be other reasons. For example, wishlists from Germany convert at a higher rate than those from the US, which would give German a negative LUR, yet I would not expect adding German as a supported language to lose sales. In short, the theory is unproven and probably flawed, but in the absence of any other model it is my best starting point.
Six months later…
It’s been roughly 6 months since I added professional Russian localization (by translator Roman Matsuk). What did I learn?
Shortly after launch, the game had sold ~23,000 units for Net revenue of $227,000 across all regions. The Russian Federation share of that revenue was $2400 or 1.06%.
Since then, the game has generated another $79,000 in Net revenue, with the Russian Federation accounting for 3.2%.
Assuming the change is the result of adding localization, that localization effort has generated returns of $1700. This is in the general ballpark of the LUR model, since the game will presumably continue to sell additional units over time.
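A back-of-envelope check of that $1,700 figure — the revenue attributable to the Russian Federation’s share gain over its pre-localization baseline:

```python
post_launch_net = 79_000  # net revenue since the Russian localization shipped
ru_share_after = 0.032    # Russian Federation share since localization
ru_share_before = 0.0106  # share at launch, before localization

# Revenue above what the old share would have produced on the same total.
gain = post_launch_net * (ru_share_after - ru_share_before)
print(round(gain))  # ~1,700
```

This attributes the entire share increase to localization, which is the same assumption the sentence above makes.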
With a single datapoint it is impossible to say if my LUR model is right or just lucky, but in the absence of another model, it seems to be at least a good starting point.