How will GPT-4 affect white-collar jobs? Translation shows us the way
The translation field has been affected by machine translation - but sometimes in unexpected ways
In the ongoing discussion on AI, especially on its negative, scaremongering side, much of the fear just seems to be quite… personal. Genuine tremors caused by apocalyptic Skynet scenarios and scary AI shoggoths lurking beneath the surface seem to reflect the personal fears of many of those in the writing professions, whether they write articles or code.
Automation now seems to be coming for white-collar professions in a big way, just as it has come for many blue-collar professions before. The “Lost your factory job? Learn to code!” brigade is fearful that it might, in fact, now have to learn to plumb. Or do something else previously considered by many to be low-status.
Of course, GPT-4 has hypercharged all this. Some of the demonstrations eagerly posted online give the impression that the newest GPT version can do just about anything text-related with just a few prompts and clicks. Part of that hype train might simply reflect carefully seeded, preset demonstrations. The coming days and weeks will show how much faster and smarter the new AI tools actually make the coding process.
There’s nothing wrong with taking a personal approach in analyzing this topic! When straying too far from one’s personal sphere, there’s every danger of exaggerating the potential effects of the considerable technological changes we are witnessing now – but also of underplaying them. Looking at one’s own particular field and how it is affected is one way to keep yourself grounded in something concrete.
As I've indicated before, machine learning applications have already been commonplace in my own field, translation, for over a decade. I have witnessed the development of machine translation from the substandard early Google Translate efforts that many people still associate with it to the sophisticated output of modern translation engines.
Of course, translation is one of the tasks that the new neural-network-based general models like GPT-4 are supposed to be able to perform at near human level as well. I did put GPT-4 to the test, asking it to translate a blog entry into Finnish, and it produced a workmanlike translation that essentially resembled GPT-3’s similar efforts but still fell short of even replacing a dedicated machine translation engine like DeepL. (Obviously, if you don’t read Finnish, you can’t judge the specific errors GPT-4 made, but let’s just note that “SANNA MARIN IN UKRAINE” should be “SANNA MARIN UKRAINASSA”, not “SANNA MARIN UKRAINOSSA”.)
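For the technically curious, here is a minimal sketch of how such a test could be run against the API rather than the chat interface. This is purely illustrative: the source text and prompt wording are placeholders, and it assumes access to the OpenAI Python library (pre-1.0 interface) and a GPT-4 API key.

```python
import openai  # OpenAI Python library, pre-1.0 interface

openai.api_key = "sk-..."  # placeholder; use your own API key

# Placeholder; in practice, the full blog entry text would go here.
blog_entry = "SANNA MARIN IN UKRAINE\n\n..."

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,  # keep the output as deterministic as possible
    messages=[
        {"role": "system",
         "content": "You are a professional translator working from English into Finnish."},
        {"role": "user",
         "content": f"Translate the following blog entry into Finnish:\n\n{blog_entry}"},
    ],
)

print(response["choices"][0]["message"]["content"])
```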
My understanding is that the LLM-based generalist models work better on some other languages, but Finnish, at least, still seems to pose a challenge. Perhaps I will return to this topic in an upcoming entry to specifically show the defects.
MTPE, AND ALL THAT IT ENTAILS
For a long time, a large part of my work has been checking and editing machine translation produced by such engines, a job common enough in the field to be generally referred to by an acronym (MTPE, Machine Translation Post-Editing).
If anything, the amount of money I've been making has increased in recent years, though this is probably also natural career development (and a necessity, given rising inflation). Obviously, it’s at least partially a margin issue. While a lot of text is translated from one language to another, vastly greater amounts of the text being produced right now are not, commercial material included. Translator workloads getting lighter and translation getting cheaper mean more and more texts getting translated.
There's still a lot of work to be done, not only to fix the various errors that even advanced models make but also to make the texts generally just work better. I’ve sometimes thought of a certain "smell of machine translation" that often surrounds even sophisticated efforts. Sometimes, of course, this is about machine translation models producing overly literal translations, but part of it is just the sameness of it all.
One fear of AI advancement, after all, is that the replacement of the creator contributes to all the movies, books, artwork etc. just becoming one gray mass, based on a few popular styles and an approximation of all that has been made before. "Just let the algorithm do it, maybe touch it up a bit" - and little by little, everything you see or hear starts blending together. Which is already happening, of course, just even more so than now.
COUNTERTRENDS
Still, countertrends exist as well. During the past few years, I have received fewer MTPE tasks than in some previous years, even though machine translation tools have improved considerably. This is partly just random chance (i.e., I've worked on projects that just aren't that suitable for modern MT applications).
However, I've also seen an increasing number of companies just point-blank stating that translators cannot use MT of any sort to do their jobs, and that any indication of it is an automatic fail. They also openly state the reason: there's the danger that the translator just uses a web tool by Google, or a similar large company, and then the translated texts (which are often confidential corporate communications and the like) become owned and usable by Google etc.
It's not universal; many translation companies have utilized machine translation as part of their work for a long time. It's just something I've increasingly seen within the last half a year or so. I guess some companies just don't want to take any chances.
My understanding is that the biggest game changer in the field – from the point of view of a working translator – happened a decade or two earlier, when electronic communication enabled the shift from regular in-house jobs to entrepreneur-based freelancing. This was actually going on while I was still at university. The teachers still mentioned in-house jobs as something to strive for, but often acknowledged that they probably wouldn't be forthcoming and that this would (negatively) affect pay for translators.
Furthermore, even before machine translation as such became common, there were the so-called translation memory programs, fairly simple tools that mostly reuse existing translations for new texts and have done their share in making translation faster (a rough sketch of the idea follows below). Even in white-collar fields, the automation of grunt work is hardly a new concept.
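To give a concrete, if toy-sized, idea of what these tools do, here is a minimal sketch of fuzzy matching against a translation memory. The segment pairs and the match threshold are invented for illustration; real TM software (and formats like TMX) is considerably more elaborate.

```python
import difflib

# A toy translation memory: previously translated source/target segment pairs.
TM = {
    "The product is available in three sizes.": "Tuote on saatavana kolmessa koossa.",
    "Press the power button to start the device.": "Käynnistä laite painamalla virtapainiketta.",
}

def tm_lookup(segment: str, threshold: float = 0.75):
    """Return (score, (source, target)) for the best fuzzy match, or None below the threshold."""
    best_score, best_pair = 0.0, None
    for source, target in TM.items():
        score = difflib.SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score > best_score:
            best_score, best_pair = score, (source, target)
    if best_pair and best_score >= threshold:
        return best_score, best_pair
    return None

# A near-identical new segment gets a high-scoring "fuzzy match" the translator can reuse and edit.
print(tm_lookup("The product is available in four sizes."))
```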
One potential area for further advancement will open up when we get models that can, with some reliability, process image, audio and text at the same time, since this might have a considerable effect on subtitling (or dubbing, though that is pretty rare in Finland outside of children's programs). This is really where models like GPT-4, now able to analyze images, might show their effect. Even more advanced models surely will.
Article image created by entering “An article image for a substack blog entry called "How will GPT-4 affect white collar jobs? Perhaps translation can show us the way"” as a prompt in Midjourney, v. 5.