AI development is still not about to replace the translator - though it offers a good tool
AI development is one of those things which might be less important than generally portrayed - or far, far more important. Some believe that we are on the cusp of human-level or superhuman AI changing everything in society. Such messages have heralded the introduction of this new "generalist" AI, for instance. Some may consider this a step towards a transhumanist singularity. Others fear we are about to create Skynet.
On the other hand, in political circles - even ones that are concerned about technology and the development of computers and see it as a political topic - discussions about AI, its effects on society and its political implications are almost nonexistent. At most, AI becomes another part of discussions about technology replacing the need for human workers. This is then portrayed either as a boon for the economy or as a cause for concern due to mass unemployment, or both.
I see all AI discussions in the light of my own profession, translation. Predictions of translation being made useless by AI have been around for a long time. I remember, years ago, being in a pizzeria with a friend (also a translation studies student back then, like me) after an evening out. We were accosted by a drunken man who, after asking us who we were and what we did, proceeded to tell us that he was an engineer and that he knew that machines would soon make translators useless. I remarked that the machine that would make translators useless would also make engineers useless.
The current translation AI is, indeed, quite good; it allows me to read Russian analyses of the Ukrainian war on Telegram and derive meaning from them, for instance. Is it replacing me? No. However, it still has a major effect. An increasing part of my work - a majority, at this point, though there are still companies that want me to translate from scratch - is proofreading AI-generated text, and that is getting easier and easier.
No matter how good a translation AI I work with, it still needs me to check its output, since even the good engines will occasionally do something very stupid - for instance, producing instructions that are completely erroneous. When you are dealing with something like medical devices, erroneous instructions might kill people.
There are smaller effects than that, though, which are also meaningful. One translation task I encounter at times is corporate lingo and advertisement-related text, which depends heavily on cultural context to actually work as an ad. There are major cultural differences between Finnish and English ad lingo, starting with how words like "please" are used much less in Finnish (though not completely absent - a human translator is needed to know when they are needed and when not).
I've often, for instance, thought about tasks where I must translate corporate survey lingo. One problem I have encountered at times is how to translate the concept of "race" in surveys and the like, since this word is freely used in English, but in Finnish contexts there is a definite aversion to talking about human races at all, except when directly connected to racism as a concept. Usually, I just end up translating it using concepts like ethnicity, which is not a perfect solution, but will do.
Machine translation is still translation of paragraphs and sentences. It can often know quite well what to put in, but it is much worse at knowing what should be left out. One of the practical effects of machine translation, and of increasing reliance on it, is the continued direct importation of American cultural models into European cultures. I imagine similar problems would be evident in many other contexts.
Of course, this is just one of the things which shows that even translation needs a generalist AI, but as some have noted, even this generalist AI has certain limits. Even the newest DeepMind AI is still tangibly a tool, though a tool with a great number of functions. It is not a question of being able to handle many functions; it is a question of knowing which specific functions might be needed in a process and how to apply them.
I used to be more confident that a program able to handle such issues as well would not arrive in the near future; I am now less confident. Still, one function a human translator might retain in the future is that of a "shit magnet". One might think of it this way: usually translation tasks are handled by translation companies, which are contacted by customers, and which then find a freelancer from their lists to take the task.
Imagine a company deciding that machine translation is now at the level where they can just cut out the translator, take the customer's texts, run them through machine translation and send them to the customer, charging them the normal rate. The customer is not going to notice, since they usually do not know the target language. That is why they need the translation.
This works until there is a mistake bad enough to lead to either extensive negative customer feedback or tangible results like dead bodies, at which point the customer is going to sue the company. If the company has a translator who was in charge of verifying the translation and its quality, they can assign the blame to the translator for not doing their job and fulfilling the contract they had with the company. If there is no translator, it is the company that must take the blame.
Of course, the same idea applies to a lot of other functions. If a decision is made to use AI to process data and make decisions at some bureau, there is still going to be someone in some high role making the decision to use the AI for that. If the AI does a bad job, the responsibility falls on the person in that high role. That is, unless they can assign someone in a lower role to supervise the AI, even if that supervision, 99 times out of 100, just means checking that the AI is doing everything as intended and spending the rest of the time playing Candy Crush.
The true AI risk is still in human-AI interaction: not only in the decisions about what data is fed to the AI, but also in those decision-making and supervisory roles. Before we get the AI that makes the engineers useless, we still must grapple with the fact that that AI itself will be influenced by the attitudes of the current engineers.
All of this is still mostly a present-day worry, though. In the visions of the AI risk proponents, the true risks are related to vastly more developed artificial intelligences and their capabilities, not the current pieces of software aiding in translation and other functions. Still, even the advent of superhuman artificial intelligence would certainly be preceded by considerable expansion in the use of less intelligent AI and its interaction with existing social structures. The current issues may just herald similar problems in the future - but at a whole other order of magnitude.
Image: "Spaghetti Letters & Numbers" by Leo Reynolds is licensed under CC BY-NC-SA 2.0.