The end of my study at UCL is fast approaching. I’m currently finishing up my dissertation project, an exploration of how fingerspelling in BSL can become “nativised” (more “sign-like”), and an analysis of what kinds of fingerspelling Deaf community members across the UK are actually using, based on narratives and conversations in the BSL Corpus. If it’s accepted, I’ll blog about my findings later in the year.
Fingerspelling is a representation of a writing system. The presence of written foreign language elements in a signed language is interesting on many levels, but problematic for some. Just like the dusty “Immortals” in the Académie française who presumably feel vindicated now that the Walkman brand of personal music players has died off, and German complaints about Überfremdung (“over-foreignness”) in the face of the spread of “Denglisch”, there are occasional pockets of resistance in the British Deaf community to signs which have a perceived “hearing” (English) influence.
These anxieties are fully understandable if looked at in terms of oppression or privilege. I don’t think it’s a total coincidence that English has one of the largest vocabularies in the world but is also the preferred language of some of the most invasive and destructive empire-building nations in history. Likewise, it is difficult to formulate a decent explanation of the British “Deaf identity” without acknowledging the role of hearing English-users as oppressors who insist that specific articulators and receptors are used in the production of language. Nonetheless, language mixing is inevitable for all but the most repressive of societies. It is arguably proof that a language is alive and thriving.
I sheepishly admit to getting a little side-tracked, but my grandma told me to always finish what I start. Part 1 was all about the surprising diversity of nonsense, and the differences between non-language, pseudo-language and lies; we encountered the idea that many if not most language utterances are riddled with invisible implications that even users of the same language have to unpack and infer, often without noticing (we call this pragmatic competence); and that in order to reformulate a concept from one language into another, we have to be able to understand it in languageless terms in the middle, to “see” it internally, to connect it deeply to knowledge. Part 2 was about ambiguity, failure, intentions, co-production and, for some reason, sophisticated mouse civilisations. Sorry about that.
I was once buttonholed by a PhD student after interpreting a computer science lecture, who told me he was working on a project to replace sign language interpreters with robots. I asked him how he knew it hadn’t already happened.
He didn’t get it.
In 2010, Google introduced an automatic captioning service for YouTube (speech recognition software, essentially) and pitched it as an amazing access improvement for d/Deaf people in general and students in particular, encouraging media fanfares to pluck at heartstrings. The deaf credentials of the engineers were highlighted, and there were “very positive reactions” and no negative ones from all two of the people consulted by a BBC journalist: “Now we really can completely understand what is going on and we feel like we are on an equal playing field,” said a student. (I envy them: I have never completely understood what is going on.)
Around that time, I was working in an education team in a Local Authority sensory service, and was asked (as resident geek) to assess whether we should recommend to teachers that they use automatic captioning when they inevitably use YouTube videos as a learning resource in classes which include d/Deaf students. My recommendation was that the captioning was actually worse than nothing in the context of education, and should be avoided at all costs. At that time, the system could not even cope with dialogue lifted straight from Hollywood film trailers, let alone a British accent with any kind of normal background noise or incidental music. I’ve only played with the system a few times since, but saw nothing to make me believe it has substantially improved.
And this is machine processing of just one language, albeit from its spoken to its written form. What happens when two very different languages are involved? We’ve been seeing rolling hyperbole from big corporates ever since, promising wearable technologies which will be able to translate between languages in real time (but whose voice recognition interfaces, not to mention the entire corporate ecosystem around the product, are not accessible to Deaf people without speech).
In the sign language field, there are spurts of hugely misleading promotions from tech giants such as Microsoft claiming to be using motion sensors to “understand the gestures of sign language and convert them to spoken and written language”, not to mention bizarre assertions that an interaction interpreted by a living, breathing linguist is “very artificial” but one mediated by a set of machine algorithms and an uncanny valley avatar is “natural”.
Special mention must go to products like this one, which – SQUIRREL! – must have been inspired by the Pixar film “Up”. “Futuristic headset translates your dog’s thoughts into human language!” claims the headline, which is great, except that it isn’t futuristic, translation, thoughts or language (EEG was first used on dogs in 1912). It’s interesting to note that in the German dub of “Up”, Doug and the other machine-translated dogs are distracted by cats, not squirrels – an example of the kind of cultural mediation and dynamic equivalence that machines are especially poor at.
The view I was offered in lectures from world-leading academics at the Centre for Translation Studies at Leeds was somewhat more tempered and practical: that machine translation is an exciting development, yes, but it is also nothing more than another tool for human translators to employ, a tool in the same sense that a dictionary is a tool, just a little bit more sophisticated. There is a school of thought which says that in order to mediate human meaning and intention, it is beneficial to be a human, or at the very least, something like a human. (This reminds me of the time a colleague was complaining that Deaf people make poor choices of interpreter and their decisions are based too heavily on interpreter personality. I can only conclude that they were angry and envious because they didn’t have one.)
We are back to issues of adequacy: asking for directions in a foreign city, ordering a pizza and many other tasks that lead to sales of products and services (totally coincidentally, I’m sure) are short, goal-oriented tasks that lend themselves well to a systemic, literal approach to language. These machine translation tasks are definitely within reach already – even if the translation is noticeably garbled or unnatural or marked, in those cases it might still be adequate.
But sitting in a physics class or discussing emotional difficulties with a mental health professional or engaging in political debate are not comparable to choosing egg and chips from a fixed menu: they are complex discourse situations embedded with the subtle and chaotic echoes of inferences that evolved over millennia. They set a much higher bar for translation adequacy. Since people frequently misunderstand each other even when they apparently speak the same language, how is a non-sentient entity supposed to cope? How is it supposed to know that it has misunderstood?
In the workshop that I’ve run a couple of times now on functional/purposeful translation theories, I set the group a task which I cheerfully stole and adapted from one of my lecturers in methods and approaches to translation: I give them three target texts translated from the same foreign language newspaper source. One is a literal translation created entirely by humans (literal in Nord’s “documentary translation” sense or Nida’s “formally equivalent” sense). The second is a “free” or “purposeful” translation by humans (Nord’s “instrumental translation”, Nida’s “dynamic equivalence”). The third is a machine translation by either Microsoft Bing or Google Translate. When I ask the group which translation is “best”, they initially tend to rate the literal human translation highest, then give slightly confused reasons for doing so, often stemming from a common misconception that “translation” is literal while “interpretation” is free. But when I then ask which one they like the most, they very often prefer the natural, intention-centred language of the purposeful translation. The only people who ever voluntarily choose the machine translation are usually first language speakers of the text’s source language: possibly they relate to echoes of their own native tongue preserved in the structure of the machine translation, and may be oblivious to the “marked language” errors it contains.
The lesson: if I were translating foreign newspapers for a living, I could possibly save a bit of time by using commercial translation software, but would still have to go through the output with a fine-tooth comb in order to maintain a shred of credibility. I, for one, welcome our new robot overlords and currently do not believe they are going to make my job obsolete in my lifetime.
The amount of information stored in the world is, some say, rising faster than exponentially, doubling in ever shorter periods of time (true exponential growth would double at a constant interval). A friend of mine believes that the world is going to end sometime in 2016 because of this trend: the interval at which the sum of stored information doubles will shrink to a year, then a month, then a day, then a second, then a millisecond, and then … what? Nature is sadly filled with J-curves. Fortunately for my doom-saying friend, an ever-increasing proportion of new information is just meta-information: indexes of indexes of indexes, catalogues of cultural sterility created by unthinking robots only for the thankless benefit of other robots. My server logs are mostly filled with the details of failed hacking attempts by mindless algorithms originating on the other side of the planet, like a million flies all banging against the closed half of a half-open window. If there is a Singularity, sadly, it might just be a black hole of stupidity.
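My friend’s doomsday arithmetic is actually easy to check: if each doubling takes half as long as the one before, the doubling intervals form a geometric series with a finite sum, so infinitely many doublings fit inside a finite horizon. A minimal sketch of that calculation (the starting interval of one year and the halving ratio are illustrative, not empirical):

```python
# If each doubling interval is a fixed fraction of the previous one,
# the intervals form a geometric series: t0 + t0*r + t0*r**2 + ...
# For r < 1 this sums to t0 / (1 - r), a finite "blow-up horizon",
# unlike ordinary exponential growth, whose doubling time is constant.

def blowup_horizon(first_interval: float, ratio: float = 0.5) -> float:
    """Total time spanned by infinitely many doublings when each
    interval is `ratio` times the previous one (requires 0 < ratio < 1)."""
    assert 0 < ratio < 1
    return first_interval / (1 - ratio)

# Example: first doubling takes 1 year, each subsequent one half as long:
# every doubling in eternity fits inside 2 years.
print(blowup_horizon(1.0))  # 2.0

# Partial sums creep up on the horizon but never pass it.
total, interval = 0.0, 1.0
for _ in range(20):
    total += interval
    interval /= 2
print(round(total, 6))  # 1.999998
```

In other words, the apocalypse only needs the doubling times to keep shrinking geometrically; whether stored information (as opposed to meta-information) actually behaves that way is, of course, the weak link in the prophecy.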
So, given my scepticism and my love/hate relationship with computers, I was unaccountably moved to read the following message, which shows several clear markers of translation by a non-human entity (only a machine would choose to replace the X in the phrase “get X with” with “get acquainted with” to make “get get acquainted with with”, or render the second person singular future of “to will” as “you will will”). I don’t believe “Oleshka” really exists, or at the very least she is not being entirely honest with me and her terseness and lonesomeness conceal a deadlier purpose, but only a machine could fail to be even slightly moved by the wistful courtliness of this entreaty, and I am a human who revels in being flawed. Perhaps just sometimes, the language of machines does speak more profoundly of beauty and of love.
Hi!!! I don’t know as to begin because it is my opening practice of knowledge in the Internet. But I write you this writing that will get get acquainted with with you. I didn’t get become acquainted on the Internet never and thence I don’t know what to write. But I will to tell that I was tired of lonesomeness and I choose to find the heart and to commence the serious relations. And I will be glad if you compose to me and you will will to get acquainted with me.
I don’t know why I write to you, but in you there is something like that attracts me. I won’t write about myself many, whether because I don’t know you will become interested in knowledge to me. But after all it is the Internet and if you don’t write to me, I will make out you. But I will hope that I will find your message and I will write to you more about myself in the next writing.
And now, I will write about myself tersely. My name is Oleshka. And I am from Russia. I am 30 years old. If you choose to get to know about me more, compose to me about yourself in better detail. But I will to tell you at once that I refer knowledge very seriously and I will from you same. Because I heard that on the Internet there are people who write for entertainment cheerfully to spend time and to whom money is necessary. And me it doesn’t interest.
But it seems to me that you are very serious and I will like to know you more. Certainly I may be mistaken but if we know each other more, time will show everyone. Do you agree? For me it is the opening step and it very hard, but I trow in the best and the succeeding step for you.
I will wait very much for your answer. Oleshka.
I hope that you find the heart and that time shows everyone, Oleshka, but I doubt it. Be safe.