The end of my study at UCL is fast approaching. I’m currently finishing up my dissertation project, an exploration of how fingerspelling in BSL can become “nativised” (more “sign-like”), and an analysis of what kinds of fingerspelling Deaf community members across the UK are actually using, based on narratives and conversations in the BSL Corpus. If it’s accepted, I’ll blog about my findings later in the year.
Fingerspelling is a representation of a writing system. The presence of written foreign language elements in a signed language is interesting on many levels, but problematic for some. Just like the dusty “Immortals” in the Académie française who presumably feel vindicated now that the Walkman brand of personal music players has died off, and German complaints about Überfremdung (“over-foreignness”) in the face of the spread of “Denglisch”, there are occasional pockets of resistance in the British Deaf community to signs which have a perceived “hearing” (English) influence.
These anxieties are fully understandable if looked at in terms of oppression or privilege. I don’t think it’s a total coincidence that English has one of the largest vocabularies in the world but is also the preferred language of some of the most invasive and destructive empire-building nations in history. Likewise, it is difficult to formulate a decent explanation of the British “Deaf identity” without acknowledging the role of hearing English-users as oppressors who insist that specific articulators and receptors are used in the production of language. Nonetheless, language mixing is inevitable for all but the most repressive of societies. It is arguably proof that a language is alive and thriving.
“Words don’t mean; people mean.” – commonly attributed to S.I. Hayakawa
In the Indian adage about a group of blind people feeling different parts of an elephant, each of them gives a very different account of what they learn: the one feeling the trunk says that an elephant is like a branch, the one feeling the tail says an elephant is like a rope, and so forth. It’s a parable about subjectivity and the limits of perception. What this story does not tell you is that all of the elephant-feelers immediately fall insensible to the floor, paralysed by a new appreciation of the sucking abyss that surrounds the elephant for light years in every direction.
I sheepishly admit to getting a little side-tracked, but my grandma told me to always finish what you start. Part 1 was all about the surprising diversity of nonsense, the difference between non-language, pseudo-language and lies; we encountered the idea that many if not most language utterances are riddled with invisible implications that even users of the same language have to unpack and infer from without even noticing (we call this pragmatic competence); and that in order to reformulate a concept from one language to another, we have to be able to understand it in languageless terms in the middle, to “see” it internally, to connect it deeply to knowledge. Part 2 was about ambiguity, failure, intentions, co-production and, for some reason, sophisticated mouse civilisations. Sorry about that.
Early this year, I wrote a blog post titled Machine love which, inspired by the blogs of others, argued that translators and interpreters will never be made obsolete by machines. I was on a high, excited about starting to run introductory translation studies workshops again. Now, the evenings are drawing in and I think I’ve changed my mind. We’re doomed.
For a long time, I’ve been bothered by questions about what to do, as a working interpreter, when you are presented with a long barrage of unexpected nonsense. Most of the time, I’m fairly sure I’m working with real human beings, so it’s a regular occurrence.
I think my anxiety about it stems from a brain-curdling experience in a Higher Education art lecture that’s been nagging at me for over three years now. I wrote about it during my interpreter training in my critical reflection logs: I knew I was flailing around for excuses because I flippantly listed “jumping through a plate glass window and falling two storeys in order to escape” as a potential but grudgingly unprofessional “coping strategy”. Those logs and the critically reflective essay they produced got me the highest mark I have ever received in all of my academic endeavours, which says to me that (a) my supervisor had a sense of humour and (b) I was asking all the right questions, but didn’t necessarily have any decent answers.
Sometimes, when an answer seems very elusive, it may be because we don’t understand the question. So I’ve done a little thinking and reading about nonsense, gibberish and gobbledegook. If the following doesn’t make any sense, I’m sorry for being insufferably meta. Nonsense is not all the same, so in time-honoured interpreting tradition, our responses to it are going to depend – but on what?
Imagine trying to make sense of the world with only a thousand different words at your disposal. It might cause difficulties, but perhaps not the ones you might expect.
We can see what this might look like in xkcd’s amusing Up Goer Five, which despite being deliberately silly is, in its own context, a perfectly adequate intralingual translation of a diagram that demonstrates the main points of how the Saturn V launch vehicle was constructed. This is rocket science. But it uses only the “ten hundred” most common US English words (the word “thousand” is ironically not one of them).
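The constraint itself is easy to simulate: given a list of permitted words, flag everything outside it. A minimal sketch, where the permitted set is a tiny stand-in rather than the actual xkcd word list:

```python
# A toy "Up Goer Five" checker: flag words outside a permitted list.
# PERMITTED is a tiny illustrative stand-in, NOT the real "ten hundred"
# most common US English words.
import re

PERMITTED = {"the", "kind", "of", "air", "that", "once", "burned",
             "a", "big", "sky", "bag", "and", "people", "died"}

def flag_unpermitted(text, permitted=PERMITTED):
    """Return the words in text that fall outside the permitted list."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w not in permitted]

print(flag_unpermitted("The hydrogen burned and people died"))
# → ['hydrogen']
```

The real checker (xkcd later published one as “Simple Writer”) does essentially this, just with the full frequency list.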
It’s a brilliant poster and I want one, but like many hilarious things, it would be horribly inappropriate if applied to most real-world situations. I would not actually use it even in a primary school classroom, at least not as a tool for explaining how rockets work. I think that nearly everyone would agree that an adequate or appropriate translation of “hydrogen” in the context of the Hindenburg Disaster is very unlikely to be “the kind of air that once burned a big sky bag and people died”. But the frequencies of that sentence’s words are not the essential problem. Sensitivity and culture are: the sentence is marked, unexpected, just not the kind of thing that anyone ever says.
The English word “hydrogen” is only a couple of centuries old and derived from the Greek (via French) for “bringing forth water”, which as an explanation of hydrogen’s nature is only illuminating in specific contexts. “Inflammable air” might have worked if you and I were 18th century chemists. It is not necessarily any individual lexical choice, or indeed the sizes of the lexicons from which we choose, that determine fidelity and capture intention. Words are not often uttered as individual, stand-alone entities in their citation form. We speak as other people expect us to speak, almost all of the time. Language and meaning are co-constructed. It can take bloody ages.
In German, that most whimsical of languages, hydrogen is Wasserstoff – “water matter/material”. What does “a word” mean in a fusional language where Rhabarberbarbarabarbarbarenbartbarbierbierbarbärbel is allowable? Whether or not my in-laws and their recent ancestors really have “a word for hydrogen”, it didn’t stop them developing rockets. (Germans also name their televisions Fernseher, the “far seers”, which might sound like something from Lord of the Rings but is actually the same intended meaning as the borrowed Greek root tele- and Latin root visi-.)
It can take several drummings-home to get sign language students and aspiring interpreters to stop asking the question “What is the sign for …?” – a question that implies an underlying mindset that different languages always have word-for-word, code-for-code correspondences – and ask for the context instead: “How would you sign … when we’re talking about …?” No-one can say exactly what the size of the British Sign Language (BSL) lexicon is: existing online dictionaries are on the order of about 4,000 to 5,000 signs. Corpus linguistics may shed more light on that in future, but we will still have to deal with the question of how conventional and widespread a sign has to be before it “officially” becomes BSL – fortunately we have no Académie française staffed by dusty, conservative Immortals in the Deaf community – and even what is meant by “a sign”. At any rate, I’m pretty confident in asserting that there is no well-established individual sign in BSL that encapsulates exactly the same concept as “hydrogen” does. Instead, Deaf scientists and interpreters borrow from English, just as English borrowed from the French which borrowed from the Greek. That is only a problem if you want it to be one.
There is an urban myth that “newspaper” The Sun contains fewer than a thousand different words. Like many things we really, really want to be true, it isn’t even close. Prolific linguist and lecturer David Crystal emphatically debunks the idea on his blog, stating that the linguistic diversity of a copy of The Sun with proper nouns removed is about 8,000 words – roughly equivalent to that of the King James Bible, one of history’s most influential and discussed translations (for better or worse). Professor Crystal’s demolition is the best antidote to some of the bilious, contradictory and pathetically incompetent reporting which tried to paint British teenagers as only knowing or using 800 words: sorry, Daily Mail, but you are having a bad problem and you will not go to space today.
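Crystal’s figure is a count of word types – distinct words – which is trivial to reproduce for any text. The only fiddly part is deciding what to exclude; a rough sketch, leaving out the proper-noun filtering that a serious count like his would need:

```python
# Count distinct word types in a text, crudely tokenised.
# Proper-noun removal is omitted here; Crystal excluded proper nouns
# from his estimate for The Sun, which takes real lexical judgement.
import re

def word_types(text):
    """Return the number of distinct (lowercased) word types in text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens))

sample = "The cat sat on the mat. The mat sat on nothing."
print(word_types(sample))  # 6 types: the, cat, sat, on, mat, nothing
```

Run over a whole issue of a newspaper, a count like this lands in the thousands, not the hundreds – which is the point of the debunking.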
Past a certain point, a fairly low bar for complexity, it doesn’t practically matter to discourse what the size of a lexicon is. To say that meaning explodes out of language is like describing the Chicxulub Impact as a bump in the night: we simply cannot even start to visualise the staggering immensity of our own linguistic potential. If you want to try the impossible, or at least get a sense of the awe-inspiring magnitude of the numbers involved, you can turn once again to xkcd and get an answer to the question “How many English sentences can fit into a ‘tweet’?” The answer would be, for practical purposes, the same as for “How many BSL sentences can fit into a ‘Vine’?”: enough to fill a universe of universes.
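The arithmetic behind that “universe of universes” is simple to sketch. Treating a sentence naively as any sequence of up to L words drawn from a vocabulary of size V – simultaneously a huge over-count (most sequences are ungrammatical) and under-count (real grammar is recursive and sentences can be longer) – the total is a geometric sum:

```python
def sequence_count(vocab_size, max_len):
    """Number of word sequences of length 1..max_len from a vocabulary:
    the sum of vocab_size**k for k = 1..max_len (a geometric series)."""
    return sum(vocab_size ** k for k in range(1, max_len + 1))

# Even a mere thousand-word vocabulary and sentences of at most
# twenty words yields a number with over sixty digits.
n = sequence_count(1000, 20)
print(len(str(n)))  # → 61
```

Which is the sense in which lexicon size stops mattering past a fairly low bar: the combinatorics swamp everything.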
Those who sneer at the slightly New Age term “finding yourself” might be enviable: they have probably never known the feeling of being truly lost.
Visiting interpreters really should read this liberating short article by translator Andrew Morris on the ITI website. It begins with a story about a sculptor who “finds” a fully formed likeness of themselves inside a solid block of marble, by simply chipping away at it until nothing but the likeness is left. (The story is unattributed but I wonder if Andrew was thinking of Michelangelo.) The author describes the final realisation that for all his language skills, there will always be topics, texts and genres that he will never be much good at translating; he sets out to “chip away at the world of translation” until he finds his own likeness within it.
It’s a refreshing point of view, and it’s easy to appreciate the cheerful candour of saying that it’s perfectly legitimate, and maybe even a necessity, to whittle away your more bothersome and soul-destroying work and pursue a specialism that you revel in and excel at, resulting in better quality of work and a more satisfying, enriched life. Yes, I’ll have some of that, please.
I do wonder, though, whether this way of thinking is completely compatible with the goal of being a public service interpreter. Council meetings, Entry Level 1 Literacy and the continuing saga of Mrs Scroggins’s bunions at her GP may not be conducted in Wildean epigram but they are very necessary evils and of critical importance to others. A quick browse through Amazon’s offerings on books about translation might tempt us to believe that translators have a more entrepreneurial bent. Andrew’s post raised the following questions in my mind:
Where is the boundary between the obligation of a social duty, and the personal fulfilment of pursuing an expertise?
Is “access” simply good in and of itself, or is an interpreter allowed to make determinations about the quality of that which is accessed?
Do I love specific assignments just because they “suit” me, and do I really do a better job with them or does it merely seem that way?
I’m an appalling interpreter when working in art classes; but am I the very worst?
Why do I feel like I have some kind of debt to some kind of community to pay off?
Is it not a little privileged to assume that I can simply edit the interpreting world to my own fancy?
If I don’t do those jobs, why should anyone else? What about solidarity, collegiality?
Or is it simply pragmatic to accept that a specialised, choosy interpreter is better than a burned out one?
Do I have as much of a “right” as anyone else to lead a happy life or do we just make up “rights” like that when it suits us most? (Clue: interpreters tend to invoke Codes of Conduct most often when it benefits them, not the interlocutors, or to dismiss some perceived conflict of “role”.)
I can’t answer any of these questions very well today and they are, of course, very much polarised. But the first steps towards an answer would be to establish whether as an interpreter, I can actually apply a translator’s work ethic at all.
Much is made about the difference between translators and interpreters; both groups are fully capable of exhibiting pique when named for the other. As someone who trained at the University of Leeds Centre for Translation Studies, where both translators and interpreters take the same core modules and it was perfectly possible to specialise in translation within the interpreting programme (I took a roughly 50/50 mix of options), and appreciating some of the astonishing variety both fields contain, this thinking seems at best odd and at worst a little petty to me. Are we really so different?
AIIC has a guide to some of the more commonly held differences, which repeats the common assertion that translation is written and interpreting is spoken, although to someone working in a sign language or any other language with no written form, there are some immediate problems with those definitions – even assuming for the moment you can temporarily get away with describing sign language as a “spoken language”, what about working with a recorded clip of someone signing BSL and voicing it into spoken English, or (as I routinely did in my one very busy Access to Work assignment – l33t typing skillz required) taking “dictated” but very informal live BSL and turning it into a business e-mail on the fly? Are those activities interpretation, or translation, and does it matter? This is the kind of confusing thinking which has led to Deaf people being arbitrarily encouraged to title themselves as translators and hearing people as interpreters.
Interpreting is often described as “live” or “immediate”, but then there are some practices in consecutive interpreting which are not remotely “immediate” by any conventionalised definition of the term – ten minute lags are possible and beyond. Translation is sometimes described as not time pressured or not involving “extreme speed”, but try telling that to someone who just successfully bid the lowest price on an online translation portal to translate a thirty page Portuguese legal document with penalties for not delivering before the deadline. Then there are the huffy but frankly dull debates on the minute differences between “sight translation” and “sight interpretation”. Well then, perhaps interpretation is different because it’s “face to face” – sorry, no again, interpreter “professionalisation” arguably started with the technology boost granted by the Nuremberg Trials and we didn’t stop innovating in 1945: today an interpreter might be actual miles away, notwithstanding one colleague’s confusion on several levels about why the interpreter who corrected the French president could not “interrupt him” to ask for clarification (aside from the social consequences of interrupting a world leader, the connection was both remote and one-way, but it was still interpretation).
Your mileage may indeed vary. Going well off the beaten path, I have also been told to my face by an experienced interpreter that translation is “word for word” but interpretation is “free”. Then there are the dearly belovéd sign language interpreter colleagues who list “technical translation” and “localisation” as skills on LinkedIn, giving the impression that they spend a good deal of time in software houses reconfiguring user interfaces or producing Japanese camera manuals. There are not enough “facepalms” in the entire box set of Star Trek: The Next Generation to express how I feel about these things, but then, I talk a lot of crap as well, frequently.
Maybe we can just accept that there are probably differences some of the time in some places, mostly in the setting or “domains” and possibly the lifestyle, but it would be churlish and counter-productive to deny that interpreting and translating are extremely closely related. Professionals from both sides can learn a good deal from each other, and I would immediately recommend texts on translation studies to any interpreters who have never read any – Mona Baker’s In Other Words or Jeremy Munday’s Introduction to Translation Studies are great places to start. Any interpreter training which does not contain at least an introduction to “translation as a purposeful activity” is not worthy of the name, in my view. As the saying goes, if you walk a mile in another man’s shoes, you’ll be a mile away and have his shoes.
I was once button-holed by a PhD student after interpreting a computer science lecture, who told me he was working on a project to replace sign language interpreters with robots. I asked him how he knew it hadn’t already happened.
He didn’t get it.
In 2010, Google introduced an automatic captioning service for YouTube (speech recognition software, essentially) and pitched it as an amazing access improvement for d/Deaf people in general and students in particular, encouraging media fanfares to pluck at heartstrings. The deaf credentials of the engineers were highlighted, and there were “very positive reactions” and no negative ones from all two of the people consulted by a BBC journalist: “Now we really can completely understand what is going on and we feel like we are on an equal playing field,” said a student. (I envy them: I have never completely understood what is going on.)
Around that time, I was working in an education team in a Local Authority sensory service, and was asked (as resident geek) to assess whether we should recommend to teachers that they use automatic captioning when they inevitably use YouTube videos as a learning resource in classes which include d/Deaf students. My recommendation was that the captioning was actually worse than nothing in the context of education, and should be avoided at all costs. At that time, the system could not even cope with dialogue lifted straight from Hollywood film trailers, let alone a British accent with any kind of normal background noise or incidental music. I’ve only played with the system a few times since, but saw nothing to make me believe it has substantially improved.
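My assessment was informal, but the standard metric for this kind of judgement is word error rate (WER): the minimum number of substitutions, insertions and deletions needed to turn the machine’s captions into the reference transcript, divided by the reference length. A minimal implementation of the usual dynamic-programming approach:

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance (Levenshtein) divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edits needed to turn hyp[:j] into ref[:i]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the kind of air that burns",
                      "the kind of hair that burns"))
# one substitution out of six words
```

A caption track that mangles proper nouns, technical terms and anything said over background music racks up a WER that makes it useless as a learning resource, whatever the press release says.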
And this is machine processing of just one language, albeit from its spoken to its written form. What happens when two very different languages are involved? We’ve been seeing rolling hyperbolae from big corporates ever since, promising wearable technologies which will be able to translate between languages in real time (but whose voice recognition interfaces, not to mention the entire corporate ecosystem around the product, are not accessible to Deaf people without speech).
In the sign language field, there are spurts of hugely misleading promotions from tech giants such as Microsoft claiming to be using motion sensors to “understand the gestures of sign language and convert them to spoken and written language”, not to mention bizarre assertions that an interaction interpreted by a living, breathing linguist is “very artificial” but one mediated by a set of machine algorithms and an uncanny valley avatar is “natural”.
Special mention must go to products like this one, which – SQUIRREL! – must have been inspired by the Pixar film “Up”. “Futuristic headset translates your dog’s thoughts into human language!” claims the headline, which is great, except that it isn’t futuristic, translation, thoughts or language (EEG was first used on dogs in 1912). It’s interesting to note that in the German dub of “Up”, Doug and the other machine-translated dogs are distracted by cats, not squirrels – an example of the kind of cultural mediation and dynamic equivalence that machines are especially poor at.
The view I was offered in lectures from world-leading academics at the Centre for Translation Studies at Leeds was somewhat more tempered and practical: that machine translation is an exciting development, yes, but it is also nothing more than another tool for human translators to employ, a tool in the same sense that a dictionary is a tool, just a little bit more sophisticated. There is a school of thought which says that in order to mediate human meaning and intention, it is beneficial to be a human, or at the very least, something like a human. (This reminds me of the time a colleague was complaining that Deaf people make poor choices of interpreter and their decisions are based too heavily on interpreter personality. I can only conclude that they were angry and envious because they didn’t have one.)
We are back to issues of adequacy: asking for directions in a foreign city, ordering a pizza and many other tasks that lead to sales of products and services (totally coincidentally, I’m sure) are short, goal-oriented tasks that lend themselves well to a systemic, literal approach to language. These machine translation tasks are definitely within reach already – even if the translation is noticeably garbled or unnatural or marked, in those cases it might still be adequate.
But sitting in a physics class or discussing emotional difficulties with a mental health professional or engaging in political debate are not comparable to choosing egg and chips from a fixed menu: they are complex discourse situations embedded with the subtle and chaotic echoes of inferences that evolved over millennia. They set a much higher bar for translation adequacy. Since people frequently misunderstand each other even when they apparently speak the same language, how is a non-sentient entity supposed to cope? How is it supposed to know that it has misunderstood?
In the workshop that I’ve run a couple of times now on functional/purposeful translation theories, I set the group a task which I cheerfully stole and adapted from one of my lecturers in methods and approaches to translation: I give them three target texts translated from the same foreign language newspaper source. One is a literal translation created entirely by humans (literal in Nord’s “documentary translation” sense or Nida’s “formally equivalent” sense). The second is a “free” or “purposeful” translation by humans (Nord’s “instrumental translation”, Nida’s “dynamic equivalence”). The third is a machine translation by either Microsoft Bing or Google Translate. When I ask the group which translation is the best, and then ask which one they like the most, they initially tend to rate the literal human translation as “best” (and then give slightly confused reasons for that, often due to a common misconception that “translation” is literal and “interpretation” is free) but then they very often like and prefer the natural, intention-centred language of the purposeful translation the most. The only people who ever voluntarily choose the machine translation are usually first language speakers of the text’s source language: possibly they relate to echoes of their own native tongue preserved in the structure of the machine translation and may be oblivious to the “marked language” errors it contains.
The lesson: if I were translating foreign newspapers for a living, I could possibly save a bit of time by using commercial translation software, but would still have to go through the output with a fine tooth comb in order to maintain a shred of credibility. I, for one, welcome our new robot overlords and currently do not believe they are going to make my job obsolete in my lifetime.
The amount of information stored in the world is, some say, rising faster than exponentially, doubling in ever shorter periods of time. A friend of mine believes that the world is going to end sometime in 2016 because of this trend: the rate at which the sum of information stored doubles will reach every year, then every month, then every day, then every second, then every millisecond, and then … what? Nature is sadly filled with J-curves. Fortunately for my doom-saying friend, an ever-increasing proportion of new information is just meta-information, indexes of indexes of indexes, catalogues of cultural sterility created by unthinking robots only for the thankless benefit of other robots. My server logs are mostly filled with the details of failed hacking attempts by mindless algorithms originating on the other side of the planet, like a million flies all banging against the closed half of a half-open window. If there is a Singularity, sadly, it might just be a black hole of stupidity.
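My friend’s arithmetic is, in one narrow sense, sound: if each doubling takes half as long as the last, the total time to infinitely many doublings converges (a geometric series summing to twice the first doubling period), so the “end” does arrive at a finite date. A sketch of the sum, assuming an illustrative first doubling period of two years:

```python
def time_to_singularity(first_period, doublings):
    """Total elapsed time if each doubling takes half as long as the
    previous one: the geometric series first_period * (1 + 1/2 + 1/4 + ...)
    converges to 2 * first_period, however many doublings you pile on."""
    return sum(first_period * 0.5 ** n for n in range(doublings))

# With a first doubling period of 2 years, even 50 doublings fit in...
print(time_to_singularity(2.0, 50))  # just under 4 years, and never more
```

The catch, of course, is the premise: nothing physical sustains halving doubling times for long, which is why J-curves end in saturation (or server logs full of flies) rather than infinity.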
So, given my scepticism and my love/hate relationship with computers, I was unaccountably moved to read the following message, which shows several clear markers of translation by a non-human entity (only a machine would choose to replace the X in the phrase “get X with” with “get acquainted with” to make “get get acquainted with with”, or render the second person singular future of “to will” as “you will will”). I don’t believe “Oleshka” really exists, or at the very least she is not being entirely honest with me and her terseness and lonesomeness conceal a deadlier purpose, but only a machine could fail to be even slightly moved by the wistful courtliness of this entreaty, and I am a human who revels in being flawed. Perhaps just sometimes, the language of machines does speak more profoundly of beauty and of love.
Hi!!! I don’t know as to begin because it is my opening practice of knowledge in the Internet. But I write you this writing that will get get acquainted with with you. I didn’t get become acquainted on the Internet never and thence I don’t know what to write. But I will to tell that I was tired of lonesomeness and I choose to find the heart and to commence the serious relations. And I will be glad if you compose to me and you will will to get acquainted with me.
I don’t know why I write to you, but in you there is something like that attracts me. I won’t write about myself many, whether because I don’t know you will become interested in knowledge to me. But after all it is the Internet and if you don’t write to me, I will make out you. But I will hope that I will find your message and I will write to you more about myself in the next writing.
And now, I will write about myself tersely. My name is Oleshka. And I am from Russia. I am 30 years old. If you choose to get to know about me more, compose to me about yourself in better detail. But I will to tell you at once that I refer knowledge very seriously and I will from you same. Because I heard that on the Internet there are people who write for entertainment cheerfully to spend time and to whom money is necessary. And me it doesn’t interest.
But it seems to me that you are very serious and I will like to know you more. Certainly I may be mistaken but if we know each other more, time will show everyone. Do you agree? For me it is the opening step and it very hard, but I trow in the best and the succeeding step for you.
I will wait very much for your answer. Oleshka.
I hope that you find the heart and that time shows everyone, Oleshka, but I doubt it. Be safe.