“Words don’t mean; people mean.” – commonly attributed to S.I. Hayakawa
In the Indian adage about a group of blind people feeling different parts of an elephant, each of them gives a very different account of what they learn: the one feeling the trunk says that an elephant is like a branch, the one feeling the tail says an elephant is like a rope, and so forth. It’s a parable about subjectivity and the limits of perception. What this story does not tell you is that all of the elephant-feelers immediately fall insensible to the floor, paralysed by a new appreciation of the sucking abyss that surrounds the elephant for light years in every direction.
How many interpreters does it take to change a lightbulb? It depends.
In part 1, we looked at a few distinct ways in which people fail to make sense, and what we (as interpreters) can do about it, if anything. Some themes cropped up more than once:
That we need to be able to “see with the mind’s eye” what things mean, either as an actual visualisation or as an abstraction (like the hovering multicoloured towers of Lego that followed me around and haunted my dreams when I was a programmer), and that if we can’t do that – if the input doesn’t create anything we can understand in languageless terms – we probably can’t produce anything very meaningful in the target language. One of the situations in which interpreters stop making sense is when they start acting as a conveyor belt, a translation machine that only works on a lexical level: a participant in a study I worked on expressed this as interpreters who “pass the parcel”.
Following from that, we also need to understand language as discourse: where is it going, what is the point? Preparing for an interpreting assignment by looking up words, doing background research and making glossaries is all well and good, but it might not be enough if we don’t understand why language is being uttered.
That sometimes people don’t make sense on purpose, and we might need to collaborate with that aim. Interpreters are typically “explainers”: but sometimes a speaker needs to baffle, challenge, test their audience.
To explore some of these ideas in practice, I offer up a failure as an example: a totally awful interpretation I was recently responsible for. I completely cocked it up and it was mostly my fault. But I learned something.
For a long time, I’ve been bothered by questions about what to do, as a working interpreter, when you are presented with a long barrage of unexpected nonsense. Most of the time, I’m fairly sure I’m working with real human beings, so it’s a regular occurrence.
I think my anxiety about it stems from a brain-curdling experience in a Higher Education art lecture that’s been nagging at me for over three years now. I wrote about it during my interpreter training in my critical reflection logs: I knew I was flailing around for excuses because I flippantly listed “jumping through a plate glass window and falling two storeys in order to escape” as a potential but grudgingly unprofessional “coping strategy”. Those logs and the critically reflective essay they produced got me the highest mark I have ever received in all of my academic endeavours, which says to me that (a) my supervisor had a sense of humour and (b) I was asking all the right questions, but didn’t necessarily have any decent answers.
Sometimes, when an answer seems very elusive, it may be because we don’t understand the question. So I’ve done a little thinking and reading about nonsense, gibberish and gobbledegook. If the following doesn’t make any sense, I’m sorry for being insufferably meta. Nonsense is not all the same, so in time-honoured interpreting tradition, our responses to it are going to depend – but on what?
I returned last night from the annual conference of the European Forum of Sign Language Interpreters in Antwerp, Belgium, which was simply amazing. It was fascinating to be surrounded by so many spoken and signed languages (with participants from 25 countries, some of which represent more than one language in each modality) and hugely satisfying to meet interpreters from hundreds or thousands of miles away and immediately recognise their experiences. It was a total pleasure to leave “local concerns” behind and talk about what interpreting is for once. I’m already looking at how to get to Warsaw for the 2015 conference.
The theme of the conference was the underlying cognitive processes (“mind tricks”) that make interpretation possible – totally my cup of tea. If I were going to be constructively critical, I’d say that the only flaw in the extremely well-organised proceedings was the total onslaught of information – my brain currently resembles an over-boiled primary school dinner cauliflower, and physically I feel like I’ve spent the weekend mountain climbing. The main day was a long series of (fascinating) presentations with no “break out” exercises or workshops – although the standard was extremely high and all of the topics useful, I’d have enjoyed the opportunity to pause from passively receiving information and engage in more structured dialogue with colleagues from abroad.
Because of this, there is absolutely no way I can describe here everything I took away, and most of it hasn’t settled down yet – I expect things will continue to poke me for months to come. So the following is an almost random selection of just a few of the bitesize “messages” I found easier to process as they fit with my reading and thinking so far – I should say that they do not necessarily represent the views of the presenters, and there was a metric shedload of other interesting stuff.
Can we take a break from “professionalisation”?
Peter Llewellyn-Jones, president of efsli (and my former supervisor) began and ended the conference by reminding us that while the early years of interpreting research had a notable focus on cognition, perhaps that fell by the wayside: we have collectively spent much of the last 15 years talking instead about what it means to be a “professional”. That dialogue is sometimes useful but arguably there is little point in endlessly elaborating on our status as interpreters if we do not understand how interpreting works.
A trivial bit of new knowledge on my part was finding out how interpreters are named in other languages – for example, I met Swedish, Dutch, Danish and Flemish interpreters who use some linguistic variant of the word tolker, which is best translated as “interpreter” but more literally means “talker” or “explainer” or “spokesperson”. Good luck to any statutory regulation aficionados with trademarking that title. On a related note, efsli’s long-term aim (perhaps by 2020) is to create a Europe-wide register of interpreters – many countries such as Croatia and, surprisingly, Sweden do not have a national register – and I’m excited by the idea of an international register of public service interpreters set up by interpreters.
Why do we default to simultaneous interpreting?
Debra Russell gave the keynote presentation, which was a goldmine in itself and had my brain whirling before we’d even reached the first morning’s halfway mark. Evidence was highlighted which demonstrated what spoken language interpreters take for granted but sign language interpreters often completely ignore: simultaneous interpreting is worse than consecutive in terms of faithfulness, quality and, often, believability (a vital and sometimes overlooked requirement for settings such as court interpreting, business and education). In one measure of quality on the work of experienced interpreters in court settings, error rates for simultaneous interpreting were more than ten percentage points higher than for consecutive; accounts from Deaf users were highlighted, which clearly expressed that they disliked interpreting errors more than they disliked consecutive interpreting.
I’ve said this myself in my own workshops for CSWs – we “grow up” as interpreters emulating what we have seen others doing, and we think that it must be right just because that is the status quo. But the main utility of authority is that it is there to be questioned. We have successfully persuaded large sections of the community that just because bimodal interpreting can be done simultaneously without racks of specialist technology, it should be done that way, even though research shows time and again that it creates more errors and less “natural” target language renditions. Perhaps consec should be the default stance whenever possible – yes, it does have some disadvantages (lawyers don’t like it, for example, perhaps because it takes up more of their precious time), but in which situations are those disadvantages outweighed?
Discourse and what we are doing
A recurring theme in several of the presentations was that interpreters seem to struggle with maintaining the discourse functions of language, what language is doing above the word and sentence levels. Debra Russell talked about interpreters working in education, who tend to get sucked into processing language on just a lexical level but are not mindful of the discourse, where the language is headed, what the point is, even when they have actually been prepared and included in those aims as a proper colleague of the teacher (which never happens enough). Terry Janzen discussed “intersubjectivity” (a term which is ironically very hard to translate and which called to mind Robert Lee’s writing about the identity and personal history of interpreters) and communication as “construal”: the problem here is a possible tendency in interpreters for “over-contextualisation”, which I regrettably recognise in myself as someone who started off working in the education field.
Yes, interpreters are “explainers”, but perhaps it is more common than we realise for a speaker to need to leave things unexplained, to be obscure on purpose, at least for the time being. Discourse is co-constructed: we all unconsciously probe each other’s understanding all the time, and maybe an interpreter who is driven to make everything crystal clear, to the lowest common denominator of meaning, is completely screwing up that everyday process. In education, it is common and important for a teacher to “test” learners by saying things s/he expects to not be understood. So how can you interpret something “faithfully” if you don’t know why it was said? When interlocutors complain that they didn’t understand the interpreter, do we have the courage to tell them, when it’s true, that they weren’t supposed to? These themes kept bothering me all the way through my training – all these wonderful models and diagrams about interpreter-mediated communication seem to assume that people make sense all the time. They don’t. They frequently talk a load of crap. We should recognise and indeed celebrate that.
What I found fascinating is that, frequent as these ideas about discourse, goals, construal etc. were across the presentations, there were many colleagues present (both from abroad and at home) who seemed to find it all quite new, and there were even suggestions that it needed more of a focus in interpreter training. I was a bit surprised and disappointed to find out that people aren’t really being taught this stuff. Pragmatics, discourse and illocution – functional/purposeful approaches – were introduced on day one of my own interpreter education, but (for example) I don’t see them prominently listed in Signature’s NVQ curriculum.
There was so much more – you’d need a short-ish book to properly cover everything that came up – but I’ll stop there for a lie down: the over-boiled cauliflower needs time to cool off. Thanks to all the organisers and participants for an amazing experience.
Imagine trying to make sense of the world with only a thousand different words at your disposal. It might cause difficulties, but perhaps not the ones you might expect.
We can see what it might look like in xkcd’s amusing Up Goer Five, which, despite being deliberately silly, is, in its own context, a perfectly adequate intralingual translation of a diagram that demonstrates the main points of how the Saturn V launch vehicle was constructed. This is rocket science. But it uses only the “ten hundred” most common US English words (the word “thousand” is ironically not one of them).
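The Up Goer Five constraint is mechanical enough to automate: given a permitted vocabulary, flag every word that falls outside it. A minimal sketch in Python – the `ALLOWED` set and the function name are my own stand-ins for illustration, not xkcd’s actual “ten hundred” word list:

```python
# Toy "Up Goer Five"-style checker: flag words outside a permitted vocabulary.
# The ALLOWED set is a tiny illustrative stand-in, not the real list.
import re

ALLOWED = {"the", "this", "is", "a", "thing", "that", "goes", "to",
           "space", "up", "and", "fire", "comes", "out", "of", "it"}

def flag_disallowed(text, allowed=ALLOWED):
    """Return the words in `text` that are not in the permitted vocabulary."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w not in allowed]

print(flag_disallowed("This is the thing that goes up to space"))  # []
print(flag_disallowed("Liquid hydrogen burns with oxygen"))        # words to rewrite
```

Anything the checker flags would have to be paraphrased – which is exactly where the real difficulty, and the comedy, begins.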
It’s a brilliant poster and I want one, but like many hilarious things, it would be horribly inappropriate if applied to most real-world situations. I would not actually use it even in a primary school classroom, at least not as a tool for explaining how rockets work. I think that nearly everyone would agree that an adequate or appropriate translation of “hydrogen” in the context of the Hindenburg Disaster is very unlikely to be “the kind of air that once burned a big sky bag and people died”. But the frequencies of that sentence’s words are not the essential problem. Sensitivity and culture are: the sentence is marked, unexpected, just not the kind of thing that anyone ever says.
The English word “hydrogen” is only a couple of centuries old and derived from the Greek (via French) for “bringing forth water”, which as an explanation of hydrogen’s nature is only illuminating in specific contexts. “Inflammable air” might have worked if you and I were 18th century chemists. It is not necessarily any individual lexical choice, nor the size of the lexicon from which we choose, that determines fidelity and captures intention. Words are not often uttered as individual, stand-alone entities in their citation form. We speak as other people expect us to speak, almost all of the time. Language and meaning are co-constructed. It can take bloody ages.
In German, that most whimsical of languages, hydrogen is Wasserstoff – “water matter/material”. What does “a word” mean in a fusional language where Rhabarberbarbarabarbarbarenbartbarbierbierbarbärbel is allowable? Whether or not my in-laws and their recent ancestors really have “a word for hydrogen”, it didn’t stop them developing rockets. (Germans also name their televisions Fernseher, the “far seers”, which might sound like something from Lord of the Rings but is actually the same intended meaning as the borrowed Greek root tele- and Latin root visi-.)
It can take several drummings-home to get sign language students and aspiring interpreters to stop asking the question “What is the sign for …?” – a question that implies an underlying mindset that different languages always have word-for-word, code-for-code correspondences – and ask for the context instead: “How would you sign … when we’re talking about …?” No-one can say exactly what the size of the British Sign Language (BSL) lexicon is: existing online dictionaries are on the order of 4,000 to 5,000 signs. Corpus linguistics may shed more light on that in future, but we will still have to deal with the question of how conventional and widespread a sign has to be before it “officially” becomes BSL – fortunately we have no Académie française staffed by dusty, conservative Immortals in the Deaf community – and even what is meant by “a sign”. At any rate, I’m pretty confident in asserting that there is no well-established individual sign in BSL that encapsulates exactly the same concept as “hydrogen” does. Instead, Deaf scientists and interpreters borrow from English, just as English borrowed from the French which borrowed from the Greek. That is only a problem if you want it to be one.
There is an urban myth that “newspaper” The Sun contains fewer than a thousand different words. Like many things we really, really want to be true, it isn’t even close. Prolific linguist and lecturer David Crystal emphatically debunks the idea on his blog, stating that the linguistic diversity of a copy of The Sun with proper nouns removed is about 8,000 words – roughly equivalent to that of the King James Bible, one of history’s most influential and discussed translations (for better or worse). Professor Crystal’s demolition is the best antidote to some of the bilious, contradictory and pathetically incompetent reporting which tried to paint British teenagers as only knowing or using 800 words: sorry, Daily Mail, but you are having a bad problem and you will not go to space today.
Past a certain point, a fairly low bar for complexity, it doesn’t practically matter to discourse what the size of a lexicon is. To say that meaning explodes out of language is like describing the Chicxulub Impact as a bump in the night: we simply cannot even start to visualise the staggering immensity of our own linguistic potential. If you want to try the impossible, or at least get a sense of the awe-inspiring magnitude of the numbers involved, you can turn once again to xkcd and get an answer to the question “How many English sentences can fit into a ‘tweet’?” The answer would be, for practical purposes, the same as for “How many BSL sentences can fit into a ‘Vine’?”: enough to fill a universe of universes.
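To put a hedged number on that immensity: even a deliberately crude model that ignores grammar entirely – treating a short utterance as ten slots, each filled independently from an Up-Goer-Five-sized vocabulary of 1,000 words – already yields 1,000¹⁰ = 10³⁰ combinations. The variable names below are mine and the model is a back-of-the-envelope sketch, not a serious estimate:

```python
# Crude combinatorics: ignore grammar and treat an utterance as
# independent word slots drawn from a small vocabulary.
vocabulary = 1_000   # an Up-Goer-Five-sized lexicon
slots = 10           # a short sentence's worth of word positions

combinations = vocabulary ** slots                          # 10**30
seconds_since_big_bang = int(13.8e9 * 365.25 * 24 * 3600)   # roughly 4.4e17

print(f"{combinations:.0e} crude 'sentences'")
# Even uttering one per second since the Big Bang barely scratches it:
print(combinations // seconds_since_big_bang)
```

Real grammar prunes most of those strings, of course, but it also lets sentences be arbitrarily long – so the true space is larger still, not smaller.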
Those who sneer at the slightly New Age term “finding yourself” might be enviable: they have probably never known the feeling of being truly lost.
Visiting interpreters really should read this liberating short article by translator Andrew Morris on the ITI website. It begins with a story about a sculptor who “finds” a fully formed likeness of themselves inside a solid block of marble, by simply chipping away at it until nothing but the likeness is left. (The story is unattributed but I wonder if Andrew was thinking of Michelangelo.) The author describes the final realisation that for all his language skills, there will always be topics, texts and genres that he will never be much good at translating; he sets out to “chip away at the world of translation” until he finds his own likeness within it.
It’s a refreshing point of view and easy to appreciate the cheerful candour of saying that it’s perfectly legitimate, and maybe even a necessity, to whittle away your more bothersome and soul-destroying work and pursue a specialism that you revel in and excel at, resulting in better quality of work and a more satisfied, enriching life. Yes, I’ll have some of that, please.
I do wonder, though, whether this way of thinking is completely compatible with the goal of being a public service interpreter. Council meetings, Entry Level 1 Literacy and the continuing saga of Mrs Scroggins’s bunions at her GP may not be conducted in Wildean epithet but they are very necessary evils and of critical importance to others. A quick browse through Amazon’s offerings on books about translation might tempt us to believe that translators have a more entrepreneurial bent. Andrew’s post raised the following questions in my mind:
Where is the boundary between the obligation of a social duty, and the personal fulfilment of pursuing an expertise?
Is “access” simply good in and of itself, or is an interpreter allowed to make determinations about the quality of that which is accessed?
Do I love specific assignments just because they “suit” me, and do I really do a better job with them or does it merely seem that way?
I’m an appalling interpreter when working in art classes; but am I the very worst?
Why do I feel like I have some kind of debt to some kind of community to pay off?
Is it not a little privileged to assume that I can simply edit the interpreting world to my own fancy?
If I don’t do those jobs, why should anyone else? What about solidarity, collegiality?
Or is it simply pragmatic to accept that a specialised, choosy interpreter is better than a burned out one?
Do I have as much of a “right” as anyone else to lead a happy life or do we just make up “rights” like that when it suits us most? (Clue: interpreters tend to invoke Codes of Conduct most often when it benefits them, not the interlocutors, or to dismiss some perceived conflict of “role”.)
I can’t answer any of these questions very well today and they are, of course, very much polarised. But the first steps towards an answer would be to establish whether as an interpreter, I can actually apply a translator’s work ethic at all.
Much is made about the difference between translators and interpreters; both groups are fully capable of exhibiting pique when named for the other. As someone who trained at the University of Leeds Centre for Translation Studies, where both translators and interpreters take the same core modules and it was perfectly possible to specialise in translation within the interpreting programme (I took a roughly 50/50 mix of options), and appreciating some of the astonishing variety both fields contain, this thinking seems at best odd and at worst a little petty to me. Are we really so different?
AIIC have a guide to some of the more commonly held differences, which repeats the common assertion that translation is written and interpreting is spoken, although to someone working in a sign language or any other language with no written form, there are some immediate problems with those definitions – even assuming for the moment you can temporarily get away with describing sign language as a “spoken language”, what about working with a recorded clip of someone signing BSL and voicing it into spoken English, or (as I routinely did in my one very busy Access to Work assignment – l33t typing skillz required) taking “dictated” but very informal live BSL and turning it into a business e-mail on the fly? Are those activities interpretation, or translation, and does it matter? This is the kind of confusing thinking which has led to Deaf people being arbitrarily encouraged to title themselves as translators and hearing people as interpreters.
Interpreting is often described as “live” or “immediate”, but then there are some practices in consecutive interpreting which are not remotely “immediate” by any conventionalised definition of the term – ten-minute lags are possible and beyond. Translation is sometimes described as not time pressured or not involving “extreme speed”, but try telling that to someone who just successfully bid the lowest price on an online translation portal to translate a thirty-page Portuguese legal document with penalties for not delivering before the deadline. Then there are the huffy but frankly dull debates on the minute differences between “sight translation” and “sight interpretation”. Well then, perhaps interpretation is different because it’s “face to face” – sorry, no again, interpreter “professionalisation” arguably started with the technology boost granted by the Nuremberg Trials and we didn’t stop innovating in 1945: today an interpreter might be actual miles away, notwithstanding one colleague’s confusion on several levels about why the interpreter who corrected the French president could not “interrupt him” to ask for clarification (aside from the social consequences of interrupting a world leader, the connection was both remote and one-way, but it was still interpretation).
Your mileage may indeed vary. Going well off the beaten path, I have also been told to my face by an experienced interpreter that translation is “word for word” but interpretation is “free”. Then there are the dearly belovéd sign language interpreter colleagues who list “technical translation” and “localisation” as skills on LinkedIn, giving the impression that they spend a good deal of time in software houses reconfiguring user interfaces or producing Japanese camera manuals. There are not enough “facepalms” in the entire box set of Star Trek: The Next Generation to express how I feel about these things, but then, I talk a lot of crap as well, frequently.
Maybe we can just accept that there are probably differences some of the time in some places, mostly in the setting or “domains” and possibly the lifestyle, but it would be churlish and counter-productive to deny that interpreting and translating are extremely closely related. Professionals from both sides can learn a good deal from each other, and I would immediately recommend texts on translation studies to any interpreters who have never read any – Mona Baker’s In Other Words or Jeremy Munday’s Introduction to Translation Studies are great places to start. Any interpreter training which does not contain at least an introduction to “translation as a purposeful activity” is not worthy of the name, in my view. As the saying goes, if you walk a mile in another man’s shoes, you’ll be a mile away and have his shoes.
I was once buttonholed by a PhD student after interpreting a computer science lecture, who told me he was working on a project to replace sign language interpreters with robots. I asked him how he knew it hadn’t already happened.
He didn’t get it.
In 2010, Google introduced an automatic captioning service for YouTube (speech recognition software, essentially) and pitched it as an amazing access improvement for d/Deaf people in general and students in particular, encouraging media fanfares to pluck at heartstrings. The deaf credentials of the engineers were highlighted, and there were “very positive reactions” and no negative ones from all two of the people consulted by a BBC journalist: “Now we really can completely understand what is going on and we feel like we are on an equal playing field,” said a student. (I envy them: I have never completely understood what is going on.)
Around that time, I was working in an education team in a Local Authority sensory service, and was asked (as resident geek) to assess whether we should recommend to teachers that they use automatic captioning when they inevitably use YouTube videos as a learning resource in classes which include d/Deaf students. My recommendation was that the captioning was actually worse than nothing in the context of education, and should be avoided at all costs. At that time, the system could not even cope with dialogue lifted straight from Hollywood film trailers, let alone a British accent with any kind of normal background noise or incidental music. I’ve only played with the system a few times since, but saw nothing to make me believe it has substantially improved.
And this is machine processing of just one language, albeit from its spoken to its written form. What happens when two very different languages are involved? We’ve been seeing rolling hyperbole from big corporates ever since, promising wearable technologies which will be able to translate between languages in real time (but whose voice recognition interfaces, not to mention the entire corporate ecosystem around the product, are not accessible to Deaf people without speech).
In the sign language field, there are spurts of hugely misleading promotions from tech giants such as Microsoft claiming to be using motion sensors to “understand the gestures of sign language and convert them to spoken and written language”, not to mention bizarre assertions that an interaction interpreted by a living, breathing linguist is “very artificial” but one mediated by a set of machine algorithms and an uncanny valley avatar is “natural”.
Special mention must go to products like this one, which – SQUIRREL! – must have been inspired by the Pixar film “Up”. “Futuristic headset translates your dog’s thoughts into human language!” claims the headline, which is great, except that it isn’t futuristic, translation, thoughts or language (EEG was first used on dogs in 1912). It’s interesting to note that in the German dub of “Up”, Doug and the other machine-translated dogs are distracted by cats, not squirrels – an example of the kind of cultural mediation and dynamic equivalence that machines are especially poor at.
The view I was offered in lectures from world-leading academics at the Centre for Translation Studies at Leeds was somewhat more tempered and practical: that machine translation is an exciting development, yes, but it is also nothing more than another tool for human translators to employ, a tool in the same sense that a dictionary is a tool, just a little bit more sophisticated. There is a school of thought which says that in order to mediate human meaning and intention, it is beneficial to be a human, or at the very least, something like a human. (This reminds me of the time a colleague was complaining that Deaf people make poor choices of interpreter and their decisions are based too heavily on interpreter personality. I can only conclude that they were angry and envious because they didn’t have one.)
We are back to issues of adequacy: asking for directions in a foreign city, ordering a pizza and many other tasks that lead to sales of products and services (totally coincidentally, I’m sure) are short, goal-oriented tasks that lend themselves well to a systemic, literal approach to language. These machine translation tasks are definitely within reach already – even if the translation is noticeably garbled or unnatural or marked, in those cases it might still be adequate.
But sitting in a physics class or discussing emotional difficulties with a mental health professional or engaging in political debate are not comparable to choosing egg and chips from a fixed menu: they are complex discourse situations embedded with the subtle and chaotic echoes of inferences that evolved over millennia. They set a much higher bar for translation adequacy. Since people frequently misunderstand each other even when they apparently speak the same language, how is a non-sentient entity supposed to cope? How is it supposed to know that it has misunderstood?
In the workshop that I’ve run a couple of times now on functional/purposeful translation theories, I set the group a task which I cheerfully stole and adapted from one of my lecturers in methods and approaches to translation: I give them three target texts translated from the same foreign language newspaper source. One is a literal translation created entirely by humans (literal in Nord’s “documentary translation” sense or Nida’s “formally equivalent” sense). The second is a “free” or “purposeful” translation by humans (Nord’s “instrumental translation”, Nida’s “dynamic equivalence”). The third is a machine translation by either Microsoft Bing or Google Translate. When I ask the group which translation is the best, and then ask which one they like the most, they initially tend to rate the literal human translation as “best” (and then give slightly confused reasons for that, often due to a common misconception that “translation” is literal and “interpretation” is free) but then they very often like and prefer the natural, intention-centred language of the purposeful translation the most. The only people who ever voluntarily choose the machine translation tend to be first language speakers of the text’s source language: possibly they relate to echoes of their own native tongue preserved in the structure of the machine translation and may be oblivious to the “marked language” errors it contains.
The lesson: if I were translating foreign newspapers for a living, I could possibly save a bit of time by using commercial translation software, but would still have to go through the output with a fine tooth comb in order to maintain a shred of credibility. I, for one, welcome our new robot overlords and currently do not believe they are going to make my job obsolete in my lifetime.
The amount of information stored in the world is, some say, rising exponentially, doubling in ever shorter periods of time. A friend of mine believes that the world is going to end sometime in 2016 because of this trend: the rate at which the sum of information stored doubles will reach every year, then every month, then every day, then every second, then every millisecond, and then … what? Nature is sadly filled with J-curves. Fortunately for my doom-saying friend, an ever-increasing proportion of new information is just meta-information, indexes of indexes of indexes, catalogues of cultural sterility created by unthinking robots only for the thankless benefit of other robots. My server logs are mostly filled with the details of failed hacking attempts by mindless algorithms originating on the other side of the planet, like a million flies all banging against the closed half of a half-open window. If there is a Singularity, sadly, it might just be a black hole of stupidity.
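My friend’s argument, made explicit, is a finite-time singularity: if each doubling of stored information takes half as long as the last, the total time needed for infinitely many doublings is a convergent geometric series – it sums to just twice the first interval. A quick sketch of that arithmetic (the one-year starting interval is an arbitrary assumption of mine, not my friend’s actual figures):

```python
# The doom-saying friend's J-curve, made explicit: if each doubling
# interval is half the previous one, the total elapsed time converges.
first_interval_years = 1.0

total = 0.0
interval = first_interval_years
for _ in range(60):      # 60 doublings is plenty for convergence
    total += interval
    interval /= 2

# The series 1 + 1/2 + 1/4 + ... sums to 2, so "the end" arrives
# a finite 2 * first_interval_years after the first doubling starts.
print(total)
```

Which is why J-curves are alarming: the apocalypse isn’t merely approaching, it has a scheduled arrival time. (Whether the premise holds is, of course, another matter.)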
So, given my scepticism and my love/hate relationship with computers, I was unaccountably moved to read the following message, which shows several clear markers of translation by a non-human entity (only a machine would choose to replace the X in the phrase “get X with” with “get acquainted with” to make “get get acquainted with with”, or render the second person singular future of “to will” as “you will will”). I don’t believe “Oleshka” really exists, or at the very least she is not being entirely honest with me and her terseness and lonesomeness conceal a deadlier purpose, but only a machine could fail to be even slightly moved by the wistful courtliness of this entreaty, and I am a human who revels in being flawed. Perhaps just sometimes, the language of machines does speak more profoundly of beauty and of love.
Hi!!! I don’t know as to begin because it is my opening practice of knowledge in the Internet. But I write you this writing that will get get acquainted with with you. I didn’t get become acquainted on the Internet never and thence I don’t know what to write. But I will to tell that I was tired of lonesomeness and I choose to find the heart and to commence the serious relations. And I will be glad if you compose to me and you will will to get acquainted with me.
I don’t know why I write to you, but in you there is something like that attracts me. I won’t write about myself many, whether because I don’t know you will become interested in knowledge to me. But after all it is the Internet and if you don’t write to me, I will make out you. But I will hope that I will find your message and I will write to you more about myself in the next writing.
And now, I will write about myself tersely. My name is Oleshka. And I am from Russia. I am 30 years old. If you choose to get to know about me more, compose to me about yourself in better detail. But I will to tell you at once that I refer knowledge very seriously and I will from you same. Because I heard that on the Internet there are people who write for entertainment cheerfully to spend time and to whom money is necessary. And me it doesn’t interest.
But it seems to me that you are very serious and I will like to know you more. Certainly I may be mistaken but if we know each other more, time will show everyone. Do you agree? For me it is the opening step and it very hard, but I trow in the best and the succeeding step for you.
I will wait very much for your answer. Oleshka.
I hope that you find the heart and that time shows everyone, Oleshka, but I doubt it. Be safe.
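The tell-tale doublings in Oleshka’s letter (“get get acquainted with with”, “you will will”) are exactly the kind of artifact a few lines of code can catch – repeated adjacent words are one of the simplest markers of careless template substitution. A toy detector, nothing more, with names of my own invention:

```python
# Toy machine-translation-artifact detector: flag immediately repeated
# words, one of the simplest tells of careless template substitution.
import re

def doubled_words(text):
    """Return each word that appears twice in a row in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    return [a for a, b in zip(words, words[1:]) if a == b]

letter = "this writing that will get get acquainted with with you"
print(doubled_words(letter))   # ['get', 'with']
```

Spotting that a letter is machine-made is the easy part; being moved by it anyway remains, for now, a human specialism.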