Outline for a Relevance Theoretical Model of Machine Translation Post-editing
Translation process research (TPR) has advanced in recent years to a state which allows us to study “in great detail what source and target text units are being processed, at a given point in time, to investigate what steps are involved in this process, what segments are read and aligned and how this whole process is monitored” (Alves 2015, p. 32). We now have sophisticated statistical methods and powerful tools that produce a better and more detailed understanding of the underlying cognitive processes involved in translation. Following Jakobsen (2011), who suspects that we may soon be in a situation which allows us to develop a computational model of human translation, Alve…
Why Translation Is Difficult
The paper develops a definition of translation literality that is based on the syntactic and semantic similarity of source and target texts. We provide theoretical and empirical evidence that absolute literal translations are easy to produce. Based on a multilingual corpus of alternative translations, we investigate the effects of cross-lingual syntactic and semantic distance on translation production times and find that non-literality makes from-scratch translation and post-editing difficult. We show that statistical machine translation systems encounter even more difficulties with non-literality.
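To make the notion of cross-lingual syntactic distance concrete, the sketch below counts crossing links in a source-target word alignment: a fully monotone, word-for-word (literal) rendering yields zero crossings, while reordering increases the score. The alignment format and function name are illustrative assumptions, not the metric defined in the chapter.

```python
from itertools import combinations

def crossing_score(alignment):
    """Count crossing alignment links as a rough proxy for syntactic distance.

    `alignment` is a list of (source_index, target_index) pairs.
    A monotone (word-for-word) translation has no crossings; the more
    reordering between source and target, the higher the score.
    """
    crossings = 0
    for (s1, t1), (s2, t2) in combinations(alignment, 2):
        # Two links cross if their source and target orders disagree.
        if (s1 - s2) * (t1 - t2) < 0:
            crossings += 1
    return crossings

# Monotone, literal rendering: no crossings.
print(crossing_score([(0, 0), (1, 1), (2, 2)]))   # 0
# Reordered, less literal rendering: one crossing.
print(crossing_score([(0, 1), (1, 0), (2, 2)]))   # 1
```

Semantic distance would require a separate measure (for instance, similarity of lexical choices across alternative translations); the crossing count only captures the word-order component.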
Syntactic Variance and Priming Effects in Translation
The present work investigates the relationship between syntactic variation and priming in translation. It is based on the claim that languages share a common cognitive network of neural activity. When source and target languages are both engaged in a translation context, this shared network can lead to facilitation effects, so-called priming effects. We suggest that priming is a default setting in translation, a special case of language use where source and target languages are constantly co-activated. Such priming effects are not restricted to lexical elements, but also occur on the syntactic level. We tested these hypotheses with translation data from the TPR database, more specifical…
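One simple way to quantify syntactic variance across translators is the entropy of the structural choices made for the same source unit: low entropy suggests convergence on a single, strongly primed rendering, while high entropy indicates greater variance. The sketch below is a minimal illustration of that idea; the labels and data are hypothetical and do not reproduce the measures used in the study.

```python
import math
from collections import Counter

def choice_entropy(choices):
    """Shannon entropy (in bits) over the renderings that different
    translators produced for the same source unit.

    Low entropy: most translators converged on the same (primed) choice.
    High entropy: greater syntactic variance across translators.
    """
    counts = Counter(choices)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical syntactic choices of six translators for one source sentence.
print(choice_entropy(["SVO", "SVO", "SVO", "SVO", "SVO", "OVS"]))        # low variance
print(choice_entropy(["SVO", "OVS", "SVO", "OVS", "passive", "cleft"]))  # higher variance
```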
The Development of the TPR-DB as Grounded Theory Method
Initial versions of the translation process research database (TPR-DB) were released around 2011 in an attempt to integrate translation process data from several previously separate and scattered translation research projects. While the earlier individual studies had a clear focus on the quantitative assessment of well-defined research questions on cognitive processes in human translation production, the integration of the data into the TPR-DB allowed for broader qualitative and exploratory research, which has led to new codes, categories and research themes. In a constant effort to develop and refine the emerging concepts and categories and to validate the developing the…
Chapter 3. Measuring Translation Literality
Experiments in Non-Coherent Post-editing
Market pressure on translation productivity, combined with technological innovation, is likely to fragment and decontextualise translation jobs even more than is currently the case. Increasingly, many different translators work on one document in different places, collaborating in the cloud. This paper investigates the effect of decontextualised source texts on post-editing behaviour by comparing the post-editing of sequentially ordered sentences with shuffled sentences from two different texts. The findings suggest that decontextualised source texts have little or no effect on post-editing behaviour.
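The experimental contrast can be illustrated with a small sketch: sentences from two source texts are interleaved and shuffled to create the decontextualised condition, and per-sentence post-editing times are then compared across conditions. The data, the sentence representation and the choice of a Mann-Whitney U test are illustrative assumptions, not the paper's actual stimuli or analysis.

```python
import random
from scipy.stats import mannwhitneyu

def interleave_and_shuffle(text_a, text_b, seed=42):
    """Build a decontextualised stimulus by shuffling the sentences of two texts.

    Each sentence keeps a tag ("A" or "B") so editing times can later be
    attributed to its original text.
    """
    sentences = [(s, "A") for s in text_a] + [(s, "B") for s in text_b]
    random.Random(seed).shuffle(sentences)
    return sentences

def compare_conditions(durations_coherent, durations_shuffled):
    """Compare per-sentence post-editing durations between the coherent and
    shuffled conditions with a Mann-Whitney U test (no normality assumption)."""
    stat, p = mannwhitneyu(durations_coherent, durations_shuffled,
                           alternative="two-sided")
    return stat, p

# Hypothetical per-sentence post-editing times (seconds) in the two conditions.
coherent = [34.1, 41.7, 29.3, 55.0, 38.2]
shuffled = [36.4, 40.9, 31.0, 57.5, 37.8]
print(compare_conditions(coherent, shuffled))
```

A non-significant result under such a test would be consistent with the paper's finding that removing sentence context has little or no measurable effect on post-editing behaviour.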