
Audiovisual Translation A translation practice at the service of language learning

Francesca Bianchi

Università del Salento (Lecce, Italy)

Department of Humanities

[email protected]

Chemnitz, 10-11 July 2018

Feel free to ask questions at any time!

What is AVT?

• A (range of) means to make a multimodal text available to a wider audience

• Translation? …Or transposition? …transcreation? …other?

• Supported by technology

Why talk about AVT in relation to language teaching/learning?

Interactive way to use videos

Flexible, adaptable to one’s needs

Applied tasks

The student’s work becomes a real product to share

The students usually love it!

Students can be

• Users of AV material

• Creators of AV translations

Videos can be used to teach:

• content

• L1

• L2

Outline

• Focus on multimodal products

- How do they support learning

- Why (features, interactions among features)

• Focus on language in videos

- Language in different text types

- General differences from natural language

- Linguistic realism

• Audiovisual Translation

- Types of AVT and possible uses for teaching L1 / L2

• Focus on Subtitling, with examples relating to L2 teaching

- What types of subtitles for what types of students? (experimental study)

- Subtitling for CLIL (experimental study)

Focus on videos

Videos are excellent teaching tools!

Their multimodal nature and narrative structure stimulate the human brain and activate different types of intelligence, including linguistic, spatial, musical, and emotional intelligence (Berk 2009).

They stimulate students' attention, imagination, and critical debate (Mathews, Fornaciari, Rubens 2012).

Video contents have a long-lasting impact on our memory (Mathews, Fornaciari, Rubens 2012).

Multimedia products

Interaction of several semiotic levels:

- images

- audio

- text

Films, documentaries, videogames, advertisements, interactive software…

Films and the like

• What does each semiotic level convey?

Images

• Provide the framework of the action and the characters: they describe the setting and provide clues to the characters (looks, clothing, gestures)

• May include text: displays (on-screen writing). These are not there by chance, but for a specific purpose (e.g. My Big Fat Greek Wedding; Mickey Blue Eyes; Bridget Jones's Diary; You've Got Mail)

Audio

• Soundtrack

• Dialogues

• Sounds

Soundtrack

• Ambience: describes the emotional state of the characters; leads the audience towards a particular emotional state; creates cross-references (e.g. Shark Tale).

• Singing: gives voice to the characters' thoughts; provides background information (like an off-screen voice; e.g. Disney films)

Dialogues

• Help the plot develop (e.g. flashbacks; background information; …)

• Explicate the interpersonal relations between the characters

• Help understand the characters' mood, personality, etc.

• Dialogues tend to be realistic, although they are not examples of real language *

Sounds

• Make the image realistic

• Direct the audience towards a specific event

• May be functional to the plot (e.g. a barking dog scares two thieves)

Written text

• Captions (explanations): help the viewer understand when/where the action takes place (e.g. Shrek; My Big Fat Greek Wedding)

• Subtitles (dialogues)

The text does not disturb the viewer, because reading is automatic (d'Ydewalle et al. 1991; d'Ydewalle, Gielen 1992).

How do these elements interact?

• Constant interaction:

• May provide different pieces of information

• May provide the same type of information (reinforcing)

• In both cases, they take advantage of the human ability to master the audio-verbal code along with the visual code (Paivio’s dual coding theory)

Paivio’s dual coding theory

Our cognitive system includes two subsystems:

- One processes audio-verbal signals

- The other processes images.

Though independent, the two systems are interconnected: a piece of information from one of the two channels also activates the other system.

Furthermore, dual coding reinforces memorization.

Focus on language in videos

From a linguistic perspective, videos are excellent teaching tools!

Provide examples of natural language use (register variation; sociolinguistic variation).

Show dialogues in context, in a variety of everyday contexts.

Spoken language! But also technical language, ….

Different text types

• Films: dialogues; mostly informal tenor

• Sit-coms: dialogues; mostly informal tenor

• Medical dramas: dialogues; mostly informal tenor; technical vocabulary

• Documentaries: monologues; mostly semi-formal; technical vocabulary

• TV news: monologues; formal tenor

• Video conferences (e.g. TED talks): monologues; mostly semi-formal; technical vocabulary

Dialogues are not examples of real language

• Written to be spoken

• Written by a single person for many different characters

• Films present a compact version of a long story (e.g. many days/months/years in two hours)

• The whole product is written for the benefit of the viewer, who is not part of the story/plot/events

Major differences (Romero-Fresco 2009)

• Relevance (every word is highly relevant for a specific narrative function)

• Informative density

• Predictability (set phrases)

• Clear prosody

• Greater grammatical accuracy

• Syntactic and dialogic structure aimed at reducing/avoiding ambiguity, redundancy, overlapping

Linguistic realism

British and American films: highly realistic

Contemporary films faithfully reproduce the following features typical of spontaneous conversation (Taylor 1999):

- Parataxis

- Verbal elision

- Subject/object dislocations

- Creative language (e.g. puns, creative metaphors)

- Vocatives

- Taboo words

- Interjections

- Hedges

More recent quantitative studies have confirmed that American films (Forchini 2012) and sitcoms, such as Friends (Quaglio 2008, 2009), show linguistic features that are very close to face-to-face conversation.

Let’s enhance videos with AVT !

Types of AVT

• Interlingual subtitling *

• Intralingual subtitling (captioning; teletext) *

• Subtitling for DHH (intralingual) *

• Audiodescription *

• Dubbing

• Voiceover

• Surtitling (theatre)

• Live subtitling (respeaking) *

Audiodescription

• Images to Oral

• Intralingual

• Objective

• Concise

• No overlap with dialogues

• Time constraints

Example

Where? TV, cinema, museums, city tours

Brief descriptions of characters, space and action. An off-screen voice speaks in between the dialogues.

Software: SANAKO
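The "no overlap with dialogues" and time constraints lend themselves to a small worked example. The Python sketch below lists the silent gaps between dialogue cues that are long enough to host a description; the cue timings and the 2-second minimum gap are invented for illustration and are not taken from any tool mentioned here.

```python
# Sketch: find silent gaps between dialogue cues where an audiodescription
# could be inserted. Times are in seconds; all values below are invented.

def find_description_slots(dialogue_cues, min_gap=2.0):
    """Return (start, end) pairs of silences lasting at least `min_gap` seconds."""
    slots = []
    for (_, end_a), (start_b, _) in zip(dialogue_cues, dialogue_cues[1:]):
        if start_b - end_a >= min_gap:
            slots.append((end_a, start_b))
    return slots

# Hypothetical dialogue timings (start, end) in seconds
cues = [(0.0, 4.5), (5.0, 9.0), (14.0, 18.2), (19.0, 25.0)]
for start, end in find_description_slots(cues):
    print(f"Possible audiodescription slot: {start:.1f}s - {end:.1f}s")
# -> Possible audiodescription slot: 9.0s - 14.0s
```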

Audiodescription and language learning

Recent literature suggests that audiodescription may be used to favour language learning, in particular for the elderly, illiterate people, and migrants (Perego 2014).

Classroom uses….

Listen to audiodescribed video: Learn how to describe

Create audiodescriptions: Practice describing places, people, actions…

Suitable for…

• A2 level

• Everyday vocabulary

• Present tenses (simple and continuous)

• There is / there are

• Space prepositions

• Short, simple sentences. Parataxis.

LIVE SUBTITLING

• Respeaking/simultaneous translation + editing and projecting

• Oral to written

• Asynchronous

• Punctuation

• Grammatical correctness

• Elimination of spoken traits

Where? TV news, live broadcasting

Example

Respeaking

The initial task in live subtitling: simultaneous dictation of an oral text to speech recognition software. Punctuation is dictated, too.

Uses…

To train listening skills

To train production skills

To train pronunciation

To learn the importance of punctuation

DRAGON NATURALLY SPEAKING
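To make the "punctuation is dictated, too" point concrete, here is a toy post-processing sketch in Python; the word-to-symbol mapping is an assumed convention for illustration only and does not reproduce Dragon NaturallySpeaking's actual behaviour.

```python
# Toy respeaking post-processor: spoken punctuation words become symbols.
# The keyword-to-symbol mapping below is an assumption for illustration.

PUNCTUATION_WORDS = {
    "comma": ",",
    "full stop": ".",
    "question mark": "?",
    "exclamation mark": "!",
}

def apply_dictated_punctuation(dictation: str) -> str:
    """Replace dictated punctuation keywords with the corresponding symbols."""
    text = dictation
    for word, symbol in PUNCTUATION_WORDS.items():
        text = text.replace(f" {word}", symbol)
    return text

print(apply_dictated_punctuation(
    "good evening comma here are tonight's headlines full stop"
))
# -> "good evening, here are tonight's headlines."
```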

Professional SUBTITLING…

• Oral to written

• Interlingual (L2 to L1 or L1 to L2)

• Synchronous

Technical constraints:

• Max 2 lines

• Max 37 characters per line

• Given character-time ratio (average reading speed)
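These constraints can be checked automatically. A minimal sketch follows: the 2-line and 37-characters-per-line limits come from the list above, while the 15 characters-per-second reading speed is an assumed example value.

```python
# Minimal subtitle constraint checker (sketch).
# MAX_LINES and MAX_CHARS_PER_LINE come from the slide above;
# CHARS_PER_SECOND is an assumed reading-speed value for illustration.

MAX_LINES = 2
MAX_CHARS_PER_LINE = 37
CHARS_PER_SECOND = 15  # assumed average reading speed

def check_subtitle(lines, duration_seconds):
    """Return a list of constraint violations for one subtitle."""
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines: {len(lines)} > {MAX_LINES}")
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line {i} too long: {len(line)} chars")
    total_chars = sum(len(line) for line in lines)
    if total_chars > CHARS_PER_SECOND * duration_seconds:
        problems.append("text too long for the time on screen")
    return problems

print(check_subtitle(["I never said she stole my money,", "but somebody did."], 2.5))
# -> ['text too long for the time on screen']
```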

Example

…continued

• Neutralisation of accents, dialectal features, stammering, voice overlapping.

• Punctuation.

• Deletion of redundancies, false starts, and other features that are typical of spoken texts

• Correct grammar, though respecting register (tenor) variations

…continued

• Reduction (deleting hesitations, repetitions, false starts, vocatives)

• Condensation (shorter paraphrases)

• Neutralisation (dialects, accents, bad words, other features that are typical of spoken discourse)

• Simplification (syntactic and lexical)

• Explicitation (anaphoric chains, logical connectors, other elements of textual cohesion)

Subtitling for the deaf and hard of hearing

• Intralingual (same-language)

• Same technical issues as ordinary subtitling

• Same strategies: Reduction, Condensation, Neutralisation, Simplification (syntactic and lexical), Explicitation

• Use of graphic devices to indicate which character is speaking (name; colour)

• Subtitling of songs and other diegetic sounds

Example

Uses of subtitling in L1 learning

• India (just subtitling)

• Adults (Kothari, Pandey, Chudgar 2004)

• Children (Kothari, Takeda 2000; Kothari et al. 2002)

• New Zealand (L1 subtitling + reading/writing tasks) (Parkhill, Davey 2014)

• Improved vocabulary

• Better reading and understanding

• For all children, and in particular for Maori, Pasifika, and slow learners.

Empirical experiments

Uses of subtitling in L2 learning

Watching subtitled video - Impact on learning

Empirical studies on captions and subtitles

Better than audio only with reference to:

Acquisition and development of L2 vocabulary

Listening comprehension skills

Content memorization

Motivation

BUT NOT:

Grammar

Pragmatic issues

Which subs for which students?

Very many variables at play:

- Students' age

- Students' level of proficiency in L2

- Length of video

- Speed of speech

- Semantic match between dialogues and images

- Students' individual differences and preferences

- Aim (content; vocabulary).

Bianchi & Ciabattoni 2007

Tested the impact of some of these variables using a specifically created integrated computer environment.

Controlled variables:

- Semantic match between audio and video channels

- Type of subtitling: audio-only; captioning (L2); subtitling (L1)

- Student L2 proficiency

- Type of element to learn (content; vocabulary)

- Long-term vs. short-term vocabulary acquisition

Participants

• Participants: 93 psychology students (Italian native speakers), motivated to learn English

• different levels of proficiency (17 beg; 49 int; 27 adv)

• 2 experimental groups (EG1: English subtitles; EG2: Italian subtitles)

• 1 control group (no text aid)

Materials

Scenes from two films:

- Harry Potter: great semantic match between images and text aid

- Fantasia: no semantic match

Original subtitles were used.

Experimental design

• Pre-test: written multiple-choice exercises, administered collectively. Assessed the level of English and knowledge of the words, phrases, and linguistic phenomena targeted in Phase Two.

• Computerised video test: seven days after the pre-test, on individual PCs. Assessed comprehension and short-term vocabulary retention. Multiple-choice questions appeared at the end of each clip; at the end of each series of questions, the subjects could watch the entire film clip again and then review their answers, up to two times (simulating an adult intentionally watching a film as a means to learn English).

• Post-test: one week after Phase Two. Same test.

Results: comprehension

Students with Italian subtitles (EG2) obtained the best results, regardless of their proficiency level and the type of film. Captions (EG1) proved more useful than no-text input for beginners and advanced students (see Markham, 1989), but not for intermediate students. Finally, when the semantic match was high, content comprehension was consistently higher regardless of proficiency level and type of visual aid, and differences between the experimental and control groups were less marked. This result supports the fundamental role of images in general content comprehension (see Baltova, 1994).

Results: short-term vocabulary acquisition

The profiles of the three proficiency levels have little in common, except for higher scores when the semantic match is higher. Captions (EG1) were less useful for vocabulary comprehension than subtitles (EG2), especially when proficiency was lower or when the images did not particularly assist dialogue and plot comprehension.

Results: long-term vocabulary acquisition

All students (regardless of proficiency level or text aid) acquired a greater number of words from Harry Potter. The number of words acquired grew with proficiency. Intermediate and advanced students seem to have benefited most from subtitles, beginner students from captions.

Conclusions

1. Greater semantic match between audio, video, and text inputs helped at all levels of proficiency:

- higher results in short-term comprehension tasks

- higher results in both short- and long-term vocabulary tasks

2. The benefits of each type of subtitling varied with language proficiency.

Results (continued)

Content comprehension was generally favoured by subtitles (L1).

Short-term vocabulary understanding was favoured by captions (L2) more than by subtitles (L1), in particular in the case of beginners.

Long-term vocabulary acquisition was favoured by captions (L2) in beginners, but by subtitles (L1) in intermediate and advanced students.

But more ways exist to take advantage of subtitles. …because subtitles are not simply a transcription or translation of the dialogues!

Subtitles’ peculiar features

Subtitles meet three fundamental needs:

1. Adapt spoken language to writing norms

2. Produce clear, quickly readable subtitles

3. Respect technical constraints (max number of lines per subtitle and max number of characters per line)

Various types of adaptations

Change in the mode of discourse (spoken to written)

• punctuation and capitalization

• linguistic elements which are peculiar to spoken language (e.g., hesitations, repetitions, interjections, broken words, mumbling…) are reduced or deleted

• reduction or deletion of elements that are not expected in writing (grammar mistakes are corrected; swear words may be neutralised)

• preference for standard language

• different accents disappear

• dialects (if used) disappear

…may thus result in written subtitles that have a slightly more formal tenor than the corresponding spoken lines.

Readability + technical constraints

Lead to:

Reducing text length

Making meaning explicit and clear, to reduce the viewer's cognitive load when reading a subtitle

Reformulating

Reduction tricks (Diaz-Cintas & Remael 2007)

• replacing verbal periphrases with shorter verb forms

• preferring simple to compound tenses

• generalising enumerations

• using shorter near-synonyms

• changing word classes

• using shorter forms and contractions

• changing negative and interrogative sentences into affirmative ones

• using direct questions rather than indirect ones

• simplifying indicators of modality

• turning direct into indirect speech

• changing the subject of a sentence or phrase

• changing the theme-rheme order

• reducing compound sentences into simpler ones

• transforming active sentences into passive ones or vice versa

• replacing nouns or noun phrases with pronouns

• merging phrases or sentences

• and omitting words such as adjectives, adverbs, phatic words, greetings, interjections, vocatives, formulas of courtesy, hesitations and false starts

Explicitation

Tricks:

• Substituting pro-forms

• Replacing ellipsis

• Replacing deictics

• and more

Perego (2003):

- explicitation in conjunction with addition and specification

- aimed at:

- compensating cultural gaps between the source culture and the target culture (cultural explicitation);

- verbalising data conveyed by the visual or auditive channels (channel-based explicitation);

- compensating loss due to source text reduction (reduction-based explicitation).

Intralingual subtitling…different types…

• Professional film captioning (DVDs): adaptation, due to linguistic and technical needs.

• Specifically created for language learning: may reproduce the spoken text rather faithfully

• Automatically created (e.g. YouTube): try to reproduce the spoken text literally, but include many mistakes and are often too fast

Uses…

• Used by the students, in autonomous learning

• Easily and quickly created by the teacher, and used in class to make a slightly too difficult text more accessible and enjoyable

• Printed out for the students to read or manipulate (e.g. fill-in-the-gap exercises; punctuation exercises)

• Offered to the students to check their answers to a listening exercise

• Offered to the students to check a transcription they made

• Used as a ‘listen and correct the mistakes’ task (automatically-made subs, but not only!)

• Compare professional captions to original audio to attract the student’s attention to specific linguistic elements (e.g. deictics; elliptic genitive;…)

• Created by the students as an alternative listening comprehension / transcription task

• Created by the students following professional subtitling practice: obliges them to focus on the differences between speaking and writing!

• Or vice versa: provide a subtitled video with no audio and invite students to recreate lively dialogues.

The Superman task

Teacher:

• created English subtitles for a short English video

• printed out the subtitles, after removing pieces of dialogue

Secondary school students:

• watched the video without subs

• filled in the gaps in a printout of the English subtitles

• checked their answers by watching the subtitled video

• used the English .srt file to create Italian subtitles
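As an illustration of how the gap-fill worksheet in this task could be generated from the teacher's .srt file, here is a minimal Python sketch; the file name superman_en.srt and the rule of blanking every fifth word are hypothetical.

```python
# Sketch: turn an .srt subtitle file into a fill-in-the-gap worksheet.
# "superman_en.srt" and the "blank every 5th word" rule are hypothetical.

import re

def read_srt_texts(path):
    """Return the text of each subtitle block in a simple .srt file."""
    with open(path, encoding="utf-8") as f:
        blocks = re.split(r"\n\s*\n", f.read().strip())
    texts = []
    for block in blocks:
        lines = block.splitlines()
        # skip the index line and the timecode line, keep the dialogue lines
        texts.append(" ".join(lines[2:]))
    return texts

def make_gap_fill(texts, every=5):
    """Replace every n-th word with a blank."""
    gapped = []
    for text in texts:
        words = text.split()
        for i in range(every - 1, len(words), every):
            words[i] = "_____"
        gapped.append(" ".join(words))
    return gapped

for line in make_gap_fill(read_srt_texts("superman_en.srt")):
    print(line)
```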

Interlingual Subtitling

Dialogue adaptations.

All the subtitling-specific features described before

+

Ordinary translation issues:

- lexical issues

- morphosyntactic issues

- pragmatic issues

- cultural issues

Main subtitling strategies

(Bianchi 2015a, inspired by Gottlieb’s and Lomheim's classifications)

• Addition (of a NP, VP or PP or other element)

• Effacement (of a NP, VP or PP or other element)

• Substitution (lexical substitution)

• Literal transfer

• Reformulation (morphosyntactic changes)

Uses…?

• Fill-in-the-gap exercises

• To focus attention on a specific grammatical element and compare English phrasing to L2 phrasing (contrastive grammar)

• To work on synonyms / hypernyms / hyponyms

• To highlight cultural equivalents

• Ask the students to read, analyse and comment on professional subtitles (understand which strategy was applied and guess why)

• Ask the students to produce subtitles (from L2 to L1, or vice versa): this obliges them to focus on the differences between speaking and writing, as well as on grammar, vocabulary and punctuation!

• Or vice versa: provide a subtitled video with no audio and invite students to recreate lively dialogues.

• Introduce students to translation

Subtitling as a CLIL task

University students:

• Watch a science video

• Make a transcription

• Self-correct their transcription

• Produce English subs

• Search the Internet for parallel texts in L1 and suitable translations for technical vocabulary

• Produce Italian subs

Material

Videos from the GeoSet project, about things like:

- solar cells

- fullerenes

- C60

- wave power generators

Specifically created for a young audience and for the dissemination of science.

Video length: 3-7 minutes.

http://www.geoset.info

• Does creating subtitles favour the acquisition of scientific content, and scientific vocabulary?

• How does subtitling compare to watching subtitled video?

• Does creating subtitles increase the student’s interest in science?

EXPERIMENT (Bianchi, 2015b)

Experimental design

Experimental group: 24 MPhil students in AVT, at the beginning of their first year. They created subtitles and were tested on content, vocabulary, and love for science 1 week after the end of the project work.

Control group: 16 undergraduate students in translation, at the end of their third and last year. They watched the subtitled videos and were tested on content, vocabulary, and love for science immediately after the videos.

Watching tasks (control group only)

• 3 different conditions:

• Videos with English subs only

• Videos with Italian subs only

• Videos with English subs + same videos with Italian subs

• Watched each video 3-4 times.

• After that, they were administered the questionnaires

• The students were told they would be tested on contents and language.

The questionnaires

• Questions on content, e.g.:

- Quali sono le 3 forme pure di carbonio note finora in natura? (What are the three pure forms of carbon known in nature so far?)

- Qual è la forma di una molecola di C60? (What is the shape of a C60 molecule?)

- Come si può ottenere il C60? (How can C60 be obtained?)

• Questions on language, e.g.: provide a suitable translation of the following:

- Nanotubo a spirale (spiral nanotube)

- The caps of the nanotubes

• Questions on interest in the subject matter, before and after working on / watching the videos

• Self-rating of previous knowledge (each language and content item)

Assessment scheme

0 = wrong answer

0.5 = partially correct answer

1 = correct answer

Prior knowledge – self assessment

Content

• 1: didn’t know anything about this topic

• 2: knew something about the topic, but learnt this particular content through the project work/video.

• 3: knew this content, and the project work/video helped me refresh my memory

• 4: knew everything about the topic. I could have answered this question even without the project work / video.

Totally unknown items

Partially unknown items

Previously known items

Prior knowledge – self assessment

Language

• 1 = never heard this word or its translation before

• 2 = didn't know the English word, but knew the Italian one

• 3 = didn't know the English word or the Italian one, but context helped guessing

• 4 = didn't know the English word, but it was clear from the context; and I knew the Italian one

• 5 = I already knew the English and Italian words

Totally unknown items

Partially unknown items

Previously known items

Results

Both activities (watching ready-made subtitles and creating subtitles) favoured content understanding and language memorization.

Content:

The watching group's results were much lower than the subtitling group's, despite the former's similar or slightly higher previous knowledge.

Language:

The watching group's vocabulary gain was, on the whole, slightly lower than or similar to that of the subtitling group.

Comparison between groups

However: 1. the students who watched ready-made subtitles were tested immediately after watching the videos, while the students who created subtitles were tested seven days after completing the subtitling task; 2. the watching students knew they would be tested on the contents and language of the videos, while the subtitling students were not aware of this.

This suggests that creating subtitles is probably a much more powerful activity than watching subtitled videos.

Both activities increased the students' interest in science, but the increase declared by the subtitling group was greater than that of the watching group.

Simplified version…

Secondary school students:

• Watch a science video with captions

• Make a list of technical terms

• Use schoolbooks to find Italian equivalents

• Produce Italian subs

• Class correction of subtitles

They enjoyed it! And they learnt!

Captions / subtitles for DHH

Shorter.

Very simple.

Limited vocabulary.

SVO order (always!)

Explain diegetic sounds ([phone ringing], [lyrics]): require greater attention to the context

Coloured lines (one colour per character)
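As a rough illustration of how such cues might look, the sketch below builds an SDH-style cue with a speaker colour and a bracketed sound label, using the <font color> tags that many players accept in .srt files; the colour assignments and the example line are invented.

```python
# Toy SDH cue builder: one colour per character plus bracketed sound labels.
# Colour assignments and the example cue are invented for illustration;
# <font color> support depends on the player.

SPEAKER_COLOURS = {"MOTHER": "#ffff00", "TOM": "#00ffff"}

def sdh_cue(speaker=None, text="", sound=None):
    """Return the text of one SDH-style cue."""
    parts = []
    if sound:
        parts.append(f"[{sound}]")
    if speaker:
        colour = SPEAKER_COLOURS.get(speaker, "#ffffff")
        parts.append(f'<font color="{colour}">{speaker}: {text}</font>')
    elif text:
        parts.append(text)
    return "\n".join(parts)

print(sdh_cue(speaker="MOTHER", text="Dinner is ready!", sound="phone ringing"))
```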

Uses…..

• Work on synonyms

• simplification

• Reformulation (e.g. manipulate ordinary subs to make them suitable for DHH, or vice versa)

Software to create subtitles…

Free

• VISUAL SUB SYNC

• AEGISUB

• SUBTITLE EDIT

Paid:

• WINCAPS

• SANAKO

• SPOT

• many others

THANK YOU FOR YOUR PATIENCE

References

• Baltova I. (1994), Impact of video on the comprehension skills of core French students, in: “Canadian Modern Language Review”, 50 (3), 507-531.

• Berk R.A. 2009, Multimedia teaching with video clips: TV, movies, YouTube, and mtvU in the college classroom, in “International Journal of Technology in Teaching and Learning” 5 [1], pp. 1-21.

• Bianchi F. & Ciabattoni T. (2007). "Captions and subtitles in EFL learning: an investigative study in a comprehensive computer environment". In Baldry A., Pavesi M., Taylor-Torsello C., Taylor C. (eds.). From DIDACTAS to ECOLINGUA: an ongoing research project on translation and corpus linguistics. Trieste: E.U.T. Edizione Università di Trieste, pp.69-90.

• Bianchi F. (2015a), “The narrator’s voice in science documentaries: qualitative and quantitative analysis of subtitling strategies from English into Italian”, ESP Across Cultures, 12, 7-32.

• Bianchi F. (2015b). “Subtitling science: An efficient task to learn content and language.” Lingue e Linguaggi 15, pp. 7-25.

• d’Ydewalle G. and Gielen I. 1992, Attention Allocation with Overlapping Sound, Image, and Text, in Rayner K. (ed.), Eye Movements and Visual Cognition: Scene Perception and Reading, Springer-Verlag, New York, pp. 414-427.

• d’Ydewalle G., Praet C., Verfaillie K. and Van Rensbergen J. 1991, Watching subtitled television: Automatic reading behavior, in “Communication Research” 18, pp. 650-666.

References

• Diaz-Cintas J. and A. Remael 2007. Audiovisual Translation: Subtitling. Manchester: St. Jerome Publishing

• Forchini P. 2012, Movie Language Revisited. Evidence from Multi-Dimensional Analysis and Corpora, Peter Lang, Bern.

• Kothari B. and Takeda J. 2000, Same language subtitling for literacy: small change for colossal gains, in Bhatnagar S.C. and Schware R. (eds.), Information and communication technology in development, Sage Publications, New Delhi.

• Kothari B., Pandey A., Chudgar A.R. 2004, Reading Out of the “Idiot Box”: Same-Language Subtitling on Television in India, in “Information Technologies and International Development” 2 [1], pp. 23-44.

• Kothari B., Takeda J., Joshi A. and Pandey A. 2002, Same language subtitling: a butterfly for literacy?, “International Journal of Lifelong Education” 21 [1], pp. 55–66.

• Markham P. (1989), The effects of captioned television videotapes on the listening comprehension of beginning, intermediate, and advanced ESL students, in: “Educational Technology”, 29 (10), 38-41.

• Mathews C.S., Fornaciari C.J. and Rubens A.J. 2012, Understanding the Use of Feature Films to Maximize Student Learning, in “American Journal of Business Education” 5 [5], pp. 563-574.

• Parkhill F. and Davey R. 2014, ‘I used to read one page in two minutes and now I am reading ten’: Using popular film subtitles to enhance literacy outcomes, in “Literacy Learning: The Middle Years” 22 [2], pp. 28-33.

References

• Perego E. 2003. Evidence of explicitation in subtitling: towards a categorisation. Across Languages and Cultures 4/1: 63-88.

• Perego E. 2014, Da dove viene e dove va l’audiodescrizione filmica per i ciechi e gli ipovedenti, in Perego E. (ed.), L’audiodescrizione filmica per i ciechi e gli ipovedenti, EUT – Edizioni Università di Trieste, Trieste, pp. 15-46. http://hdl.handle.net/10077/9997 (16.10.2015).

• Quaglio P. 2008, Television dialogue and natural conversation: Linguistic similarities and functional differences, in Ädel A. and Reppen R. (eds.), Corpora and Discourse: The challenges of different settings, Benjamins, Amsterdam, pp. 189-210.

• Quaglio P. 2009, Television Dialogue: The sitcom Friends vs. natural conversation, Benjamins, Amsterdam.

• Romero-Fresco P. 2009, The fictional and translational dimensions of the language used in dubbing, in Freddi M. and Pavesi M. (eds.), Analysing Audiovisual Dialogue. Linguistic and Translational Insights, CLUEB, Bologna, pp. 41-55.

• Taylor C. 1999, Look who's talking. An analysis of film dialogue as a variety of spoken discourse, in Lombardo L, Haarman L., Morley J. and Taylor C. (eds.), Massed Medias: Linguistic Tools for Interpreting Media Discourse, LED, Milano, pp. 247-278.