
Video Translation 101



- by José Henrique Lamensdorf    


The first step is to get the video converted into a format the translator can work with. Nowadays any video format can be converted into digital video; however, the cost of doing so varies widely, depending on the distance to be covered, e.g. from motion picture film, analog video, or some less common digital video format. Actually, some translators can still work with analog VHS tapes, but if these will eventually be professionally converted into digital video, it's a good idea to do it at the outset, before translating.

There is a logistics issue here, too. Depending on the geographic distance between client and translator, sending video tapes or DVDs physically may extend the timeline, while uploading and downloading long, hi-res video files over the web may take too long to be practical. One second of DVD-quality video means generating over 10 million pixels on the screen (plus audio). Regardless of software compression, it's a lot of data. So the best way to send video for translation over the web is a reduced-screen-size video file with uncompromised sound quality. This is easily (though often not so quickly) accomplished with software, often freeware. After all, most of the translation work will be on the soundtrack; the video is just for reference.
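The "over 10 million pixels" figure can be verified with quick arithmetic. The sketch below assumes NTSC DVD resolution (720x480 at roughly 29.97 frames per second), which is an illustrative choice of standard, not something specified in this article.

```python
# Quick check of the data volume of one second of DVD-quality video.
# Assumes NTSC DVD: 720x480 pixels at ~29.97 frames per second.
width, height, fps = 720, 480, 29.97

pixels_per_second = width * height * fps  # pixels drawn per second of playback
print(f"{pixels_per_second:,.0f} pixels per second")
```

Even before color depth and audio are considered, that is more than ten million pixel values per second, which is why a reduced-screen-size working copy travels over the web so much faster.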


The first decision to be made on video translation is the intended (video) outPUT. As I said before, the (translation) outCOME, i.e. the target language, has been chosen at the start. Considering what we see on TV every day, the two options seem to be dubbing or subtitling (or subbing, for short). There are more, but we'll look at these two first.

To dub or to sub? This is the basic initial decision, as it involves a substantial difference in costs down the road. At least in Brazil, the whole process of dubbing, translation included, costs on average three times as much as subtitling the same video. But don't stop reading to decide right now; there are some technical considerations as well. An important point is that the translation for one can seldom be effectively "converted" into a translation for the other. A change in objectives after translation will often mean wasted work and a return to square one.


Four options will be covered here. The two basic ones, subtitling and dubbing, plus voice-over and hybrid.

There is no hard rule preventing anyone from mixing these in a film, but random, unjustified shifts within the same film will cause a general uneasiness in spectators. So let's look at each.


Subtitling

It's the most economical way. No need to describe it; its key shortcomings are:

  • Subtitles use part of the spectator's attention. If it's just one or a few "talking heads", one viable option is to have the script translated into text, and send it by e-mail or fax. No need actually to watch the movie. Still pictures of these "heads" may be included, if desired.

  • If people talk too fast, and say too much in terms of content, part of it may be lost, as there won't be time to read so much text onscreen.

  • It definitely doesn't work for technical instruction films. One cannot read something like "Pull the latch release under the cover to get access to the control knob underneath." and watch how it's done at the same time.

  • If there are charts, graphs or other data visuals on the screen, it will be impossible to read both them and the subs at the same time, even if the visuals are translated.

  • Audiences that have limited or no fast reading ability (small children, illiterate people, visually impaired people, foreigners) will have limited or no access to the content.

  • The original audio will remain there. If the translation is bad, bilingual spectators may protest.
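The fast-speech shortcoming above can be sketched numerically. The 15-characters-per-second reading rate below is a common subtitling rule of thumb, used here only as an illustrative assumption; real specifications vary by audience and country.

```python
# Why fast speech loses content in subtitles: compare what is spoken
# in a stretch of time with what can comfortably be read in it.
READING_CPS = 15  # assumed comfortable reading speed, chars/second

def max_readable_chars(seconds: float) -> int:
    """Characters a viewer can comfortably read in the given time."""
    return int(READING_CPS * seconds)

spoken = "a 90-character burst of rapid-fire dialogue delivered in just three seconds of screen time"
print(len(spoken))              # characters actually spoken
print(max_readable_chars(3.0))  # characters readable in those 3 seconds
```

With roughly twice as much dialogue as readable text, about half the content would have to be condensed away by the subtitler.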


Dubbing

No need to describe it either. As said before, the whole process is about three times more expensive than subtitling. Its key shortcomings are:

  • It requires a dubbing-specialized translator, so that the dubbing script allows voice artists to sync their speech with the lip movements of the cast. If it's only off-screen narration, there is no such problem.

  • Though I personally make no such distinction, most translators charge an (often much) higher fee to translate for dubbing than for subbing.

  • If there are too many roles, the dubbing cost may skyrocket, as this will require a large dubbing cast.

  • If there is music and SFX (sound effects, or Foley), the so-called M-E (music + effects), unless provided on a separate audio track, might have to be re-created, which can be quite expensive.

  • Musicals may require partial subtitling, or special talent - musicians & singers - for dubbing.


Voice-over

It is a process similar to dubbing, though more economical. It is seen mostly in documentaries and newscasts.

It usually involves dubbing by at most three people: one narrator, one "man", and one "woman". The narrator does the job exactly as if it were for dubbing. The other characters' speeches start with the original sound; its volume is immediately lowered, and a non-sync translation is read by one of the two other voice artists (the same "man" voicing all men; the same "woman" voicing all women). These two finish speaking just before the original character ends their speech, at which point the volume is restored to normal.

It is comparatively cheaper than dubbing in all aspects, especially if the video includes statements from several different people. Its key shortcomings are:

  • The output inevitably looks and feels "cheap", as there is a continuous reminder that it has been translated. Sometimes, depending on the content, it gives the feeling that the intention was to have it dubbed, but the budget was prematurely exhausted.

  • If there is any dramatic interpretation, it will be completely lost, as the translation is read with minimum interpretation, like a newscast.

  • When this is done by one single voice for all narration, men, and women, it's called "lectoring".

Its purpose is to offer, at a much more affordable cost, a video with the effect of dubbing, so spectators have more time to watch the images instead of reading subtitles.


Hybrid

In this process the narrator, and sometimes the leading characters, are dubbed. All other appearances, such as testimonials by different people, are subtitled. It calls for a lot of common sense to decide which roles will be dubbed and which will be subbed. There must be some logic to it; otherwise frequent, especially unjustified, shifts between reading and listening will impair the spectator's attention.

It is worth remembering that a change of process under way will inevitably take the job back to square one: translation. So, if a video is humbly subtitled, becomes successful, and then has to be dubbed, everything starts again with translation. At best, the initial translation may serve as a reference.


Some misconceptions...

Too many people believe that a video must be transcribed first, and then translated. This is not true when it is to be translated from one language into another: good translators work directly from the audio/video into the translated script.

Many translators offer a lower price if the script is provided. The reason is that when noise/SFX/music obliterates the speech, it might be difficult to understand and translate what was spoken. But the script must be accurate, and match the final edit. All too often it isn't, and it doesn't; so watch out!

Some people think that all the translator needs is the script, or the subtitles in some language they can translate from; no need to see the video. This is the birthplace of most of the bloopers we see onscreen. Imagine the phrase: "It's down!" Would that be something that was lowered? Some equipment or device not working? Or the dumped contents of a pillow? Then there are genderless words that have a gender in the target language, like colors, positions, adjectives, etc. A zillion minor things would be obvious once the video has been seen, but impenetrable if not.

In the worldwide struggle to lower costs, some people try to hire the cheapest vendor at every stage. It is worth pointing out that video translation is a progressive sequence of events, where the quality of each step is wholly dependent on the quality rendered in all previous steps. If a flawlessly dubbed/subbed DVD is mass-duplicated onto cheap media and all those copies are useless, it's just a matter of disposing of them and duplicating again. At the other end of the timeline, if the translation is bad, no good dubbing/subbing will ever save it. If the process goes as far as having many copies made that must be discarded, it will have to go back to the translation stage, and all the ensuing steps will have to be redone.


Here I'll cover the translator's possible involvement after his/her part has been accomplished.

Dubbing (also for voice-over)

The translator is seldom involved in the dubbing process after the translation has been delivered. Most of the possible issues will be handled by the dubbing director.

However, there are some jobs with demanding quality requirements where, if geographically possible, the translator is invited to attend the dubbing sessions at the recording studio, to make on-the-fly decisions if needed. Any translator given this chance should take it. It's a valuable learning experience, as they'll see the actual consequences of the decisions made while translating, and can thereby radically improve their working skills.

In other cases, the translator will receive the raw (unmixed) dubbed video, to check whether anything should be redone. At this stage it is easy to recall a voice artist to redo a phrase, or even part of one. Only after this raw dubbing is cleared will the next value-adding step, audio mixing, take place.


Subtitling

Thanks to the digital video (r)evolution, a translator can render a complete subtitled DVD on a standard computer, provided it has a few rather common and inexpensive features.

In subtitling, the next step after translation is called "spotting", or "time-spotting". This involves breaking the subs into blocks of text within specified constraints, such as "X lines with at most Y characters each" and punctuation standards, and assigning each block two times: in and out. These constraints usually vary from one country to another, the characters-per-line limit depending on specifications such as the font and size to be used in the actual subtitles. The output is a file in a specific format; there are more than 50 such formats, many convertible into one another, the choice depending on compatibility with the software that will be used thereafter. It is worth noting that some of these subtitle formats include more extensive data, e.g. *.SSA, to be used with simpler subtitling software, while others carry somewhat skimpy information, e.g. *.TXT, used by more complex subtitling (often DVD authoring) software, which will handle all these details itself.
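As a minimal sketch of what spotting constraints look like in practice, the checker below validates one subtitle block against hypothetical limits (2 lines of at most 32 characters, at least one second on screen). The numbers and rules are illustrative assumptions; as noted above, the actual specifications vary by country, client, and font.

```python
# Sketch: validating one spotted subtitle block against assumed
# spotting rules. The limits here are illustrative, not a standard.
MAX_LINES = 2        # assumed maximum lines per subtitle
MAX_CHARS = 32       # assumed maximum characters per line
MIN_DURATION = 1.0   # assumed minimum on-screen time, in seconds

def check_block(text_lines, t_in, t_out):
    """Return a list of rule violations for one subtitle block."""
    problems = []
    if len(text_lines) > MAX_LINES:
        problems.append(f"too many lines: {len(text_lines)}")
    for line in text_lines:
        if len(line) > MAX_CHARS:
            problems.append(f"line too long ({len(line)} chars): {line!r}")
    if t_out - t_in < MIN_DURATION:
        problems.append("on-screen time too short to read")
    return problems

# A block that fits the rules (no violations reported):
print(check_block(["Pull the latch release", "under the cover."], 12.0, 15.5))
# A block that breaks them (overlong line, too little screen time):
print(check_block(["This single line is far too long to fit the limit"], 16.0, 16.4))
```

A real spotting tool would apply such checks to every block in the file, and would also enforce minimum gaps between consecutive subtitles and reading-speed limits.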

After a compatible subtitles file has been generated, it's time to render the subtitles, and there are three main ways of doing it. The most basic one is burning the subs onto the video itself; we've seen the result for years on VHS. Once it's done, the only way to watch the video without the subs is to cover the part of the screen where they appear. Another one, used by cable TV broadcasting stations, is to generate and overlay the subs on the fly. The third is exclusive to DVD: authoring the disc with so-called "subpicture" files, which are actual overlays shown together with the main video. One DVD may contain up to 32 such files, plus "none", selectable through menus devised for this purpose, or with the "subtitle" key on the remote control.


Closed captions

This article would not be complete if I failed to mention closed captions. They are like subtitles, but with a different purpose: providing the complete audio for hearing-impaired spectators. They don't require translation, as closed captions are in the same language as the audio track being played; there is no objection, however, to closed captions on a dubbed video. They include not only the full spoken script, but also specific noises, such as [car starts], [bell rings], [dog barks], etc.

Bear in mind that the hearing impairment might not always lie in the spectator: a noisy environment, such as a train station, may also render closed captioning useful.


So here is a roundup of most of the basics of video translation. It's not a job for every translator, but specialized work. It demands training, and certainly improves with experience. There is also the choice of sticking to translation only, or going deeper and deeper. Clients will usually have different demands regarding the parts of the work they need: some will want the whole nine yards, others just the translation for one of the options described above.

Is it always as much fun as everybody thinks? No, it brings as much fun as any other kind of translation work. It may be the next Hollywood awards-scooping movie, or it may be a dull interview with dull people who don't know what they are talking about, nor why. It's all in a day's work!

© 2008 José Henrique Lamensdorf - By special agreement with Proz, where this article was originally released, it may NOT be republished elsewhere.
