VIDEO SUBTITLING - José Henrique Lamensdorf - translation - tradução

Many people ask me for a cost estimate because they need
a video subtitled. They certainly know the source material, the original video, as well as the desired end product, the subtitled video. Yet they don’t seem to know what happens between one and the other. Some of them present me with a huge number of hours of video they want subtitled, reasoning that since there is so much of it on TV, it must be a cheap and quick process.

My intent here is to briefly explain what takes place in the process, as well as a few of the most common options.


It is assumed that the video is spoken in one language, and that subtitles are desired in another. “Subtitling” videos in the
same language they are spoken in is called “closed captioning”, which will be covered briefly at the end; let’s leave it aside for the time being.

So the first step is translation.

Let’s begin by correcting some misconceptions I’ve seen.

Unless you are having the video translated into several languages, there is no reason to have it transcribed to obtain the original script; we’ll see why shortly. If you do have the script, there shouldn’t be a reason to have it fully translated first. This would only be the case if the subject matter were so specialized, using such hard-to-find terminology, that no single translator could be found who specializes in both that subject and video translation for subtitling.

While we are on this issue: yes, you do need a specialist in translating video for subtitling. Many good translators don’t translate video; they lack both the hardware and software for it and the necessary training. Some translators work on video only for dubbing, and won’t deliver output adequate for subtitling. I was one of them: I spent 18 years translating for dubbing before I ventured into subtitling. The issue is not about either one being more difficult or easier than the other; many of the tools are the same, yet the approach is completely different. Sintra, the Brazilian National Translators Union, suggests charging no less than twice the suggested subtitling rates for translation for dubbing, a stance I personally disagree with.

The strength in translating for subtitles lies in conciseness. The translator must preserve, as much as possible, the gist of what is said in the original, while seeking the most trimmed-down text, so spectators have more time left to watch the action onscreen after reading each subtitle.

For example:
Original in Portuguese (spoken):
É minha profundamente arraigada opinião que...
Full translation: It is my deeply rooted opinion that...
Ideal subtitle (written): I really think that...

Is the script necessary?

I have absolutely nothing against Sintra; I’m using them here only as a reference for translation rates in Brazil. However, they suggest that a translator should add a 50% surcharge for translating videos when no script is provided.

Unquestionably, when the speech is drowned out by loud noise or music, it will be hard to understand what is being said. On second thought, however, spectators who are native speakers of the original language will face the same problem, so a good original production is unlikely to let that happen.

The major reason to depend on the script is the accurate spelling of proper names in the subtitles. A good video translator will work directly from the soundtrack, yet how can we enable him/her to correctly spell, in the subtitles, the name of a
Professor Zbygniew Wojechszlecki, from the University of Szczecin, when it is spoken by a non-Polish speaker? The translator won’t know how to google it for the proper spelling. If a script is available, providing it solves this problem; otherwise some reference material may help, even if it’s just links to a few web pages.

Another concern for the video translator is what I call the video’s
rhythm: whether it has many pauses, many interchanges, quick dialogue, and countless other variables. The translator should advocate for spectators, making sure they can follow the rhythm of the video while understanding what’s going on in it. When a speech spans more than one subtitle, considerable common sense is required in slicing the fragments, so that the general idea is built progressively in the spectator’s mind.

For this reason it is not viable to translate from existing subtitles alone, without having the translator watch the video. To illustrate: if we have a video spoken in Dutch and subtitled in English, it is foolish to ask the translator to work from the English subtitles alone into Portuguese and reuse the
time-spotting (our next point) without watching the video. Given the difference in phrasal structure between English and Portuguese, subtitles made this way will almost certainly come out very badly.

Another reason to have the translator watch the video is the possibility of one same phrase having different meanings. For instance, imagine It’s down! being said out of the blue. The translator will have to watch more of the video to discover what it’s all about, and then go back. Is something in a lowered position, has some equipment stopped working, or has someone found the contents of a torn pillow? Many words in English don’t inflect, yet they do in other languages. In Portuguese, it’s red may be é vermelho (masculine) or é vermelha (feminine). And while blue in Portuguese is azul for both genders, in Italian it will be azzurro (m) or azzurra (f), so it’s better not to go there, and let the translator watch the video.

Some video translators only
translate for subtitling; time-spotting (or simply spotting, which we’ll see next) is done by someone else. The convention is to use at most two lines in each subtitle, yet the maximum number of characters per line may vary depending on the font and size that will be used, so the translator should be told this number.
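As an illustration, checking translated subtitles against these limits is easy to automate. Here is a minimal sketch in Python; the limits of 2 lines and 40 characters per line are assumptions for the example, since the real numbers depend on the font and size the client will use:

```python
# Minimal sketch: validate one subtitle's text against assumed limits.
# MAX_LINES and MAX_CHARS_PER_LINE are illustrative; the real values
# depend on the font and size chosen for the final video.

MAX_LINES = 2
MAX_CHARS_PER_LINE = 40

def check_subtitle(text: str) -> list[str]:
    """Return a list of problems found in one subtitle (empty if OK)."""
    problems = []
    lines = text.split("\n")
    if len(lines) > MAX_LINES:
        problems.append(f"{len(lines)} lines (max {MAX_LINES})")
    for i, line in enumerate(lines, 1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line {i} has {len(line)} chars (max {MAX_CHARS_PER_LINE})")
    return problems
```

A translator (or spotter) could run every subtitle through such a check before delivery, catching overlong lines early instead of during spotting.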

Translators usually deliver their subtitles for spotting in one of these formats:

First line of first subtitle
Second line of first subtitle

First line of second subtitle
Second line of second subtitle



First line of first subtitle|Second line of first subtitle
First line of second subtitle|Second line of second subtitle
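The two layouts above carry exactly the same information, so converting between them is a trivial mechanical step; a sketch of going from the pipe-delimited layout to the blank-line-separated one:

```python
# Convert the pipe-delimited delivery format (one subtitle per line,
# the subtitle's own lines separated by "|") into the layout with one
# line per subtitle line and a blank line between subtitles.

def pipes_to_blocks(text: str) -> str:
    subtitles = [line.split("|") for line in text.splitlines() if line.strip()]
    return "\n\n".join("\n".join(lines) for lines in subtitles)

source = (
    "First line of first subtitle|Second line of first subtitle\n"
    "First line of second subtitle|Second line of second subtitle\n"
)
print(pipes_to_blocks(source))
```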

One final remark: video translation work is not quick. My sustained average is 6:1, meaning it takes me one hour to translate 10 minutes of video. I’ve heard of translators who work at 12:1, but I’ve never heard of anyone doing it faster than 5:1. As it is tiresome work requiring concentration, it’s not reasonable to assume I’ll translate an 80-minute video in one eight-hour day while maintaining any quality whatsoever.


Explained in its simplest form, time-spotting (or simply spotting) is defining when, relative to the start of the video, each subtitle will come up onscreen and vanish, in hours, minutes, seconds, and fractions thereof.
This is done with software, the result being a file containing mostly text. The problem here is that this text file may exist in over 50 formats, the choice depending on how the subtitles will be applied to the video, i.e. which program will be used to do it.
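Internally, a time position is just a count of milliseconds from the start of the video; formatting it as hours, minutes, seconds, and fractions is plain integer arithmetic. A sketch, using the comma-before-milliseconds convention that the SRT format (shown below) happens to use:

```python
def ms_to_timecode(ms: int) -> str:
    """Format milliseconds from video start as hh:mm:ss,mmm."""
    hours, rest = divmod(ms, 3_600_000)   # 3,600,000 ms per hour
    minutes, rest = divmod(rest, 60_000)  # 60,000 ms per minute
    seconds, millis = divmod(rest, 1_000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{millis:03d}"

print(ms_to_timecode(2_934))  # the first cue in the examples below
```

Other formats differ only in punctuation and precision (e.g. hundredths of a second instead of milliseconds), which is part of why so many nearly identical file formats exist.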

When such software is simple, usually freeware, this file contains most of the information on the subtitles: font, color, size, position, etc. One of the more complete file formats is SSA (short for SubStation Alpha), which looks like this:

ScriptType: v4.00
[V4 Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, TertiaryColour, BackColour, Bold, Italic, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, AlphaLevel, Encoding
Style: Default,Trebuchet MS,36,65535,16777215,16777215,0,-1,0,1,2,2,2,50,50,40,0,0
Format: Marked, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: Marked=0,0:00:02.93,0:00:07.30,Default,NTP,0000,0000,0000,!Effect,First line of first subtitle\NSecond line of first subtitle
Dialogue: Marked=0,0:00:08.24,0:00:13.52,Default,NTP,0000,0000,0000,!Effect,First line of second subtitle\NSecond line of second subtitle

When the software that will implement the subtitles is more complex, the subtitle files become much simpler, like the SRT (SubRip) format:

00:00:02,934 --> 00:00:07,302
First line of first subtitle
Second line of first subtitle

00:00:08,247 --> 00:00:13,527
First line of second subtitle
Second line of second subtitle
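The SRT layout is so simple that reading it back into (start, end, text) records takes only a few lines of code. A sketch (note that full SRT files also put a numeric index line above each timecode, which the sample above omits; the parser below accepts both):

```python
# Parse SRT-style blocks into (start, end, text) tuples.
# Blocks are separated by blank lines; an optional numeric index line
# before the timecode (as in full SRT files) is skipped if present.

def parse_srt(text: str) -> list[tuple[str, str, str]]:
    entries = []
    for block in text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        if lines and lines[0].strip().isdigit():  # optional index line
            lines = lines[1:]
        start, _, end = lines[0].partition(" --> ")
        entries.append((start.strip(), end.strip(), "\n".join(lines[1:])))
    return entries

sample = """00:00:02,934 --> 00:00:07,302
First line of first subtitle
Second line of first subtitle

00:00:08,247 --> 00:00:13,527
First line of second subtitle
Second line of second subtitle"""

for start, end, text in parse_srt(sample):
    print(start, "->", end)
```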

Sometimes there are only minor differences from one file format to another, yet these differences are essential for compatibility. If the software using a simple subtitle file is equally simple, perhaps only global changes to all subtitles (color, size, font) will be possible; it’s unlikely to allow changes specific to one or a few subtitles, for instance switching to italics when someone in the video is talking off screen. High-end subtitling programs, usually DVD authoring software, use the simplest file types, yet they make it possible to change individual subtitles, sometimes even parts of them (e.g. one single word).

Some translators do spotting while translating; this is possible, and may save some time. I personally prefer to review the translation while spotting the subtitles.

The timespotter doesn’t necessarily have to be a translator, though s/he needs to understand both source and target languages to some extent. In my case, though I only actually translate between English and Portuguese, I spot videos translated in any language pair involving these two plus Italian, French, and Spanish.

When the timespotter is not a translator into the target language, the loss is in their being unable to notice (and therefore correct) any mistakes that may have eluded the translator. Furthermore, s/he won’t be able to replace words with synonyms if the translator exceeded the number of characters allowed per line.

Again according to Sintra, the cost of timespotting should be 30% of the cost of translation, which makes sense to me as long as the same person does both translation and spotting. If the spotting is done by someone else, that person’s workload will depend on the quality of the translation they receive: 30% of a cheap, hence presumably bad, translation may mean doubling or tripling the timespotter’s workload for paltry compensation. That would be unfair.


At this point the process opens into a few options.

Let’s start with the simplest one, when generating/burning subtitles is not necessary.

There are computer programs, and even modern TV sets, that generate subtitles on the fly. Many TV stations, especially on cable, do the same. All that is needed is a compatible video file and a subtitles file with the same file name (with different extensions, of course) for the system to play the film with the subtitles.

The second option is an old acquaintance of ours from the days of VHS: subtitles burnt onto the video. The only way to watch the video without subtitles then is to cover the lower part of the screen with a board or duct tape. This operation is performed by software, each program usually working on one file type. The process is automatic, but it may take a (long) time, depending on the frame size and the video’s running time. It may also involve converting video files. Spelling it out: I burn subtitles onto AVI files. If, for instance, I receive a WMV file and the desired result is an FLV file, I’ll first need to convert from WMV to AVI (where I’ll burn the subtitles), and later from AVI to FLV. Some care must be taken not to compromise video quality in the conversions.
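For reference, modern command-line tools can collapse this convert-and-burn chain into a single pass. A sketch with ffmpeg (not the tool described above; the file names are placeholders, and the `subtitles` filter requires an ffmpeg build compiled with libass):

```shell
# Hard-burn subs.srt into the picture while converting the container
# and codecs in one pass. input.wmv, subs.srt, and output.flv are
# placeholder names; pick a bitrate/quality that avoids degrading
# the video in the conversion.
ffmpeg -i input.wmv -vf "subtitles=subs.srt" output.flv
```

Doing it in one pass also sidesteps the quality loss of an intermediate re-encode.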

And the third option, exclusive to DVD, is subtitles overlaid on the video. One DVD may hold up to 30 subtitle sets, viz. different languages, plus the option to turn them off. This process takes longer and is more expensive per minute and per language than burning the subtitles (in one single language, of course) directly onto the video, but it allows more flexibility. To turn subtitles on or off when there is only one set, or to cycle through all those available on a DVD during play, there are two ways: one is the SUBTITLE key on the player’s remote control; the other requires DVD authoring, adding an initial menu for selecting among the available languages before starting the show.


This article wouldn’t be complete if
Closed Captions were left out. Though they look alike, they are not subtitles.

First, they are in the same language spoken in the video. Their objective is to allow whoever cannot hear the audio to understand what is said. Hearing-impaired people are one case, but captions are also used where surrounding noise prevents hearing (for instance, busy waiting areas), or where silence is required (for instance, libraries or hospital waiting rooms).

Secondly, they include sound effects described between brackets, e.g.
[phone rings], [door slams], [dog barks], [music].

Yet what finally determines the difference is that closed captions are not added directly to the image, but encoded in the video signal instead. Decoding is performed either by the television set or by an external decoder.
Due to low demand, I don’t offer this service as such. Clients who requested it from me accepted having them made as plain layered subtitles on DVD, which offers them the same result with more flexibility.


I hope that by now you have a general grasp of the entire subtitling process.

Video producers usually request only the translation for subtitles, sometimes with spotting too. Companies, organizations, and people whose core business is not video production prefer the complete job. Either way, I am prepared to handle any step of the process.

If you need only the translation for subtitling, click here for express service.

If you want a video subtitled, please click here.

Finally if you need other video services, please click here for express service.
