Avid Media Composer

Although the technology used for editing "video" is essentially the same, low-budget production companies often do not take the same care with "video" that they would with "film." Location production may be rushed and understaffed, with less attention paid to sound and lighting. The difference lies not in the technology (many films are actually shot in high-resolution video, as you know) but in the attitude of the production company.

To screen dailies in film, the picture and sound first had to be transferred and synced up before they were ready to project onto the big screen. To screen dailies in video, one only has to play back the video file.

Therefore, the screening of video dailies is sometimes not the empowering ritual that it is in sprocketed film. Very often, producers will ask for an immediate playback of the video just after it has been shot--right there on the set. Unfortunately, this means that the product is being viewed on a less-than-ideal screen size, under questionable lighting conditions, and through field-quality audio speakers.

The next step after production is the first picture edit.

Film is transferred to video on a telecine, along with matching keycode or SMPTE time code numbers. Sometimes the location audio is transferred in sync with the video; other times, only the picture is transferred to video, and the audio is added later in the editing system. Note that when film is converted to video, there may be a slight speed change (a 0.1% slowdown) to accommodate the difference between 24 fps (film) and 29.97 fps (video). The audio must be converted (pulled down) by the same percentage in order to remain in sync.
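To make the arithmetic concrete, here is a small sketch (in Python, purely for illustration) of the 0.1% pulldown described above. The exact ratio behind the NTSC-related rates is 1000/1001, which is where the "roughly 0.1%" figure comes from; the 48 kHz audio rate is simply a common example.

    # An illustration of the 0.1% "pulldown" described above.
    # The NTSC-related rates are defined by the ratio 1000/1001,
    # which works out to roughly a 0.1% slowdown.

    PULLDOWN = 1000 / 1001            # ~0.999001, i.e. about -0.1%

    video_fps   = 30 * PULLDOWN       # 29.97 fps NTSC video
    slowed_film = 24 * PULLDOWN       # 23.976 fps after telecine transfer
    audio_rate  = 48000
    pulled_down = audio_rate * PULLDOWN   # ~47952 Hz keeps the audio in sync

    print(f"Video runs at {video_fps:.3f} fps")
    print(f"Film effectively runs at {slowed_film:.3f} fps after transfer")
    print(f"48 kHz audio must be slowed to ~{pulled_down:.0f} Hz to stay in sync")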

If you shoot in video, then there are no speed changes to worry about.

Editing can be "on-line" or "off-line." Film and broadcast video are usually edited "off-line," because the editing system is only creating a blueprint (an EDL, or edit decision list) using small-format (or non-HD) video; that blueprint is later used as a guide for conforming the film negative or for re-editing (rendering) the original videotape on an extremely high-quality system.
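For readers who have never seen one, here is a hedged sketch of what a single event in a CMX 3600-style EDL might look like, and how it could be read back programmatically. The reel name and timecodes are invented for illustration; real EDLs vary from system to system.

    # A minimal sketch of one event in a CMX 3600-style EDL and how it might
    # be read back. The reel name and timecodes are invented.

    edl_line = "001  TAPE01   V     C        01:00:10:00 01:00:15:00 00:00:00:00 00:00:05:00"

    fields = edl_line.split()
    event = {
        "event":      fields[0],   # event number
        "reel":       fields[1],   # source reel / tape name
        "track":      fields[2],   # V = video, A/A2 = audio channels
        "transition": fields[3],   # C = cut, D = dissolve, W = wipe
        "source_in":  fields[4],   # where the shot starts on the source reel
        "source_out": fields[5],
        "record_in":  fields[6],   # where it lands on the edited master
        "record_out": fields[7],
    }
    print(event)

Each line simply records where a shot comes from on the source reel and where it lands on the edited master, which is all the on-line system needs in order to rebuild the cut.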

On-line editing refers to cutting a project on an editing system with the intent of going directly to your mastering or release format (such as DVD) from the output of that same editing system. This is common practice in low-budget video production--weddings and events, corporate, documentary, and many other projects not intended for the big screen or major network broadcast.

With the increased use of high-resolution video formats for acquisition, we are seeing a return to the old practice of off-line and on-line editing, although today's editors prefer more glamorous terms than their predecessors did. Off-line is now known as proxy editing, which uses lower-resolution or more highly compressed video files. After the show has been edited, the next step is "finishing," in which the original high-resolution, uncompressed footage is used to replace the proxy material. At the same time, the electronic graphics and special effects are remastered at higher quality.
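As a rough illustration of the relink that happens during finishing, the sketch below assumes the proxy files share a base clip name with the camera originals and differ only by folder and a "_proxy" suffix; both conventions are hypothetical, since every system names its media differently.

    # A rough sketch of the "finishing" relink step, assuming proxies share a
    # base clip name with the camera originals. The folder names and the
    # "_proxy" suffix are hypothetical conventions for this example.

    from pathlib import Path

    PROXY_DIR    = Path("proxies")     # low-res, highly compressed editing copies
    ORIGINAL_DIR = Path("camera_raw")  # full-resolution camera originals

    def relink(proxy_path):
        """Find the camera original that corresponds to a given proxy clip."""
        base = proxy_path.stem.removesuffix("_proxy")
        matches = list(ORIGINAL_DIR.glob(base + ".*"))
        return matches[0] if matches else None

    for proxy in sorted(PROXY_DIR.glob("*_proxy.*")):
        original = relink(proxy)
        if original:
            print(f"{proxy.name}  ->  {original.name}")
        else:
            print(f"{proxy.name}  ->  NO MATCH, clip stays flagged as offline")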

Due to the complex electronic nature of video editing--and the scientifically unproved yet widely recognized influence of supernatural occurrences on computerized edit controllers--there are bound to be at least a few edits that differ in length or visual effect from the off-line version. So it is not completely automated; the human influence is still very much a part of the workflow.

After the on-line version is complete, video editors begin work on the soundtrack. The process of sound editing and mixdown is known as "sweetening."

The first step in sweetening is to transfer the edited version of the production track onto a multi-track Digital Audio Workstation (non-linear audio editing system). Matching time code is used to maintain exact frame sync between the audio and the video files. This entire phase is called "laydown."
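The frame sync that "matching time code" provides is, underneath, simple arithmetic. The sketch below converts a non-drop-frame SMPTE timecode into an absolute frame count (drop-frame and other rates add wrinkles not shown here).

    # The frame-sync arithmetic behind "matching time code": converting a
    # non-drop-frame SMPTE timecode into an absolute frame count so audio and
    # picture events can be compared frame for frame.

    def timecode_to_frames(tc, fps=24):
        """'HH:MM:SS:FF' -> absolute frame number (non-drop-frame)."""
        hh, mm, ss, ff = (int(part) for part in tc.split(":"))
        return ((hh * 3600 + mm * 60 + ss) * fps) + ff

    # Two events stamped with the same timecode land on the same frame.
    picture_start = timecode_to_frames("01:00:00:00")
    audio_start   = timecode_to_frames("01:00:00:00")
    assert picture_start == audio_start

    # One frame later at 24 fps:
    print(timecode_to_frames("01:00:00:01") - picture_start)   # 1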

Note that a common workflow on today's major sets is to deploy two digital recorders during principal production. The location sound mixer prepares a basic, two-track "guide track" that is sent to one of the recorders (and perhaps simultaneously to the camcorder), while a multi-track or iso'd (isolated) version, consisting of four or more tracks, is recorded on the second audio recorder. Both recorders share common time code with the video.

The two-track version of the soundtrack is used for on-set playback, dailies, and the picture cut. When the edited picture cut is sent down to the sound editors, they work from the multi-track version, which allows them more creative control over the soundtrack.

Sound effects, narration, and music are transferred over from the other audio sources onto the multiple tracks with frame accuracy.

If needed, A.D.R. and Foley can also be recorded using special software programs (or external recorders equipped with time code).

After all of the individual tracks have been built, checkerboard fashion, onto the virtual multi-track, the editor begins the task of final mixdown. Depending on the budget of the show and the number of tracks involved, the mixer may create a DM&E (separate dialogue, music, and effects stems). Otherwise, the tracks will simply be mixed down to a single monaural, stereo, or multi-track surround release format.
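As a toy illustration of that mixdown stage, the sketch below groups a few checkerboarded tracks into dialogue, music, and effects stems and then sums them into a single mix. The sample values and gains are made up purely to show the structure of the operation.

    # A toy illustration of mixdown: checkerboarded tracks are grouped into
    # stems (DM&E = dialogue, music, effects), each stem is summed with its
    # own gain, and the stems can then be combined into one full mix.
    # The sample values and gains below are invented for illustration.

    tracks = {
        "dialogue": [[0.2, 0.3, 0.1], [0.0, 0.1, 0.2]],   # checkerboarded dialogue tracks
        "music":    [[0.05, 0.05, 0.05]],
        "effects":  [[0.1, 0.0, 0.3]],
    }
    stem_gain = {"dialogue": 1.0, "music": 0.7, "effects": 0.8}

    def sum_tracks(track_list):
        """Sample-by-sample sum of parallel tracks."""
        return [sum(samples) for samples in zip(*track_list)]

    # Build the three stems separately -- this is what allows a DM&E delivery.
    stems = {name: [s * stem_gain[name] for s in sum_tracks(tl)]
             for name, tl in tracks.items()}

    # Or collapse everything into a single full mix.
    full_mix = sum_tracks(list(stems.values()))
    print(stems)
    print(full_mix)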

The final process is to transfer the mixed soundtrack back onto the finished videotape from whence it came. This is called the "layback."

On a major film, the individual audio tracks would be transferred to a removable hard drive and then taken to a post-production sound facility for "re-recording" (the final mixdown). In a large theatre environment, the picture would be projected onto the big screen, and the tracks would be mixed for the best possible presentation.

Obviously, this is a simplification of what is an elaborate process. Depending on the formats involved and the file-sharing capabilities of the various post-production departments, the actual workflow will differ. Moreover, with software and hardware constantly evolving, the hands-on process is in a state of never-ending change.