PROFESSIONAL VIDEO EDITING GUIDE
“Professional video editing is about both the technique and the art of editing. Editing theory is what this guide is about.”
TABLE OF CONTENTS
PART 1 – INTRODUCTION TO EDITING
PART 2 – EDITING STEPS
PART 3 – EDITING BASICS
PART 4 – MAKING CUTS
PART 5 – CHOOSING SHOTS
PART 6 – IMAGE CONTROL
PART 7 – EDITING AUDIO
PART 8 – FILE MANAGEMENT
PART 1 – INTRODUCTION TO EDITING
After shooting, the editor is left with a collection of media clips on the hard drive. An editing process is then required to trim or discard the poor clips and to select those with the best technical quality and performances. The editor must then organize and arrange the best and most appropriate shots into logical sequences. The editing process takes in numerous input clips and produces a single output, often in story form. The output becomes a piece that communicates ideas effectively and showcases the best work of the production team and cast.
Much of editing education refers to the technical aspects of editing. Editing requires technical skills, such as an understanding of the editing program’s features: masks, colour correction, cuts, transitions, audio effects, video effects, and compression. However, knowing the technical aspects of editing is insufficient to produce truly great work. One needs to be more than a technician; one needs to be an artist as well. Therefore, a professional editor needs to understand editing concepts and theory, to understand both the technique and the art. Editing theory is what this guide is about.
PART 2 – EDITING STEPS
Editing isn’t simply about working with clips in the editing program timeline. Editing encompasses a series of steps that must be performed in order to bring the project to completion. The steps are:
Acquire Footage – Get the audio and video clips and import them into the editing program.
Organize Footage – The visual and audio elements need to be organized into bins and folders.
Identify Best Shots – After viewing all of the shots, the best shots are identified.
Rough Cut – A preliminary version of the edited sequence. Some shots are placeholders.
Fine Cut – The final version of the edited sequence, tweaked and finalized.
Finishing – The step where image enhancement and colour work take place.
Mastering – Rendering the sequence to a high quality media file, from which copies can be made.
Distribution – Outputting the finished sequence to Blu-ray Disc or to the web.
PART 3 – EDITING BASICS
Depending on the complexity of the video project, the act of editing will vary in difficulty. With fewer clips to edit, assembling a sequence is easier; if the production team provides the editor with many clips, editing a workable sequence becomes more difficult. Editing difficulty, then, is partly a function of the quantity of source clips. When an editor assembles a sequence, he or she must compare and contrast the clips in order to select the best and most fitting piece. With a large library of clips, the number of comparisons that must be made increases dramatically.
Every clip has both pros and cons, and choosing among many different clips can be challenging. Different combinations of clips will produce different psychological effects on the viewer. The difficulty an editor faces lies both in choosing the best clips and in choosing the optimal sequences. Often, these decisions come from artistic and theoretical knowledge rather than from technical knowledge. That is why the editor is more an artist than a technician. Learning just the features and functionality of editing programs is insufficient. To become a professional, the editor must practice theory-driven shot selection, shot arrangement and fine-tuning.
When it comes to the type of transitions to use, the simple transitions turn out to look the most professional. A fade from black can be used at the beginning, and a fade to black can be used at the end. A black fade can also be used to create the feeling of time passing by within a sequence. A fade to white can also be used, and this often represents the passage of time. Lastly, the crossfade is a little less professional than a simple cut, but it can still be used within a professional sequence. In a crossfade, the current clip fades out and blends over the next clip as the new clip fades in.
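The crossfade described above can be pictured numerically. The sketch below is a minimal illustration assuming each frame is just a list of pixel brightness values; the names `crossfade`, `frame_a`, and `frame_b` are illustrative, not drawn from any editing program’s API:

```python
def crossfade(frame_a, frame_b, t):
    """Blend two frames: t=0.0 shows only frame_a, t=1.0 only frame_b."""
    return [(1.0 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

bright_clip = [200, 200, 200]  # outgoing clip (bright pixels)
dark_clip = [0, 0, 0]          # incoming clip (dark pixels)

# Midway through the transition, each pixel is an equal mix of both clips.
print(crossfade(bright_clip, dark_clip, 0.5))  # → [100.0, 100.0, 100.0]
```

As `t` ramps from 0 to 1 over the duration of the transition, the current clip fades out while the next clip fades in, exactly as described above.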
Editing involves knowing when to make cuts, when to move cuts, or when to shorten or lengthen clips that have been cut. Editing is about fine-tuning the sequence through subtle cut modifications. The cuts must be frame specific and done with precision.
TRANSITION TO A NEW SCENE
The production team likes to film cinematic movement and the walking of characters on the sets. There are often many takes and many angles filmed of the critical transitional movements between scenes. Just because the shots are available doesn’t mean they all need to be used in a sequence. Showing too much sitting, standing, walking or driving is likely to slow the pacing of the sequence enough to turn the presentation into a boring piece of media. To show a change in location from one place to another, the editor needs to pick the best shot of the character leaving the scene, then a shot or two of the new geographical location, and then a shot of the character entering the new location. The audience will assume that the character has changed locations without actually being shown the travel. The selection of transitional movement should be done with restraint and good judgment.
THE ROUGH CUT
Editing a sequence is best done through a multiple-pass system. Instead of making final edits as you go, you leave some decisions for later, with the idea that you will come back to tweak that portion of the sequence. A circular process, which first lays down a rough cut, provides the advantage that on subsequent passes new information will be available. Information such as what each edit feeds into is of critical importance. Final editing decisions can only be made after the sequence has been viewed a few times and after numerous tweaks and revisions have been made.
The world established on camera, the physical space represented, must be constructed in a logical way so as to conform to real-world directions: up, down, left, right, forward, and back. If a clip shows a character travelling left and out of frame, the next clip must show that character entering from frame right. There must be a consistent and continuous representation of space as the sequence progresses.
When shooting a scene, the film crew usually starts off with a wide shot of the talent, so as to have a baseline to cut to in the event that no other shots were taken for a particular section. The talent face each other, and they are filmed from the side. Their eye lines align along an imaginary axis or imaginary line, which needs to be acknowledged and visualized by the crew. The crew can and should film from only one side of this axis, the side from which they set up their wide shot. This idea of filming from only one side of the axis is referred to as the 180-degree rule. If the crew were to film from the other side of the axis, and some crews do take shots from the other side, the footage could be edited in a way that fails to maintain screen direction. The footage would not cut together well. What is screen left and what is screen right would flip directions and confuse the audience. If the editor receives footage that crosses the 180-degree axis, then he or she must be careful to edit the sequence from shots taken from only one side of the axis.
Besides the “180-degree rule”, there is also the “30-degree rule”. Whereas the 180-degree rule is “inclusive” in that it states that one should film everything from one side of the imaginary axis, the 30-degree rule is “exclusive” in that it recommends not showing two sequential shots filmed within 30 degrees of one another. Simply put, when cutting from one shot to another, the two shots should be spaced at least 30 degrees apart. Otherwise, the viewer will notice a “jump cut” or glitch effect at the cut point. After taking a shot, the crew should move the camera at least 30 degrees to set up the next one. If they do not, the two shots will look too similar in the mind of the viewer, and this will create an observable jump. If the sequence does require two similar shots filmed within 30 degrees of one another, one possible way to mask the potential jump effect is to add a shot between the two similar takes, if it makes sense to do so.
PART 4 – MAKING CUTS
The cut is the most widely used and most professional of the transition types. This is because it reinforces the ‘invisible editing’ philosophy. Editing with simple cuts leads to a sequence where the editing itself is not apparent to the viewer. To support invisible editing, the cut must be made at the right time, and to a shot that brings new visual material or new information. Consider what would happen if two shots presenting similar visual information were edited together in a sequence. In the mind of the viewer, the shots would appear to jump, and a visual discontinuity or glitch would be apparent. But by cutting to a shot with different visual information, the viewer’s mind is forced to search the frame for new details, and this mental engagement reduces the jump effect. Making cuts to shots with new information is not just a glitch avoidance mechanism, far from it. The purpose of every new shot is to add information to the sequence that was not previously there. Every shot must present new information. The editor must figure out whether all the necessary information has been shown, what information needs to be shown next, and what motivating element needs to be focused on.
JUMP CUTS AND MOTION
The production crew will provide footage of shots that are in motion and shots that are static. Sometimes the motion shots will begin and end on a static frame, and sometimes they will not. When cutting from a static shot to a motion shot or vice versa, the potential exists for a visual crash, or jump effect, to occur. This will not always be the case, but it is important to consider the effect that may occur. The production team should provide pans, tilts, or dolly shots that are in motion but that begin and end on a static frame. This ensures that the editor has the greatest amount of choice and the ability to remove any jump effect that may occur. In addition, the motion of a shot has a particular direction. If the shot is panning towards the left, it is motivated towards the left. If the editor then cuts to another moving shot, but one that is panning right, the motivation is not continued. This results in a discontinuity or a visual crash. The cut point must maintain the invisible editing effect; therefore, it shouldn’t be placed between opposing motions.
When editing dialogue, it can be tempting to edit the sequence to show the characters only when they speak, and not when they react to speech. Editing according to dialogue, or providing screen time to the current person that is speaking is normal. However, if the scene has a lot of dialogue, it can become predictable and boring. From time to time, it can be a good practice to surprise the audience by showing them the character from the listener’s perspective. Showing the audience the character’s reaction as they listen to the dialogue can break the flow of the editing and make it more dynamic, and it can also add new depth to the scene.
PAUSES AND OTHER SOUNDS
In editing sequences of video, it becomes apparent that actors will occasionally pause between sentences or insert a few “ums” and “ahs”. There is no general editing rule that must apply, but there are two guidelines to take note of. Generally, in documentary editing, the goal is to make the presenter sound as legitimate and professional as possible. This means the editor’s goal is to polish the speech. This can be done by removing pauses, or by removing unprofessional sounds such as “ums” and “ahs”. These sounds could be indicative of poor public speaking skills or of having a tough time remembering the subject matter. However, in other types of editing, such as film editing, what to edit out is not immediately apparent. Film actors practice for many hours. Many times, their pauses and vocalizations are well rehearsed, calculated and presented for emotional effect. In this instance, they may need to be left in to preserve the intended emotional effect.
PART 5 – CHOOSING SHOTS
The editor’s main responsibility is to select the best shots. But what does “the best” refer to? Is it the best performance? Is it the shot with the best technical quality for audio and video? Is it the shot which highlights and enhances the conflict in the story in the subtlest ways? Is it the shot which causes the intended psychological effect on the viewer? Is it the shot that just happens to fit the editing sequence in terms of composition? Is it the shot that has the appropriate meaning for the story? Or is it a combination of all these factors? Usually, editors will weigh the pros and cons of the different priority areas and pick the shot that satisfies most of the criteria. They may choose the shot that offers the best performance, while highlighting the conflict and motivation, and conforming to the meaning of the story and the relevant quality standards. To come to a decision, many editors will have to rely on their feelings and instincts. At other times, they will not know the answer and may choose to guess or to pick a shot at random to fill in the sequence. Great editors understand the meaning of the shots and use their refined instinct to convey the right message to the viewer. Great editing is about understanding the factors of shot selection.
Wide shots are used at the beginning of new scenes by most productions. They are very useful in this role because they establish several key things. Using wide shots, the filmmakers are able to show, with a single shot: the environment, the lighting (mood), and the characters and their positions relative to each other and to objects. If there is movement within the scene, that may provide the opportunity to cut back out to a wide shot, so as to update the audience with a new “mental map” of where everything is in the scene relative to everything else.
The medium shot is the most used shot. It is more selective than the wide shot. It allows the audience to focus on the actions and dialogue of one individual, while cutting out what is not relevant for that moment in time. While it has the benefit of being close enough to the actor’s face and upper body that emotions and body language can be seen, it is also far enough away that little bits of the background action still connect the viewer with the environment.
Close-up shots are very intense shots. From the cinematic perspective, they look great. From the directing perspective, they are very intimate shots which convey a lot of emotion. The key to close-up shots is timing, or knowing when to use them. They can’t be used too early in the scene’s development or they will not have the necessary effect when they are needed later on. The best time to begin using them is when the drama of the scene intensifies and the scene approaches its climax.
When editing a dialogue sequence between actors, the type of shot used on one actor should usually be used on the other. For example: in a two-person dialogue scene, if a medium shot is used on one actor, then a medium shot should be used on the actor to whom they are speaking. By shooting similar shots from both sides, the shots tend to adopt similar properties, such as focal length, frame composition, lighting and distance. These similar properties make the edit flow seamlessly, as a connected and continuous match. The audience expects to see similar shots cut together, as opposed to dissimilar shots.
2ND VISUAL REFERENCE
For most edits, the editor will be able to put together a sequence which flows visually and is seamless. However, some shots will appear to “jump” and not seem as seamless as the rest. This can happen when more than one prominent object is in focus. The characters on screen are reference points. The audience will pay attention to one character, or reference point, and then to the next character or reference point as it is presented. However, when there is another prominent object in the scene, it can become a second reference point, and the mind begins to track both. Now imagine that within a shot there is a second object: say a large flower is screen left and the actor is screen right. As you cut to the other actor, perhaps the large flower is now screen right and the new actor is screen left. Under these conditions, the flower has “changed positions” within the frame, so it has appeared to jump. In this circumstance, cutting to a closer shot without the flower may be more appropriate.
THREE-CHARACTER EDITING
Some of the scenes provided to the editor by the production crew will be of three characters having a conversation. In this case, the editor cannot simply cut from a two-shot of two of the characters speaking to a matching two-shot of another pair. When cutting between two-shots of three characters, one of the characters appears to “jump” positions on screen. Imagine filming a two-shot with character 1 screen left and character 2 screen right. The editor then cuts to another two-shot with character 2 now screen left and character 3 screen right. In this scenario, character 2 has jumped from screen right to screen left. The editor needs to watch for this scenario, and perhaps cut from the two-shot of characters 1 and 2 to a single shot of character 3. If this solution doesn’t work, the editor could instead cut from the two-shot of characters 1 and 2 to a wide shot of all three.
PART 6 – IMAGE CONTROL
Enhancing video clips by correcting colour or adding visual effects can be a lot of fun and a very satisfying process. However, editors who do this early on are less efficient than editors who leave the work until after the edit is complete. It is inefficient to do image enhancement on all the footage that has been shot, as opposed to only the footage that will actually be used. It is best to leave colour work to the end of the post-production process.
IMAGE CONTROL WITH SCOPES
Human perception of colour can vary depending on lighting conditions. Editors cannot fully rely on their eyes to consistently and reliably set the appropriate colour levels in their work. Editors use scopes, which offer a graphical representation of the necessary colour information. Modern editing programs have these scopes built into their feature set.
The waveform monitor allows the editor to check, and if necessary modify, the footage so that it is broadcast safe. On the monitor, a value further up represents a brighter image and a value further down represents a darker image. A value of 0 represents black, and a value of 100 represents white. It is important to ensure that the image does not go below 0 or above 100.
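As a rough numerical sketch of what “broadcast safe” means, the hypothetical helper below clamps luma samples into the 0–100 scale described above. The function name and the list-of-samples representation are illustrative assumptions; real broadcast-safe filters roll off levels more gently than a hard clamp:

```python
def clamp_broadcast_safe(luma_samples, lo=0.0, hi=100.0):
    """Pull any out-of-range luma samples back into the legal 0-100 scale."""
    return [min(max(v, lo), hi) for v in luma_samples]

samples = [-5.0, 42.0, 103.5]  # contains superblack and superwhite values
print(clamp_broadcast_safe(samples))  # → [0.0, 42.0, 100.0]
```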
The vectorscope allows the editor to check the balance of colour, or colour cast, within an image. It shows the saturation of a hue for a given clip. On the outer boundary of the vectorscope there are six colours: red, blue, green, yellow, cyan and magenta. The distance from the centre to the edge represents the level of saturation.
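One hedged way to picture “distance from centre” is HSV-style saturation, where a neutral grey sits at the centre (zero) and a fully saturated hue sits at the edge (one). A real vectorscope plots chroma components rather than HSV saturation, so treat this only as an analogy:

```python
def hsv_saturation(r, g, b):
    """HSV-style saturation: 0.0 for grey (scope centre), 1.0 for a pure hue."""
    mx = max(r, g, b)
    if mx == 0:
        return 0.0
    return (mx - min(r, g, b)) / mx

print(hsv_saturation(1.0, 0.0, 0.0))  # pure red, at the scope's edge → 1.0
print(hsv_saturation(0.5, 0.5, 0.5))  # neutral grey, at the centre → 0.0
```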
The histogram shows where in the range particular lightness or colour values occur. The histogram has a series of spikes. If there are more spikes on the left side of the histogram, the image is darker. If there are more spikes in the middle, the image has little contrast. If there are more spikes on the right side, the image is very bright. It is best to have a histogram that is distributed throughout the range and has a hill in the middle. Of course, if the scene is not lit properly and the editor must do colour work to adjust and balance it, there is a risk that the adjustments needed to balance the image will squeeze the histogram values toward either the low end or the upper end of the range. This can destroy the smooth tonal range and instead lead to a harsher digital image with less information.
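A histogram is simply a count of how many pixels fall into each brightness bin. This minimal sketch (the names and the 0–255 pixel range are assumptions, not taken from any particular editing program) shows why a dark image produces spikes on the left:

```python
def luminance_histogram(pixels, bins=4, max_value=255):
    """Count how many pixel values land in each equal-width brightness bin."""
    counts = [0] * bins
    for p in pixels:
        # Integer bin index; the min() guards the top edge of the range.
        index = min(p * bins // (max_value + 1), bins - 1)
        counts[index] += 1
    return counts

dark_image = [10, 20, 30, 40, 200]      # mostly shadow values
print(luminance_histogram(dark_image))  # → [4, 0, 0, 1]: left-heavy spikes
```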
PART 7 – EDITING AUDIO
Even though audio is highly important, new media producers often overlook it. Many times, it is only a secondary consideration. The reality is that audio is more important than video. One can watch bad video with good quality audio, but one cannot watch cinematic footage with incomprehensible audio. Getting the audio recording right is critical to creating a watchable media presentation. This is often out of the editor’s hands, but one would hope that the editor would be provided with media with good quality audio. Without good audio, the final sequence is not watchable, and the project is likely to fail.
On the editing timeline, video is shown on one track. Below the video will be several layers of audio tracks. The reason editors don’t stack video tracks on top of one another is that video is opaque: a track beneath another track will not be visible. Unlike video, audio is layered into multiple tracks, each containing a different type of audio. As part of mixing the audio tracks, editors will specify the volume levels of the individual tracks and of the individual clips within the tracks.
Some examples of audio types on the different audio layers are:
- Ambience Sounds – Ambient sound refers to the sound of the environment or the sound backdrop on location. It is the atmosphere. It is sounds such as cars driving by, pedestrians walking by, the wind rustling and the fan spinning. Even when recording audio within an empty and “noise free” room, there is still sound. The “atmosphere” or “room tone” must be recorded and layered underneath all of the recordings that happen on set, so as to create a consistent feeling across cuts as the sequence progresses. At least a minute of room tone needs to be recorded for every location. The atmosphere or ambient sound track will provide a feeling of reality. Otherwise, pauses in sound will occur in the sequence and the soundtrack will feel as if it were recorded in an artificial studio.
- Dialogue – Dialogue is recorded on set. If it is not recorded properly, then the project will be at a high risk of failing. Fixing recording problems with dialogue is not usually possible.
- Effects Sounds – Effects sounds are sounds that are artificially added in a studio. These could include things such as footsteps, punching, or glass breaking.
In the timeline, there should be several layers of audio. As these layers are stacked on top of one another, the sound levels add up. The resultant level may be too “hot”. The editor needs to monitor the audio track levels by watching the audio meters. If the meters change from green to yellow, the audio clip is nearing peak levels. If the meters change from yellow to red, the sound is exceeding peak levels. The editor must adjust the audio volumes for the individual tracks and monitor the resultant overall mix level throughout the sequence, so as to ensure the audio track does not distort and clip.
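The way stacked tracks sum toward clipping can be sketched with plain numbers. Here samples sit on a linear scale where 1.0 is digital full scale, a simplification of real dBFS metering, and the function name is illustrative:

```python
def mix_peak(tracks):
    """Sum sample-aligned tracks and return the peak absolute level."""
    mixed = [sum(samples) for samples in zip(*tracks)]
    return max(abs(s) for s in mixed)

dialogue = [0.5, -0.25, 0.75]
ambience = [0.25, 0.5, 0.5]

peak = mix_peak([dialogue, ambience])
print(peak)  # → 1.25: above 1.0, so this mix would clip and distort
```

Each track on its own stays under full scale, but their sum does not; lowering individual track volumes before the signals combine is exactly what the mixing stage described above is for.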
PART 8 – FILE MANAGEMENT
The transcoding process allows the editor to convert the highly compressed DSLR capture format into a less compressed intermediate format for editing. One reason to do this is that if the colour work is done in an intermediate format, the resultant image quality will be higher than if it is done in the native capture format. In addition, because intermediate formats are less compressed, they require less processing power to decode, so a lower-performance editing workstation can be used. However, though intermediate formats can be edited on cheaper computers, their disk space and disk bandwidth requirements are greater. From a file management point of view, it is much easier for the editor to use a very high-performance computer and edit the native, highly compressed capture format directly.
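The disk space side of this trade-off is straightforward arithmetic. The bitrates below are hypothetical round numbers for illustration, not measurements of any specific codec:

```python
def storage_gb(bitrate_mbps, minutes):
    """Approximate file size in gigabytes for a constant-bitrate clip."""
    megabits = bitrate_mbps * minutes * 60  # total megabits of media
    return megabits / 8 / 1000              # megabits -> megabytes -> gigabytes

# Hypothetical: 50 Mbps DSLR capture vs. a 147 Mbps intermediate codec.
print(storage_gb(50, 60))   # → 22.5 GB per hour of native footage
print(storage_gb(147, 60))  # roughly three times more space after transcoding
```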
Editors don’t just create work; they archive it for long-term storage and for distribution. A good backup workflow involves using two portable hard drives to back up the contents of the camera’s memory card. The contents of the memory card are also copied to the computer’s editing drive. Finally, the computer’s output sequence is archived to long-term storage such as Blu-ray and two archive hard drives. Yes, media storage costs can add up. It is important to remember that hard drives fail, and discs scratch and corrupt. Multiple copies and redundancies ensure that high quality versions of the project continue to exist long after they are shot.
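The copy-and-verify part of such a backup workflow can be sketched with the Python standard library. The function names and paths are illustrative; checksumming each copy is what catches a silently failing drive:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path):
    """Hash a file so each copy can be verified against the original."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_with_verify(source, destination_dirs):
    """Copy a file onto each backup drive, verifying every copy by checksum."""
    original = sha256_of(source)
    for dest in destination_dirs:
        copy_path = Path(dest) / Path(source).name
        shutil.copy2(source, copy_path)  # preserves file timestamps too
        if sha256_of(copy_path) != original:
            raise IOError("Backup verification failed: %s" % copy_path)
    return original

# Hypothetical usage: back up one captured clip to two archive drives.
# backup_with_verify("card/clip001.mov", ["/mnt/archive_a", "/mnt/archive_b"])
```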