By Ron Seifried and Craig Rosenzweig
Every image tells a story. The subjects, colors, and composition of an image contain a wealth of information, and even stripped of those attributes, a digital image still retains vital information for any video editor. The following article covers the primary terminology required to understand the wide range of resolutions, fields, and frame rates. The learning curve for High Definition can seem steep at first, but it flattens quickly once you learn these basic concepts.
There are several different aspect ratios, but we will cover the two most relevant to video. An image's aspect ratio is its width divided by its height. The 4:3 ratio has been around since the beginning of television in the late 1940s and is also common in still photography. The HDTV standard, however, is 16:9, also known as widescreen. The "letterbox" format, recognizable on standard-def TVs as black bars across the top and bottom of the screen, is the result of fitting a widescreen film onto a 4:3 standard-def display. The opposite happens when playing standard 4:3 video on a 16:9 HD screen: black bars appear on the left and right sides, which is called "pillarboxing." Generally, today's HD cameras capture in the 16:9 aspect ratio.
|The 16:9 image on the left contains roughly 30% more picture than the 4:3 image on the right; on a 4:3 screen, that extra picture is lost off the left and right sides|
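The letterbox and pillarbox geometry described above is simple arithmetic: scale the source to fit the display while preserving its aspect ratio, and the leftover display area becomes the black bars. A minimal sketch in Python (the function name and return format are illustrative, not from any library):

```python
def fit_letterbox(src_ar, disp_w, disp_h):
    """Scale a source aspect ratio to fit inside a display, returning
    the drawn image size and the bar thickness on each side."""
    disp_ar = disp_w / disp_h
    if src_ar > disp_ar:           # wider source -> letterbox (top/bottom bars)
        img_w, img_h = disp_w, round(disp_w / src_ar)
        return (img_w, img_h), ("top/bottom", (disp_h - img_h) // 2)
    else:                          # narrower source -> pillarbox (side bars)
        img_w, img_h = round(disp_h * src_ar), disp_h
        return (img_w, img_h), ("left/right", (disp_w - img_w) // 2)

# 16:9 film on a 640x480 (4:3) SD screen -> letterbox, 60-pixel bars
print(fit_letterbox(16 / 9, 640, 480))   # ((640, 360), ('top/bottom', 60))
# 4:3 video on a 1920x1080 HD screen -> pillarbox, 240-pixel bars
print(fit_letterbox(4 / 3, 1920, 1080))  # ((1440, 1080), ('left/right', 240))
```

The same calculation applies whether the display is a TV, a software player window, or an editing viewer.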
It is important to note that the word "resolution" has more than one meaning. This article covers resolution as it pertains to television and TV specs, where an image's resolution is determined by the number of pixels or scanned lines it contains and how they are arranged. Digital video resolution is expressed as a pixel count: 720 x 480, for example, is 720 columns of pixels wide by 480 rows of pixels high. Analog video resolution is measured in horizontal lines per picture height. When naming a particular video format's resolution, we refer to its vertical pixel count. Therefore, 1920 x 1080 refers to a digital sampling structure of 1920 pixels horizontally (width) by 1080 vertically (height), and is called "1080."
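To see how much picture each named format actually carries, multiply the two dimensions. A quick comparison of the common broadcast frame sizes (the labels in the dictionary are just shorthand for this illustration):

```python
# Pixel counts for common broadcast resolutions (width x height).
formats = {
    "480 (SD)":  (720, 480),
    "720 (HD)":  (1280, 720),
    "1080 (HD)": (1920, 1080),
}
for name, (w, h) in formats.items():
    print(f"{name:10s} {w}x{h} = {w * h:,} pixels")
```

A 1080 frame holds 2,073,600 pixels, six times the 345,600 pixels of a 720x480 SD frame.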
Pixels (picture elements) describe images in a variety of contexts, including computer displays and digital cameras. In the world of standard-def video, pixels are non-square, or rectangular: their width and height are not equal, so images can look up to 10% wider than intended when displayed on a computer or HD monitor. High-def video uses square pixels, eliminating the stretching of video or text that is sometimes inherent in standard-def material. As a reference, DV video uses non-square pixels, while images from a computer are viewed in square pixels. When converting video for YouTube, for instance, it's a good idea to convert the footage to square pixels (and de-interlace it) so it displays correctly on computer screens.
When it comes to the two most common HD resolutions (1920×1080 and 1280×720), capturing and displaying with square pixels is not a problem. HDV, however, records a 1440×1080 frame with non-square pixels; on playback those pixels are stretched horizontally to fill a 1920×1080 display, and without that correction the picture would look skinny. HDCAM also records a 1440×1080 frame but outputs to 1920×1080. In the past, post-production artists had to be especially aware of the elements they were working with; today, most editing and compositing applications make it easy to mix different media formats in the same project.
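The stretch from 1440 stored pixels to a 1920-wide picture is just the stored width multiplied by the pixel aspect ratio (PAR). A one-line sketch, assuming the commonly cited 4:3 PAR for HDV/HDCAM:

```python
def display_width(stored_w, par):
    """Width the image occupies on screen after pixel-aspect correction."""
    return round(stored_w * par)

# HDV and HDCAM store 1440x1080 with 4:3 (non-square) pixels,
# which stretch to the full 1920x1080 raster on playback.
print(display_width(1440, 4 / 3))  # -> 1920
```

Editing applications apply this correction automatically when you tag a clip with the right pixel aspect ratio.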
A technology that has been around since the dawn of television, interlaced video scans all the odd lines of a frame as the first field, and then scans all the even lines as the second field. Weaving these two fields together, or "interlacing" them, produces a complete frame. Beginning at the upper left corner of the display and ending at the bottom right corner, each field contains half the lines required to complete one frame. On standard-def TVs, interlaced video is limited to NTSC's 525 horizontal lines.
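The odd/even split and the weave back into a full frame can be sketched in a few lines. Here a frame is modeled as a simple list of scan lines (the function names are illustrative, not from any library):

```python
def split_fields(frame):
    """Split a frame (list of scan lines) into its two fields:
    rows 0, 2, 4... in one field, rows 1, 3, 5... in the other."""
    return frame[0::2], frame[1::2]

def weave(field1, field2):
    """Interlace two half-height fields back into a full frame."""
    frame = []
    for a, b in zip(field1, field2):
        frame += [a, b]
    return frame

frame = ["line%d" % i for i in range(6)]
f1, f2 = split_fields(frame)
assert weave(f1, f2) == frame   # two half-height fields rebuild the frame
```

Note that in a real camera the two fields are captured at different instants, which is exactly what causes the motion artifacts discussed next.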
One main disadvantage of interlaced video is that because the two fields capture two slightly different moments in time, fast-motion artifacts appear when interlaced video plays on a progressive display monitor. This can be partially corrected through a process called deinterlacing, which converts interlaced video to a progressive format; the full-height progressive frame reduces the jagged edges on fast-moving objects. Although shooting fast movement with either interlaced or progressive capture will produce motion artifacts ("shutter drag"), interlaced capture still handles fast motion well, which suits documentaries and sporting events. The higher interlaced capture rate gives a storyteller freedom to move the camera; it has a temporal advantage over a progressive capture rate.
Progressive video is the method by which all lines (the whole frame) are captured at the same instant, representing a single moment in time. Because each frame is scanned in full sequence, progressive video carries roughly twice the bandwidth of interlaced video at the same line count and capture rate. Even though the number of lines is the same between interlaced and progressive, when the number is followed by a "p" (i.e., 1080p), more detail, with no interlace artifacting, is displayed at any given moment. Because of the higher bandwidth required, component video or HDMI cables are needed to carry a progressive signal. It is also important to note that the progressive HD formats use the 16:9 aspect ratio only.
|The Sony PMW-EX1|
Sony’s latest solid-state camera, the PMW-EX1, is the first reasonably priced camera that supports 1920 x 1080 progressive recording rather than the 1440 x 1080 interlaced recording found in HDV cameras. Built around 1/2-inch CMOS chips rather than the traditional 1/3-inch, it saves video to flash memory cards rather than tape and switches easily between progressive and interlaced scanning at various capture rates.
|Chart illustrating the relative frame dimensions of different video formats in Progressive and Interlace|
480i is full-frame, standard-definition NTSC video. (NTSC is the analog television system used in the United States, Canada, Japan, Mexico, and other countries, originally adopted by the National Television System Committee, for whom it is named.)
480p, a progressive mode scanned at 480 lines, is found primarily in EDTV sets (Enhanced-Definition Television).
576i/p is the PAL television format (used throughout Europe, parts of Africa, the Middle East, the Indian Subcontinent, the Pacific Rim countries, the Pacific Islands as well as Australia, New Zealand, and Tasmania), available in both interlaced and progressive.
720p is a progressive rate found in HD. The 16:9 widescreen format has a horizontal resolution of 1280 pixels (1280×720), and is one of the two most common HD resolutions in use today. The other is 1080i.
1080i is a common interlaced HD format that scans 1080 lines for a frame resolution of 1920×1080. Despite its higher spatial resolution (pixels per frame) than 720p, its interlaced scanning introduces artifacts during fast-moving shots, though it shows more detail in stationary video.
1080p is a progressive rate with 1080 vertically scanned lines (1920×1080). Most commonly found in high-end HDTVs and on the Blu-ray and HD DVD formats, 1080p produces a sharper image than the aforementioned formats, but is currently not available from broadcasters.
2048 (2k): Designed for digital cinema, the 2k format (2048×1080 at 24fps) is primarily scanned from 35mm film for distribution to digital-cinema theaters.
4096 (4k): The highest resolution (4096×2160 at 24fps) in practical use today, 4k has been the hot buzz phrase in the industry, thanks to the availability of the RED One Digital Video Camera.
Ultra High-Definition Video is in development, with a proposed resolution of 7,680 × 4,320.
|The top image roughly demonstrates what takes place with interlaced scanning, with the progressive image below|
What if you want to display an interlaced image on a progressive-scan monitor? This situation will occur when you attempt to edit interlaced video on a computer. If the video is intended to be played back on a computer, you can convert or de-interlace the video using your editing program or capture device. However, if the final video will be played on a standard television via DVD or tape, you need to maintain interlacing while you edit.
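One common deinterlacing approach, often called "bob," rebuilds a full-height progressive frame from a single field by interpolating the missing lines between the real ones. A minimal sketch, with scan lines modeled as lists of pixel values (real deinterlacers work on 2-D pixel arrays and use smarter motion-adaptive interpolation):

```python
def bob_deinterlace(field):
    """Rebuild a full-height frame from one field by averaging
    each pair of adjacent field lines to fill the missing rows."""
    frame = []
    for i, row in enumerate(field):
        frame.append(row)                        # keep the real scan line
        if i + 1 < len(field):                   # interpolate the missing line
            nxt = field[i + 1]
            frame.append([(a + b) / 2 for a, b in zip(row, nxt)])
        else:
            frame.append(row[:])                 # duplicate the last line
    return frame

field = [[100, 100], [200, 200]]                 # two scan lines of one field
print(bob_deinterlace(field))
# [[100, 100], [150.0, 150.0], [200, 200], [200, 200]]
```

This is the trade-off the article describes: the interpolated lines remove combing on motion, but they are estimates rather than captured detail.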
Three current, popular HD camcorders in the Canon HD series, the XH-A1, XH-G1, and the XL-H1, use a very intuitive de-interlacer that extracts a progressive video stream from an interlaced one. The benefit of the Canon camcorders is that the de-interlace feature gives you the best of both worlds: interlaced shooting for fast-motion content like sports and documentary work, and progressive for higher-resolution dramatic content such as character-driven movies. Canon’s Cinema-mode 24F scans the CCD at 48 fields per second (48i) rather than the traditional 60i, then derives 24F from that 48i stream; as we elaborate below, 48 interlaced fields equal 24 full-frame captures, or 24 frames per second. Please read on to learn more about capture rates.
|The Canon XL-H1 3-CCD records Native 16:9 High Definition 1080i|
Frame or Field Rate (Capture Rate)
Capture rate is a term used to describe the number of times per second that a picture is taken, or captured, in an imaging system. In a progressive system, the capture rate is equal to the frame rate, while in an interlaced system the capture rate is double the frame rate because only one field (a half-resolution image) is acquired at each interval; it takes two fields to make a complete interlaced frame. It is standard practice to describe footage by its capture rate and scanning method rather than by frame rate alone, i.e., 60i (60 captures, 30 frames per second), 30P (30 captures, 30 frames per second), and 60P (60 captures, 60 frames per second).
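The relationship between a capture-rate label and its frame rate can be expressed as a small helper. This is a sketch for illustration; the function name is made up, and it assumes labels shaped like the ones above ("60i", "30P"):

```python
def capture_to_fps(label):
    """Convert capture-rate shorthand ('60i', '30P') to frames per second.
    Interlaced rates count fields, and two fields make one frame;
    progressive rates already count whole frames."""
    rate, scan = float(label[:-1]), label[-1].lower()
    if scan == "i":
        return rate / 2
    if scan == "p":
        return rate
    raise ValueError("expected a rate ending in 'i' or 'p'")

print(capture_to_fps("60i"))     # -> 30.0
print(capture_to_fps("59.94i"))  # -> 29.97
print(capture_to_fps("24P"))     # -> 24.0
```

These are exactly the pairings listed in the NTSC capture-rate table below.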
NTSC television has a nominal frame rate of 30 fps, with interlaced video displaying two half-resolution fields per frame and progressive capture equaling one frame per displayed image. Film has a frame rate of 24 fps, and in video, 24p is captured in the progressive mode. Many videographers are buying camcorders with 24p for a more "cinematic" experience; the focus on improving video has always been to narrow the gap between video and film quality, making video more "film-like." With 24-frame HDTV, digital HD now has the basic imaging attributes of a theatrical feature film, bringing video closer to what you would see on a theater screen than ever before.
NTSC Capture Rate | How it is captured
60i    | interlaced  | (60 captures, 30 frames per second)
59.94i | interlaced  | (59.94 captures, 29.97 fps)
60P    | progressive | (60 captures, 60 fps)
59.94P | progressive | (59.94 captures, 59.94 fps)
30P    | progressive | (30 captures, 30 fps)
29.97P | progressive | (29.97 captures, 29.97 fps)
24P    | progressive | (24 captures, 24 fps)
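The fractional NTSC rates (59.94, 29.97) are the nominal rates scaled by 1000/1001, a legacy of NTSC color broadcasting. A quick check of that arithmetic (plain Python, nothing assumed beyond the 1000/1001 ratio):

```python
# Nominal rate x 1000/1001 gives the actual NTSC broadcast rate.
for nominal in (60, 30, 24):
    print(nominal, "->", round(nominal * 1000 / 1001, 2))
# 60 -> 59.94, 30 -> 29.97, 24 -> 23.98
```

The same scaling produces 23.98, the rate behind "24p" footage in NTSC-based workflows.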
It is debatable whether a "film-like" image is better achieved with 24-frame HDTV or 60-field HDTV. Some "progressive" thinkers maintain that 60-field HDTV can compete with film in the tonal and color reproduction, exposure latitude, and picture sharpness of traditional film production. The progressive scan found in 24p increases vertical resolution and reduces fast-motion edge artifacts, making the 24p image sharper; proponents of 60-field capture counter that picture sharpness is not that essential, since filtration is often used to decrease sharpness for certain scenes anyway.
This 60i interlaced format has been the standard capture rate used in TV for years. One of the technical limitations of 24-frame capture, by contrast, is a visual effect called "staccato judder," a visible flaw in 24-frame video that occurs when the camera is panned quickly. The high 60i picture capture rate gives a storyteller freedom to move the camera quickly without worrying about such artifacts.
Although not commonly used in productions today, this progressive format works with the 1080p format.
The 50i format delivers 50 interlaced fields per second, yielding 25 frames. This is the standard for PAL and SECAM (used in France, Russia, and certain western African countries as well as Madagascar) television.
30P is a progressive rate that produces 30 frames per second.
Popular before the 24p video revolution of the last few years, the 25P progressive format produces 25 frames per second and comes from the PAL standard.
Although 1080/60i has high-resolution frames, the 60i field rate displays reduced vertical resolution due to interlacing. 24-frame capture has been around for decades and arguably allows a greater creative range and a more subtle picture portrayal. The 24 fps film-camera capture rate was developed in the 1920s, coinciding with the arrival of film sound: 24 fps was the lowest frame rate that could satisfactorily reproduce audio on the film's optical track. Though technically a product of sound, 24 fps also rendered motion more accurately than the slower rates of the silent era, so it stuck, and today it carries a more "artistic" appeal than 60i. The benefit of shooting 1080/24p is that this format has the resolution, scanning method, frame rate, and aspect ratio closest to motion-picture film, and it is commonly used for converting video into film.
The capture and editing specs you choose for your productions will have an important effect on the final look. This article only scratches the surface of what needs to be understood for capturing and editing video. In the future, we will expand on each technical definition for a more comprehensive understanding of video resolutions, frame rates, and aspect ratios.