Since the earliest filmmakers, there has always been a need for editing – cut out the boring bits and keep the good stuff. With good old-fashioned celluloid film, cutting apart and splicing pieces of film together is a rather intuitive process. Editing machines like the Moviola have been around since the 1930s, with the Steenbeck flatbed editor becoming popular in the 1970s and still used in specialty circles today.
So how did we move to a digital form – the computer-based “non-linear editing” machines that dominate the industry today? To answer that question, we need to dial back the clock of history and look at the early days of the industry that was the impetus for computerized editing: Television.
Live on Air
Our story begins in the era of electronic and mechanical engineers. Technologically speaking, the capability of broadcasting live television signals arrived rather early in the twentieth century. In fact, on November 2, 1936, the BBC began transmitting the world’s first regular public television broadcast service.
But the service had to go off the air during World War II. The war required world economies to switch gears and produce military supplies, which prevented the mass production and adoption of television. It wasn’t until 1948, three years after the war ended, that the first commercial television broadcasts began in the United States – the medium caught on and exploded during the 50s.
People could now watch shows and news broadcasts in their homes… These shows were cut live in a studio with several cameras hooked up to a video switcher that could switch between them. This signal was sent over the air and through cables to affiliates in other parts of the network for broadcast. But everything had to be live, as there was no way to electronically record the television signal. That was fine unless you wanted to delay the broadcast – say, for a faraway part of the country in a different time zone.
To “record” television, the networks turned to a device called a kinescope – which was quite simply a film camera focused on a video monitor. The concept was simple, but the execution was anything but. The process never did produce really good image reproduction – there were a lot of technological hurdles, like ghosting and banding – but the kinescope was an essential tool that started connecting the world together through the medium of television.
In 1951, once CBS and NBC had coast-to-coast networks, they would produce a show in New York at 8PM Eastern time. Kinescopes in Los Angeles – three hours behind – would record the signal coming over the network; the film was rushed into development and then shown at 8PM Pacific time. The process was called Hot Kinescope, because the film didn’t even have a chance to cool from the development process before it was sent to air.
The demand for TV was great, and by 1954 the TV networks were using more raw film stock in their kinescopes than all of the Hollywood film studios combined – spending up to $4,000 per half hour of programming, about $33,000 in today’s money. The networks desperately needed a cheaper alternative.
The Arrival of Tape
Magnetic tape had been used for audio recording for years, but there were significant technological hurdles to actually getting a video image onto tape. In 1951, engineers working for Bing Crosby’s production company – yep, that Bing Crosby – were the first to record video images onto magnetic tape. Unfortunately the results looked terrible, but it was still an image, and it proved it could be done. In 1956, after five years of hard work by brilliant engineers overcoming a myriad of hurdles, Ampex released the first commercially available videotape recorder, using 2-inch Quadruplex videotape.
Sales of the first videotape recorder went through the roof when the company showcased it at the NAB convention in April of 1956 – orders were so strong they were being taken on napkins. CBS was the first to put it to use, in a West Coast delay broadcast of “Douglas Edwards and the News” on November 30th, 1956.
On January 22, 1957, the NBC game show “Truth or Consequences,” produced in Hollywood, became the first television program to be broadcast in all time zones from a pre-recorded videotape.
By 1959, videotape was almost fully accepted by the television industry, and tape played an interesting role in a small Cold War confrontation. That summer, the US Information Agency set up an exhibit in Moscow to show off American progress and technology to the Russians. This included a model American home with a fully decked-out kitchen, and a model TV studio with its own Ampex video recorder. On July 24th, 1959, then-Vice President Richard Nixon invited Soviet Premier Nikita Khrushchev to visit the display. Khrushchev was fascinated by the television studio and joined Nixon for what was essentially a photo op in front of the cameras and the new videotape technology. The two started chatting, and before long it turned into a full-blown debate on the merits of capitalism and communism.
Both leaders agreed that the exchange should be played in full in their own countries – Ampex International president Philip Gundy rushed the tape back to his hotel and wrapped it in a dirty shirt for his flight back to the United States. Before it aired, American newspapers had reported the exchange as so icy it practically started World War Three… but when the American TV networks played it the next day, what viewers actually saw were two leaders doing what politicians do. This Kitchen Debate, as it came to be known, was a milestone for videotape, proving the importance of the medium to world events.
Cutting Tape… Literally
At this point tape was only being used for archival and distribution purposes. It was possible to edit these early 2-inch Quadruplex tapes, and the process was similar to cutting film, but far more cumbersome. First the tape had to be “developed” using extremely fine iron filings suspended in a toxic, carcinogenic carbon tetrachloride solution, making the magnetic bands on the tape visible under a microscope. The tape could then be aligned in a specialized splicer, which had to make the cut exactly during a vertical retrace signal without disturbing the odd/even field ordering. And since the video and audio read heads were several inches apart, a single physical cut could not function correctly for both video and audio – so the cut was made for the video, and a portion of the audio was then re-copied into the correct relationship.
And of course, you had to do all of this without actually seeing what frame you were on, because Quadruplex tape was incapable of displaying still frames.
NBC developed a workaround using the tried-and-true kinescope – not for broadcast, but for creating work prints. Shows were edited using these kinescope film prints, which carried audio cues the editor could match back to when splicing the videotape. Known as the Edit Sync Guide (ESG), it was a process similar to what would later be called offline editing, which essentially means editing with a lower-quality copy of the original raw material, then assembling the high-quality originals based on that edit.
This technique would reach its height with Rowan and Martin’s Laugh-In in 1968, which required 350–400 tape splices and 60 hours of physical splicing to build up just one episode. Laugh-In ended up being the only program to actually use the technique extensively.
There was another technique for editing video coming into play – using two video decks, you could transfer video from one deck to the other and build up a show by assembling a bunch of different cuts one after another, in a linear fashion. This was linear editing. But again, it sounds simpler than it was – for one, how do you make sure the signals between the two video decks matched up? Before linear editing became a popular solution, it would require a few technological advancements.
The first came in 1963, when Ampex introduced the Editec – a videotape recorder with an all-electronic editing controller that could act on in and out points marked by audible tones.
Helical-scan systems, which were coming into wider use, wrapped the tape around a spinning read head. This fit more bandwidth on the tape, which allowed you to pause and see individual frames, making editing much easier.
And in 1967 came SMPTE timecode, developed by EECO and the Society of Motion Picture and Television Engineers. It gave a videotape player a way to locate any frame on the tape, as each frame was assigned an “address” in terms of hours, minutes, seconds, and frames.
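As a rough illustration of how such an address works (assuming a non-drop-frame format at 30 frames per second – real SMPTE timecode also has drop-frame variants that this sketch ignores), converting a timecode to an absolute frame count is simple arithmetic:

```python
def timecode_to_frame(tc: str, fps: int = 30) -> int:
    """Convert an HH:MM:SS:FF timecode 'address' to an absolute frame count.

    Non-drop-frame math only; purely illustrative.
    """
    # Split "HH:MM:SS:FF" into its four numeric fields.
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    # Scale hours -> minutes -> seconds -> frames, then add the frame field.
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

print(timecode_to_frame("01:00:00:00"))  # 108000 (one hour at 30 fps)
print(timecode_to_frame("00:00:01:15"))  # 45
```

That unique per-frame address is what let an edit controller wind a tape deck to an exact cut point, rather than relying on an operator’s eye.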
The practice of timecode-accurate linear editing became commonplace in the 1970s. We went from TV networks spending thousands per half hour on kinescope film and development fees to smaller-market TV affiliates having their own video editing systems to cut their own shows.
But linear editing did nothing to advance the craft creatively. Editing became an almost strictly technical profession – managing large EDLs, edit decision lists that marked the in and out points of the clips to be used. And because a show was assembled in a linear fashion, any change to the beginning of a show meant everything after it had to be reassembled, so there was no such thing as a rough cut.
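A toy sketch makes the problem concrete (the event structure here is a simplified, hypothetical stand-in, not a real EDL format such as CMX 3600): each event is a source in/out pair laid down in order, so the record-side position of every event depends on everything before it – which is exactly why an early change forced a full reassembly:

```python
from dataclasses import dataclass

@dataclass
class Event:
    reel: str     # source tape the clip comes from
    src_in: int   # source in-point, in frames
    src_out: int  # source out-point (exclusive), in frames

def record_in_points(events: list[Event]) -> list[int]:
    """Lay events down back to back; return each event's record-in frame."""
    positions, playhead = [], 0
    for ev in events:
        positions.append(playhead)
        playhead += ev.src_out - ev.src_in  # clip duration advances the tape
    return positions

cut = [Event("A", 0, 120), Event("B", 30, 90), Event("A", 200, 260)]
print(record_in_points(cut))  # [0, 120, 180]

# Lengthen the first clip, and every later record-in point shifts:
cut[0] = Event("A", 0, 150)
print(record_in_points(cut))  # [0, 150, 210]
```

On a computer, recomputing those positions is free; on tape, every shifted event had to be physically re-recorded onto the master.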
There was an alternative starting to emerge – almost a rejection of the strict timecode rules of linear editing, and a return to the freedom of cutting actual film – a system that would eventually be called non-linear editing. Non-linear editing was nondestructive. You could assemble a cut in whatever order you wanted, then go back and make changes without disturbing the rest of the assembly. And there was none of the generation loss of linear editing, which required you to copy from one tape to another. It was a much more natural way of editing.
The first NLE was the CMX 600 in 1971 – a beast of a machine that recorded half-resolution black-and-white video onto washing-machine-sized disk packs and cost a little over $250,000 in 1971 dollars, about $1.2 million in 2013 money. Only six were produced.
But the idea of editing nonlinearly was too good to go to waste. Through the 80s it was really a matter of waiting for computational power and storage capability to catch up. There were a few experiments during this time – notably the EditDroid, which debuted at NAB in 1984 from a George Lucas spin-off company, DroidWorks. This computer pulled footage stored on LaserDiscs, which really didn’t work very well, and the company shut down in 1987. Other machines tried using a bank of VCRs, but they were also slow and cumbersome.
Then in 1988, EMC2 introduced the first all-digital offline non-linear editor, with data stored on optical disks.
This was followed a year later, in 1989, by the public release of the Avid/1, a Macintosh-based non-linear editor. Avid would go on to become the gold standard for editing in Hollywood.
Storage was still an issue, and these machines could only edit short music videos and commercials. But in 1993, engineers added more storage to an Avid system, debuting a 7-terabyte system capable of handling a feature-length film. These films were being cut “offline” – the mirror image of NBC’s ESG method: low-quality digitized tape was used to create a work cut, and the timecode was used to create an EDL, which was given to the film lab to assemble the original film prints.
The first studio film cut on an Avid was Lost in Yonkers in 1993, and just a few years later, in 1996, editor Walter Murch would accept the Oscar for Best Editing for The English Patient, which he cut offline on an Avid.
From clumsy kinescope film to bulky magnetic tape, and finally to the first NLE systems – which proved to be an important creative tool just a few years after their release – the path to modern editing in this period was dominated by some amazing achievements by electronics engineers. But the story is only half finished. As it enters the last part of the twentieth century, computer scientists, programmers, and mathematicians would pioneer the revolution that would join film and television as more or less interchangeable visual mediums – a medium now accessible to nearly everyone. It’s the story of digital, which we’ll pick up in part 2 as we trace the journey to modern editing.