Monday, 31 March 2014

Project DAVID Creates Digital Content Preservation Tech

Digital audiovisual content is everywhere: film, television and online media; personal content from cameras and phones; and content from environmental monitoring, corporate training, surveillance and call recording, for example.

Project DAVID, short for Digital AV Information Damage: prevention and repair (not to be confused with the various "David Projects" run by religious organizations), is a leading European initiative for detecting and repairing media degradation and preventing it through better long-term storage of A/V content. The project is headed by Peter Schallauer of Joanneum Research, an Austrian applied-research institute; its DIGITAL division develops innovative technologies for AV media analysis, indexing, search, quality control and restoration.

For up-to-date information, visit: http://david-preservation.eu/news/.

Most people outside the studio industry - and some on its periphery - have never thought about the preservation of digital media. The common assumption is that preservation only matters for old film (and the like), not for digital content. Not so.


According to the PrestoPRIME 2009 Audiovisual Digital Preservation Status Report, over 8 million hours of digital content were produced in Europe alone during the last decade. The effect of corrupting just a few bits in a frame of digital video can be seen at right.

You can see the report here: http://www.prestocentre.org/library/resources/audiovisual-digital-preservation-status-report-1-2009.

Project DAVID has four main goals when it comes to repairing damaged content and restoring it afterward: understanding the damage, preventing damage, detecting and repairing damage, and improving quality.

The understanding part determines how damage can occur in file-based and tape-based digital video systems, and assesses the impact of that damage on the future usability of the audiovisual content.

The prevention part designs effective risk-management and quality-assurance techniques to be built into preservation systems, so the systems themselves become more robust and resilient. This is also where better preservation techniques are discussed, along with how they can be integrated directly into the devices and systems that create new digital content.

The detecting and repairing part accepts that damage will eventually occur, so procedures must be established to efficiently monitor for and detect it, and techniques employed to repair the content so it can be re-used.
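The monitoring-and-detection step can be made concrete with a checksum-based fixity check. This is not Project DAVID's actual tooling, just a minimal sketch of the idea (all function names are mine): record a cryptographic hash of every file at ingest, then periodically re-hash and flag any file that has vanished or whose bits have changed.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MB chunks so large video masters fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(archive_dir: Path) -> dict[str, str]:
    """Record a checksum for every file at ingest time."""
    return {str(p): sha256_of(p) for p in sorted(archive_dir.rglob("*")) if p.is_file()}

def detect_damage(manifest: dict[str, str]) -> list[str]:
    """Re-hash each file and report any that vanished or whose bits changed."""
    damaged = []
    for name, expected in manifest.items():
        p = Path(name)
        if not p.exists() or sha256_of(p) != expected:
            damaged.append(name)
    return damaged
```

In a real archive the manifest itself must also be stored redundantly, since a corrupted manifest defeats the check.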

The improving part addresses how the technical quality of content can be raised beyond its original form to satisfy the requirements of new uses and delivery networks.

Preservation of audiovisual content presents the same core concerns across applications: content reuse, regulatory compliance and archive monetization. The difficulty is that each industry differs in its worries about content quality, safety, storage, access and budget.

The one common thread among these organizations is exposure to content obsolescence, media degradation, and failures by the very people, processes and systems intended to safeguard that content. Only testing, evaluation and demonstration by researchers and actual users on real-world data can guarantee a high-quality result from the project.

JOANNEUM RESEARCH is a non-profit organization concentrating on applied research with a highly qualified staff of more than 400 people. Services include specifically-geared research tasks for small/medium-sized companies, complex interdisciplinary national and international assignments as well as tailored techno-economic consulting. They participate in setting up and organizing national competence centers as well as in numerous large international projects.

The DIGITAL Institute for Information and Communication Technologies (http://www.joanneum.at/en/digital.html) specializes in web and internet technologies, image, video and acoustic signal processing together with remote sensing, communication and navigation technologies. Direct enquiries can be sent to: david-office@joanneum.at.


Digital AV Consortium


The consortium includes: a broadcaster and a national audiovisual archive, both with research and development experience in digital preservation; two supply-side industrial partners providing media migration, digital restoration and quality analysis; and two research partners with long histories in digital audiovisual content analysis and restoration, risk management, and storage technologies.

Relevant prior projects include iObserve, Area-Mumosis, AudioMine, Visis, Scovis, Outlier, MediaCampaign and DirectInfo. During FP6, JRS coordinated the SALERO, CLINICIP and Aposdle IPs, as well as DirectInfo and MediaCampaign; in FP7, JRS is coordinating FascinatE and TOSCA-MP.

As a participant, JRS was (and is) active in the Integrated Projects 2020 3D Media, PrestoSpace, IP-RACINE, NM2 and PrestoPRIME, in the NoE K-Space and in the SEMEDIA, porTiVity and Polymnia projects.

Detailed information on projects and publications is available from the web site: www.joanneum.at/digital.

360 Degrees of Historical Immersion

IMAX, 35mm or Digital movie theater, broadcast, streaming, iPad, iPhone. What do all these viewing options have in common? They all display product on a screen or frame somewhere in front of the audience. All of our production and post-production decisions are based on that fact. But, what if the screen is in front, to the left, the right and behind the audience? What if the screen completely wraps 360 degrees around the audience?
If the audience can look anywhere, how do we force them to see what we want them to see? Can an audience follow a narrative this way? How do you tell a story visually without a frame? There was a time when I did not know the answers to these questions. That time has passed.
I recently finished Post Production on a 360 degree film for The Civil War Museum in Kenosha, WI. Produced by BPI and entitled "Seeing the Elephant" (a term Civil War soldiers used to describe the experience of battle), the 11-minute show was created to honor all the men from the Midwestern states who fought for the North during the Civil War.
The story follows three men and their experiences in the Union Army - the endless monotony of marching and training and waiting punctuated by the horrors of battle. In "Seeing The Elephant," the 360 degree theater is not simply a novelty; it is another tool to completely immerse the audience in the story and the world. Hopefully, they leave with at least a small idea of what it was like to be in the middle of a Civil War-era battle.


There are other 360° films around, but most of them are more abstract or environmental. They do not really have a linear narrative. We certainly wanted some of the immersive environmental qualities of the 360, but our main goal was to tell a story.
The script (written by John DeLancey) required well over one hundred Civil War reenactors - as well as horses, uniforms, guns, cannons, explosives, a main cast of twelve and a crew of forty-five. The shoot would be completely file-based, so I was brought on location as the Media Manager. Since I would be cutting the show, the thinking went, I might as well be the one gathering and organizing all the footage.
The lone location for the 5-day shoot was Old World Wisconsin (oldworldwisconsin.wisconsinhistory.org), a living-history museum in Eagle, WI, that completely re-creates the farmsteads and settlements of late-1800s America. This one site gave the show verisimilitude. It had the perfect houses, buildings and roads, plus a church (in fact, the oldest Catholic church in Wisconsin) and a large, open field for the climactic battle. The museum's employees also appeared as extras in the film.

Old World Wisconsin completely re-creates the farmsteads and settlements of late-1800s America - perfect for the project. Since there is no "behind the camera," the director and crew would have to crouch below the line of sight.

We had six cameras for the shoot. The main camera was Sony's F55 and the B camera was Sony's NEX-FS700. The F55 was chosen as the A camera due to its 4K capabilities; we would need all that resolution in Post. We also had a Canon 5D and 7D and two Go-Pros.

In addition, to take full advantage of the 360° screen, we rented a 360° camera rig from Paradise FX in LA. (www.paradisefx.com) An entire article could be written on shooting with this rig alone, but for now all you need to know is that the rig consists of nine Silicon Imaging 2K cameras and nine 13.7mm Tokina lenses set up on a platter the size of a large pizza.



The 360° camera rig from Paradise FX


Each lens points straight up into a mirror, like a periscope, which allows for completely seamless 360° shots. (I'll explain more about how that works in a bit.) An umbilical cord connects the cameras to a cart that houses 9 small monitors and 9 Mac laptops - one for each camera.


Above: The 360° camera rig. Below: The cart houses 9 small monitors and 9 Mac laptops - one for each camera.

Post-wise, the biggest hurdle when shooting with the 360° rig was where to hide the crew and the cart. The camera is pointed in every direction, so there is no "behind-the-camera." It was funny to see the camera crew and Director crouched down underneath the camera rig, and the various other crew members hiding behind trees or bushes. Some were better at hiding themselves than others. My rotoscoping and cloning skills got quite a work-out painting out the tops of heads, knees and elbows, or sometimes entire bodies. Luckily, all the 360° shots were locked down.

By the end of the shoot I had about 5 terabytes of data spread over 11 hard drives (each camera on the 360° rig got its own hard drive). Now I just had to cut it together into a movie.

The Post-Production challenges began immediately. The first hurdles were creative, story-telling exigencies created by the technology. The seamless 360° effect would be achieved by using 8 digital projectors to project 8 individual 1920x1080 MPEG-2 files on the screen. (For the remainder of the article, I will refer to these projections as "screens," but remember that the final result is a seamless projection with no hard edges or frame lines.)



Eight digital projectors would be used to project eight individual 1920x1080 MPEG-2 files on the 360-degree screen.



I did all the editing using an Avid DS system. (Rest in peace DS.) I have been cutting on the DS for several years now and its ability to do so many different tasks, both offline and online, without leaving the box is unequaled. But, the DS couldn't natively play the F55 HD material. I had to use Media Composer as a middleman to get that footage into the DS. Media Composer was able to transcode Sony's XAVC files to DNxHD. The footage from all the other cameras came in perfectly.

Setting up multi-screen shows is a breeze on the DS. To represent the eight screens, I created an 8-layer composite with five screens arranged in a semicircle on the upper part of the frame and three more arranged horizontally in a row on the lower half. This way I could see how all the imagery worked together while I cut.
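The arithmetic behind such a monitoring comp is simple. Here is a hypothetical sketch of the placement math (the function and constants are my own, not part of any DS project, and the rows are simplified to straight lines rather than a semicircle): five quarter-scale thumbnails across the top and three centered beneath them.

```python
# Hypothetical layout math for a single "monitor" comp that previews all
# eight 1920x1080 screens at quarter scale.
SCREEN_W, SCREEN_H = 1920, 1080
SCALE = 0.25
THUMB_W, THUMB_H = int(SCREEN_W * SCALE), int(SCREEN_H * SCALE)

def layout(num_top: int = 5, num_bottom: int = 3) -> list[tuple[int, int, int]]:
    """Return (screen_index, x, y) placements, top row first."""
    placements = []
    for i in range(num_top):
        placements.append((i, i * THUMB_W, 0))
    # Center the shorter bottom row under the top row.
    offset = (num_top - num_bottom) * THUMB_W // 2
    for j in range(num_bottom):
        placements.append((num_top + j, offset + j * THUMB_W, THUMB_H))
    return placements
```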

Inside the 360° theater there really is no "front" or "back." The audience can look wherever they want. The Director, Bob Noll, did not want any black areas on the screen. I had to make sure there was always something to see on every part of the screen.






However, we are trying to tell a story so there had to be a main focus, a part of the screen that is a little more important than the others. How did we achieve this? By taking advantage of one of the oldest techniques in cinema - the opening crawl. (If it's good enough for George Lucas it's good enough for me.) We were also lucky enough to have the great Bill Kurtis provide all the voice over narration.

The show begins with some exposition about the timeline and an explanation of the unusual title. Reading the crawl compels the audience to look at one particular area of the screen, which became the "front screen." Having established this forward position, we expanded the "front" to include the two screens on either side of it, five screens in all.

We figured that the peripheral vision of the audience would allow them to take in the visual information on these five screens without too much trouble. That left three screens behind their heads that would require them to turn around completely to see. This became the "rear" of the theater.



Reading the crawl compels the audience to look at one particular area of the screen.


So, we decided that for the majority of the show all the important story-telling and character bits would occur on the five "front" screens, but occasionally we would force the audience to turn to watch the "rear" screens. We wanted the audience to be an active participant in the show, but we didn't want to give them whiplash.





We also quickly learned that audio would be extremely important, even more so than usual, to guide the audience to look at the parts of the screen we wanted them to see. The theater has a total of 11 speakers - one above each projection and three pendants hanging from the ceiling as well as a sub-woofer and a "butt-kicker" underneath the floor to shake the audience whenever the cannons go boom.

When each of the main characters appears for the first time, their dialogue comes directly from the speaker mounted on the same screen that holds their image. This forces the audience to look to that area so they learn who each character is and what they sound like. Once that relationship is established, the audience will always know who is speaking even if they don't see the character on screen.

These rules that we came up with were like nothing I had to deal with in the edit suite before. While creating the story in the front, I also had to always make sure there was relevant and interesting imagery in the rear in case someone decided to look back there. The 360° set-up completely changed the way I dealt with rhythm and montage and pace, with the length of shots and the selection of shots. So many things that have become instinctual over the past 18 years of editing were new and different. It was exciting and scary at the same time.

And, if I didn't have enough flies in the ointment on this show, I also decided to cut the show without a temp music track. I knew we were going to be hiring a composer to create an original score for the show so I wanted to give her the freedom to create music that hit all the right emotional beats without being tied to a pace or tempo created by another piece or pieces of music. This was something I had only done once before and that was for a simple one-screen documentary. Award-winning composer Ruth Mendelson (www.reverbnation.com/ruthmendelson) was extremely happy to have a completely clean musical slate to work from and she created an incredible score for the movie.

Getting back to the visual - how did the seamless 360° shots work? From conception, Director Bob Noll wanted the 360° shots to exist for more than the simple "Wow!" factor. He designed them to appear at very specific points in the narrative to help tell the story, immerse the audience in the world of the story and give them something they haven't seen before.





The first shot we see after the crawl is a sunrise that surrounds the audience. This was not a true seamless 360° shot. I created it in Photoshop by stitching together a series of stills that Bob shot one early morning in the hotel parking lot. I also added a flock of birds created in After Effects just to have some movement in the shot. This sunrise remains for the first few minutes as we see sequences of young men as they proudly sign up to fight the Southern Rebellion. We watch as they leave their families and friends.

The 4K resolution of the F55 material allowed me to stretch those shots across three screens without losing quality. All this imagery fades up and down, layered on top of and blended into the sunrise. Yes, we see multiple images on the screens, but no hard edges ever. It always had to seem, well, seamless.

Using this same technique, we introduce our three main characters: a Captain in the Union Army, a veteran soldier and one of the few who has seen battle before and knows its costs; a young Irish immigrant eager for adventure; and an Abolitionist who fights for a cause. We first see the Abolitionist in church, blended into the sunrise like all the other shots, but then the church interior slowly unwraps across the entire theater to reveal the complete congregation. This is the first full, seamless 360° shot, and it puts the audience right in the middle of the church. Hopefully, at this point, the audience realizes this show is going to be different.

But, how do the seamless 360° shots work?

Basically, each lens captured an image area that included overlap from the lenses on either side. Each camera was fed directly into a separate Mac laptop and all the files were transferred to hard drives at the end of the day. For all the 360 shots every individual "take" consisted of 9 separate files.

Once I had decided on the takes I wanted in the show, I had to stitch all the files for that shot together in After Effects. The original 2K files from the 360° rig were encoded with the Cineform codec, but AE doesn't really play nice with too many Cineform files at the same time so I had to export .png sequences for every file and bring those back into AE for the stitch. It took a unique combination of extra large compositions, distort and offset filters, scaling and nudging as well as many masks to get the pieces to fit together.
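The core of any such stitch, whatever the tool, is cross-fading each strip into its neighbors across the overlap zone. Here is a rough NumPy sketch of that idea, assuming equal-size strips and a fixed overlap (the real shots additionally needed per-shot distortion correction and hand-drawn masks, and a true 360° wrap would also blend the last strip back into the first):

```python
import numpy as np

def stitch_strips(strips: list[np.ndarray], overlap: int) -> np.ndarray:
    """Cross-fade equal-size strips (H x W x 3 float arrays) whose right edge
    overlaps the next strip's left edge by `overlap` pixels."""
    h, w, _ = strips[0].shape
    step = w - overlap
    out_w = step * (len(strips) - 1) + w
    canvas = np.zeros((h, out_w, 3))
    weight = np.zeros((h, out_w, 1))
    # Per-strip weight: linear fade-in over the left overlap, fade-out on the right.
    ramp = np.ones(w)
    ramp[:overlap] = np.linspace(0.0, 1.0, overlap + 1)[1:]
    ramp[-overlap:] = np.linspace(1.0, 0.0, overlap + 1)[:-1]
    ramp = ramp.reshape(1, w, 1)
    for i, strip in enumerate(strips):
        x = i * step
        canvas[:, x:x + w] += strip * ramp
        weight[:, x:x + w] += ramp
    # Normalize so each output pixel is a weighted average of its contributors.
    return canvas / weight
```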



Each file had to be individually color-graded to get the entire shot to match.


There was no all-purpose solution, so every shot had to be tackled on its own. And, of course, each file had to be individually color-graded to get the entire shot to match. Once that was done, I took my final large comp and broke it up into eight 1920x1080 HD comps to be rendered out and cut into the show.
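Slicing the final comp back into per-projector frames is simple arithmetic once the panorama is conformed to eight times the screen width. A hypothetical sketch:

```python
import numpy as np

def split_into_screens(panorama: np.ndarray, num_screens: int = 8,
                       screen_w: int = 1920, screen_h: int = 1080) -> list[np.ndarray]:
    """Slice a stitched panorama into per-projector frames.
    Assumes the panorama is already num_screens * screen_w wide and screen_h tall."""
    h, w, _ = panorama.shape
    assert w == num_screens * screen_w and h == screen_h, "conform the comp first"
    return [panorama[:, i * screen_w:(i + 1) * screen_w] for i in range(num_screens)]
```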

The process was like putting several puzzles together, then breaking them up to be put back together in a different way somewhere else. I was worried at first that the nine 2K files wouldn't play nicely in a show destined for eight screens, but with the overlapping and distorting, it worked out extremely well. We ended up having seven seamless 360 shots in the show plus two "faux-360" shots that were created by taking several static shots from the F55 4K camera and stitching them together in a similar way. As long as there was nothing crossing in front of the camera, the illusion was complete.

Yet another issue sui generis to this project was the review of rough cuts. Our theories about peripheral vision, front and rear, etc., all made sense, but they were still theories. We had to see if they actually worked in the real world. There was only one way to do this - we had to watch the show in a 360° theater. Since those are rare, we decided to build a half-scale prototype in our studio.

We (and by "we" I mean Pete Does, a Senior A/V Installation Technician at BPI) built a structure to support eight Digital Projection Inc. HIGHlite Cine660 projectors, each one weighing 85 pounds. This contraption was hung from the actual building supports and allowed us to project the show on a ring of bedsheet-screens.


Pete Does, a Senior A/V Installation Technician at BPI, built the structure to support the eight eighty-five-pound HIGHlite Cine660 projectors. Click to view larger images.

Dataton's Watchout multi-display software (www.dataton.com/watchout) was the choice to properly sync and display the eight projectors. Once again, an entire article could be written about Watchout, but we were using the software for two of its more impressive capabilities: projection edge-blending and geometry correction to account for projecting onto a curved surface. Both of these can be adjusted real-time and on-the-fly in Watchout.
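Edge-blending, at its heart, is a per-column brightness ramp applied to each projector so that, where two projections overlap, their combined light stays roughly constant. Here is a sketch of one common approach (a raised-cosine falloff with gamma compensation); this is an illustration of the general technique, not Watchout's actual algorithm:

```python
import numpy as np

def edge_blend_ramp(width: int, overlap: int, gamma: float = 2.2) -> np.ndarray:
    """Per-column brightness multiplier for one projector: full brightness in
    the middle, falloff across the shared overlap at each edge. The falloff is
    gamma-compensated so two overlapping projectors sum to constant linear light."""
    ramp = np.ones(width)
    t = np.linspace(0.0, 1.0, overlap)
    # Raised-cosine falloff from 0 to 1, then compensate for projector gamma.
    fall = 0.5 - 0.5 * np.cos(np.pi * t)
    ramp[:overlap] = fall ** (1.0 / gamma)         # left edge fades in
    ramp[-overlap:] = fall[::-1] ** (1.0 / gamma)  # right edge fades out
    return ramp
```

The raised cosine is chosen because a column's falloff and its neighbor's complementary falloff sum to exactly 1 in linear light, which is what keeps the seam invisible.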

After each major revision (and there were a lot) we would screen it on the prototype to see if our ideas were working. We brought in staff and friends to see if the story made sense and was easy to follow. Creating and using the prototype was an absolutely essential part of the Post process for this show.

In fact, we went a few steps further. I mentioned the immense importance of the audio mix for the show. Since we had the prototype set up we also hung all the speakers in their correct positions. The BPI ProTools system is designed to be mobile, so Audio-Mixer Extraordinaire Mike Rafferty simply wheeled his cart into the studio and was able to mix the show exactly the way it was to be heard in the theater. And then, (whew!) we also set up the final aspects of the show.

The theater includes a few different lighting cues to enhance the experience, so we put all the lighting equipment up so the lights could be programmed. And the final feature of the theater that really adds something special to the experience is a large air cannon. Engineered and built by 5Wits Productions (www.5witsproductions.com/) the air cannon is inconspicuously mounted in the wall of the theater. It blasts the audience with large puffs of air to match cannon shots, explosions and the like. We mounted that in our studio, too.

On February 27, 2014, the museum held a premiere celebration for "Seeing the Elephant." I was invited to attend. Personally, I might have preferred a Wisconsin-based premiere in, say, August, but nevertheless it was great to see the final show in situ. Bill Kurtis was there as well, so it was fun to get the reaction of someone like him, a guy who has made well over 500 documentaries himself and voiced 500 more. He loved it. The museum says it has hosted groups as diverse as fourth-grade field trips and Korean War veterans, and everything in between. The feedback has been universally positive.

All in all, a ton of work. But, worth it, I think. "Seeing the Elephant" was a chance to do something very different on both the creative side as well as the technical side. And those don't come around that often.

Thursday, 6 March 2014

Prime Focus: A force to reckon with in Hollywood



Namit Malhotra doesn't run Hollywood, but the founder and managing director of Prime Focus, a leading visual entertainment multinational based in Mumbai, holds far more clout in Hollywood than many local post houses in Los Angeles, California. Why? The global success and appeal of 3D has opened up a whole new genre of content in the visual entertainment space. The US new-3D-release market alone is estimated at $248-297 million annually. Moreover, after the successful release of 3D-converted versions of Star Wars: Episode I and Titanic, US studios are exploring the immediate payback on the cost of converting existing 'back-catalogue' titles, compelling them to revisit their archives. According to industry data, the US studios own at least 800 titles that have each grossed over $100 million in worldwide box office since 1995, and the estimated size of the US 'back-catalogue' blockbuster film market ranges from $1.8 billion to $3.4 billion.

These figures make a strong case for Prime Focus, a market leader in 3D conversion with an estimated 38 per cent market share in 2011. Since the launch of View-D, its proprietary 2D-to-3D conversion process, in 2009, the company has delivered 15 projects, including prestigious titles such as Avatar, Transformers: Dark of the Moon, Harry Potter and the Deathly Hallows Part 2, The Chronicles of Narnia, and Star Wars: Episode I - The Phantom Menace. In fact, Prime Focus recently announced a partnership with Industrial Light & Magic (ILM) and Lucasfilm for the 3D conversion of Episodes II and III of the epic Star Wars saga. The projects are presently being converted and are scheduled for a back-to-back launch in September and October this year. "No company in the world has ever worked on a Star Wars film outside of ILM.
 
Founders of Prime Focus (L-R): Huzefa Lokhandwala, Prakash Kurup,
Namit Malhotra and Merzin Tavaria


It's an honour to be associated with George Lucas and be offered an opportunity to convert the ultimate space saga," says Namit Malhotra, the founder and global CEO of Prime Focus. Visual effects and animation are other segments likely to ring bells for Prime Focus. VFX presently consumes almost 30-40 per cent of a film's budget, and films with heavy VFX content have been performing best internationally. The VFX division of Prime Focus also stands to benefit from Hollywood's outsourcing model because of the obvious international cost savings it offers, which could amount to as much as 75 per cent for some Hollywood producers. In fact, the 83rd Academy Awards forever erased the idea that post-production studios in the United States have a monopoly on the global visual entertainment space, with the company contributing to five of the ten films that ran for the Oscars last year. The company created eight stunning shots for the impressionistic movie The Tree of Life, 22 VFX shots for the superhero adventure X-Men: First Class, and 31 VFX shots plus 3D conversion for the action movie Transformers: Dark of the Moon. The company was also responsible for providing on-set equipment for Hugo. Very few companies can offer such a wide range of services under one roof: Prime Focus offers end-to-end solutions from pre-production to final delivery, including VFX, 3D conversion, video and audio post-production, digital content management and distribution, digital intermediate, versioning and adaptation, and equipment rental.

 


Much of the global ambition of Prime Focus can be traced back to the formative years of Namit Malhotra. His grandfather MN Malhotra was a renowned cinematographer who shot India's first colour film, Jhansi Ki Rani, and several Hindi movies for the legendary producer-director BR Chopra. His father Naresh Malhotra produced Shahenshah, the 1988 Bollywood superhero film starring Amitabh Bachchan. As an 18-year-old, Namit went on a business trip to Hong Kong. His big idea: to set up backend operations for Star TV in India. The year was 1994. Star TV was part of the Hutchison Whampoa group and had begun beaming programme feeds to India. The fact that the teenager had sufficient contacts to set up the base but no money to invest didn't cut much ice with the Star TV executive. But in transit to Hong Kong he met a friend of his father's on the flight, who advised him that the next big thing in the media business was computer graphics. Without blinking an eyelid, Namit joined a computer graphics institute in South Mumbai for a six-month certificate course offering training in Corel Draw, Animation Pro, and 3D Studio Max. While he had settled for much less, he took another shot at enterprise: setting up an editing studio to cater to the imminent TV boom in India. He knew he needed about Rs 10 lakh to start the business, and a few good men to run it. Namit raised about seven lakh rupees from a bank and borrowed about three lakh rupees from his father. Meanwhile, he approached his three instructors at the training institute to join him in business: Prakash Kurup, Merzin Tavaria and Huzefa Lokhandwala. All three were college students doing summer jobs. In January 1995, Video Workshop (the original name of Prime Focus) was set up in a garage adjoining Namit's apartment. For the next two years, the company picked up odd jobs such as editing small corporate films or low-budget TV programmes for small-time producers.

 

Immediately after finishing college in 1997, all four members started working at full blast, which included 100 episodes of the famed dance show Boogie Woogie for Sony and the top music countdown show Colgate Top 10 for Zee. From simple editing of episodes, the scope of work widened to high-end services such as creating title sequences, making promos, and mastering the programmes. While Prime Focus was digitally mastering Ramesh Sippy's serial Gathaa for Star TV, it was also setting up a post-production studio for Channel V at their Khar facility in Mumbai. In the next two years, the company was putting together 21 television shows a week. The success of Prime Focus can be primarily attributed to the fact that every time it matured in a business, it would diversify into the next big opportunity. So instead of scaling the broadcast business, Namit shifted focus to newer markets such as advertising and music videos, which paid much more than television for the same service. While Prime Focus was working with top music video producers such as Sanjay Gupta, Anubhav Sinha and Kunal Kohli, it was also servicing the biggest advertising producers, such as Sunil Manchanda of MAD Entertainment. By 1999, the turnover of the company had touched Rs 75 lakh and its capital investment had gone up to two crore rupees.