Thursday, March 31, 2016

Fwd: Light Field Cinema Capture Coming



---------- Forwarded message ----------
From: SMPTE Newswatch <communications@smpte.org>
Date: Tuesday, March 29, 2016
Subject: Light Field Cinema Capture Coming
To: John Sokol <sokol@videotechnology.com>


March 2016

Hot Button Discussion
Light Field Cinema Capture Coming
By Michael Goldman 

Conceptually, the notion of capturing what has come to be known as "light field" imagery dates back to Leonardo da Vinci in the early 16th century. Back then, da Vinci detailed theories about an "imaging device capturing every optical aspect [of] a scene." In one manuscript, he talked about "radiant pyramids" being visible in images. Today, optical experts say that by "radiant pyramids" da Vinci meant "light rays," and explain that he was describing what we now refer to as "light fields."
 
In simple terms, a light field captures the direction traveled by every ray of light present in a specific volume, as seen from a series of separate points in space. Experts say that when filmmakers or photographers capture this information, they can computationally process the light field data into various formats with a full understanding of the scene's volumetric information, including multiple focal points and perspectives beyond what a standard two-dimensional image would record, all from a single light field data set. According to Jon Karafin, Head of Light Field Video for Lytro Inc., capturing this kind of "angular information," or what some people call "directional data," at the time of image acquisition allows content creators to make optical and virtual camera decisions with precision after capture. The search for a practical method to capture and utilize such data has been an ongoing quest in the image-capture industry for decades.
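 
To make the idea concrete, here is a minimal sketch, assuming the captured rays have already been resampled onto the common two-plane parameterization used in the light-field literature; the array shapes and the helper name "view" are illustrative, not any vendor's API.

    import numpy as np

    # A light field resampled onto the two-plane parameterization:
    # L[u, v, s, t] = radiance of the ray through point (u, v) on the
    # aperture plane and point (s, t) on the image plane.
    U, V, S, T = 9, 9, 512, 512      # 9x9 angular samples, 512x512 spatial
    L = np.zeros((U, V, S, T), dtype=np.float32)

    def view(light_field, u, v):
        """Return the 2D perspective image seen from angular sample (u, v).

        Every such image comes from the same captured data set; changing
        (u, v) shifts the virtual viewpoint after the fact.
        """
        return light_field[u, v]

    center_view = view(L, U // 2, V // 2)   # straight-on perspective
    corner_view = view(L, 0, 0)             # corner of the synthetic aperture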
 
In recent years, a variety of companies have produced light-field camera technologies for consumer still cameras, smartphones, cameras used for research, optical analysis, and inspection, and, more recently, virtual reality applications. These include Jaunt's NEO system, Google's Jump array system (built with GoPro cameras), and the Lytro Immerge, among others. Karafin notes there is some debate within the industry as to what kinds of imagery actually end up as true light-field images in the final analysis: while some cameras capture multiple viewpoints to begin with, the subsequent processing does not always map all of those viewpoints in a way that accurately represents the entire light field in the final image. Still, these developments all represent, to one degree or another, strides in how technology built around light-field concepts can be utilized.
 
 
Feature filmmaking, however, is an entirely different kettle of fish, Karafin suggests. For cinematic image capture, particularly at extremely high resolution, the challenge is more complicated because it requires dense sampling of the light field, along with enormous amounts of data to be captured, processed, and stored. Doing this practically on a movie set, and fitting such techniques into existing production workflows, has always been complicated, Karafin says.
 
Until recently, the preferred method of achieving the effect for cinematic imagery was to use multi-camera arrays, which capture image light fields through multiple apertures or lenses simultaneously. An early breakthrough in this regard came in the mid-2000s with Stanford University's experimental multi-camera array system, and several academic projects have advanced that work since.
 
"For a long time, [the multi-camera array approach] was initially the fundamental way to acquire angles of light, to look at how photons were moving through space, and reproduce them at that point of time on analog film media," Karafin explains.
 
Building on that methodology, Germany's Fraunhofer IIS has been using proprietary algorithms within a suite of post-production software tools to enable sophisticated image processing of material captured with multi-camera arrays of high-definition cameras in a planar arrangement. At NAB 2016 in April, Fraunhofer will formally present what it calls a new light-field media production system consisting of new tools for this approach, developed out of a long-term project to test the requirements of a light-field system in a real-world production environment using existing workflows. As part of that project, in partnership with Germany's Stuttgart Media University (HdM), Fraunhofer's Moving Picture Technologies Department produced a short clip made with multi-camera arrays and its light-field processing software, along with breakdowns of how it was put together.

Karafin's company, Lytro, meanwhile, has attacked the problem differently, opting for a single-camera solution built around what is being called a "micro lens array." This solution stems from Lytro founder Ren Ng's dissertation at Stanford and was eventually incorporated into two previous generations of Lytro consumer still cameras. For the cinema market, although Karafin was not at liberty at press time to divulge many specific details of an upcoming official announcement from Lytro, he did say that Lytro believes it has solved many of the engineering challenges posed by the micro lens array approach for that application. For cinema capture, he says, the company has committed to the single-camera, micro-lens methodology out of the belief that "dense sampling provides greater amounts of information about the [light] rays of the scene, while multi-aperture approaches require sometimes significant interpolation between viewpoints, which can inherently generate artifacts."
 
Either way, in terms of motion-picture production, Fraunhofer and Lytro are, for the moment, the first two companies out of the gate with light-field solutions specifically targeting Hollywood for long-form projects.
 
"Ren Ng took the fundamental concepts of light-field arrays, where you could potentially have hundreds or thousands of individual cameras and individual electronics and lenses, and he designed what we call our micro lens array," Karafin says. "That is basically a lenslet array system that is produced at the wafer level—and it can contain tens of thousands, if not hundreds of thousands, or even millions of micro-lenses that get placed in front of the imaging plane [of a digital imaging sensor]. We have that available for our consumer [still camera] products, where you have a main lens, whether a zoom or a fixed prime, followed at the focus plan of that lens by the micro lens array, and then behind that is the sensor, which instead of a two-dimensional image, it now captures data or angles of light for each of those lenslets. The way I think of it is that it actually captures a holographic representation of the world as seen through those lenses."
 
The challenge until now for cinema applications, he adds, lay in engineering such a light-field system for professional use, with image quality and resolution sufficient by modern cinema standards, in a package practical on a movie set or location.
 
"In the consumer market, there is not the same level of sensitivity to dynamic range and color accuracy and the size of the pixel," he continues. "But those considerations were among the most challenging aspects of bringing a technology to market [for professional cinematic applications]," he says. "Where you need a pixel that has sufficient size, and you need to have sufficient number of stops for the dynamic range—these things determine what kind of electronics and what kind of silicon can actually be used. And then, even more challenging than that, was how do you store that much data. For any light field, you would be capturing at least 10 times the amount of data you would capture for a standard 2D image. And therein lies one of the main critical challenges to form factor, size, and costing structure for any type of system like this. Plus, you know how much data is actually required for available 4K cinema cameras, and then if you want to do 4K RAW with on-board recorders that all have limited recording time. So light field acquisition can be between 10 times and 100 times the amount of data you would normally have [on set] to stream at high frame rates."
 
Karafin says these are challenges "I believe we are now in the process of solving. The main advance in technology has been the ability to produce cinema-quality imagery from a light field, and this required a significant leap to achieve the resolutions required at high frame rates, as well as the ecosystem of data capture, storage, and processing."
 
Regardless of what systems eventually percolate into the marketplace, Karafin believes the potential for true light-field image capture generally, if handled by dedicated artists and processed correctly, is significant for the motion-picture industry, particularly because the technology offers the opportunity for what he calls "more control in post." However, he adds, "With this great power comes great responsibility." By that, he means that this issue is controversial because it opens up the potential to alter a cinematographer's creative intent if changes are not handled correctly or do not involve the cinematographer or director. But, he continues, "the technology gives you the obvious benefits of any control you would traditionally have at the time of capture—that is something you can now control at the time of post-production."
 
"That's the double-edged sword, of course, because now you are taking all of the control that traditionally you would have a very skilled cinematographer bake into what would have been the final image. But then, the reality is, no shot, even if you bake it in, is ever truly final. It goes to the digital intermediate and then you change the color, and it also goes to visual effects and the post-production workflow. So what light field provides to that segment of the post-production community is the ability to have complete freedom in post or any portion of the production pipeline. So, for example, if you capture within the range of the light field volume, you don't have to worry about missing focus. You simply won't miss focus [because the technology allows focus to be adjusted in post since every angle of all light in the image is available], and you can do things that otherwise would be fundamentally impossible when focusing shots [live on set]."
 
Karafin also says the micro-lens, single-camera approach permits more seamless output of multiple format versions of content captured with a single camera.
 
"You can create stereoscopic or multi-view images from the same shot from the single light-field lens, meaning you don't have to think as much about stereo composition or release format or how you are going to cut in editorial when you are shooting," Karafin says. "You have complete baseline freedom or inter-axial control over camera pairs, and you can also have an automated render, if you want to change the content for an IMAX render, which because it is such a large screen, might require potentially different stereoscopic decisions, or if you go to a smaller [3D] screen for the RealD format, or home theater or a mobile-size screen. All of those have different geometric considerations. With light-field data, you are able to re-render just by inputting virtually what your screen size is, what your artistic decisions are, and then, you don't have to re-shoot the scene."
 
Karafin adds that the technology also has major visual effects implications, including the ability for filmmakers "to create a semi-automated 3D camera track of the world, meaning just the rich metadata from the light field, in combination with some other hardware elements in the camera, can actually allow you to generate camera positional data as part of the metadata that renders out with the light field to begin with. Today, that is something traditionally done by an integration department, and it takes quite a bit of time, especially given the complexity of some shots. So this gives you the ability to integrate the CG world with the live-action world in a much easier way."
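 
If per-frame camera pose really is rendered out alongside the light field, downstream CG integration becomes a matter of consuming that metadata. A hypothetical sketch; the record fields "R" (rotation) and "t" (translation) are invented for illustration.

    import numpy as np

    def camera_to_world(pose):
        """Build a 4x4 camera-to-world matrix from per-frame pose metadata.

        pose: a hypothetical metadata record holding a 3x3 rotation matrix
        under "R" and a translation vector under "t", of the kind a
        light-field camera could emit per frame for a CG renderer to
        consume, replacing a hand-built camera track.
        """
        M = np.eye(4)
        M[:3, :3] = np.asarray(pose["R"], dtype=np.float64)
        M[:3, 3] = np.asarray(pose["t"], dtype=np.float64)
        return M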
 
There are other unique advantages to light-field imagery, Karafin adds. But along with this great potential come questions about whether the technology, as filmmakers slowly begin to investigate and incorporate it, will push creatives to do things in new ways and go places they have never gone before, or whether it will simply and neatly fit into how they already prefer to work, both technically and artistically.
 
The answer to that question is a resounding "both," depending on what they want and prefer, Karafin suggests.  
 
"The goal is always to integrate into existing workflows wherever possible," he says. "That is why we try to build plugins to existing software packages, rather than creating new, standalone tools. This way, you don't need to re-train artists and add new software to make workflows more complicated. But what would change is the way that, potentially, a filmmaker would want to think about a scene or a shot, given the types of creative flexibility they would now have. But if you have a director or cinematographer who fundamentally does not want to change their way of thinking, the tools would still give them focus control, aperture control, and so on. So there is fundamentally no difference [on set] outside of the fact that you would now be using a tool that is streaming a much higher volume of data, and be a different form factor."
 
Meanwhile, with all these developments, and more no doubt on the way, the question arises of whether standards for such technology will have to be developed. Karafin says major light-field players have already been discussing the technology "conceptually" with various SMPTE members, because the technology presents the industry with a basic but important challenge: "we generate a massive amount of metadata, after all. So what are the formats we should be adopting? How should we standardize that data? Is it per frame? Is it global? Where do we store it? In a separate file? A lot of this we are going to standardize based on formats that already exist. But these are some areas where we will be seeking input from the SMPTE community to figure out what works best for current production workflows."
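 
The per-frame-versus-global question could be pictured as a sidecar layout like the hypothetical one below; every field name is invented for illustration and comes from no existing standard.

    import json

    # Hypothetical sidecar: global fields stored once per clip, per-frame
    # fields stored as a parallel list. Illustrative only; no real format.
    sidecar = {
        "global": {
            "angular_samples": [9, 9],
            "spatial_resolution": [4096, 2160],
            "frame_rate": 24,
        },
        "per_frame": [
            {"frame": 0, "camera_position": [0.00, 1.5, 0.0], "focus_mm": 50.0},
            {"frame": 1, "camera_position": [0.01, 1.5, 0.0], "focus_mm": 50.0},
        ],
    }
    with open("clip0001.lfmeta.json", "w") as f:
        json.dump(sidecar, f, indent=2)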