Shooting Blind: Giving Operators the Tools Needed to Capture Live 3D

A 3D production of golf’s latest major championship is coming to a close. Young Rory McIlroy lines up for a birdie putt on the 18th green. Just as he lets his shot go, Bam! A member of the crowd has jumped up in front of the camera. At home, 3D viewers wince in discomfort, pulling their glasses from their faces.

It’s dramatic parallax violations like this that make live 3D production one of the most challenging tasks in the industry. And with the new wave of 3D still in its infancy, there are few tools available to help camera operators and directors make informed decisions on good vs. bad 3D shots.

Michael Bergeron, strategic technology liaison for Panasonic Solutions Company, discussed the latest advancements for the practitioner in his report on ‘Stereoscopic 3D Assist Technologies’ at the Society of Motion Picture and Television Engineers’ Second Annual International Conference on Stereoscopic 3D for Media & Entertainment in New York City earlier this week.

Panasonic and other companies are working to develop features that will not only help present 3D crews with more data but also simplify that data and its presentation so that artists can operate more intuitively.

“We need to get them the things they are going to need to help them make decisions that are on the line between technical and creative,” said Bergeron.

The Need for S3D Shooting Guides

There is a wealth of shooting guides available to operators and directors working in standard 2D and HD broadcasts. These guides help the user see parts of the image that need adjustment but aren’t directly viewable, such as exposure and lighting. However, operators shooting for 3D broadcasts are typically viewing their shots through viewfinders and monitors in 2D, leaving crucial depth composition essentially unsupervised.

“If you are shooting a live production and you’re using convergence – and in a live situation you almost always will – you need to be able to see where the convergence plane is,” said Bergeron. “I need to be able to see the convergence plane and the Z-space from the perspective of a camera operator or as the director of several camera operators.”

To accomplish this, camera operators, directors, and producers must have parallax information presented to them to aid real-time decision-making. Parallax is an apparent change in the direction of an object, caused by a change in observational position that provides a new line of sight. Live 3D productions, especially sports broadcasts, do not have the luxury of correcting parallax violations in the edit. On-the-fly accuracy is essential.

“We need to be able to present information to the user in an easily understandable…and quick to understand way that will give them everything they need to know to make the proper adjustments to find the shot,” said Bergeron.

Current Methods

Bergeron discussed popular 2D guide analogies used today that could also serve as models for effective tools for 3D.

On the video side, the most popular of these analogies is zebras, which let the user set an exposure threshold, or IRE level; wherever the image crosses that level, black-and-white diagonal lines are displayed across the offending areas of the frame. For example, zebra markings can show an operator exactly where overexposure is occurring.

“What the tools provide for you is an easy way to see that information without having to look at numbers, without having to think about numbers,” explained Bergeron. “I just know that now I can work in an instinctive way to set my exposure.”
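The zebra idea is simple enough to sketch: threshold the luma signal and paint diagonal stripes wherever the threshold is exceeded. The following is a minimal illustration, not any vendor's implementation; the IRE scale, stripe period, and function name are all assumptions for the example.

```python
import numpy as np

def zebra_overlay(luma, threshold_ire=100, stripe_period=8):
    """Overlay diagonal zebra stripes where luma exceeds an IRE threshold.

    luma: 2D array of luma values on a 0-100 IRE scale (hypothetical units).
    Returns a copy of the frame with alternating black/white diagonal
    bands painted over every pixel at or above the threshold.
    """
    out = luma.copy()
    rows, cols = np.indices(luma.shape)
    # Diagonal stripe pattern: alternate bands along (row + col).
    stripes = ((rows + cols) // (stripe_period // 2)) % 2
    hot = luma >= threshold_ire          # pixels in violation
    out[hot & (stripes == 0)] = 0        # black band
    out[hot & (stripes == 1)] = 100      # white band
    return out

# Example: a gradient frame whose right edge is overexposed.
frame = np.tile(np.linspace(0, 110, 8), (4, 1))
marked = zebra_overlay(frame, threshold_ire=100)
```

The point of the pattern is exactly what Bergeron describes: the operator never reads a number, they just dial exposure until the stripes recede.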

Bergeron also discussed the technology behind other analogies such as Y-Get, the vectorscope, and the waveform monitor.

2D representations of stereo signals

As inconvenient as it may seem that production crews use 2D displays on 3D rigs, it has proven to be the most practical approach. Few camera operators will find comfort in wearing 3D glasses throughout an entire shoot, and the fact of the matter is that most viewfinders and displays aren’t trustworthy representations of shot composition even in standard formats. Guides are necessary.

“(Using 2D displays) has more or less become the norm,” said Bergeron. “I think initially because it was very hard to find 3D displays but also because in some ways the 3D display would be misleading.”

This arises from the fact that much of the success of 3D relies on two factors: screen size and the distance between the viewer and the screen. An operator watching 3D through a viewfinder would be far too close to get an accurate reading of parallax and convergence, rendering the image virtually useless to them.

A common method that networks are using during 3D productions today is a left-right mix where the left and right images are overlapped on the same screen.

“One very important thing that you can do when looking at left and right mixed together is you can judge convergence,” said Bergeron. “It works much in the way focus does. It’s still difficult to tell whether you are looking at negative or positive parallax but at least you can see what is converging, and for an operator that’s always a piece of information that is particularly important.”
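In code, the mix view is just a blend of the two eye images. A minimal sketch (the function and its 50/50 default are assumptions for illustration): features on the convergence plane have zero horizontal offset between eyes and stay crisp, while off-plane features double up, much like defocus in a split-image focus aid.

```python
import numpy as np

def left_right_mix(left, right, alpha=0.5):
    """Blend the left- and right-eye frames into one monitoring image.

    On the convergence plane the two eyes agree, so detail stays sharp;
    anywhere else the detail appears twice, offset by the parallax.
    """
    return alpha * left + (1.0 - alpha) * right

# A bright feature offset by one pixel between eyes shows up doubled
# at half intensity in the mix.
left = np.zeros((2, 4))
left[:, 1] = 1.0
right = np.zeros((2, 4))
right[:, 2] = 1.0
mix = left_right_mix(left, right)
```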

Another popular method is left-right difference, a gray display on which the differences between the two eye images are impressed. As an object is placed on the convergence plane, its outline blends into the gray background.

“That allows you to see much smaller differences in left and right and in that case when you get convergence, the image disappears,” said Bergeron. “Of course that’s very useful and there are many stereographers who swear by that.”
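This display is also easy to sketch: subtract one eye from the other and center the result around mid-gray, so agreement between the eyes reads as flat gray. A minimal version (the function name and 8-bit mid-gray value of 128 are assumptions):

```python
import numpy as np

def left_right_difference(left, right, mid_gray=128):
    """Map the left-right difference of 8-bit frames around mid-gray.

    Where the eyes agree (objects on the convergence plane) the output
    is flat mid-gray and the object vanishes; residual parallax shows
    up as bright and dark fringes around edges.
    """
    diff = left.astype(np.int32) - right.astype(np.int32)
    return np.clip(mid_gray + diff // 2, 0, 255).astype(np.uint8)

# Identical eye images (perfect convergence) yield a uniform gray field.
same = np.full((4, 4), 90, dtype=np.uint8)
flat = left_right_difference(same, same)
```

The sensitivity Bergeron mentions comes from the subtraction itself: even a one-pixel misalignment produces visible fringes against an otherwise featureless field.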

Flagging Parallax Violations

The ultimate challenge for companies like Panasonic is to provide the data necessary so 3D production crews can quickly identify a parallax violation and make the proper adjustments.

Bergeron demonstrated the concept of flagging parallax violations using Binocle’s 3D Disparity Tagger, which shows parallax errors in a horizontal display, with acceptable 3D appearing in green and unacceptable 3D in red.

He then demonstrated Panasonic’s latest advancement to the HS450 switcher: a guide designed to enable manual parallax adjustment. Through color-coded bars atop the screen or viewfinder, an operator can see which elements of his or her shot are acceptable or unacceptable.

A model in the foreground of a shot, for example, would be coded in green, while objects deep in the background would be shaded in yellow. Any object too far in front of the screen, whether at the edges of the image or elsewhere, would be flagged in red, warning the operator that adjustments are necessary: moving the object out of the scene, pushing the convergence plane forward, or decreasing the interaxial distance.
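The traffic-light logic described above boils down to bucketing per-region disparity against comfort limits. A minimal sketch, with entirely made-up thresholds and a hypothetical function name (real systems derive their limits from screen size and production standards):

```python
import numpy as np

def classify_parallax(disparity_px, near_limit=-20, far_limit=30):
    """Bucket per-region horizontal parallax into a traffic-light guide.

    disparity_px: parallax values in pixels
      (negative = in front of the screen, positive = behind it).
    Returns 'green' (comfortable), 'yellow' (pushed far behind the
    screen), or 'red' (too far in front: move the object, push the
    convergence plane forward, or decrease the interaxial).
    """
    d = np.asarray(disparity_px)
    labels = np.full(d.shape, "green", dtype=object)
    labels[d > far_limit] = "yellow"   # deep background
    labels[d < near_limit] = "red"     # uncomfortably close to viewer
    return labels

# One region bursting through the screen, one on the plane, one deep
# in the background.
labels = classify_parallax([-30, 0, 40])
```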

Continued Developments

Live 3D broadcasts are still in an experimental stage, and sports are at the forefront of their advancement. With the announcement of Sony’s year-long extension with ESPN 3D, live 3D sports will have more time to perfect the art.

Among other technological advancements currently in development, Bergeron expressed his support for the concept of a seven-second delay system similar to that used for censoring audio in live broadcasts. In this case, if a dramatic parallax violation occurs, the director has time to react and dump the shot, cutting to a more acceptable image.

However, when it comes to making camera operators more effective in shooting high-quality 3D, the challenge is still on the table. While so much information is necessary, finding an acceptable balance might be the toughest task of all.

“We can probably get better representation of what’s going on on the whole screen, but we want to make sure that we aren’t overloading the user with too much data, and that’s where a lot of the research is going right now,” said Bergeron. “A zebra might not be too distracting and an entire waveform monitor in the display might be.”