2D-to-3D Conversion: Filling the Void of 3D Content

Until the price tag on live 3D production drops further, broadcasters must rely on 2D-to-3D–conversion technologies to deliver more 3D content and to incorporate existing footage into 3D productions. On a panel at SVG’s 3D & Beyond Summit last week in New York, representatives from four companies that offer 3D-conversion technologies discussed their products and how those products can help 3D production continue to grow.

Better Than the Real Thing?
Dr. Miky Tamir, chairman of Stergen, began the conversation by showing live 3D footage from a soccer game, followed by 2D footage that his company’s technology had converted into 3D. The goal at Stergen, he explained, is to create converted 3D that delivers a better experience than the original 3D production.

“In soccer, our conversions are much better than the 3D production,” he said. “There is a physical reasoning behind that. We have total flexibility to change the stereo parameters. That’s no problem because the second camera is virtual. We can separate the cameras as we like, eventually creating better depth perception and perhaps even a better look than the original production.”
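Tamir’s point about the virtual second camera can be illustrated with the standard disparity relationship for a parallel stereo rig: on-screen disparity is roughly the baseline (camera separation) times focal length divided by subject depth. Because a converted production’s second camera is synthetic, the baseline becomes a free parameter. A minimal sketch of that idea, with illustrative numbers and not Stergen’s actual algorithm:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Approximate screen disparity in pixels for a parallel stereo rig:
    d = B * f / Z. In 2D-to-3D conversion, B (baseline) is virtual and
    freely adjustable, unlike a physical camera rig."""
    return baseline_m * focal_px / depth_m

# Widening the virtual baseline increases disparity (perceived depth)
# for the same scene; 0.065 m is roughly human interocular distance.
for baseline in (0.065, 0.10, 0.20):
    print(baseline, round(disparity_px(baseline, focal_px=2000, depth_m=40.0), 2))
```

The takeaway matches Tamir’s claim: separating the virtual cameras more than a physical rig could yields stronger depth perception, within viewer-comfort limits.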

Tamir also showed some archival footage that had been upconverted to 3D, perhaps the first look at Pelé in 3D, and emphasized how Stergen’s technology may provide better 3D images than a true 3D production.

Goal Is Addition, Not Replacement
Dana Goodwin, sales support engineer, live production systems, for Sony Electronics, had a different message about his company’s 2D-to-3D solution, the MPE200.

“We don’t see this as replacing genuine 3D shooting,” Goodwin explained. “What 2D-to-3D gives you is the ability to add more pieces to your story. We’re not trying to re-create 3D as shot in 3D; our goal is to be able to intercut it. In a live-sports environment, to be able to stick in a converted shot in the middle of three or four live shots is a useful feature.”

The MPE200 is an engine with five applications, including quality control, which Goodwin emphasized as a crucial component of 3D production.

“If we consider that we’re all trying to make 3D, whether we’re shooting it live or converting it, there needs to be a lot more process about how we evaluate it,” Goodwin said. “You can buy this MPE box and have several tools in your box. You can evaluate current programming, use it when you do your own conversion to measure things, have specific rules about how much focus offset is allowed, and use it as a tool when you’re fixing shows in postproduction.”

Craig Yanagi, manager of marketing and brand strategy for JVC Professional Products, described the IF-2D3D1 image converter as a tool for post and live production that can deliver a more realistic 3D effect than was previously possible.

“It provides easy and mask-free automatic creation of 3D effects for background images and really nice details like the curvature of a face,” Yanagi said. “The use of the original 2D image for the left eye is the basis for establishing the 3D shot.”

Feeding the Need for Content
Ray Conkling, assistant general manager at Teranex, added his company’s products to the conversation, emphasizing the array of format conversions the Teranex products can handle.

“We have the ability to take anything you have and convert it to 3D,” he said. “We can do upwards of 300 different format conversions in and out. The 3D Toolkit allows you to adjust for camera inaccuracies, mechanical maladjustments, and minimize or enhance tone effects. You can do color correction between the two channels live when you’re capturing or in postproduction.”

Still, like Sony’s MPE200 box, which is meant to be another tool in a storyteller’s toolbox, the Teranex suite of products is not intended to compete with stereoscopically captured material.

“We’re trying to give a transitional type of technology that helps feed the need for more content,” Conkling explained. “We can take 2D content and put it in 3D perspective. Part of the transition is getting people used to watching 3D. We’re trying to fuel the whole 3D movement, and part of that is the ability to get legacy stuff into 3D and use that in different workflows.”

Compromises in Conversion
Even for those not looking to replace true 3D production with conversion software, every process represented on the Summit panel involves some compromises. As with all 3D, the conversion software cannot add much depth to a subject that is far from the camera, and shots where the subject is less than 1 meter away are not ideal, either. Low camera positions present their own challenge.
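The distance limits the panelists described follow from the same disparity model: because disparity falls off as one over depth, far subjects produce almost no parallax (and look flat), while subjects inside about a meter produce excessive parallax. A conceptual sketch, with illustrative numbers rather than any vendor’s conversion math:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """d = B * f / Z for a parallel stereo rig (an illustrative model,
    not any panelist's conversion algorithm)."""
    return baseline_m * focal_px / depth_m

# At a fixed baseline and focal length, disparity collapses with distance
# (flat-looking far subjects) and balloons inside ~1 m (uncomfortable 3D).
for depth in (0.5, 1.0, 10.0, 100.0):
    print(depth, round(disparity_px(0.065, 2000, depth), 2))
```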

“With low cameras, we have players with more-complex backgrounds, which makes it more difficult for us,” Tamir pointed out. “We have close to comparable results regarding the low cameras.”

Yanagi added that quick cuts are also difficult to convert, as are shots that jump from one extreme depth of field to another.

“But, as long as you can manage the intensity, the degree of 3D attributes that you have, and have that mapped out prior to the production,” he said, “then that would provide a much better viewing experience in the long run.”