The BRUCELY Project: 21 RED Epic 4D Capture Rig Setup
The BRUCELY Project is an independent, self-funded skunkworks project. The goal: to construct the world’s most advanced 4D performance capture system and to use that capture in augmented reality applications.
Augmented reality is the stuff of sci-fi. With new technologies like Google’s Glass project and the Oculus Rift, AR is almost ready for mass adoption. AR is magical when you first play with it, but too often content producers stop there, showcasing the technology as if it were a magic trick. What we wind up with is flat, gimmicky novelty content that stalls creative innovation. Building on the groundbreaking work of innovators like Paul Debevec, Lee Perry-Smith and the incredible team at Agisoft, BRUCELY exists to execute AR experiments that rely on story, not technology, to move audiences. BRUCELY is entirely self-funded and relies on the passion and sweat of its collaborators, and the generous help of its sponsors.
The Technology:
The purpose of all this is to develop a system and to forge a pathway from 4D capture to mobile execution. Currently there is no practical method of capturing a photo-real 4D human performance in real time, and there is no open system for using that capture data within a mobile AR application; the capture data is simply too massive. We started to address the first problem by constructing the capture array. The Epic BRUCELY capture array uses a series of high-resolution motion cameras, specifically 21 RED Epic cameras, to photographically capture a human performance. Using passive photogrammetry, dense 3D models are reconstructed from the captured images with high-resolution geometry and textures. Part of the process relies on the incredibly powerful Agisoft PhotoScan Pro, software originally developed for the GIS (geographic information system) industry to render georeferenced orthophotos.
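To give a rough feel for what one frame of that reconstruction pipeline looks like, below is a minimal sketch using PhotoScan Pro’s Python scripting interface. Function names follow the published PhotoScan 1.x scripting docs and may differ from the exact version used on BRUCELY; the folder layout (one directory of 21 simultaneous stills per captured frame) is an assumption made purely for illustration, not the project’s actual tooling.

```python
# Sketch: reconstruct one captured frame (21 stills, one per camera) into a
# textured mesh with PhotoScan Pro's Python API. Names follow the PhotoScan
# 1.x scripting documentation; the directory layout is hypothetical.
import glob
import PhotoScan

FRAME_DIR = "frames/frame_0001"       # assumed: one folder of 21 camera stills
OUTPUT_MESH = "meshes/frame_0001.obj"

doc = PhotoScan.Document()
chunk = doc.addChunk()

# Load the simultaneous stills for this frame of the performance.
chunk.addPhotos(sorted(glob.glob(FRAME_DIR + "/*.tif")))

# Standard passive-photogrammetry steps: match features, solve cameras,
# densify, mesh, and texture.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
chunk.buildModel(source=PhotoScan.DenseCloudData)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

# Export geometry + texture for the retargeting stage described below.
chunk.exportModel(OUTPUT_MESH)
```

Run once per captured frame, this produces the sequence of per-frame meshes and textures that the retargeting step then has to tame for mobile.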
The amount of data produced by the Epic BRUCELY array is staggering (>50 GB of converted raw footage per second). Once it has been rendered as geometry and texture files, the size is much more reasonable, but it is still not practical for mobile delivery or playback. A critical step is making the capture data usable on mobile. To address this problem, we retarget the data using morph target animation principles, converting the captured frames onto a single piece of geometry so the performance can be played back on mobile through an engine like Unity3D. The result: viewers will see a captured performance in 3D, with motion, rendered exactly as the artist intended.
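To make the morph target idea concrete, here is a small, self-contained sketch (plain Python/NumPy, not the actual BRUCELY tooling) of the principle: every captured frame is stored as vertex offsets from one shared base mesh, and playback blends between the two nearest frames, which is essentially how blend shapes behave in an engine like Unity3D.

```python
# Illustrative sketch of morph-target (blend-shape) playback: per-frame vertex
# offsets relative to a single shared base mesh, blended at playback time.
# A toy example under assumed inputs -- not the project's actual pipeline.
import numpy as np

def build_morph_targets(base_vertices, frame_vertices):
    """Per-frame offsets from the shared base mesh (identical topology)."""
    return [frame - base_vertices for frame in frame_vertices]

def sample_performance(base_vertices, targets, t, fps=24.0):
    """Vertex positions at time t (seconds) by blending adjacent targets."""
    frame = t * fps
    i = int(np.floor(frame)) % len(targets)
    j = (i + 1) % len(targets)
    w = frame - np.floor(frame)               # blend weight between frames
    return base_vertices + (1.0 - w) * targets[i] + w * targets[j]

# Toy example: a 3-vertex "mesh" captured over 3 frames.
base = np.zeros((3, 3))
frames = [base + k * 0.1 for k in range(3)]   # fake captured frames
targets = build_morph_targets(base, frames)
print(sample_performance(base, targets, t=0.05))  # positions ~1.2 frames in
```

Because every frame shares one topology, only the offsets (or a compressed form of them) need to ship to the device, which is what makes mobile playback of the capture plausible.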
Expectation/Outcome:
The possibilities of what can be done with this type of capture system are staggering. Imagine being able to direct a shoot with actors and focus only on the performance. Later, the director can go back and replay that performance, watching it in 4D through an AR display, and only then worry about directing the camera, the lighting, or that dangerous stunt that would otherwise have been impossible. They could shoot a dolly move handheld and change the scene to night-time months after the talent has wrapped. These advances will be possible, and the work we are doing is contributing to them. There is an opportunity to captivate in ways the world has never seen. Our goal is to develop the first narrative “short film” produced as an AR application.
Our intent is that content creators will continue to use these capture innovations to bring meaningful story content to AR. We believe it’s one of the hallmarks of true disruption: to work with new technology and timeless storytelling to make something so captivating and meaningful that the technology itself disappears.
More inspiration via www.alexxhenry.com