What happened? University of Washington and Google have just announced HumanNeRF, a new 3D rendering system that can synthesize photorealistic free-viewpoint views of a human body in motion, down to fine details such as clothing and facial features. The system builds a volumetric representation of the person in a canonical T-pose and decomposes the observed motion into skeletal (rigid) and non-rigid components, all learned from single-camera video footage. This opens up many possibilities for 3D content creation and analysis!
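For readers who want a feel for how that pipeline fits together, here is a minimal, purely illustrative sketch of the idea described above: points along a camera ray are warped into a canonical T-pose volume via a skeletal (rigid) motion plus a small non-rigid correction, and the canonical volume is then rendered by alpha-compositing, NeRF-style. This is not the authors' code; every network, weight, and name here (tiny_mlp, skeletal_warp, render_ray) is a toy stand-in for the learned components.

```python
# Illustrative sketch only: warp observation-space points into a canonical
# T-pose volume (skeletal motion + non-rigid offset), then volume-render.
import numpy as np

rng = np.random.default_rng(0)

def tiny_mlp(in_dim, out_dim, hidden=32):
    """Randomly initialised 2-layer MLP standing in for a learned network."""
    w1, b1 = rng.normal(0, 0.1, (in_dim, hidden)), np.zeros(hidden)
    w2, b2 = rng.normal(0, 0.1, (hidden, out_dim)), np.zeros(out_dim)
    return lambda x: np.maximum(x @ w1 + b1, 0) @ w2 + b2

canonical_field = tiny_mlp(3, 4)        # (x, y, z) -> (r, g, b, density) in T-pose space
non_rigid_offset = tiny_mlp(3 + 8, 3)   # canonical point + pose code -> small correction

def skeletal_warp(points, bone_rotations, bone_translations, blend_weights):
    """Rigid part of the motion field: blend-skinning-style warp that maps
    observation-space points back toward the canonical T-pose."""
    warped = np.zeros_like(points)
    for R, t, w in zip(bone_rotations, bone_translations, blend_weights.T):
        warped += w[:, None] * (points @ R.T + t)
    return warped

def render_ray(origin, direction, pose_code, n_samples=64):
    """Sample points along a camera ray, warp them to canonical space,
    query the canonical volume, and alpha-composite the colours."""
    ts = np.linspace(0.5, 2.5, n_samples)
    pts = origin + ts[:, None] * direction

    # Toy single-bone "skeleton" so the example runs end to end.
    rotations, translations = [np.eye(3)], [np.zeros(3)]
    weights = np.ones((n_samples, 1))
    canonical_pts = skeletal_warp(pts, rotations, translations, weights)
    canonical_pts += non_rigid_offset(
        np.concatenate([canonical_pts, np.tile(pose_code, (n_samples, 1))], axis=1))

    raw = canonical_field(canonical_pts)
    rgb = 1 / (1 + np.exp(-raw[:, :3]))      # colour in [0, 1]
    sigma = np.maximum(raw[:, 3], 0)         # non-negative density

    delta = ts[1] - ts[0]
    alpha = 1 - np.exp(-sigma * delta)
    transmittance = np.cumprod(np.concatenate([[1.0], 1 - alpha[:-1]]))
    return np.sum((transmittance * alpha)[:, None] * rgb, axis=0)

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), rng.normal(size=8))
print("rendered pixel colour:", pixel)
```

In the real system the canonical volume, the non-rigid field, and the skinning weights are all optimized together from the input video; the toy version above only shows how the pieces plug into one another.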
Why is this important? From a single camera angle, the algorithm extracts photorealistic detail of the body, including clothing and the face, along with a decomposition of the motion into skeletal (rigid) and non-rigid parts. This enables content creators to 3D-print human models with unprecedented fidelity, or to create digital humans for use in games and movies. For researchers, the system could be used to study how people move or to develop new ways of tracking human motion for applications like virtual reality and robotics.
What’s next? The potential applications for this technology are virtually limitless. 3D-printing human models, developing new ways of tracking human motion, and creating digital humans are just a few of the possibilities. We can’t wait to see what creative people do with HumanNeRF!
What do you think? Let us know in the comments below.
Author: Christian Kromme
First Appeared On: Disruptive Inspiration Daily
The latest disruptive trends with converging technologies that will change your life!