I’d like to clarify some topics related to my last post:
- First of all, I’d like to credit the Institute for Creative Technologies for the amazing performance capture provided for the animation (http://ict.usc.edu/prototypes/digital-ira/). Their new capture technology enables photoreal facial animation performances together with extremely detailed skin features. The full team behind the capture, led by Paul Debevec, is the following: Oleg Alexander, Graham Fyffe, Jay Busch, Ryosuke Ichikari, Abhijeet Ghosh, Andrew Jones, Paul Graham, Svetlana Akim, Xueming Yu, Koki Nagano, Borom Tunwattanapong, Valerie Dauphin, Ari Shapiro and Kathleen Haase.
- Lauren’s head scan (the female one) was obtained from Infinite-Realities.
- Second, numerous sources have asked whether this was related to the NVIDIA FaceWorks demo presented at GTC. I’d like to clarify that we both use the same performance capture from the ICT, but the animation and rendering engines are completely different. In other words: same source data, but a different engine.
- Finally, I’d like to clarify that the technology we presented runs at its highest quality preset at 93/74 fps at 720p/1080p respectively, on a GeForce GTX 560 Ti (a two-year-old mid-range GPU).
Thanks to all the people who showed interest in our research; the slides will be available online very soon!