Last winter, I crafted a 15-second video for a client. Although I can't share the final product, I'm excited to give you a glimpse behind the scenes. This project was a deep dive into blending AI with traditional animation, starting with the creation and training of an AI model of my client (done with her full consent, of course). I then integrated this AI model with a base animation I had created in Cinema 4D, feeding that animation into WarpFusion for temporally consistent stylization.
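(For the curious: the post doesn't detail WarpFusion's internals, but the core idea behind this kind of temporally consistent mapping is optical-flow warping. Instead of stylizing each rendered frame from scratch, which flickers, the previous stylized frame is warped along the optical flow to the current frame and blended with the current render, and that blend seeds the next low-strength img2img pass. Below is a minimal sketch of that idea in Python using OpenCV's Farneback flow; the function names, blend weight, and uint8 BGR frame format are my assumptions for illustration, not details from the project.)

```python
import cv2
import numpy as np

def warp_previous_output(prev_styled: np.ndarray,
                         prev_frame: np.ndarray,
                         cur_frame: np.ndarray) -> np.ndarray:
    """Warp the previous stylized frame onto the current frame
    using dense optical flow (backward warping)."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)

    # Flow from current to previous: for each pixel in the current
    # frame, where did it come from in the previous frame?
    flow = cv2.calcOpticalFlowFarneback(
        cur_gray, prev_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Sample the previous stylized frame at those source locations.
    h, w = cur_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_styled, map_x, map_y, cv2.INTER_LINEAR)

def make_init_image(prev_styled, prev_frame, cur_frame, blend=0.6):
    """Blend the flow-warped previous output with the current render.
    The result would be handed to the img2img diffusion pass (the
    personalized model) at low strength, so each frame inherits most
    of its texture from the one before it rather than being
    re-imagined from scratch -- the main source of temporal flicker.
    The 0.6 blend weight is a hypothetical starting point."""
    warped = warp_previous_output(prev_styled, prev_frame, cur_frame)
    return cv2.addWeighted(warped, blend, cur_frame, 1.0 - blend, 0)
```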
This journey of exploration, research, and development involved a diverse toolkit: Cinema 4D, After Effects, WarpFusion, EbSynth, DaVinci Resolve, and Topaz Video AI. The work began with experiments in Unreal Engine, where I created and animated a MetaHuman version of my client, and ended in the depths of AI, all in pursuit of (hopefully) reducing the uncanny valley effect produced by purely 3D avatars. I'm not claiming to have escaped the uncanny valley, but compared to my MetaHuman renders, the outcome nudged me a tiny bit closer (in my humble opinion) to quasi-realism. This process of trial and error was a blast, and I'm excited to leverage the newer AI tools released since this project was completed to achieve even greater realism and temporal consistency.