Exploring the application of machine learning models to achieve advanced character control and improve the naturalness and immersion of virtual characters in games
Background & Objectives

FYP23008

Yu Ching Lok

Supervised by Dr. Choi, Yi King

Background:

AI technology is advancing rapidly and finding applications in many fields, including entertainment. Virtual YouTubers are one such application: users control animated avatars through camera-based capture of their facial expressions and body movements. However, current AI methods for avatar control have limitations. Landmark detection models support only basic movements and simple expressions, while more advanced control requires expensive motion capture systems. Expressing emotions typically requires manual input, which results in stiff, unnatural expressions. To overcome these challenges, I plan to integrate multiple AI models, such as emotion recognition and gesture recognition, for more natural and distinctive avatar control. This approach aims to enhance the user experience while keeping costs low by relying on a single camera.
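The proposal does not prescribe a specific implementation, but a minimal sketch of the intended single-camera pipeline might look like the following. The choice of MediaPipe Face Mesh and Hands for landmark detection, and the classify_emotion / classify_gesture helpers, are illustrative assumptions rather than the project's actual design:

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
mp_hands = mp.solutions.hands


def classify_emotion(face_landmarks):
    # Hypothetical placeholder: a real system would feed the facial
    # landmarks (or a cropped face image) to a trained emotion model.
    return "neutral"


def classify_gesture(hand_landmarks):
    # Hypothetical placeholder for a gesture-recognition model.
    return "idle"


def run_capture_loop():
    cap = cv2.VideoCapture(0)  # a single consumer webcam, no mocap rig
    with mp_face_mesh.FaceMesh(max_num_faces=1) as face_mesh, \
         mp_hands.Hands(max_num_hands=2) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            face_result = face_mesh.process(rgb)
            hand_result = hands.process(rgb)
            if face_result.multi_face_landmarks:
                emotion = classify_emotion(face_result.multi_face_landmarks[0])
                # drive the avatar's facial expression from `emotion` here
            if hand_result.multi_hand_landmarks:
                gesture = classify_gesture(hand_result.multi_hand_landmarks[0])
                # map `gesture` onto an avatar animation here
    cap.release()


if __name__ == "__main__":
    run_capture_loop()
```

The key design point this sketch illustrates is that all models consume frames from one shared camera stream, so emotion and gesture cues can be fused without extra hardware.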

Objectives:

This project aims to investigate the integration of artificial intelligence models, specifically emotion recognition and gesture recognition, to enable more natural and distinctive control of and interaction with avatars. The goal is to develop an application that gives users advanced avatar control, enhancing the realism of their interactions.

Brief Description