Why Oculus’ new expressive avatars don’t impress me

And what needs to change

Have you ever played a modern AAA title whose visuals are so beautiful they make your eyes bleed, only to be disappointed the moment a character shows any form of emotion? Even when the voice actor behind the face is pouring their heart out in the role, the model itself barely moves a muscle. This exact problem has bothered me for years, while many players have become desensitized to it.

During its keynote, Oculus also announced Lone Echo II, the sequel to the popular, story-driven Lone Echo. The trailer showed Commander Liv struggling to rebuild her robot companion, Jack. When the robot finally comes back online, the voice actor lets out a huge sigh of relief that transitions into a hysterical laugh. It is an expressive moment, portrayed wonderfully by the actor, but the character model's face shows little to no expression. In fact, more emotion is conveyed by her body, which flies back and slumps in zero gravity, than by her face. This is a prime example of the work still needed to create truly expressive modeled faces.

Liv’s Expression in the Lone Echo II Trailer

The same problem is amplified in VR. Imagine walking into a real party and finding that not a single person in the room seems to be enjoying themselves. That is how it feels to log into a VR chatroom today. Oculus has set out to make a dent in this problem with the announcement of "Expressive Avatars". These new avatars do away with the visors and glasses that traditionally cover the eyes of Oculus' avatars, allowing them to be more expressive. The eyes now mimic how real eyes move, locking on to people and points of interest, and there is simulated mouth movement and expression as well.

This is a valiant attempt to relieve some of the anxiety of speaking to a face chiseled from stone. However, these new avatars still don't seem to display much more expression than the modern AAA games I mentioned earlier. Unless a game uses motion capture to record a real actor's face, these movements usually animate individual parts of the face, such as the mouth or nose, in isolation. That is not how facial muscles work. In reality, the same muscles that cause you to smile also cause your cheeks to rise, your jaw to tense, and sometimes even your eyebrows to furrow. The face does not move in segments but as a single, dynamic structure.

Faces in video games and VR will remain cold and lifeless until 3D animators think of a face as a group of muscles, each one affecting the structure as a whole, not as a static plane with moving eyes and a mouth.
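To make the muscle-driven idea concrete, here is a minimal sketch of what that could look like in code. All names and weights are hypothetical, purely illustrative values, not any engine's actual API: a single "smile" muscle activation fans out to several facial regions through a coupling table, instead of each region being animated independently.

```python
# Hypothetical sketch: muscle-driven blend-shape weights.
# One muscle activation affects several regions of the face at once,
# mirroring how real facial muscles move the face as a whole.

REGIONS = ["mouth_corners", "cheeks", "jaw", "brows"]

# How strongly each muscle pulls on each region (illustrative weights).
COUPLING = {
    "zygomaticus": {  # the "smile" muscle
        "mouth_corners": 1.0,
        "cheeks": 0.6,
        "jaw": 0.3,
        "brows": 0.15,
    },
}

def solve_blend_weights(activations):
    """Map muscle activations (0..1) to per-region blend-shape weights."""
    weights = {region: 0.0 for region in REGIONS}
    for muscle, level in activations.items():
        for region, pull in COUPLING[muscle].items():
            # Accumulate each muscle's contribution, clamped to 1.0.
            weights[region] = min(1.0, weights[region] + pull * level)
    return weights

# A half-strength smile moves the whole face, not just the mouth:
print(solve_blend_weights({"zygomaticus": 0.5}))
```

The point of the sketch is the coupling table: animating "the mouth" in isolation would set one weight and leave the rest of the face frozen, while a muscle-based rig can't help but move the cheeks, jaw, and brows along with it.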
