Look at the difference between these pictures.
It’s easy to see that Senua looks far more realistic than the characters in L.A. Noire.
Both of these games use motion capture – the difference is the detail. L.A. Noire was, at the time, revolutionary in its use of mo-cap. But it’s in more modern games like Hellblade that we see just how realistic games can become.
A picture doesn’t tell the whole story. It’s only when you see mo-cap in action that you realise just how transformative this technology is for creating engaging, realistic media.
But how do studios build these characters?
From Real Faces to Digital Emotions
The human face contains over 40 muscles, each playing a unique role in displaying our emotions. From brows furrowing in confusion to lips lifting into a smile, our faces convey a range of feelings, often without us being fully conscious of it.
A crucial aspect of understanding how motion capture taps into this realm of emotion is recognising the sheer complexity of the human face. Every emotion we feel results in a dance of muscle movements, some overt and others incredibly subtle.
These nuanced movements, commonly called ‘micro-expressions’, are often fleeting but rich in emotional data. Capturing them accurately is essential, for they are key to rendering authentic and convincing digital emotions.
The Facial Capture Process
Facial capture, a subset of motion capture, involves recording an individual’s facial movements and expressions.
This process has evolved with various technologies, each with its challenges and steps from capture to rendering.
Marker-based Systems: Reflective or coloured markers are placed on the actor’s face. Multiple cameras track these markers to capture facial movements.
Markerless Systems: These systems use high-resolution cameras to capture facial expressions without markers.
Depth-sensing Systems: These use structured light or time-of-flight technologies to capture depth information, creating a 3D model of the actor’s face in real time.
Tracking and Solving: The captured data is processed to track facial movements accurately. A solver then translates this data onto a digital model.
Cleanup: Manual cleanup may be required to correct any errors or inconsistencies in the captured data.
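The tracking, solving, and cleanup steps above can be sketched in miniature. The Python below is a toy illustration, not a production solver: it maps a single tracked measurement (the spread between two hypothetical mouth-corner markers) onto one invented ‘smile’ blendshape weight, and fills dropped frames by interpolation. Real solvers fit dozens of shapes per frame; all names and values here are illustrative.

```python
# Toy sketch of the "solving" and "cleanup" steps of facial capture.
# All marker names, rest/max widths, and shapes are illustrative assumptions.
import math

def marker_distance(a, b):
    """Euclidean distance between two 3D marker positions."""
    return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))

def solve_smile_weight(left_corner, right_corner, rest_width, max_width):
    """Normalise mouth-corner spread into a 0..1 blendshape weight."""
    spread = marker_distance(left_corner, right_corner)
    weight = (spread - rest_width) / (max_width - rest_width)
    return max(0.0, min(1.0, weight))  # clamp to the rig's valid range

def cleanup(weights):
    """Fill dropped frames (None) by averaging neighbours - a simplified,
    automated version of the manual cleanup pass."""
    out = list(weights)
    for i, w in enumerate(out):
        if w is None:
            prev = next((out[j] for j in range(i - 1, -1, -1)
                         if out[j] is not None), 0.0)
            nxt = next((out[j] for j in range(i + 1, len(out))
                        if out[j] is not None), prev)
            out[i] = (prev + nxt) / 2
    return out
```

A real pipeline would solve every blendshape simultaneously from dozens of markers, but the shape of the problem – measure, normalise to rig space, repair gaps – is the same.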
Mapping and Retargeting
The processed motion data is mapped onto a 3D character model’s facial rig.
Retargeting involves adjusting the motion data to accurately portray the actor’s expressions on the character, which might have different facial proportions.
The final step is rendering, where lighting, textures, and other visual effects are added to produce the final, realistic animation.
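The core idea of retargeting can be shown in a few lines. This is a deliberately simplified, hypothetical sketch: real retargeting uses rig-space solvers and per-region correctives, but at its heart it rescales the actor’s captured movement to fit a character with different facial proportions.

```python
# Hypothetical sketch of proportion-aware retargeting: an actor's captured
# marker displacement is rescaled for a character whose face is a different
# size. Widths are in arbitrary units; real systems work per facial region.
def retarget_offset(actor_offset, actor_face_width, character_face_width):
    """Scale a captured (x, y, z) displacement by the ratio of face widths,
    so the expression reads proportionally on the character."""
    scale = character_face_width / actor_face_width
    return tuple(axis * scale for axis in actor_offset)
```

For example, a mouth-corner movement captured on a 14-unit-wide face would be scaled up by 1.5x for a stylised character with a 21-unit-wide face.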
Challenges in Facial Capture
Resolution and Accuracy: Capturing the subtleties of facial expressions requires high-resolution and accurate data capture, which can be challenging, especially in real-time scenarios.
Real-Time Performance: Real-time facial capture and rendering demand significant computational resources and optimised pipelines to deliver convincing results with low latency.
Occlusions: In marker-based systems, occlusions can occur when markers are obscured from the cameras, leading to lost data.
Data Volume: The sheer volume of data generated, especially in high-resolution or real-time systems, requires efficient data management and processing pipelines.
Mapping and Retargeting: Accurately transferring facial expressions from the actor to a character with different facial proportions is a complex task that requires sophisticated retargeting algorithms.
Lighting and Texture: Achieving realistic lighting and texture in the final render is a significant challenge, requiring advanced rendering techniques and often manual tweaking by artists.
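To make the data-volume challenge concrete, here is a back-of-envelope calculation. The figures are illustrative assumptions, not measurements from any real system: 60 markers, 120 frames per second, 8 cameras, and two float32 image-space coordinates (8 bytes) per marker per camera.

```python
# Back-of-envelope sketch of raw data volume in a marker-based session.
# All figures are illustrative assumptions, not real-system measurements.
def session_bytes(markers, fps, seconds, cameras, bytes_per_sample=8):
    """Raw samples: 2 float32 image coords (8 bytes) per marker,
    per camera, per frame."""
    return markers * fps * seconds * cameras * bytes_per_sample
```

Under these assumptions, a ten-minute take (60 markers, 120 fps, 600 s, 8 cameras) already produces roughly 276 MB of raw marker samples, before any video, depth, or solved animation data is stored.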
Differentiation between Technologies
Marker-based Systems:
- Require physical markers and multiple cameras.
- Can suffer from occlusion issues.
- Generally provide a high level of accuracy.
Markerless Systems:
- Don’t require physical markers, hence no marker-related occlusion issues.
- Often use machine learning algorithms for tracking facial expressions.
- Might struggle with capturing finer details compared to marker-based systems.
Depth-sensing Systems:
- Can capture 3D facial geometry in real time.
- Less prone to occlusion issues.
- Can provide a more holistic view of facial movements by capturing depth information.
Each of these technologies comes with its own set of advantages and disadvantages.
The choice between them depends on the specific requirements of a project, such as the level of detail needed, real-time performance, and budget constraints.
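The trade-offs above can be summarised as a toy decision helper. The rules below are a simplification of this article’s criteria, not an industry standard, and the system names are just the three categories discussed.

```python
# Toy decision helper reflecting the trade-offs described above.
# The rules are a simplification of the article's criteria, not a standard.
def suggest_system(need_fine_detail, need_realtime_3d, budget_limited):
    if need_realtime_3d:
        return "depth-sensing"   # captures 3D geometry live
    if need_fine_detail and not budget_limited:
        return "marker-based"    # highest accuracy, highest setup cost
    return "markerless"          # cheapest to stage, may miss fine detail
```

In practice, studios often combine approaches, for example a markerless head-mounted camera for the face alongside marker-based capture for the body.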
Why Do It? – The Magic of Believable Digital Characters
At the heart of every compelling story lies emotion. It pulls us into narratives and makes us care about the characters. Creating that digitally is difficult, but when digital characters showcase believable emotions, they are more than just a collection of data and animations – they feel alive, relatable, and genuine.
When we see a character’s eyes well up with tears or their face light up with joy, mirroring genuine human emotion, we’re more likely to feel for them, root for them, or even cry and laugh with them. This emotional tether deepens our connection to the narrative and amplifies the story’s overall impact.
Take Hellblade. In the first game, the accuracy of the emotions portrayed pulls us into Senua’s story. We feel her anger, her confusion, and her despair as we journey with her through the underworld. Combined with the audio and environment, it’s an incredibly immersive experience, and the second game will likely feature the same strong storytelling through motion capture.
Why Don’t All Games Use Motion Capture?
Motion capture has undoubtedly revolutionised the gaming industry. It offers unparalleled realism and emotion in character movements and expressions, which raises the question – why isn’t it used everywhere?
The most obvious answer is budget constraints. Mo-cap studios are expensive, requiring dedicated space for actors to move around. Developers also need to hire actors. On top of that come the cameras, software, and other hardware needed to capture and process actors’ movements.
For many indie developers, the capital outlay simply isn’t worth it. Traditional methods still work very well, and have become even more accessible as technology improves.
Some developers don’t want to use mo-cap. Their animation style may not require it, and the uniqueness of their games often stems from their bespoke animations. ‘Hollow Knight’ is an excellent example of a classically styled game that isn’t held back by its lack of mo-cap.
And not all games focus on deep narratives or character-driven plots. For titles where character movements aren’t central to the player’s experience, the added detail of mo-cap might not significantly enhance the gameplay.
Motion Capture – Digital Emotions for Immersive Games
Mo-cap isn’t always needed, but it shines in realistic, character-driven games. Hellblade, God of War, The Last of Us – all are better for using mo-cap to portray facial expressions and emotions realistically.
The technology is impressive. Combined with the talents of the actors and developers, motion capture apps and systems are a crucial part of modern digital media.
Ready to bring your unique characters to life with the magic of motion capture?
Get the Performit Live app now and start creating unforgettable animations. Let Performit Live help you tell your story.