This week, the AI video generator Runway launched Act-One, a feature that could usher in a new era for animation. It tackles two major challenges in AI video generation: creating nuanced facial expressions and maintaining character consistency.

All of this can be achieved without using complex facial motion capture software to map expressions onto digital avatars.

How Does Runway Act-One Work?

Runway Research describes Act-One as lowering the barriers to creating animated content: “Traditional pipelines for facial animation often involve complex, multi-step workflows. These can include motion capture equipment, multiple footage references, manual face rigging, among other techniques…Our approach uses a completely different pipeline, driven directly and only by a performance of an actor and requiring no extra equipment.”
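To make that pipeline concrete, here is a minimal, hypothetical sketch of what a performance-driven animation call might look like in Python. The endpoint, field names, and response format below are illustrative assumptions, not Runway’s documented API; the point is the shape of the workflow: one driving video of an actor, one character image, one generated clip, and no rigging or motion-capture hardware.

```python
# Hypothetical sketch of a performance-driven animation workflow in the spirit
# of Act-One: a single driving video of an actor plus a character image
# produce an animated clip. The endpoint, field names, and polling flow are
# illustrative assumptions, not Runway's documented API.
import time
import requests

API_URL = "https://api.example-video-gen.com/v1/character-performance"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # assumed bearer-token auth


def animate_character(driving_video_path: str, character_image_path: str) -> str:
    """Upload an actor's performance video and a character image,
    then return a URL to the generated animation."""
    with open(driving_video_path, "rb") as video, open(character_image_path, "rb") as image:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"driving_video": video, "character_image": image},
        )
    response.raise_for_status()
    task_id = response.json()["task_id"]  # assumed asynchronous task pattern

    # Poll until the generation task finishes.
    while True:
        status = requests.get(
            f"{API_URL}/{task_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        ).json()
        if status["state"] == "succeeded":
            return status["output_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)


if __name__ == "__main__":
    print(animate_character("actor_take.mp4", "witch_character.png"))
```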

Runway’s introductory video features people sharing everyday moments: a phone breaking, cheering for a team project, picking out groceries, sparring with a boss, recounting a funny story.

Their faces spoke louder: frowns of frustration, gasps of shock, squints of doubt, bursts of laughter, pouts of disappointment.

Act-One then transferred these expressions to animated characters, cartoon or realistic, including witches, kings, foxes, and even a Roman sculpture, who appear confused, joyful, surprised, angry, sad or indifferent.

The aspiration is to let users animate faces that respond with lifelike expressions, making characters more believable and stories more engaging.

How Does It Change Animation?

Walt Disney just formed a new business unit to evaluate and develop the use of AI and mixed reality across its film, television and theme park divisions.

To see why AI technologies matter, look back at animation’s evolution.

In 1937, Disney’s Snow White and the Seven Dwarfs stunned audiences as the first full-length animated feature. Every frame was drawn by hand and painted on transparent celluloid sheets to capture movement and facial expression coherently. It set the bar high for decades.

Then, in 1995, Pixar’s Toy Story marked another milestone as the first feature film made entirely with computer-generated imagery (CGI). Animators built 3D characters that moved with depth and detail.

Avatar (2009) marked another milestone in animation. By combining CGI with performance capture, Avatar brought hyper-realistic characters to life. One striking example is when Jake Sully, played by Sam Worthington, tames his banshee. The scene reveals every flicker of ambivalent emotion—audacity and determination mixed with caution and underlying fear.

This blend of actor-driven motion capture and CGI set a new standard for animated films.

Runway’s Act-One opens the potential for a new age of animation by democratizing creation, without the need for complex equipment or extensive expertise.

Many AI-generated videos hallucinate: characters morph unpredictably, and viewers lose trust when faces change mid-story.

Act-One fixes this by letting users film themselves or actors and map those movements directly to a character.

In this way, the character stays intact, anchoring the story.
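Continuing the hypothetical sketch above, that consistency comes from reusing the same character reference across every take: the driving performances change, but the character image does not.

```python
# Reuse one character reference across several performance takes so the
# character's appearance stays consistent from scene to scene (hypothetical,
# building on the animate_character sketch above).
scenes = ["scene1_take.mp4", "scene2_take.mp4", "scene3_take.mp4"]
clips = [animate_character(take, "witch_character.png") for take in scenes]
for url in clips:
    print(url)
```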

Leverage Runway AI For Inclusive Storytelling

Runway Act-One may impact a number of industries, from e-commerce and entertainment to social media content development and corporate training.

Advertising and marketing can be made more compelling, and gaming can be made more interactive.

But does every technological advance have to be profit-driven?

Instead, creators should take advantage of Act-One to tell stories that have gone unheard. It is a great opportunity to advocate for social and racial justice, as well as sustainable practices, with dynamic visuals and little financial cost.

We can interview people around us whose life experiences are deeply moving.

Picture capturing your grandmother’s stories from a small village, recounting her life, the social changes she witnessed, and the struggles she overcame.

Document a friend’s narration of efforts to counter racial violence—stories of resilience, perseverance, communal support and hope.

Imagine interviewing a local community member who has dedicated their life to protecting nearby forests.

Or think of visualizing the story of a volunteer who helps rebuild homes in a hurricane-hit area.

An expression needs meaning and rich experiences to move people’s hearts.

The true value of apps like Act-One lies in the freedom to choose characters grounded in real-life experience.

YouTubers can use Act-One to shine a light on lives in their communities filled with twists, tenacity, and strength.

Nonprofit organizations can use animated educational videos to document the oral histories of marginalized groups and the effects of environmental degradation.

The animation and entertainment industries are already leaning in. Directors like James Cameron and Jia Zhangke are planning to use AI in their films.

But a broader range of creators outside the film industry will help make animation more inclusive and diverse, where voices and stories from all walks of life can shine.

Limitations And Potential Risks Of AI Animations

Despite their promise, Act-One and other AI video generators have limitations.

  1. Head movements are mostly confined to a single plane, so characters can’t easily turn to speak to another character, or move dynamically while speaking.
  2. Spatial and narrative contexts, and interactions between characters, are difficult to generate with AI.
  3. Creating complex scenes requires multiple reference images and frames. More tutorials and examples from creator communities can help.
  4. When anyone can build an identity, the line between authentic expression and constructed personas blurs.

Young people especially may find themselves navigating a world filled with AI-crafted faces and voices.

As these tools become part of our daily lives, we must ask: How do we separate fiction from reality? What stories will we choose to tell? And what will that mean for who we are?

Full-length, detailed animations may still be a ways off. But we can actively shape the development of AI video generators by creating content that serves the public good and evokes genuine human emotions.

It’s about reshaping how we see ourselves and how we can promote stories that make positive social change.
