Runway Act-One: AI motion tracking with your smartphone camera



AI video has come incredibly far in the years since the first models debuted in late 2022, improving in realism, resolution, reliability, prompt adherence (how well the output matches the text description the user typed), and the sheer number of models available.

But one area that remains a limitation for many AI creators – myself included – is rendering realistic facial expressions in AI-generated characters. Most seem quite limited and difficult to control.

But that’s no longer the case: Today, Runway, the New York City-headquartered AI startup backed by Google and others, announced a new feature called “Act-One,” which lets users record video of themselves or actors on virtually any video camera – even the one on a smartphone – and then transfers the subject’s facial expressions onto an AI-generated character with uncanny accuracy.

The free-to-use tool is rolling out gradually to users starting today, according to Runway’s blog post about the feature.

While anyone with a Runway account can access it, it will be limited to those who have enough credits to generate new videos on the company’s Gen-3 Alpha video generation model introduced earlier this year, which supports text-to-video, image-to-video, and video-to-video AI creation pipelines (for example, a user can type a scene description, upload an image or video, or use a combination of these inputs, and Gen-3 Alpha will use whatever it is given to generate a new scene).
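For developers who prefer to script these pipelines rather than use the web app, a minimal sketch of an image-to-video request is shown below, assuming Runway’s official runwayml Python SDK; the model identifier (“gen3a_turbo”), parameter names, and polling pattern are assumptions drawn from the SDK’s general shape and may differ from the current documentation.

```python
# Minimal sketch, assuming the runwayml Python SDK (pip install runwayml).
# Model id, parameter names, and task statuses are assumptions and may
# differ from Runway's current API documentation.
import time

from runwayml import RunwayML

client = RunwayML()  # reads the RUNWAYML_API_SECRET environment variable

# Kick off an image-to-video generation (assumed Gen-3 Alpha Turbo model id).
task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/first-frame.png",  # starting frame
    prompt_text="A close-up of a character slowly smiling, cinematic lighting",
)

# The API is asynchronous: poll the task until it finishes, then print the result.
while True:
    status = client.tasks.retrieve(task.id)
    if status.status in ("SUCCEEDED", "FAILED"):
        print(status.status, getattr(status, "output", None))
        break
    time.sleep(5)
```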

Despite its limited availability at the time of this post, the growing online community of AI creators is already welcoming the new feature.

As Allen T. noted on his X account: “This is a game changer!”

It also comes on the heels of Runway’s move into Hollywood film production last month, when it announced it had signed a deal with Lionsgate, the studio behind the John Wick and Hunger Games film franchises, to create a custom AI video generation model based on the studio’s catalog of more than 20,000 titles.

Simplifying a traditionally complex and labor-intensive creative process

Traditionally, facial animation has required extensive and often cumbersome processes, including motion capture equipment, manual face rigging, and multiple reference images.

Anyone interested in filmmaking has probably noticed some of the complexity and difficulty of this process on set or when watching behind-the-scenes footage of effects- and motion-capture-heavy films such as The Lord of the Rings series, Avatar, or Rise of the Planet of the Apes, in which actors are seen wearing suits covered in ping-pong-ball markers, their faces dotted with tracking points and obscured by head-mounted camera rigs.

Accurately modeling complex facial expressions is what drove David Fincher and his production team on The Curious Case of Benjamin Button to develop entirely new 3D modeling processes, ultimately earning them an Academy Award, as reported in a previous VentureBeat report.

Still, in recent years, new software and AI-based startups like Move have tried to reduce the equipment needed to perform precise motion tracking — though that company in particular has focused mostly on full-body and broader movements, while Runway’s Act-One is more focused on modeling facial expressions.

With Act-One, Runway aims to make this complex process far more accessible. The new tool allows creators to animate characters in different styles and designs, without the need for motion capture equipment or character rigging.

Instead, users can rely on a simple driving video to transfer performances (including eye lines, micro-expressions, and nuanced pacing) onto a generated character, or even multiple characters in different styles.
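Act-One itself currently ships through Runway’s web app, so purely as an illustration of the driving-video workflow described above, here is a hypothetical client sketch: the endpoint path, field names, and response shape are all invented for this example and do not document a real Runway API.

```python
# Hypothetical sketch only: the endpoint, field names, and response shape
# below are invented to illustrate the driving-video workflow; they do not
# correspond to any documented Runway API.
import os
import time

import requests

API_BASE = "https://api.example-runway-host.com/v1"  # invented base URL
HEADERS = {"Authorization": f"Bearer {os.environ['RUNWAY_API_KEY']}"}


def animate_character(driving_video_url: str, character_image_url: str) -> str:
    """Submit an actor's driving video plus a character reference, then poll."""
    resp = requests.post(
        f"{API_BASE}/act_one",  # hypothetical endpoint name
        headers=HEADERS,
        json={
            "driving_video": driving_video_url,           # the recorded performance
            "character_reference": character_image_url,   # the target character
        },
    )
    resp.raise_for_status()
    task_id = resp.json()["id"]

    # Poll the (hypothetical) task endpoint until the render completes.
    while True:
        status = requests.get(f"{API_BASE}/tasks/{task_id}", headers=HEADERS).json()
        if status["status"] in ("SUCCEEDED", "FAILED"):
            return status.get("output", status["status"])
        time.sleep(5)
```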

As Runway wrote on its X account, “Act-One is able to translate the performance of a single input video into countless different character designs and in many different styles.”

The feature is “mainly” focused on the face for now, according to Runway co-founder and CEO Cristóbal Valenzuela, who responded to VentureBeat’s questions via direct message on X.

Runway’s approach offers significant benefits for animators, game developers and filmmakers alike. The model accurately captures the depth of an actor’s performance while remaining versatile across different character designs and proportions. This opens up exciting possibilities for creating unique characters that express real emotions and personality.

Cinematic realism from a variety of camera angles

One of Act-One’s key strengths lies in its ability to deliver realistic, cinema-quality results from a variety of camera angles and focal lengths.

This flexibility increases creators’ ability to tell emotionally resonant stories through character performances that were previously difficult to achieve without expensive equipment and multi-step workflows.

The tool faithfully captures an actor’s emotional depth and performance style, even in complex scenes.

This shift allows creators to bring their characters to life in new ways, unlocking the potential for richer storytelling in both live-action and animated formats.

While Runway previously supported video-to-video AI conversion, as mentioned earlier in this piece — allowing users to upload footage of themselves and “reskin” it with AI effects using Gen-3 Alpha or earlier Runway video models such as Gen-2 — the new Act-One feature is optimized for facial mapping and effects.

As Valenzuela told VentureBeat via DM on X, “The consistency and performance are unparalleled with Act-One.”

Enabling more expansive video storytelling

A single actor, using only a consumer-grade camera, can now play multiple characters, with the model generating different output for each.

This capability is poised to transform narrative content creation, especially in indie film and digital media production, where high-quality production resources are often limited.

In a public post on X, Valenzuela noted a shift in the way the industry approaches generative models. “We have now crossed the threshold of asking whether generative models can generate consistent videos. A good model is now the new baseline. The difference lies in what you do with the model – how you think about its applications and use cases, and what you ultimately build,” Valenzuela wrote.

Safety protections against impersonation of public figures

As with all Runway releases, Act-One ships with a comprehensive suite of safety measures.

These include safeguards to detect and block attempts to generate unauthorized content featuring public figures, as well as technical tools to verify voice usage rights.

Continuous monitoring also ensures that the platform is used responsibly, preventing possible misuse of the tool.

Runway’s commitment to ethical development aligns with its broader mission to expand creative possibilities while maintaining a strong focus on safety and content moderation.

Looking ahead

As Act-One is gradually rolled out, Runway is eager to see how artists, filmmakers, and other creators will use this new tool to bring their ideas to life.

With Act-One, complex animation techniques are now within reach of a broader audience of creators, allowing more people to explore new forms of storytelling and artistic expression.

By reducing the technical barriers traditionally associated with character animation, the company hopes to inspire new levels of creativity in the digital media landscape.

It also helps Runway stand out and differentiate its AI video creation platform against a growing number of competitors, including US-based Luma AI and China’s Hailuo and Kling, as well as open-source rivals such as Genmo’s Mochi 1, which also debuted just today.


