Imagine having the power to embody any character you can envision, where your expressive presence—both visually and vocally—brings an animated persona to life. This is the opportunity RunwayML’s Act-One provides, giving artists, performers, and storytellers a new tool to channel their creativity. Whether you’re exploring a quirky cartoon or a lifelike character, Act-One offers a way to merge your personality with animation in an accessible, intuitive process.
With just a simple video, Act-One can capture not only the subtleties of your facial expressions but also the emotional depth of your vocal performance. Your highs, lows, and everything in between add another layer to the character’s presence. This means that, regardless of your background in animation, you have the potential to create vibrant, dynamic characters through your unique style and voice. Act-One doesn’t just see your expressions; it listens to your voice, capturing visual nuances and vocal dynamics to create a complete, expressive character.
Getting Started with RunwayML Act-One
RunwayML’s Act-One is a groundbreaking AI-powered tool designed to transform your performance into expressive character animation. Traditionally, creating high-quality character animations required technical expertise, specialized equipment, or complex software. Act-One changes this by allowing anyone to bring characters to life through a simple performance video—capturing both visual expressions and vocal dynamics.
Workflow Overview: Bringing Your Character to Life
This section will guide you from concept through the final animated product, with tips to streamline the process and bring out the best in your animation.
Create Your Character Image - Conceptualize your character’s style and personality. Use a forward-facing image with a neutral expression, plain background, and even lighting so Act-One can animate the character naturally. You can choose a character image from the provided library or upload your own. For my example, I generated my character in Midjourney from a text prompt, then used that result as a seed image to create a few variations to choose from.
Capture Your Feeder Video - The feeder video is your “driving” performance. Use a simple background, keep movements subtle, and, if possible, use a quality microphone in a quiet space to capture the richness of your vocal performance. High-quality audio enhances the emotional depth of your character and helps it fit seamlessly into the animation’s environment. If your computer has a camera, you can record directly in the web application, or you can upload a video; I recorded mine with the front-facing camera on my iPhone. Tip: Trim your feeder recordings to include only the performances you want to animate. This saves credits (and money), since you are charged per second of generation with a five-second minimum per clip.
Animating with Act-One - Import both your character image and feeder video into RunwayML, select the Act-One Gen-3 Alpha model, and let the tool map your performance onto the character. Adjust animation settings such as expression intensity and choose landscape or portrait output format. Once ready, generate the animation and download it in your desired format (MP4 or GIF) for further editing or sharing.
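The trimming tip above can be scripted before you upload. As a minimal sketch (file names and timestamps are hypothetical), the following builds an ffmpeg command that cuts a raw recording down to a single take without re-encoding:

```python
def trim_command(src: str, dst: str, start: float, duration: float) -> list[str]:
    """Build an ffmpeg invocation that trims a clip via stream copy (no re-encode)."""
    return [
        "ffmpeg",
        "-ss", str(start),    # seek to the start of the take (seconds)
        "-i", src,            # raw feeder recording
        "-t", str(duration),  # keep only this many seconds
        "-c", "copy",         # stream copy: fast and lossless
        dst,                  # trimmed output, ready to upload
    ]

cmd = trim_command("feeder_raw.mov", "take1.mov", 12.5, 8.0)
print(" ".join(cmd))
```

Because `-c copy` avoids re-encoding, the cut is nearly instant and preserves the original video and audio quality.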
Cost and Subscription Information
Currently, Act-One costs 10 credits per second of generated animation. For example, my seven clips, totaling 59 seconds, required 590 credits, which is approximately $9.20 on a monthly Pro Plan subscription. Costs vary by subscription plan and usage, but Act-One’s flexibility and creative possibilities make it a worthwhile investment for digital creators.
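As a back-of-the-envelope check of those numbers, here is a small Python sketch of the pricing model. The dollar rate is inferred from the $9.20 / 590-credit example above, not an official figure, and the clip breakdown is illustrative:

```python
CREDITS_PER_SECOND = 10      # Act-One's stated rate
MIN_SECONDS = 5              # minimum charge per generation
USD_PER_CREDIT = 9.20 / 590  # inferred from the example, not official pricing

def generation_credits(clip_seconds):
    """Total credits for a list of clip lengths, applying the 5-second minimum."""
    return sum(max(s, MIN_SECONDS) * CREDITS_PER_SECOND for s in clip_seconds)

# Seven clips totaling 59 seconds (one possible breakdown)
clips = [10, 9, 8, 8, 8, 8, 8]
credits = generation_credits(clips)
print(credits, round(credits * USD_PER_CREDIT, 2))
```

Note how the five-second minimum changes the math: a 3-second take still costs 50 credits, which is another reason to trim and batch short performances deliberately.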
Preparing Your Assets
To create a compelling animation with Act-One, starting with well-prepared assets—your character image and feeder video—is essential. Here’s a guide to setting up each one effectively.
Character Image
Human-like Face: Use a human-like character for compatibility. Act-One will alert you if the face cannot be animated.
Neutral Expression and Direct Facing: Face the camera straight on with a neutral expression to ensure balanced animation.
Lighting and Background: Good lighting and a plain background (white, gray, or green) ensure Act-One can focus on facial details.
High Resolution and Detail: Ensure a high-quality image with sharp focus.
Framing: Capture or generate the character from the shoulders up, with emphasis on the face. If creating your character with an AI image generation tool like Midjourney, DALL-E, or Leonardo, include these details in the prompt to ensure compatibility.
Feeder Video
The feeder video captures your unique performance and drives the animation. For the best results:
Simple Background: Use a plain background; white or gray works best. Green should work fine, but might introduce color spill on the actor’s face or clothing, which could interfere with the animation process.
Controlled Head Movements: Keep head movements subtle and avoid extreme angles.
Natural Facial Expressions: Keep expressions realistic and not overly exaggerated.
Direct Facing: Face the camera straight-on for the best tracking.
Soft, Even Lighting: Diffuse light ensures facial details are clear and avoids shadows.
High-Quality Audio: Use a quality microphone in a quiet location. Clear, rich audio adds to the character’s emotional impact.
High Resolution: Use high-resolution video for better capture of expressions.
Animating and Finalizing Your Character
Once your assets are ready, bring them into RunwayML for animation and refinement.
Importing Assets: Upload your character image and feeder video in compatible formats.
Applying the Act-One Model: Select the Act-One Gen-3 Alpha model and map your performance onto the character.
Adjusting Animation Settings: Experiment with settings like expression intensity to achieve your desired look.
Generate and Export: After generating, download the animation in a high-quality format. If you plan to add backgrounds or effects, consider using a green background and plan for post-production keying.
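As a sketch of that post-production keying step, the snippet below builds an ffmpeg command using the `chromakey` filter to drop a green background and composite the animation over a new backdrop. File names are hypothetical, and the similarity/blend tolerances are starting points you would tune per shot:

```python
def composite_command(animation: str, background: str, dst: str,
                      similarity: float = 0.15, blend: float = 0.05) -> list[str]:
    """Key out green in `animation` and overlay it on `background` with ffmpeg."""
    graph = (
        f"[0:v]chromakey=green:{similarity}:{blend}[keyed];"  # remove the green
        "[1:v][keyed]overlay=format=auto[out]"                # layer over backdrop
    )
    return [
        "ffmpeg",
        "-i", animation,             # Act-One export (green background)
        "-i", background,            # new scene plate
        "-filter_complex", graph,
        "-map", "[out]",
        "-map", "0:a?",              # keep the performance audio if present
        dst,
    ]

cmd = composite_command("character.mp4", "scene.png", "final.mp4")
```

A dedicated editor’s keyer will give you finer control, but a command-line pass like this is handy for quick previews of how the character sits in its new environment.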
Enhance the animation in your video editor to finalize your character’s look:
Import into Video Editor: Match project settings to the animation.
Edit: Assemble multiple short generated clips if creating a composite story.
Add Backgrounds and Layers: Use backgrounds or layering to add depth.
Effects and Transitions: Color grading, shadows, and lighting can polish the animation.
Sound and Voiceover: Add audio to support facial movements and enhance realism. If you recorded in a quiet, ambience-free room, consider adding vocal processing, background sounds, or ambient effects to help the character feel at home in its fictional environment.
Fine-Tune Animation Speed: Adjust timing to match other elements.
Final Export: Export for your intended platform, considering platform-specific quality settings.
Experimentation and Uses
Experimenting with Act-One can spark ideas across a range of applications:
Your Own Demo: Showcase your animated character as a proof of concept.
Storytelling: Act-One enables diverse character portrayals in storytelling.
Virtual Performances: Create animated personas for digital events.
Social Media Content: Develop unique animated characters for social engagement.
Educational Use: Use animated guides for more engaging instruction.
The Bigger Picture: Current and Future Uses
RunwayML’s Act-One is breaking down traditional barriers and expanding what’s possible in animation, with potential uses that reach far beyond current applications. The name “Act-One” hints that the current iteration may be the start of a series of future “Acts” that empower the Creative Mind with more tools and capabilities. Here’s how Act-One (and future versions) could shape creative fields in the years to come:
Empowering Diverse Performances: Act-One opens doors for people who may have felt limited by their appearance or technical skills. This tool empowers anyone with an expressive performance to embody a character, fostering inclusivity and diversity in storytelling, virtual performances, and branding.
Transforming Creative Workflows: Act-One is streamlining the creative process. As technology improves, simpler and faster workflows may emerge, enabling creators to integrate animated characters seamlessly into their projects.
Expanding into New Fields: Future versions of Act-One could impact various fields. In education, animated characters could engage students interactively. In therapy, relatable avatars could make virtual sessions more approachable, enhancing mental health support.
Augmented and Virtual Reality: With the growth of AR and VR, Act-One could soon be used in immersive environments, enabling anyone to embody a character and interact in virtual spaces, perform for audiences, or engage in collaborative storytelling in real time.
Future Innovations in Animation and AI: With continued AI advancements, we can expect greater precision, realism, and creative control in tools like Act-One. Future iterations might include customization options, real-time character rendering, and tools that allow users to fine-tune every aspect of the animation.
Act-One exemplifies how AI is democratizing creativity, giving people from all backgrounds the power to bring their ideas to life. The creative mind no longer needs to be limited by physical or technical constraints—these tools provide a new canvas for expression, opening endless possibilities for the future.
Conclusion
As AI continues to evolve, tools like Act-One remind us that technology can be a catalyst, not a replacement, for human creativity. Yes, AI may feel intimidating, but when we approach it as a tool to amplify our expressive abilities, it becomes a partner in our creative journey. Act-One offers the chance for anyone, regardless of background, to bring characters to life in ways previously unimaginable—a new frontier for storytellers, performers, and dreamers alike.
The future of creativity is bright, as long as we, as humans, stay at the helm, using these tools to push boundaries and amplify our unique voices. My mind is personally spinning with possibilities on how to leverage this technology for creative expression. Give it a go… you may just amaze yourself.
If you enjoyed this article, please like and subscribe to my newsletter, "AI for the Creative Mind." I’d love to hear your thoughts or experiences with Act-One in the comments. How has this tool expanded your creative process? Let's continue the conversation and explore the possibilities together!