In 2025, with technology exploding around LLM capabilities, many of us are constantly chasing the next new thing. It can feel exhausting. So we look to others for input, combing Reddit just to hear another real human user’s opinion of a model, or to figure out what people are actually saying beyond the blogosphere.
But sometimes something just sticks out at you – if that’s the right way to say it.
Runway’s “Whisper Thunder” model, a.k.a. Gen 4.5, is a GenAI behemoth that produces strikingly robust video. Early demos show people and animals running down streets, morphing into one another, and experiencing the kinds of thrills you’d expect to see on the big screen, if you’re the type who still goes to the movies in the age of streaming.
So what is Whisper Thunder? And who is Runway?
A Dark Horse
Unfortunately, we don’t have much by way of Reddit reviews, and even YouTube fails to turn up instances of people playing around with this technology, or at least none willing to label their content as such.
There’s scant information on Runway, too: this isn’t OpenAI, or Google, or Anthropic. My digging turned up a partial list of investors: General Atlantic, Baillie Gifford, Nvidia and Salesforce Ventures. So apparently Jensen Huang knows who these people are.
There’s a little more in a CNBC interview with Runway CEO Cristóbal Valenzuela, who says the new video model was “an overnight success that took like seven years,” and who gives the nod to competition in the AI industry, something that matters to many folks.
“[Runway is] excited to be able to make sure that AI is not monopolized by two or three companies,” Valenzuela said.
The coverage also reveals that Gen 4.5, a.k.a. Whisper Thunder, went by the name “David” while in production, a reference to the company’s underdog status.
Runway’s Razzle Dazzle
One of my favorite podcasters, Nathaniel Whittemore, offers this take on Runway’s new model, although I believe he is quoting someone else’s words:
“Runway Gen 4.5 is state of the art, and sets a new standard for video generation, motion quality, prompt adherence and visual fidelity. It certainly outperforms on the text to video leaderboard … and it seems like a lot of the advancement fits what I would call the “unlock score,” basically, improvements that unlock use cases that would have been difficult, if not impossible, before.”
For reference, here’s a quick list of features pulled from a post on Y Combinator’s Hacker News:
“• Text → Video: Enter a prompt, choose a style/ratio, and it generates a complete video.
• Cinematic Quality: Natural motion, consistent scenes, realistic lighting — more stable than most similar tools.
• Fast & Easy: No watermark, no payment required, and quick generation — great for video prototyping.
• Style Control: Supports realistic, animated, and cinematic styles, and can maintain consistency across shots using reference images.”
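If you’re curious what that “enter a prompt, choose a style/ratio” flow could look like programmatically, here’s a minimal sketch. To be clear, this is an illustration only: the endpoint URL, parameter names, and response fields are assumptions I’ve made for the example, not Runway’s documented API, so treat it as the shape of the workflow rather than working integration code.

import time
import requests

# Hypothetical text-to-video service -- the URL, field names, and response
# shape here are illustrative assumptions, not any vendor's real API.
API_BASE = "https://api.example-video-gen.com/v1"
API_KEY = "YOUR_API_KEY"

def generate_video(prompt: str, style: str = "cinematic", ratio: str = "16:9") -> str:
    """Submit a prompt and poll the (hypothetical) job until a video URL is ready."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # Kick off generation with a prompt, style, and aspect ratio, mirroring
    # the "enter a prompt, choose a style/ratio" flow described above.
    job = requests.post(
        f"{API_BASE}/text-to-video",
        json={"prompt": prompt, "style": style, "ratio": ratio},
        headers=headers,
        timeout=30,
    ).json()

    # Poll until the job finishes; most generation services expose a similar
    # submit-then-poll loop because renders take a while.
    while True:
        status = requests.get(
            f"{API_BASE}/jobs/{job['id']}", headers=headers, timeout=30
        ).json()
        if status["state"] == "succeeded":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)

if __name__ == "__main__":
    print(generate_video("A dog sprinting down a rain-soaked city street at dusk"))

The point isn’t the specific calls; it’s that the feature list above maps to a very small surface area: one request to start a job, one loop to wait for the result.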
This model is one to watch, along with Nano Banana Pro, which I wrote about earlier this week as it popped into everyone’s feeds. All of this comes amid evident pressure on OpenAI to stay ahead of Google’s rapid progress in the model world.
Stay tuned for more.



