Hollywood
Shoot begins for Tom Hanks & Irrfan Khan starrer ‘Inferno’
MUMBAI: Principal photography has commenced on Inferno, the new film in Columbia Pictures’ Robert Langdon series, which has taken in more than $1.2 billion worldwide to date. The film is slated for release on 14 October 2016.
In the film, Academy Award winner Tom Hanks reprises his role as Langdon. An international cast of actors, including Felicity Jones, Irrfan Khan, Omar Sy, Ben Foster, and Sidse Babett Knudsen, joins him. The film is directed by Ron Howard and produced by Brian Grazer and Howard.
The screenplay is by David Koepp, based on the book by Dan Brown. The project’s executive producers are David Householter, Dan Brown, Anna Culp, and William M. Connor.
Inferno continues the Harvard symbologist’s adventures on screen: when Robert Langdon wakes up in an Italian hospital with amnesia, he teams up with Sienna Brooks, a doctor he hopes will help him recover his memories and prevent a madman from releasing a global plague connected to Dante’s Inferno.
Howard said, “I’m so excited to be starting production on our third Robert Langdon film. Audiences everywhere have shown a tremendous appetite for the Robert Langdon adventures and Inferno has all of the intrigue and action they could want.”
The film is being shot on location in Florence, Venice, and Budapest. Howard’s production team includes director of photography Salvatore Totino, production designer Peter Wenham, editors Dan Hanley and Tom Elkins, and costume designer Julian Day.
Utopai Studios partners Huace to deploy PAI for long form content
Deal includes revenue sharing as Huace adopts AI engine across global ops
MUMBAI: Lights, camera… algorithm: the script just got a silicon co-writer. In a move that signals how storytelling itself is being re-engineered, U.S.-based Utopai Studios has partnered with China’s Huace Film & TV Co. Ltd. to bring artificial general intelligence into the heart of long-form content creation.
At the centre of the deal is PAI, Utopai’s cinematic storytelling system, which Huace will deploy as a core engine across its production pipeline, from development and creative iteration to global localisation. The partnership includes a large-scale annual usage commitment from Huace, alongside a usage-based revenue-sharing model, underscoring ambition and commercial confidence on both sides.
For Huace, one of China’s largest film and television companies, the bet is not on automation alone but on scale with control. With distribution spanning over 200 countries and a presence across more than 20 international platforms, including Netflix and YouTube, the company brings a vast content ecosystem where even marginal efficiency gains can translate into significant output shifts. Its extensive TV IP library further positions it as fertile ground for AI-assisted storytelling workflows.
The choice of PAI follows what Huace described as a rigorous evaluation of existing AI tools, many of which remain limited to fragmented use cases such as video generation or editing. What tipped the scales, according to the company, was PAI’s ability to handle long-form narrative complexity, maintaining continuity, structure, and creative coherence across entire story arcs rather than isolated clips.
Utopai, for its part, is using the partnership to anchor its international expansion strategy, pitching PAI as an enterprise-ready system built for customisation, privacy, and regulatory adaptability across markets. That positioning becomes particularly relevant as global media companies increasingly scrutinise how AI integrates into proprietary workflows.
The timing is notable. Earlier this month, Utopai upgraded PAI to support three-minute 4K video generation and advanced multi-shot sequencing, features designed to tackle one of AI storytelling’s biggest hurdles: consistency across scenes.
What emerges is not just another tech collaboration but a glimpse into how the grammar of filmmaking could evolve: if stories were once crafted frame by frame, the next chapter might just be coded scene by scene.