From the beginning of film, moviemakers have experimented with special effects. The earliest and simplest involved stopping the camera, swapping an actor for a dummy, then starting the camera again and allowing the character on screen to meet an apparently gruesome fate.
From there, methods grew more sophisticated – animation, models and puppets were used to bring monsters and spaceships to life on our screens, before computer graphics enabled more realistic and complicated visual effects to be produced.
But creating movie-quality computer graphics is laborious and expensive. Or rather, it was… until generative artificial intelligence (AI) came along.
First demonstrated with static images, AIs such as DALL·E, Midjourney and Firefly showed they could generate amazing visuals from text descriptions.
Ask for a tap-dancing cat on a tightrope strung between two skyscrapers and, in an instant, you’d get an image depicting exactly that.
But new AI-powered tools also enable images and footage to be rapidly edited. They let you change a character’s clothing without needing to reshoot the scene, remove something in the background you don’t like, or even alter an actor’s expression or age. (AI clones – aka deepfakes – generate realistic avatars that can perfectly mimic real-life actors, or create entirely fictional yet totally convincing characters, complete with movement and voice.)
Most recently, Sora from OpenAI and Lumiere from Google DeepMind have shown they can generate stunning video clips lasting several seconds and showing almost anything you might ask for.
Could we do all that with computer graphics? Yes, provided you don’t mind paying talented computer graphics artists for months or years of work. The difference that AI makes is mostly about time and cost. With AI, you can generate movie-quality special effects instantly.
Anyone can make entirely computer-generated movies by curating AI-generated footage and editing it together. And who needs actors when they can be replaced with virtual entities that are entirely under the control of the studio that’s producing the film?
Writers and actors staged lengthy strikes in 2023 – the writers’ lasting 148 days – in part objecting to the use of generative AI in film and television. As a result, AI will not be taking over the industry just yet.
Their objections were about more than AI putting skilled people out of work, though. For one thing, AI is trained on existing content, and the people who hold the copyright on that content are unlikely to be pleased to see it used as training data.
But, creatively speaking, the nature of its training means AI can’t come up with much that’s original or novel.
Given all this, it’s hard to say how AI will change the film industry over the long term with any great degree of certainty. But, in the immediate future, it may be that special effects become less ‘special.’
When any visual element can be easily and cheaply produced, it’ll be difficult to sell movies on the basis of their amazing visual effects, as was once common.
Plus, the current limitations of these tools’ training mean there’ll be weird, tell-tale inaccuracies that make it too jarring to place AI centre stage without extensive editing work.
But used appropriately, as just another post-production tool, AI could perhaps enable a return to what matters most when it comes to making memorable movies: thrilling performances from actors, beautifully imagined scenes and compelling narratives.
This article is an answer to the question (asked by Hilda Patterson, via email) 'How much will AI change the film industry?'
To submit your questions, email us at questions@sciencefocus.com, or message our Facebook, X, or Instagram pages (don't forget to include your name and location).