It’s now basically done in every CGI-enhanced movie - e.g. all the Avengers movies, Maleficent, and all modern Disney movies.
It wasn’t unheard of, but wasn’t yet common in the early 2000s when the Matrix sequels were made.
Also, the Matrix films had lots of Bullet Time rigs, which aren’t motion capture rigs, but were clearly usable even back then for some “transplanting” of recordings to another scene, much more so than regular shots.
Jet Li was extrapolating into the future, but not far into the future. It was already common and happening 10 years later.
It's why AI-generated movie shots would make a lot of sense. Hollywood spends billions of dollars, builds and blows up elaborate sets, and hires hundreds of thousands of people... just to be able to have pixels move in a pleasing way. How much of that will be cut out when we just go straight to generating pixels? CGI goes in that general direction but it's still very labor intensive.
I wonder when this technology will be good/flexible enough to use in filmmaking, where it seems like it could replace a chunk of work currently done by makeup/VFX, and give real-time results.
There’s an old Asimov short story, Gold, which describes a future of filmmaking that seems more similar to this than the way movies are currently made: everything is animated/CGI, but in real-time—the animators are performers as well who deliver their performance while the director offers feedback.
I guess my thoughts are, why would you? The only time you would use VFX is if you can't do something more cheaply in real life, and since it's extremely expensive that rules out filling the frame with mundane realistic objects.
Aside from time and budget, art direction and production design often rule out a photorealistic look anyway, even if technology allows it.
I'd like to dig further into what you mean by photorealistic CG being fiction. There are still tricky areas that are actively being researched, but I've seen plenty of CG assets that are photorealistic.
What do you think is technically missing from CG video, that makes it lack realism?
The main point was that film isn't interested in exact photorealism. As you said, it doesn't matter, because the simplified models are good enough. Therefore it's unlikely that the film industry will be the first to produce a fully computer-generated video that will be indistinguishable from a camcorder capture.
The reason most of the CG you see in TV or movies looks very good is that it's placed within real video. We're not looking at a completely CG scene -- it's mixed with video from the real world. And that's a perfectly valid technique, but my comment was talking about 100% CG.
A secondary point re: the film industry is that artists must necessarily retain control of the art pipeline in order to create scenes that advance the plot. That requires the art pipeline to be flexible. The more flexible your art pipeline, the more productive your studio is. Yet that flexibility is precisely opposite to realism. Obviously, the more realistic a purely CG scene looks, the less flexibility you get, otherwise it wouldn't appear real; hence the argument that the vfx industry won't be the ones to produce the elusive fully-CG fully-realistic video. (It doesn't make financial sense for them to do so, if nothing else.)
CGI is much more widely used in film than I think you think it is. Half the distance/crowd shots in LotR nearly 20 years ago now were CGI, from what I remember, and e.g. the following scenes of the car sequence in Deadpool use CG humans in far more places than I’d guessed from having watched the film: https://youtu.be/C8D_o8bOeOc
The cinematics in videogames likewise can be pretty convincing, even when the gameplay itself isn’t so much.
CGI in films used to be obvious no matter how good the artists using it were; now it's everywhere and only noticeable when that's the point. The gap from Tron to Fellowship of the Ring was 19.5 years.
My guess is the analogy here puts the quality of existing genAI somewhere near the equivalent of early TV CGI, given its use in one of the Marvel title sequences etc., but it is just an analogy and there's no guarantees of anything either way.
Interesting. Physical reality is infinitely complex, and CGI is a computational model of that reality. It’s become impressive (hair!), and I’m sure I’ve seen plenty of CGI in movies without realizing it. But it’s not the same.
Many films use CGI in places, but this one interests me to think about because (as far as I am aware) it is completely real-time, rather than post-processed, letting physics itself do some of the heavy work.
It also is probably a breath of fresh air for actors. Anecdotally (I'm heavily paraphrasing) I remember Ian McKellen, on returning for The Hobbit, complaining that it is rather hard to emotionally put yourself into a character amidst a hall of green screens. This makes sense -- in Lord of the Rings, even forced perspective with a moving camera [0] was a practical effect!
It was interesting to see that this bit of trivia I remembered from a Star Wars show has appeared in a few other bits of media since (without me realising as I watched them!); I wonder if in three more years it will be yet more prevalent.
Yes, but you won't see it often because VFX artists aren't paid what they're worth, are given unrealistic timeframes, and are even given sloppy direction that is counter to what a realistic effect should look like.
I don't disagree with the conclusion in that video. But my criticism is precisely when filmmakers rely solely on CGI for every aspect in a scene. When used in moderation, and to _augment_ practical effects, as in the original Jurassic Park, or All Quiet on the Western Front, then it can increase, rather than break, immersion.
Eventually, we'll reach a point when CGI truly becomes indistinguishable from reality, but even then I'd argue that using it for all aspects in a scene would undermine what makes the art of cinema "magic". After all, would it really be filmmaking, or animation, at that point?
Certainly. But the point is, if a scene is conjured up by artists, the material they work with -- the RGB textures they create, the specular maps, etc -- do not in any way match real life, or real objects. They make those textures with the goal of getting an engine to display results that look sufficiently good.
That means if you're interested in creating a truly realistic CG video, you have no hope of succeeding if you use an art pipeline. If it's true that a truly realistic CG video will only be created by data that matches real life, then artists must not be allowed to conjure up the data that you use. The data has to come from real-life sources.
There is a contradiction here, and I think it's worth accepting and embracing it. Once you accept that true realism isn't the goal, then you can focus on making CG look cool.
> with CGI, the uncanny valley hasn't been surmounted
I'm not so sure about this. While some scenes are still obviously using CGI, I think a lot of CGI in movies now passes unnoticed, even entirely digital characters.
We certainly notice when those characters do things humans can't do, of course, and when budget or schedule or both result in things being pushed out too early, but how would we know when digital characters look natural on-screen? We wouldn't!
Sorry, I'm not convinced. :) The video talks about editing footage to add or remove details using CGI. The images are a better example, but those are all comparing CG vs heavily retouched images.
I think the other thing that is missing from this conversation is that the barrier to entry has lowered so much with modern CGI. Look at a YouTube channel like Dust[0], which has literally hundreds of sci-fi short films using CGI to tell stories. Sure, they aren't 2022 blockbuster level, but they are 2010 TV level. There is simply no way practical effects would be able to scale the way that CGI has.
But yeah, otherwise I agree with your points. The person you’re replying to is vastly overestimating Unreal use for final CG.
Definitely isn’t being primarily used for hero character work.