
I'm curious: what is the incentive for DreamWorks to open-source this? Surely having exclusive access to a parallel renderer of this quality is a competitive advantage over other studios?



The other large studios that compete with them (e.g. Pixar) already have their own just-as-good renderers.

Even still, what's the incentive to open-source? So the community can bootstrap a better solution than Pixar?

By showing they care about open source, there's a chance they'll attract developers and animators who care about that.

Seems targeted at CentOS 7, support for which sunsets in a little over a year. Smells a bit like abandonware, hoping for adoption by unpaid volunteers.

Still, dumping it into FOSS community is not the worst graveyard for commercial software...


They just released a feature film with this renderer, grossing $462 million and widely praised for its animation.

Large studios don't update as regularly as, say, a startup. They have very specific setups, which is in fact a large part of why it took them so long to release MoonRay compared to when they said they would last year. And they are moving to Rocky Linux soon IIRC.

>dumping it into FOSS community

They are not "dumping" anything. Would it have hurt to look into the facts before commenting?


It's not a big deal to take something built for CentOS 7 and port it to a later Red Hat (or clone) distro. It appears that they released a setup for what they use, which is CentOS 7.

Major animated movies take years to develop, and they don't like to change the build process during production. I used to cover a major animation studio for a major Linux vendor and they did in fact use very old shit.

re: CentOS 7, the VFX Reference Platform (https://vfxplatform.com/) is probably relevant here. Their latest Linux Platform Recommendations report from August last year already covers migrations off of CentOS 7 / RHEL 7 before the end of maintenance in 2024.

Studios don't want to upgrade fast (e.g. they're not interested in running Debian unstable or CentOS's streaming updates thing)... they're interested in stability for hundreds of artists' workstations.

Getting commercial Linux apps like Maya, Houdini, Nuke, etc. working well at scale is hard enough without the underlying OS changing all the time.


Or create a pipeline of young talent who can come from university already trained on their system.

Why not?

The tool may be good, but the output visuals are only as good as the artists that use said tools. They can open source the tools all they want and try to hire all the talent that can use it. :)


At this point every studio has their own renderer: Pixar has RenderMan, Illumination has one from MacGuff, Disney has Hyperion, and Animal Logic has Glimpse.

and there's still plenty of Arnold and Clarisse houses out there.

I can imagine a few reasons why they'd do this, but some of it may just be 'why not'. Studio Ghibli has done the same thing with their animation software and it hasn't turned into a disaster for them. Making movies, especially movies that people will pay to watch is hard, and any serious competitors already have their own solutions. If people use moonray and that becomes a popular approach, competitors who don't use it are at a disadvantage from a hiring perspective. Also, DreamWorks controls the main repo of what may become a popular piece of tooling. There's soft power to be had there.

They mentioned in this video[0] that they expect others, mainly other studios, to contribute to the project.

[0]https://www.youtube.com/watch?v=Ozd4JqquG3k&t=117


The competitive advantage is in storytelling, not necessarily visual fidelity. People will watch a somewhat worse looking movie with a better story over a better looking movie with a worse story. And honestly, can anyone really notice slightly worse graphical quality these days when so many animated movies already look good?

The exception, of course, is James Cameron and his Avatar series. People will absolutely watch something that looks 10x better because the visual fidelity itself is the draw, it's the main attraction over the story. This is usually not the case in most movies however.


The rendering in the Avatar movies is at the cutting edge. But quite apart from the very uninteresting storytelling, there's something there that just doesn't work for me visually. I don't know if it's the uncanny valley effect of the giant skinny blue people with giant eyes or what, but I'd definitely rather watch something creative and painterly like the Puss in Boots movie, or even something like The Last of Us, with CG visuals and VFX that aren't necessarily top of the line but are really well integrated and support a good story.

Did you watch in IMAX 3D? I watched in both 3D and 2D, and the 2D simply cannot compare to the 3D. The way most 3D movies work is that the 3D effects are added after the fact in post-production. The 3D in the Avatar movies is done entirely in the shooting phase, through 3D cameras. Hence, the 3D in the Avatar films is much more immersive to me than in something like Dr Strange 2, which simply could not compare.

I remember watching Avatar in 3D and being blown away (both by the dreadful screenwriting and by the amazing 3D effect).

It taught me that the key you immersive 3D is to treat the screen as a window. Things have depth beyond it, but never protrude out of it.

I noticed only two shots in the movie where anything protruded from the screen towards me, and they really caught my attention.
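To put rough numbers on the "screen as a window" idea (my own back-of-envelope geometry, not anything from the thread): for eye separation e, screen distance d, and object distance Z, similar triangles give an on-screen parallax of roughly P = e * (Z - d) / Z, which is positive (behind the window) when Z > d and negative (protruding toward the viewer) when Z < d. A tiny sketch, with assumed values:

    # Back-of-envelope stereo parallax (illustrative numbers only).
    # P = e * (Z - d) / Z for eye separation e, screen distance d,
    # object distance Z. P > 0 means the object reads as behind the "window".

    def parallax(e, screen_dist, object_dist):
        return e * (object_dist - screen_dist) / object_dist

    e, d = 0.065, 10.0               # ~65 mm eye separation, screen 10 m away
    print(parallax(e, d, 30.0))      # behind the screen: positive parallax
    print(parallax(e, d, 5.0))       # in front of it: negative, "protrudes"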


s/key you/key to/

I hate typing on screens.


I try to find 3D movies but so few of them are made. And like you say, most of them are automated billboard extractions rather than actual two-camera shoots.

I haven't seen the second Avatar film at all; my observations are merely from seeing the first one in 3D and the trailers for the second. I'm aware that it's shot entirely in 3D as well. While I was wowed by the 3D effect when I saw the first one, the thrill of that entirely wore off within a week or two and is not a big enough draw for me to see the second. I don't think I'm in the minority here: 3D was huge in cinemas for a year or so after the first Avatar film, and then interest from the general public waned as well, probably in part because subsequent 3D films used the post-production method, and I agree the effect is not as good.

3D (including "real 3D") just doesn't seem to be the drawcard that the geek community thinks it is, though. I think the public in general would prefer a better story, CG that serves the film and the story, etc. And that is why I probably won't see the second: the story is not strong or interesting enough for me, and even the "wow" full 3D effect is not strong enough to pull me back, given the uncanny valley effect of the characters and the lacklustre story.


I'm talking about 3D in the Avatar series specifically, not 3D (including real 3D) as a whole. Avatar has made billions of dollars so far so it's doing something right. But yes, I agree that generally speaking, people like better stories with worse CGI than vice versa, it's just that James Cameron's movies are an exception to that rule.

Unreal is eating everyone's lunch. If they cannot get anyone else to contribute to their renderer, it will wind up getting shelved in favor of Unreal, with a lot of smaller animation studios already using Unreal instead of more traditional 3D rendering solutions like Maya.

I'm not really sure they are competing with Unreal. Large studios will probably never use real-time rendering for the final render unless it achieves the same quality. DreamWorks has built a renderer specifically for render farms (little use of GPUs, for example), which means they are not targeting small studios at all, but rather something like Illumination Entertainment or Sony (think the Angry Birds movie).

> If they cannot get anyone else to contribute to their renderer, it will wind up getting shelved for Unreal

Why do you think this? Nobody in film or vfx is using Unreal for final rendering, Unreal is built for games not offline path tracing.


Tons of studios are now using Unreal for final rendering, including Disney and several blockbuster movies.

The fantastic thing about Unreal is that you can do real-time rendering on-set (e.g. for directorial choices/actor feedback) and then upscale it in post-production, with cost being the only ceiling. Unreal in the TV/movie industry is already huge and only getting bigger, year-on-year.

You've definitely seen a TV or Movie that used Unreal.


> You've definitely seen a TV or Movie that used Unreal.

Name a major-studio movie that rendered final camera-ready VFX in Unreal.

For TV, you can name The Mandalorian season one, sure, but even then, ILM switched The Volume to their own in-house real-time engine for season two.


DNeg did one sequence in the latest Matrix film. But it looked very obviously “real-time”.

But yeah, otherwise I agree with your points. The person you're replying to is vastly overestimating Unreal use for final CG.

Definitely isn’t being primarily used for hero character work.


Yup. The state of the art for real-time rendering just isn't there yet for hero work. Even ILM's custom Helios renderer is only used for environments and environment-based lighting, as far as I've read. Assets, fx shots, and characters are still rendered offline.

Even with real-time rendering for environments, I'm sure there's plenty of post-processing "Nuke magic" to make it camera-ready. It's not like they're shooting UE straight to "film".

I have seen reports of Unreal Engine being used quite successfully for pre-viz, shot planning, animatics, etc., though.


I mentioned it in another comment, but Star Wars Rogue One also used it for a few shots.

Ah yeah, that's the perfect kind of show for it. It was for K-2SO for certain shots IIRC.

I'm inclined to believe you, but can you name one reasonably popular movie that was rendered with Unreal?

Which Disney films use Unreal for final render? Disney has two separate path tracing renderers that are in active development and aren’t in danger of being replaced by Unreal.

https://disneyanimation.com/technology/hyperion/

https://renderman.pixar.com/

These renderers are comparable in use case & audience to MoonRay, which is why I don’t think you’re correct that MoonRay needs external contribution to survive.

“Used unreal” for on-set rendering is hand-wavy and not what you claimed. Final render is the goal post.


Star Wars Rogue One used Unreal for the final render of a couple of shots. However, it was more a proof-of-concept than a better workflow or anything.

Edit: I worked on this, Adding reference video from GDC https://youtu.be/pnigQTOig8k?t=510


Hey that’s pretty cool! Thanks for the link, it’s helpful to see the shots in question. Am I understanding correctly that the K droid was rendered from behind using Unreal in those shots, and the front shots were rendered with the in-house renderer? If true, I’d love to hear what the reasons were for not being able to use it on all the shots in the sequence. Are there more recent examples? Is Unreal still being tested like this at ILM, or is the focus on the in-house real time renderer?

BTW I’m hugely in favor of pushing real-time rendering for film (and I work on high performance rendering tech, aiming squarely at film + real-time!) I only was disputing the broad characterization by @Someone1234 that Unreal is widely used today for final, and that film studio path tracers are in imminent danger of death by game engine.


Hi!

So, it's been a while, but I'll try to add clarification from the best of my memory.

So, in this sequence, I think it is the case that K2-SO was only rendered from behind.

IIRC, the reasons for not using it on more shots, and specifically the front shots in the sequence, were two-fold. Primarily, we only had one TD/Lighting Artist trained in using Unreal within our pipeline, which was still a little clunky to fit into, so we were time limited. Beyond that, K2-SO was not rendered from the front in a close-up due to complexities with his eyes (some details from Naty later in the talk). Specifically, K2's eyes require fully lit transparencies with the full lighting model; at least at the time, Unreal only supported its full lighting feature set in its deferred renderer, and its forward renderer, used for transparencies, had a vastly simplified lighting model that wasn't able to fully capture the effect of K2's eyes. We were building a more capable forward renderer inside of Unreal internally, but it was not finished in time for Rogue One.
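For anyone who hasn't worked with both pipelines, a toy sketch of the distinction being described (my own illustration, not engine or ILM code): a deferred pipeline writes one surface per pixel into a G-buffer and lights that once, so stacked translucent layers, like a lit glassy eye, can't be fully represented; a forward pass shades each layer and blends them, which is why transparencies end up on the forward path.

    # Toy sketch, hypothetical code: why translucency pushes you onto the
    # forward path. Deferred shading keeps only the nearest opaque surface's
    # attributes per pixel; forward shading lights every layer and blends.

    from dataclasses import dataclass

    @dataclass
    class Surface:
        albedo: float   # simplified single-channel "color"
        alpha: float    # opacity (1.0 = opaque)
        depth: float

    def shade(surface, light=1.0):
        # stand-in for the full lighting model
        return surface.albedo * light

    def deferred_pixel(surfaces):
        # G-buffer holds only the nearest opaque surface; translucents are dropped
        opaque = [s for s in surfaces if s.alpha >= 1.0]
        return shade(min(opaque, key=lambda s: s.depth))

    def forward_pixel(surfaces):
        # shade every layer, then alpha-blend back to front
        color = 0.0
        for s in sorted(surfaces, key=lambda s: s.depth, reverse=True):
            color = shade(s) * s.alpha + color * (1.0 - s.alpha)
        return color

    layers = [Surface(0.8, 1.0, 5.0),   # opaque backing surface
              Surface(0.2, 0.4, 4.0)]   # translucent "lens" layer in front
    print(deferred_pixel(layers))       # 0.8  -- translucent layer is ignored
    print(forward_pixel(layers))        # 0.56 -- blended, per-layer lighting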

As an aside, we had a parallel internal renderer we were building for use on Rogue One, that even at the time had advantages, but Unreal was chosen for what I saw as political reasons.

I do not know of more recent examples, as I'm not involved in this project anymore. I know they used Unreal for Season 1 of The Mandalorian, but moved to their internal real-time renderer for Season 2. The internal renderer has a few advantages: not having to deal with the complexity of merging significant changes (the forward renderer, for example) with Epic's engine was one, but my understanding is that the major win is just being able to build a renderer that integrates much better into their existing pipeline. Unreal's renderer is pretty strongly integrated into the rest of the engine, and the engine itself is very opinionated about how content is managed. And as you can imagine, ILM has their own opinions going back 30+ years.

I agree with your dispute of the broad characterization, but thought the counter-example would be illustrative.

BTW, I'm starting a new project investigating real-time rendering for film, and am always interested in new perspectives; hit me up if you want to chat real-time rendering sometime.


Yes, the example is very illustrative, thanks again for posting it, and thanks for the context here! This history is fun to read. I was partly curious whether texture sampling is still one of the reasons for avoiding real-time tech. Back when I was in film production at PDI two decades ago, texture sampling was near the top of the list. It seems true still today that games routinely tolerate (suffer from) texture aliasing and sizzling, while film sups will not tolerate it at all, ever. High quality texture sampling was, at the time, one of the main reasons offline renders took a long time. I remember being blown away by how sensitive the lighting sup was to the differences between texture filters (Lanczos, sinc, Blackman, Gauss, etc.), and how quickly he could see it. Today maybe it's more often about how many samples are used for path tracing.
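For anyone curious what that filter sensitivity is about, here's a rough toy example (mine, with made-up numbers, not studio code): naively decimating a fine texture pattern leaves strong aliasing, the shimmering "sizzle", while prefiltering with a Lanczos-windowed sinc kernel averages the sub-pixel detail away before decimation.

    # Toy illustration (hypothetical, illustrative values): point-sampled
    # minification aliases a fine pattern; a Lanczos prefilter suppresses it.

    import numpy as np

    def lanczos_kernel(x, a=3):
        x = np.asarray(x, dtype=float)
        k = np.sinc(x) * np.sinc(x / a)   # windowed sinc
        k[np.abs(x) >= a] = 0.0
        return k

    def downsample_point(signal, factor):
        # nearest-sample decimation: cheap, but high frequencies alias
        return signal[::factor]

    def downsample_lanczos(signal, factor, a=3):
        # low-pass with a Lanczos kernel sized to the new rate, then decimate
        taps = np.arange(-a * factor, a * factor + 1) / factor
        kernel = lanczos_kernel(taps, a)
        kernel /= kernel.sum()
        return np.convolve(signal, kernel, mode="same")[::factor]

    # One row of a fine texture: frequency well above the reduced Nyquist rate
    x = np.arange(1024)
    row = np.sin(2 * np.pi * x / 3.0)

    print(downsample_point(row, 8).std())    # stays large: aliased pattern survives
    print(downsample_lanczos(row, 8).std())  # near zero: sub-pixel detail filtered out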

The K-2SO droid character in Rogue One (voiced by Alan Tudyk) was, in fact, rendered in real-time using Unreal, then composited into shots afterwards.

John Knoll from ILM gave a talk at GDC 2017 about it.

The catch, though, is that ILM took the Unreal Engine source code and modified it extensively in order to be able to render K-2S0 as he appeared in the film. It's not like they just downloaded it from the Epic Store and ran with it.


Unreal is used in TV quite often, yes. But no major studios use it for theatric releases, and I'm not aware of any who plan to. (Partner is in the industry)

Except no one uses Unreal for movies...

Yes, Unreal is used in both movies and TV shows. Usually not entire movies or shows, but lots of individual scenes.

Not for final shots that make it to the cinema... for TV, yes.

Why are you saying 'yes' then saying the exact opposite of what they said?

Unreal is not used for final shots in film.


It has been; I just don't know of any films which have used it in the last year or so.

Who is talking about the last year? What films used direct Unreal output for final frames at all?

Not yet.

> surely having exclusive access to a parallel renderer of this quality is a competitive advantage to other studios?

The renderer is an important part of the VFX toolkit, but there are more than a few production-quality renderers out there; some of them are even FOSS. A studio's or film's competitive advantage is more around storytelling and art design.


Open sourcing the trailing edge stuff is a great way to keep people behind you.

They were acquired in 2016:

https://techcrunch.com/2016/04/28/comcast-to-acquire-dreamwo...

Perhaps they've moved on to a new renderer.

