I guess it's easier to ship an updated rendering engine if you are also the one shipping the single embedding application of said rendering engine, unfortunately.
I won't really complain if there are just 2 render engines out there in the market, given that both are open source. It will make work a hell of a lot easier for so many developers, and the competition and development will stay healthy.
However, like I said, there are superior solutions available for these content/media producers.
Am I supposed to feel bad for them not optimising a tool that is apparently critical for their throughput and suffering mediocrity as a result?
There's an information asymmetry here and if they don't put the effort in to squeeze extra performance out of their equipment then either:
A) Someone who is putting this minimal effort into actually choosing the best tool for their job will usurp them with greater productivity.
B) Render times really aren't a bottleneck compared to the struggle of coming up with content ideas/filming the actual content -- slightly longer running tasks while people slack off and drink coffee are just fine.
I suspect it's actually the latter, hence this is really a non-issue.
> surely having exclusive access to a parallel renderer of this quality is a competitive advantage to other studios?
The renderer is an important part of the VFX toolkit, but there are more than a few production-quality renderers out there, some of them even FOSS. A studio or film's competitive advantage is more around storytelling and art design.
I'm guessing it's a combination of genuine goodwill (Dreamworks has a history of open-sourcing stuff) and wanting to be able to hire people who already have experience with their renderer instead of having to train every new hire. Not to mention the improvements that will surely be made to it once anyone can contribute a pull request. Whatever their reasoning, I'm hyped because it seems like an unconditional win-win -- artists get a great new render engine, Dreamworks gets kudos + free development + market/mind share.
I am curious why they chose to build their own rendering engine. They could have first written a terminal application with the same (at least functionally) collaboration features and gotten a cross-platform solution almost for free. From a latency perspective, it's hard to imagine that they can do substantially better than, say, kitty + neovim. Hats off to them for pulling off a PoC, but I do think the rendering engine is a liability for them that will potentially taketh as much as it giveth. In other words, is it worth investing a significant portion of their $10MM investment in rendering when it is still quite uncertain what the market for editor-based collab tools looks like?
Yeah. They've open sourced OpenVDB and other smaller things before, and have contributed things to Embree and OpenSubDiv, but those were libraries/storage formats, not entire production-capable renderers.
At the same time, though, does their renderer give them that many advantages? As someone who works on a (sort of) competing proprietary renderer, I can say it's a lot of work and effort to build and support one, and maybe they want to build a community around it among smaller studios and compete with RenderMan a bit for mindshare?
It's all cool and awesome, but why put all those resources into a new renderer when one could contribute to Blender and such, which are way more mature?
It is basically an abstraction API between render engines and scene representations, so that multiple rendering frontends can talk to multiple scene backends.
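As a rough sketch of that pattern (hypothetical names, shown in TypeScript for brevity even though the real API is C):

    // Hypothetical frontend/backend split: any scene frontend can drive
    // any renderer that implements the common interface.
    interface SceneGraph {
      meshes: { vertices: Float32Array; indices: Uint32Array }[];
    }

    interface RendererBackend {
      setScene(scene: SceneGraph): void;                   // hand over the scene description
      render(width: number, height: number): Uint8Array;   // return a framebuffer
    }

    // Frontends (a DCC app, a viewer, a sci-vis tool) code only against the interface...
    function preview(backend: RendererBackend, scene: SceneGraph): Uint8Array {
      backend.setScene(scene);
      return backend.render(1920, 1080);
    }
    // ...while backends (a path tracer, a rasterizer, a remote renderer) can be
    // swapped without touching the frontend.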
I do take issue with their phrasing of being the “Industry’s first”.
This is only true if you narrow the scope of Industry sufficiently to scientific visualization.
There are other rendering abstraction APIs, like Hydra from Pixar, that are also platform independent and very prevalent in the computer graphics industry. There have been a few more over the years, but with less traction. Many of the renderers and partners they mention in the press release also have Hydra versions of their renderers, which is what makes it all the more odd.
Either way, there are some nice things here like the C99 use which makes it easier to use in more contexts.
It will be interesting to see how this plays out in use long term.
This is primarily so people can learn the renderer in their free time and for educational use. It's akin to what used to be called the Personal Learning Editions in the industry.
There are too many mature renderers and the competition is intense.
The current popular actively developed ones are: V-Ray (which we use in http://Clara.io), Arnold, Maxwell, Pixar's RenderMan, and KeyShot.
Less popular but actively developed ones are: Redshift, FurryBall, 3Delight, NVIDIA Iray, Octane/Brigade...
And then the ones that are integrated into the 3D packages themselves, like Blender's Cycles, Modo's renderer, Cinema 4D's renderer, Houdini's Mantra, and mental ray (from Mental Images, included in most Autodesk products).
Then the smaller opensource ones: Sunflow, Lux, Corona, Mitsuba, Pixie...
That is a lot of renderers and I am sure that I am missing quite a few.
Actually it has been adding them back in the renderer process, and encouraging developers to use explicit IPC between the renderer process and the main process.
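For illustration, the explicit-IPC pattern looks roughly like this (a minimal sketch using Electron's ipcMain.handle / ipcRenderer.invoke; the channel name and handler are made up, and a real app would usually route the renderer side through a preload script):

    // main process: expose one privileged operation over a named channel
    import { ipcMain } from 'electron';
    import { readFile } from 'node:fs/promises';

    ipcMain.handle('read-config', async (_event, path: string) => {
      // validate `path` here before touching the filesystem
      return readFile(path, 'utf8');
    });

    // renderer process: ask the main process instead of calling Node APIs directly
    import { ipcRenderer } from 'electron';

    const config = await ipcRenderer.invoke('read-config', 'settings.json');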
They are touting it specifically in the context of the visualization of very large datasets.
The fact that their software exists is itself a breakthrough. It enabled me to do things that other, roughly equivalent tools (such as those in statistical packages) did not allow. Otherwise I would have been reduced to implementing my rendering pipelines directly, and I would have had to make many of the same design decisions they made, such as doing things out of core.
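("Out of core" here meaning the data is streamed through memory in chunks rather than loaded whole; a generic sketch of the idea, not their actual pipeline:)

    // Count lines in a file far larger than RAM by streaming fixed-size chunks.
    import { createReadStream } from 'node:fs';

    async function countLines(path: string): Promise<number> {
      let lines = 0;
      for await (const chunk of createReadStream(path, { encoding: 'utf8' })) {
        // only one chunk is resident in memory at a time
        for (const ch of chunk) if (ch === '\n') lines++;
      }
      return lines;
    }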
It's not - they didn't deprecate the 'built in' render pipeline, and said they don't plan to until the new one has feature parity.
They have odd marketing for their new features: they call them production ready because they want developers to use them, but they do that so they can get feedback and continue to fix them.