Blender 3.6 LTS Released (www.blender.org)
250 points by victornomad | 2023-06-27 | 66 comments




   > Support for hardware ray-tracing acceleration has been added for AMD and Intel graphics cards.

   Added experimental support for AMD hardware ray-tracing acceleration, using HIP RT. This improves performance on RX 6000, RX 7000, W6000, and W7000 series GPUs.

   Known limitations:

       Windows only, as HIP RT doesn’t support Linux yet.
        Degenerate triangles may cause crashes or poor performance.
        Shadows in hair are not rendered accurately.

   > Windows only, as HIP RT doesn’t support Linux yet
NOOOOOO. I guess I'll have to stay with Blender 2.80 for now. Honestly, the situation with hardware acceleration on Linux is pretty sad. I'm a full-time Linux user, and I acquired an AMD GPU specifically because of better Linux support and much better open-source drivers.

Blender 3 taking so much time to give AMD Linux users hardware acceleration back makes me sad. I guess it's not misguided, though: they've gotten a lot of attention from the industry in recent years, and that's where the funding comes from, so naturally they focus on supporting the major use cases (i.e. Windows and Nvidia) and on improving features. And I do love the new and improved features; UV packing, for instance, was a longtime pet peeve of mine in Blender compared to other (closed-source) modeling packages.


>because of better Linux support

In my experience, Nvidia has better Linux support. Just look at how many projects are using Linux and CUDA and don't support AMD.


Depends on what you're doing, which GPU you go for, etc. For my use cases switching from NVIDIA to AMD was a great decision. Also, if you're going to stick with open source drivers there is basically no comparison.

Just for reference, what setup are you using? (Linux kernel, drivers, GPU, etc.)


For example, in my AMD Linux test case, running dual monitors with a VR headset on one of them was unstable.

Nvidia handled it ok.


AMD has much better open source drivers. nvidia gives you a binary blob... but it works better.

That's out of date, I believe. Nvidia went open source last year, though they did that by moving a lot of code from the drivers over to their firmware (in binary blob form).

https://developer.nvidia.com/blog/nvidia-releases-open-sourc...


Those open source kernel modules are not usable in any practical sense for consumer hardware in their current state.

From the repo readme[1], "GeForce and Workstation support is still considered alpha-quality."

[1] https://github.com/NVIDIA/open-gpu-kernel-modules


Another limitation is in the GPUs supported: only the most recent generations, so no support for common consumer-level cards (1050, 1080, etc.).

That’s only for the kernel part, not the userspace part

It seems like Nvidia’s stuff is a bit more difficult to work with for the community, but the community puts a ton more effort into it.

The GPGPU community is full of a decade-plus of bright-eyed, bushy-tailed AMD fans who were painfully and traumatically pushed into the Nvidia camp after believing AMD's promises that it was totally ready for GPGPU (if only people would write OpenCL!) but then discovering the horrible truth that this was very much not the case. AMD's OpenCL was dire. I wasted months on that flaming piece of trash -- twice! -- and really wish I hadn't. Both times I had to toss my code, sell my card, eat the spread, eat the eBay tax, and pay the green tax to buy a lower-specced Nvidia card, but all of that sucked a lot less than tossing my own programmer-months into a dumpster fire. Ironically, the very openness that was supposed to win converts from the green camp actually sent people in the opposite direction, because you could run your OpenCL on an Nvidia card and find out that actually the problem wasn't with your code but rather with AMD's drivers.

Hopefully now that AMD has money they can afford the headcount to fix their bugs and some more headcount to make up for 15 years of lost time in the applications that matter. Hopefully. But I trusted them twice and got burned, so this time I need to see proof before I jump. I need to see blender renders, I need to see torch trainings, I need to see NAMD (or insert science code here) running on a compatibility layer. It is unfortunate that the actual compatibility situation seems to have hardly moved at all in 10 years (it was right around the corner in 2013 too, you see) but the fundamentals are different now so I hold out hope.


Lemme guess. You tried OpenCL 2.0 and found out it was a dumpster fire on AMD?

OpenCL 1.2 worked fine for all the test code I wrote for myself. But OpenCL 2.0 was... horrid. Unworkable.

--------

My next step personally is to give DirectX 12 compute shaders a serious try. A lot of people are also talking about Vulkan compute shaders, as apparently those work well on Linux too.

ROCm on Linux is fine enough. But this weirdness of having a feature on ROCm Windows (when ROCm Windows isn't even public yet?) is, well... weird. I guess it's cool that AMD has worked with the Windows team to get everything squared away, but it'd be nice if ROCm on Windows were available to the masses.


NVIDIA cards have better Linux support roughly up until you connect a display output to the machine.

AMD's issues are more general and not really a particular problem with their Linux support: they're spotty across the board (bad drivers at card release, spotty OS support, a la no ROCm on Windows). In practice it obviously matters if you still can't get the features you need, but in my experience trying to run a modern Linux desktop setup with NVIDIA cards is absolutely second-class at this point. I tried my hand at it with a 2060, and the card was out of the machine within a month so I could have a stable desktop that actually resumed from sleep properly again.

Granted, issues like that are very much case-by-case. Still, I can't even run SwayWM with NVIDIA as I do with AMD and Intel; even with patches to "fix" the flickering issues, XWayland stuff is broken. This makes no sense, of course. It shouldn't need to be a snowflake case.


> I can't even run SwayWM with NVIDIA as I do with AMD and Intel

How do you know this is an NVIDIA issue, as opposed to a Wayland or wlroots issue?


Same experience here. I get 4k 120hz over HDMI with my puny AMD laptop embedded GPU without tearing on Wayland.

My previous machine with an Nvidia GPU (on Xorg) couldn't resume and had atrocious performance because of overheating.


Wayland does not contain any NVIDIA defeat devices, and neither does SwayWM. They just use standard Linux GPU interfaces. What I do know is that you can't use the same codepaths you'd use with countless other GPUs, again including AMD and Intel: NVIDIA cards need special cases. This is just as true for KDE and GNOME, which also do not work as well with NVIDIA under Wayland, despite having special codepaths for NVIDIA cards specifically.

This issue isn't new, either. In fact, until very recently, it was significantly worse. But progress on NVIDIA's end has been slow; it seemingly stalls completely for years at a time.

On the other hand, Nouveau works quite well with Wayland compositors if your card is well-supported, AND you never have to wait on a kernel upgrade because NVIDIA's shim hasn't been updated yet.

There's basically exactly one driver that works this way. It's not a hardware problem, it's not a general GPU issue. There's just one thing it could be.


This is from a maintainer of wlroots:

>Maintainer of sway and wlroots here. The Nvidia proprietary driver does not support sway, not the other way around.

>Their standard, EGLStreams, is only implemented by their proprietary drivers, so in order to test that codepath we'd have to use a famously broken and undebuggable driver, which we're not interested in - and make no mistake, we get our hands dirty in the drivers all the time. Their standard also has serious technical problems - lots of really important features simply are not possible to implement based on the proprietary driver. We can't export buffers to clients, overlay planes don't work, buffer presentation timings are impossible... Supporting this driver would be a massive overhaul and would probably make the good drivers worse.

https://news.ycombinator.com/item?id=21628494


To be fair, they did actually abandon EGLStreams. It took far too long, but it did happen.

Because wlroots and most Wayland implementations strictly use the kernel abstractions for buffer management, nothing else. They run on literally anything that supports Linux's abstractions, but Nvidia's drivers did not (this changed recently; I'm not sure about its current state).
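To make "kernel abstractions" concrete, here is a minimal sketch of the generic GBM buffer-allocation path that compositors like wlroots build on (the device path is an assumption; link with -lgbm). Mesa drivers (AMD, Intel) have implemented this for years; Nvidia's proprietary driver only added GBM support relatively recently.

    /* Hypothetical standalone test: allocate a GPU buffer through GBM,
       the standard interface Wayland compositors use. */
    #include <fcntl.h>
    #include <gbm.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/dri/renderD128", O_RDWR); /* assumed render node */
        if (fd < 0) { perror("open render node"); return 1; }

        struct gbm_device *gbm = gbm_create_device(fd);
        if (!gbm) { fprintf(stderr, "gbm_create_device failed\n"); return 1; }

        struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                          GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_RENDERING);
        printf("GBM buffer %s\n", bo ? "allocated" : "allocation failed");

        if (bo) gbm_bo_destroy(bo);
        gbm_device_destroy(gbm);
        close(fd);
        return bo ? 0 : 1;
    }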

Also, xserver used to work with nvidia cards due to goddamn proprietary binary patching done by drivers.


So safest to put both an AMD card in for display, and an Nvidia card for compute?

Actually, this might not really be that bad of an idea. Certainly for headless CUDA, it has a lot of potential to be useful.

That said, in practice I expect this to suck. There's probably a lot of stuff in the Linux desktop stack that does not deal with heterogeneous graphics cards well right now.
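If you do try the split setup, a quick sanity check that the headless Nvidia card is still visible for compute might look like this (a minimal sketch using the CUDA runtime API; build against the CUDA toolkit and link with -lcudart):

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        int n = 0;
        if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
            fprintf(stderr, "no CUDA devices visible\n");
            return 1;
        }
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        /* Should report the Nvidia card even while the AMD card
           drives the display. */
        printf("compute device 0: %s\n", prop.name);
        return 0;
    }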


I have had no issues resuming from sleep on my 2000 series card.

> Just look at how many projects are using Linux and CUDA and don't support AMD.

That's because of the CUDA API specifically.

For display support it's very hit-or-miss. Some things work fine; then you try Wayland and things get odd...

When working correctly, Nvidia is mostly fine - Xorg games have support for everything except DLSS 3, AFAIK.


>That's because of the CUDA API specifically.

No, if they wanted to support AMD they would provide an alternative GPU implementation that worked on AMD's hardware. It's like saying an app only supports Windows because of the Windows API. If someone wanted to support multiple platforms, they wouldn't just use the Windows API.


> No, if they wanted to support AMD they would provide an alternative GPU implementation that worked on AMD's hardware.

AMD's implementation (ROCm) has historically been incredibly difficult to work with. As in their own examples won't compile.

Historically, if you even wanted to _touch_ data science, you had to have CUDA hardware/software (it's been improving a bit, but alternatives are still lacking).

> It's like saying an app only supports Windows because of the Windows API.

It's a little more nuanced than that, because the CUDA API is specifically for one set of tasks - data science.

Nobody uses Nvidia for its desktop support on Linux - because it's terrible. Literally the only benefit of using Nvidia + Linux is the raw performance of the hardware and/or the CUDA support. If you look at the Arch wiki (this applies to other distros too) for anything graphics-related, you'll almost always see specific sections dedicated to workarounds and fixes for Nvidia hardware.

If you don't believe me, then try setting up Wayland with Nvidia drivers. Steam just had a fun issue where they were crashing on Nvidia hardware with their latest redesign (no idea how they didn't catch that?!). These types of issues are commonplace, and are largely avoided by using AMD.

Thus why I say it's only really because of the CUDA support.



> Blender 3 taking so much time to give AMD Linux users hardware acceleration back makes me sad.

I believe the issue comes from AMD, which disables/doesn't support its GPGPU libraries on Linux, and Blender cannot do much about it. GPGPU support in general is much, much better on Nvidia thanks to CUDA, on both Windows and Linux.


I am with you; I went for AMD too and am having the same issues. I don't need 3D often, but occasionally I would like to (or need to) do something in Blender, and tough luck. I even tried rebooting into Windows, which I do maybe once every few years; still no dice.

Blender is super good and I love it and support it by contributing plugins. I keep my fingers crossed :)


HIP works in Linux! You can use your GPU to render with cycles, you just can't use this specific feature of your GPU (yet).

There's no reason to stick with Blender 2.8.


The speed difference with ray tracing is substantial based on demos I've seen. At least with RTX.

I'm definitely excited to try it out, but it's not like we were completely missing GPU rendering without it.

No, but I do think it's reasonable for them to express frustration over missing out on a ~2x or better render speed improvement. It's like having a whole extra GPU installed but unusable.

Yes, but that's not my point. You don't have to keep using 2.8 to get GPU rendering in Linux.

Only if you use the AMDGPU-Pro drivers, since it requires HIP. They only introduced HIP support in Blender 3.2: that's two minor releases (almost an entire year) without support for AMD GPU rendering at all. I should've made that clearer in my original comment. If there's a way to get HIP working under the Mesa drivers, I haven't been able to find it in my cursory searches through the internet.

The newer versions of Blender only support CUDA and HIP as render devices, whereas in Blender 2.83 we still had the option to use OpenCL for rendering. You could say the blame is on AMD for giving subpar HIP support on Linux, and that would be fair IF the OpenCL backend hadn't existed in Blender 2.83.

Of course, to add insult to injury, they now don't port the RT acceleration to Linux, which just makes the experience truly inferior to Windows. As I pointed out in the original comment, this is a reasonable decision from a business standpoint. I just find it sad because Blender is one of the darlings of the open-source movement, so seeing them neglect Linux so much is frustrating.


> Only if you use the AMDGPU-Pro drivers, since it requires HIP.

The video and compute drivers were merged some time around the ROCm 5.0 release last year. If you have a sufficiently recent kernel, you shouldn't need to install any additional drivers.

On Debian Bookworm or Ubuntu 23.04, you can `apt install libamdhip64-dev` and you'll have everything needed to use Blender's HIP GPU rendering.
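And as a quick post-install sanity check, something like this should see your GPU (a minimal sketch, assuming the HIP headers are installed; build with hipcc, or with cc -D__HIP_PLATFORM_AMD__ -lamdhip64):

    #include <hip/hip_runtime_api.h>
    #include <stdio.h>

    int main(void) {
        int n = 0;
        hipError_t err = hipGetDeviceCount(&n);
        if (err != hipSuccess) {
            fprintf(stderr, "HIP error: %s\n", hipGetErrorString(err));
            return 1;
        }
        /* Blender's Cycles needs at least one device to show up here. */
        printf("HIP devices visible: %d\n", n);
        return 0;
    }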


Thank you so much for this, I wasn't aware of this change. I just need to find the equivalent package in the Fedora repositories now, and I guess I get to use more recent Blender versions :D

On Fedora (38), there is a rocm-hip package. It is relatively new; it appeared this June.

For Blender you will also need rocm-hip-devel, since Blender dlopen()s libamdhip64 not by soname (libamdhip64.so.5) but by the name libamdhip64.so, and that symlink is in the devel package.

From what I can tell, it does work with a Vega 64, and Cycles is roughly twice as fast compared to the CPU (a Ryzen 2920X in my case).
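To illustrate the dlopen() point, here is a hypothetical standalone test (not Blender's actual code) showing why the devel symlink matters:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Loading by the unversioned name, as Blender reportedly does,
           needs the .so symlink shipped in rocm-hip-devel... */
        void *h = dlopen("libamdhip64.so", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "unversioned load failed: %s\n", dlerror());
            /* ...whereas loading by soname works with the runtime
               package alone. */
            h = dlopen("libamdhip64.so.5", RTLD_NOW);
        }
        printf("libamdhip64 %s\n", h ? "loaded" : "not found");
        return h ? 0 : 1;
    }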


I managed to install those packages, and now Blender recognizes my 6700 XT as a HIP-capable device. However, selecting the GPU and switching to the render view crashes Blender with a segfault; I might have to do some investigating to see if I can fix this. Thanks for the help.

You can often get more information about an error by setting the environment variable AMD_LOG_LEVEL=3.
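For example, launching Blender from a terminal so the HIP/ROCm runtime's log output is visible (a hypothetical invocation):

    AMD_LOG_LEVEL=3 blender 2> hip_log.txt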

I briefly talked to the Fedora maintainer and they mentioned there was a known issue with their HIP package. I'd encourage you to file a bug report with Fedora (if one does not exist already). Their packages are under active development and I think you can expect the matter to be resolved promptly.


It doesn't seem to be 2x faster on AMD (but mandatory "bait for wenchmarks).

AMD ran a YouTube ad that claims it's a 30% performance improvement on RX 7000 series: https://youtube.com/shorts/BKzty6YdRNM

Of course it depends on the exact scenario, but in games it's already known AMD ray tracing performance is quite bad compared to Nvidia and Intel, so it's not surprising they're also slow in Blender.


Lots of great quality-of-life stuff in this update, but especially being able to edit text objects by just... clicking on them. Changing styles is nice too.

I don't often do anything with text within Blender for this specific reason; if your text was anything more than a word or two, it was nothing but friction.

Excited to be able to use it in a much easier manner. Blender remains one of my favorite pieces of software out there.


Impressive numbers.

- Up to 60x faster copying of mesh attributes

- 10x faster loading of curve objects

- 9x faster loading of point clouds

- 4-6x faster loading of large meshes


“List mode items can now be dragged by the name”, not just the icon — very helpful

https://projects.blender.org/blender/blender/commit/c3dfe1e2...


Does anyone know to what extent Blender is currently making use of hw acceleration on macOS?

I've got 3.1 (early 2022) installed locally and there's a Metal GPU+CPU backend for Cycles.

Apple joined the Blender Development Fund and contributed a Metal backend for the offline Cycles renderer in Blender 3.1: https://code.blender.org/2021/11/cycles-x-project-update/ https://www.blender.org/press/apple-joins-blender-developmen...

And in Blender 3.5, a Metal backend was added for the viewport: https://code.blender.org/2023/01/introducing-the-blender-met...


It's snappy and stable now. 3.6 works great on my M1 Max. Both Eevee and Cycles have some nice accelerated performance. The render performance isn't Nvidia good, but I'm sure enjoying the superior stability and lack of VRAM limitations.

I suggest you use it for Eevee or images in Cycles, but you probably don't have time to wait for Cycles video.


I love Blender so much. A true success story for Open Software.

I wouldn't be surprised if it were in the top 10 in FOSS history.

Honestly I think a lot of it comes down to Ton being the right personality for leadership in that community. I know of various tech and art people that bounced off Blender over the years because of justified problems with it at different times, but Ton didn't let that become a terminal problem and steadily iterated the whole thing to get it to where it is today. It's great to see this being far more widely appreciated than even ten years ago.

Counterpoint: Ton is also one of the biggest reasons companies and individuals struggle to contribute bigger things to Blender. He’s very difficult to work with, and will quite often say really inflammatory things in other open source communities when he doesn’t understand them.

A lot of the Blender redesign and the big features came from the community pushing for years to say "this is what we need" and being shot down until someone finally broke through. Many features stay in forks for ages because they get blanket statements turning them down, until they've amassed enough hype in the community. See Grease Pencil and the 2.8+ redesign.

I would give the most credit these days to the folks around him who are more willing to listen to the community and work directly with them.


Ironically, it started as proprietary software. The community had to buy the rights to the source code in order to open-source it.

It was a wonderful example of crowdfunding. I wonder what other software areas might benefit from a similar FOSS injection. A few years ago they replaced the last of the original code.

The funding model is fantastic. They probably have more developers than the commercial competition. They've been helping Krita set up a similar ecosystem.

Where can I learn about the funding model?

The Blender Fund's website[0] gives some good background, and has a list of members and corporate sponsors.

[0] https://fund.blender.org/


Yes, and this page [1] has a nice big-picture graphic showing where money comes from and where it goes.

Not directly mentioned there is the income they get from selling merch in their store [2]. I guess it's small compared to their other sources of income.

[1] https://www.blender.org/about/

[2] https://store.blender.org/


Are there any plans for a C/C++ API for Blender?

Not currently.

No Blender 3.6 splash screen demo file (yet?):

https://www.blender.org/download/demo-files/

Those were nifty. ;)

