I suspect that in a couple of years this driver will become incredibly useful for low-end ARM boards. Manufacturers of cheap hardware will only need to get an open source Vulkan driver mainlined and suddenly the machine will be able to play virtually any game ever made because it will indirectly support OpenGL and every version of DirectX. Likewise it'll give WebGPU/WebGL support thanks to ANGLE.
Seems to be referring to DXVK; and honestly, that’s pretty fair given how well it works. Zink + DXVK might end up being an impressive combo for simplifying the Linux graphics stack. A bit like what Gallium3D was trying to do, but with the abstraction layer exposed as its own standardized API.
DXVK only supports D3D10/D3D11, but VKD3D does D3D12, D9VK does D3D9, and Wine itself does all versions. Performance isn't great, but the compatibility definitely is.
In terms of achieving a certain level of functionality, you can do that, but the additional abstraction you add into your software stack translates into poorer performance and additional complexity. It's a trade-off.
Source: worked on GPU drivers for over a decade and had to make decisions like that.
This may be a weird question and probably not your call but, since you work for Nvidia and you mentioned having worked on GPU drivers I would be curious to know something I have been wondering for a long while, namely 1) how come the Nvidia drivers for FreeBSD don’t support CUDA, and more importantly 2) what would it take for CUDA support in Nvidia drivers for FreeBSD to happen?
Apologies in advance if this is asking questions that you are not allowed to comment on.
I've never worked on drivers at NVidia, so on the one hand I don't know the answer, but I can speculate with an educated guess.
Porting, testing and providing ongoing support for an OS is very costly. If there's not enough customer demand for it, it won't be done because it doesn't make business sense.
Gallium's architecture of "state trackers" is sort of antithetical to the idea of Vulkan in many ways. Neither Intel's Vulkan driver nor RADV bothers using it.
Gallium 0.4's performance on my card is atrocious.
I am using Mesa 13.0.6 on libdrm 2.4.68, and yet even Flare is damn slow. On current, it's much better, but Google's Street View is still slow, albeit gaming on PPSSPP is fast and good. AMD/ATi RS690 with Mesa 20, I tried it, but I went back. Go figure. Even Quake ports are barely accelerated under the old Mesa/KMS combo...
It's things like this that make me not worry about sticking with OpenGL. It's supported everywhere, fast enough for my uses, and is exactly the level of abstraction I want to be at. I completed the Vulkan triangle tutorial and I cannot imagine needing all those knobs for the games and other things I make. I'm pretty confident that by the time OpenGL is no longer natively supported, software layers like this will be stable and fast enough.
They are beginning to exist, but it is very unlikely that these will have the stability that OpenGL has. They will likely be more convenient, but if you are building something today that doesn't have very high performance needs, sticking to the OpenGL API is probably a good way to ensure that you won't have to migrate to something else for a very, very long time.
The OpenGL API is standardized and has multiple implementations so it isn't susceptible to some layer/library developer waking up one morning and deciding to break the API to make it "cleaner" or "easier" or whatever.
Hell, even though Khronos tried to do exactly that with the core/compatibility schism, pretty much all existing implementations decided breaking people's code is a stupid move and mostly ignored it (Apple being an exception but Apple hates developers[0] and their OpenGL implementation was always awful - even then, Apple's implementation is still an implementation of a stable API).
In practice it means that your currently working code will remain working in the future, and there are good chances that you'll be able to port it to other platforms with either official or unofficial implementations.
[0] OK, OK, I know, Apple doesn't "hate" developers, they just do not care about them at all and they'd happily break things like a hippo dancing in a glassware shop if they believe that would make their own developers feel better.
Multiple implementations suck because implementors rarely aim for full compliance, and now you have to have as many code paths as there are implementations to account for the differences between them. Best to have a single implementation that everybody agrees on -- like the winner of the graphics API wars, Direct3D.
> Best to have a single implementation that everybody agrees on -- like the winner of the graphics API wars, Direct3D.
Ah yes, D3D, well known for its broad cross-platform support! In all seriousness, would a Linux D3D driver even be legal? I have to assume that major legal or technical barriers exist, otherwise why wouldn't a GPU vendor have developed one at some point?
I don't think a compatibility layer implemented in software is the same as a driver implemented by the vendor that interacts directly with the hardware.
Mesa actually does support D3D9 natively for AMD (and, sort of, Intel) GPUs via the Gallium Nine project, and there is a branch of Wine that uses it.
But these days that's mostly superseded by DXVK (which implements D3D9 through 11 over Vulkan, kind of like Zink in the OP) and VKD3D (D3D12 over Vulkan).
I don't know what you are talking about. Direct3D is Windows only. Metal is iOS and Mac only. Vulkan is in theory supported on all platforms but UWP doesn't allow it and Mac requires a compatibility wrapper.
If anything the graphics API war is still going on.
> if you are building something today that doesn't have very high performance needs, sticking to the OpenGL API is probably a good way to ensure that you won't have to migrate to something else for a very, very long time.
That is, in fact, precisely the wrong advice.
Vulkan runs on Windows, Linux, and OS X (via MoltenVK). Nothing else does.
OS X runs OpenGL 3.Ancient and has now dropped OpenGL. OpenGL drivers for Linux tend to be laughably worse than the Vulkan drivers.
All of the major gaming companies have basically said "We have no OpenGL jobs. We have a ton of unfilled Vulkan jobs."
If you aren't using DirectWhatever, Vulkan is going to be the only useful 3D API very shortly.
OS X runs OpenGL 4.1, which is still available even if deprecated. If you are considering MoltenVK as a viable option then perhaps Zink over MoltenVK would work in the future.
Note that most of the newer OpenGL features are largely about sending stuff faster to the GPU, not enabling new GPU features - you can do a lot of stuff with OpenGL 4.1. IMO if you are struggling for CPU performance with OpenGL then you might be better off moving to Vulkan. But if this isn't your bottleneck then there isn't a reason not to stick with OpenGL.
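For example, persistent buffer mapping from GL 4.4 is purely about upload throughput rather than new shading capability. A minimal sketch, assuming a 4.4+ context is current and a loader like GLAD is used (both my assumptions, not from the comment above):

```c
#include <glad/glad.h>  /* assumption: GLAD loader, GL 4.4+ context current */

/* Create an immutable buffer that stays mapped for the app's lifetime.
 * This is a GL 4.4 transfer-speed feature: write vertex data into the
 * returned pointer every frame, with no per-frame glBufferSubData copy. */
void *create_persistent_vbo(GLsizeiptr size, GLuint *out_buf)
{
    const GLbitfield flags = GL_MAP_WRITE_BIT
                           | GL_MAP_PERSISTENT_BIT
                           | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, out_buf);
    glBindBuffer(GL_ARRAY_BUFFER, *out_buf);
    glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);  /* immutable storage */
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}
```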
If OpenGL drivers on Linux are "laughably worse" (though in practice I haven't noticed much of a difference) then the solution is to improve those drivers. It'll be better for all the thousands of existing applications too.
> OS X runs OpenGL 4.1 and is still available even if deprecated.
You are correct. I misspoke. I meant to say 4.Ancient since 4.1 is 10 years old now.
> Note that most of the newer OpenGL features are largely about sending stuff faster to the GPU, not enabling new GPU features
For shaders, I certainly find that not true. There are a lot of shader features that got added over 10 years.
And, I believe things even as important as Shader Storage Buffer Objects came later than 4.1.
OpenGL 4.1 just ... isn't good in this day and age.
> If OpenGL drivers on Linux are "laughably worse" (though in practice I haven't noticed much of a difference) then the solution is to improve those drivers.
I totally disagree. There simply aren't enough people in the Linux ecosystem to maintain those drivers and Vulkan drivers. I'd rather those developers all work on the newer and better API.
Vulkan runs on Windows, but only outside the sandbox model, and it doesn't support RDP sessions, because the ICD driver model (shared with OpenGL) isn't supported in such scenarios.
ICD OpenGL drivers are also not supported on ARM variants of Windows.
Microsoft is instead driving an effort to implement OpenGL and related technologies on top of DirectX.
Also, so far there is no Vulkan on PlayStation, and while the Switch does support Vulkan/OpenGL 4.5, most titles use either middleware like Unity (roughly 50% of Switch titles) or NVN.
I think the person you responded to was comparing OpenGL to Vulkan middleware wrappers from the perspective of API usage. From that angle, whether or not OpenGL drivers are provided at all is completely irrelevant. The API itself will presumably remain available, implemented under the hood via Zink (or similar) on top of Vulkan (or DirectX, or Metal, etc.). Such an arrangement is likely to be more stable than middleware that can change overnight at the whims of its developers.
As verbose as Vulkan may be, it's a well-designed vanilla C API. Some of the future helper libraries may also be in vanilla C, in which case it's not difficult to version and ensure relative stability.
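For illustration, this is roughly what that vanilla-C style looks like in practice: tagged, zero-initialized structs passed by pointer. A minimal instance-creation sketch (error handling trimmed):

```c
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void)
{
    /* Vulkan's C style: every struct carries an sType tag, and
     * unused fields are simply left zero-initialized. */
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "hello",
        .apiVersion = VK_API_VERSION_1_0,
    };
    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }
    vkDestroyInstance(instance, NULL);
    return 0;
}
```

Verbose, but every call is a plain C function over plain C structs, which is what makes it easy to version and to bind from other languages.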
> Pity that they decided to just bother with C99 though.
Isn't this the trend nowadays? I've seen recently that simple languages are trending again: languages like C, Go, Zig, Erlang, Lua. I think we've hit the ceiling with mammoths like C++ and Scala.
C99 ensures it can be used with basically every other language and can be compiled for almost every platform. Bindings on top of that for other languages would be nice, but can be provided by the community too. If it were C++, Rust, Nim, etc., you'd have to write it in a way that allows exposing a C API anyway, so the only advantage would be that people implementing the API would get to use a better language internally.
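To make the binding argument concrete, an FFI-friendly C99 surface tends to look like this (hypothetical names, just a sketch of the style):

```c
#include <stdint.h>

/* Opaque handle: callers never see the layout, so the implementation
 * language behind it is free to be C++, Rust, etc. */
typedef struct vkh_context vkh_context;

/* Plain functions with fixed-width integer types: trivially callable
 * from Python ctypes, Rust FFI, Go cgo, and most other languages. */
vkh_context *vkh_create(uint32_t flags);
int32_t      vkh_submit(vkh_context *ctx, const uint8_t *data, uint64_t len);
void         vkh_destroy(vkh_context *ctx);
```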
How do atomics help a high-level graphics API? The implementation can use them, but threading probably shouldn't be part of the interface.
Generics are cool, but auxiliary to any core part of the API, since it's not really possible to use them outside of a macro that wraps a real function.
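Concretely, C11's `_Generic` is just compile-time selection inside a macro that forwards to separately defined functions; it never defines a generic function itself. A small self-contained example:

```c
#include <stdio.h>

/* Concrete implementations; _Generic itself defines no functions. */
static double scale_d(double x) { return x * 2.0; }
static float  scale_f(float x)  { return x * 2.0f; }

/* The "generic" is a macro selecting a real function by argument type. */
#define scale(x) _Generic((x), double: scale_d, float: scale_f)(x)

int main(void)
{
    printf("%f\n", scale(1.5));   /* picks scale_d */
    printf("%f\n", scale(1.5f));  /* picks scale_f (promoted for printf) */
    return 0;
}
```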
Can't you (ie the programmer) still use them though? You can write C11 code which freely calls the C99 API. By using C99 the API is compatible with a wider range of compilers and callable from a broader range of languages.
(Also where would the API itself have benefited in any significant way from the use of atomics or generics? Recall that compiler support for atomics is optional in C11.)
The problems of global state persist in WebGL, because (1) if you need to save and restore state inside a single context, WebGL provides no help; (2) the driver is unable to amortize validation as the state is not tied to the individual render pass/pipeline descriptors.
Interesting. When you say "save and restore state", do you mean actually serializing it and loading it later? In what situation would you want to do this? Not doubting that there is one but i haven't heard of this before.
Typically you use this feature to associate state with an object in your scene graph. For example, all of the render state used to render the sky, including shader program, vertex buffers, textures, blend state, etc. can be attached to the sky node in your scene graph. Then you can (basically) have a generic "draw an object in your scene graph" function that just performs the appropriate draw call with the right state. The performance advantage of this is that the driver can do all the validation once, when you construct the sky object, instead of every frame.
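A rough sketch of that pattern on the application side, using classic desktop GL (struct and function names invented here; under a real state-object API the driver could validate the whole block once at build time):

```c
#include <glad/glad.h>  /* assumption: a loader and a current GL context */

/* All render state for one scene-graph node, gathered at build time. */
typedef struct {
    GLuint  program;
    GLuint  vao;                  /* vertex buffers + attribute layout */
    GLenum  blend_src, blend_dst;
    int     depth_test;
    GLsizei vertex_count;
} NodeState;

/* Generic "draw a node" function: apply the baked state, then draw. */
void draw_node(const NodeState *s)
{
    glUseProgram(s->program);
    glBindVertexArray(s->vao);
    glBlendFunc(s->blend_src, s->blend_dst);
    if (s->depth_test) glEnable(GL_DEPTH_TEST);
    else               glDisable(GL_DEPTH_TEST);
    glDrawArrays(GL_TRIANGLES, 0, s->vertex_count);
}
```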
I recently wrote a tiny game[1] and rendering with WebGPU was reasonably easy as it was mostly clear why/where things were going wrong and there was little state to manage, while WebGL was enough of a hassle that I didn't even manage to get instancing working.
Something like `glBindState()` would encapsulate the global state, but it wouldn't eliminate it. A better approach would be to pass in the state to the draw command like WebGPU/Metal/Vulkan do, which would also allow the driver to amortize the cost of state validation.
That sounds much worse. That's still global state, just encapsulated global state. Any time you say glBindWhatever(), you're assigning global state. And the state object should be an opaque handle so you don't end up with stale pointers. It should be glEnable(state, GL_BLEND); glDepthTest(state, GL_LEQUAL);.
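Sketching that handle-based design out (none of these functions exist in real OpenGL; this is just one guess at the shape of the API being proposed):

```c
#include <GL/gl.h>  /* only for the GLenum/GLint typedefs and constants */

/* Hypothetical handle-based state API -- not real OpenGL. */
typedef struct GLstate GLstate;   /* opaque handle: no stale pointers */

GLstate *glCreateState(void);
void     glStateEnable(GLstate *s, GLenum cap);      /* glStateEnable(s, GL_BLEND)      */
void     glStateDepthFunc(GLstate *s, GLenum func);  /* glStateDepthFunc(s, GL_LEQUAL)  */

/* Draw commands take the state explicitly, WebGPU/Metal/Vulkan style,
 * so the driver can validate the state object once at creation. */
void     glDrawArraysWithState(GLstate *s, GLenum mode, GLint first, GLsizei count);
```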
I'm far from an expert in the field but what annoyed me deeply with OpenGL was the implicit global state and, in particular, the fact that it's very difficult and error-prone to make applications that interact with OpenGL from multiple threads.
In a way I find OpenGL too complicated sometimes; some of its abstractions don't really make a whole lot of sense on modern hardware. Having a lower-level API can make some code simpler to write, at least in theory. When I use OpenGL I often find myself thinking "I sure hope the driver will manage to understand what I'm trying to do and not end up doing something silly".
Note that my exposure to OpenGL comes mainly from console emulators though, and that's obviously very biased since emulators need to mimic very quirky behaviors of the original hardware, and that often results in very un-idiomatic GL code.
> the implicit global state and, in particular, the fact that it's very difficult and error-prone to make applications that interact with OpenGL from multiple threads
These things are obviously related.
From my work in OO languages, I believe it is possible to wrap OO code in functional code to some degree without rewriting, assuming that global references can be intercepted and resolved to local ones somehow, and that the state reference that encompasses all the globals is an immutable data structure.
> in particular, the fact that it's very difficult and error-prone to make applications that interact with OpenGL from multiple threads.
These days I rarely ever bother with threaded OpenGL, and I stick to one context per thread, using OS surface sharing primitives (DXGI, IOSurface, etc.) to communicate.
If you use it, then you'll get no help from the platform on getting it to run. Random parts may begin to break, and the intention is likely an eventual removal.
ANGLE (Chrome’s OpenGL ES backend) has a Vulkan backend as well, although I haven’t seen any performance numbers. It’s made to run WebGL though, so there’s a strong incentive for it to have good performance.
It has a significant goal that tends to degrade performance: it needs to disallow exposing undefined or unsafe behavior of the driver to the client, regardless of how crazy the calls are.
This headline feels extremely out of sync with the tone of the blog post.
> My expectation when I clicked into the timediffs page was that zink would be massively slower in a huge number of tests, likely to a staggering degree in some cases.
> We were
So performance was better than expected on a small handful of tests out of tens of thousands?
First, acknowledge that C is outdated and support a polyglot environment for GPGPU programming, including IDEs and debuggers with the ability to single-step shaders, set conditional breakpoints, and everything else that one has come to expect from modern CPU programming.
Khronos did the first part with SPIR-V and SYCL, but only after taking a beating from CUDA, so the ecosystem never cared, and OpenCL 1.2 just became 3.0.
Vulkan Compute is even worse than OpenCL in what concerns existing tooling.