Yes, you would still need the NVIDIA kernel driver (preferably the most current one). Desktop users typically have it installed already. But the main difficulty, in my opinion, is installing CUDA (with cuDNN, ...). Even the TensorFlow documentation [0] is outdated in this regard, as it covers only Ubuntu 18.04.
The installation process of CUDA.jl is really quite good and reliable. By default it downloads its own versions of CUDA and cuDNN, or you can use a system-wide CUDA installation by setting some environment variables [1].
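For the system-wide route, it's roughly something like this (a sketch only; the exact variable names have changed between CUDA.jl versions, so check [1] for your release):

```shell
# Sketch: point CUDA.jl at a system-wide toolkit instead of the downloaded
# artifacts. Variable names here may not match your CUDA.jl version -- see [1].
export JULIA_CUDA_USE_BINARYBUILDER=false   # don't download the artifact toolkit
export CUDA_HOME=/usr/local/cuda            # wherever your local toolkit lives
julia -e 'using CUDA; CUDA.versioninfo()'   # confirm which toolkit was picked up
```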
Yes. Now I regret upgrading my Ubuntu to 15.04: NVIDIA doesn't support CUDA on it yet, and neither does the Intel graphics driver. If you are using an OS without third-party applications, I think it should be fine, but that is not the usual case.
Yep, just apt-get install nvidia-cuda-toolkit. It's currently v5.5.22 (a mid-2013 version). Debian started packaging v6.x in the "experimental" repository a few weeks ago, which will probably migrate to the regular Debian and Ubuntu releases once it's been tested a bit.
I just want to be able to install a program or library, like Tensorflow, and have it work on my GPU whether it's from Nvidia or AMD without having to first find and install the right compiler.
How do I run a Python program that imports Tensorflow on my AMD GPU today?
If you want to run neural nets you'll need the proprietary NVIDIA drivers and a matching CUDA version. I'd check which driver versions are available from the usual PPAs (e.g. graphics-drivers) for the newer distro.
No, this is the CUDA toolkit, it doesn't depend on the driver version. You can compile CUDA code without having a GPU (which is the case during a "docker build").
Edit: in other words, your Docker image doesn't depend on a specific driver version and can be run on any machine with sufficiently recent drivers. Driver files are mounted as a volume when starting the container.
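Concretely, the workflow looks roughly like this (a sketch, assuming the NVIDIA Container Toolkit is set up on the run host; "my-cuda-app" is a made-up image name):

```shell
# Build on any machine, GPU or not: nvcc only needs the toolkit, not a device.
docker build -t my-cuda-app .

# Run on a machine with NVIDIA drivers; the container runtime injects the
# driver libraries into the container at start time.
docker run --gpus all my-cuda-app
```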
Yes, Flux doesn't ship GPU drivers. It ships everything else (like the CUDA toolkit etc.) as needed, using the artifact/Pkg system, for all mainstream OSes. It doesn't interfere with system libraries.
Would this be useful for people who wanted to leverage GPUs for deep learning, but didn't have the wherewithal or the willingness to set up all the dependencies on the host machine?
Have you tried WSL? The NVIDIA developer CUDA repo has a specific folder for "wsl-ubuntu" where you install only the toolkit, and it reuses the Windows graphics drivers IIRC.
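Roughly, the setup inside WSL is something like this (a sketch; the repo path is from NVIDIA's download page, but double-check the current key and package names there before running anything):

```shell
# Sketch: add NVIDIA's wsl-ubuntu repo inside WSL and install only the
# toolkit -- no driver package, since the Windows driver is reused.
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/ /"
sudo apt-get update && sudo apt-get install -y cuda-toolkit
```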
It uses the NVIDIA drivers on your system, but it should be possible to make the rest of CUDA somewhat portable. I have a few thoughts on how to do this, but haven't gotten around to it yet.
The current GPU-enabled torch runners use a version of libtorch that's statically linked against the CUDA runtime libraries. So in theory, they depend only on your GPU drivers, not on a CUDA installation. I haven't yet tested on a machine that has just the GPU drivers installed (i.e. without CUDA), but if it doesn't already work, it should be very possible to make it work.
My machine doesn't come with an NVIDIA GPU. However, my Ideapad previously had a dedicated NVIDIA MX 150. You can refer to <https://wiki.archlinux.org/title/NVIDIA> for more information.
[0] https://www.tensorflow.org/install/gpu [1] https://cuda.juliagpu.org/stable/installation/overview/