
Yes, you would still need the NVIDIA kernel driver (preferably the most current one). Desktop users typically have it installed already. But the main difficulty, in my opinion, is installing CUDA (with cuDNN, etc.). Even the TensorFlow documentation [0] is outdated in this regard, as it covers only Ubuntu 18.04. The installation process of CUDA.jl is really quite good and reliable. By default it downloads its own version of CUDA and cuDNN, or you can use a system-wide CUDA installation by setting some environment variables [1].

[0] https://www.tensorflow.org/install/gpu

[1] https://cuda.juliagpu.org/stable/installation/overview/
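For illustration, here is a sketch of pointing CUDA.jl at a system-wide toolkit. Treat the variable names and paths as assumptions for your CUDA.jl version; the linked docs [1] are authoritative and the exact knobs have changed between releases:

```shell
# Hypothetical sketch: tell CUDA.jl to use a local CUDA install instead of
# downloading its own artifacts (variable names may differ between CUDA.jl
# versions -- check the installation docs linked above).
export JULIA_CUDA_USE_BINARYBUILDER=false   # prefer the system toolkit
export CUDA_HOME=/usr/local/cuda            # where that toolkit lives (example path)
julia -e 'using CUDA; CUDA.versioninfo()'   # verify which CUDA libraries were picked up
```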




For the past couple of years that I've been using NVIDIA with Ubuntu, the process has been pretty much straightforward.

You can get TensorFlow and most miners to work with just a PPA addition and an apt installation.
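For example, on Ubuntu the usual route is the community graphics-drivers PPA. A minimal sketch (the driver version number below is illustrative; install whichever version the PPA currently offers for your card):

```shell
# Add the graphics-drivers PPA and install the proprietary NVIDIA driver.
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-535   # version number is an example, not a recommendation
nvidia-smi                           # confirm the driver can see the GPU
```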

Not sure how games and other 3D enabled apps behave though.


I'm having a hard time figuring out if I can simply install NVIDIA proprietary drivers and CUDA on Vanilla OS.

Yes. Now I regret upgrading my Ubuntu to 15.04, since NVIDIA doesn't support CUDA on it, and the same goes for the Intel graphics driver. If you are using an OS without third-party applications, I think it should be fine, but that is not the case for most people.

Is that needed? Can't you run the GUI (Wayland or X11) with Nouveau and CUDA with the NVIDIA driver?

Do you need the proprietary graphics driver to do machine learning stuff?

Only if the software you have supports it. For example, TensorFlow on OS X hasn't used GPUs since version 1.2.

Yep, just `apt-get install nvidia-cuda-toolkit`. It's currently v5.5.22 (a mid-2013 version). Debian started packaging v6.x in the "experimental" repository a few weeks ago, which will probably migrate to the regular Debian and Ubuntu releases once it's been tested a bit.
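A minimal sketch of the Debian/Ubuntu route described above, with a quick check of which toolkit release you actually got:

```shell
# Install the distro-packaged CUDA toolkit.
sudo apt-get install nvidia-cuda-toolkit
# Print the packaged toolkit version (e.g. release 5.5 at the time of this comment).
nvcc --version
```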

I just want to be able to install a program or library, like Tensorflow, and have it work on my GPU whether it's from Nvidia or AMD without having to first find and install the right compiler.

How do I run a Python program that imports Tensorflow on my AMD GPU today?
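One route today is AMD's ROCm build of TensorFlow, assuming a ROCm-supported GPU and a working ROCm install; `tensorflow-rocm` is the package AMD publishes on PyPI. A sketch:

```shell
# Assumes ROCm itself is already installed for a supported AMD GPU.
pip install tensorflow-rocm
# Quick check that TensorFlow can see the GPU:
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```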


If you want to run neural nets you'll need the proprietary NVIDIA drivers and a matching CUDA version. I'd check which driver versions the usual PPAs (graphics-drivers) offer for the newer distro.

I did manage to set up the notebook and CUDA on my local machine. I have an older GTX card.

From the readme, no. It says NVIDIA drivers are required.

No, this is the CUDA toolkit; it doesn't depend on the driver version. You can compile CUDA code without having a GPU (which is the case during a "docker build").

Edit: in other words, your Docker image doesn't depend on a specific driver version and can be run on any machine with sufficiently recent drivers. The driver files are mounted as a volume when the container is started.


Yes, Flux doesn't ship GPU drivers. It ships everything else (like the CUDA toolkit) as needed, using the artifact/Pkg system, for all mainstream OSes. It doesn't interfere with system libraries.

https://julialang.org/blog/2019/11/artifacts/


Would this be useful for people who wanted to leverage GPUs for deep learning, but didn't have the wherewithal or the willingness to set up all the dependencies on the host machine?

Have you tried WSL? The NVIDIA developer CUDA repo has a specific folder for "wsl-ubuntu" where you install only the toolkit and it reuses the Windows graphics drivers, IIRC.
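The wsl-ubuntu route looks roughly like this; the exact keyring filename and repo lines change between releases, so copy the current commands from NVIDIA's CUDA download page rather than this sketch:

```shell
# Sketch: register NVIDIA's wsl-ubuntu repo and install only the toolkit
# (the Windows-side driver is reused inside WSL, so no Linux driver install).
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install cuda-toolkit
```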

No additional libraries are required on Windows (not even CUDA). For Linux you'll need to install CUDA and the onnxruntime GPU lib.

Thanks :)

It uses the NVIDIA drivers on your system, but it should be possible to make the rest of CUDA somewhat portable. I have a few thoughts on how to do this, but haven't gotten around to it yet.

The current GPU-enabled torch runners use a version of libtorch that's statically linked against the CUDA runtime libraries. So in theory, they depend only on your GPU drivers, not on your CUDA installation. I haven't yet tested on a machine that has just the GPU drivers installed (i.e. without CUDA), but if it doesn't already work, it should be very possible to make it work.


My machine doesn't come with an NVIDIA GPU. However, my Ideapad previously had a dedicated NVIDIA MX 150. You can refer to <https://wiki.archlinux.org/title/NVIDIA> for more information.

Do you have to use the NVIDIA card as your graphics adapter if you are only using it for ML training?
