
No, this is the CUDA toolkit; it doesn't depend on the driver version. You can compile CUDA code without having a GPU (which is the case during a `docker build`).

Edit: in other words, your Docker image doesn't depend on a specific driver version and can be run on any machine with sufficiently recent drivers. Driver files are mounted as a volume when starting the container.
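A minimal sketch of that separation (the image tags and file names here are illustrative, not from the thread): the toolkit lives in the image and compilation needs no GPU, while the driver libraries come from the host at run time.

```dockerfile
# Build stage: nvcc from the CUDA toolkit works without any GPU present,
# so this step runs fine during `docker build` on a GPU-less machine.
FROM nvidia/cuda:12.2.0-devel-ubuntu22.04 AS build
COPY kernel.cu /src/kernel.cu
RUN nvcc -o /src/app /src/kernel.cu

# Runtime stage: CUDA runtime only, no toolkit. The driver libraries
# (libcuda.so etc.) are NOT baked in; the NVIDIA container runtime
# injects them from the host when the container starts.
FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04
COPY --from=build /src/app /app
CMD ["/app"]
```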




Is that Docker image capable of utilizing a GPU?

Not by itself, but NVIDIA provides a CUDA Docker plugin that lets you expose your GPU to the container, and it works quite well.

The problem with that is GPU access from the Docker image, and IIRC NVIDIA doesn't have a Windows passthrough CUDA driver.

You can allow CUDA applications in Docker containers to access the GPU, so I'd be surprised if video acceleration couldn't be made to work normally.

Yes. Ignore the naysayers :-)

Check out VirtualGL and TurboVNC. It is possible to run both inside a Docker container, install a DE, and accelerate a GUI app. I've been doing it for quite a while.
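A rough sketch of what that setup looks like (the image name, display number, and port are assumptions; the exact VirtualGL/TurboVNC invocations depend on how they're installed). The command is composed and printed here rather than executed:

```shell
# Hypothetical invocation: a container with the NVIDIA runtime, a TurboVNC
# server on :1, and a GL app wrapped in vglrun so it renders on the GPU.
cmd="docker run --runtime=nvidia -p 5901:5901 my-desktop-image \
  bash -c 'vncserver :1 && vglrun glxgears'"
echo "$cmd"
```

You then connect a VNC viewer to port 5901 on the host; `vglrun` redirects the app's OpenGL rendering to the GPU while the VNC server carries the 2D result.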

Also, for everyone checking out the OP's original lib: it does work, it is great, and it is probably the most mature option. But my understanding is that the future focus for GPU support inside containers is this newer lib, also maintained by NVIDIA:

https://github.com/NVIDIA/libnvidia-container

I'm fairly familiar with this stuff, though not an expert, through my startup http://realityzero.one. But I haven't been paying much attention to this area of late, so if I've got this wrong, please feel free to correct me. Perhaps they're for different things. I know there's been some discussion of the newer lib from the Kubernetes crowd in the Kubernetes GitHub issues.


If you're using DC/OS you can also run Docker containers that leverage GPUs. We worked with NVIDIA to mimic the same functionality provided by nvidia-docker. Here's a talk I gave at this year's NVIDIA GTC conference explaining how it works:

https://drive.google.com/file/d/0B7vZqCY-AJrpMEYyVmZkVGlyOEE...

https://docs.google.com/presentation/d/1q6brajoQkFZVtSocMPGg...


Not for Intel, but if you use an NVIDIA card, you can also use `--gpus all` with the NVIDIA container runtime. https://wiki.archlinux.org/title/docker#With_NVIDIA_Containe...

The official Jellyfin docs suggest using `--device` for the Docker/Intel case: https://jellyfin.org/docs/general/administration/hardware-ac...
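Side by side, the two approaches look roughly like this (the image name is just a placeholder; the commands are composed and printed, not executed):

```shell
# NVIDIA: the container runtime wires up the GPU devices and driver libs.
nvidia_cmd="docker run --rm --gpus all jellyfin/jellyfin"

# Intel (VAAPI/QSV): pass the DRM render node straight through, no special
# runtime needed.
intel_cmd="docker run --rm --device /dev/dri:/dev/dri jellyfin/jellyfin"

echo "$nvidia_cmd"
echo "$intel_cmd"
```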


The CLI wrapper is provided for convenience and should be enough for most users. We recently added advanced documentation to our wiki explaining how you can avoid relying on the nvidia-docker wrapper: https://github.com/NVIDIA/nvidia-docker/wiki/Internals
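Roughly, the wrapper expands to a plain `docker run` with the NVIDIA runtime selected and the GPUs exposed through environment variables. A sketch of that equivalent invocation (image tag illustrative; composed and printed rather than executed):

```shell
# Approximately what `nvidia-docker run ... nvidia-smi` amounts to:
# pick the nvidia runtime, then tell it which GPUs and driver
# capabilities the container should see.
cmd="docker run --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi"
echo "$cmd"
```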

This section should also have plenty of information for you if your goal is to integrate GPU support in your own container tool.


Thanks :)

It uses the NVIDIA drivers on your system, but it should be possible to make the rest of CUDA somewhat portable. I have a few thoughts on how to do this, but haven't gotten around to it yet.

The current GPU-enabled torch runners use a version of libtorch that's statically linked against the CUDA runtime libraries. So in theory, they depend only on your GPU drivers and not on your CUDA installation. I haven't yet tested on a machine that has just the GPU drivers installed (i.e. without CUDA), but if it doesn't already work, it should be very possible to make it work.
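One way to verify which flavor you have is to inspect the shared-library dependencies: a build that statically links the CUDA runtime should depend on the driver's `libcuda.so` but not on `libcudart.so`. The `ldd`-style output below is fabricated for illustration:

```shell
# Simulated output standing in for `ldd libtorch_cuda.so`:
deps='libcuda.so.1 => /usr/lib/libcuda.so.1
libc.so.6 => /usr/lib/libc.so.6'

# Driver library present? (required either way)
printf '%s\n' "$deps" | grep -q 'libcuda\.so' && echo "driver: yes"

# CUDA runtime absent? (expected when statically linked)
printf '%s\n' "$deps" | grep -q 'libcudart' || echo "cudart: not needed"
```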


> For infrastructure, Compute Engine and Google Container Engine allow you to run your GPU workloads with VMs or containers.

It's been my understanding that Container Engine does not support GPUs, and I can't find any documentation stating otherwise.


Does this support GPU?

It would be great to be able to run deep learning workloads via Docker on macOS.


Nvidia provides a toolkit to do this [1], getting a GPU into a container is as easy as running `podman run --device nvidia.com/gpu=all`. The process is similar for Docker, but rootless Docker requires some extra steps IIRC.

[1] https://docs.nvidia.com/datacenter/cloud-native/container-to...
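A sketch of those steps (assuming the NVIDIA Container Toolkit is installed; the commands follow its docs, but are collected and printed here rather than run, since they need root and real hardware):

```shell
# Generate a CDI spec describing the host's GPUs, then reference the
# resulting device name from podman. Docker's rootful setup instead
# registers the runtime via `nvidia-ctk runtime configure`.
steps='nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi'
printf '%s\n' "$steps"
```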


I never played around with nvidia-docker. Is there much, if any, additional overhead on the GPU compute from running through it vs. natively with an AMI?

It does use the GPU; you need to compile it with a flag.

I'm having a hard time figuring out if I can simply install NVIDIA proprietary drivers and CUDA on Vanilla OS.

From the readme, no. It says NVIDIA drivers are required.

Yes, you would still need the NVIDIA kernel driver (preferably the most current one). Desktop users typically have it installed already. But the main difficulty, in my opinion, is installing CUDA (with CuDNN, ...). Even the TensorFlow documentation [0] is outdated in this regard, as it covers only Ubuntu 18.04. The installation process of CUDA.jl is really quite good and reliable. By default it downloads its own version of CUDA and CuDNN, or you can use a system-wide CUDA installation by setting some environment variables [1].

[0] https://www.tensorflow.org/install/gpu [1] https://cuda.juliagpu.org/stable/installation/overview/
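As a sketch of that last point: the variable below applied to older CUDA.jl releases and is an assumption here; newer releases configure this through Preferences.jl instead, so check [1] for the current mechanism.

```shell
# Tell (older) CUDA.jl to use the system-wide CUDA installation instead
# of downloading its own artifact. (Assumed variable name; newer CUDA.jl
# versions use Preferences.jl -- see the linked installation overview.)
export JULIA_CUDA_USE_BINARYBUILDER=false
echo "$JULIA_CUDA_USE_BINARYBUILDER"
```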


Looking at the docs, it appears that the CUDA drivers have to be manually installed on the host image, which takes time/money.

Is there a GCP Image with CUDA drivers pre-installed? Or is that not possible with the hardware architecture?


Yes, Flux doesn't ship GPU drivers. It ships everything else (like the CUDA toolkit) as needed, using the artifact/pkg system, for all mainstream OSes. It doesn't interfere with system libraries.

https://julialang.org/blog/2019/11/artifacts/

