Sorry, I think you misunderstood. This is about installing both types of card in the same computer: one for the desktop/Wayland (AMD), and the other for ML/CUDA (Nvidia).
Couldn't you use the Nvidia card for CUDA and just have an AMD card for the display? It's a little dumb having two graphics cards: you need a spare PCIe slot, and graphics cards are expensive (but you could just get a cheap, old, used card if it's only driving your desktop environment).
I'm asking because it seems like no one who relies on an Nvidia card for number crunching and wants Wayland has opted to just get a second card.
I think that, for my use case, putting both an Nvidia card and an AMD card in the same Linux machine would be a good combo: use the AMD one on Linux, and spin up VM guests that use the Nvidia card for Windows gaming or for ML work. This way I can use Wayland (especially the Sway window manager) for my development workflow while still being able to use CUDA and stuff.
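For what it's worth, here's a rough sanity check (just a sketch, assuming a Linux host with the usual sysfs layout) for confirming which kernel driver each GPU ended up bound to, e.g. amdgpu for the host/Wayland card and vfio-pci for the one reserved for the guest:

    # Sketch: list display devices and the kernel driver each is bound to.
    # Assumes a Linux host exposing the standard /sys/bus/pci layout.
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    for dev in sorted(os.listdir(PCI_ROOT)):
        base = os.path.join(PCI_ROOT, dev)
        with open(os.path.join(base, "class")) as f:
            pci_class = f.read().strip()
        if not pci_class.startswith("0x03"):
            continue  # 0x03xxxx = display controller (VGA/3D); skip everything else
        driver_link = os.path.join(base, "driver")
        driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "(none)"
        print(f"{dev}  class={pci_class}  driver={driver}")

If the Nvidia card still shows nvidia/nouveau instead of vfio-pci, the passthrough binding isn't in place yet.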
I've set up a dual-GPU system in the past using two Nvidia GPUs, and whilst I found the trek towards PCI passthrough to virtual machines rewarding when it finally worked, I also found the arrangement inconvenient.
What you've achieved here seems ideal. Well done :)
I will either patiently wait for an Arch Linux version of the install, or I'll eventually get impatient and see if I can rustle something up - an install script is an install script, so it should just be a matter (famous last words) of altering the install script/procedures to suit.
Well, not exactly. AMD calls this Dual Graphics, and it only works with certain graphics cards. In the case of Kaveri that means something like two entry-level R7-series cards that use DDR3 memory instead of the GDDR5 found on more powerful cards.
This makes it almost pointless in my opinion, as for not much more money you can get a discrete card that is more powerful on its own than the combination.
I only have one card. I swapped the Titan V in for my single 1080 in my old, cheap motherboard and it just worked. I had to hook up the water cooling, but I did that because the card came with a block, not because I optimized the thermal design. To verify motherboard compatibility, I looked up in the Nvidia specs how many PCIe lanes I should expect and at what speed, and confirmed in HWinfo that they were active in that configuration -- much like I'd look at a network interface to make sure my 10/5/2.5GbE hadn't turned into 1GbE on account of gremlins in the wires.
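On the Linux side, a rough equivalent of that HWinfo check (again just a sketch, assuming the standard sysfs attributes; not every device exposes all of them) is to read the negotiated vs. maximum PCIe link speed and width straight out of sysfs:

    # Sketch: print negotiated vs. maximum PCIe link speed/width for each
    # display device, to catch a card silently training down to fewer lanes.
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    def read(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "n/a"  # attribute missing on this device/platform

    for dev in sorted(os.listdir(PCI_ROOT)):
        base = os.path.join(PCI_ROOT, dev)
        if not read(os.path.join(base, "class")).startswith("0x03"):
            continue  # display controllers only
        speed = read(os.path.join(base, "current_link_speed"))
        width = read(os.path.join(base, "current_link_width"))
        max_speed = read(os.path.join(base, "max_link_speed"))
        max_width = read(os.path.join(base, "max_link_width"))
        print(f"{dev}: {speed} x{width} (max {max_speed} x{max_width})")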
I'm not using this for machine learning, so you might want to talk to someone who is before pulling the trigger. In particular, my need for fp64 made the choice of Titan V completely trivial, whereas in ML you might have to engage brain cells to pick a card or make a wait/buy determination.