AMD is still old school and uses PGA sockets, at least for their desktop parts. I wish they would get with the program (they have for their server CPUs).
Yes, because they can't either develop their own controller or, say, implement the one from Texas Instruments?
All they need is a TB controller and a free PCIe interconnect; looking at, say, the Promontory chipset (AMD's answer to Intel's PCH), you could just drop it in without breaking a sweat.
The problem is that Framework is limited by standards here. Is there a chipset + connector combination that supports GPU hot-swapping? Most consumer-grade PCIe chipsets don't support it, and even the OS probably doesn't. USB-C supports it, but that's a standard designed for external connectors; I don't think there's a reliable internal USB-C edge connector or anything like that.
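For what it's worth, on Linux you can at least check whether a given port even advertises hot-plug in hardware. Rough sketch (mine, not anything Framework ships) that walks sysfs config space and looks for the Hot-Plug Capable bit in the PCIe Slot Capabilities register; it needs root to read past the first 64 bytes of config space:

    #!/usr/bin/env python3
    """Rough sketch: report which PCIe ports advertise the Hot-Plug Capable
    bit in their Slot Capabilities register. Offsets follow the PCIe spec;
    run as root so the full config space is readable."""

    import glob
    import os

    PCI_CAP_PTR = 0x34          # offset of the first capability pointer
    CAP_ID_PCIE = 0x10          # PCI Express capability ID
    SLOT_CAPS = 0x14            # Slot Capabilities register inside the PCIe cap
    HOTPLUG_CAPABLE = 1 << 6    # HPC bit in Slot Capabilities

    def hotplug_capable(cfg: bytes) -> bool:
        """Walk the capability list and test the Hot-Plug Capable bit."""
        if len(cfg) < 0x40 or not cfg[0x06] & 0x10:   # Status bit 4: cap list exists
            return False
        ptr, seen = cfg[PCI_CAP_PTR], set()
        while ptr and ptr not in seen and ptr + SLOT_CAPS + 4 <= len(cfg):
            seen.add(ptr)
            if cfg[ptr] == CAP_ID_PCIE:
                slot_caps = int.from_bytes(
                    cfg[ptr + SLOT_CAPS:ptr + SLOT_CAPS + 4], "little")
                return bool(slot_caps & HOTPLUG_CAPABLE)
            ptr = cfg[ptr + 1]
        return False

    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        with open(os.path.join(dev, "config"), "rb") as f:
            if hotplug_capable(f.read()):
                print(f"{os.path.basename(dev)}: hot-plug capable slot")

On most consumer boards you'll get few or no hits, which is the point being made above.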
The sad thing is that HyperTransport was supposed to offer this exact feature, implemented just like SGI did with NUMAlink. There were a few boards produced with HTX slots; I have an older Tyan dual-socket Opteron board with an HTX slot kicking around.
The first generation of Infinipath did exactly that. We had to create a slot standard for Hypertransport (HTX), convince motherboard makers to build boards with that slot, and then at the end of the day we just ended up convincing everyone that they never wanted to go that route ever again.
These days Intel is putting Omni-Path on-package for Xeon Phi and Skylake. It's likely still a PCI Express connection, but it doesn't count against the total lanes available for external cards.
PCIe switching would have been really interesting to see, even in a non-concurrent mode.
One also desperately wishes that multi-host network adapters had a low-end market. A 1 x 25GbE connection shared between 6 hosts would be epic. Or if sharing the connection is too much, just a 4 x 2.5GbE NIC chip would be acceptably interesting and useful to have. I forget, maybe the GbE is free on the Pi 4's chip and this is more of a non-issue.
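Back-of-the-envelope (my numbers, not from any real multi-host NIC):

    # Sharing one 25GbE link evenly across 6 hosts still beats giving each
    # host a dedicated 2.5GbE port, and any one host can burst to the full 25.
    link_gbps, hosts = 25.0, 6
    print(f"{link_gbps / hosts:.2f} Gbit/s per host vs 2.5 Gbit/s dedicated")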
The 1x16/2x8 GPU port and the 1x4 M.2 port go directly to the CPU; the rest of the ports come through the chipset/PCH/northbridge/southbridge/whatever you call it. The CPU-chipset link hasn't been a real I/O bus coming straight off the CPU die for quite some time, so motherboard brands can't just offer x16 to everyone.
Breaking out full-sized, full-speed buses was possible with PCI and PCI-X because those were real buses bridged to real CPU buses. That's not the case with PCIe.
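You can see the resulting topology on any Linux box. Quick sketch (my own, not a vendor tool) that prints the chain of bridges each PCI device sits behind, straight from sysfs; it only shows topology, and which root ports actually come off the CPU die vs. the chipset is platform-specific (on recent Intel desktops, 00:01.x is typically a CPU PEG port while the 00:1c.x ports live in the PCH):

    #!/usr/bin/env python3
    import glob
    import os

    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        # The resolved sysfs path encodes the whole bridge chain, e.g.
        # .../pci0000:00/0000:00:1c.4/0000:03:00.0 for a device behind a bridge.
        parts = os.path.realpath(dev).split("/")
        hops = [p for p in parts if ":" in p and not p.startswith("pci")]
        print(f"{hops[-1]}: {' -> '.join(hops)}")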
This is why we never use, much less connect, the onboard Intel Ethernet port on any motherboard that has one (same for AMD); we always add a (better) Ethernet NIC adapter card.
Meanwhile, motherboards with ARM, RISC-V, and PowerPC continue to gain support.
I think most (if not all) modern PCIe GPUs still support this. If I'm not mistaken, this is what's preventing them from working on ARM right now (Raspberry Pi 4, Apple M1, etc.), because the drivers expect specific BIOS features.
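Related to that: you can check what code types a card's option ROM actually carries from Linux. Rough sketch (mine; the device address is just an example, run as root, and some cards expose no ROM at all). Cards whose ROM only has an x86 legacy BIOS image depend on the host being able to run that code, which is part of the trouble on ARM boards:

    #!/usr/bin/env python3
    """List the code types carried by a GPU's PCI expansion ROM via sysfs."""

    import struct

    ROM_PATH = "/sys/bus/pci/devices/0000:01:00.0/rom"   # hypothetical GPU address
    CODE_TYPES = {0x00: "x86 legacy BIOS", 0x01: "Open Firmware", 0x03: "UEFI"}

    # Writing 1 to the sysfs 'rom' file enables reading the expansion ROM.
    with open(ROM_PATH, "w") as f:
        f.write("1")
    with open(ROM_PATH, "rb") as f:
        rom = f.read()
    with open(ROM_PATH, "w") as f:
        f.write("0")

    offset = 0
    while offset + 0x1A <= len(rom) and rom[offset:offset + 2] == b"\x55\xaa":
        # The pointer to the PCI Data Structure lives at image offset 0x18.
        pcir = offset + struct.unpack_from("<H", rom, offset + 0x18)[0]
        if pcir + 0x16 > len(rom) or rom[pcir:pcir + 4] != b"PCIR":
            break
        image_len = struct.unpack_from("<H", rom, pcir + 0x10)[0] * 512
        code_type = rom[pcir + 0x14]
        print(f"image at {offset:#x}: {CODE_TYPES.get(code_type, hex(code_type))}, "
              f"{image_len} bytes")
        if rom[pcir + 0x15] & 0x80 or image_len == 0:   # bit 7: last image
            break
        offset += image_len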
This is interesting. I hope someone will do the same to re-add PCIe 4.0 support to AMD AM4 B450 chipset mainboards (it was available temporarily until AMD took it away again in later AGESA versions). Not a trivial hack, I reckon.
Intel does the same: there is still a "north/south bridge" (AMD has had an on-die memory controller since the Athlon 64 days, IIRC) which adds "additional" PCIe lanes (via a bridge, so you aren't getting additional bandwidth unless you're using QPI with multiple CPUs) and centralizes I/O; in Intel's case it's the PCH. https://en.wikipedia.org/wiki/Platform_Controller_Hub
AMD has something similar, even though the CPU controls most of the PCIe lanes in the system (usually the GPU + high-speed storage go over the CPU, and the rest of the I/O goes over the PCH).
Making use of the NIC/MACs in the CPU would require board makers to buy PHYs, which would be only used by AMD boards, or they could not do that and slap the same PCIe NICs on there as they do on all other boards.
Is there any explanation of why they couldn't achieve this with AMD's new chips?
I'm sure they would have supported that if they were able to.