
That's pretty rough; one of the great things about the Framework was being able to treat all ports the same.

Is there any explanation of why they couldn't achieve this with AMD's new chips?

I'm sure they would have supported that if they were able to.




AMD is still old school and uses PGA sockets, at least for their desktop parts. I wish they would get with the program (they have for their server CPUs).

Still, if they want, they can allow only Quadro cards to expose the necessary interfaces on the host side.

Wasn't the PCI version based on a different chip, and not fully compatible? I could be misremembering.

Yes, because they can't either develop their own controller or, say, implement the one from Texas Instruments?

All they need is a TB controller and a free PCIe interconnect; looking at, say, the Promontory chipset (AMD's answer to Intel's PCH), you can just drop it in without breaking a sweat.


Agreed. It's especially frustrating with new motherboards having just 3 full PCIe slots, 2 of which are blocked by the GPU.

Also the crap with vendor-locked SFP slots, by Dell and Cisco especially. At least Intel cards are quite universal.

USB-C to SFP+ is also way too expensive, and you can't do SFP28 at USB 3 speeds.


The problem is that Framework is limited by standards here. Is there a chipset + connector that works for GPU hot-swapping? Most consumer-grade PCIe chipsets don't support it, and even the OS probably doesn't support it. USB-C supports it, but that's a standard designed for external connectors; I don't think there's a reliable internal USB-C edge connector or anything like that.
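
As an aside on the OS part of hot-swapping: on Linux, whatever hotplug capability the platform exposes shows up in sysfs, and a bus rescan is what actually picks up a newly attached device. Below is a minimal sketch, assuming a Linux host; whether any slot is actually hotplug-capable depends entirely on the platform and firmware.

```python
# Minimal sketch, assuming a Linux host; paths are standard sysfs, but whether any
# slot is hotplug-capable depends on the platform/firmware (pciehp, ACPI, etc.).
import os

SLOTS_DIR = "/sys/bus/pci/slots"   # slots registered by the kernel's hotplug/ACPI code
RESCAN = "/sys/bus/pci/rescan"     # writing "1" asks the kernel to re-enumerate the bus

def list_slots():
    """Return the PCI slot names the kernel knows about (may be empty)."""
    try:
        return sorted(os.listdir(SLOTS_DIR))
    except OSError:
        return []

def rescan_bus():
    """Force a PCI bus rescan; needs root."""
    with open(RESCAN, "w") as f:
        f.write("1")

if __name__ == "__main__":
    slots = list_slots()
    print("PCI slots known to the kernel:", ", ".join(slots) or "none")
    # rescan_bus()  # uncomment to re-enumerate after attaching a device
```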

The sad thing is that HyperTransport was supposed to offer this exact feature and implement it just like SGI did with NUMAlink. There were a few boards produced with HTX slots; I have an older Tyan dual-socket Opteron board with an HTX slot kicking around.

There is a connector standard: https://www.hypertransport.org/ht-connectors-and-cables

Connectors available from Samtec: https://www.samtec.com/standards/ht3#connectors

Manycore CPUs and converged Ethernet pretty much made it moot.


The first generation of Infinipath did exactly that. We had to create a slot standard for Hypertransport (HTX), convince motherboard makers to build boards with that slot, and then at the end of the day we just ended up convincing everyone that they never wanted to go that route ever again.

These days Intel is putting Omni-Path on-package for Xeon Phi and Skylake. It's likely still a PCI Express connection, but it doesn't count against the total available lanes for external cards.


PCIe switching would have been really interesting to see, even in a non-concurrent mode.

One also desperately wishes that multi-host network adapters had a low-end market. A 1 x 25GbE connection shared between 6 hosts would be epic. Or if sharing the connection is too much, just a 4 x 2.5GbE NIC chip would be acceptably interesting & useful to have. I forget, maybe the GbE is free on the Pi 4's chip & this is more of a non-issue.
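
Back-of-the-envelope for that sharing idea, assuming an even split and ignoring protocol overhead:

```python
# Back-of-the-envelope only: an even split of one 25GbE link across 6 hosts
# still beats giving each host its own 2.5GbE port.
link_gbps = 25.0
hosts = 6
print(f"{link_gbps / hosts:.2f} Gb/s per host")   # ~4.17 Gb/s
```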


The 1x16/2x8 GPU port and the 1x4 M.2 port go directly to the CPU; the rest of the ports come through the chipset/PCH/northbridge/southbridge/however you call it. The CPU-chipset link hasn't been a real I/O bus coming off the CPU die for quite some time, so motherboard brands can't just offer x16 to everyone.

Breaking out full-sized, full-speed buses was possible with PCI and PCI-X because those were real buses bridged to real CPU buses. That's not the case with PCIe.

0: https://images.anandtech.com/doci/17308/hlg-intel-prod-brief...

1: https://www.lowyat.net/wp-content/uploads/2021/08/AMD-AM5-Di...
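
To make the CPU-lanes-vs-chipset-lanes split above concrete: on Linux you can read each device's negotiated link width and speed from sysfs, which shows the wide links going to the GPU and primary M.2 slots while everything else sits behind the chipset uplink. A rough sketch, assuming a Linux host; some older devices don't expose these attributes.

```python
# Rough sketch, assuming Linux sysfs; some legacy devices lack these attributes.
import glob
import os

def read_attr(dev_path, name):
    """Read a single sysfs attribute, or return 'n/a' if it doesn't exist."""
    try:
        with open(os.path.join(dev_path, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    bdf = os.path.basename(dev)                    # e.g. 0000:01:00.0
    width = read_attr(dev, "current_link_width")   # negotiated lane count
    speed = read_attr(dev, "current_link_speed")   # e.g. "16.0 GT/s PCIe"
    print(f"{bdf}: x{width} @ {speed}")
```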


You can just section off the bits that need to be PCI compliant.

It would be a good architectural decision, actually.


Still no PCI passthrough support though, right?
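
For reference, what PCI passthrough needs from the platform on Linux is sane IOMMU grouping, since VFIO hands whole groups to a guest. A small sketch, assuming Linux with the IOMMU enabled (e.g. amd_iommu=on or intel_iommu=on); it prints nothing if the IOMMU is off.

```python
# Small sketch, assuming Linux with the IOMMU enabled; prints nothing otherwise.
import glob
import os

groups = sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                key=lambda p: int(os.path.basename(p)))
for group in groups:
    devices = sorted(os.listdir(os.path.join(group, "devices")))
    print(f"IOMMU group {os.path.basename(group)}: {', '.join(devices)}")
```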

This is why we never use, much less connect, the onboard Intel Ethernet port on any motherboard that has one (same for AMD); we always add a (better) Ethernet NIC adapter card.

Meanwhile, motherboards with ARM, RISC, and PowerPC continue to gain support.


I think most (if not all) modern PCIe GPUs still support this. If I am not mistaken, this is what is preventing them from working on ARM right now (Raspberry Pi 4, Apple M1, etc.), because the drivers expect specific BIOS features.
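
Those "specific BIOS features" are largely about the card's option ROM (the VBIOS) that x86 firmware normally maps and executes. Purely as an illustration, Linux can expose a device's option ROM via sysfs; the device address below is a placeholder and the script needs root.

```python
# Illustrative sketch only: dump a PCI device's option ROM via sysfs (needs root).
# The BDF address is a placeholder; list /sys/bus/pci/devices to find your GPU.
import sys

def dump_option_rom(bdf, out_path):
    rom = f"/sys/bus/pci/devices/{bdf}/rom"
    with open(rom, "w") as f:          # "1" tells the kernel to expose the ROM
        f.write("1")
    try:
        with open(rom, "rb") as src, open(out_path, "wb") as dst:
            dst.write(src.read())
    finally:
        with open(rom, "w") as f:      # turn ROM access back off
            f.write("0")

if __name__ == "__main__":
    bdf = sys.argv[1] if len(sys.argv) > 1 else "0000:01:00.0"  # placeholder BDF
    dump_option_rom(bdf, "vbios.rom")
    print(f"wrote option ROM for {bdf} to vbios.rom")
```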

This is interesting. I hope someone will do the same to re-add PCIe 4.0 support to AMD AM4 B450 chipset mainboards (it was available temporarily until AMD took it away again in later AGESA versions). Not a trivial hack, I reckon.

Intel does the same: there is still a "north/south bridge" (AMD has had an on-die memory controller since the Athlon 64 days, IIRC) which adds "additional" PCIe lanes (via a bridge, so you aren't getting additional bandwidth unless you are using QPI with multiple CPUs) and centralizes I/O. In Intel's case it's the PCH. https://en.wikipedia.org/wiki/Platform_Controller_Hub

AMD will have something similar even if the CPU controls most of the PCIe lanes in the system (usually the GPU + high-speed storage go over the CPU, and the rest of the I/O goes over the PCH).


That's lame. Some competitors are working on DAX-capable PCIe devices. Intel seems to be surprisingly behind here.

How would a hypervisor solution work? Trap and emulate all accesses?


Making use of the NIC/MACs in the CPU would require board makers to buy PHYs, which would only be used by AMD boards, or they could skip that and slap the same PCIe NICs on there as they do on all other boards.

IIRC some Realtek cards did have the hardware to route it that way.
