I'm guessing that "about the same" will be hard to measure, and that at some point thermodynamics will dictate the maximum performance per watt (assuming a fixed transistor architecture).
Not even close. Even with a 235W CPU and a theoretical 600W GPU, you wouldn't exceed half the capacity of a single 15A circuit, even in synthetic benchmarks that stress the system beyond real-world loads.
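Back-of-the-envelope check of that claim, assuming a standard 120 V / 15 A North American circuit (the wattages are the ones from the comment above):

    # Rough power-budget check, assuming a 120 V / 15 A circuit.
    circuit_watts = 15 * 120            # 1800 W total circuit capacity
    cpu_watts, gpu_watts = 235, 600     # CPU + theoretical GPU, from above
    total_draw = cpu_watts + gpu_watts  # 835 W combined
    print(total_draw / circuit_watts)   # ~0.46, i.e. under half capacity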
A phone uses about 5W, a laptop 50W, and an ultra-high-end desktop 500W. That's two orders of magnitude of difference; there's no way the same CPU can cover all of that.
It's impossible to compare these two CPUs core-for-core because one is heterogeneous and the other is big.LITTLE. It's still fair to analyze their power consumption as black-box machines, though.
This is only true when comparing older PC hardware.
There is simply no comparison to make when we're talking about raw compute power. If there were, the datacenter market would be dominated by x86 processors with stupidly low power consumption. The cost savings from the smaller power/heat footprint would be absurd.
Given similar power consumption, cooling is up to the OEM, not the chipmaker. So this question should be directed at Clevo, Dell, HP, Lenovo, ASUS, Acer, Tongfang, etc.
This is physics, after all: 15W is 15W whether it goes into an Apple M1 or an AMD Ryzen.
Exactly. If Intel's 3GHz CPU eats 262W and emits 895 BTU/h, and Apple's passively cooled ARM laptop chip eats 130W and emits 450 BTU/h (completely made-up numbers), how fast would Apple's ARM chip run with active cooling and 250W?
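For what it's worth, those made-up figures are at least self-consistent, since every watt a chip draws leaves as heat and 1 W is about 3.412 BTU/h. A quick sketch of the conversion, using the numbers above:

    # Watts to BTU/h: all power drawn by a chip is dissipated as heat.
    BTU_PER_WATT = 3.412
    for watts in (262, 130, 250):  # hypothetical figures from the comment
        print(f"{watts} W -> {watts * BTU_PER_WATT:.0f} BTU/h")
    # 262 W -> 894 BTU/h, 130 W -> 444 BTU/h, 250 W -> 853 BTU/h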
So? They aren't performing the same computation. You can't compare the two. What you can compare is power draw at an equivalent tokens/sec on the same model. But you don't have that number.
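A minimal sketch of the comparison being asked for, with placeholder numbers since nobody has published the real ones; the point is the metric (work per joule at equal throughput), not the values:

    # Hypothetical efficiency comparison at equal work on the same model.
    # All numbers are placeholders, not measurements.
    def tokens_per_joule(tokens_per_sec: float, watts: float) -> float:
        """Throughput divided by power = tokens produced per joule."""
        return tokens_per_sec / watts

    # Same model, same tokens/sec, different power draw (made up):
    chip_a = tokens_per_joule(30.0, 130.0)  # ~0.23 tokens/J
    chip_b = tokens_per_joule(30.0, 262.0)  # ~0.11 tokens/J
    print(chip_a > chip_b)  # True: chip_a does the same work on less energy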