
Meh, the comparison is somewhat pointless when it doesn't account for the slowdown that the vast majority of PyTorch codebases experience on TPUs compared with JAX and its TPU-specific optimizations, and vice versa.

https://arxiv.org/pdf/2309.07181.pdf

Meaning the TPU v5p is likely slower than the H100 for most ML workloads that depend on PyTorch.
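For reference, a minimal sketch (toy model, not from the paper) of the JAX path I mean: jax.jit traces the function once and compiles it to a single XLA program, which is the execution path TPUs are built around, whereas PyTorch on TPU goes through the torch_xla lazy-tensor bridge and often loses throughput. Assumes a working jax TPU (or CPU) install; shapes are arbitrary.

    import jax
    import jax.numpy as jnp

    @jax.jit  # compiled to one XLA program, the native TPU execution path
    def mlp_forward(params, x):
        # toy two-layer MLP, purely illustrative
        w1, b1, w2, b2 = params
        h = jax.nn.relu(x @ w1 + b1)
        return h @ w2 + b2

    key = jax.random.PRNGKey(0)
    k1, k2, kx = jax.random.split(key, 3)
    params = (
        jax.random.normal(k1, (512, 1024)),
        jnp.zeros(1024),
        jax.random.normal(k2, (1024, 10)),
        jnp.zeros(10),
    )
    x = jax.random.normal(kx, (64, 512))
    print(mlp_forward(params, x).shape)  # (64, 10)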



