Software is their main skill. Yet Google and Facebook have both been doing custom hardware for datacenters for years now. They're also funding academic R&D on chip design and the like. Such companies' needs have also created an ecosystem offering things like OpenFlow switches: custom hardware implementations of radically different software. Intel and AMD are also both doing custom work for unnamed datacenter companies. It's hearsay, but I'd expect the top players to be involved. So, they're already all well into the hardware business.
The simple route, as I indicated, would be to copy, contract, or even buy an MPP vendor. In academia, MIT Alewife showed that one custom chip was all that was necessary to build a NUMA-style, 512-node machine out of COTS hardware. Existing shared-nothing clusters already scaled higher than Google using custom chips for the interconnect. One can buy cards or license the IP for FPGAs, structured ASICs, etc. Much of the software for compute and management was open source thanks to groups such as Sandia. And so on. There's plenty to build on that's largely been ignored outside of academia.
Instead, they've largely invested in developing hardware and software to better support the inherently inefficient legacy software. So they're doing the hardware stuff, just not imitating the best-of-breed solutions of the past or present. One exception is OpenFlow: a great alternative to standard Internet tech that the major players funded in academia and are now putting in datacenters. Another success story is Microsoft testing a mid-90's approach of partitioning workloads between CPUs and FPGAs. So, they're... slowly... learning.
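For anyone wondering what "radically different software" on an OpenFlow switch looks like in practice, here's a minimal sketch using the Ryu controller framework (its standard OpenFlow 1.3 table-miss pattern). It's purely illustrative; the names and structure follow Ryu's public examples, not anything the big players actually run internally:

    # Minimal OpenFlow 1.3 controller app (Ryu) that installs a table-miss
    # rule so unmatched packets get sent to the controller for a decision.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp = dp.ofproto
            parser = dp.ofproto_parser
            # Match everything at the lowest priority...
            match = parser.OFPMatch()
            # ...and punt it to the controller, where forwarding policy is decided in software.
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

Run it with ryu-manager against any OpenFlow 1.3 switch (Open vSwitch works). The point is that forwarding policy lives in controller code like this instead of being baked into the vendor's firmware.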
Is it still true that most cloud-provider datacenters house racks of commodity hardware? If so, I could definitely imagine a shift to hardware designed to support running virtualized environments while keeping power and cooling costs down.
I'm not sure what that would look like. Mainframe-esque, perhaps?
They're apparently doing hardware-software co-design for rack-scale servers, integrating the kind of state-of-the-art systems-management features that "cloud" suppliers like AWS are assumed to rely on for their offerings.
Kinda interesting ofc, but how many enterprises actually need something like this? If there were an actual perceived need for datacenter equipment to be designed (and hardware-software co-designed) at the "rack scale", we would probably be doing it right and running mainframe hardware for everything.
Their datacenter schematics remind me of the schematics of a big server from 15 years ago: server racks instead of CPU boards. "The datacenter is the computer."
I suspect they'd be designing for datacenters rather than consumers. That's essentially what they already do with bitcoin rigs too. It's a rented mining cloud.