They mention a datacenter in the article, but it seems like a perfect application for custom hardware.



Sounds like a great datacenter :)

The product being announced is for data centers.

Well they did specify it was for datacenter use.

A new datacenter.

Software is their main skill, yet Google and Facebook have both been doing custom hardware for datacenters for years now. They're also funding academic R&D on chip design and such. Those companies' needs also created an ecosystem offering things like OpenFlow switches: custom hardware running radically different software. Intel and AMD are both doing custom work for unnamed datacenter companies. It's hearsay, but I'd expect the top players to be involved. So, they're all already well into the hardware business.
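
For a concrete sense of what "radically different software" means here: with OpenFlow, forwarding behavior is just rules pushed down from a controller program. A minimal sketch, assuming the open-source Ryu controller framework (the match fields and priority are arbitrary examples, not anything from the article):

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class DropTelnet(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_ready(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Drop inbound telnet: match IPv4/TCP destination port 23
            # and install an empty action list (i.e., drop).
            match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=23)
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, [])]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))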

The simple route, as I indicated, would be to copy, contract, or even buy an MPP vendor. In academia, MIT Alewife showed that one custom chip was all it took to build a NUMA-style, 512-node machine from COTS hardware. Existing shared-nothing clusters already scaled higher than Google's, using custom chips for the interconnect. One can buy cards or license the IP for FPGAs, structured ASICs, etc. Much of the compute and management software was open source thanks to groups such as Sandia. And so on. There's plenty to build on that's largely been ignored except in academia.

Instead, they've largely invested in developing hardware and software to better support inherently inefficient legacy software. So they're doing the hardware stuff, just not imitating the best-of-breed solutions of the past or present. One exception is OpenFlow: a great alternative to standard Internet tech that major players funded in academia and are now putting into datacenters. Another success story is Microsoft testing a mid-90's approach of partitioning workloads between CPUs and FPGAs. So they're... slowly... learning.
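
To make the CPU/FPGA partitioning idea concrete, here's a toy sketch in Python. Note that fpga_offload() is a purely hypothetical stand-in for a real DMA hand-off, not any actual API; the point is just that large, regular batches amortize the offload cost while small, irregular work stays on CPU threads:

    from concurrent.futures import ThreadPoolExecutor

    def fpga_offload(batch):
        # Hypothetical stand-in for a DMA hand-off to an FPGA board.
        return [x * 2 for x in batch]

    def cpu_path(item):
        # The same computation on the general-purpose side.
        return item * 2

    def dispatch(items, batch_threshold=1024):
        # Big, regular batches go to the accelerator; everything else
        # runs on ordinary CPU threads.
        if len(items) >= batch_threshold:
            return fpga_offload(items)
        with ThreadPoolExecutor() as pool:
            return list(pool.map(cpu_path, items))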


Is it still true that most cloud-provider datacenters house racks of commodity hardware? If so, I could definitely imagine a shift to hardware designed to run virtualized environments while keeping power and cooling costs down.

I'm not sure what that would look like. Mainframe-esque, perhaps?


They're apparently doing hardware-software co-design for rack-scale servers, integrating the kind of state-of-the-art systems-management features that "cloud" suppliers like AWS are assumed to rely on for their offerings.

Kinda interesting ofc, but how many enterprises actually need something like this? If there were an actual perceived need for datacenter equipment to be designed (and hardware-software co-designed) at the rack scale, we would probably already be doing it right and running mainframe hardware for everything.


In the video he mentions that one of the uses is for datacenters.

Probably a datacenter...

Oh oh! Put a datacenter in there!

In... datacenters? That's a strange use case!

Great for data centers?

So not a data center, but like an Arduino.

They also build their own servers in their own datacenters.

I believe they manage their own hardware in a datacenter.

It might be cool to represent the physical datacenter in something like this.

How do you find datacenters which will let you colocate hardware? How much hardware do you have? What was the cost?

I'm really interested in this kind of thing.


Their schematic of a datacenter reminds me of the schematic of a big server from 15 years ago, with server racks in place of CPU boards. "The datacenter is the computer."

I suspect they'd be designing for datacenters rather than consumers. That's essentially what they already do with bitcoin rigs: a rented mining cloud.