Hacker News


SeaMicro is making the same sort of lateral move that RISC made and that Transmeta attempted: could certain functions happen more efficiently if moved outside the traditional 'box'?

Could a datacenter provide a low-latency infrastructure for front-end web-servers while reducing their power and other expenses? I think there's a good chance this is already being done.




Interesting... I've never been in a really big datacenter, so I'd like to see some (hopefully unbiased) reviews from somebody who works in those places.

Would this really work well for the intended market? There are lots of startups around here that plan on serving webpages at massive scale - would something like this (only cheaper :) ) make you reconsider whatever cloud services you're currently using?

Articles I found on Google:

http://gigaom.com/2010/06/13/seamicros-low-power-server-fina...

and the Wall Street Journal's take:

http://blogs.wsj.com/digits/2010/06/14/seamicro-tries-to-ret...


From my very superficial reading of the article, it seems the win is that you're getting huge amounts of density.

I think their argument is that a typical web-style datacenter throws up hundreds and hundreds of commodity servers to do some kind of distributed processing (think of something like a Hadoop cluster, or a ton of app servers behind a load balancer).

With this setup, the argument may be that you no longer need a giant network/switching infrastructure for those compute tasks, plus you get all the power/cost savings of needing only a couple of boxes to house those thousands of CPUs. I wonder what kind of RAM these things are going to have, as that's obviously pretty crucial.
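To see why the power argument is attractive, here is a back-of-envelope sketch. All numbers are illustrative assumptions (a generic ~250 W 1U commodity box versus one hypothetical dense low-power chassis), not SeaMicro's published specs:

```python
# Back-of-envelope power comparison; every number here is an
# illustrative assumption, not a vendor specification.

COMMODITY_WATTS_PER_SERVER = 250   # assumed typical 1U dual-socket box
COMMODITY_SERVERS = 512            # size of the hypothetical server farm

DENSE_WATTS_PER_CHASSIS = 2000     # assumed draw of one dense chassis
DENSE_NODES_PER_CHASSIS = 512      # many small CPUs packed in one box

commodity_total = COMMODITY_WATTS_PER_SERVER * COMMODITY_SERVERS
dense_total = DENSE_WATTS_PER_CHASSIS  # one chassis replaces the farm

savings = 1 - dense_total / commodity_total
print(f"Commodity farm: {commodity_total} W")
print(f"Dense chassis:  {dense_total} W")
print(f"Power saved:    {savings:.0%}")
```

Even if the assumed numbers are off by a wide margin, the shape of the argument holds: collapsing hundreds of boxes into one chassis also collapses the per-box overhead (PSUs, fans, NICs, top-of-rack switch ports).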

But yeah, losing one of these things would cause more than a little headache of an outage. Seems like an interesting approach that's worth more thought though.


They mention a datacenter in the article, but it seems like a perfect application for custom hardware.

Sounds like a great datacenter :)

It will be interesting to see how these improvements trickle down.

At the moment, a few providers have the scale and skills to run datacenters much more efficiently. But I'm guessing that within a few years there will be some generic datacenter-in-a-container available, with efficiency not much inferior to the big four.

At that point, we go back to the hosting market of 15 years ago. Everybody can offer a datacenter without deep technical knowledge, and sell compute cycles on an open cloud market.

It's like all the tiny hosting providers, except that it requires more capital. So it becomes financialised -- if you can get cheap power, low temperatures and good connectivity, and borrow a few million dollars cheaply, then you're in business. But margins collapse precisely because anybody can do it.

In the end, once we get through the transition period of this move to cloud-everything, datacenters end up like utilities.


A new datacenter.

This is true. However, the services that startups use to host their servers (Amazon Web Services, Rackspace, etc.) could adopt these practices in new datacenters because the specs are open. This could lower costs for the startups that use these services, and be good for the environment.

Seems like everyone wins.


This seems like really interesting technology; where is it now?

If it were a real thing, I imagine places that run big data centers would chase any operational efficiency available.

https://www.frostytech.com/articles/2722/index.html answers the obvious question of "can I run it sideways" (yes).

So why can't I buy one?


You're paying for a super ninja ultra special internet connection as well? Yikes. Interesting concept though.

So your strategy for scaling up is building a data center then?


Oh oh! Put a datacenter in there!

More datacenters

They're already moving into the datacenter as well.

In... datacenters? That's a strange use case!

Yes, it's a radical simplification of the data center setup that enables the scale needed for this design.

In the long run building their own datacenter and backend would save money.

A clever idea. People are wondering why such a thing might be useful, so let me advance a theory:

Latency.

Suppose you have a bunch of people somewhere, say, the US, and a bunch of other people somewhere else, say, China, and there's an ocean in between. If they need to work collaboratively on something, placing a datacenter in one country or the other yields asymmetric latency; someone has a lot more.

If you can just plop a datacenter exactly at the midpoint, everyone wins. It needn't be the biggest datacenter ever, just one that can handle the latency-sensitive tasks.
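The asymmetry above is just propagation delay. A rough sketch, assuming light in fiber travels at roughly two thirds of vacuum speed and a made-up ~10,000 km trans-Pacific path:

```python
# Rough one-way propagation latency over fiber; illustrative only.
# Light in fiber travels at roughly 2/3 of c: ~200,000 km/s.

C_FIBER_KM_PER_MS = 200  # ~200 km of fiber per millisecond

def one_way_latency_ms(distance_km: float) -> float:
    """Propagation delay only; ignores routing and queuing overhead."""
    return distance_km / C_FIBER_KM_PER_MS

US_TO_CHINA_KM = 10000  # assumed rough trans-Pacific distance

print(one_way_latency_ms(US_TO_CHINA_KM))      # full span, ~50 ms
print(one_way_latency_ms(US_TO_CHINA_KM / 2))  # to a midpoint, ~25 ms
```

So a midpoint site roughly halves the worst-case one-way delay for both sides, which is the whole appeal for latency-sensitive collaborative tasks.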

Neat project.


Probably a datacenter...

Or perhaps a bunch of less efficient small data centers.

Makes you wonder if this could eventually lead to datacenters in space, say for financial or other latency-sensitive applications, and the kinds of exotic architectures needed for 17,000 MPH servers to shuffle traffic around.
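For scale, a quick sanity check on the numbers in that speculation, assuming a made-up ~550 km low Earth orbit and radio propagating at vacuum c:

```python
# Quick sanity check on latency to a low-Earth-orbit server; the
# altitude is an illustrative assumption, not any real deployment.

C_KM_PER_S = 299_792.458   # speed of light in vacuum
LEO_ALTITUDE_KM = 550      # assumed orbit altitude

one_way_ms = LEO_ALTITUDE_KM / C_KM_PER_S * 1000
print(f"One-way, satellite directly overhead: {one_way_ms:.2f} ms")

# And 17,000 MPH converted to km/s, for a feel of orbital speed:
mph_to_km_s = 1.609344 / 3600
print(f"17,000 MPH = {17000 * mph_to_km_s:.1f} km/s")
```

A couple of milliseconds straight up is competitive with terrestrial links in principle; the hard part is that a server moving at ~7.6 km/s is only overhead briefly, hence the exotic hand-off architectures the comment imagines.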
