The way I read this, they are achieving savings by virtually muxing (or demuxing, depending on your viewpoint) almost everything that's not the CPU. Is this optimized to make supporting virtual servers with relatively low throughput more efficient?
I read it to be that they created a chip that runs code written to emulate/virtualize most of the other (non-CPU) logic on a motherboard. So instead of 10 "IO" chips and 10 "memory" chips, most of which are idle, they have 15 virt chips which may act as either "IO" or "memory" depending on demand.
I guess (de)muxing is close to that? Multiplexing many "requests" amongst fewer real (non-virtual) objects.
I read it to be that they created a chip that runs code written to emulate/virtualize most of the other (non-CPU) logic on a motherboard.
That's exactly what I meant by "virtually (de)muxing everything but the CPU." I should have said, "CPU+chipset," though.
I wonder if their architecture is also (unintentionally) optimized for languages like Ruby/Python? I suspect that languages like those tend to have more CPU operations per IO operation. I wonder if anyone has researched this metric?
Well, they're only muxing I/O; each Atom CPU has a chipset chip and its own DRAM (up to 2 GB - the Atom has a hard 4 GB limit - and no ECC, since Intel only offers that for its official server chips).
Interesting... I've never been in a really big datacenter, so I'd like to see some (hopefully non-biased) reviews from somebody that does work in those places.
Would this really work well for the intended market? There are lots of startups over here that plan on massively serving webpages - would something like this (only cheaper :) ) make you reconsider using whatever cloud services you're currently using?
There are lots of startups over here that plan on massively serving webpages
SeaMicro is making the same sort of lateral move that RISC made and that Transmeta attempted to apply: could certain functions happen more efficiently if moved outside the traditional 'box'?
Could a datacenter provide a low-latency infrastructure for front-end web-servers while reducing their power and other expenses? I think there's a good chance this is already being done.
What I see right now is that people are choosing fewer, higher-speed, more expensive, and more power-hungry cores over more, slower, cheaper, and more efficient cores. Anyone who uses Xeons instead of the new 8- or 12-core Opterons has made this choice. Sure, the Xeons kick ass when it comes to single-threaded performance, but as far as performance per watt or performance per dollar go, the new Socket G34 Opterons kick the shit out of the Xeons. For the cost of an 8-core Xeon, I can get a low-power 8-core Opteron, 32 GiB of RAM, and a nice SuperServer to put it all in.
James Hamilton, VP at Amazon and author of the popular blog "Perspectives," has some interesting insights into and great things to say about the work SeaMicro and other new startups are doing to revolutionize the server industry.
See http://news.ycombinator.com/item?id=1429625 for the HN item on that excellent post of his, with a few comments - e.g. he notes this system has no ECC, and that Intel is pretty unique in imposing that limitation.
Hopefully it will be cheaper. $20/month might sound very cheap, but for some guys (like myself) living in 3rd world countries, it's next to impossible.
Hey, I have a question. If I wanted to advertise to /technical/ people in your area, where would I do it?
Note: I'm not really interested in business people; I want *NIX enthusiasts who are /not/ spammers, who can write English and are capable of running a Linux box that won't get compromised, and who don't need much help from me to run the thing.
yeah, that's the sort of thing I'm looking for... but often it's hard to tell from the webpages which ones are popular, and which ones are just two guys.
Hm. But yeah, I should play more with LUGs in poor parts of the world; I've tried to give away stuff to local LUGs around here (in the SF bay area) but they usually have a setup they like already, and other providers falling over themselves to give them free crap.
For my ad money, I like display ads rather than pay per click ads... I am just trying to say 'Hey, I exist' and pay per click is usually not a very cost-effective way of doing that.
(now, most of my advertising 'budget' was spent on, say, writing a book... but per-click ads are the opposite of what I want.)
Thanks. I've never used referrals, though. Usually I prefer cheap display ads, donating services, or community participation. I have used discounts for certain low-risk groups; I guess that's a little like a referral.
Try http://www.philweavers.net - I am not sure how it is now, but I have had a bunch of clients from their old site.
I think most people there are from Luzon - the biggest island in the Philippines, which also has the biggest city - so it is probably more expensive there than getting someone from Cebu. http://www.mynimo.com/ is local to Cebu. Or you could try JobsDB.com or Jobstreet.com (almost all the big companies in my area advertise on JobsDB.com and Jobstreet.com).
What sort of skills are you looking for? I know some people who can run Linux boxes.
Edit: Just contact me through my email in my profile since I am a bit busy at the moment to check HN regularly.
hah. I'm trolling for customers, not employees. I serve the price sensitive, you see, but I do it in part by providing less support service, so I need people who know what they are doing as customers.
If you'd like really cheap web hosting, try http://nearlyfreespeech.net. Write something in PHP, and you can use a SQLite database for free (just put a high timeout on write locking - quick sketch at the end of this comment) or a MySQL database for $0.01/day or something. They don't support FastCGI, so any other language would require starting up a new runtime for each web request, which is pretty much a non-starter if you can't afford $16/mo (which is the prepaid limit).
If you need money for a project to get kickstarted, try Kiva or Kickstarter.
On the plus side, if 5 of you can fit onto an EC2 small instance, it's only $6/mo if you do a spot instance with a high maximum.
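Just to make the write-lock timeout bit above concrete, here's a minimal sketch using Python's sqlite3 module (the same idea applies to whatever SQLite binding PHP gives you); the database name and table are made up for illustration:

  import sqlite3

  # Open the database with a generous busy timeout (in seconds): when another
  # request holds the write lock, this connection waits instead of failing
  # immediately with "database is locked".
  conn = sqlite3.connect("site.db", timeout=30.0)

  with conn:  # commits (or rolls back) automatically
      conn.execute("CREATE TABLE IF NOT EXISTS hits (path TEXT, ts REAL)")

  def record_hit(path, ts):
      # Each write grabs SQLite's global write lock briefly; short
      # transactions plus the long timeout keep lock contention mostly
      # invisible on a low-traffic site.
      with conn:
          conn.execute("INSERT INTO hits VALUES (?, ?)", (path, ts))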
Thanks. NFSN looks really awesome - but the pricing is a bit too complicated. Right now, I am with Webhostingbuzz and I get a warning for using too much CPU from time to time.
Kiva / Kickstarter is very interesting. I wish I had heard of these sites right before my "busy season".
Right now, I am planning to save up so that I can get a Linode for one year.
eventually, maybe; but right now there just isn't very much price pressure in the VPS industry.
Personally, I won't touch any hardware until I can get it from more than one vendor (usually, until you get to that point, it's too expensive). There may not be much price pressure in the industry as a whole, but in my niche, there certainly is.
right now there just isn't very much price pressure in the VPS industry
Isn't that the point? Right now, there isn't much of a margin left to undercut competitors with unless you scale up to Amazon or Rackspace size. Developments like this could change that again.
Look at my previous post; this is actually a good bit more expensive than 32 GiB RAM / 8-core Opteron pizza boxes.
People keep telling me that margins are low in this industry. I mean, my prices are dramatically lower than others', and I imagine my internal processes are rather a lot less efficient than the competition's (we provision by hand, for example). Also, we're not nearly at scale; we have 4 full racks and maybe 15 kW of power capacity total. I'm not making a lot of money, but I'm afloat on half the revenue and higher internal costs.
I mean, if you'll pay my bills, I'll pretty much do it for fun, so I'd expect my margins to be lower than most, but considering the price differences, some people must be making some serious coin.
Indeed. It's a rung on the ladder to a time when what fills a rack or a server room today will fit into a 1U tomorrow. There will come a time in the not-too-distant future when hosting a site that quite literally a majority of the world's population accesses multiple times per day will simply be a matter of setting up a relatively run-of-the-mill, inexpensive VPS. The implications such technology will have on business, society, culture, etc. are left as an exercise for the reader.
Big surprise, Dell, Hewlett-Packard and IBM not innovating? I can't believe it...
Just look at the computers they are selling. Most still offer 512 MB of RAM as the default. What can you run on 512? Nothing like having to upgrade the day you receive your product. Consumers by and large need the guidance of manufacturers to make the right hardware choices, and the manufacturers just want the cash. They will suck each segment dry until the market forces them to make the changes. That's progress?
With all the advances in multi-monitor add-on SW or the fact that Windows and Mac OS have been supporting multi-monitor control for years, try finding hardware already fitted with multiple display connections. In the end, the user is forced to customize their own equipment. And yes, I know most HN readers do this, but I am talking about the general public.
Great to see the little guys are still fighting. The big ones really don't give a damn.
A lot, really. I have rendered DVD quality movies, handled product composition databases and processed millions of orders on machines with less than 512 megs of memory.
Not many companies (or people) have enough data or volume to stress a machine with 512 megs of RAM. I agree it seems a pittance - my netbook has more than that - but if I break up memory usage by application here, Firefox takes most of the RAM, then Gwibber, then Rhythmbox. None of them would be running on a server.
It would be interesting to write down a list of what you can't do with 512M.
Well, to start, you can't effectively run Mac OS X or W7 - the most used operating systems in the world - on 512 MB. Even Ubuntu needs an upgraded graphics card and RAM to utilize all of the bells and whistles.
As I said before, I was referring to the general public. There will always be hackers or high-end users that can make miracles.
I run Ubuntu on 512 MB of RAM, and it has always performed well for me. I don't think it is a miracle to run on that amount of memory; most people could get by with a lot less. Most people just want more memory for more eye candy.
"Even Ubuntu needs an upgraded graphics card and RAM to utilize all of the bells and whistles."
Bells and whistles are what make computing fun for the average user. Why do you think the iPhone is such a hit? Its basic functionality? Or the fact that you can have a lot of fun on it? It even sucks as a phone and people don't seem to mind.
In my IT business, the first thing I do for most customers is upgrade their RAM because their desktops come so poorly packaged and the performance is abysmal. Once done, I have never heard a complaint.
We need to stop talking about exceptions and start discussing general usage. Most users are running W7 or Mac OS X and run a multitude of programs and apps at the same time. I believe in breaking bottlenecks.
Lastly, I am glad to see smaller companies making a difference, like the one in the article.
Why do you think the iPhone is such a hit? Its basic functionality? Or the fact that you can have a lot of fun on it?
The iPhone was initially a hit because it was the first smart phone that was really designed for the consumer (before that, the way I see it, there were mostly business smartphones). It is a hit today because it is seen as the cool phone to have and it is very well marketed. The eye candy provides only the finishing touch.
their desktops come so poorly packaged
People get what they pay for, and a lot of people pay a relatively small amount of money to a company that puts a large markup on the hardware in their PC, so they end up with something cheap. Most higher-end machines from the likes of Dell are pretty usable on the RAM front, from what I see.
Most users are running W7 or Mac OS X
On the server this doesn't matter at all. Servers run mostly Linux, generally with no UI. Or Windows Server - which runs with a UI not that much more advanced than what you might have found back with Windows 2000.
Lastly, I am glad to see smaller companies making a difference, like the one in the article.
I'm sure we all are - but I don't think that they are making the difference you think they are. A 512-core server setup won't help any user make a home video, listen to their music, or play games. What it will help is large datacenters, which can serve more requests simultaneously.
"The iPhone was initially a hit because it was the first smart phone that was really designed for the consumer." Yes, you are correct - because consumers want the bells and whistles I mentioned earlier. Apple's marketing was a success in highlighting the eye candy that Mac OS X has been known for. Even MS eventually added most of the visual effects associated with Apple's OS.
The consumer is mostly uninformed when it comes to their hardware needs, and that is where the manufacturer should step in. As another HN article discussed, the confusion in PC model labeling is another example of this carelessness. Steve Jobs has always said the consumer does not know what they want; you have to tell them.
I should have said most consumers use Mac OS X or W7 and those machines need more than 512M of RAM.
Whether it's helping large data centers improve their power consumption, creating consumer products that inspire and ignite a passion, or pushing larger competitors to keep innovating, smaller companies always make a difference.
I agreed with the article in the sense that a smaller company has seen the error of some major manufacturers' ways. My opinion was that these major manufacturers do not have the user's best interest in mind and used their PC sales as another example.
They have billions for research at their disposal and they couldn't see that their equipment was too power hungry, would not be sustainable in the long run and probably needed re-tuning? I'm guessing they did.
It would have been better to see a comparison with SGI (ex-Rackable) CloudRack systems, which have a bit of an in-between approach, using Xeons, but at least nominally seem to pack more cores into the same enclosure size. One of their power tricks, in addition to pulling DC converters back further from the computers, is to allow things to run hot, resulting in savings on cooling costs.
Too bad the electrical signal on the board cannot be altered in some way to allow less heat production for a given amount of processing. That would be a win-win situation all around. The heat seems like wasted energy, perhaps inefficiency that could be done away with by thinking outside the box.
The SGI machines are geared pretty heavily toward supercomputing applications. As such they're tailored for applications that need to share data between processes efficiently, and that also need to do a lot of computing.
The SeaMicro server probably would look pretty lousy when compared on SGI's turf, since Atom processors aren't designed to be performance-competitive with the current desktop and server processors.
What would make more sense is a comparison with a datacenter for a company like Disney or Google or Amazon. I suspect that even though SeaMicro's rig would look pretty bad for heavy number-crunching apps, it would most likely do very well in web and database serving. Those apps don't need to share a lot of data between threads/processes, and they're generally I/O bound rather than compute bound. With proper caching setups, they're mostly network bound rather than disk bound, too.
An ARM-powered variant would likely sit above this Atom-powered machine, with quite possibly the same or better numbers when looking at performance per watt.
The article quotes the interviewee saying that ARM chips could be used, but it isn't clear whether they actually have ARM versions of these machines - which I'd be very interested in seeing 100k SPECint_rate figures for.
Most of their technology is not related to the processors themselves, but to the rest of the computer. They've got a fancy high-bandwidth, low-latency interconnect between the chips. They multiplex access to I/O hardware, which I'm sure takes advantage of that nice interconnect. They have hardware-accelerated network load balancing which tries to keep the CPUs at maximally-efficient levels of utilization, letting the spare CPUs sleep, and avoiding malfunctioning CPUs. I would love to see this with ARM Cortex-A9 processors, and I think that's actually very likely if this company succeeds.
It looks to me like Larrabee's successor is going after the high-performance computing market, not the server market. Why else would they devote so much die area to having loads of wide floating-point SIMD units?
There are going to be limits to the amount of pressure as long as Intel doesn't offer ECC for any variety of Atom microarchitecture. One of the articles I read said that they thought about using an AMD chip, but it wasn't going to ship soon enough. I can see them hearing from their customers (would-be and actual) that they'd buy more if they offered ECC, and SeaMicro offering an AMD-based box in the future.
Great news! Hardware innovation typically means new software opportunities. If this turns out to be a generally accepted workhorse server design and not just a hotrod box, I wonder who will be the first one here to develop a profitable software product for it.
Dell products have features and services that make them enterprise friendly; they are more than just hardware to the customer. So trying to compete with Dell head-on might not be the company's best strategy at first. Perhaps following the strategy EMC used with CLARiiON - selling through Dell - would be more of a money maker for the company.
Interesting in terms of technology, but of no interest to me as someone who does colocation and web servers for my clients, especially since they almost all use traditional RDBMSes like MySQL and Postgres.
The Atom is too underpowered and too RAM limited for individual systems - you would do better in most cases with a 2x quadcore setup and 32-64GB RAM combined with OpenVZ or Solaris Zones. Lack of ECC = automatic disqualification for me as well.
For a company that is doing a lot of web serving a la Facebook or eBay I can definitely see the appeal. In such larger cases, power usage dwarfs many other considerations.
I would rethink that if I were you. What if you started to provide virtual private servers on one of these boxes instead of offering colocation? I think this innovation makes VPS solutions even more efficient than they are today.
The thing about being a VPS provider is that if your hardware chokes, your customers notice, and they get pissed. Sure, if you are running some kind of web farm, you set it up so that things keep running without interruption if you lose a server. But if you provide VPS hosting and you lose a server? People notice, and they will be pissed. If you are a VPS provider and you lose customer data? They will parade your severed head around town on the end of a pike.
Amazon has successfully tackled this problem by training their customers to expect that any server can be shot at any time. This is good for them, but it's not how VPSes have traditionally operated. Customers expect the thing to stay up and their data to stay safe.
I've seen ECC mentioned a few times in this thread but how relevant is it really? At my old job we used both regular COTS hardware from Dell and HP and high-performance gear from SUN, often at four times the price. The latter had ECC memory and, if memory serves, shielded CPUs but I never noticed much difference in stability or performance when compared to regular hardware. If someone has some numbers on this, I'd love to see them.
If your Dell and HP kit was server grade, it almost certainly had ECC RAM. Before Nehalem, it was almost impossible to get dual-CPU servers that would support non-ECC RAM. Even today, now that unbuffered RAM is supported (buffering and ECC are not the same thing, but it's unusual to see registered or buffered RAM that is not also ECC), nearly all server hardware you get from the likes of Dell or HP is going to default to using ECC RAM.
The big deal with ECC is that with non-ECC RAM, you don't know when in-RAM data was corrupted. If you have a bad bit of RAM that is being used for your journaling/disk caching, it's pretty easy to corrupt your data.
If you've got ECC, it corrects single-bit errors and logs that there was a problem, and it can be configured to crash on double-bit errors rather than just silently corrupting your data.
If you use non-ECC RAM, not only will bad RAM silently corrupt your data and sometimes crash your server, but there will be no indicator (other than random crashes) that the problem is in fact bad RAM.
You can get away with not having ECC in large server farms if you have lots of checksums on generated data... but in a VPS-type situation where people get pissed if you lose a single server, it's sure to end in tears.
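To make the "lots of checksums on generated data" idea concrete, here's a minimal sketch in Python; the file layout is made up for illustration, and note that unlike ECC this only detects corruption after the fact - it can't correct it or tell you which DIMM went bad:

  import hashlib

  def write_with_checksum(path, data):
      # Store a SHA-256 digest alongside the data, so corruption introduced
      # after this point (in the page cache, on disk, or in RAM when the
      # data is read back later) becomes detectable.
      digest = hashlib.sha256(data).hexdigest()
      with open(path, "wb") as f:
          f.write(data)
      with open(path + ".sha256", "w") as f:
          f.write(digest + "\n")

  def read_and_verify(path):
      data = open(path, "rb").read()
      expected = open(path + ".sha256").read().strip()
      if hashlib.sha256(data).hexdigest() != expected:
          raise IOError("checksum mismatch for %s - possible memory/disk corruption" % path)
      return data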
James Hamilton (the Amazon VP linked to in another comment) also mentions lack of ECC in his appreciation of SeaMicro; he links to his own take last year, referring to studies with numbers: "You Really do Need ECC Memory in Servers"
http://perspectives.mvdirona.com/2009/10/07/YouReallyDONeedE...
Hm. The price data would show it's not more efficient in terms of capital. $140K, right? Before disk? For 1,024 GiB of RAM and 512 CPUs. With my current setup, I end up paying about $1,900 for 32 GiB of RAM and 8 (/much/ more powerful) CPUs before disk. So I'd need 32 of those puppies, at a much more reasonable $60,800 - it would give me half as many CPUs, but, uh, from my experience an Atom core is less than half an Opteron core.
How much power would that eat? In my setup, we'd be talking about 3,840 watts - each of those single-socket 8-core boxes eats about one amp at 120 V, give or take. They say 4 W per server, so that'd be 2,048 watts for their solution; so yeah, it will take quite a while to make up the price difference.
Now, there are some reasons why having your own physical server is better than having a virtual slice, but many of those reasons have to do with I/O, and these boxes don't come with local storage... so at least for my workloads, they still can't beat the nice, standard Opteron G34 systems I throw together in my workshop.
You can put up to 64 hard disks/SSDs in the front.
If your power consumption figures are right, it would take 46 years to make up the difference at 10 cents/kWh. As I recall, you'd roughly halve that once you add in the cost of cooling, and then you'd have to figure in the lower capital costs of needing half the cooling.
Also possibly subtract some network hardware. The box provides "Up to 64 1 Gbps or 16 10 Gbps uplinks". And it may be more "server grade" than your setup, it integrates load balancing and server management; at the very least, the hotswap granularity is smaller.
That still sounds off, except for companies that are running out of machine room space (the type of premium you pay for a laptop).
How sure are you of your alternative's power consumption? I.e. have you hooked one up to a Kill-A-Watt meter? They claim 1/4 of the power of "today's best in class volume server".
I think we're obsessing too much on the "$139,000" figure, if you look at http://www.seamicro.com/?q=node/38 you'll see there are so many options (1 or 2 GB per CPU, Ethernet and storage cards, possibly/likely some cost for the optional hot swap disk physical infrastructure, etc.).
To make a good comparison you'd need to figure out and price a system including load balancing and network hardware, measure it for real and contact SeaMicro to find out the real price for a comparable system from them. Plus until they work through their backlog they won't be too eager to discount....
I can slot 4 disks in the front of my pizza boxes, too... I was just doing the calculation without disk because the blade was priced out without disk.
My power figures are approximate but real-world; this is what it says when you log in to the APC.
Also, unless you own the data centre, cooling costs are priced into power. 2x 20 A 120 V circuits are the most I can get in one rack (they don't want too much power density, right?); that's 30 A at 120 V usable, or 3,600 watts, and it costs me about a grand a month. Since you can easily slot more than 30 pizza boxes into a 44U rack, density doesn't matter, because even my lowest-density solution gives me more power density than the co-lo cooling system can handle. So when I'm pricing things out, I simply count each watt as costing me around $0.28 per month ($1,000 / 3,600 W). That includes rack space, but as I said, in the data centres available to me, the factor limiting density is the cooling system, not my equipment.
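For anyone who wants to redo the back-of-the-envelope payback math with their own numbers, here's a throwaway sketch; every input is just a rough figure quoted earlier in this thread, not a measured value, so treat the output as ballpark only:

  # Rough figures from the thread - adjust to taste.
  seamicro_price   = 139_000.0      # USD, one box as quoted, diskless
  diy_price        = 32 * 1_900.0   # 32 Opteron pizza boxes at ~$1,900 each
  seamicro_watts   = 512 * 4.0      # "4 W per server"
  diy_watts        = 32 * 120.0     # ~1 A at 120 V per box

  price_per_kwh    = 0.10           # raw utility power
  colo_per_w_month = 0.28           # all-in colo rate (power + cooling + space)

  capital_gap = seamicro_price - diy_price
  watt_gap    = diy_watts - seamicro_watts

  years_raw  = capital_gap / (watt_gap / 1000 * 8760 * price_per_kwh)
  years_colo = capital_gap / (watt_gap * colo_per_w_month * 12)

  print(f"payback: {years_raw:.0f} years on raw power, {years_colo:.0f} at the colo rate")

With those inputs it lands around 50 years on raw utility power and around 13 years at the all-in colo rate - the same "quite a while" ballpark as the 46-year figure above, just with slightly different assumptions.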
With that many processors crammed into such a small case isn't heat dissipation a problem? Or are the Atom chips used not as power hungry as a standard commercial cpu?
"Atom chips used not as power hungry as a standard commercial cpu" is quite literally the entire point of this startup/product and the article. You had to quit reading before the second sentence to miss this:
> The startup is announcing today it has created a server with 512 Intel Atom chips that gets supercomputer performance but uses 75 percent less power and space than current servers.
No, no: each board has 8 CPU/chipset/DIMM(s?) sets (the latter on the other side of the board), with 4 ASICs handling all the rest of a motherboard's I/O and virtualizing it (e.g. each core sees 4 virtual SATA drives). That's why each board needs 2 x16 PCIe connections, the backplane has a fancy supercomputer-style torus topology interconnect, and you can put up to 64 physical disks in front. Lots of Ethernet out the back, and there's an FPGA system for the backplane, which might be due to the small number of units but will give them lots of flexibility.
"People who are really serious about software should make their own hardware." - Alan Kay
This news is yet another data point that developers will need to hack concurrency sooner rather than later, as a core skill in one's professional repertoire. Off to learn Stackless PyPy, Clojure, Scala, etc...
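To be clear about what that means at the trivial end, here's a sketch using nothing fancier than Python's multiprocessing; the log files are hypothetical, and the point is just that the work has to be expressed as independent chunks before a box full of slow cores (or Clojure/Scala-style concurrency) buys you anything:

  from multiprocessing import Pool

  def count_words(path):
      # Independent per-file work: no shared state, so it spreads
      # trivially across however many (slow) cores you have.
      with open(path) as f:
          return path, sum(len(line.split()) for line in f)

  if __name__ == "__main__":
      files = ["access.log", "error.log", "app.log"]  # hypothetical inputs
      with Pool() as pool:  # one worker process per core by default
          for path, words in pool.map(count_words, files):
              print(path, words)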
Is Apple just an anomaly? Also, Sun was bought partially so that Oracle could offer a complete software/hardware package, so I'm not sure your point stands. The others had great runs with great hardware/software... they just didn't survive to the next generation.
Apple, Google, and Microsoft don't build all their own hardware -- not in the same sense as Sun, SGI, and Cray. Apple uses Intel and ARM chips (even the A4 is a Cortex-A8 with a PowerVR SGX). Apple builds the best cases for its hardware, but the hardware is all pretty standard.
I think I can afford to wait until people start proving undeniably (through out-competing rivals) that Scala and Clojure have any pragmatic benefit in real multicore programming tasks. Anything that mentions Fibonacci numbers doesn't count, nor do easily parallelizable problems commonly handled in C/C++ (ray tracing, media encoding, web servers, etc.) It needs to be something not trivially broken up into independent parts.
He talks about how this scheme can parallelize non-trivial data structures, and how tedious bookkeeping can be avoided. I haven't used it in a project yet, so I can't speak to its efficacy - but worth a watch, imho.
Don't forget that part of the claim is not that the resulting programs make better use of computing resources, but that they are more easily developed and debugged. Which is pretty important in this arena.
And while I don't know about Scala, Clojure is a little too young to obsess over performance; each point release (1.0, 1.1, etc.) is delivering significant improvements.
This seems quite similar to the FAWN project at CMU. http://www.cs.cmu.edu/~fawnproj/ The idea is similar: if IO is the bottleneck, instead of scaling up IO, scale down the CPU power.
I do see a potential problem here. In the pictures, they show a bunch of Atom CPUs soldered directly to the board. That means dire things for service. Now, if a single CPU has a flaw, you need to replace an entire board of CPUs.
Compare this to a standard blade setup, where you could just swap out CPUs, or even an IBM System Z where you could hotswap the CPU, and service doesn't look so great.
I worked with one of these prototype boxes. I was using it for something a bit outside their common use case: clustered ETL processing of log data. I was quite happy with the performance. In the workloads I had that needed lots of threads, I was able to use the box to spin up a lot of nodes and crunch through several hundred GB of log data very quickly. The machines were easy to work with since they felt like normal Linux nodes, and the interconnect fabric made inter-process communication very snappy.
I'm still confused, is it a single multiprocessor machine, or is it a 'cluster' in the sense that each node has a single processor? (You used both threads and inter-process, but also implied multiple nodes...)
I apologize for the self-reply, but am I wrong here? I mostly use OpenBSD, and Linux only lives on servers that I don't physically see booting. I remember when I first started using Linux, there would be one penguin for each processor core. Booting it up for the first time on a dual-Xeon machine and seeing 8 of them was kind of funny, in a nerdy way.
Would this be different? If there are 512 cores...would it show a penguin for each core?
Sorry, like I said, I haven't actually watched the boot process on a linux machine for quite some time, and now I'm a bit curious...
Sorry if this question is juvenile or stupid or something.
It's a cluster: each node is a single-core processor with a chipset chip, a DIMM, and half of an ASIC providing virtualized I/O (e.g. each core sees 4 virtual SATA drives) and a connection to a supercomputer-style interconnect. Memory is not shared; each core runs its own OS image.
Sorry I was rather vague there.
Ultimately, I needed lots of parallelism. The smallest unit of parallelism in my ETL tool is threads. If you need more threads than one core or CPU can deliver, then you cluster out to multiple machines, and they communicate via sockets as needed.
Performance per watt depends on the computer and the workload. It's not apparent from the SPECint benchmarks they show in the article, but chips like Atom are better than beefy server chips for some server workloads, and worse for others, when measuring performance per watt. Here's a paper from Berkeley's RAD lab which tries two Atom processors and a Xeon on several different server workloads, and compares their performance per watt:
The tl;dr version is that what you really want are hybrid datacenters, where you can assign various workloads to different types of machines, and use each machine type for what it's best suited to do.
That chart compares theoretical peak GFLOPS per watt, which is an absolutely terrible metric for server workloads. As the paper I linked to showed, Atom is sometimes much better and sometimes much worse than large, fast server processors in performance per watt on real server workloads. Those charts don't take into account things like cache, or branch prediction, or multicore memory coherence.
The only way to get meaningful performance per watt data is to actually run real workloads on various processors.
Yeah, there are still some single-threaded tasks that I want my servers to be able to tear through. For serving up HTTP requests, though, this sounds great.
I would think that ARM chips would use less power than Atom, but compatibility with existing software is a big selling point (as always). Still, this would provide pressure towards ARM servers, and therefore ARM-compatible software.
OTOH, I get the impression that the bulk of the power savings are not from the CPU at all, but from virtualizing the other components. Therefore, the pressure towards ARM is much less.
And I suppose Intel wouldn't be supportive unless there was a compelling long-term reason to choose Atom.
Anyone care to suggest some ideas on a few things:
a) who would likely buy these (corporates, SMEs, startups)?
b) it seems that they have increased the risk of a single point of failure (e.g. one PSU taking down 128 nodes) - what's the mitigation strategy?
c) what would an architecture on a box like this look like? Should I just be thinking of it as a cheap set of VPS nodes?
d) People keep mentioning the kind of processing these chips are good for and not so good for. Can someone be explicit about good real world uses and bad ones?