The way I read this, they are achieving savings by virtually muxing (or demuxing, depending on viewpoint) pretty much everything that's not the CPU. Is this optimized to make supporting virtual servers with relatively low throughput more efficient?
I read it as: they created a chip that runs code written to emulate/virtualize most of the other (non-CPU) logic on a motherboard. So instead of 10 "IO" chips and 10 "memory" chips, most of which sit idle, they have 15 virt chips which can act as either "IO" or "memory" depending on demand.
I guess (de)muxing is close to that? Multiplexing many requests among fewer real (non-virtual) resources, roughly like the toy sketch below.
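Toy sketch of the pooling idea only, in Python; the names and numbers are made up and have nothing to do with the actual hardware:

    # A pool of generic units that take on a role ("io", "memory", ...) on
    # demand, instead of dedicated chips that idle when their role isn't needed.
    class VirtPool:
        def __init__(self, size):
            self.free = list(range(size))   # generic units with no fixed role
            self.role = {}                  # unit id -> role it's currently playing

        def acquire(self, role):
            unit = self.free.pop()          # any free unit can play any role
            self.role[unit] = role
            return unit

        def release(self, unit):
            del self.role[unit]
            self.free.append(unit)

    pool = VirtPool(15)            # 15 generic units instead of 10 IO + 10 memory
    a = pool.acquire("io")
    b = pool.acquire("memory")
    pool.release(a)
    c = pool.acquire("memory")     # the unit that was just "io" comes back as "memory"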
I read it as: they created a chip that runs code written to emulate/virtualize most of the other (non-CPU) logic on a motherboard.
That's exactly what I meant by "virtually (de)muxing everything but the CPU." I should have said, "CPU+chipset," though.
I wonder if their architecture is also (unintentionally) optimized for languages like Ruby/Python? I suspect that languages like those tend to have more CPU operations per IO operation. I wonder if anyone has researched this metric?
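If anyone wants to eyeball it for their own workload, here's a crude first pass; it only compares process CPU time against wall-clock time (the gap is roughly time spent blocked on IO), and the URL and busywork are placeholders:

    import time
    import urllib.request

    def cpu_vs_io(fn):
        wall0, cpu0 = time.perf_counter(), time.process_time()
        fn()
        wall = time.perf_counter() - wall0
        cpu = time.process_time() - cpu0
        print(f"cpu: {cpu:.3f}s   blocked (mostly IO wait): {wall - cpu:.3f}s")

    def workload():
        data = urllib.request.urlopen("http://example.com").read()   # IO-bound part
        sum(b for b in data * 1000)                                   # CPU-bound part

    cpu_vs_io(workload)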
Well, they're only muxing IO; each Atom CPU has its own chipset and its own DRAM (up to 2GB, the Atom has a hard 4GB limit), and no ECC, since Intel only offers that on official server chips.