Indeed. You should never, ever just spin up a single physical box, put virtualization on it, and go to work. Doing that is not only failing to engineer for failure, it's setting the company up for failure.
I agree with you. Many applications grow linearly or not at all. I personally run a couple hundred servers, but it's all VMware, Chef, and clustered. Frankly, most just sit at a stupidly low load average, but I don't fuss over it because the spare cycles aren't wasted and I can have hosts and storage fail with impunity.
There are many ways of doing it. Modern virtualization is basically direct hardware access for the stuff that counts.
But I digress. I am guilty of overbuilding things: in part for fun, in part hubris, in part because I like sleeping at night.
Virtual machines are awesome and have many incredibly great uses. However, they aren't awesome for everything. For many purposes, I would personally avoid VMs when running the OS on bare metal is an option.
VM-centric development seems so wasteful to me, especially doing it on your hacker home lab. Say you have six cores and overprovision 3x; that leaves you with 18 VMs, each running One Thing in the VM ethos. Meanwhile, open a PC or a Mac and see how many services are running on the bare metal: it's probably over 100, and they're not even impacting you. The author wants you to run Parallels to run VMs on Macs… it's madness.
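The "over 100 on bare metal" claim is easy to check yourself. A quick sketch, assuming a Linux or macOS box with a POSIX `ps` (the exact count will obviously vary per machine):

```shell
# Count every process currently running on the bare metal.
# tail -n +2 drops the header row that ps prints.
ps -e | tail -n +2 | wc -l

# The overprovisioning math from above: 6 physical cores at a
# 3x overcommit ratio gives 18 one-service VMs.
echo $((6 * 3))   # 18
```

On a typical desktop the first number lands well into the hundreds, which is the point: one kernel already multiplexes that many workloads without a hypervisor in the way.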
I know a good way to make a process make the most of the hardware and play cooperatively with other processes: don't use virtualization.
I will never understand the whole virtual machine and cloud craze. Your operating system is better than any hypervisor at sharing resources efficiently.