It seems to offer the services integrating with it a convenience that wasn't possible before. If it's a nice addition for the customer, then they get value out of it.
I agree, but it also lets you automate, and between that automation and outsourcing the simple stuff to a managed service provider, you can probably get by with fewer infrastructure people.
That this is a solution applicable to _personal_ computing is a bonus. The real benefit is in datacenters, which could be made smaller, more efficient, and cheaper while simultaneously adding capacity.
Seems like a really smart way to use spare capacity to market your services. Especially so when your services aren't quite the norm (e.g. ARM rather than x86).
It is also a potential piecemeal way to migrate a monolithic app to the cloud. It can be used to offload a function from a larger application that doesn't scale well with the application itself (e.g. batch analytics). It can also be a nifty way to implement a specific function in a different tech stack if that makes it easier.
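To make that concrete, here's a rough sketch in Go of how a monolith might hand a batch analytics job off to a separately deployed function over HTTP. The endpoint URL and job fields are made up for illustration, not any particular product's API:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    // Hypothetical payload describing the offloaded analytics job.
    type analyticsJob struct {
        Dataset string `json:"dataset"`
        Window  string `json:"window"`
    }

    func main() {
        // The monolith hands the slow batch work to a separately scaled
        // endpoint instead of running it in-process. URL is a placeholder.
        job := analyticsJob{Dataset: "orders", Window: "24h"}
        body, err := json.Marshal(job)
        if err != nil {
            log.Fatal(err)
        }

        resp, err := http.Post("https://analytics.example.internal/run",
            "application/json", bytes.NewReader(body))
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("offloaded job accepted:", resp.Status)
    }

The point is that the analytics endpoint can scale (or be rewritten in another stack) independently of the monolith that calls it.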
In addition to what others have said, I think it’s a handy option for a lightweight HTTP backend or proxy when you need it to glue something together. For example, quickly fronting an S3 bucket, or serving static content generated by another container in the same pod, etc.
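Something like this is roughly all it takes; a minimal Go reverse proxy fronting a bucket over HTTP (the bucket URL is a placeholder, and the Host rewrite assumes S3's virtual-hosted-style addressing):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Placeholder bucket endpoint; swap in the real one.
        target, err := url.Parse("https://example-bucket.s3.amazonaws.com")
        if err != nil {
            log.Fatal(err)
        }

        // Tiny reverse proxy: every incoming request is forwarded to the
        // bucket, so the container does nothing but glue.
        proxy := httputil.NewSingleHostReverseProxy(target)

        // S3 routes on the Host header, so rewrite it to match the bucket.
        defaultDirector := proxy.Director
        proxy.Director = func(req *http.Request) {
            defaultDirector(req)
            req.Host = target.Host
        }

        log.Println("fronting bucket on :8080")
        log.Fatal(http.ListenAndServe(":8080", proxy))
    }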
There are other affordances though: the fleet now has a legitimate chance to standardize on a coherent software platform, reduce the number of network admins afloat (who cost way more), improve cybersecurity, etc.