
... until a service reaches a certain level of operation costs, a switch to other, more efficient technologies will be enforced by management.

Nicely put! ;)




I feel you're kinda moving the goal posts here.

From "always going to be unaffordable" to "Service will continue to be trimmed until the system collapses"


> then they'll need to either reduce demand

Worst case everything just gets a bit slower (right?), depending on how much traffic went through that cable.


> They'll probably raise prices of anything

> There ought to be strict laws around long service outages, resulting in automatic multiple months of free service, etc. as a deterrent for allowing things to just collapse for hours at a time.

FYI, that means even higher prices...


> How do you stop the billing otherwise?

With prioritisation: the non-steady-state services get stopped/killed early enough that the needed foundations are left running. :)
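
Roughly what I mean, as a minimal sketch. The service names, tiers, and the stop() hook are made-up placeholders, not any real orchestrator's API:

    # Stop services from the most expendable tier down, but never
    # touch the foundation tier. Everything here is hypothetical.
    FOUNDATION = 0   # never auto-stopped (auth, billing, data stores)
    STEADY     = 1   # core product features
    BURST      = 2   # batch jobs, experiments, non-steady-state work

    services = [
        ("auth",        FOUNDATION),
        ("billing",     FOUNDATION),
        ("web-app",     STEADY),
        ("batch-jobs",  BURST),
        ("experiments", BURST),
    ]

    def shed_load(stop):
        """Kill the most expendable services first; leave the foundation running."""
        for name, tier in sorted(services, key=lambda s: s[1], reverse=True):
            if tier == FOUNDATION:
                break
            stop(name)   # e.g. scale the deployment to zero

    shed_load(stop=lambda name: print(f"stopping {name}"))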


> That feels like it should be 100%.

It’s a matter of priority. During periods of revenue or user growth, reducing infrastructure cost may be a premature optimization. Now may be a time to focus on operational and logistical inefficiencies.


> make the business case

On the assumption that right now there may be fewer people using the services, so it may be the least disruptive time if anything goes down. Upgrading now means less hassle in the future.


I don't think end-users will get the message. Can you illustrate to them how much slower their service will be as a result of rising costs?

> Infinite scalability is also a curse

People don't like to admit it, but in many circumstances, having a service that is escalating to 10x or 100x its normal demand go offline is probably the desirable outcome.
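
A sketch of what deliberately capping scale could look like; the numbers and the AdmissionControl class are invented for illustration, not any particular framework:

    import time

    NORMAL_RPS = 100
    CEILING    = NORMAL_RPS * 10   # beyond 10x normal, start refusing

    class AdmissionControl:
        """Count requests per one-second window; refuse anything over the ceiling."""
        def __init__(self):
            self.window_start = time.monotonic()
            self.count = 0

        def allow(self):
            now = time.monotonic()
            if now - self.window_start >= 1.0:
                self.window_start, self.count = now, 0
            self.count += 1
            return self.count <= CEILING   # False => serve a 503 instead of scaling up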


> vertical scaling can take you so far these days that 99% of companies will never, ever reach the scale where they need more

It's less about the scale and more about HA and service interruption: your service will be down if the server dies.
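
A crude illustration of the HA point: a client-side failover loop that tries a standby when the primary is unreachable. The hostnames are placeholders; with only one server in that list, a dead box means the function returns nothing at all.

    import urllib.request

    ENDPOINTS = ["https://primary.example.com/health",
                 "https://standby.example.com/health"]

    def first_healthy(endpoints, timeout=2):
        """Return the first endpoint that answers its health check, else None."""
        for url in endpoints:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        return url
            except OSError:   # URLError and timeouts are subclasses of OSError
                continue
        return None   # everything is down; with a single server this is the norm

    print(first_healthy(ENDPOINTS))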


> Of course they can be capped, you just turn off the services.

That's not a hard cap, since turning off services isn't instant and costs continue to accrue. But, yes, there are ways to mitigate the risk of uncapped costs, and they can be automated.
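
For example, a soft cap can be approximated by polling spend and shedding non-essential services once a threshold is crossed. The get_month_to_date_spend() and stop_service() helpers below are hypothetical stand-ins for whatever your billing API and orchestrator actually expose:

    import time

    BUDGET = 500.00   # monthly budget in dollars (illustrative)
    NON_ESSENTIAL = ["batch-jobs", "experiments", "staging"]

    def enforce_cap(get_month_to_date_spend, stop_service, interval=300):
        """Poll spend every `interval` seconds; shed non-essential services at the budget."""
        while True:
            if get_month_to_date_spend() >= BUDGET:
                for name in NON_ESSENTIAL:
                    stop_service(name)
                # By the time this fires, spend has already crossed the budget and
                # keeps accruing while services drain -- hence not a hard cap.
                return
            time.sleep(interval)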


"Accordingly, we are currently working directly with their next provider to maintain smooth service as they cut over."

"Provider" really sounds like they are talking about infrastructure


> We recommend that over the next year, you identify an alternative solution and execute a migration strategy.

Something enterprise customers love hearing. Oof, imagine being assigned to sunset the service and explain it to whatever enterprise customers it still has.


> if you're only going to use a portion of a server

only if your data usage decreases over time.

There's a break-even point, and it's closer than you think. But by then, you're locked in. That's also why they're able to charge you more than a self-hosted storage setup: once you're locked in, the cost of switching is almost always just slightly higher than the cost of staying.
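
A back-of-the-envelope version of that break-even, with made-up numbers (none of these are real provider prices):

    cloud_per_tb_month = 23.00     # managed object storage, $/TB/month (illustrative)
    server_upfront     = 4000.00   # self-hosted box with disks
    server_per_month   = 150.00    # power, colo, admin time, $/month
    tb_stored          = 20

    cloud_monthly = cloud_per_tb_month * tb_stored   # 460 $/month
    self_monthly  = server_per_month                 # 150 $/month

    # Months until the upfront cost is paid back by the monthly difference:
    break_even_months = server_upfront / (cloud_monthly - self_monthly)
    print(round(break_even_months, 1))   # ~12.9 months with these numbers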


> ...no weird routes to tack on extra fees.

At first. They will optimize that later.


...yes?

You're already offloading critical parts of your infrastructure anyway.


> From an engineering point of view, with a scarce resource, dropping only the biggest user (Netflix) looks reasonable.

The decision of who is the biggest user needs to be made in realtime at the point of congestion, not identified on a quarterly basis by the accounting department and pushed out as policy across the whole network.
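
Roughly, the logic at the congestion point could look like this sketch; the flow names and the deprioritize() hook are hypothetical, and the capacity number is illustrative:

    from collections import defaultdict

    LINK_CAPACITY = 10_000_000_000   # bytes per measurement window (illustrative)

    class CongestionPolicy:
        """Track per-flow bytes in the current window; act only when the link is saturated."""
        def __init__(self):
            self.bytes_this_window = defaultdict(int)

        def record(self, flow, nbytes):
            self.bytes_this_window[flow] += nbytes

        def end_window(self, deprioritize):
            total = sum(self.bytes_this_window.values())
            if total > LINK_CAPACITY:
                # "Biggest user" is whoever is biggest in this window,
                # not whoever accounting flagged last quarter.
                heaviest = max(self.bytes_this_window, key=self.bytes_this_window.get)
                deprioritize(heaviest)
            self.bytes_this_window.clear()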


> When it reach the cap, all your service stop working?

That's exactly what I would like to have. Services stop and I get time to review what happened, without any stress that my bank account will be emptied.


> Real time tele-operations are very expensive and require specialized equipment.

Just because this is the current state of affairs doesn't mean it has to be this way in two years. That's the very definition of disruption.


> For the past year, they have been down a couple of magnitudes more time than I have spent managing my server.

I have spent orders of magnitude less time feeding my pet rock than the average dog owner spends feeding their pet.

The difficulty of keeping a service online depends on what it actually does. Not to mention, outages are generally caused by making changes, and changes are required if a service is going to continuously improve.

