Because designing, building, maintaining, and supporting multiple versions of a site costs more. Imagine a tech support call where the first thing you need to determine is whether the user is on your full-featured site or your reduced-functionality site, and then explaining to them that they're on entirely the wrong site. I'd guess the cost/benefit analysis just doesn't justify the effort in most cases.
True, I noticed that too. I mean, how low do you value your time to not pay $7/mo for a project you are spending significant time on?
On the other hand... there is a ton of intangible value, in the form of knowledge, to be gained from switching tech stacks and hosting platforms. So another way to look at it is: this is a chance to re-evaluate and potentially upgrade your tech stack or skills.
It might not be more difficult; hell, it's probably easier for each individual platform. But even if each one is 20% quicker to build than a web-based one, you'd still end up spending 240% as much overall: three native platforms at 80% of the web effort each is 3 x 80% = 240%.
Plus there are ongoing maintenance costs, probably a lot more documentation cost (unless they all look and act the same), more coordination needed on new features to ensure parity, and a larger surface area for bugs and security issues.
That's not a trade-off I'd want to make unless it was required for some other reasons.
In the case of software as a service, different pricing tiers are at least semi-justifiable, because you can argue that additional feature sets require more system resources to be added to the hardware infrastructure undergirding the site. If you want the more demanding features, you have to compensate the provider for the extra infrastructure hit they'll be taking. So it doesn't feel as lame as intentionally shipping a simple downgrade in the hope of tricking people into paying more for something that's already there if you know how to remove the foam/hack the resistor/whatever.
Maybe because 10% on a server is much more valuable than 10% on a client.
On a server, you're paying for that 10%. On a client, you're not. If it were 10% for nearly free, then sure, but maintaining a separate implementation of a language is costly.
A product that is 10% better to use is a lot better... people routinely put tons of effort into much smaller gains, because gains are hard to come by; and the reality is that server costs probably aren't the majority of your revenue.
The no. 1 reason: streamlining operations costs with regard to communication/collaboration needs. Disparate systems cost more than an integrated collaboration suite.
I would imagine they calculated it would be cheaper to scale up their application servers and backend caching than to pay for the development resources to make this change.
Well, it seems obvious that if you want scalable, redundant, zero-downtime infrastructure, it will cost more than non-redundant, non-scalable infrastructure.
While it was obviously a good trade-off for this company, it seems silly to extrapolate that all companies would benefit from getting rid of redundancy and scalability.
Both performance and complexity cost. Yes, if you are at Google/Amazon/FB scale, you basically have to pay the complexity cost because of how many users you have, how much money downtime costs, etc. Most businesses are nowhere near that scale. Last I knew, Stack Overflow was running on two web servers and one DB server, or thereabouts. Most companies aren't even at SO's scale.
Thanks for taking the time @comis, this is a valid question.
My assumptions are the following:
1. Dev cost is not only the time spent by the dev team but also planning, project management, and back-and-forth between tech and the various stakeholders. On top of that, you can add deployment costs, QA, etc.
2. Opportunity cost: I feel like there is a trade-off between developing core features and supporting features (marketing, static sales pages), and that in a competitive market, supporting features usually aren't prioritized due to a lack of tech resources.
3. Finally, small initiatives have a huge overhead for marketing and sales teams. If adding a drop-down to a form requires creating a ticket, asking the product team to spec it, designing it, and adding it to a sprint, then the drop-down will probably sit there until it becomes a critical issue or until it can be bundled with other small tickets, which might never happen...
It's basically to avoid all of this that I'm looking for a CMS to embed in our existing site.
Because spending tens of developer-hours to save tens of compute-hours usually isn't worth it. Justify to me that it's worth the time investment, maintenance burden, and risk of failure, and then I'll let you work on it.
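For a rough sense of scale, here's a minimal back-of-envelope sketch in Python. The rates are assumptions for illustration (roughly $100 per developer-hour and $0.10 per compute-hour for a small cloud VM), not figures from this thread:

    # Back-of-envelope comparison: assumed rates, adjust for your own situation.
    dev_rate = 100.00     # $/developer-hour (assumed loaded cost)
    compute_rate = 0.10   # $/compute-hour (assumed small cloud VM)

    dev_hours_spent = 40          # "tens of developer-hours"
    compute_hours_saved = 40      # "tens of compute-hours"

    cost = dev_hours_spent * dev_rate           # $4000 spent on engineering
    savings = compute_hours_saved * compute_rate  # $4 saved on compute

    print(f"spend ${cost:,.0f} to save ${savings:,.2f}")
    # -> spend $4,000 to save $4.00

Under those assumptions the optimization only pays off if the compute savings recur on the order of a thousand times over, which is the point: a developer-hour costs orders of magnitude more than a compute-hour.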
Very much so. The cost/benefit analysis must also consider how much it would cost to educate the teams on the new hosting tools. Management has clearly decided that it's not worth the retraining and loss of productivity.
I agree. I was referring to the actual cost of the cycles, but there are many situations where you need to find a balance between cost and a smooth user experience. My point was primarily that the fact that server-side resources are more expensive should be part of the evaluation.
That seems... sensible. When you start using a new product there are a lot of hidden costs associated with it (backup, failover, load balancing, support, incident management), so you need to take the time to explain why the benefits of that shiny new thing outweigh the costs of all that other stuff.
For many sites, a single server in a single zone (e.g., a non-redundant server, an instance, a slice, a VM, whatever) is the right decision for ROI.
For many sites, the money spent on redundancy would be better spent on, say, Google AdWords, until they're big enough that a couple of hours of downtime carries irreplaceable costs higher than a year's worth of the added costs of redundancy (dev, hosting, admin).