
It's a self-fulfilling prophecy. If your userbase doesn't think it needs improving, people with unfulfilled requirements will go somewhere else. Which may be a good thing if you want to limit the scope and serve a specific market well. But it may also leave you behind with shrinking usage and few new users. (since they prefer to start with the overall better option)



Because what is good for the user and what is good for company revenue don't always align (at least in the short term).

It's not really. The point is that the market is segmented and you need to target the segment for which your product is the right fit. People who already have a fully-featured tool that solves their problem, and that they're comfortable with, are never a good market.

It's not that they don't "know better", it's that their needs are different. Just like it's hard to sell an amateur tool to a professional, it's hard to sell a professional tool to an amateur.

Of course, that's not the only division. Small business vs. enterprise is another. For instance, Amazon S3 enabled an entire new class of content-based web startup. Not because it was "like having fileservers but worse", but because it served the needs of users who needed to host files, but didn't need and couldn't afford to have their own hardware. A group which was otherwise underserved in the market at the time.


Or they might just find the user experience lacking and want to improve on it.

I make all sorts of small apps and utilities to improve the usability of services I consume. It doesn't mean I'm some lackey to big corp.


It's definitely not "fewer and fewer users wanting a sync solution"; it's "more and more computer users we can sell something to if we pile features into our overloaded app".

There's a criticism in there that I share because I miss when things were more modular, but I also understand and appreciate the value and opportunities that modern tech is bringing to people.

The back and forth chaos of progress and foundation-building is important but frustrating.


Also, currently there is a bit of an incentive to make it user-friendly.

If it becomes "consulting-ware" like Linux, there would be an incentive to make it user-unfriendly and complicate it with high customizability, bad UI and bad defaults.


Why? They may grow their customer base by bringing on bigger projects, but how would that alienate the customers who are already using it?

I've read some reasoning about that when they introduced the product... and it wasn't convincing (something about keeping discussion on topic I believe?). I really think their reason is ease of design, implementation, maintenance and scalability.

Because more often than not, good enough is good enough. Electron is a great example. Look at the market share of Slack, Discord, Spotify, etc.

Optimizing as step one is a very good way to not ship on time and lose opportunities to the competition.


I think there are lots of reasons, some of which are unfortunate (eg. lock in), and some of which are great: we want a bunch of people trying novel things, and separately, one man's "versatile platform" is another man's "stagnant bloatware."

They dropped the ball because no major part of their current business model involves creating a better operating system for the sake of attracting new users.

They seem to aim at minimising users leaving and maximising the data extracted per user.


The issue is that most customers won't agree to use tools that aren't stable, and having libraries that keep depending on nightly builds doesn't inspire any confidence in those businesses.

Switching costs are very high, so the bar for keeping customers just happy enough to not switch is very low.

In addition, the people buying the software are often not the ones using it, so optimizing end-user satisfaction is not a priority.

In other words, it's software that's built to be sold, not to be used.


Because then people turn around and complain that it's not being improved. Seriously, you can't win here.

If you kill it because it costs something and you have no plan to ever keep it going, people complain.

If you let it live but just "keep it working", people also complain.

Additionally, killing it brings certain nice guarantees, like, for example, "you can stop worrying about security bugs, privacy bugs, etc at some point" because it's gone.

You can't really just keep stuff working anymore, even in the simple case: new attacks get found that apply to old code, etc.

When those things get busted, or bad things happen, people complain about it.

As for "why not just put one person on it", there's a multitude of reasons. Let's start with "what is the career path for a guy who does nothing but keep a dead product walking?"

So in reality, what people want is "keep focusing on products I care about". But that isn't always a viable model.


Success hides problems though. If through some unforeseen means a competitor were to arise in 5-10 years who happened to have a better UI that works for a wider range of people, then spending 35 years saying “They can’t choose anyone but us! Let them eat CLI arguments!” is going to look pretty silly.

If it would cost a tiny, tiny fraction of their revenue to shore up their UI to make it harder for a future competitor to disrupt them, that seems like a worthy use of money.


Answering Q1: a number of reasons, the most important being: 1) programmers' immaturity, we're always looking for new toys; 2) excess of venture capital fuelling too many new projects, opening space for new stuff to be used instead of the old, stable and boring ones; 3) product/design people asking for increasingly insane UI frills, influenced by trends started by big tech's products.

Q2: it depends. If you think of what users need and pay for, yes, it is HUGELY inflated; if you think of the actual requirements we receive, maybe it's not that inflated -- at least part of this complexity is needed in order to build such a truckload of eye candy.

It's so disheartening to see such a gigantic waste of effort. But that's the current state of our field.


That stems from the wrong idea that software has to grow its user base and revenue endlessly.

You start offering A, then you offer everything from A to F, straining the design and overwhelming your users over and over. Then you start losing the customers who liked A but now find the software too bloated.


I think this is a self-aggrandizing myth that is common in forums like HN with lots of programmers and designers:

- UIs usually do not improve. They "improve" by making subjective changes for the benefit of the maker (often at the cost of the user).

- Platforms are constantly shifting when they are owned by corporations. At first, businesses will penetrate the market by creating a competitive platform. Once their platform has reached critical market share, they will switch to milking their users who are now trapped due to vendor lock-in. This forces users of proprietary software to constantly jump ship in order to get a good deal.

- Features sometimes expand, but most apps will reach a point where it no longer makes sense to add additional features. In the free-software world, we generally try to make an ecosystem of programs that work together: if a single program is too complex, its features may be divided into several smaller programs. In the corporate world, a single program may be bloated and expanded long past the point at which additional complexity will serve the user- so long as developers can continue to invent new ways to extract money, data, etc from their users.

- New vulnerabilities may be uncovered, but generally vulnerabilities are only added when the complexity of a program increases. Vulnerabilities don't just appear out of nowhere. If you write a program, it doesn't become more vulnerable over time merely by virtue of being old.

- Server bills usually need to be paid because people have inserted themselves as middlemen. If the internet was designed in a way that did not require paying a racketeering fee for DNS, PKI, etc. then we would see a lot of decentralized alternatives to "essential" services.

It's hard for me to entirely put into words how flawed your statement is. If you have time, you may want to check out a book called "Bullshit Jobs: A Theory" by David Graeber (https://libgen.gs/edition.php?id=5852679). It might give you a new perspective on the software industry.


Maybe the demand is from the worst kind of users a business would want: those who would never convert to actual customers but want something to tinker with and install for free...

This, in my experience, is what seems like the most likely culprit. A CTO/CIO gets impressed by a fancy feature list when really, what would be the best fit is something simple, stable, and easy to use. We have more and more examples nowadays of ease of use beating out giant monoliths and it's only likely to continue except in a few fringe use cases.
