
> as a community, we’re missing the knowledge of how to take a proven prototype and continue improving the quality rather than bolting on questionable new features.

No, we aren't. We know how to do that, and can do it when we want to. In software dev we rarely want to, because (loaded use of the qualifier “questionable” aside) product teams tend to perceive (not entirely inaccurately, though sometimes the particulars are in error) market demand for additional features. (Also, what makers of physical products are very often optimizing is unit production cost for equivalent products, not quality. Software already has an essentially zero unit cost, so there are essentially no gains to be had optimizing that.)




> A high level of quality in software is not important unless you're entering an already well-served market. I wish it was.

Somewhat weird mindset. It is natural that when a new market gap is discovered, the first iterations of products are crappy and do the job just barely. This applies to software, cell phones, forestry machines, flush toilets. Having a mindset where everything should be perfect from the start will get you nowhere.


> We miss subtleties, test scenarios,

This is something only 4x+ perfectionist types of engineers can do naturally.

They cost double the price tag, they need freedom and respect, and you don’t attract them with leetcode interviews or arrogant managers, but it’s the only way to get a high-quality product in a reasonable time.

Otherwise you have no alternative but to do it like everybody else: spend time and money on training, layers of management, an elaborate system of acceptance tests, feedback loops, etc.

Including training your users to live with your mediocre product and hope for the best.


> if consumers wanted reliable stuff, they'd buy it.

Trouble is, often I can’t even when I want to, because no one’s making it: the industry (whatever industry; you find this in many of them) has accepted mediocrity.

For software specifically, my observation is that attempts to retrofit reliability, performance and the like almost always fail: these are properties you have to deliberately bake in from the start and then maintain, because once lost they are very hard to claw back.


> It rarely happens that once a prototype is written, the developer finds out it was all a wastage and the customer wanted something quite different.

In my experience this leads to mediocre results. You need to throw out prototypes often, otherwise you'll end up with shitty solutions.

It's an unfortunate fact that as soon as you see a prototype, everyone's instinct is to fix the most glaring issues and call it done.

But to get to a truly great solution, you often have to throw out the prototype, and start all over.

Unfortunately, very few people have the patience for that. Usually the prototype already took so much time that everyone is totally stressed, and starting over is out of the question.

If throwing the prototype away isn't possible, it's not really a prototype anymore.


> My (admittedly biased) opinion is that the best way to deliver value to a business is to spend a lot of time and energy figuring out what your customers actually want, which is usually just a few key things, and then take the time to build those features the right way (less time pressure because the surface area is smaller).

I'm of the exact opposite opinion, but we agree on almost everything, funnily enough!

I think the best way to deliver value is to quickly write small, fast, rock-solid but relatively un-featured prototypes and see in which direction production feedback indicates you should evolve those.

The above is based on two things:

1. You can't know what your customers want because not even your customers know what they want. They only know what they don't want once they see it. You can spend a lot of time and money on research, but the MVP is cheaper and a stronger signal.

2. The same idea you put out there: most things are bad market fits and should be sunset fairly quickly. By building small things, you optimise for this case. By also making them reliable, you don't make it too painful to evolve them later on. (Easier to add features to quality software than add quality to featureful software.)

This, of course, also leads to less maintenance and higher pace of innovation down the road!


>> I spent a week putting too much time into a web game about Dogecoin. ... I can go from concept to product pretty damn quick.

I hear this pretty often, typically from very talented developers who have never shipped traditional commercial software as a product (which is definitely a dying breed). I don't care how good you are: you shipped a PoC in a week, not a product. It's great that your self-reflection has identified what you're both good at and want to do, but a lot happens outside of writing code to create a product, and it can take a long time. I've built a few in my career where I had involvement in the entire process, and typically the gap between when we're "done coding" and when the product is completed is 4x. I'm slow, so maybe you can get this down to 3x, but I'm not convinced.

If I had one dream for software development as a field and a craft, it's that we would build less, better.


> My experience is that you push out the features and test them. If there is usage, you keep them

There are virtually an infinite number of features you could be developing. But you have finite resources to develop them.

And some features will take a lot more of your finite resources to develop than others.

You need some way to prioritize what to actually develop.

If your product doesn't actually do too much but is essentially just clickbait, then, sure, your features are very cheap to implement, and you just throw everything you can think of out there and A/B test.
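One way to make that prioritization concrete is a value-per-effort score in the spirit of RICE. The sketch below is minimal and hypothetical: the feature names, numbers, and the rice() helper are made up purely for illustration, not taken from any particular product or framework implementation.

    # Hypothetical backlog: (name, reach, impact, confidence, effort in person-weeks)
    features = [
        ("bulk export", 400, 2.0, 0.8, 3),
        ("dark mode",   900, 0.5, 0.9, 1),
        ("sso login",   150, 3.0, 0.5, 6),
    ]

    def rice(reach, impact, confidence, effort):
        # Expected value delivered per unit of effort spent.
        return reach * impact * confidence / effort

    # Build the highest-scoring features first; everything else waits.
    for name, *scores in sorted(features, key=lambda f: rice(*f[1:]), reverse=True):
        print(f"{name:12s}  score = {rice(*scores):6.1f}")

The exact formula matters far less than having some explicit, comparable estimate of value and cost before committing finite development time.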


> I am hesitant to build software that already exists because it seems sort of pointless in the grand scheme of the universe

It's unlikely what you build will be identical unless you intend to make an exact copy. You really should target a niche that is being ignored and have something that differentiates your software from existing competitors.

That said, I don't think competing on cost as your only difference is a good idea at all. If your software is solving a real business problem that brings in big income, why would a business care about saving a (relatively) small amount of money? If you become popular, could the big player wipe you out by offering a cheaper price tier? Is price going to be enough to make businesses take a risk of moving away from the established player? Will your low cost make paid advertising infeasible?

Also, don't underestimate the difference between the time it takes to build a quick prototype that mostly works and the time it takes to build something robust and user-friendly that people will be willing to pay money for.


> The biggest issue for founder-type programmers is not code volume, its code quality.

> Most founding code of products I have seen is the most fucked up, horrible, over- or under-engineered, hacky, buggy, usually platform-specific and NIH covered pile of hot shit.

I dunno your background, but I think you have it almost completely, 100% incorrect in your first paragraph, while being completely, 100% correct in your second.

The summary is this: if you're launching a product, you want to do it as soon as possible, damn the quality of the code as long as the quality of the product is sufficient.

The reason is that, 9 times out of 10, you're going to be throwing all that code away anyway! It's rare that the first product you ship is successful.

And if it does get successful? Well, then you have the money to redo it, refinish it, polish it, address tech debt, whatever.

There are only two outcomes: the product succeeds or it fails.

If, in the rare case, you launch a successful product and the code is clean, perfect, etc., you haven't gained anything over the person who launched a half-assed, broken-but-still-sufficient successful product. The competitor can still get their product up to your standards, because they succeeded.

If you fail, then you have spent unnecessary time determining what doesn't work; the competitor has spent less time and can attempt another product after their failure, while you're still perfecting your first one.


> When we started development in 2009, few rich web applications existed. And with so few examples, there were no best practices to follow.

I think what you need is not best practices. You need principles.

If you take performance as a matter of principle from the get-go, you will never ever ship a slow product.

Most startups nowadays seem to value design more than performance. They even value the design of their landing page more than they value the design of the actual product.

Another thing they seem to value is _perceived_ ease of development, at the expense of performance.

I say perceived because, in my experience, what you may perceive initially as a productivity boost ends up becoming a productivity burden as the code size grows.


> Secondly, the best way to build something good is to have more data.

The best way to build something good is to have talent, intuition and ideas.

There are people creating wonderful things from a single idea without a minute of market research, and there are people just navigating blindly driven by a bunch of metrics they try to optimise separately.

Usually the latter produce mediocre optimizations of mature products, and these often reduce the product's value for the user.


> This, ultimately, leads us to what may seem like a paradox of product design.

> - We need to focus on the high-level design questions first, because otherwise we will make incoherent detailed design decisions.

> - It is important that we get the high-level design questions right, which we can only do if we postpone them for as long as possible.

This rings true to me. At least I can’t find a way around it per se.

In practice I have two ways to deal with (but not fix) this problem:

1. Make it easy to prototype. Trying out is often the fastest way to disprove a hypothesis, or modify it.

2. Don’t over-invest in the solution, even if it’s for production. Instead, pick the easiest solution. Almost all the overengineering I’ve done has been wasted, because the high-level decisions were changed anyway. Leave a comment, but don’t implement.

At the end of the day, you have to develop a gut feeling for unknowns. Realizing what you don’t know is an invaluable asset, if you shift your method accordingly. The more experience I get, the more I realize how little I know. On the plus side, it’s wonderful when you do find those unexpected problems that are clear and optimizable; they’re a lot of fun.


>Your friend’s terrible prototype is worth 100x more than your great idea.

>Why? Because your friend actually did something with their idea. They created something. It may be terrible, but at least it exists. They’re now informed about how to proceed based on something real, something tangible.

I recall an article (or maybe it was a comment) posted on HN about someone who was very engineering-focused about their startup. They focused heavily on code quality and product quality. Their product was said to be rock solid.

They had a good thing going in a space they believed would be huge.

Then, over a few months, a competitor showed up with a janky, flaky product and took all their customers. This new competitor put out new features left and right; they often didn't work well, but they existed (unlike at the startup in question).

That janky product? It was Salesforce.

The takeaway being that 'executed well' in this case meant getting features out the door.

"Executed well" could just be a matter of recognizing "right now we need to get these features out the door". How / when ... who knows how you figure that out.


> companies may stop naively believing engineers who say they can build what a vendor is offering in 2 weeks

I love to build tools rather than buy them, but it's not for the resume line item, and it's not for the love of development (I'm not actually a software dev, I'm infosec). I prefer to implement tools internally, when feasible, because it makes them more likely to be understood and utilized.

80% of the functionality in 90% of the tools we buy is un-utilized or under-utilized. We constantly have vendors or tool owners offering training credits so we can learn the unnecessary bells and whistles of a tool they spent too much money on.

In contrast, the tools we build have no unnecessary features. No key functions that are *future roadmap items* for a vendor. No black-box processes where we can only shrug when asked what they're doing or how, because it's the vendor's 'proprietary' secret sauce.

Now, do most of those tools merit their own business? Not at all, and we don't try to sell them, but many of them would make great open-source projects.

Just because something would not generate money does not mean it's not a good and worthwhile project.


> if your product doesn't embarrass you, you've released too late

In the age of shallow work, this is the dogma that justifies releasing shitty things under a philosophy of continuous improvement.

In my opinion, no lasting and great product can be built that way. Only more shitty things that flood the market. You collect feedback on your shitty app and implement optimizations? How does that differ from design by committee?

To be timeless and produce the highest possible quality, you have to go deep, understand the problem you are trying to solve, and solve it once and for all.

Solve it to, and above, the best ability you can bring to the table at exactly that moment in time.


> The thing is, there's a lot more that goes into a viable product than just the core features.

Yes, yes, YES! I'm doing this now, and just figuring out the "business" side of the product code is so much more work than the core idea (which is simple to code).

And getting the actual sales/marketing/accounting/admin/etc. stuff done is going to be 10x worse, I am sure.


> Why would you waste your time proving that something you wrote works, if you don't have product-market fit,

What if your poor execution and untested software simply makes it appear you don't have product-market fit?


> Problem is that most developers think they're in product manufacturing rather than product design. If only the requirements were clear, we could build it right in one go.

Oh, man. I think you hit the nail on the head. One succinct phrase that describes the entire problem.


>I'm guessing you'd be equally amazed at how many people are embracing rigorous software engineering principles, and not ever reaching the point of getting a viable business off the ground.

Yes. It's very strange, the processes businesses impose on software teams when the business is really just stabbing in the dark. I think it's because they don't realize software development is really an art rather than a science.

>So, perhaps the best way to go about it is to sell "prototypes" at first

Yes, I think that is a good method. Really, everything is a prototype from a mile-high view. Word 2.0, from today's view, is certainly a prototype. A sellable, running prototype.

In my experience, business process means almost nothing in software development; it's the people. Good people are expensive, but you will most likely fail without at least a few of them.

