
I think, ultimately, what matters is how the labeling and corrective actions fall off over time.

But reasonable people will notice that information is streaming in faster than any single person can code.




Depends on how often they intend to make changes to that code. But in general I think you are right.

Yep, consistency truly matters. It's likely this won't be the only need for data retrieval, and everyone doing their own special thing means the code becomes an unreadable, inconsistent mess that can't fit in anyone's head, and development velocity slows to a crawl.
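
A made-up sketch of the difference (the table and all names here are hypothetical, just to illustrate the point):

    import sqlite3

    # Toy in-memory database, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'Ada')")

    # "Everyone doing their own special thing": two call sites, two ad hoc queries.
    row_a = conn.execute("SELECT * FROM users WHERE id = 1").fetchone()
    row_b = conn.execute("SELECT name, id FROM users WHERE id = ?", (1,)).fetchone()

    # One shared accessor instead: a single place owns the retrieval logic,
    # so a schema change only has to be absorbed here.
    def get_user(db, user_id):
        return db.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()

    print(get_user(conn, 1))  # (1, 'Ada')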

No more often than the names of the sub-functions themselves become inaccurate, in my experience.

At some point the system does depend on its authors doing the right thing.


Another thing to note, however, is that not only is the code changing too fast, but also that code doesn't always do what the programmer thinks it does.

What I'm saying is that it depends on the nature of the code being written. Many small improvements can be made with a simple code change that has clear meaning on its own -- a long approval process would just make these changes less likely to happen.

Almost every day, I guess? I mean, writing new code always involves seemingly subtle decisions that will affect users sooner or later.

That goes back to whether the decision to assign them is strategic or not. If they wish to deprecate the language, it will not be done. Also, one could argue that the small code change which took 5 minutes could have cost the company hundreds of hours of supporting its clients if something had gone wrong. That would make the very small change a very big change, not based on effort but on impact.

In an ideal world, correct. If this is the case:

> changes can take literally 20 times as long as they should while we work out what the existing code does

I imagine you have to sell the extra time (that is, convince folks this is the right path) to _somebody_. Maybe I've been working at the wrong places :) .


I think this is true for any specific moment, but as we are creatures of the 4th dimension, we must also care about how we change our code over time. When code complexity is managed, users can enjoy better versions of a piece of software more quickly and more often. If we let our code deteriorate into an inscrutable pile of tech debt, it will eventually affect the user experience, so the guts need to both do something correct (and useful) and also be amenable to change over time.

It seems to me like a big part of this would depend on a stable foundation. And in the software world, that foundation shifts a lot. Last I checked, no one in the industry had come together and agreed on a list of programs that would never be changed again. And even if you could stabilize it all, how long would it be before you're forced to port it to new hardware?

The moment anything shifts, any validation work you've done is called into question. Okay, for 30 seconds, your program was proven correct; but how about now? Did the change cause a problem? And how, exactly, would you detect it, without repeating the validation entirely?

To me, this does underscore the importance of keeping programs small, stable, and well understood. If anyone's bragging about how many millions of lines of code they have, as Microsoft put in a bullet point for Windows once, they should not be considered serious programmers. The bigger something is, the less it takes to introduce one of these unvalidated quirks.


Let's say we find a whole new edge case after this thing has been running for six months. Now we need to update the data structure and the generator code that knows how to interpret the data. So I'm not sure how much is gained.

I think one pitfall that a lot of software designers fall into is assuming they can know the entire problem domain up front. Maybe that works for a super-mature industry like airline reservations or something. But I still tend to doubt it.

In my experience you constantly get stuff that borks your data model after going live. I always assume this will happen continuously throughout the lifecycle of the product, and try to design accordingly.
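
One way to design for that, as a rough sketch (the field names and versioning scheme are entirely made up):

    def read_order(raw):
        # Tolerant reader: accept both the launch-day shape and the v2 shape
        # instead of assuming the data model is frozen.
        version = raw.get("version", 1)
        if version == 1:
            # Launch-day shape: a single flat address string.
            return {"id": raw["id"], "shipping_address": raw["address"]}
        # v2 shape, added after production surfaced an edge case:
        # structured address with an optional second line.
        addr = raw["address"]
        line2 = addr.get("line2", "")
        return {"id": raw["id"],
                "shipping_address": (addr["line1"] + " " + line2).strip()}

    print(read_order({"id": 7, "address": "1 Main St"}))
    print(read_order({"version": 2, "id": 8,
                      "address": {"line1": "1 Main St", "line2": "Apt 4"}}))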


I think it depends on how confident you are regarding the change and how important the code is. If something needs to be done, don't use language that suggests it isn't important. Similarly, don't make a big deal out of things that don't matter.

This is true until it's no longer solving the problem. Maybe the business grows or changes and the software needs to change with it. Then the code needs to be easy to change and robust.

On the flip side, if nearby things are never updated to match our changing understanding of the system, then very shortly the code will be cluttered with possibly dozens of different styles: multiple naming conventions, constants in some places, hard-coded values in others, values read from a parameter file in still others, and other variations besides. The result is a chaotic scramble with no clear structure, one that requires programmers to know and understand multiple different ways of expressing the same business concept.

Now that is truly increased risk.
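
A made-up sketch of what that drift looks like (every name and value here is invented):

    import os

    # The same business concept ("how long before a session expires") spelled
    # three different ways in the same codebase:
    SESSION_TTL_SECONDS = 1800                           # a named constant here...
    timeout = 30 * 60                                    # ...a magic number there...
    expiry = int(os.environ.get("SESSION_TTL", "1800"))  # ...a config lookup elsewhere.

    # A reader now has to recognize all three spellings, and changing the
    # policy means hunting down every one of them.
    print(SESSION_TTL_SECONDS, timeout, expiry)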


Like everything, it would definitely change. Adapting simple code to a change is much simpler than adapting complex code, even when that complex code supposedly had the change baked in from the beginning, because in practice it is almost never exactly right. When I started programming 30+ years ago I was of the opposite opinion :)

I disagree with this emphatically.

Code is meant to change. It changes. That's really all there is to it.

It's like saying a tire was bad because it got changed at some point during the lifetime of the vehicle. The two just aren't related.


Interesting. We find the situation to be quite the opposite. The older and larger the system, the more expensive figuring things out becomes.

You mention 3-6 months. That's quite a high price. Would it be inconceivable to optimize this significantly?

Stability can also be misleading. It's especially the small changes that can be problematic, because once the code is large enough, people simply don't know which parts of what they already know (what's already in their memory) are no longer true. This is the challenge. The proposition, instead, is that we should not have to rely on our memory of things that can change; we can just check. But to do that in a reasonable timeframe, we need custom automation.

In any case, the article does not propose a tool. It proposes a way of working. The tool is important to show how it can work in practice.
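
As a hypothetical example of that kind of "just check it" automation (the module names and layout are invented), a tiny script that re-verifies an assumed rule on every run instead of trusting that it still holds:

    import pathlib
    import re
    import sys

    # Assumption being re-checked: nothing outside payments/ imports
    # payments.internal. Run this on every build instead of relying on memory.
    violations = []
    for path in pathlib.Path("src").rglob("*.py"):
        if "payments" in path.parts:
            continue  # the rule only restricts code outside payments/
        text = path.read_text(encoding="utf-8", errors="ignore")
        if re.search(r"^\s*(from|import)\s+payments\.internal\b", text, re.M):
            violations.append(str(path))

    if violations:
        print("assumption no longer holds:")
        for v in violations:
            print("  " + v)
        sys.exit(1)
    print("assumption still holds")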


I understand where you are coming from but I don’t agree.

Yes, in a low-quality codebase focused on producing changes quickly rather than on maintainability, it is true that the code will change while comments and names stagnate. But what you are describing reinforces a cultural and organizational problem which, left unchecked, will make progress stagnate in the long run.

But in a high-quality codebase armed with proper peer review, this divergence of name and implementation won't be tolerated. If such a divergence exists, it should be considered a bug, not the expected state of things; the defect should be fixed when found rather than accepted as the norm that you can't trust the codebase.

What if we couldn't trust that the parts in our cars and heavy machinery do what they say? I wouldn't want to tear down the engine every time I'm about to use the car just to make sure there's actually an engine inside and not a fridge compressor, because the implementation diverged over generations. This is of course an extreme example, but my point is that we shouldn't allow our code to decline into such a state in the first place.


Ultimately all that matters is how many person-hours an organization dedicates to a product that’s considered low priority, finished, deprecated, whatever you want to call it.

Debating all the linguistic similes we can use to describe the process is fruitless.

Despite that freedom of interpretation, I think your hypotheticals have really clear answers that most software engineers would agree with based on what was originally promised and intended.

If a user expects a sorting function to have ascending and descending, but I only originally delivered and promised ascending, later adding descending is a feature/enhancement. It doesn’t matter that the user didn’t like my original implementation, I didn’t promise descending.

Conversely, if I add descending because I originally intended for it to be there and even wrote some code for it, but it doesn't work, that's a bug fix.
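
As a toy sketch (the function names are made up):

    # v1, as shipped and promised: ascending only.
    def sort_items(items):
        return sorted(items)

    # Adding a descending option later is a feature/enhancement; it was never promised.
    def sort_items_v2(items, descending=False):
        return sorted(items, reverse=descending)

    # But if v1 had already advertised a descending flag that simply didn't work,
    # making that flag behave would be a bug fix, not a feature.
    print(sort_items_v2([3, 1, 2], descending=True))  # [3, 2, 1]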

I don’t understand the ambiguity, the difference is clear as day to me.

In any event, the whole debate revolves around the language of intent. Truthfully, a group of people can make up whatever language they want to describe their approach.

