
> It has to be timeboxed into a Sprint to make sure software can be released into the market in an “iterative” cycle.

No it doesn't; you just want to be able to get feedback from stakeholders at each iteration.

There's also the idea of spikes, where the whole point is that you throw the work away at the end.




> A Sprint is 2 weeks. The requirements and tech design don't have to be perfect, and they can be updated, as time goes on.

> This mostly works for my company.

What is working for your company (and others) is: delivering software FAST. Which is good and all, but I think it’s important to highlight this because sometimes, some people prefer to deliver maintainable software in a paced way (no sprints, but marathons... I hate the analogy but hey).


> We need to timebox that activity so it does not swallow the time that should go to features and bug fixes

It's probably more that they can still produce good estimates. If you start introducing new work with no time limit and no timebox, it can push out your estimates.


> Well, that didn't count the amortization of developer time over the lifetime of the product once the product was developed.

I'll bite. I think it actually counts not only that, but also the probability that the product will actually get developed and not need to significantly pivot, throwing all the developer time spent on performance in the garbage.


> May I ask what could possibly take 12 weeks worth of work before having anything useful?

12 person weeks is a tiny amount of effort.

> but in the first sprint you can release the component to tokenize strings, for example.

Ah I see - You just have to redefine what useful means :P.

If your customer needs a search engine then having a string tokenizer is not useful at all.

You don't run a project this way to improve individual developer productivity. You run it this way to reduce infrastructure and project risk.

Often using Scrum practices is a good tradeoff - the problem is the messaging. The truth is we are trading off developer productivity and growth in order to lower overall risk and make the software process more predictable.


> I call those “deadlines” time boxes. I believe that the industry would greatly benefit from distinguishing the two.

Interesting, I've not heard "time box" used in that context. I've seen it used to limit the amount of time you spend on something. So if there is some tricky bug, you might timebox it to 3 days one week. Regardless of whether you fix the bug, you stop working on it after 3 days, even if you don't deliver a fix.

In what I was talking about, you still delivered _something_, it might just be less than what was (initially) agreed upon. Normally it's polish/ux niceties that don't make the cut and then come in a later version.


> I agree, but is there a way that could happen without slowing the process to a crawl?

I don't see why a more rigorous release process would slow progress down at all. All the iterations that lead to progress should be done on test vehicles, not customer vehicles.

"Move fast and break things" is a development model that should only be applied to low-importance, low-risk systems. Most software development work occurs on such systems, and I think that narrows the perspective of the software development community as a whole.


>This seems a very short sighted attitude to me, like saying that there is no time for testing, there are too many bugs to fix.

It is. The priority for a programmer goes: make it work -> make it fast -> make it clean. And you either have time for all three of those, or you don't. Generally in reality though, when dealing with business needs and product managers, the pipeline becomes: make it work -> alright now make this work -> alright now also make this work.


> but how does one deal with extrinsic deadlines then?

Deliver working software one chunk at a time, however you have to break it down to make it so.


> then they need to provide precise requirements

> They can do that.

Yeah, no, they can't. Not at all.

While some, or a lot, of that is due to incompetence, it is also the nature of the beast: software development is an iterative process. You have to prototype and figure out what works and what doesn't.

So regardless of how tight your processes are, whether you use agile or waterfall, it will never be possible to give perfect time estimates. You can only improve your risk management.


> Maybe a spike takes a week. Maybe a spike takes an hour. I can't estimate that.

Timebox it then! If you think the spike is going to take a day, and it turns out you'd need a week to even get the prototype working, then that's a successful spike -- you dramatically increased the lower bound on your estimate.

(Sadly by the asymmetry of estimation it seldom happens that you think it's going to take 5d and it ends up taking just 1d).


> The things that take off are usually created in a short time.

Fair enough, but that's totally anecdotal. A helluva lot (<-- carefully imprecise) of systems created in a short amount of time also flop. Is a short amount of time a cause of success? And who's to say they won't have an early access program to help build-measure-learn?

I do think the notion of putting such a long timeframe around a tech project is naive, but it could be done for any number of non-tech reasons (setting public/shareholder expectations of two separate companies first and foremost, as well as ensuring resourcing from three orgs).


> Once the product-market fit is achieved (...)

> it makes sense to start cleaning up the codebase.

This is a very optimistic assumption, that people will actually clean up the codebase at some point. From what I've seen, technical debt can stick around for a very long time. Temporary hacks can become permanent.

Spikes/prototyping can be quite effective, but to be effective at writing spikes/prototypes a team must be disciplined enough to rewrite/refactor later. In my experience teams are often not very disciplined, and the pile of spaghetti code begins to grow...

> The problem with this approach is that the tech debt catches up with you faster than you can figure out the product-market fit. In this case it gets harder and harder to move and instead of going faster you go slower

Yeah. This is what I've seen. Even in startups, quick-and-dirty code can be developed over months, and in that time the pile of spaghetti just keeps growing (people can make quite a mess in days; imagine what they can do over months...).


> Personally, I find it a bit pathetic and childish. I say this as a long time software engineer who knows perfectly well how this business works.

How does this business work?

Why is software special? This happens the same in construction and other industries. The estimates usually make 0 sense and there are often huge delays. Same as store openings, film production, etc.

There's a problem with tinkering and devs going off in the wrong direction, but on the flip side, when people say that something will be done in an hour or a day, it often never happens. Managers and agile practices that want tasks broken down to the smallest atom just mess things up. To make matters worse, product often never has requirements accurate to that level of detail either, which complicates things even more. The requirements change on the fly.


>So essentially, the company gets version 2 of the code, at the cost of 1x user time + 1x developer time. Instead of version 2 of the code, in 2x developer time, which is too expensive and never happens.

Not sure why, but in my experience the difference has been closer to 1x user time + 1x developer time <> 4x developer time.

I believe empowering a customer to think along — be pushed to their limits, even — within the boundaries of a system's technical/business domain _from the very start_ is likely to dramatically improve a system's design.

From a technical perspective it may not be architected well, but in terms of user <> business fit it probably will be.


> The CD community is overly obsessed with velocity.

I think CD is about minimising the amount of code released in one go, which allows you to catch issues much faster and revert issues much quicker. Compare that to something most banks do, release once a quarter, and you'll get stuff like that UK bank that went down for days (can't remember which one it was).

I've yet to meet anyone saying you have to finish your features faster.


> But we spent zero time designing a pipeline for content and league cycles. We only focused on the core product, which is very similar to feature creep. Because we never saw the entire lifespan of the product

This is important at all scales. Whenever I undertake a new task, no matter how big or small, I first exercise a full pipeline as soon as possible. That often means putting into 'production' something that might not actually do anything, but that touches the main parts that will be needed.

After having made that 'null product', features and processes can be improved incrementally anywhere along the pipeline.

Currently working on micro-services, I first deploy a version that has the right names, storage, auth, and a status endpoint. The first feature comes after that gets to prod. It also gets the tedium of setting up the CI/CD out of the way so it doesn't slow you down when you're rolling.
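To make that concrete, a "null product" service can be little more than a named process with a status endpoint. Here's a minimal sketch using only the Python standard library; the service name and the `/status` response shape are hypothetical placeholders, and a real skeleton would also stub out storage and auth as described above.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICE_NAME = "orders-service"  # hypothetical name; pick your real service name

class StatusHandler(BaseHTTPRequestHandler):
    """Serves only /status -- enough to exercise deploy, routing, and monitoring."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": SERVICE_NAME, "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep output quiet for this sketch

def serve(port=0):
    """Start the server on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), StatusHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Once this null version is in prod behind the real CI/CD, every later feature is an incremental change to an already-deployed pipeline rather than a big-bang first release.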


> ship awesome product and iterate as fast as possible

And that attitude is why so much software just sucks. Spending some time to contemplate and test and find out what's good and throw away what's not before it ships, all that goes a long way with quality. "Iterate as fast as possible" is one of those immature ADHD approaches and I'd run if a manager forced my team to do this kind of rushed nonsense.


> - I agree with the notion but I don’t think it’s only because it’s cheap. It’s also about speed. Some projects take ages at large companies no matter how much you throw at them

I believe one thing that really kills (software) projects is throwing too many resources (people) at them too soon, in anticipation of growth and success.


>>> slowest part of software delivery is testing

In my experience the slowest part has been marking a feature as done. I loved working at places with QA. I could assign tickets to QA once the PR was up.

Now I gotta build in that I’ll be bumping PRs for review for approximately 30-50% of the time I’m working on a feature.
