I worked in XP for a bit and the constant pair programming was simply too fatiguing for me. I enjoy pairing every now and then on certain problems, but the constant presence of another person left me utterly drained at the end of every day.
Yup, my major beef with XP back in the day was its arrogance: this is the way to do things, and if you don't do it entirely this way, you're not extreme programming! (that was literally in the manifesto)
But the article highlights the good points of agile, and I agree with them:
- Iterative development (smaller steps with faster feedback)
- Unit tests (and building with unit tests in mind)
- Code Refactoring (which you can do confidently when you have unit tests)
The rest of XP/Agile is not necessary, and in some cases even detrimental. YMMV
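To make the refactoring point concrete, here is a minimal sketch (Python's stdlib unittest; `slugify` is a made-up example, not from the article): the test pins down observable behavior, so the implementation can be rewritten freely and you know immediately if you broke it.

```python
import re
import unittest

def slugify(title):
    """Turn a title into a URL slug. The implementation can be
    rewritten during a refactoring; the tests below pin the behavior."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_runs_of_punctuation(self):
        self.assertEqual(slugify("a -- b"), "a-b")

if __name__ == "__main__":
    unittest.main()
```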
> if you don't do it entirely this way, you're not extreme programming
But isn't this... OK? Extreme programming is probably defined as a certain set of practices, and if you are doing a subset of that, you are doing something else. This doesn't make you into a bad or a lazy person; it just means that your process is something other than extreme programming.
It's the same with scrum. It is extremely common for people to pick some practices from the scrum framework, skip others, and still claim that they are doing scrum. It gets very confusing when people do this.
Scrum's biggest problem has always been management interpreting sprints as deadlines (commitments) rather than estimates (forecasts).
The former eventually leads teams to lowball everything to make sure they always complete everything, at the expense of trying to accomplish more, which then leads the company to wonder why everything is so slow.
Getting people at the top to understand that challenge is the hard part.
Scrum, XP and/or Agile are not going to save you if the guy leading the project decides the hardware and firmware have to be completely re-done for no good reason other than his own opinion, lack of experience and ego mixed in with a bit of "not invented here syndrome", "if it ain't broken fix it anyway" and "make it complex not simple stupid" (recent random example).
My point being: other aspects of companies and teams are far more important than the latest fashionable project management methodologies/frameworks/philosophies.
The critical piece to any framework is how work is prioritized. If it’s by 1 guy, you will have the problems you describe.
If you have an approach that forces the people doing prioritizing to weigh Return on Investment (benefit or value / estimated time) to largely guide the priorities, busywork with little benefit like you describe won’t be prioritized.
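As a toy illustration of that scoring (all items and numbers here are hypothetical), sorting a backlog by value/effort pushes the low-benefit busywork to the bottom:

```python
# Sort a backlog by ROI = estimated value / estimated effort.
# Items and numbers are made up for illustration.
backlog = [
    {"item": "rewrite firmware from scratch", "value": 2, "days": 90},
    {"item": "fix checkout timeout", "value": 8, "days": 3},
    {"item": "add CSV export", "value": 5, "days": 5},
]

for entry in sorted(backlog, key=lambda e: e["value"] / e["days"], reverse=True):
    print(f"{entry['value'] / entry['days']:.2f}  {entry['item']}")
```

With these numbers the 90-day rewrite scores 0.02 and lands last, which is the point: it never gets prioritized.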
Often the people at the top do understand. However, what most developers fail to understand is that the guy at the top really needs accurate estimates. The more accurate the estimate, the better they can do their job. Thus they are constantly looking for a magic bullet that gives them that. It doesn't need to be perfect, but the closer the better.
It's not that they need accurate estimates; it's that often sales people want to share those estimates externally, so anything that takes longer due to unexpected complexity becomes an accountability issue.
There is a balance there. Sometimes you need to sell the next version's features to get enough money now to develop them. Not everyone gets unlimited venture capital - and it isn't always a good idea to take it if you can.
Accountability is everywhere, but software managers are always looking for the silver bullet that would give them a stellar reputation for being on time.
> Scrum practitioners/coaches always seemed to have this idea that if Scrum wasn't working then you weren't doing it properly.
And if people mean different things by the word scrum, it is rather hard to tell whether they are doing it properly :-)
Scrum is pretty difficult to do properly (the famous "easy to understand, but difficult to master" formula; although I would argue that it isn't that easy to understand either, at least not something that someone unfamiliar with it can understand in a day or two). It requires certain changes within the organization, which few organizations are willing to adopt.
I find this to be a problem with agile, but not so much with Scrum. It's quite prescriptive and there is a set of quite specific rules and processes you can follow.
> I find this to be a problem with agile, but not so much with Scrum.
I am seeing problems left and right. Mostly because people often copy the activities (they would often even call them rituals or ceremonies, as if to make it even more obvious that they are treating them as mysterious quasi-religious practices) without understanding what they are intended to achieve.
For example:
- Lots of daily scrums in which people go around the circle saying what they did yesterday and what they are going to do today. No sense of a common sprint goal, and no indication of collaboration in reaching it.
- Lots of sprint reviews that are just demos
- Lots of sprints without goals, that are just timeboxes to finish a certain number of tickets/stories
- Lots of teams that remain hierarchical, and instead of a real product owner have some kind of a middle manager (who might also assume scrum master responsibility, because he is the manager)
- Lots of teams focusing on story points, velocity, and estimation
- Lots of teams that don't adapt previously formed plans to the emerging reality
>Lots of daily scrums in which people go around the circle saying what they did yesterday and what they are going to do today. No sense of a common sprint goal, and no indication of collaboration in reaching it.
I remember reading the scrum manual but I don't remember it saying anything about it being necessary to show esprit de corps and a "collaborative spirit" during standup.
It did say how to conduct a standup ceremony though...
>Lots of teams focusing on story points, velocity, and estimation
The problem is that these kinds of things become shackles for the mind, denying the agency and mental suppleness required to navigate a changing and chaotic world for which we can never achieve perfect knowledge (or even anything remotely close to it).
There is no such thing as a perfect set of rules or practices - only things that are useful in certain contexts. It is the practitioner's task to decide which principles apply, and to what degree.
Saying "You're not doing [brand new popular thing] unless you do it this exact way" is a trap laid for neophytes who lack the experience to discern that this is just arrogant bullshit, denying them the exercise of their own brain to judge what applies where (and learn from their mistakes).
Code refactoring with unit tests is fine when it stays within unit boundaries, but when it crosses unit boundaries the tests often must be rewritten, doubling the effort required.
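A sketch of that distinction (hypothetical names, Python's stdlib unittest): the first test is welded to an internal unit and must be rewritten the moment that boundary moves during a refactoring; the second is pinned to observable behavior and survives it.

```python
import unittest

# Before refactoring: parsing lives in its own small unit.
def parse_price(text):
    return int(text.replace("$", ""))

def total(prices):
    return sum(parse_price(p) for p in prices)

class TestParsePrice(unittest.TestCase):
    # Welded to the internal unit: if parse_price is inlined into
    # total() during a refactoring, this test must be rewritten.
    def test_strips_dollar_sign(self):
        self.assertEqual(parse_price("$10"), 10)

class TestTotal(unittest.TestCase):
    # Pinned to observable behavior: survives the same refactoring.
    def test_sums_prices(self):
        self.assertEqual(total(["$10", "$5"]), 15)

if __name__ == "__main__":
    unittest.main()
```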
I know some people really work well when pair programming, but I find that I either slip into Student Mode or Teacher Mode when I try it. Either the other person knows the environment we're working in (architecture, codebase, etc) a lot better than I do, and I'm using the session to gather as much information as I can, or I know the environment better and so I'm explaining why I'm doing things.
If the other person and I are on par with the environment then we're more likely to have a discussion about what we're wanting to achieve and how best to go about it, but then one of us will go away and knock out a first cut for the other person to review.
I do wonder what differs between groups of people who can actually do pair programming and those (like me) who can't.
I think slipping into student mode or teacher mode is a fantastic reason to pair: it's full-time training. You will all quickly rise to the skill level of the best parts of every engineer on the team. Otherwise training is rare and slow.
My mode is either driving and listening to advice or watching and trying to spot things the driver missed.
Occasionally I do side-work if I'm not driving - e.g. look something up, message devops, a very small PR, etc. - so the driver can maintain focus.
I find it to be much more effective than working alone, simply because so much stuff that would take me 20 minutes of going down a rabbit hole ends up being caught by an extra pair of eyes in 4 seconds.
I actually loved it. I had never learned so much or shipped so quickly and reliably.
And then I moved teams... And realised it's all about the people and culture around you.
I probably wouldn't sign up for it again when joining a new company, because at its worst it's absolute torture, but I still believe that at its best there's nothing like it.
You both can be right. It can be the best way for you, and it can also be utterly exhausting for them.
Personally, I'm not willing to get rid of _all_ my comfort zone to get something out the door more efficiently, where shareholders and C-levels get rich off my labor, and if I'm very lucky, I get to keep doing that.
First, there's definitely a stamina element to it - I worked in an XP team for the best part of 3 years and have at other jobs done a lot of full-time pairing (sometimes months at a time).
Second, there's a chemistry element. When there's good chemistry it can be quite effortless and the chemistry can grow - as you start to develop an intuition for how the other person thinks there's less communication and mental jostling.
I've had remote pairing sessions that have felt almost as fun as online gaming.
I went through a period of a couple of months where I pair-programmed almost every day, and it was probably the most productive time of my career. My partner was very junior, and new to the project, and the process of explaining things to him definitely helped avoid some of my blind spots. He had several good insights and ideas, too, coming in with fresh eyes.
It was high-energy, though, and an extended period might well have been fatiguing.
If I had to pair-program, I'd probably call in sick. And if this was a repeated demand, I'd eventually just quit.
To me, this is one of the worst ways to work. I very much prefer to communicate in writing, and at prearranged times. To me, the idea that there's another person working with me on the same problem at the same time is as absurd as having another person cooking in the same kitchen with me, or painting on the same canvas as me. Whatever result I can produce on my own, it will be ten times worse if I had to do it with someone else working on the same problem with me.
XP is the only methodology I've used that gave direction on what programmers should be doing. Most methodologies have plenty to say on what happens around the programming, such as planning and design, but when it comes to producing code---the entire point of the exercise---they treat it as a black box. I don't think XP is the final word on programming, or necessarily even good in all its suggestions, but I think it's worth knowing just because it tries to tackle the most important part of software development.
And after 20 years Agile has been subverted by faux-agile methods such as Scaled Agile Framework that unfortunately have a lot of traction in the industry.
The main philosophy of XP, as stated by Kent Beck himself, is to take everything good to its extreme (hence the appellation Extreme Programming).
For example, if testing is good, then it must be done to the extreme. Basically this means you will need to have 100% test coverage to comply with XP.
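For what it's worth, that kind of extreme is trivially mechanized today. A sketch assuming coverage.py (its `fail_under` setting is real; the 100 threshold is the "extreme" policy being described, not a quote from XP):

```ini
# .coveragerc -- coverage.py's config file.
# Fail the build whenever total coverage drops below 100%.
[report]
fail_under = 100
```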
But history proves that extremism and fanaticism are bad and harmful in every area of life, including in software development. You can, for example, read about the consequences of testing to the extreme here [1]
Where did the extreme programming gurus advocate for 100% coverage? I've never read the original texts, but I'm familiar with some of their writing, and this doesn't quite jibe with my conception of their opinions. Do you have a specific place I can look?
Parent commenter said "basically" have 100% test coverage, and I'm not sure what is meant by that. From the text[0]:
> Any program feature without an automated test simply doesn't exist. Programmers write unit tests so that their confidence in the operation of the program can be part of the program itself. Customers write functional tests so that their confidence in the operation of the program can become part of the program, too. The result is a program that becomes more and more confident over time -- it becomes more capable of accepting change, not less.
> You don't have to write a test for every single method you write, only production methods that could possibly break. Sometimes you just want to find out if something is possible. You go explore for half an hour. Yes, it is possible. Now you throw away your code and start over with tests.
[Emphasis mine.]
I can see how this is interpreted both ways, so I'll leave it to you. For what it's worth, I found the book to be an easier read than most books on coding practices.
[0] Extreme Programming Explained by Kent Beck, Chapter 10 under "Testing"
The original Agile Manifesto was very valuable, and really did strike a chord.
The original Manifesto was actually about being "Agile", developers being developers. How to really focus on being the best developer. Moving fast, focused, cutting away un-needed tasks.
I'm continually disappointed that it spawned some giant consulting industry that became an onerous layer on development, a wet blanket.
Agile had its moment back then.
Is agile still appropriate today?
Now we have agile coaches (per team), agile leads (all teams' agile lead), scrum masters (the guy who does the meetings), product owners (giving requirements), and lead product owners (planning), all trying to get developer time and attention.
Right size this, OKR that, attend this attend that.
Wasn't agile always about finding what works best for the team and doing it, with consideration of tracking what works well and what doesn't, and refining the process to fix it?
Now it seems to be some sort of glorified job role, so even if the team gets it right and runs well, they're patronised into attending non-stop meetings from non-technical staff trying to justify their jobs, even though their own job role states that if the team is running well and self-organising, they need no involvement.
All I hear these days is: how can we fix that? And the team answers: the delay is off-team, in either third parties or the business itself, or we've worked around the bits that we can.
The dev team is now being asked to fix other teams' delay causing problems and then effectively silently shamed for not doing it week after week (because it gets raised again and again).
This is the end game, and it's wasted time if your team reaches this point, yet the coaches still pound that shaming in like there's something that can be done in-team.
Of course these meetings aren't meant to do that, but when has human psychology ever been considered (being asked how to fix something a dev has no control over is now a common theme in questions I'm asked)?
IMHO agile in its current bastardised form should be taken out the back and shot.
When it became a rigid set of rules (oh, but "that's not how it's meant to be applied" is the war cry!) that promote long-term job roles preaching it and add nothing to the team's ability to adapt, while the team is already meeting most sprint goals and deliverables, the team had hit peak agility already.
Why do people always say agile doesn't work and then tell some personal baggage story that clearly shows their company is doing it wrong? Is agile really to blame?
Agile, as in fast, periodic cycles of gathering requirements, design, implementation and release, is a good idea, especially when this matches your scenario.
In my limited experience, Scrum usually comes with a bunch of people who think Scrum principles come from some holy book and act accordingly.
You can't estimate how long your tasks will take because you are flooded by interrupts (coming from outside the Agile team or even from within)? Well, shame on you. Sprint review will show a decrease in completed tasks and everyone will wonder why.
At the next planning you want to add stories without story points? The horror!
You are part of an Agile team and also have to do stuff for another team? Good luck closing your tasks without missing stuff that was assigned to the Sprint.
Daily meetings take more than 15 minutes? That's not how things should be!
Kanban is a much more reasonable approach, because its focus is on "flow" and not "whatever we decided three weeks ago without taking into account things outside of this team".
> Scrum usually comes with a bunch of people who think Scrum principles come from some holy book and act accordingly.
Well, there is the 'scrum guide', so it's no surprise imo [0]
> You can't estimate how long your tasks will take because you are flooded by interrupts (coming from outside the Agile team or even from within)? Well, shame on you.
I think the fundamental issue of scrum/agile is thinking one can estimate accurately in the small/short-term and do it continuously. If you have deliverables every quarter, for example, as long as they are delivered, isn't everything else just superfluous?
> Well, there is the 'scrum guide', so it's no surprise imo [0]
It sure has a book (in fact, many books), but that doesn't mean it should be treated as the <enter favorite holy book, if any>.
Before ITIL4, ITIL v3 (a famous IT service management framework) didn't mention Agile. Changes must be requested (including the actual configuration items you want to change), someone else evaluates them, if the risk is too high you need approval from a specific board, and so on.
Companies were following ITIL practices to the T, and some people said "don't you realize this whole thing is ridiculous in most cases?".
Cue Agile. Sure, Agile itself promotes adapting to the environment, people over process... But then Scrum, with its rigid practices, is treated as a sort of religion, leading some people to say "don't you realize this whole thing is ridiculous in many cases?". Pretty ironic.
> think the fundamental issue of scrum/agile is thinking one can estimate accurately in the small/short-term and do it continuously.
I agree. Doing stuff and estimating stuff are clearly different skills. And it is well known that many programmers estimate time by doubling, or even multiplying by 10, the time they expect to require for a task. So you have some people who vastly overestimate, others who underestimate,...
Yeah wherever I've been, it has worked quite well? Where I've worked at the focus has been on short iteration cycles and delivering value to customers, and retros were used as a tool to identify where we can improve. We didn't have any rigorous process, just added and removed processes where we saw it was needed, didn't have any scrum masters either. Sometimes I've been in teams where we removed standups and grooming, when everything was crystal clear. Then later on, we added them again when we had new people and scope was less clear. I think that is in the spirit of agile, and it has worked decently where I've been.
Is it a wonderful process that would work with any team, or is it a wonderful team that would make any process work? Or maybe (what I think) the team has evolved a process that works based on influences like agile and team dynamics, and is probably still refining it.
Agile is like communism - everyone does it wrong but if done right it will work. Every single company doing agile sucks at it. All you need is to have a brief chat in the morning, and every couple of weeks go through what needs to be done and try to estimate things. Stuff not done spills over. Agile militarism is about turning software into manufacturing and it clearly doesn't work.
Way back around 2000, I worked at a company that used what most people would think of as an agile dev process, administered by the devs, for the devs, and it worked pretty well. It did the thing that a software process needs to do: let everyone know what was important, who was working on what, who needed help, and what needed to be done differently. And it was good.
As a _philosophy_, Agile gets those things done. It's fine.
As a _process_, as most places that "do Agile" practice it, it exists for its own benefit and not for what it _does_. In almost all cases, it's a people problem more than a process problem, but no process will solve those sorts of people problems.
Well kinda. There’s wisdom in crowds (sometimes). If it doesn’t just naturally gain adoption then that has to count as a negative. Maybe the positives outweigh the negatives, but I think that’s arguable.
Making full-time job roles around it imho is doing it wrong, and this was an industry decision. Agile consultants (or fixed-length contracts), yes; full-time roles, no, unless they are going to start removing actual impediments in day-to-day work rather than just remarking on them.
It was selling snake oil from the beginning. Prototypical snake oil salesmen like Martin Fowler, Kent Beck and Uncle Bob have made careers out of it, and have fostered a generation of people doing it at the company level.
Meanwhile Kent Beck [1]: "I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence"
Yeah totally. I find tests a useful design tool in what I do, but I definitely only test everything when I'm working on a complicated piece of legacy code that I don't understand.
What I find most appalling is that even though all these positions are staffed, there is no limit to what they will ask developers to do. They want the developer to come up with all the ideas, write the stories, do the work, do the coordination, do the release plan, roll to prod, do all the documentation, and on and on and on.
This is an absolute nightmare for retention and recruiting. When I went to university to learn about computers and technology I always dreamed of writing user stories and doing other mostly pointless clerical work.
It was never appropriate. It was created by consultants to sell consulting services. In that way, it's a huge success. As a practical development methodology, it's always been a disaster.
At the consultancy company I worked for in the past, they would rent out dev teams for sprint-length time periods xD. It was a way to get a foothold at a new client: first do orienting talks about the full product, then start shaving away features down to the MVP and sell the MVP as the first shot.
I found it quite a genius move from their sales teams, because usually the customer went "well, actually I could also use features x, y and z" that didn't make the MVP product.
> It was created by consultants to sell consulting services.
This is not how it works. "All revolutions are conceived by idealists, implemented by fanatics, and its fruits are stolen by scoundrels" — Thomas Carlyle.
It started out with good intentions -- that is hard to dispute. However, it has turned into one thing -- a fraud factory. What people preach as agile can be waterfall without all the proper planning.
For a moment I thought this was about Windows XP, given the title and date...
IMHO following any sort of methodology dogmatically is unlikely to give great results, and the best path is probably somewhere in between two extremes. Waterfall emphasises "plan ahead" --- a concept that seems to have completely disappeared from some types of software these days.
I was in the RUP universe for a period of my career and I would never want to go back to it; agile was a refreshing breeze back then.
However, the framework needs updates in several important areas: operations, data science, end of the PO figure and incorporating other actors as part of the feedback cycle, tech debt management, how to deal with uncertainty more efficiently, and so on.
Unfortunately, due to a lot of issues in our industry, agile got a lot of bad rep plus a lot of baggage, and it turns out to be very hard to have a dispassionate discussion in which the IQ of the participants does not drop by at least 20 points. It's a shame, but I really would like to see some updates on this.
When I started out (80s), we would work closely with the real users and they got what they wanted. Back then development was text screens.
Around 10 years later, some methodology came along and working with the users ended. This methodology was supposed to give the programmers more time to work. We ended up with frustrated users who started moving to Lotus 123 or Excel. Back then I developed (on the side) many processes where my users could get the data from the mainframe for use on their PCs.
Agile came along later with prepackaged software, AMAPs(?), SAP, Oracle and many others that I have forgotten. Now we have many many users spending at least 50% of their time (some much more) testing upgrades instead of analyzing their data. End result, users are just doing what they can to get by whatever GUI screen is presented to them, not caring at all what data they are entering in. And most real work occurs in excel to the point where users are downloading gigs of data to their PC to get away from the canned, hard to use Enterprise Systems.
Try building any high-stakes engineering project (say, a bridge, or a production operating system) like that…
With Agile, at best, you will build an amateur treehouse.
Sure, prototypes that test the viability of certain parts and aspects of the system are very useful. But having those prototypes and toy models then be “free-styled” (without a clear design document to be referenced - what exactly is the system supposed to do?) into a repurposed role to deliver the whole system’s functionality and reliability is the recipe to disaster. Or in other words: technical debt.
- Where’s the Systems / Systems Integration perspective?
- Where’s *Repeatability*?
(middle management does like Agile to justify their many times useless positions)
PS: I believe “Agile” explains a lot of the 737 MAX woes…
I started my career back in the mid-80's and so I've lived with the process being described here. Kent Beck was a breath of fresh air and we all thought yes, this is what we've actually been doing! Mostly.
Here's the thing: in some ways I think what we were doing back then was better. I just noted XP was essentially what we were doing anyway. Different teams and projects essentially chose their own internal methodology depending on what worked best for the project. Waterfall was what was presented to the project stakeholders.
What does that mean? Paradoxically, we actually had more flexibility back then! Thanks to the tooling, which more or less standardizes the methodology, and all the Agile Coaches which have been trained on that tooling and standardized methodology - Agile today isn't very, well, agile - at least not from a development perspective.
People point out that Agile allows for the requirements to change. No it doesn't. That's not a magic wand. Requirement changes still have project delivery impacts. Agile doesn't make that go away. It just provides for smaller iterations which facilitate the management of requirement changes. Know what? Iterative Waterfall, which we had been doing since the 80s (actual Waterfall is an artifact of the 70s), also allowed for that.
Now I live in a world of WaterScrumFall, which is absolutely the worst of all worlds.
Something people forget about the rise of agile is how many of the practices have become mainstream. Most shops I encountered 25 years ago:
- had a manual build
- deployed by hand
- did all testing by hand
- never touched code except for new features
- might not even use source control
But in the modern environment, we assume:
- comprehensive source control, or even total BOM control
- automated builds
- automated tests
- continuous integration
- continuous refactoring
- incremental design
Congratulations, you are already doing half of XP! Those were all considered radical back in the day.
What's left as purely "Agile" is the branded process stuff, e.g. standardized meetings and titles like Scrum Master, SAFe, etc. Given how prescriptive the business culture around it has become, it is a kind of anti-agile.
So death to Agile, long live agile.
Personally, I'm still an XP fiend at heart. Give me: a single list of prioritized work; a test suite I can trust that runs in < 15 minutes; and a dedicated team with good comms; and we'll tackle any problem you give us.
Most of those were considered normal best practice in waterfall shops I worked at. They were common, but we knew of many shops not doing them and laughed at those fools.
- We happily adopted a CI system from another team in 1998.
- We had a decent unit testing framework and a fair bit of automation for integration testing (although yes, some aspects of UI were hand tested periodically).
- We continuously refactored and improved existing code, especially if we touched something in it or around it. The only practice I think was poor then was mixing the refactoring and new work into a single commit, which I would never do today.
- I was using source control even in school in the late 80s, and I’ve never come across a team that didn’t, although I’m sure they did exist at that time.
- As far as deployment, everything I’ve worked on has been shipped on a very episodic basis (anywhere from a month between shipping to a few years), so most of the automation I’ve seen there from early on was about deploying internally for testing, and that’s always been more or less automated.
Although I did work at one place that tried Scrum for a little while around 2003-2005, the only practice I’ve generally seen adopted from that in the remaining years has been having periodic check-ins on status, e.g. monthly demo days to show off accomplishments from the last month. These are relatively low-prep although not impromptu, and that practice isn’t uniform.
With all that said, some things I have seen change, which aren’t mentioned by parent:
- Much more emphasis on developers writing tests. In the 90’s test teams were sometimes as large as development teams, and usually way behind the development team in progress. There was a belief that developers couldn’t possibly write good tests, or simply wouldn’t.
- I never worked anywhere where it was “docs first”, at least not to the extent that people mean that when they talk about older development practices. We did sometimes write the equivalent of a 1-pager to get everyone aligned on what particular work was about, and had a meeting to discuss and determine if everyone was in fact aligned. I think that was very useful, and sadly what I see more often than not now is “code first” meaning that you write code and then try to use code reviews to deal with everything, which means that what could have been an hour of writing something up and an hour meeting turns into someone spending days coding something that has to be thrown away because it’s so clear during code reviews that the approach is the wrong one. So now this “agile” approach means a throwaway work sometimes because people aren’t communicating in any meaningful way before writing code.
XP never claimed to invent any of those practices. But they were unevenly distributed. I note that 1998 was ~25 years ago, so I'm not sure we disagree here. You just described CI as a new-ish practice you adopted from another team.
I'm happy you had your experience. But I will testify I've seen lots of places that would slap your hand if you touched code to "clean it up". When "testing" was two people running a week-long manual script, narrowing the scope of changes was a required practice!
As for developer testing: a core idea of XP is that we developers should take responsibility for the quality of the code we ship. Test-first was a practice to support that goal. Automated testing wasn't completely unknown, but many shops had zero automated tests. Or if they had automated tests, they were clicky-clicky tests run by a ui-automation tool, and were written by the testing team. Developer written tests were rare back then. I remember interning at MS in the 90s and how proud they were that their target tester:developer ratio was >1! The creation of JUnit was an important historical event for us.
As for the lack of communication and wasted work: I completely agree. Turning a story/use-case problem statement into a concrete design has to happen sometime. We used to do a whole-team tasking meeting for each story we opened, and group estimate of our tasks on the whiteboard. Combined with pairing, whole team rotation, and continuous integration, there was very little opportunity for someone to get lost.
I don’t doubt anything you’re saying, either. I was just providing another perspective.
I think one thing I have seen over the years is people making extreme assumptions based on experiences they hear via the grapevine, and seeing more perspectives seems like one way to counter that.
For example I have seen the claim here that CI systems were somehow new in the same timeframe that XP and Scrum were first written about and being adopted, and that testing was always an afterthought back then, etc.
If anything I think we were often building more reliable and higher quality systems in that timeframe, if only because there was more alignment that testing and fixing bugs mattered. Today I get responses like “quality isn’t just about bugs” if I am critical about the quality of a particular product. As if what users want is 2N features, none of which quite work, rather than N features that are all solid.
My memory was that CI was kinda new in 2000. What were you using? Hudson came out in 2005. CruiseControl was 2001, which we adopted to replace our hand-cranked shell scripts on CVS hooks.
It was a completely custom system built by another internal team at the company I was at. Once word got out, other people copied the system and tailored it for their needs.
So it was definitely not a general purpose reusable system.
I’ve talked to other people working at other large companies in the 90’s and they had similar experiences. I think either multiple people at different places recognized the need and value of systems like these before they were widely written about, or word was getting around in some circles. Not sure which.
This. My suggestion to the scrum master once was: make documentation a story of its own with decent points; otherwise the whole thing is agile but chaotic, or at least inefficient at a tech level.
Source control was definitely a thing, but I think the popularity of git and what grew from it deserves a lot of the credit for current build pipelines. IMO a lot more than agile or xp.
> Personally, I'm still an XP fiend at heart. Give me: a single list of prioritized work; a test suite I can trust …
For me the essence of XP was always programming in pairs in front of a computer. That notion was somehow lost in the article and you also don't mention it.
Pairing was the practice people talked about the most, because it caused the most friction. I'm not sure that I would say it was the essence of XP -- I think XP was about taking collective responsibility, sustainable pace, and feedback. Pairing was a practice to support those values.
I miss pairing. It is powerful on its own, and is shockingly effective when combined with the whole XP thing. But you can still get a ton of value from the rest of XP even with async comms.
Companies like Microsoft already had internal build servers for Windows.
From Wiki:
"In 1994, Grady Booch used the phrase continuous integration in Object-Oriented Analysis and Design with Applications (2nd edition)[8] to explain how, when developing using micro processes."
CruiseControl and XP were building on an already known concept. By 2001 hardware had also become cheap enough to have dedicated build servers.
The XP community sometimes gives a distorted view of waterfall. It presents waterfall as a dogmatic approach run by incompetents. In reality waterfall was a lot more advanced, with testing, build tools and a phased delivery approach.
One of the reasons people dislike staunch agilistas is because they try to lay claim to anything that turns out to be a good idea and deny everything that turns out to be a bad idea.
The Joel test was published in 2000, the very first question on that test is "do you use source control". It was known back in the 90's that using source control was a good idea.
Sure. XP didn’t invent anything. But Joel published that because those practices were not widely known/practiced. That was his contribution/reaction to that industry conversation - single backlog, continuous integration, tests, etc. They really were uncommon and controversial in many shops.
From Joel: “The truth is that most software organizations are running with a score of 2 or 3, and they need serious help”
the value of source control was understood before XP even existed, it just hadn't been widely disseminated, which is _why_ Joel created his test in the first place.
XP and agile had absolutely nothing to do with the use, or lack thereof, of source control. At best you could argue there might be a correlation due to the sophistication of the shop itself.
Completely agree. SCCS goes back to 1973. I meant my comment about the lack of source control as a remark on how many shops had no hygiene at all, not that XP was the inventor of it. Joel wasn't a fan of the agile hype, and published his test in opposition to it. I agree with his assertion that much of the value was in dev hygiene: builds, tests, automation, and continuous delivery. And those could be adopted separately from the rest of "Agile". But agile had the buzz. Was the buzz part cause of broader adoption? I believe yes. Agile consultancies inflicted build hygiene on reluctant teams in the same way that Rational Rose salespeople inflicted RUP on them. And I'd argue those early Agile consulting engagements were better value-for-money than those RUP licenses.
I feel more strongly that continuous integration and continuous testing became widespread as part of the early agile movement. Again - none of those practices were invented there. Booch published his "Booch Method" in 1991, and Microsoft had monster overnight automated testing when I was there in 1992. But Beck wrote SUnit in 1995, and then JUnit when the Smalltalk diaspora fled to Java. XP was about evangelizing practices the authors had found to work and turning them up to 11. CruiseControl came out of Thoughtworks in 2001, and the XP book literally coined the term "continuous integration".
> Something people forget about the rise of agile is how many of the practices have become mainstream
I don't know how else to interpret this other than the list given are practices of agile, I responded because they definitely were not.
You may be right about the consultants bringing in better practices overall, but that goes back to my point that agilistas love to take credit for things that are not agile.
I would also argue that cloud was more the motivator for good CI/CD than agile.
At the end of the day it's all mixed together, but when you try to attribute something that's just good software dev to agile when it has nothing to do with agile itself you open yourself to the criticisms I've leveled.
If your complaint is that smart people were already doing those things, but that better way had no name, or industry leadership, just word-of-mouth and vibes -- I agree? That's kinda why the whole manifesto came about. It was people collecting a bunch of successful practices and branding it so they could more effectively evangelize them. That's what manifestos are. But like all revolutions, sometimes you win and need to declare victory and go home. Delivery agility won. The remaining "Agile Industry" is mostly cancer.
To be clear - I'm asserting that the Joel test was part of the broader delivery agility current in the industry. This current sprang from the lessons of the first web boom, and the fantastic changes in delivery possible on the web. In the shrink-wrap floppy world, yearly releases left so much time to just wander around and get lost. Head would spend weeks not compiling, and savvy devs knew the right times to merge head into their long-lived feature branches. Joel was explicitly reacting against that, and to SEMA, RUP, and the 90s planning driven movement where sub-module teams would write against fixed apis that hadn't been implemented yet.
Maybe I'm exaggerating. I agree that he differed sharply from the XP and other agile folks in some ways, and would never have signed the manifesto. But who was Joel talking to when he wrote: "Another great thing about keeping the bug count at zero is that you can respond much faster to competition. Some programmers think of this as keeping the product ready to ship at all times. Then if your competitor introduces a killer new feature that is stealing your customers, you can implement just that feature and ship on the spot, without having to fix a large number of accumulated bugs."
Or Joel again: "The other crucial thing about having a schedule is that it forces you to decide what features you are going to do, and then it forces you to pick the least important features and cut them rather than slipping into featuritis (a.k.a. scope creep)."
You can argue that people were already doing these things, but clearly Joel thought there was an audience that wasn't (including early 90s Microsoft!).
Compare that to the manifesto: "Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage."
Anyway: Single-backlog. Ship at any time with a single click. High quality. Clear comms. Dev focus. Blah blah. It's not Scrum, but I'd definitely call that agile as we understood it in 2001. If you think that's just good practice, and has nothing to do with "agile", I'm happy to retire the word. Time to move on.
what you're doing is akin to claiming LEAN manufacturing should be credited with the advent of anti-lock brakes because LEAN manufacturing was being employed during the time when anti-lock brakes became popular.
or guns should be credited with the advent of plumbing because guns were popular when plumbing became a thing.
The entire reason I pointed out that the joel test existed was to point out that industry leaders were actively trying to get people to become more sophisticated completely outside of agile. Why aren't you claiming agile is the reason people started creating timelines for software releases? Oh what's that, agile is really into NOT having those but releasing when it's ready?
It's almost as if different market forces were at play.
At some point you have to put your own biases to the side and realize that you do your own agenda more harm than good here.
The main problem with the Agile industry is that the certificate mills give people the idea that they have to know nothing about software to work on/in software.
Also, Scrum as outlined by people like Jim Coplien would probably be kind of fine for certain setups, but only a few agile coaches, POs or SMs know Scrum that well; instead it's a weird mix of apocryphal, illogical practices and superficial knowledge that, coupled with an inexperienced dev team, is similar to a giant Molotov cocktail that will sooner or later explode.
But in achieving that flawlessness, they made it so broad and vague that you really need to add additional practices to it. And then you've got to be careful, mature, and thoughtful or else it's gonna turn into a shitshow... but if you just do Scrum you'll end up as a shitshow anyway.
PS- to me personally the biggest flaw is that so much hinges on the definition of done being correct, and that is at least half the battle in dev and can almost single handedly save or sink an organisation in itself.
Agile failed because it had a naive view of politics. It never understood that the bureaucrat is the enemy and that the bureaucracy wouldn't give up so easily.
Instead, the bureaucracy did what it does when it is threatened: it pretended to pay lip service to the agile principles while subverting them in order to survive and thrive.
> Agile failed because it had a naive view of politics. It never understood that the bureaucrat is the enemy and that the bureaucracy wouldn't give up so easily.
Bureaucracy is just the human political animal expressing its awful nature in organizational systems.
> Instead, the bureaucracy did what it does when it is threatened: it pretended to pay lip service to the agile principles while subverting them in order to survive and thrive.
I want to reflect specifically on the testing part. Henrik Warne talks about how unit tests were an improvement to the development process, and how designing for testing was an improvement for program design.
I think both are very naive ideas (as I think overall about Agile and XP). But, before I get to the "why" part, I want to first describe my perspective on how testing changed over time.
The perspective on testing before the "shift left" happened was that testing is supposed to be adversarial. I.e., it was important that the testers didn't know how the code worked or what decisions the development team made to solve the higher-level problems the software was meant to solve. This was important so that testers wouldn't turn a blind eye to software problems they would otherwise have gotten used to, or accepted as trade-offs before even getting into testing. The reason is very similar to how in clinical trials the participants aren't told whether they get the placebo or the real drug.
This is also why the testing would normally start after the software was in a shape ready for testing (i.e. after a reasonably long development interval). This is also why a lot of it was manual: there wouldn't have been time to develop utilities for testing since the testers didn't know what the software was like.
But, of course, there were also commercial incentives to finish earlier. This is how "patch version" became a thing for example: a typical testing cycle would be to receive the system from the developers, to deploy it, to validate it (i.e. to ensure that the minimal required functionality is there), and then to try to "break" it. If the system broke, the developer would be notified. They then would have to fix the problems and deliver the fixes as patches. These patches then would be given a version number, and the testers would be required to use them to patch the deployed system in place, in order to continue testing.
A system shipped to the customer would usually contain a bunch of patches, and the installation process would include patching as well.
The downside of this system is the time it takes to get to the working product. The upside is the better overall quality of the product. The better quality comes from the testers working from specification rather than from their familiarity with the code.
----
So, unit tests. From the perspective of someone who had to manage CI for rather large organizations: unit tests give the least value for the most effort (among the different test types). Because unit tests deal with the minutiae of code details, they tend to change a lot, they grow stale very quickly, and they require a lot of maintenance because they also tend to multiply very quickly. Besides, most of the unit tests in every CI run test code that didn't change anyway. They just burn electricity for no effect. It's very hard to come up with a system that would only run unit tests where the code changed.
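To illustrate why it's hard, here's a naive sketch of such a system (assuming a hypothetical repo layout where `src/foo.py` is covered by `tests/test_foo.py`); the path-based mapping knows nothing about transitive imports, so a change to a shared helper silently skips the tests that actually depend on it:

```python
import subprocess
from pathlib import Path

# Naive change-based test selection: map src/foo.py -> tests/test_foo.py.
# (Hypothetical repo layout.) This is exactly where the approach breaks
# down: it knows nothing about transitive imports, so a change in a
# shared helper silently skips tests that depend on it.
changed = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

to_run = [
    f"tests/test_{Path(f).stem}.py"
    for f in changed
    if f.startswith("src/") and f.endswith(".py")
]
if to_run:
    subprocess.run(["python", "-m", "pytest", *to_run], check=True)
```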
At the end of the day, given limited time and resources, if there's any test a developer should be spending their effort on, it's the end-to-end test. They don't change as often, and they can at least guarantee that the program, as a whole, works sometimes (unit tests offer no useful guarantees about the whole program).
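For contrast, a minimal end-to-end test in that spirit (the `report` CLI and fixture file are hypothetical): it drives the whole program through its public surface, so it survives internal refactorings and a pass means the assembled program actually runs.

```python
import subprocess
import unittest

class TestReportEndToEnd(unittest.TestCase):
    # Drives the whole program through its public surface (a
    # hypothetical `report` CLI and fixture file), so internal
    # refactorings don't touch this test.
    def test_report_prints_total(self):
        result = subprocess.run(
            ["./report", "--input", "fixtures/sales.csv"],
            capture_output=True, text=True,
        )
        self.assertEqual(result.returncode, 0)
        self.assertIn("TOTAL", result.stdout)

if __name__ == "__main__":
    unittest.main()
```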
----
When it comes to designing for testing... it's also complicated. There are many dangers in doing this. Testing code is often a security backdoor. Testing code can also cause performance problems. There's a very delicate balance between how the system can be instrumented for testing and how much this instrumentation actually contributes to the quality of the system. My approach here is that, as much as possible, the testing needs to be external to the system. This is to prevent tests from affecting the system, and especially from leaving undesirable artifacts in it.
----
Agile, or any other simplistic software development management process, is just that: simplistic. It doesn't mean it "doesn't work" -- it means that it works sometimes. But I wish we spent time studying the process and discovering better ways of doing things instead of trying to shoehorn things we meet in practice into existing simplistic process descriptions to justify their existence.
I don't think I've ever been in a company that's used any Agile practice in an iterative fashion, or that's in any way thoughtful about structuring dev work. Managers adopt it as an arbitrary template for more compressed work, regardless of how well it fits into 2-week or 1-day standup cycles; then usually the same template permeates every other team, making it impossible to design a schedule that's appropriately out-of-sync with any other department or team. So I reduce the scope of my work to what can fit into 2 weeks and to having something tangible to speak about every single day, and there's no real feedback cycle engaged without a ton of friction and a raised eyebrow. I have a hard time believing that anything has ever shipped because the team chose Agile; any success is determined only by deadlines, budget, and periodic (real) feedback cycles.