How Going Back to Coding After 10 Years Almost Crushed Me (betterprogramming.pub)
135 points by mellosouls | 2021-04-05 | 126 comments




Apart from the headline and a parenthetical reference, there isn't really any insight here about why returning to coding almost crushed him, which would have been more interesting than yet another list of things common in modern development.

I have made similar switches between management and occasional development, and there are a lot of "interesting" and frustrating challenges in returning to coding after a break of several years. Hearing other people's experiences helps in dealing with the frustrations. There are a lot of tools, frameworks, build processes, etc. that weren't widespread a decade ago and that make starting a side project, or jumping into code only occasionally, an obstacle course for those who don't do it regularly. I'm ever grateful to those who write good how-to guides for these things that I can refer to when I miss a crucial step in my setup.


I have my own rule for headlines. You can almost universally swap out the word 'How' for 'That'. e.g. "That Going Back to Coding After 10 years Almost Crushed Me."

I feel this - it seems there's been a definite shift towards using some surprisingly large frameworks, and it does take time to ramp up on them.

Yup, I also couldn't find what exactly almost crushed them. That would've been the most interesting part.

Yeah... smells a bit clickbaity to me! If you decide to use such a title, you should at least expand on it in the article a little bit more than the "(although it almost crushed me)" at the very end...

Non-walled version: https://archive.is/UNYn2

Surely, this guy realizes this is highly dependent on where you're working. Release managers, QA teams, and dev->test->prod issues are absolutely not gone. Arguably, dev->test->prod issues are even worse than they used to be if you're utilizing vendor-locked services that are difficult or impossible to effectively mock or simulate, and even worse than that when you're part of a multi-vendor enterprise solution where your service needs to interact with other services you don't develop and that don't actually exist yet because they're all being developed at the same time, in which case you're forced to mock an incomplete spec that may or may not end up actually getting implemented.

These new realities the author talks about only hold if you're building sole-source, standalone services that rely exclusively on externals with stable, open APIs or standard protocols.


What sucks is when you have all these things that have to run for __your__ piece of code to work, something goes down, and it's another team that has to deal with it, but they're not available.

For example, whatever handles OAuth in a local domain/test environment. When that goes down, the entire app/test harness doesn't work... at least there are other domains to use in our case, but sometimes those are down too.


If I never have to explain again to someone that it’s THEIR JOB to investigate 500 errors from their code before someone has to come ask them about it, it’ll be too soon.

Set some alerts please.


Sadly, some people interpret agile and scrum as getting rid of QA teams. Of course you can release faster if no one has to manually check the feature but it just turns the user into the QA team. And no developer can properly test their own code.

I absolutely welcome automated testing wherever possible but for any software with a user interface (including websites) it can never fully replace manual testing.


I've been out of it for 30 years, and I have to dive back in. The hardware is amazingly fast compared to 1990, and as I frequently note, Git makes a huge difference in being able to refactor, compared to a folder full of ZIP files.

The frustrating thing for me is that GUI development has apparently gone backwards a decade or so, with the rise of web frameworks and all the extra layers and connections between the user and the actual data.

It used to be that you could grab some components in VB6 or Delphi and have your database prototype up and running in an hour, then make a stream of modifications as required, with the main delay being PKZIP and copying files prior to making any big changes, just in case. Compile cycles even then were in the 0.5-5 second range, like now.

I've wimped out and use a GUI client for Git and GitHub, but it is amazing to start a project and, once it's running, push commits to the internet with a single click.

The main limits I see right now are:

  My age - I'm 57

  My skillset doesn't include Rust, Clojure, etc.

  The programming languages are less impedance-matched to the human mind than they were in the past. Pascal made you type a lot of things, but that also made it far less ambiguous, and much, much easier to read and write.

I expect to wait, but not to get crushed. For me, programming is easy, except when it isn't; then it's a persistence hunt for the cause of the open issue.

Most of the trendy languages like Rust and Clojure are irrelevant. As ever it's C++ and Java that still dominate.

Python and Javascript are the new languages that dominate compared to 10 years ago.

As a trend, I've noticed that there are more companies with python/ruby/go/js based stacks than java/c++ stacks. However there are more jobs for Java/C++ than all the rest.

Large firms like Google/Amazon/Apple etc. tend to have an unquenchable need for skilled java/C++ devs. Going beyond the top firms large startups like Square/Uber/SalesForce similarly use java stacks.


They are the BASIC and LOGO programming languages of our time. Which is not a bad thing.

A JavaScript developer? A dude standing on multi-million lines of C++ code (the browser) to display a dialog box with some fancy animations? All the while destroying the accessibility that HTML offered? If these programming languages are so powerful, why isn't the underlying infrastructure and library layer written in them? It's written in C/C++ (maybe Rust in the future) for good reasons.


Notice that your reasoning could be extended perfectly well to assembler: there are assembler parts below your C/C++/Rust code. If C/C++/Rust are so powerful, why not write everything in those languages?

The answer is simple: different layers, different requirements. I'm not particularly fond of the myriad Electron apps out there, but we know what the alternative looks like: a world where most programs only work on Windows.

The same thing can be argued for C/C++/Rust vs Python/JavaScript/whatever: if only the former could be used, we would have much less software in the world. Some would say we would have less and _worse_ (as in crashes and security issues everywhere) software.

Sometimes "a fancy dialog box with fancy animations" is precisely what a business needs. The fact that it can be created and deployed worldwide in 20min. by a run-off-the-mill worker is a plus in this context. The world doesn't care about good software, it cares about software that fulfills an otherwise unfulfilled purpose.


Not many companies had their entire products written in BASIC and Logo.

This is heavily dependent on the subject matter/industry.

Can I recommend finding work in Go? It's a good fit for "makes you type a lot of things" while being super clear about what's going on. It's basically C with memory safety (more or less) and a good standard library that includes modern things.

Give Go a try. It has a few unique conventions, but overall it strikes a nice balance between clarity (no surprises) and convenience. It feels like one of those languages you can fit in your head, idioms and all, kind of like C or Pascal.
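To give a feel for that trade-off, here's a minimal, hypothetical sketch (names and endpoint invented, not anyone's actual code): a tiny HTTP service built with only the standard library, where every error is handled in plain sight rather than hidden behind a framework.

  package main

  import (
      "encoding/json"
      "log"
      "net/http"
  )

  type Status struct {
      Name string `json:"name"`
      OK   bool   `json:"ok"`
  }

  func main() {
      http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
          // Encoding can fail; Go makes you acknowledge that instead of hiding it.
          if err := json.NewEncoder(w).Encode(Status{Name: "demo", OK: true}); err != nil {
              http.Error(w, err.Error(), http.StatusInternalServerError)
          }
      })
      // ListenAndServe blocks; log.Fatal surfaces the error if the port is taken.
      log.Fatal(http.ListenAndServe(":8080", nil))
  }

It's verbose next to the framework equivalents, but nothing is hidden: the whole program fits in your head, which is the point.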

> GUI development has apparently gone backwards a decade or so

I find this the exact opposite. Okay, maybe I could break out Delphi or Visual Basic and make a crap UI that only ran on Windows and required users to install the software.

Today, in a few minutes, I can make an app in HTML/JS that runs in a webpage and that several billion people can access and use immediately on their PC, tablet, or phone. That to me is way better than it was 30 years ago.

And, if I want it to be an app I can use some HTML/JS wrapper and be done in a similar amount of time. I can have a new electron app up and running and on 3 platforms in under an hour. If I had more mobile experience I'm sure I could do the same there.


The nice thing about Windows programs was that you knew how big the display was likely to be, and more recently, the scaling has gotten much better.

The modern trend towards tiny screens that have to be scrolled to fit any usable amount of data, combined with the various different scroll/UI paradigms, means that you can technically have something most of humanity could access, but most users aren't likely to be happy.


> And, if I want it to be an app I can use some HTML/JS wrapper and be done in a similar amount of time. I can have a new electron app up and running and on 3 platforms in under an hour. If I had more mobile experience I'm sure I could do the same there.

So now you have made a crap UI that is running on 3 platforms. It was quick and easy for you, but now every user has to suffer with a slow and bloated electron "app".

GUI development definitely has gone backwards at least a decade. Of course it can still be done right, but with so many low skill web developers everywhere, electron "apps" are what we get.


> up and running and on 3 platforms in under an hour. If I had more mobile experience I'm sure I could do the same there.

Simply: no.

We're accustomed to great mobile/desktop UIs from big companies with dedicated people for each platform. And these UIs take months or years to polish.

An individual could maybe set up a project skeleton in 1 hour, but no more.


That is an advantage for you, as a developer.

Maybe users today have zero expectations because they don't know better? I'm a dev too, but my expectations as a user are higher. I've never seen a modern UI framework that isn't a complete piece of garbage. This includes relatively "performant" stuff like QML.

I'm pretty sure regular young users would notice too, but it seems that most people just accept and adapt to crappier trends every year.


> I can have a new electron app up and running and on 3 platforms in under an hour.

I do the same with Lazarus, except that my app will have a richer interface with better widgets.


The main reason modern web GUI frameworks don't always feel good is that they need to support wider ranges of screen sizes, ratios, resolutions, accessibility tools, touch inputs, etc.

And the web browser/CSS is pretty good at supporting different devices/platforms.


Fully agree on the UI.

Every platform has regressed significantly compared to just 10 years ago. Modern UIs are slow, laggy, visually poorly designed (both in layout and in control design - they're just "flashy"), and completely inconsistent with each other.

They seem to have completely lost 20 years of refinement in user interaction.

During the Amiga times you would be yelled at for placing the "cancel" button of a dialog in an unexpected position. And I would say: rightly so. Following system guidelines for consistency gave you great productivity as a side effect.

Today you're lucky if you can guess that the dialog is modal at all, let alone figure out which one is the ack button and which one is a link that redirects to a generic online help page.

The system design guidelines are ignored by system and app developers alike. It's a free-for-all.


I feel like I'm living in another world when I read these. In what world were UIs 10 years ago better, or even good? Not that current ones are super great, but the past ones neither looked better nor were easier to figure out.

Some examples of things I didn't use to see:

I get a good chuckle when I see flip-switch style checkboxes that include a description text which also flips in meaning when you toggle it. Flip switches are already frequently more ambiguous than a checkbox, but the changing text brings the ambiguity to a whole new level. Checkboxes were perfectly easy and took less space, but flip switches took over most modern UIs on mobile and desktop because they look cool somehow?

Drop-downs popping-up a borderless dialog showing you some choices, with a modern scroll-bar that's hidden by default. Depending on vertical size, you might not realize there's more if you scroll, since there's no indication of further elements unless the next element happens to be truncated at the text, making it obvious.

Actions (which were traditionally buttons) interspersed into the text as links, making the distinction a crapshoot.

Responsive "desktop" UIs that can't fit 10 toolbar elements in view and switch to a hambuger menu if the window is smaller than 2/3 of the screen. Make it better by actually shrinking the available workspace when that happens due to an extra-large sidebar that pops in it's place.

Broken scrolling (as in non-standard behavior/travel length and so on) is something I had never seen in desktop apps until recently. Custom widgets that do not correctly replicate system behavior. People complained for decades about Gtk/Qt not behaving exactly like Win32 (or the same with Cocoa/Qt on Mac, or any combination), and yet those perform like a dream compared to Electron, Flutter, even QML.

Of course you had bad UIs back then and now. But I've never been so frustrated by so many commercial programs switching over to multiplatform development and custom UIs.


> My skillset doesn't include the Rust, Clojure, etc...

Don't worry about that; their usage is barely a rounding error. Skill up on (Java or C#) + (Javascript and a popular JS framework) if you want a job doing programmy things.


Honestly the part on daily stand ups sounds like a negative to me.

Sick of repeating the same status update every day for no benefit.


After working for 20 years with weekly status, switching to daily status is a big change. It removes developer autonomy and responsibility. Instead of toughing it out and trying different ways to solve a problem, we report our status and management leads us in a different direction each morning. Instead of gaining breadth, we get emasculated. Minor problems get elevated to obsessions, inverting priorities. Over the longer time frame, all the intelligence of the leader is "harvested" and ant-like developers haven't been practicing problem-solving skill so progress ossifies, but people still feel busy because they're trying new things every day. And the new things are only from the repertoire of the leader who isn't developing either. But hey we have chip makers speeding up chips every year, so this will continue. This honestly used to be much more fun in the old days. It was less productive, of course, but also less reverent. It didn't feel like a morning mass or pledge of allegiance, more like the Superfriends.

This sounds like a team that would be as dysfunctional at doing weekly updates as it is now at doing daily stand-ups, except that you get bothered by it five times more often now. Don't blame Agile/daily stand-ups...

People are a product of their environment as much as the other way around.

Man, you must have worked at some really toxic workplaces.

I could write a paragraph refuting, point by point, every single one of your sentences as they apply to a healthy organization.

All I can really say is you should consider switching jobs. You'll have to take my word for it today but it's way better out there, and you can join a company and a team that will make you feel like NONE of those things (yet still have daily standups)


> Man, you must have worked at some really toxic workplaces.

Me too, then, because that accurately describes every Agile workplace I've been in the last 10+ years.

(I will concede that a lot of those were contracts and almost by definition, if they're hiring contractors to come in as emergency fixers, things are not going well anyway.)


I think this is as much an issue with your leadership as it is with daily stand-ups in general. I've worked with teams where stand-ups are requested and seen as important by the team. Personally, I see the aim of stand-up as informing the team of your work in case it is relevant to them, and asking for help if needed. How the leads of the team react determines how good a lead they are. Developers should have enough autonomy to feel they are working well and are able to properly investigate, problem solve, and design solutions, without affecting the overall efficiency of the team.

One of the more difficult aspects of a team lead role is balancing all the different personalities and work preferences in that team. Consider that "toughing it out" might be a desired work practice for some people, but for others it's the opposite: it leaves them struggling for longer than they need or want to. Stand-ups offer a chance to identify those situations, as not every developer will ask for help when they need it. The post-stand-up follow-up should then consider the individual developer's wishes (e.g. you should be able to ask to tough it out for longer, probably as long as it's not a critical blocker for other work), and that seems to be one of the places where your leadership is going wrong.


Standup is a grind .. I think of ways to get out of it, like an extended toilet break

If you're repeating the same status update every day then you're either blocked or you're not offering enough detail for your status update to be useful.

In the first case you're depriving yourself of the opportunity to be helped by your team.

In the second you're depriving your teammates of the opportunity to learn from your process, insights, and potentially helping them on their own task.

Get your head out of your ass and think about the maximum value you can provide to your team in 30 seconds to 2 minutes in a daily status report.


> In the first case you're depriving yourself of the opportunity to be helped by your team.

I find that some people on the team want to help too much, wasting a lot of time on things that the one original person would have solved in the same time, while burning multiple people's time instead. When someone is really stuck, then sure, but more often (in my experience) it's the kind of stuck that's simply part of working through a problem and trying out solutions. I find that people often 'want to help' to show they deserve a promotion or more money because 'they are so good' and 'are such a great team player' (i.e. signalling instead of actually contributing to the bottom line; doing the opposite by wasting time 'helping out' every day instead of finishing their own tasks faster). Helping out is good, but I think daily is too often to assess, from a 2-minute blurb, whether help is needed or not, especially when the supposed victim isn't asking for it and the 'help' gets forced upon them.


"Worked on feature ABCD yesterday, implemented part A, today will work on part B"

Repeat the next day for C, then the next day for D.

How is this useful information to anyone that it warrants a recurring meeting?

The problem is standup is useful like 5% of the time when someone is actually blocked yet for some reason we decided it needs to be every day rather than more efficient ad-hoc meetings to unblock people.


I've asked my team a few times whether they see value in our daily meetings, mentioning that they can easily disrupt flow, might have less value than ad-hoc focused meetings or chats on particular topics, and, worst of all for me, might be counterproductive when we tend to wait until the next day to raise something and then opt to discuss it further separately anyway.

Response is always the same: during lockdown/WFH, it is one of the few times -- sometimes the only one -- we get to talk and hear other people so they'd like to keep it.

So instead I'm trying to move away from it being a traditional scrum round-table and towards chit-chat, show and tell, general updates, group quizzes, etc. All probably more useful.


I also see this as the value of stand-ups. Even in the office, it's nice to have time with the team if not everyone goes to lunch together. Forced daily interaction can be good for team productivity.

It may just take a few minutes of the day, but it's probably the most irritating and demoralizing few minutes of the day.

>developers now own the test writing and CI infrastructure runs and reports the tests. It has really made software more reliable

I switched from medical, with fierce QA, to more general SW dev over the last 3 years. Developers waste time writing braindead unit tests and don't stress their software like QA teams. Major bugs run undetected for weeks and cause chaos. Managers grin believing they're net-ahead.


Hmm, I think the pace of change has made software break more, but unit tests and integration tests are an improvement. I agree writing unit tests takes a hella lotta time, though; not sure a senior engineer should be doing that half the time.

I'm at a FANG with some of the smartest people I have ever encountered in my life, yet no one seems to agree with me that devs should not write their own tests.

There is an unwavering belief that knowing the corner cases and weak points of your own code makes you somehow better at writing the tests. We also seem to be obsessed with testing that is half-way between unit level testing and system level testing. We test APIs and features more or less in isolation.

Tests should be written by antagonistic developers who are not at all familiar with the code. They should stress the capabilities promised in the interface documentation. Hiring more people who are dedicated to testing costs a lot of money though. I don't see things changing anytime soon.


The problem is, when devs don't write their own tests, the code they write tends to be untestable.

I suspect you think the developer writes the code and then the QA engineer writes tests for that code. That's the wrong way round. The tests are written first, and they all fail, then the dev has to write code to make them pass. By definition it's literally impossible to write untestable code that way because then the tests won't pass.

It's the principle of TDD but separated into different roles. It works well and produces good results, but you need exceptionally good planning upfront to actually do it.
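As a rough, hypothetical illustration of that split (the interface, names, and numbers are invented): the interface is agreed and documented first, the QA engineer writes a failing test against it, and the developer then writes whatever implementation makes it pass.

  package pricing

  import "testing"

  // Agreed, documented interface; no implementation exists yet.
  type Discounter interface {
      // Discount returns the price after applying a percentage discount.
      Discount(price, percent float64) (float64, error)
  }

  // Written by the QA engineer before any implementation. It fails (in fact,
  // it doesn't even compile) until a developer provides NewDiscounter.
  func TestDiscountAppliesPercentage(t *testing.T) {
      var d Discounter = NewDiscounter()
      got, err := d.Discount(200, 25)
      if err != nil {
          t.Fatalf("unexpected error: %v", err)
      }
      if got != 150 {
          t.Errorf("Discount(200, 25) = %v, want 150", got)
      }
  }

By construction, the developer can't ship code the suite can't exercise, which is the point being made above.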


Who designs the interfaces that the tests call?

That's the planning part.

Usually it's done as a collaboration between the systems architect and the QA team. The process is essentially "design the system architecture -> document the APIs -> write the tests -> implement the code -> QA". It takes a level of rigour and planning that most teams don't want to do, and it relegates developers to a pretty minor role in the whole development process. They end up just filling in the blank bits in someone else's design. Most changes in team structure over the past couple of decades have been about developers taking on more of the responsibilities rather than less.

It's actually one of the aspects of TDD that makes it so robust. If you properly design your code and really think about its APIs, you get a better app at the end. It does take longer to get something a user can see, but often the process gets to a finished iteration of an app quicker because there are fewer cycles of debugging and fixing needed.


> The process is essentially "design the system architecture -> document the APIs -> write the tests -> implement the code -> QA". It takes a level of rigour and planning that most teams don't want to do

That's a very charitable way to put it. In my case I would call this kind of process very convoluted and similar to early-2000s-style development. I obviously would not want to work in such a bureaucratic environment, where every small tech decision has to go through multiple boards.


> I obviously would not want to work in such a bureaucratic environment where every small tech decision has to go through multiple boards.

Everyone who works on the app is making those decisions no matter what you do. The difference is that in the "old style" the decisions were made before the code is written, and in the agile "new style" the decisions are made after the code is written. Often that's fine, because developers are generally good at their work and they make decent decisions, but sometimes they get it wrong and that's when code design issues arise. It's also what leads to automated tests failing to test for a lot of cases that a good QA engineer would have written tests for. Those things have an impact on the user.

A huge amount of the code in apps we use every day was never designed. No one thought it through. No one considered the edge cases. Features are thrown together in a week and 'sort of' work. Every time you see a shitty broken website, or a bug in production, or some crappy slow thing that should be fast, the reason behind the problem is that the developers who made it didn't take the time (or didn't even have the time, in really bad companies) to think about what they were building. A lot of developers like that working environment because they can hack on things and move on to the next challenge quickly. That's fun. I argue it's also bad for users, and I care more about users than I care about developers (and I say that as a developer who's been making web stuff for almost 25 years.)

I'm not arguing that we go back to Prince2 and waterfall. There's a limit to my tolerance of bureaucracy too. I'm saying that things have gone a bit too far the other way, and many developers need to spend more time planning what code is written before leaping in and coding up the first solution they think will pass the acceptance criteria someone from product wrote.


> I'm not arguing that we go back to Prince2 and waterfall.

Earlier you proposed: "design the system architecture -> document the APIs -> write the tests -> implement the code -> QA". This is precisely waterfall. Detaching the term from the emotions and bad PR, there are decades of real experience of people who did that and discussed their results.

The bureaucracy is not the cause. I argue the causality is: the "design first" assumption -> the "throw it over the fence" practice -> everyone blames everyone -> Prince2 comes to the rescue.

> I say that as a developer who's been making web stuff for almost 25 years.

> I'm saying that things have gone a bit too far the other way, and many developers need to spend more time planning...

Amen!


>This is precisely waterfall

I'm not sure I fully agree, unless we want to define waterfall as "anything where a bit of time is spent up-front to decide how a part of the system should work" :)

For me, waterfall is where every single aspect of the project is pre-defined, and cannot be changed during development without serious pain and lots of awful bureaucracy

But in the above workflow, there isn't anything stopping us from looping, for example on a sprint-by-sprint basis, and using feedback from both the tests and changing requirements to improve the design, update the APIs, change the tests, etc.

I suppose we could argue that this is "mini-waterfall" but it works in my experience :)


> I would call this kind of process very convoluted and similar to early 2000s style development

The 2000s style you are referring to was centered around "features" or "business cases". One unit of work = one feature. The bureaucracy is orthogonal.

Modern "agile" style is centered around "sprints". One unit of work = one sprint duration.

The bureaucracy inevitably involves synchronization points and enforces a more or less linear process, which usually implies longer "real world" feedback loops. Lack of bureaucracy allows for concurrency and possibly shortens the feedback loop, which may be nice at first, but then you slowly, informally incorporate the bureaucracy back - code owners, design sessions, etc.

The bureaucratic process is not inherently bad, the agile process is not inherently good. In some cases quick turnaround of basic features is more profitable, in other cases correctness may be most important. I can agree that quick feedback is more fun to work in, but it is not necessarily the best way.


So how do you test the design? I ask this sincerely. To me it seems that it's rare to know in correct detail what the design is supposed to do in advance, but also that even if you do, it often has oversights that aren't spotted until you've built it.

You still work in sprints and deliver features to the user regularly, and iterate to the right solution. I'm not suggesting you design all the code in the entire app and write all the necessary tests for every feature at the beginning; you still only design the bits that are needed right now. The development process just has a lot more upfront thought time, more upfront test writing time, and less coding time because if your code passes the tests you know it's correct (based on the assumptions about what 'correct' is right now.) There's always going to be a requirement to go back and improve things, pay down technical debt, fix design issues, etc no matter what your development approach is.

> The tests are written first

That surely implies a specification which, in my experience, is a rare thing in these Agile days.

> the QA engineer

In the last 10 years of contract and perm jobs, only once have the QA people actually written tests (and that was largely translation of an existing manual suite to a node-based automatic runner.) It's pretty much always been "these are the things the devs/product manager/pm wants to test for these changes, please test those and approve".


That sounds like an incredibly slow feedback cycle. How would that work? Write some code, check it in, wait for someone else to write tests... and come back in hours/days/weeks to fix any issues?

Getting constantly dragged back to fix last week's bugs sounds like a chapter from Dante's Inferno. I'd rather discover them while the code is fresh in my mind.

Without tests, how do I know when I'm done?


In the past, I had pseudo-pair programmed in order to test. We'd agree on the external-facing API and pair-program a slow in-memory implementation. Once that was completed, we would split off and one of us would write test cases against the API (fixing up the in-memory implementation as needed) while the other would work on a proper database-backed implementation.

I haven't done this in probably 7 years, though. Mostly because I haven't cultivated this kind of working relationship with my coworkers.
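For what it's worth, a hypothetical sketch of that arrangement (names invented): one shared, external-facing interface, a deliberately simple in-memory implementation to pair on first, and tests written against the interface so they later apply unchanged to the database-backed version.

  package store

  // UserStore is the external-facing API both implementations must satisfy.
  type UserStore interface {
      Save(id, name string) error
      Get(id string) (name string, ok bool)
  }

  // memStore is the slow-but-obvious in-memory implementation used while pairing.
  type memStore struct {
      users map[string]string
  }

  func NewMemStore() UserStore {
      return &memStore{users: make(map[string]string)}
  }

  func (m *memStore) Save(id, name string) error {
      m.users[id] = name
      return nil
  }

  func (m *memStore) Get(id string) (string, bool) {
      name, ok := m.users[id]
      return name, ok
  }

The test cases then take a UserStore rather than a concrete type, so the same suite can later be pointed at the database-backed implementation (say, a hypothetical NewSQLStore) without changes.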


I've seen both types of org layouts where I've worked (we moved from one to the other).

Initially separate QA teams that would write tests, a subset of which might do manual testing in the UI.

The feedback loop for this was brutal, as you suggest, and caused a lot of problems, with delayed breakages that became hard to fix and debug. It's also often hard for QA to know what the code is doing or to interact closely with the dev teams. You can try to get them as close as possible, but it doesn't work super well.

Now devs are responsible for writing tests and teams think about potential failures when designing things/discussing the plan with their team. They were supposed to consider these things before too, but the incentives weren't aligned - now they are. We have infrastructure teams to make automated testing as easy as possible and spin up a test environment where you can see your change, etc.

I think this model is better, testing well is hard and reasoning about failures and how things can go wrong is an important skill to develop. Offloading this to a different team feels like a variant of the dev that 'just writes features' and doesn't consider actual deployment.

This is often made doubly worse by companies with QA teams treating them as second-class citizens, both in status and in pay. I think it's better to make writing the tests part of the job of the person writing the feature; it doesn't help that writing tests is something few people know how to do coming out of school.


Write the spec.

Write the tests.

Write the code.

I'm talking about API and system level tests. Unit tests vary in scope between 'test a function' and 'test a unit interface'. The former should be done by the dev during development; the latter should be done by someone else. It's a grey area between those extremes.

I'm coming from a systems programming and embedded background so I have no idea if a web dev or someone else should follow my advice, but I suspect so.


You're getting heavily downvoted and I'd love to know why. I've seen similar attitudes toward testing among the groups I've worked with, nobody really wants to do it.

Are folks downvoting this because they don't like testing? Is this idea that prevalent?


Count me in as one of those who disagree with you.

I 100% side with the grandparent comment. When you've worked with a great QA, it's night and day versus a dev writing their own tests, but a dev not writing their own tests at all is a nightmare.

Ideally there's a place for both and they work very closely with one another to make the software stable at all levels. It's arguable the SRE is the new QA engineer.


> yet no one seems to agree with me that devs should not write their own tests.

I'll die on this hill. The developer that wrote the code by definition cannot write an appropriate test suite for it. It is entirely possible/probable that details missed in implementation will be missed entirely in test, as the developer missed them.

<controversial take> Manual QA promotes bad habits and is not a great thing to introduce to an org. SDETs/QA Engineers/Developers whose role is explicitly to create tests for an org are worth their salary 10x. </controversial take>


> are worth their salary 10x.

Are you saying it's a bad role to get into because it's underpaid? ;)


Yes, sadly for some reason in most orgs, QA/QE whatever you're calling it, pays lower than dev work. Even though, IMO, they're worth far more.

They are worth more in the same sense a salesman is worth more than the dev. Without the dev neither have anything to do, yet they all provide equally valuable work. Perhaps it's not a matter of value, but supply and demand.

I don't agree.

You can't finish a project without developers and you can finish it without testers.

I've seen a tester changing requirements on the go, delaying features for things that aren't even the case. Those change requests then cause bugs, because... Dev.

The most important thing for a dev is domain knowledge and letting them care about the product.

Usually, when the original dev leaves (monolith), that's enough to call the project abandoned but maintained, leaving the remaining devs wanting to spend a minimal amount of work on it.

And that's going to trigger sloppiness.


> You can't finish a project without developers and you can finish it without testers.

...for a given value of finished. My value of finished involves "no significant bugs that cost thousands of dollars of revenue per day". And in the past, QA has repeatedly caught such bugs, despite ample testing by the devs.

It really comes down to the mindset, and the approach - a QA approaches testing code differently to a dev who wrote the code.

> I've seen a tester changing requirements on the go, delaying features for things that aren't even the case.

In a well-functioning team, requirements should be explicit, understandable, achievable, realistic, and _agreed upon_ from the get-go. Once agreed, devs and testers work to those requirements. If, during the development cycle, it's realised that a requirement is lacking, or indeed, entirely absent, then it should definitely be discussed between devs and QA, at the very least, and _agreed upon_ - noting that sometimes, it might be a reasonably significant change in requirements that the business needs to be involved also in reaching that agreement. It may change delivery time, or delivered capacity etc.

The ideal, from my POV, is having testers fully involved in the planning discussion / backlog grooming whatever you call it, where your team ensures that the requirements from business are specific, realistic, achievable, and useful. And then, hopefully, given the entire team's domain knowledge, that is, domain knowledge from developers AND QA, because they will have a metric shit ton also, and I find it curious you only ascribed that to devs... ... then, hopefully you can determine which requirements are missing, get the business to agree, and then make those part of the agreed upon work also.

You'll note that I'm not speaking on dev vs. QA, rather on process and team structure. The issues you faced are not issues inherent to QA. The issues you faced are due to process and team structure.


I'll bite.

> without testers.

A "tester" is far far far away from a legitimate QA person.

> I've seen a tester changing requirements on the go, delaying features for things that aren't even the case.

Hiring bad employees doesn't invalidate any role. It invalidates A.) Your hiring process for that role, and B.) That employee


Reminds me of a NASA failure report where they determined that the satellite failed immediately after launch because they built their own circuit-test device to match their assumptions rather than reality. But it passed their test. Now it is spinning uncontrollably above us.

Developers also risk the "I thought of that already so I don't need to test it thoroughly" pitfall, eventually leading to the usual "we didn't think it was possible to fail like that".


See also the Hubble telescope mirror - it perfectly "passed" the test using the sophisticated automated testing machine, which was set incorrectly.

I'm with you on that hill. Good QA have a vastly different mindset to devs - and they're not prone to the unconscious tunnel vision you develop when writing code that assumes a particular approach.

However, I have to say, depending on your product, some of the best damn QA I've worked with aren't writing integration tests, at most they're using SQL to get the DB into an appropriate state, and then they're doing the rest manually, but it's really about their mindset, manual is just the easiest approach for them. Manual encompassing "click on the thing" as well as "hit the endpoint with Postman".

It's their mindset I value most. And some of the best pairing I've ever done is with a QA to write integration tests.


> Manual QA promotes bad habits and is not a great thing to introduce to an org

Interactive software, and very interactive software like games, absolutely requires good manual QA. There's a lot of interactive software out there.


The way I see this is that there are two aspects to testing: 1) uncovering bugs, and 2) catching future regressions.

Having the developer write the tests suits scenario 2, but it's totally inappropriate for scenario 1.


>Manual QA promotes bad habits

I could understand Manual QA does not scale, but why bad habits? What if the person doing Manual QA of an application also then writes tests afterwards based on what they have determined are the likely problems in the application - which is what I would do if I was going to write (ui) tests, test it by hand first.


> What if the person doing Manual QA of an application also then writes tests afterwards based on what they have determined are the likely problems in the application

I wouldn't really call this manual QA then. I'm describing the thousands of orgs whose QA processes are limited to "These people will run these five thousand test cases by hand for every release", and/or "These people will be handed a ticket and manually click all the buttons before marking it done"


>The developer that wrote the code by definition cannot write an appropriate test suite for it. It is entirely possible/probable that details missed in implementation will be missed entirely in test, as the developer missed them.

The same is true of pretty much everybody - PMs and QA included. Everybody misses details and edge cases.

It's better to write the test suite in a readable form and get everyone to take a look at it.

Unit tests are almost entirely unsuitable for this purpose most of the time.


> The same is true of pretty much everybody - PMs and QA included. Everybody misses details and edge cases.

No, the same isn't true of everybody. They would all have to miss the exact same details and edge cases, was the point. It's the same reason that you have sensor redundancy, you don't have the same sensor measure twice and trust it.


??

That's exactly what I meant. With inputs from a diversity of people you can pick up the edge cases more easily. Separately everybody will miss some.


Then you just repeated what your parent already said. Your comment read like dissent.

That is not what the parent said. They advocated specifically for developers not writing test cases. I do not.

They argued that separate people should look at a problem so as not to make the same mistake twice:

> The developer that wrote the code by definition cannot write an appropriate test suite for it

You said that issue could be applicable to everyone:

> The same is true of pretty much everybody - PMs and QA included. Everybody misses details and edge cases.

I pointed out that the issue can't be true of everybody, since everyone other than the original developer would generally avoid it. You then said I repeated what you said. Which is it?


There should be both.

Unit tests written by devs are not testing that the system does what it is supposed to do. They're testing that the code does what the programmer intended it to do. They have to be written by the programmer because they're coupled with the code.

There's a separate testing process needed (as you say) that tests if the system does the thing it's supposed to do. The programmer cannot do this, precisely because it's basically testing their understanding of the requirements.

I have my co-founders QA everything before rolling out to production. It's a pain, but it's saved us a few headaches where I was just following the golden path and missed some obvious problems.


Depends. If the 'unit' is big enough to have an interface spec then a different dev should write the test. If you're doing function level unit testing, a dev should write it. There's a grey area between those two extremes.

My original post was intended to focus on system and api level testing but I wasn't clear on that.


I agree, there's a ton of grey areas in this whole field, especially around the borders between layers.

I'd argue that while they can't "do" their own testing, developers should be writing unit tests that prevent their code from being broken in the future. They should also be writing tests that exercise their code so they find corner cases that were not obvious when coding.

The problem is it's difficult to define what that is (and particularly to teach a junior Dev what to do).


I feel like this isn't that hard.

1. Write (some) tests as you develop to help you

Writing down your program's test cases should not add much time (assuming you have test infrastructure set up like you should), since you already have to consider these things simply to figure out what the right logic is. With a little practice, you find a good rhythm for how far to take it. In my experience, this actually leads to faster and more correct solutions.

2. Collect tests while in review to cover the requirements

Once you've written it though, it's always important to try and get others to look over your code and try and find flaws. Ideally, they can even write tests for you to add to the feature. This is just a standard part of any good peer review.

3. REGRESSION TEST

Finally, though I'm not sure it matters very much who writes it, regression tests are something I never see stressed enough. I feel like if you are serious about testing, every bug should be required to add at least one test to the suite before it can be closed.

Just my 2-cents.
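For point 3, a tiny hypothetical example of what that can look like (the function and the bug are invented): the regression test is added alongside the fix and named so the link back to the bug report is obvious.

  package slugify

  import "testing"

  // Regression test for a (hypothetical) bug report: Slugify returned garbage
  // for strings that were all punctuation. Added with the fix; closing the bug
  // without a test like this would let the behaviour silently come back.
  func TestSlugifyAllPunctuationRegression(t *testing.T) {
      got := Slugify("!!!")
      if got != "" {
          t.Errorf("Slugify(%q) = %q, want empty string", "!!!", got)
      }
  }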


> There is an unwavering belief that knowing the corner cases and weak points of your own code makes you somehow better at writing the tests.

This is a very common -- but inevitable -- misunderstanding.

The real reason the same person should write tests and code is if you find a person who is competent enough in both technical domain and problem domain to write the tests -- you definitely want that person to write the code as well.

See how it's not "developers should write tests"? It's "domain experts should write the tests... and domain experts should write the implementation."

The two tasks happen to need overlapping skills.


Black box test - Testing a unit without any information about its internals. You provide inputs, you record outputs, and you have a spec.

By definition, the author of a unit cannot write a black box test for it. Any test they write is going to be a white box (clear box?) test, because they know how the unit works.

If you interpreted the spec incorrectly when writing the unit, you're going to interpret it incorrectly in exactly the same way when writing the test. White box testing has its place, but you need a second person if you want to do black box testing.
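To make the distinction concrete, here's a rough sketch of a black-box test in the sense above (using time.ParseDuration purely as a stand-in for "some unit with a documented contract"): it only feeds in inputs and checks outputs against the spec, with no knowledge of how the parsing is implemented.

  package blackbox_test // external test package: no access to internals

  import (
      "testing"
      "time"
  )

  func TestParseDurationContract(t *testing.T) {
      cases := []struct {
          in   string
          want time.Duration
      }{
          {"90m", 90 * time.Minute},
          {"1h30m", 90 * time.Minute},
          {"0s", 0},
      }
      for _, c := range cases {
          got, err := time.ParseDuration(c.in)
          if err != nil {
              t.Fatalf("ParseDuration(%q): %v", c.in, err)
          }
          if got != c.want {
              t.Errorf("ParseDuration(%q) = %v, want %v", c.in, got, c.want)
          }
      }
  }

A white-box test written by the author would instead poke at the internal parsing paths they know exist, which is exactly the knowledge the comment above says you want to keep out of it.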


> I'm at a FANG with some of the smartest people I have ever encountered in my life, yet no one seems to agree with me that devs should not write their own tests.

> There is an unwavering belief that knowing the corner cases and weak points of your own code makes you somehow better at writing the tests. We also seem to be obsessed with testing that is half-way between unit level testing and system level testing. We test APIs and features more or less in isolation.

You're both right. Devs should write tests for their own code, but they shouldn't be the only ones writing them.

One of the problems (at least at my employer) is QA is considered a place for second-rate developers. Some of them are so bad that they rely on the devs to basically write their tests for them, by requiring extremely detailed and specific acceptance criteria. Only once have I worked with a tester that was engaged enough to really understand the requirements on their own and call out weird behavior or corner cases that weren't spelled out in advance. It was great.


I worked with a tester turned dev. He was amazing; his attention to detail and ability to grok specifications were second to none.

> Some of them are so bad

So why is your company hiring bad QAs?


The answer seems to be in the same message:

> QA is considered a place for second-rate developers

Obviously, if you look for second-rate developers to be your testers, you will end up with second-rate testers or worse!


> Obviously, if you look for second-rate developers to be your testers, you will end up with second-rate testers or worse!

Yeah, and to make my thoughts a little clearer: if your organization does that, it's no wonder some people come to the (wrong) conclusion that only devs should test their own code. To come to the right conclusion in such an org, you either need an outlier experience (e.g. a talented tester who chose the role against organizational incentives) or the ability to see past your own direct experiences (which is really hard).


Why not both?

QA teams and devs write totally different tests. Devs go about it to help future refactoring, e.g. to codify what the feature is supposed to do, so that if we change it in the future we will not inadvertently break something. QAs write tests that look at it from the user's point of view.

QA tests are arguably more valuable, but writing testable software itself can only be done if the devs too are invested in the task.

Maybe we can form DevQA teams? the way we did for DevOps and DevSecOps?

But alas, I still think having everyone throughout the whole stack invested in writing tests makes the final product better.


> yet no one seems to agree with me that devs should not write their own tests.

A Change Request (or whatever you call it in your flavor of agile) is the source, informal description of behavior, while the Specification, Code, and Tests are three distinct formal descriptions of the same thing. If any two of those three are written by the same person, assumptions and thinking errors carry over from one to the other, largely defeating the very purpose of writing down another formal description in the first place.

Unit tests became mainstream with the proliferation of weakly/dynamically typed languages (JS, Python), and concepts like TDD largely became mainstream due to the lack of formal interface specification capabilities in the languages themselves. Stricter languages (Java, C++) cover huge portions of the testing in the type system. Tests written by the developer are okay if they are used as a substitute for a formal type system. However, tests intended to catch logic errors will contain the very same logic errors found in the code, and such tests will only give you false confidence.

It does not mean that tests must be written by a dedicated QA, but rather that tests should be written by a different person for them to serve the purpose they are intended to serve.


> Hiring more people who are dedicated to testing costs a lot of money though. I don't see things changing anytime soon.

Doesn't Google have a Software Developer in Test role? I work in a bank and every project has these people; their only job is to write automated tests (usually with the Python Robot Framework).


Yes, I was hired at Google to do dedicated testing (Test Engineer was my title). Note that at Google, many test engineers were "supposed to" build test infra, not write tests or do testing. I was brought on to test some systems that were critically important, but built by a team that focused more on operations than coding. I set up a continuous build, fixed all the broken tests, and prevented team members from checking in broken code. I don't think developers can be entirely trusted to QA their code.

I think it's more appropriate to say, the quality of (enterprise?) software should not rely SOLELY on tests written by a software developer.

Is it not common knowledge already that code review is the single most valuable tool in getting rid of bugs? I remember reading it in Code Complete, or maybe The Pragmatic Programmer. Can't remember.

I think there's still value in writing your own tests, at least with regard to refactoring (and sleeping better at night after deploying to production).


> Is it not common knowledge already that code review is the single most valuable tool in getting rid of bugs?

No, it isn't. Because there isn't that much evidence it is. It is a good tool, but "most valuable tool"? That's a very bold claim. I would still place code analyzing tools, linters and fast feedback cycles (= fast compilation and ability to check changes) over code reviews, if only for the fact many code reviewers tend to divert into "I like this way better" arguments.


Yeah definitely poor choice of words there. I misremembered the book. Anyway here's a cherry picked excerpt from Code Complete, 2nd edition:

> Glenford Myers points out that human processes (inspections and walk-throughs, for instance) tend to be better than computer-based testing at finding certain kinds of errors and that the opposite is true for other kinds of errors (1979).

> This result was confirmed in a later study, which found that code reading detected more interface defects and functional testing detected more control defects (Basili, Selby, and Hutchens 1986). Test guru Boris Beizer reports that informal test approaches typically achieve only 50–60 percent test coverage unless you're using a coverage analyzer (Johnson 1994).

So I would conclude that the best approach is to use both code reviews/inspections and automated tests/linters/analyzers.


I'm not debating automatic vs human approaches as much as how code reviews fit and why blind focus on them misses the forest for the trees in regards to other human approaches. When it takes 20 minutes from starting compilation to getting to the point of change, the Dev/QA/Review trifecta looks a lot different than when this takes 3 minutes. (Ideally,) developers are a lot less likely to make errors out of fatigue. They are more likely to pay attention to detail (when it takes 20 minutes, it's easy to go "whatever QA/reviewer will verify"). Reviewers start checking out the code and testing the change manually, rather than just looking at the code and pawning it off to QA. QA gets more time on their hands to proactively eliminate issues by being part of the design, instead of being the guys pointing out mistakes after the fact. Having shorter compilation cycles paves the way to increase the quality of all three segments of this cycle, yet it is often overlooked the most despite seeming so obvious.

The part about QA is perhaps the most infuriating. Most business logic is so simple it can be presented as test/result matrices, which generally translate cleanly into requirements and automated tests, are easy to reason with, implicitly have three parties with different perspectives, and should be blind to technical details giving devs more freedom. Yet instead, this piece gets relegated to "software requirement engineers" writing walls of text, who tend to forget even the simplest cases (null, empty list/string, etc.), and any automated tests are then designed by the same developers making the implementation and only caught after the fact by manual QA a few months later.


> We also seem to be obsessed with testing that is half-way between unit level testing and system level testing

I think that's where most testing should live. Testing single modules in isolation means you don't test how they plug together, and it makes it hard to refactor responsibilities between modules, so that often doesn't happen. Full system level testing is painful to write, as you often have to mock out the world, and painful to run, as it is usually slow.

The middle ground, where you test a block of modules together through the top interface, does not replace the others - you still need some full system level testing to check that your system integrates properly, and most modules will need some unit testing (and some will need a lot) - but it's the best "bang for your buck" in terms of invested effort.


I'll give an example for why the original author of the code should write the test. Say you write a sort and for performance reasons you use a different algorithm beyond collection size 100. This is prone to bugs around collection size 100. Your test needs to shed light on this dark corner. A QA generalist can't know that. They write some generic test cases. Size 0, 1 and 2 since they are inherently dark corners. Then some medium size and some large collections. They'll likely miss the 100. And the bug that's there for size 99.

You need both and ideally the original author writes the test and somebody else reviews the test thoroughly. That reviewer should have an intimate understanding of the spec you are implementing.

The more formal that spec, the less need for that external reviewer, since there is less ambiguity leaving room for misunderstanding. In a perfect world, the spec is completely formalized and, instead of a test that shoots in the dark, you'd write a formal, machine-verified correctness proof automatically covering all cases. Then you would not need that external reviewer and could just have the programmer do it by themselves.
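To make the size-100 example concrete, a hypothetical sketch (the function name and the switch-over point are invented for illustration): only the author knows the implementation changes algorithm at length 100, so only they would think to probe right around that boundary.

  package sortx

  import (
      "math/rand"
      "sort"
      "testing"
  )

  func TestHybridSortAroundAlgorithmSwitch(t *testing.T) {
      // 99, 100, 101: the sizes a generic, spec-only test would likely never single out.
      for _, n := range []int{99, 100, 101} {
          in := rand.Perm(n)
          got := HybridSort(in) // invented name for the sort under test
          if len(got) != n {
              t.Fatalf("HybridSort changed length for n=%d: got %d", n, len(got))
          }
          if !sort.IntsAreSorted(got) {
              t.Errorf("HybridSort output not sorted for n=%d", n)
          }
      }
  }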


That's where red-green TDD and pairing comes in. Unit tests != class level tests.

There are tests and there are tests.

Software engineers benefit from having tests that encode invariants they cannot otherwise encode in the type system, so that they have a machine-assisted refactoring aid. It's just an extension of writing the code: if you think you know what you want your code to do, you just add more of that.

But it's very hard for software engineers to think about what happens outside of that realm, whether because the software encounters edge cases naturally or due to "creative" ways the end users find to (ab)use the software.

Some of those cases can be explored with a fuzzer, some of them are best found with actual human QA testers.

Obligatory: https://youtu.be/baY3SaIhfl0

As always, things stop working when you just go through the motions and become cargo cults.
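As a rough illustration of the fuzzer route (Go's built-in fuzzing from 1.18 onward; the function under test is invented): the fuzzer mutates seed inputs looking for cases the author never imagined, while the property being checked stays simple.

  package slugify

  import (
      "strings"
      "testing"
      "unicode"
  )

  func FuzzSlugify(f *testing.F) {
      // Seed corpus; the fuzzer mutates these into inputs nobody thought to write down.
      f.Add("Hello, World!")
      f.Add("")
      f.Fuzz(func(t *testing.T, s string) {
          out := Slugify(s) // invented function under test
          // Invariants that must hold for *any* input, however weird:
          if strings.IndexFunc(out, unicode.IsSpace) >= 0 {
              t.Errorf("Slugify(%q) = %q contains whitespace", s, out)
          }
          if strings.IndexFunc(out, unicode.IsUpper) >= 0 {
              t.Errorf("Slugify(%q) = %q contains upper case", s, out)
          }
      })
  }

Run with "go test -fuzz=FuzzSlugify"; human QA still covers the judgement calls a simple property like this can't express.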


That is my main beef with developer testing: "braindead unit tests". I see loads of tests checking whether `int a = b;` really happened. But everyone feels good because they are doing unit testing the way the "gods of agile and TDD" laid it out...

Try to say that such a test is useless and even harmful to long-term value, and you get "we are all people and make mistakes, we want to prevent that". If someone is trying to prevent a wrong assignment in a strongly typed language, well, I will call that "braindead".
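For illustration only (not from the thread), the kind of test being complained about looks roughly like this: it restates the assignment itself rather than any behaviour, so it can only ever fail if the language is broken:

    def set_discount(order, discount):
        order["discount"] = discount

    def test_set_discount_sets_discount():
        # Restates the assignment; adds maintenance cost but catches nothing real.
        order = {}
        set_discount(order, 5)
        assert order["discount"] == 5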


Yeah, I don't believe tests will replace testers; tests are a safety net against regressions and a safety net that allows refactoring without changing the inputs and outputs, but they are NOT a replacement for testers who look at and work with things from a user's or a business' point of view.

I mean, by all means: if a tester finds an issue, write a test to replicate it before fixing it, so that it can't recur.
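A minimal sketch of that habit, with a made-up bug report: the failing case goes in first, the fix makes it green, and the ticket number in the test name keeps the link back to the original report:

    def parse_quantity(raw):
        """Hypothetical fix for ticket #4821: trim whitespace before parsing quantities."""
        return int(raw.strip())

    def test_regression_4821_quantity_with_trailing_space():
        # Reported by a tester: " 3 " from the order form crashed the importer.
        assert parse_quantity(" 3 ") == 3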

But also, tests are code, and code rots. Change the implementation entirely and the test is voided. And at some point the size of your test codebase far exceeds that of your implementation.


And no unit tests can replace an actual person testing the software. Unit tests can catch bugs earlier but getting rid of a QA department (or even one person) manually testing features is worrisome. The developer will always be blind to their own work.

But it's not just software development; online news articles also tend to get worse as more journalists publish without a proofreader involved.


Absolutely my experience as well (not medical, but aviation/simulation). What's worse is when the same developers and managers pat themselves on the back for writing new unit tests to cover the discovered bugs.

I was out of work during lockdown (I was a consultant), and being away for 8 months almost crushed me when I came back, because I had to remind myself how to do some relatively simple things (e.g. how do I write a mock?).

Being out of the industry for 10 years is basically starting from scratch (almost).


Going back to coding after 1 year gave me super-programming-powers! :)

I had this idea for a killer app that required a lot of deep thinking on the design, the kind that would either make it a huge success or sink it on release. I knew whatever I decided on those aspects would also be hard to implement. Every day I would just contemplate the list of pending items in my amazing project for hours; I entered a spiral of indecision and procrastination that burned me out. Lost all interest in programming. Backed up all my code of years and cleaned my hard disk. Changed work, changed habits and hobbies. No more programming for me.

After a year I still haven't recovered my interest in programming, it must be my age, but I still believe my idea is worth completing. Two weeks ago I restored my backup and was able to solve all the critical decisions, implement, test and fix bugs. That would have taken months with all the additional procrastination.


> Every day I would just contemplate the list of pending items in my amazing project for hours; I entered a spiral of indecision and procrastination that burned me out.

This almost happened to me.

I switched to using Emacs org mode to track my personal projects. It lets me do the contemplating in a written form, and I gradually refine high-level targets into sub-tasks until one of them is small enough to implement (at which point I implement it).

It also means that if I am blocked (can't think of a good solution) on a project, I just switch to the next one and edit that org file instead.

It's worked out really well for me. My projects are still mostly unfinished, but at least now I have time-tracked against what has been done and have clear goals written down so I know where to pick up again. This means that I have actual visual feedback of progress!


Over the last 2 decades I have switched from development to business owner/management/marketing/sales and back 3 times.

Right now, I am doing a rewrite of my online 3D engine for consumer products, after being active for 3 years as an enterprise consultant for mid- and large-sized organizations, mostly initiating first roadmaps, identifying how to onboard new clients, reorganizing my clients' product and solution portfolios, figuring out and maturing pre-sales tracks, etc.

The reason I switch from development to other tracks and back, is that after a few years I feel like I am stuck in an echo chamber, where I tend to overestimate the value of the thing I am doing, and underestimate the importance of what others do.

The biggest disadvantage is that I could have probably retired from a financial POV about a decade ago if I had stayed on the same track, while this might now take another decade, but on the other hand, for me the chase matters more than the catch.

The biggest advantage is that, due to my broad experience, my kind of profile is extremely unique, and I can find a well-paid freelance gig in about a week without too much effort if I need some cash to bridge one of my many mini-sabbaticals.

This might not be for everyone, but if you like to spend time out of your comfort zone and want to understand the bigger picture, I would strongly advise switching every n years; it is super valuable to understand the drivers from both sides, and it makes the job so much more interesting.

Edit: typo


Thanks for that. I’ve found mixing disciplines has been very helpful; especially these days, when I’m the only coder in a startup team.

But specialization is still fairly important. I am specialized in Swift (with a minor in PHP). This has been beyond invaluable, in the work I do. To be fair, I am involved in multiple platforms and tech within the Apple ecosystem (like writing stuff to work on all Apple hardware platforms), but that is still a fairly constrained environment.

I have found that it’s quite possible to write acceptable software quickly, especially if we are willing to loop in a great deal of “mystery meat” code, but it’s another matter, entirely, to create awesome code.

I once had someone boast to me that they only hire people that can switch languages and environments in a matter of a couple of weeks.

Wide and shallow worked 20 years ago. Not so sure it is as effective, these days, as the river is a lot deeper, and runs faster.

The stakes for screwups, nowadays, are also pretty terrifying.


Me be like: how going back to the code that I wrote 2 months ago almost crushed me.

Stupid April Fools.

Two things don't add up for me in this article.

1. He moved to management 10 years ago, but most of the things he describes were already around at that time. Git existed, unit tests were a thing, and we already had a CI workflow at my company. It sounds like he is looking 30-40 years back, not 10. Or he worked at a place that kept innovation outside its doors and didn't look at what others were doing either.

2. Moved to management and got that detached from dev life? All sw managers in my company are up to date with dev practices. How else could they make reasonable management decisions? Everybody up to the CTO is regularly committing code. Not as much as they'd like to, but they are familiar with workflows and have contributed key pieces of our codebase. Meritocracy at its best. Not sure what kind of manager he was, but I doubt I'd have liked to work under him.

