Software Dev Can't Be Automated – It’s a Creative Process with an Unknown Goal (thehosk.medium.com)
2 points by thunderbong | 2021-08-14 12:51:40 | 199 comments




Was this written by AI? Seems random.


This depends on the definition of "automate" and "creative." GPT-3 proves that you can automate creative writing. In principle it's possible to have a system that interacts with users and develops software based on feedback. You cannot take user feedback out of the loop, sure. Maybe one day the system will also be able to simulate the user's reactions to iterative prototypes, to the point of coming up with something directly useful. Of course the system can't predict the future, which is why iterations will continue as time passes and the world changes.

> GPT-3 proves that you can automate creative writing.

No


I mean, it definitely is creative in the sense that it turns a one-sentence prompt into a full story. There is no reason to think that it couldn't write Hollywood-style generic movies in the future.

Narrative skill is often overlooked even in people.

1) The kind died, then the queen died of a broken hard.

2) The king died, then the queen died.

3) The kind died, then the queen died of a broken arm.

One is a story, two is not, three is written by AI.


I mean, at least spend the 60 seconds or less it takes to check spelling. You can tell all the above was written by a human.

your welcome

In Imaginary Magnitude Lem provides the introduction (or perhaps foreword, I don't remember) to a work on Bitics, machine literature. IIRC typographical errors are where it begins. Now, of course Lem is writing fiction (even if the fiction purports to be a scholarly work about an imaginary topic) and there's no reason our world should proceed as Lem imagines, but equally there is no reason to rule it out either.

I laughed. That said, like any other skill, there will be a day when AI actually surpasses humans at it. Moravec and Tegmark convinced me of it. (The prelude to Tegmark's fascinating LIFE 3.0 is online: https://www.marketingfirst.co.nz/storage/2018/06/prelude-lif...)

PS: I laughed more from the xkcd link posted before by u/ignoranceprior https://xkcd.com/1263/


Actually you have a point. Hollywood plots have been about as coherent as GPT output since CGI became a big thing.

But seriously, no, GPT is not going to be writing movies any time soon. Why? Because it does not have anywhere close to the coherence of human-written plots. Why? Because GPT is not "AI." It is rather fancy statistical extrapolation.


And artificial strength isn't muscle attached to ligaments, yet factory machines outperform humans.

We should be defining intelligence by outputs rather than the methodology by which they achieve them, or else the only acceptable definition for intelligence will become "a human brain".

If you choose the latter, you can comfortably say airplanes don't fly, they just glide, if you want to define flight as the flapping of wings. Yet airplane flight is more efficient than if we tried to achieve it the biological way.

Similarly, it seems apparent we'll be able to solve many tasks that we attribute to "intelligence" using methods different than the biological ones. We shouldn't limit ourselves to comparing the methodology when deciding if it has achieved the goal, it is more productive, from what I've seen, to define by output, and to increase specificity there if the result is still unsatisfying.


When people write text they use it to represent ideas and objects and relations between ideas and objects. When GPT writes text it fits a statistical model. It has no concept of meaning. Although the result may be superficially similar sometimes, under the hood the two processes are nothing alike. GPT does not think. Humans do. At least sometimes.

Thinking is an internal process. It is not just determined by its output. When we roll dice we do not say the dice decided to land on three and seven.


>When people write text they use it to represent ideas and objects and relations between ideas and objects.

That's not true, at least a lot of the time. Watch an episode of Paw Patrol, go see the latest Marvel movie or listen to number 6 or 7 on the top 40 charts.

A huge amount of what we consider "creative" endeavours that only a human can produce have nothing to do with meaning or experience and are simply a group of people trying to create something that "sounds right" or looks cool, or in the case of a lot of kid's content, merely inoffensive to the largest segment of society possible.

I'm not saying that GPT is going to win the Pulitzer Prize any time soon, but the line between "human text has meaning, machine text doesn't" is quite blurry and process-dependent.


It’s also worth considering the scope of intended and realised meanings in a work, and whether or not we’d consider that dynamic as one worth pursuing in the future.

Kind of like the old joke about English teachers looking too deeply into classic literature, isn’t it? I might not intend to relate and extend layers of context with the line, “John sat squarely on the desk,” but a reader observing that line as part of a surrounding text could infer meaning and purpose beyond what I originally intended.

With AI-generated text, that dance of writers filling (or deliberately avoiding) their work with meaning, and readers lifting the meaning out of the work, becomes much less human in and of itself. Or does it? Will schools of the future choose AI-written stories and film for students to learn from, over material written by people?


JJ Abrams movies have about the attention span of GPT-3; each 30 seconds is awesome (in the Mountain Dew drinking sense) but, narratively, doesn't make any sense relative to the 30 seconds before and after.

I don't see it replacing their writers, but I see their writers using something like it to mine ideas from on a particularly long night around the writer's table.


But GPT could well be part of a screenwriter's toolkit soon. Say I need a generic car chase scene. Why not have it generate a few variations and then use those as the inspiration? Seems a lot easier than starting from your own blank page.

> No

Yes

> GPT-3 proves that you can automate creative writing.

It requires creative input. It's just remixing what it already knows. Once you've exhausted that you're back to needing human creativity again.


What are we humans doing differently when we're being creative? I don't imagine we produce creativity out of nothing but effort. It seems to me that the source of our creativity is external, either in the form of experiences or in the form of processes that predate our experiences. Maybe both, but I wonder why machines couldn't in principle do the same.

Humans actually understand the words and concepts -- we aren't simply re-arranging words.

That being said, we aren't doing anything machines couldn't eventually do (we are, after all, machines ourselves). But GPT-3 is not what we do.


Is this a rebuttal to something? Although I agree with it, I don't know what aspects are really in question. I'd be curious to know what prompted this.

My naive guess is that this is in response to the recent release of GitHub Copilot. There was a lot of surrounding discussion about whether we were approaching a future where human developers are unnecessary. Feels a bit late to be responding to that so I’m not sure.

could also be a response to the more recent OpenAI Codex model.

The purpose of most people on medium seems to be advertising themselves in a slightly less transparent way than Linkedin.

So the question that prompted this was likely "What can I write on Medium?"


It really depends how you define automating. I tend to think of writing higher level languages and even libraries and functions as automating the repetitive low-level parts of programming

Well if specifications are sufficiently well determined I don't see why it can't be automated in principle.

I tried and failed to find the xkcd that your comment is an obvious setup for. They talk about not needing to write code and instead writing a precise specification of the program's behavior, and the other person says "code, that's called code"

not to be dismissive, but it reminds me of this one: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...

Software is a specification. Maybe you meant requirements, but people don't have requirements - they have problems that need to be solved. It is the programmer's job to devise a solution to the problem and specify that in code.

In principle there's no reason to believe that computers can't be better than humans at understanding a problem that a group of people has and writing code to solve that problem. Sure, computers aren't good enough yet to do that, but I wonder if people 100 years from now might look back at articles like this in the same way we might look at articles from 100 years ago claiming that machines will never beat humans at chess.

At that point, we aren't just talking about computers that write code. We're talking about a general-purpose AI that can effectively automate all human intellectual pursuits, not just software development.

Writing does not equate to understanding. Look at GPT-3: it can write semi-coherently but it does not understand anything. However, an automated programmer does not require understanding in order to be effective.

The point is you don’t have that level of specification. Software development is a creative and iterative process towards such a specification.

I think a lot of the challenges of defining software dev is that it varies so much.

There are people who get to own a problem and build stuff start to finish. And there are people who are just automatons, implementing a specification. And everything in between.

So I think this can be absolutely true and false at the same time.


in programming, a good abstraction is kind of like automation. programming is automation, and creating abstraction is automation of programming. (on the stupidest level, think C++ templates). if all programming is automation, automation of programming is still programming. (there’s no real difference between a program that will automatically write a program out of my specifications, and a DSL). so programming is the fixed point of automation, kinda like the exponential function is the fixed point of differentiation
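(As a toy illustration of that fixed-point idea, here is a hypothetical sketch in Python rather than C++ templates; the names are made up. A generator that turns a tiny "specification" into functions is itself just another program, so automating programming is still programming.)

    # hypothetical sketch: turn a tiny "specification" (field names) into code
    def make_record(fields):
        """Generate a constructor and field accessors from a list of field names."""
        def construct(*values):
            if len(values) != len(fields):
                raise ValueError("expected %d values" % len(fields))
            return dict(zip(fields, values))
        accessors = {f: (lambda rec, f=f: rec[f]) for f in fields}
        return construct, accessors

    # the "spec" is just the list of names; the checks and plumbing are automated
    make_point, get = make_record(["x", "y"])
    p = make_point(3, 4)
    print(get["x"](p), get["y"](p))  # prints: 3 4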

in reality it works in a negative way. programmers hunt for un-automated stuff to automate. it’s like state violence, which hunts down other kinds of violence: if you destroy all non-state violence, you end up with only state violence left, constantly perpetuating itself. if you automate everything that is not automated, the only non-automated activity left will be the activity of automation (programming)


>programmers hunt for un-automated stuff to automate

Yup. This is pretty much how I describe my work to my wife. I find hard things and make them simple (using computers) - that makes "impossible things" hard, and in turn (eventually), I will make those things simple


That last line isn't true. There may be things that are too difficult or expensive to automate, so they won't be automated.

Also if software development tools reach high enough levels, it may be that domain experts, or business analysts will do the development, and not software engineers.


The missing component in parent's description is the imperfectness of abstraction.

A good abstraction, written for and suited to the task at hand, adds little complexity.

Reusing a previous abstraction, written for a slightly different purpose, incurs some complexity.

Having to use an abstraction, inappropriate for the purpose, incurs substantial complexity.

This results in three outcomes respectively: (1) increased productivity, (2) stable productivity, (3) decreased productivity.

Most of programming, in my experience, has been figuring out which one of those roads you're on, and when to eat the cost of switching to another.


quite often the under-automated things are not "higher-level", but the missed details about the lower levels. think of all of the random bugs

Even if you can’t automate away 100% of software developers, it doesn’t mean you can’t automate away the lower, moving, 80% of skills. This can have the same effect of automating away software development, from the perspective of the median of the labor market.

what's fascinating is that developers have been automating away their jobs for decades with tools and libraries... so will this just add to the automation pressure? or be some sea change?

hard to know, economically-speaking...


Difficult to say. My bold prediction is that within the next two decades, bootcamps for web dev decline as the barrier to entry of that field increases as the field matures, despite pay being excellent.

How much less menial labor does one do with modern web frameworks relative to the state of the art in 2002?


I find the menial work is about the same.

Less was possible in 2002, so the menial tasks were different. But I think the split between interesting/novel code and boilerplate/menial work was about the same.


That is a bold prediction since people are constantly reinventing the web. For a few years it really looked like Macromedia Flash was going to be a thing to learn. These days it’s reactive UI components. Tomorrow who knows? I can’t see it ever really maturing as it’s not terribly hard to apply new paradigms to the web every few years that gain traction and move the state-of-the-art.

"field matures" is the operative phrase. Flash jumped in a void that was left by a pretty nonfunctional html experience. But it was proprietary, presumably was an energy hog, insecure, and was essentially a stand-alone experience that didn't play nicely with the browser. Reactive UI components are based on web standards, and it is VERY hard to imagine web standards (HTML, CSS, and JavaScript in particular) being replaced by anything in the near future. They are the "C" of the web for better or worse.

And yes I know WebAssembly is supposed to loosen JavaScript's hold on the web, but as far as I can see that has not happened yet even though every browser now supports it. I am not exactly sure why that is.


For one thing, the js VM, V8, is extremely good these days. I’ve been very curious about these questions, and in single thread contexts, V8 seems to beat JVM, with compute performance roughly equivalent to Rust in debug mode.

Since the bindings to the rest of the “essential” browser model are all geared towards js, the appropriate mental model might be that js is the “assembly” of the browser, even though it is not assembly in any meaningful compute/memory model sense.


tbh, and while I know this argument is hard to push on most companies, "old web technologies" are just as relevant as they were a decade ago.

If I look at any project I've made in the last ten years with "hype technologies" (REST APIs everywhere, front-end frameworks so complex that they are literally a second codebase), they totally could have been developed with an old and mature framework like Django, RoR or ASP.Net.

Once those projects are done, you can clearly see that things like sending a form are incredibly complex and non-standard, resources are wasted everywhere, and only 1 to 3 APIs are used externally, while you maintain the dozen others for your own usage.

The sole reason we are on a more complex stack is because the industry magically acknowledged that "web development" was in fact two different jobs.


The one thing I like about “modern” JS-driven websites now versus then is, in general, interactive web software (e.g. with JS-driven interaction) is much more maintainable now.

> hard to know, economically-speaking...

It’s not hard to know at all, it’s a well studied area of economics.

https://en.wikipedia.org/wiki/Jevons_paradox


Yes, and in fact that kind of automation has been happening continually since the creation of the first stored-program electronic computers.

Yep, and the barrier to entry follows, IMO, a U shaped curve where a field gets easier to enter past the initial development, but then gets more difficult over time as it matures and the easiest tasks continually get whittled away, until increasing abstraction of tools requires sufficiently increased abstraction of thought.

We’ve been automating programmers work as long as there’s been programming. Nobody implements their own loops anymore.

As the cost of programming comes down, the set of problems it can be profitably applied to grows. So far, that’s been increasing the demand for programmers whenever the cost comes down.


>Nobody implements their own loops anymore.

It’s funny how this is true in several ways:

* Nobody needs to manually move the stack pointer anymore to implement a loop at the lowest level

* Iverson ghosts [0] let us operate on arrays without explicit loop constructs, e.g. array.sum() in NumPy

* Modern IDEs autocomplete loop syntax

I’m sure there are many other examples I’m missing. All of these are examples of automation of programmers’ work, even if we often don’t think of them that way (especially #1 and #2: higher-level languages/syntax)

[0] https://dev.to/bakerjd99/numpy-another-iverson-ghost-9mc


You could extend that with examples where specific types of loop actions are abstracted even further, such as filtering, which is aided by (standard) libraries, but also by specific syntax introduced into the language (think lambdas / closures)
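(A small, hedged sketch of both comments above, assuming NumPy is installed and using a throwaway array: the same summation and filtering written with and without an explicit loop.)

    import numpy as np

    xs = np.array([3, 1, 4, 1, 5, 9, 2, 6])

    # hand-written loop
    total = 0
    for x in xs:
        total += x

    # loop-free equivalents ("Iverson ghosts") and lambda-based filtering
    print(total, xs.sum())                                           # 31 31
    print(xs[xs > 3])                                                # [4 5 9 6]
    print(list(filter(lambda x: x > 3, [3, 1, 4, 1, 5, 9, 2, 6])))   # [4, 5, 9, 6]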

Same with open source. The more open source exists, the easier it is to make more useful open source, which increases the surface area of profitable development, which fuels more open source.

1000 programmers is more than 1000x as valuable to the world as 1 programmer. A million may be more than 1000x as valuable as 1000.


The last statement is utterly wrong, and is wrong for most fields of endeavor.

One programmer on their own has the maximum possible productivity per person because there is no communication needed. The whole project sits inside one person's head.

Of course, real world programs usually need more than one.

Each new programmer that is added also adds a little overhead to the existing programmers.

More, once the project starts to get really large, you start to get new forms of time wasting.

In a project with 1000 programmers, there is bound to be a huge amount of duplication, quite a lot of programmers working at cross-purposes to other programmers that they don't even know exist.

And there will also be programmers who do nothing, and get away with it in the mass.

And this is particularly true in open source, where there isn't any strong management at the top trying to prevent duplicate work.


I took the parent poster to mean that 1000 more programmers working on different open source projects has a force multiplication effect.

Yes, that’s exactly what I meant. Thanks!

As if implementing loops were really time-demanding. Time goes into solving problems, thinking and trying. Writing code is only 10% of the total time spent. The cost of programming is not going down. It's going up, in fact.

> Writing code is only 10% of the total time spent

Sure, now that we have more powerful, pipelined, branch-predicted computers and you don't have to optimize every last bit of your program, that's true.


Well, I came to programming somewhat late, in the 1970s, but even back then, the actual typing part was a small fraction of the total time spent.

Perhaps you mean the 1950s?


I don't recall stating a year.

"Actual typing part" != "writing code"


Ah, I remember the first time I discussed this with a friend. It was right before graduation, and we were wondering if we'd still have jobs in a few years.

I think that was 1982.

For me, a modern language like Python already removes 90% of the work I was doing in 1982, and templates and my editor do a lot.

It's really not clear there are huge speedups to be found there.


Some software is like this but other stuff is just a simple CRUD app to sign up for some crap or other and it still overshoots budget by 2x. Which is why eng decision makers tend to always choose an off the shelf solution when it exists. Nobody has any faith that their team can deliver on time and on budget even when it’s a previously solved problem.

Everything we have seen so far seems to indicate that you get maximum productivity when you pair human and AI, rather than having either on its own. What that means in practice is that (at least until someone actually develops AGI) aspects of programming may be automatable but you'll always want to keep a human in the loop somewhere to get maximum productivity. This should be true until society's need for programmers becomes saturated... but since programming is essentially itself automation, it's hard to imagine that need subsiding any time soon.

I'm not so sold on your premise. I think there are trade-offs when using AI for software dev, like with copilot.

Copilot is simply a poor application of it. Searching the internet for a stack exchange answer is a rudimentary example of pairing an AI with a human. Modern chess GMs use stockfish to study positions. Maybe a future version of that will be more similar to stockfish for code you're writing or problems you're debugging.

My main problem with this theory is that so many people want it to be true; very hard for me to see anyone doing dispassionate analysis of whether humans will be valuable in the loop long-term.

What do you mean long-term? In the short and medium term (5y), humans being in the loop is highly likely to be valuable, as you seem to agree with. Long term is very difficult to predict. AGI changes everything instantly.

If you’re saying “it seems like people just want humans to be valuable, and the human race’s thinking seems very anthropocentric”, I definitely agree. It’s hard to separate a person’s inherent need to be important from dispassionate analysis in general.


With AGI looming at the horizon, any long term predictions are baseless. The event horizon is somewhere at +20 years.

Yes, AI (or even better, what I like to call Artificial Stupidity - naive implementations that get you 90% there) is a great cognitive prosthetic, but in my experience should not be relied on.

A better name for AI is "pattern recognition" because that's what it is. There is nothing intelligent about it.

What is intelligent about natural pattern recognition? /s

Sarcasm aside, your definition is too low-level, as if we called brains just braincells. Pattern recognition is a building block of AI like a logic gate is a building block of a CPU.


If you studied formal logic, you understand what logic gates do and what a CPU can do.

We don't understand brains or braincells - how many operations per second can a braincell do? What type of operations? How many bytes does it store? We don't know.

Can we replace a single braincell with a chip and get them to act the same?


Likely not, but that was not my point either; it was just an analogy, not one to be stretched too far.

AI is not a single network usually, it is a pipeline of networks of different kinds, see e.g. https://m.youtube.com/watch?v=P1IcaBn3ej0 (seek to 1:35 for the beginning of an architecture overview). You can’t call it “just pattern matching” without stretching it too far.


ok, it's an algorithm that makes use of pattern matching

Do we do much beyond pattern recognition?

I have been using the phrase "Artificial Stupidity" as well, but with the opposite meaning. Specifically I like to think of human-like artificial stupidity as a challenge for machine intelligence, in which an algorithm is able to replicate the rather sophisticated and incredibly entangled logic, intuitions and calculus of humans at the height of their stupidity. This seems to me a much greater challenge than the standard sort of supervised learning problems, in that a truly stupid AI must be able to imagine latent variables that allow it to explain away real-world observations in a way that is statistically implausible but causally serendipitous to its stupid peers. This seems to me to be a requirement for any kind of useful AGI.

You could easily generate stupid statements on demand. Just post a video on Youtube on that theme, and scrape the comments.

> in my experience [AI] should not be relied on

Have you considered all uses of AI or just a few that you encountered in your experience? AI automates lots of unpleasant manual work.


> AI automates lots of unpleasant manual work.

Github Copilot is a great example of this IMO. When you're writing code it seems to be on a 90/10 or 80/20 split, where 10-20% of your code is the actual important bits that you have to pay attention to and the other 80-90% is all the common patterns, components and conventions around the important code.

AI is fantastic for the 80-90% of that code that I really don't care about. I can focus my energy on the 10-20% of the code that is really important and let the AI code the other 80-90% that is mostly glue and filler.


A lot of the currently mainstream advanced compiler and programming language tech was called AI when first developed.


This article is shallow and generic. Replace "software" with any other business activity and the strength of argument remains the same.

That's a good point. Like with any other business activity, software can be automated -- to a point. You can automate the repetitive parts of business activity and software development.

But you can't automate an entire business and you can't automate the entirety of software development either.


For the same reasons, testing cannot be automated. The only people who routinely claim that it can are not those who study and enjoy doing software testing.

In fact, anyone who claims that any human activity has been automated faces a testing problem: how do you know? And a construct validity problem: how do you know that the new process is equivalent to the old in every way we care about?

Anything can be automated if you are allowed to ignore its true nature and make unchallenged claims.


what if the AI is smarter than humans?

even if narrowly, only within the sw dev domain?


I think the same logic applies. AI is developed by humans to be better than a human. But how can you test it?

By having known activities that humans have done and asking it to repeat them?

Let us say we have 1000 activities humans have done, the AI does them all correctly and of course in seconds instead of days. You then conclude that the AI is better than the humans - but you cannot prove that there is not some slight change in inputs that humans would consider inessential that would cause the AI to fail at a particular batch of tasks.

Is there any point to such speculation? Just as much as that is true, so too could the AI succeed in similar scenarios where humans would fail. The same question could be asked about two humans being contrasted against each other, and just as fruitlessly.


Our own psychology and moral intuition will cause us to attribute a higher responsibility to the AI failures even if we can prove numerically that humans would have done more harm on average; all it takes is a few cases where a human would/could have saved the day

We understand human failure and how to guard against it - we don't let drunk people drive, we require multiple authorisations to move a billion dollars or launch nukes, etc.

We don't understand AI failure, so it can cause colossal damage at the worst possible time


And yet, drunk people drive and kill themselves and other people every day. We may understand human failure, but we’re still really bad at mitigating it.

Here is where our moral instincts make the difference. When mitigation fails we fall back on taking solace in a mechanism that plays a role in mitigation but whose psychological effect takes on a life of its own: punishment.

When a human makes a fatal mistake they bear the responsibility for that mistake and we have protocols in place to deal with that (including having somebody to blame and direct our anger against).

When a machine makes a mistake it becomes much less clear against whom to direct our anger. The engineers who built it? The corporation that profited from it and could have invested more?


You can also not prove that a human would consider that change the same way another human would.

An AI doesn't have to be better than a focused human with lots of experience, great education, at peak health, with a good night of sleep and no distractions. It just has to be better than the average human to be useful, and that's a very, very different scenario.


>You can also not prove that a human would consider that change the same way another human would.

Actually, by the condition of my predicate I can - I postulated a 'slight change in inputs that humans would consider inessential'; if a human considers it essential, it is not a change that is allowed into our reasoning here.

Now of course I said this because in the past, as noted by the article I linked in another response, AIs have failed because of changes in inputs that people discussing the matter thought inessential. So while the logic of my statement does not allow for humans considering the changes essential, the actual examples I was thinking of when I made my statement might very well have had changes that humans would consider essential.

So perhaps we should consider the actual rather than the theoretical.

What I cannot prove is whether or not there is any person anywhere that would consider these actual changes essential - I suspect that such a person does not exist but the inductive gap being what it is I might be wrong.

Bayesian analysis was originally formulated as a way to bridge the inductive gap, although that is not the way it is generally used, still I wonder at this point what the chance is that some of the data variances that have led to differences in AI conclusions are variances that any human would find inessential.


But you can never know, so no sensible person would ever trust such an AI. AI stands for "automated irresponsibility." Didn't you know that?

Maybe AIs are really being created to serve a special class of gullible humans who don't understand basic epistemology.


Eventually - when the field is mature enough - it might be tested like you would test any new hire. At first you keep a close eye on it/them, and eventually you trust them to do the job.

You cannot tell how good your lawyer is until it's too late, or your bank, or the AI. It is also prone to sudden and unexpected failures, like the Tesla that was driving fine on the same road for 400 days and then one day turned into a barrier and killed the driver

was that an actual tesla incident? links?

interested to learn more about it, tia.


Defining the sw dev domain is a problem. Because I would argue that includes the social/human interaction domain as well.

and why can't AI outperform here as well?

"What if...?"

My answer is you could never know, because that is a testing problem, not an AI problem, and there is no test or set of tests that could settle this.

We certainly can have AI that seems to claim it is smarter than humans. But we also have idiot humans who claim they are smarter than humans, too. So what? Anyone or anything can make any claim, however untrue.

Imagine an AI that is smarter than most humans in terms of some technical ability but is stupider than most humans in terms of moral judgment. In other words, an automated sociopath. Please don't give automated sociopaths any power.


Testing can be automated, but it's highly unlikely that your automated testing is going to catch every problem.

We got involved in an emergency "replace this component" job at work recently. When the inevitable "how are we going to test this" questions started cropping up we did describe how we'd made half a million automated tests and that we had a reasonable understanding of the < 1% failure rate.


Testing is different from creation, though. In creating software, you make exclusive choices between options for how it will be made. But in testing, you can apply every testing strategy and they will complement each other. As one exhausts itself, mining out its favored part of bug space, the others become relatively better at finding the remaining bugs.

Oh, the actual project was an emergency hack where we accidentally reverse engineered what the component we were trying to replace actually did, and then went on to outperform the original 100-fold. Very creative enterprise.

The converse is also true: “testing can be purely manual, but it’s highly unlikely that your manual testing is going to catch every problem”

True, except I think we refer to the application of tests when we talk about automated testing, not their writing.

It's important that a whole software suite be tested automatically for non-regression with every change, even if the tests themselves have been written "by hand". Yet it is surprising how rarely this is put in plave.


Yeah we seem to be conflating the automated execution of manually written tests with the automated crafting of such tests.

*place

If you have good tests you can automate programming of code that passes these tests.

There's a wealth of videos on youtube showing AIs finding unexpected ways to cheat at any given task. There's no reason to believe they wouldn't cheat at programming either.

Exactly. And once you’ve got your tests made to be specific enough to make it impossible for them to be evaded? You don’t have a testing suite anymore, you have a compiler!

Which, honestly, is a great argument for choosing languages with highly specific and derivable type systems: I’d rather deal with compile time errors than with writing runtime tests.


That's a sentence that sounds strange - how can an AI cheat? It can't ignore the rules it's programmed with.

Well, because there are additional, unspoken, unconsidered rules that it wasn't programmed with.


If there is a case that isn't properly tested, the AI is likely to use it.

For example, if the test always uses 1,2,3,4 and 5 as inputs, the AI could just generate a list that returns the results for those 5 but not any other.
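(A toy sketch of that in Python; the function names are hypothetical. Both implementations pass a suite that only exercises inputs 1 through 5, but only one of them is the program you actually wanted.)

    def honest_square(n):
        return n * n

    def cheating_square(n):
        # memorizes exactly the tested inputs; everything else is wrong
        return {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}.get(n, 0)

    def test_square(impl):
        for n in (1, 2, 3, 4, 5):
            assert impl(n) == n * n
        return "passed"

    print(test_square(honest_square))    # passed
    print(test_square(cheating_square))  # passed -- yet cheating_square(6) == 0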


There are no sure tests to know if someone is clueless about testing, but one of many indicators is when he suggests that with "good tests" you can automate programming.

I work in water management in the Netherlands.

It used to be that water levels were registered by someone manually noting the level at a gauge, writing it down, and bicycling to the next. He'd visit all of them in a region in a day.

Now they're of course automated sensors. They are cheaper, there are more of them, they are more accurate, immediately available online, and with 5-minute resolution. Clearly better.

But the person doing the rounds each day would also see the whole system each day, notice vandalism, know local conditions on growth of pest water plants by heart, et cetera. Now there is nobody with such detailed first hand knowledge of the system. The informal part of the job is lost, and the loss can't easily be quantified as some amount of money either. It didn't even exist on the org chart, but it still feels like it was essential.

So we replace human work with different automated processes, and yes we ignore the true nature of the work.


> But the person doing the rounds each day would also see the whole system each day, notice vandalism, know local conditions on growth of pest water plants by heart, et cetera.

Not exactly the same, but this is strikingly similar to some of the observations made in this essay: https://desystemize.substack.com/p/desystemize-1

When you reduce field work to a set of data, there's always something that got left out. You just have to hope it wasn't important. Sometimes it was.


Would it make sense to employ an order of magnitude fewer people doing the same routine every two weeks?

>I work in water management in the Netherlands.

Wim Lex, is that you!?


Hey, I'm sitting in Watergraafsmeer, where I happen to know our front door is about 60cm below sea level, so I'd like to give you a shout-out for help keeping my feet dry.

(And also, I agree with what you wrote.)


Genuine question: what type of bad outcomes does it lead to? Now nobody checks for vandalism and local flora, so ok what happens?

Vandalism means more than just spray painted tags, it can be break-ins, destruction of machinery/systems. Local flora overgrowth can clog pipes/grates etc.

Yes, the sensors might pick those up if they affect them. But what's cheaper? Pre-emptive "gardening"/maintenance, or rushing to get a downed pump station back online?


I would guess it is cheaper, on average, to not hire people and send in a small task force to bring a downed pump back up. How often does this happen? Would an additional sensor or camera help cut those incidents down?

Having a team of people manually checking sounds wasteful. The 5% they miss is cheaper than the 95% more they spend.


Cool, I have family working at the hoogheemraadschap. I should ask her about this. Wondering how they measure the misuse by farmers

Your statement about "detailed firsthand knowledge" hit home and reminded me of a topic related to job/task automation--single points of knowledge.

There is a big push at my employer to eliminate single points of knowledge and this push typically takes the form of voluminous amounts of documentation that is 1) never proofed, validated, maintained, or updated, and which 2) no one ever reads. Makes more sense to have a solid continuity plan and create a work environment that doesn't cause employees to pull a Jerry Maguire. Probably also worth recognizing the value of observing the practice in action vs reading someone's opinions on it.

Humans aren't fungible.


The book "Seeing like a state" gives some powerful examples on what we loose when we use incomplete models instead of observing reality and actually using local knowledge. It gels very much with your point and IMO should be mandatory reading for any stem person in order to not confuse the map with the territory and be appropriately humble on what we model

I have a crush on anyone who cites Seeing Like a State in a conversation.

There's no reason someone can't still visit the sensors, right? Maybe not as frequently.

> how do you know that the new process is equivalent to the old in every way we care about?

How could you possibly ever know or prove anything like that?


It's not that you can prove that it is exactly the same, which you can't, it's that we have strong and specific reasons to know that it is NOT materially the same, in most situations.

In the area of testing, for instance, I study testers, and I document many things they do which require social competence to achieve. This is tacit knowledge. Tacit knowledge is not automatable. But when you don't know much about testing you don't see any of that tacit knowledge in action. So you think testers are just button pushers whose minds contribute nothing.

When OUTSIDERS to a process claim that they've automated it, they are always wrong; and not just wrong but absurdly wrong.


This is essentially rephrasing the Turing test. If something can be automated to where you can’t tell the difference between an automated and a human product then it doesn’t matter if there is one. It’s likely that even super advanced AI pretending to be human will be detectable by a human observer well past a singularity point, not that it would matter much.

So much of software engineering is mechanical. Those parts could definitely be automated. To some extent, software engineering is the process of constantly automating things. Each function, in a way, automates a little piece of work.

Let’s automate the functions so we can automate while we’re automating

> You can’t automate the creation of software because the requirements are not known.

It seems like the entire article is based on this sentiment. I read the article but I'm not sure what the point it's trying to make is.

Even in this thread everyone appears to have a different idea of what automation is in the first place, and the author is basically suggesting that we will never have something close to the level of a sentient AI identifying requirements to produce a piece of code with no human involvement...?

I'm not well-versed in AI, and I'm not sure anyone on this planet is qualified to make a claim that we will never get even "close" to that point, but it seems reasonable that most of the things listed can be reduced to simple data and automated with well defined functions that transform them along the way:

    * Specific tasks and activities
    * Captures different data
    * Can see or not see certain data
    * Security — What can and can’t specific users do
    * Different goals
    * Individual responsibilities
    * Processes
    * Reports and visuals of data to decide

Too many paragraphs, with a deduction (unknown goal => can't be automated) that can easily be shown to be invalid: the halting problem being undecidable already implies the existence of things that are automated and whose goal can't be known.

You assume all developers are creative every day. Most of them solve stuff already solved elsewhere thousands of times.

Creative processes with an unknown goal can be automated, such as writing novels or generating paintings, and if you did this with software dev you might end up with something interesting but not very useful - and as a general rule we want programs to be useful.

Maybe automatic esoteric language generation would be interesting - let a million brainfucks bloom.


Ever read an article and think, "This author really should have stated their thesis and context better"? Yeah, that's this article for me. I read it a few times but it always came off as confused to me. Before getting to my long and perhaps rude breakdown (I tried to eliminate the rude bits), let's ask: What does it mean to automate software development? I don't think the author specifies this idea well enough so I'll offer my own ideas.

There is an idea of 100% automation, with the computer spontaneously generating a program that both meets the user's requirements and is delivered to them when they need it. This is unlikely without some degree of human intervention to at least state, "I need to...". If this is what the author means, then sure, they're correct. This is such a fundamentally absurd idea it would be impossible for me to even conceive of an argument well-grounded in physics and computational theory that could support it.

However, here's my take on automation in the domain of software development that doesn't sound totally absurd and is probably closer to what most people mean when they describe it: A system in which a user can provide a minimal specification and get a complex execution as a result. I offer SQL query planners as an example of this. The user needs to enter "SELECT * FROM foo;" and the planner does the rest. The user doesn't need to do anything else (optimizations notwithstanding). From the perspective of an intelligent data analyst but "non-programmer" (in their perspective at least), the program is automatically developed to meet their needs from this point forward. It is a reasonable response to say that the SQL query is a program, but it is such a minimal specification of a program that the specifier can be totally unaware of whether the database is on their machine, in their local network, on the other side of the planet or on Mars, if it's a single database or sharded for performance, or any number of other considerations. The development process has been automated beyond this level of abstraction: The user merely needs to know the relational model of their data and the syntax of SQL and they can accomplish grand things.
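(A minimal sketch of that idea, assuming an in-memory SQLite database and a made-up table named foo: the user states what they want and the engine decides how to get it.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE foo (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO foo VALUES (?, ?)", [(1, "a"), (2, "b")])

    # the analyst writes only the declarative query; how it is executed
    # (scan order, indexes, storage layout) is the planner's problem
    for row in conn.execute("SELECT * FROM foo"):
        print(row)  # (1, 'a') then (2, 'b')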

Now, the rest of this gets a bit nitpicky and at some point I started to feel like a real asshole so I didn't finish the paragraph by paragraph rebuttal (aside: paragraphs should be longer than tweets), also it got too long for HN and on submission it told me so. Keep the above idea of automation (mine) in mind, because if you keep the straw man version in mind then the author is, of course, correct. But it's also a kind of useless way of being correct when the notion you're responding to is so fundamentally absurd.

> As much as people would like to automate software development, they can’t because it’s a creative and collaborate process.

Which people and to what extent do they mean to automate it? You've posed that they exist, explain who they are or at least what they actually mean or provide their words so that we can understand the context of your article.

> It’s uncommon to think of developers as creative individuals but more technical people writing code. The common belief is developers take rules and automate them.

Is it really? At least on this forum it's not. I mean, I do get this from managers at times. I had one who said, only loosely paraphrased, "Software is easy, you just have to type. Why is this taking you so long?" Fortunately they seem to be a dwindling group in my circle.

> Developers can only create software to do what you tell it.

This would also be true of a program synthesis tool.

> You can’t automate creativity and where the final software is unknown.

I mean, I guess it depends on what we mean by "creativity". What is creativity? Per Wikipedia we get this sense of the word: Creativity is a phenomenon whereby something somehow new and somehow valuable is formed.

Huh, a phenomenon whereby something new and valuable is formed. I'm pretty sure program synthesis is a thing. Now, you could argue that providing the requirements and letting the computer generate a program from them is in fact an act of programming. I think that's reasonable, but it would be a significant amount of automation. To be able to go from a mere statement about what a program should do to a program that does it, without any more developer interaction, would seem to be automating software development to me.

In the section titled Why can’t you automate software development the author concludes with:

> You can’t automate the creation of software because the requirements are not known.

The rest of the section has no relation to the heading. I reread it several times trying to figure out how those sentences and bulleted items related to it, but could not. Maybe someone else can tell me how. The only relevant piece is the sentence I quoted. To which I'll say: Umm, no.

The requirements may not be known in full, and they may not be known at the start. However, this statement is presented as a universal truth, when it is not. I know the requirements for my software at work, but perhaps that's just because my customers know the requirements for the software. They know the requirements because they know their domain. They know the systems they have to interact with. They know the tasks that have to be done and consequently know the tasks and features they need us to implement. I'll admit, this may be a rare situation, but it's not unique.

All that said, if the requirements are not known for your system, then certainly you can't automatically create it because you don't know what to create. You can't ask a computer to synthesize a program that you can't even describe. That is a reasonable statement.

> You can’t automate the creation of software because it needs to work in a way that makes it easy for the people to use the software.

The former does not follow from the latter.

> Software is a tool to aid people working, to help them to their jobs. The goal isn’t to make software; the goal is to make software that makes work easier.

I can agree with this in a general sense. Making software is rarely the objective itself, it's true. Making software that accomplishes some meaningful work is the more typical goal (especially in businesses where there is a dollar amount associated with the work). Nothing to do with automation, though.

> You don’t know how software needs to work until users use it because many of the rules in a company will conflict.

Two things here:

First, finally you express your domain. Business process software, which is perhaps one of the software domains closest to being automated. BPMN has been around for a while, and not really caught on as much as Appian and others may like, but if you go with my notion of a minimal specification, this is a domain ripe for automation.

Second, if you don't know how software needs to work until it's being used then you don't know what you're making. We're back to my earlier statement, you can't synthesize what you can't describe. Short of a stochastic method which synthesizes many programs, at least, for users to test out. Genetic programming BPMN could be entertaining to observe as chaos spreads through the organization and BPMN diagrams are mated and mutated.

> Actions and software can have unintended consequences that are only visible when they happen and then you decide to change the software.

> Businesses evolve, changing and adapting to improve the way they work. Software goes on a similar journey when you create it. What seems a great idea on paper or as requirements might not work as software.

Program synthesis does not bar evolution.

> Finding out the final requirements and how software should work is the journey, it’s the creative process that involves many iterations. You can’t automate creativity because it can’t be broken down into steps and rules.

Again, depends on your notion of automation. If it's the spontaneous creation of just the right program, sure. But if it's about having a minimal specification that can then be used to synthesize a more complex program, then you can iterate on that specification itself. Without needing to get into the lower level systems (again, optimization notwithstanding).

> You can’t accurately plan how long it will take to create software because you cannot say exactly how the software needs to work or how many iterations it will take to get it right.

Not related to automation or lack thereof. See Swift and the exploding compiler times.

> Software developers take the requirements and create what they think the users need. The users then use it and give feedback. There is subjective and there is interpretation and assumptions that lead to the wrong software created.

No reason that a computer can't take the requirements and create what it thinks the user needs (per the mechanisms provided to it to produce a working solution). No reason that a user can't give feedback to a computer.

> The wrong software is progress because it gives us some definitive to build on and update. This is a step towards creating the correct software.

Good news! Automation is almost certain to make the wrong software at times, also giving us something to build on and update.


Then how do you explain compilers? And high-level languages? Much of it is already automated.

"I think there is a world market for five...maybe six computers."

- Thomas Watson, president of IBM, 1943

"Too big to fail"

- Titanic, and some banks, 1912 - 2008

"Don't worry, the software labor market is safe" - 2021


What a low effort comment. I could replace the last quote with any statement and it still reads the same.

Less low effort than yours? Yeah it was pretty easy to do, but low effort not really. You know it's like Picasso, the 30 second sketch contains 25 years of experience.

Reads the same? With a different meaning tho. Meaning's important. That's what we're doing here: meaning.


The significance of the first quote, is that in 1943, "computers" were people who did math by hand. His point was that 5 or 6 electronic computers could replace the whole world's market for "computers".

Is that right? Seems the common take is that this guy underestimated computers. Like the guy who cashed out his Apple investment for $800 (Ronald Wayne).

I started looking up the provenance of the quote, then stopped, because there's an entire wikipedia section on how it never happened: https://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_attrib...

The idea is not to automate software development, the idea is to automate the writing of it.

Meaning it is not the logical part you want out of the equation. What you want out of it is the unintelligibility of code, the time it takes to write all the necessary boilerplate, the bad naming conventions and the stupid syntax errors...


You don't need to write boilerplate code and watch for syntax errors, if you use a programming language that does not encourage boilerplate or have a lot of syntax.

When I start a new C# project, first 40 minutes I am mindlessly writing a scaffolding of classes, data structures and helper functions that I will need from the beginning.

When I start a new Clojure project, I don't need all that, so I just stare at my blinking cursor thinking about what the program will actually do.


Can you expand on this? Let's say I want to write a backend with Clojure, wouldn't I still need to write boilerplate for auth, the api endpoints and the CRUD operations?

If I understand correctly, GP wasn't necessarily stating that Clojure guarantees an absence of boilerplate for every use case, but instead that it doesn't require boilerplate to be written every time you use it.

I'd encourage looking through the documentation for Luminus[1], one framework for writing web apps in Clojure, to help you figure out if it suits your personal tolerance for boilerplate. IMO the example for gating API endpoints by authentication[2] is remarkably simple, but you might prefer something more opinionated a la Rails to completely abstract the auth boilerplate away.

For what it's worth, being a lisp, Clojure could certainly support a framework for writing web apps that's highly opinionated, and takes in config for tweaking some defaults (e.g.: auth), thanks to macros.

[1]: https://luminusweb.com/ [2]: https://luminusweb.com/docs/services.html#authentication


To add another opinion: you will indeed have repetitive/boilerplate components (db setup, http requests, ...) but in the end your project will be a more direct reflection of your domain problem because it is easier to abstract out lower level considerations.

However, this comes at the cost of a potentially very idiosyncratic program and the need for much more attention to readability and documentation.


>When I start a new C# project, first 40 minutes I am mindlessly writing a scaffolding of classes, data structures and helper functions that I will need from the beginning.

I share this sentiment but the recently announced changes to file level namespaces, global imports, top level definitions etc. have the potential to eliminate a lot of pointless boilerplate.

Now the structural patterns - those are self-inflicted - you pay that cost to keep the thing consistent and maintainable down the line - but people often can't evaluate when it's not worth it (and often don't even understand what they are trying to achieve, but instead treat it like boilerplate that has to be present)


In my experience, the first 40 minutes of a project's lifecycle is largely noise. It is much more important is to create a body of code that can be effectively grown by large teams of developers. At the risk of sounding obvious, consider creating a library in situations where you need to use C# if you find yourself doing the same thing over and over again.

Of course, if you can be specific about your misgivings that may help the C# (and Clojure) ecosystems or at least help us understand what you mean.


I think this equally applies to languages that are more geared to REPL and/or scripting development (e.g. script languages and/or functional languages). Not just Clojure.

OO requires a bit of boilerplate and structure up front IMO, at least the Java/C++ variety. The structure typically has to be re-invented for each project as well, i.e. what factories, interfaces, and parts will I need?


I agree with this perspective.

I think people seeking to automate software development should find a way of compiling something like TLA+[0] to executable code.

Right now coders are both masons (bricklayers) & architects. TLA+ can be seen as a "blueprint" for code, and the actual code can be thought of as the brick and mortar.

If devs only had to focus on the blueprint of the code, i.e. specifications written in TLA+, that would truly take the industry forward.

[0]: https://www.youtube.com/watch?v=-4Yp3j_jk8Q


> Right now coders are both masons(brick layers) & architects

Not sure how common this is today, but I do remember the time when software development was done by people writing hundreds of pages of specifications at various levels of detail (including class diagrams, method names, etc.) and then delegating the menial work to very junior folks. In many cases I witnessed true contempt for the mere job of transcribing programs.

I'm not talking about some dark age of punchcards; but clearly, learning the boring details of the Java SDK and figuring out how to set up your Eclipse project (let alone use CVS/SVN) was deemed to be below the pay grade of many.

The quality of the software produced in those environments was abysmal. I'm not talking about specs in general, but about the mistake of overspecifying to the extent that you think you need a quasi-mechanical transcriber (an undergrad student or a very junior dev) while at the same time not really specifying everything and thus le


It's a shame your message got cut.

I remember having the same experience during my third job, in a French consultancy in Spain. For the sake of vindication (so many years after the fact xDD) I will just name it: Sopra [0].

So, for each business task (like adding some feature to the application we were developing), "Analysts" would write the "specification" as UML-like models in Eclipse (some kind of modelling plugin; I think it was a framework developed by the company on top of the EMF plugin suite), plus a very short description of what it was supposed to accomplish, business-wise.

Our[1] task was then to generate a scaffolding from these models (via Eclipse) and fill in the blanks of the methods generated this way.

The problem lay in how the incentives were set up in that company (and many others like it, the whole IT "consultancy"[2] business, in fact): programming was poorly paid (but still much better than the median salary in Madrid, which is why I was there). So everyone had every incentive to move into either management or the higher "design" roles (Technical Analyst, Business Analyst, Architect) as soon as possible. This meant that the people who remained in the company[3] were either people with one year, two years tops, of programming experience who now got to design whole systems and applications, or the sociopaths who floated up to management by stabbing everyone on the way.

You can imagine the quality of the systems produced this way...

Models were hilariously underspecified, and clearly the people writing them had no experience of how programs behave in real life. But because they were so insecure in their positions (remember the backstabbing and the lack of experience), any back and forth between the programmers and the analysts was handled with hostility and contempt from the analyst side.

Obviously we also looked at the analysts with contempt. Still, they were paid better than us, so of course they were right; they had management support, and we were told by our team leads to make it work anyway. So it was common to implement something completely different underneath the scaffolding, to avoid having discussions with the analysts, who only checked that the scaffolding itself did not deviate from their designs.

So, my take on this kind of 5th-generation (as they were called back then) code-generation framework is that it encourages the kind of behaviour I saw. And when the same people who do the design also do the programming (the mason-architect coders of our time [4]), this way of working is just redundant. Diagrams and models have their place in documenting a project, but certainly not in trying (and failing) to specify everything that can happen in a system.

Wow...I've almost written a blog post xD

-----

[0]: What a shitty place to work, at every level. No joy was possible in that environment. Sociopaths thrived and naturally floated to the management positions. As we say in Spain: shit always floats to the top.

I have so many stories to tell of that place...and I was only 6 months there!

[1]: Lowly paid and lowly valued programmer-monkeys, literally the term they used when they thought we couldn't hear it (and for a more colourful Spanish variation, "picatas", a derogatory term meaning typist)... how the tables have turned since then...

[2]: As we all know, it's just a sham. We were never consultancies, just the equivalent of cheap sweatshops for our French overlords, who came to Spain because programmers' salaries and labor rights were lower than in France. Still, they offered better salaries than the native "consultancies", so there's that. A common theme in Spain, actually: at least foreign exploiters treat labor better than our own national exploiters.

The situation has greatly improved since then, and you can find genuinely good (foreign) companies to work at in Madrid or Barcelona. The consultancies still exist, but they're no longer 100% of the job market.

[3]: Because, naturally, churn was stupidly high at the "picatas" level; we could find better offers and situations just by changing jobs every 6 months, which I did. I remember how they tried to FUD us into "loyalty" by saying that they did not recruit people with that kind of job-hopping CV. It was false, of course; they were desperate to find people, so they took what they could. My father said the same to me, and I don't doubt that's how it was in his time. But that era of employee and employer loyalty had long passed, if it ever really existed from the employer side in the first place (I doubt it; IMO employees simply had even less choice and were educated by society into loyalty to their taskmasters).

[4]: I'm very glad of how the industry has evolved, job-wise. These times are much more interesting from a technical point of view than what existed in the past. I get enjoyment out of my job now, and that is priceless. The high salaries also help; not as high as outside Spain, of course, but high enough for a good life.


Except reading software is harder than writing software, so it actually makes my job harder (and less interesting) if my responsibility changes to debugging machine-regurgitated code.

Back to writing asm for you! Come on, higher-level languages are frequently a net win for everyone.

So your idea is that, rather than type in the programs directly, we instead switch to some kind of language for defining the program in? A language that abstracts away all the low-level details?

Sounds like a winner to me. Somebody should implement this!


I think you are on to something.. Such a language could perhaps even have a formal specification? Like a syntax and schematics?

Yeah, like a Unified Modelling Language.

There have been quite a few attempts to use a 2D layout to describe logic. While it does work to some extent for problems naturally suited to unidirectional data flow, it is too limited for general computation. The patterns immediately behind the (to some) distasteful text of code are N-dimensional and cannot easily be mapped to two dimensions.

There certainly have been attempts to do this to varying degrees over time. For example, in Java there is JHipster, which introduced a DSL to describe a number of application concepts and then generated code based on them: https://www.jhipster.tech/

The problem is that these approaches tend to fail whenever you need to edit the generated code without breaking the generation logic, implement more complex logic and cases that aren't covered, or simply integrate with external libraries without digging too deep into the generation logic to assimilate the external code into the codebase.

Now, there are other, more limited cases in which this can work, especially with frameworks like Ruby on Rails or even Laravel, which provide all sorts of code-generation logic for specific situations; since those generators typically don't have to integrate with that many libraries, they just make development faster and simpler.

Personally, I think there's a lot of utility to be had in this approach, especially for situations like schema-first database development, or generating migrations based on the model classes within the application. MySQL Workbench actually lets you generate DDL statements from ER models, as well as synchronize partial changes in a live database, which is useful in practice, yet also really niche and probably hard to implement behind the scenes.

Perhaps that's why, sadly, model-driven architecture hasn't really become all that popular outside of academia.
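For what it's worth, a minimal sketch of the schema-first idea mentioned above might look like the following; the model format, table names, and type strings are invented for illustration, and real tools (JHipster's JDL, MySQL Workbench's ER models, ORM migration generators) each use their own richer formats:

```python
# Hedged sketch of schema-first generation: a declarative model is the source
# of truth, and DDL is generated from it.

MODEL = {
    "user": {
        "id":    "INT PRIMARY KEY AUTO_INCREMENT",
        "email": "VARCHAR(255) NOT NULL UNIQUE",
        "name":  "VARCHAR(100)",
    },
    "post": {
        "id":      "INT PRIMARY KEY AUTO_INCREMENT",
        "user_id": "INT NOT NULL",
        "body":    "TEXT",
    },
}

def generate_ddl(model: dict) -> str:
    statements = []
    for table, columns in model.items():
        cols = ",\n  ".join(f"{name} {sql_type}" for name, sql_type in columns.items())
        statements.append(f"CREATE TABLE {table} (\n  {cols}\n);")
    return "\n\n".join(statements)

if __name__ == "__main__":
    print(generate_ddl(MODEL))
```

The failure mode described above shows up as soon as someone hand-edits the generated DDL: the model and the database drift apart, and regenerating is no longer safe.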


> The idea is not to automate software development, the idea is to automate the writing of it.

But they seem the same.

> Meaning it is not the logical part you want out of the equation.

The "logical part" is what we call "writing software". The rest of it is window dressing.

> the bad naming conventions

Why do you expect an AI to come up with clearer names than a human can?


I agree, but this also sort of kicks the can down to code review... until that could be automated too.

As opposed to building the code from the ground up, maybe we'd have to sculpt it from an amorphous blob into something more refined.

I guess coding manually would be something people do as a hobby, like solving sudoku.


That is sort of how I think it will be. Think about it this way: we (humans) automate all complex tasks since we know we are not that good at managing high complexity all at once, and code in itself is incredibly complex. No human can truly know all the variable names and the locations of already existing data structures. You see this every day, when two devs end up implementing classes for similar purposes just because they were not aware someone else had done it, nor what it was named. This even passes code review, since 1) sometimes the reviewer does not know the other class exists either, and 2) it's not visible from the scope of the code review...

Once we acknowledge that we also need to delegate software creation to software, we start abandoning the idea of a code review in favour of something more like a "correctness review", basically making testing the code review itself.

And as you say, manual coding would be done for pleasure, nostalgia, or teaching, and, most importantly, for identifying possible optimizations and errors.
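As a rough sketch of what such a "correctness review" might look like in practice (the `generated_sort` function is a stand-in for whatever the code-generating system produced, and the properties are chosen just for the example):

```python
# Sketch of a "correctness review": instead of reading machine-generated code,
# we state properties it must satisfy and check them on many random inputs.

import random

def generated_sort(xs):          # placeholder for machine-written code
    return sorted(xs)

def review(fn, trials=1000):
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        out = fn(xs)
        # Properties a reviewer cares about, stated once, checked mechanically:
        assert len(out) == len(xs)                        # nothing dropped or added
        assert all(a <= b for a, b in zip(out, out[1:]))  # output is ordered
        assert sorted(out) == sorted(xs)                  # same elements
    print(f"{trials} random cases passed")

review(generated_sort)
```

A property-based testing library like Hypothesis does this far more thoroughly, but the shape is the same: the reviewer states the properties once and the machine does the reading.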


How long do you _actually_ spend on boilerplate/syntax? Your perception is likely exaggerated as it is tedious work.

Yeah, I have some related thoughts to this.

Much of the automation happens when you do know explicitly what the goal is. However, for programmers, their job is not turning well-defined steps into valid code but understanding what the goal is from very vague requirements, and that cannot easily be done by a statistics-based ML algorithm.


OpenAI/AI Dungeon, especially the new (paid) model, seems pretty good at turning vague two-sentence prompts into fully fledged small stories with the requisite themes.

> However, for programmers, their job is not turning well-defined steps into valid code but understanding what the goal is from very vague requirements

The vast majority of software development is barely one step away from CRUD. Do not deceive yourself into thinking that developers solve incredibly difficult problems unknown to mankind.

Your "vague specification for a form" is right now being solved by about 100k other developers. It's not unique.


Not everyone is building another blog or something. Business code can be fairly complex, even if not technically complex. Nowadays people start their shops on Shopify but still hire people to code and customize their needs. It's just not easy to completely replace a human with a more sophisticated coding product.

Your 100k other developers will continue to justify their positions while improving the model, though.


I think the question should be: to what extent can things be automated?

Is it possible to create an automated process that will, with enough non-coding input, create workable software? In that regard the answer is yes: just as developers record requirements today, one day a machine will have enough "knowledge/intelligence" to do the same.


Perhaps this is correct, but it’s correct in the way painting will never be automated. Instead, we invented the camera.

I do expect that our abstractions will become good enough that end users can express what they want in a way that doesn’t require deep knowledge…like building in a video game.


90% of developers jobs today can be automated.

The author is largely overstating the job of software development and confusing "software engineering" with "software development"

SE is indeed a creative process that requires a lot of thinking, conflicting points of view, and often a crazy amount of research in order to build anything (a database system, a CLI, etc.).

But for the rest: I'm an enterprise architect in banking, and 90% of the projects I'm in charge of consist of "gluing" services together or adding a "screen" in the front end that calls an API...

Startups like Bubble[0] have proven you can create web/mobile apps and APIs without code.

The only reasons we are still relevant today are these three:

- There is no proper FOSS standard ecosystem for creating codeless APIs and applications

- There is no standard in the industry for the business and application layers

- Enterprises have "legacy" that is much cheaper to maintain with humans (devs + architects) than to automate via the R&D needed to create those ecosystems (labor-capital substitution)

There are dozens of papers on the 4th industrial software revolution, which at the moment won't happen because the tooling is simply not there.

A 4G (4th-generation) software ecosystem would definitely automate 90% of today's blue-collar coding.

[0] https://bubble.io/


+1, this is my experience too.

I don't know. I'm a dev in banking and I see what kind of job you have. Sadly, an enormous amount of our time is spent not transcribing your vision into beautiful lines of code, but fixing all the wrong assumptions you had, talking to the actual users and iterating over the problems of your model until we turn it into something workable, and keeping up with the day-to-day changes in the universe that take your original idea further and further from the actual need.

People like you work on projects, but we don't just glue things together for your royal pleasure; sadly, we actually have to fit a day-to-day need.


Nice rebuttal, even though you might be projecting a bit onto the parent commenter :)

I used to be a consultant in this space, and I can confirm the existence of these completely different bubbles. Ironically one of the root causes is how badly banks want developers to be interchangeable commodities.


Enterprises do try all the time to replace devs with "no-code", or "config" solutions, automation, interop, or the flavor of the day.

Many times the initiative comes from architects such as yourself.

I've never seen it go well, almost like they don't really know what it takes to "glue two apps together", but I'm sure you'll do better! Good luck!


> the only reasons we are still relevant today is for three reasons

Then how do we automate the creation of those standards? Don't get me wrong, but we aren't punching holes in cards anymore. In the future, people probably won't be doing a lot of things that are now typical menial tasks, either too hard or too cheap to automate. As long as an AI with reasoning and abstract-thought capabilities comparable to a human's isn't here, there will always be a layer of tasks that is impossible to automate for one reason or another. Back then it was card punching; now it is gluing APIs together.

Automation is mostly useful when you have a very narrowly defined process or standard with little variation over time. Unfortunately, this also requires the one requesting the automation to know exactly what they want to get at the end of the process. How would one automate creating a tool fitted to the particular way some business operates? Video game development? Fixing badly documented code? Granted, many parts of those processes can be, and probably already are, automated away. But there are almost always elements that require a higher-level understanding of the whole context these projects exist in.

I would compare it to a hairdresser switching from using only scissors to also using a hair clipper: their work was partially automated by a machine, but the process is still overseen by a human being.


Well, this is honestly not really the software developer's job. Software developers figure out how something should be built. That is indeed a creative process, though different from what is described.

Developers don't figure out requirements. Sometimes it's possible to use libraries, frameworks, open source, no-code solutions, or even modern programming languages to make the work easier. However, in general someone still needs fairly deep knowledge of the relevant tool; you might as well call such a person a developer. Usually very little can be done to make it easier, because the software doesn't exist in a vacuum and needs to be built on top of, interface with, and integrate existing software, and run on a specific server setup.


The issue with automating software is that you first need to specify exactly what you want, and what should happen in all the edge cases. The best way to do that is with a domain-specific language, which is what programming languages are.

I don't think the issue is that there isn't a tool that understands natural language, it's that even in natural language you need to specify everything before it works.

Who knows, though, maybe we'll see a piece of AI that can make educated guesses from rough descriptions and make it work.
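As a small illustration of that point, even a "simple" signup form forces the edge cases to be spelled out somewhere, and the moment they are written down you have a small domain-specific language; the field names and rules below are made up for the example:

```python
import re

# Invented example: a tiny declarative "spec" for a signup form.
SIGNUP_FORM = {
    "email":    {"required": True,  "max_len": 254, "pattern": r"^[^@\s]+@[^@\s]+$"},
    "age":      {"required": False, "min": 13, "max": 120},   # missing? negative? 200?
    "nickname": {"required": False, "max_len": 30, "default": ""},
}

def validate(spec, data):
    errors = {}
    for field, rules in spec.items():
        value = data.get(field, rules.get("default"))
        if value is None:
            if rules.get("required"):
                errors[field] = "missing"
            continue
        if "max_len" in rules and len(str(value)) > rules["max_len"]:
            errors[field] = "too long"
        if "pattern" in rules and not re.match(rules["pattern"], str(value)):
            errors[field] = "bad format"
        if "min" in rules and int(value) < rules["min"]:
            errors[field] = "too small"
        if "max" in rules and int(value) > rules["max"]:
            errors[field] = "too large"
    return errors

print(validate(SIGNUP_FORM, {"email": "not-an-email", "age": 7}))
# -> {'email': 'bad format', 'age': 'too small'}
```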


> maybe we'll see a piece of AI that can make educated guesses from rough descriptions and make it work

Right. If we can build a strong AI, then we can certainly automate all stages of software development, by definition.

The article's argument essentially boils down to: "I have certain assumptions about the limitations of AI, and I've decided software development is one of the things that will always be beyond AI." It isn't a technical argument, and it does a poor job of giving a solid justification. These sorts of arguments don't have a great history. From the article:

> Creating software is a human collaborative activity. Humans are good at working with other humans and with finding solutions. Computers can only automate using rules, but software projects have to work out the rules as they go along.

We already have AI that can do a surprisingly good job of writing English prose. A few years ago I'm sure people were making exactly the same kinds of arguments about how writing can't ever be automated because computers only follow rules.

Edit: In reality I imagine we'll see incremental progress, as we've seen with GitHub Copilot. I'm not saying we can expect to see software development automated overnight, but the idea that it's impossible even in principle is not a position I've ever seen a good defence of.


If you define the scope of what you're trying to accomplish narrowly enough then of course you can automate. This is what the nocode systems do and they give you a nice and easy to use UI.

Automating all of software engineering is something that will be done at the same time as every other job title - when general AI comes along. Hard to know when that might happen - it could be next week or it could be in 20 years.


When I think about a lot of the things I've developed for companies, and the basic chaos that is trying to pull a business-rules definition out of a company... I'm not really convinced. A huge amount of what we do is just input/output mapping, and vague requirements essentially mean that occasionally handling something totally wrong is completely fine.

It feels like an AI system could easily do a lot of this. What do I need? Some sort of web form which takes inputs. What do I need out? JSON which looks like this based on the inputs, or database rows which look like this.

This feels accomplishable in the very near future.
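A rough sketch of that kind of input/output mapping, with field names and coercions invented for the example; a surprising share of CRUD endpoints reduce to a declarative table like this:

```python
# Declarative mapping from form fields to an output record -- the kind of
# glue the comment describes.  Everything here is made up for illustration.

FIELD_MAP = {
    # form field  -> (output column, coercion)
    "Full Name":   ("full_name", str.strip),
    "Age":         ("age", int),
    "Newsletter?": ("wants_newsletter", lambda v: v.lower() in ("yes", "y", "true")),
}

def form_to_row(form: dict) -> dict:
    row = {}
    for field, (column, coerce) in FIELD_MAP.items():
        if field in form:
            row[column] = coerce(form[field])
    return row

print(form_to_row({"Full Name": "  Ada Lovelace ", "Age": "36", "Newsletter?": "Yes"}))
# -> {'full_name': 'Ada Lovelace', 'age': 36, 'wants_newsletter': True}
```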


It's fine if we automate low-paying jobs (fuck those people anyway), but not the software developers, lawyers and doctors!

Copilot is a game changer. It can and will be automated further, and it will get smarter as we use it. If it were able to generate unit tests for the functions it was writing, for example, it could converge on solutions on its own. I don't doubt someone is adding this feature as we speak.
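Purely as a hypothetical illustration of what such a feature might emit (no claim about what Copilot actually produces), here is a small function together with the kind of tests an assistant could generate alongside it:

```python
# Both the function and the tests are invented for the example.

def slugify(title: str) -> str:
    """Lowercase, keep alphanumerics, join words with hyphens."""
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

# Auto-generated-style tests covering the obvious branches:
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_stripped():
    assert slugify("C++ is fun!") == "c-is-fun"

def test_empty_string():
    assert slugify("") == ""

if __name__ == "__main__":
    test_basic(); test_punctuation_stripped(); test_empty_string()
    print("all tests pass")
```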

Tools like Copilot will always be one step behind: as soon as we are able to avoid writing boilerplate code (auth, API endpoints, etc.), the requirements of the software we need to write go up by an order of magnitude, and there is no tool that can help us; we need to wait for the tooling to learn from our creative efforts. By the time the tools are able to automate the more complex software, requirements have gone up another level, and again human creativity comes to the rescue.

Unless, of course, we are talking about a general purpose AI that can "think" like a human. But I think that's sci-fi for now.


What AI might be better at than us is recognizing common parts of code, creating boilerplate for them, and immediately offering it to other users. Sort of like automated package management with self-generating libraries for common patterns.
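A toy sketch of that idea: spotting structurally identical functions by hashing a normalized AST, as a first step toward "extract this into a shared library". Real clone detection is far more involved, and everything below (names, the normalization, the example functions) is invented for illustration:

```python
import ast, copy, hashlib
from collections import defaultdict

class Normalize(ast.NodeTransformer):
    """Erase identifiers so only the code's shape is compared."""
    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.name = "_"
        return node
    def visit_Name(self, node):
        return ast.Name(id="_", ctx=node.ctx)
    def visit_arg(self, node):
        node.arg = "_"
        return node

def fingerprint(func_node):
    # Hash the dump of a normalized copy so the original tree stays intact.
    normalized = Normalize().visit(copy.deepcopy(func_node))
    return hashlib.sha1(ast.dump(normalized).encode()).hexdigest()[:12]

def find_clones(source):
    groups = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            groups[fingerprint(node)].append(node.name)
    return [names for names in groups.values() if len(names) > 1]

example = """
def sum_prices(items): return sum(x.price for x in items)
def sum_costs(entries): return sum(e.price for e in entries)
"""
print(find_clones(example))   # -> [['sum_prices', 'sum_costs']]
```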

As long as the business people still aren’t able to fully articulate what it is they actually want, and certainly never to the detail an automated system would need, it’s hard to see how it could be automated.

If the automated system is quick, it will be very easy to iterate. So eventually the result might be very acceptable.

I hate it when people describe software engineering as a "creative process". It gives cowboys the confidence that no one has ever thought about the problem they're dealing with before, and that it's their job to come up with some revolutionary thing from the ground up. Every time someone claims to do that, I just don't bother looking into their work.

This makes it sound like devs are the clothiers of yore.

Once one of a kind, now pumped out by the millions in factories.

