Sorry, I don't agree. I have not been in the industry for 30 years, but I have been programming since a very young age on hardware similar to your Commodore 64. No, I cannot translate the skills I learned writing BASIC for that 8-bit architecture directly to the modern web app project I am working on. I can, however, relate better to memory-constrained environments such as Arduinos and MSP430s. I can say that getting started with a very limited piece of hardware and a very limited language did pique my interest to keep exploring what I could do.
Later I got into Linux and FreeBSD. While I have never worked with any BSD professionally (other than OS X, but that doesn't count), I do think that knowing how FreeBSD works makes me a better Linux user/admin/developer. Certain ideas (kqueue, core system vs packages, etc.) are good concepts to keep in mind when developing for a platform that does not have them.
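For anyone who hasn't touched the BSD side, the kqueue pattern looks roughly like this; a from-memory sketch in C, not production code, with the file descriptor and the single-event wait just placeholders:

    /* Minimal kqueue sketch: register one fd and block until it becomes
       readable. The rough Linux analogue would be epoll. */
    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <stdio.h>
    #include <unistd.h>

    int wait_readable(int fd)
    {
        int kq = kqueue();                    /* create the event queue */
        if (kq == -1)
            return -1;

        struct kevent change;
        EV_SET(&change, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);

        struct kevent event;
        int n = kevent(kq, &change, 1, &event, 1, NULL);  /* register + wait */
        if (n > 0)
            printf("fd %d readable, %ld bytes pending\n",
                   (int)event.ident, (long)event.data);

        close(kq);
        return n;
    }

Having that model in your head makes epoll, select, and friends much easier to reason about on platforms that don't have kqueue itself.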
I also learned Turbo Pascal and used Delphi for a spell. While not directly relevant, they did have quite an influence on today's languages and development environments. NetBeans, which I did use professionally for a bit, was a far cry from Delphi, but knowing both made NetBeans easier to use.
But all that aside, the point of having years of experience is not about enhancing your current knowledge. It's about enhancing the process of acquiring knowledge, organizing it, and using it. Someone who knows only JavaScript is going to get coded under the table by someone who knows JavaScript, C, Haskell, Erlang, Lisp, Python, etc. In fact, I would wager that someone who knew C, Haskell, Erlang, Lisp, and Python but not JavaScript would in the long run beat out a JavaScript expert, simply because the penalty for learning a new paradigm is much smaller than perceived, while being able to think in a multi-paradigm fashion is a huge advantage.
Finally, the mechanic thing: I am comparing a veteran mechanic with, say, 20 years of experience, including current experience with the latest cars, versus a mechanic with one year of experience with only modern cars. I listen to Car Talk, the NPR program, and they had a few very interesting stories on there. For example, there was a woman who called and said that whenever she turned on the fans in her car it smelled like gasoline, and it very often happened after she got her car worked on at the dealership. The suggested reason was that when the mechanic worked on it, he put dirty parts on the cowl of the car where the air intake is, and some gasoline and oil dripped into it. This is something experience teaches you, and it has nothing to do with the modern chips.
I can't decide how much I buy this argument. I was born in 1980 and started programming Logo and BASIC very young (around 5 and 7 years old respectively). Almost everything this post describes is familiar to me: typing in code from books, learning PEEK and POKE, etc. I just don't think much of that matters: whether you learn BASIC on a stupid terminal that can't do much else, or in a super simulated terminal in a web browser, seems kind of irrelevant.
OTOH, the experience of learning C and having to actually write video and network drivers (or their barest elements) because there wasn't a web you could download a library from...yeah, that probably actually did make me a better programmer. Having had the experience of writing a little bit of actual assembler, even just the old "MOV AX 10; INT 13" (am I remembering that right?) does give me a sense of connecting with the machine more deeply than someone who grew up with the internet.
On the first hand again though...living through the late 90s and most of the 2000s was crap. The time of Java. And the worst kind of JavaScript. I pretty much stopped coding altogether until the web had matured as a platform a bit, by the early 2010s.
I was working in Pascal, C and assembly about 30 years ago, mostly in DOS and Windows 3.
By 1995 I started dabbling with websites, and within a couple of years was working mostly with Perl CGI and some Java, on Windows and Linux/NetBSD.
Most of my work was on Windows, so that limited the available Perl libraries to what would run on ActiveState's Perl.
I gave up trying to do freelance because too many people didn't seem to understand the cost and work involved in writing software:
- One business owner wanted to pay me US $300 to fix some warehouse management software, but he'd up it to $500 if I finished it in one month.
- A guy wanted to turn his sports equipment shop into an e-commerce website, and was forward thinking... except that none of his stock of about 20,000 items was in a database, and he could "only afford to pay minimum wage".
I interviewed with some companies, but these people were clueless. It seems like a lot of people read "Teach yourself Perl in 7 days and make millions" books. The interview questions were basically "Can you program in OOP with Perl?".
I got a proper developer job on a team, eventually. They were basically happy that I could write a simple form that queried stuff from a database.
Some other people on my team used Visual Basic and VBScript but I avoided that like the plague. I recall we had some specialized devices that had their own embedded versions of BASIC that we had to use.
When Internet Explorer 4 came out, we started having problems making web sites that worked well on both browsers.
Web frameworks didn't exist yet, JavaScript was primitive and not very useful.
Python didn't seem to be a practical option at the time.
Your web developer knows nothing of how to write ABAP, your SAP developer has no clue how to do embedded programming for a microcontroller, your real-time embedded programmer knows nothing of databases, your database admin thinks React is something to do with nuclear power plants.
40 years ago: your physicist knows nothing of how to write RPG, your mainframe developer has no clue how to do embedded programming for a microcontroller, your real-time embedded programmer knows nothing of databases, your database admin wouldn't touch FORTRAN with a ten-foot pole.
... Which is to say, I don't really buy it. Or rather, I don't think that specialization is a recent phenomenon. It is true that the personal computing market consisted of comparatively simpler machines, which allowed for greater breadth of understanding.
You're right, I guess I'm comparing against both those axes at the same time, since I'm relating to my own experience as a kid 30 years ago vs as someone working in tech today. Indeed, that's not really a fair comparison.
The point I was trying to make was about the distinction between programming with a limited set of building blocks vs programming with a large and open-ended selection of libraries. They don't require the same skill set, and one seems more fun to me than the other.
> I don't think those are comparable to complexities in IT that exist today.
I think the constraints and the lack of “here you go” resources that many starting programmers dealt with out of necessity in the 70s and 80s, even for toy applications, were better preparation for the attitude needed to deal with the complexities in IT today than today's learning conditions are.
Which isn't to idealize it: it was also a hard onramp that drove off lots of people who would have done well at many real-world problems where they weren't soloing without support, and who might, with experience, have still developed into great solo practitioners, too. But the people I've encountered who came through that 70s/80s start tend to be, IME, on average more willing to slog through and learn hard stuff in new areas than people who came up through later, easier onramps.
Though that may also be, in significant part, survivorship bias: the 70s/80s crew has usually had to stick around the field longer, and it may be that people with that flexibility are more likely to stay in technology past the half-life of whatever was current when they got in.
I disagree. If you did any application programming in the mid-to-late eighties in C/C++, TCP/Sockets, CORBA, Sybase, and X11, about 90% of your experience translates directly to today's web-based application development.
Hell, if you use a Linux development environment you will feel right at home.
That's because you, like all of us who learned in the 80s, were very, very lucky. We started in a world where writing good, production code and learning the very simple basics of programming were the same thing. I started with a ZX Spectrum. You could use simple machine code or simple BASIC. Libraries? What are those? The one barrier to getting good was when you ran out of memory and had to switch to machine code and learn memory-saving techniques, but by then, you were as ready as anyone.
I compare it to what we do today: my code uses libraries, that use libraries, that use libraries. Languages are huge in comparison. Sure, it's easier to do what we used to do 20 years ago, but nobody expects from us only what we did then: even someone who is just learning wants to do more. This is what builds the despair phase described in the article.
It's a well-known issue that affects both how we train new people and how we manage large pieces of software today, so it's widely talked about. For instance, the first talk of JSRemoteConf last night was all about this issue. Hopefully they make the recordings openly available soon.
I get your point and would reformulate it as: over time, a beginner's environment has become mostly the top layer of the tech stack, and leaving that beginner state is a lot more challenging.
In the 80s I was dabbling in BASIC on Amstrad CPC computers, and things were reasonably simple indeed. When needed, I could drop down to Z80 assembly language and peek and poke my way around. And that's it; there weren't many layers between you and the hardware.
In the 90s, however, Windows made things a lot more opaque, though it did not prevent Visual Basic's success. Instead of hardware-generated interrupts you had events, mostly related to the GUI, for which you needed to write some scripts. No more poking around in memory; it's all abstracted away from you. Enthusiasm for this way of working motivated the creation of a (non-compatible) VB equivalent on Linux [1], which includes an IDE with drag-and-drop GUI building, and that's been used to create an ERP for small businesses in France [2].
So yes, the programming environment now has a lot more layers, but it just means that only the topmost layers are needed to find your way around. This reduced cognitive load makes things easier and has increased the reach. The trade-off is that most programmers have little understanding of the lower levels: compiler optimisation, memory and processor allocation, etc. And since abstractions are inevitably leaky...
Direct access to hardware is not the difference. It is easier to get things done today. You can argue that it was simpler in 1985, but I don't think that simplicity is the right way to measure things. You can present an easy-to-use programming environment to people, accessible to newcomers and usable by experts; it doesn't have to be simple, but it has to be possible to use it in simple ways.
Just personal observations here. Someone with a computer in 2019 can download Unity, which is free, and watch a tutorial on YouTube, which is also free. With near zero understanding of what is going on, and no prior experience, they can have some kind of rudimentary platformer working within hours. This is then a good starting point to learn programming (you can dive into C#) or you can continue to jumble together copy/pasted bits of code that you see online (kind of like how I remember doing with BASIC and library books).
Sure, there are a bunch of layers of abstraction, and those abstractions will break down all the time. But you don't need to learn those abstraction layers: you can stay at the top layer and still get good work done, maybe working around a few problems you don't understand from time to time by futzing around with stuff until it works, or you can dive in and try to understand what's underneath an abstraction. That's the whole point of having abstractions in the first place: they exist to hide the complexity, to let you get work done without understanding the entire system. 72 layers of abstraction is a bit of an exaggeration; I'd say if BASIC has three layers (interpreter/machine code/hardware), Unity only has six (editor/scripts/engine/OS/machine code/hardware), but who knows how you count them.
In my experience, watching people learn how to create things, it has never been easier to learn programming and start building things. The main difference is that in 1985, programming was considered an essential computer skill, and in 2019, you're expected not to program. BASIC was amazing because it was what you saw on the screen when you turned your computer on, nothing more.
I'm going to cast a dissenting vote from the other comments here and say yes.
The golden age of my personal programming experiences was around 1993, during high school. After outgrowing GW-Basic, I was hacking around in x86 assembly language, figuring out how to program DOS graphical games by manipulating VGA hardware registers. My tutors were library books on x86 assembler and instructional text files gleaned from local BBSes. (I still have and treasure those resources to this day. Michael Abrash was my god.) I was able, by myself, to produce programs as technically intricate and graphically rich as industry-leading fare like Commander Keen or Epic Pinball, in pure assembly language. (I never actually made anything beyond tech demo stages into an actual marketable game.)
This was magic. Not a single other person I'd ever met in my life or even on local BBSes across my entire area code had that sort of capability. This awed even non-techies, whose pinnacle of computer experience was somewhere between Solitaire and Minesweeper. I had a mini-career programming BBS advertisements for inclusion in zip files, putting out some neat and impressive graphical effects in 2k or so of assembler code. My favorite trick was to include what appeared to be a custom font in these tiny executables - done by copying the BIOS ROM font at runtime and applying some bitmap transformations to each letter. I even wrote my own mini sound engine for the Adlib FM-synthesis registers.
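For anyone who never saw this era, the core trick was astonishingly small: in VGA mode 13h the whole screen is just 64,000 bytes starting at A000:0000, so most of the "register manipulation" boiled down to a BIOS call to switch modes and then writing bytes into that segment. A from-memory sketch in DOS-era Borland/Turbo C (the pure-assembly version is only a handful of instructions; none of this builds on a modern toolchain):

    #include <dos.h>      /* int86, MK_FP, union REGS -- real-mode DOS only */
    #include <conio.h>    /* getch */

    static void set_mode(unsigned char mode)
    {
        union REGS r;
        r.h.ah = 0x00;            /* BIOS "set video mode" service */
        r.h.al = mode;            /* 0x13 = 320x200, 256 colours   */
        int86(0x10, &r, &r);      /* video BIOS interrupt          */
    }

    static void put_pixel(int x, int y, unsigned char colour)
    {
        /* the framebuffer is just bytes at segment A000 */
        unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
        vga[y * 320 + x] = colour;
    }

    int main(void)
    {
        set_mode(0x13);
        put_pixel(160, 100, 4);   /* one red pixel, centre of the screen */
        getch();                  /* wait for a key */
        set_mode(0x03);           /* back to 80x25 text mode */
        return 0;
    }

Everything fancier (palette cycling, mode X, page flipping, the Adlib registers) was built on exactly that kind of direct-to-hardware foundation.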
Nowadays? Grab a Flash development tutorial and you can do in a week what I spent most of high school learning and doing. Everything is possible in programming and everybody knows it and every answer is seconds away. There's no feeling of exploration and achievement. There's certainly achievement in building a business or product, but not in the programming itself, which now serves as the drudgerous means to an end.
The connectivity of the Internet has eliminated local maxima. In high school I was, by an order of magnitude, the smartest programmer I'd ever met, but I'm thoroughly average in the Internet world. I've never been motivated to strike out building my own cool software or startup, since nothing's ever come to me that could recapture the magic of my early days of DOS VGA assembly discovery and programming.
All the cool technology and companies now are about connectivity, starting with Google and its primal concept of page rank and link juice. Facebook and its world-squared-size social graph. eBay, the world-squared-size marketplace. Apple and its million-strong app store. Dropbox and its seamless cloud connectivity. You can't program anything meaningful and novel in and of itself; programs are now defined by what they interact with. Some do find scalability and connectivity exciting, but not me. You can certainly produce scintillating results with modern resources, but for me it's always been about the journey, not the goal, and the journey of modern programming is, for me, essentially drudgery.
My take is simply that programming has gotten way bigger. People seem to think that in the “old days”, programmers ate raw potatoes and programmed in assembly because it put hairs on their chest. The reality is, computers were way simpler objects back then. Assembler on the Intel 4004 has like, 30 opcodes or something. The list fits on your phone screen without scrolling. Modern JavaScript bundlers pull in more code than entire operating systems back then. C only has about 20 keywords. If you know assembler, you can probably learn K&R C in a weekend if you go hard. Modern C++? Forget it. Apparently the spec just for C++ initialisers is 300 pages long.
Today the same amount of knowledge makes you barely passable in a single niche domain. Consider web development. To get really good at modern web development, you need to know modern JavaScript, CSS, and HTML. You need to understand how browsers work and all the quirks of how HTTP requests and page loads behave, including what dozens of HTTP headers do and how they interact. You need to understand the browser rendering process, performance tools, accessibility, and debugging tools. And learn dozens of JavaScript libraries: React, Express, webpack, database wrappers, and so on. It’s an accomplishment to learn all of that. But if you do, you still only know web programming. That knowledge doesn’t really translate to operating systems work, mobile development, databases, AI, embedded, etc.
Most professional programmers only have the inclination and capacity to learn one ecosystem, and even then usually with big holes in their knowledge. True polyglots are rare because the mountain you need to climb to get there is higher. But we also depend on polyglots to guide us toward useful tools. Language and ecosystem choice still matters: it matters for performance, velocity, security, and compatibility. But how can you really evaluate that stuff unless you’ve spent time debugging Go programs, or tried to squeeze every last drop of performance out of a big legacy Java monolith?
We’re left talking imperfectly from our own experiences, and living in whichever niche of programming we’ve carved out for ourselves. The days of everyone being an all-terrain programmer are over.
As a retro computing aficionado, this is where I disagree. Learning about old computers has always taught me something useful about modern systems. Going back to the first IBMs and Apple architectures has a lot of merit. It's like reading a book from the beginning, instead of jumping in the middle and trying to make sense of everything. You can probably continue reading, but you won't know why some things are the way they are.
Obviously, developing software for obsolete systems is not a great income stream, but as an exercise, this too has merit. You learn a lot about constraints and limitations, which today are rarely considered but could still teach you how to optimize code. You learn how to produce software that can run surprisingly fast on machines from 30 or 40 years ago, and that is transferable to modern coding.
You learn a lot about memory, how to use it efficiently, and what can be achieved with just 640K, which, as we all know, "is all the memory anyone should ever need". You learn that introducing limitations creates a sort of game in which you need to be more creative to implement things that have since become obvious. And this makes you a better problem solver.
There is a lot to learn from old computers, and while some people will always disagree, I think it makes you a better software engineer.
IMHO, that experience from the 70s has a lot of value. How to optimize, how to talk to hardware: these are forgotten skills nowadays. Not to mention the sheer value of remembering how things were done and why. The explosion of young, inexperienced programmers who refuse to acknowledge the triumphs and failures of the past puts the profession into an infinite loop of rediscovering, re-implementing, and re-learning the lessons of the past.
Those 1970s skills map directly to embedded work. They map to kernel and systems-level programming. Guess what? The same problems we had on a PDP-11 we have today. The scale is a bit different, but the fundamentals are not.
Can you pass the fizz buzz test?
You would be amazed at the number of applicants that can talk "singleton observer model view controller association class" all day, and can't do fizz buzz. Or compile hello world from the command line. Or know the difference between a compiler error and a linker error.
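For reference, the usual formulation of the test fits in a dozen lines of C:

    /* FizzBuzz: print 1..100, but print "Fizz" for multiples of 3,
       "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. */
    #include <stdio.h>

    int main(void)
    {
        int i;
        for (i = 1; i <= 100; i++) {
            if (i % 15 == 0)
                printf("FizzBuzz\n");
            else if (i % 3 == 0)
                printf("Fizz\n");
            else if (i % 5 == 0)
                printf("Buzz\n");
            else
                printf("%d\n", i);
        }
        return 0;
    }

That's the whole thing, and a surprising number of candidates still can't write it.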
Focus on what you do best. I think you will find there is a demand for those skills.
I have to wonder: there are very competent "old technology/mainframe" programmers. They debugged complex programs using very primitive tools when we were all in diapers. Obviously, there are also very mediocre old programmers, but for the sake of this discussion, consider the best practitioners of their generation.
How is it that they mostly find themselves stuck in horrible jobs, maintaining legacy software on their old platforms? Surely, for a programmer with experience in assembly and Basic (as an example), becoming proficient in most modern languages shouldn't be a problem. But I strongly suspect that even the minority of old programmers who bother learning a new language can't find employment utilizing their new knowledge. Any thoughts?
You know what? I don't think it is easier to learn to program now. Many years ago when I was a child and I got my first computer, it booted straight into a BASIC interpreter. The computer also came with a manual that taught you about if statements, for/next loops, gotos and gosubs. It showed you examples of how to draw to the screen and get inputs.
Compare that to today. Back then, there was nothing to install; it was all already there. There wasn't even an environment to launch: you booted straight into the REPL. You didn't have to hunt for tutorials on the internet; everything you needed to get started was in the manual that came with the computer, and everything in that manual was appropriate to the computer. You didn't have to worry about having the wrong version of a dev environment making the tutorial incorrect.
That's not to say that programming hasn't improved in other respects. I could have spent years trying to build a basic TCP/IP stack on that old 8-bit computer, whereas one comes pre-installed on a modern computer. I can grab a free copy of just about any language I've heard of. Huge numbers of freely available libraries allow me to stand on the shoulders of giants. But nothing available today approaches the simplicity of that old 8-bit system for learning to program...
This sounds super awesome! I can't imagine how interesting those times might have been. I started to play around with a 6502 just because there is the possibility to understand the system to some extent. Modern-day software engineering is like sitting in a golden cage. Not because the systems are inaccessible, like mobile devices or Apple computers – you can still get pretty far on Linux systems – but the main reason is that you just can't fit all the stuff in your brain. If you try to understand how your Angular/React/whatever project gets its pixels to the screen all the way down, it's a project for a lifetime, and even that is not enough. Understanding modern x86 processors is almost impossible – yeah, to some degree, enough to understand some basic concepts so you don't shoot yourself in the foot with how the cache works, etc. – but getting your hands dirty with an 8-bit processor and reading the datasheet is just something very different.
To be fair, s/he was responding to "Today you have to download and understand a stack of tools and languages before you get anything printing on the screen."
It is true that people don't generally ship code written in devtools, but then most of the programs shipped on the Commodore 64 were not written in BASIC. (Unless you count hundreds of PEEKs and POKEs as BASIC.)
I do miss the immediacy of the programming experience, but I also think the correct modern day analog of that is the browser.
Programmer education, tools, language standards and best practices are all vastly different than 50 years ago. That's like pointing at a Ford Edsel and then claiming that modern humans can't make good cars.