From that point of view, C# also had hardly anything new to bring to the table, given Algol 68, Mesa/Cedar or Modula-3, if we start counting GC-enabled systems programming languages.
Systems programming languages with GC have existed since the late 60s, with ALGOL 68RS being one of the first.
Since then, a few remarkable ones have been Mesa/Cedar, Modula-2+, Modula-3, Oberon(-2), Active Oberon, Sing#, and System C#.
The reasons most of them haven't won the hearts of the industry so far weren't only technical, but also political.
For example, Modula-3 research died the moment Compaq bought DEC's research labs, and more recently System C# died when Microsoft Research disbanded the Midori research group.
If you want to learn how a full workstation OS can be written in a GC enabled systems programming language, check the Project Oberon book.
Here is the revised 2013 version, the original one being from 1992.
In that regard, given the set of GC-enabled systems languages that predated both of them, I consider that both made quite a few unfortunate decisions, precisely regarding value types, low-level coding, and AOT compilation.
Both went with more Smalltalk and less Modula-3, even though they acknowledged the influence of those languages, among others.
A trend that started with Algol 68, followed by Mesa/Cedar at Xerox PARC, Modula-2+ at the DEC Systems Research Center, Modula-3 at the Olivetti Research Center, Oberon at the Swiss Federal Institute of Technology, and Sing# at Microsoft Research.
Quite true, but the reality is that since Modula-3, Oberon, Eiffel and others were not widely adopted by the industry, many got the idea that it isn't possible to have a GC for productivity and still manage memory manually when really required.
So now we are stuck with modern languages having to bring those ideas back to life.
Maybe Modula-3 is too distant from Mesa to make this a valid point, but if not, there's a discussion to be had about why the Mesa-influenced Modula-3 (or the arguably essentially similar Ada) didn't sweep all before it in the 90s.
Is it that the "small set of sharp tools" provided by C, and the "safe and somewhat onerous discipline" provided by Modula-3 represent two points on an evolution that's converging towards the ideal systems programming language? Or maybe the language level is the wrong level at which to be considering this, as if we were analyzing prose at the level of phonemes?
Looking back at ALGOL 68, it looks like a small language compared to many of our current languages, e.g. Java, C++ and Python. I loved its definition, but never got to use it.
It's a shame there aren't more modern "Modula"-derived languages, rather than attempts to force C into being a higher-level language.
Java, especially at the 1.1 mark, felt like some kind of crippled version of Borland's Pascal/Delphi plus a UCSD P-System VM (dressed up to look like C++).
For C#, Microsoft hired Borland's language designer, Anders Hejlsberg.
The Go team includes an ETH Zurich alumnus, and at least decided to use a declaration syntax (when variable types aren't inferred) that's more like Pascal's.
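For what it's worth, the resemblance is easy to see in a minimal Go sketch (rough Pascal equivalents shown in comments; the names here are just for illustration):

```go
package main

import "fmt"

// Pascal:  var count: integer;
// Go puts the name first and the type last, unlike C's "int count".
var count int

// Pascal:  function add(a, b: integer): integer;
func add(a, b int) int {
	return a + b
}

func main() {
	count = add(2, 3)
	fmt.Println(count) // prints 5
}
```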
Start by building applications with a language that wasn't a shiv with a broken handle in the first place.
It's a shame Eiffel never took off, instead of the wretched C++ / "no, this time for sure, Java"... (no, that's not it either; lather, rinse, repeat) lineage.
(C was vastly preferable to assembly in the late 80s / early 90s, but it's time to let it and its syntax/semantics go, people.)
Irony: I like the Unix philosophy of composing processes in a functional pipeline, which is totally lost in the era of monolithic Java CRUD, but that's another huge tangent.
> any language with GC is a complete nonstarter as a systems programming language
If it was possible to write whole operating systems in garbage-collected LISP in the 80s, then it surely is possible to use a GC'ed language for systems programming thirty years later.
Java's designers officially mentioned Modula-3 as a source of inspiration; sadly, they forgot value types and AOT compilation in the process and are still catching up on that front.
C# is finally there, thanks to .NET Native and the improvements they keep making since C# 7. Sadly, WinDev keeps ignoring what comes out of DevDiv; they worship their COM with C++ implementations.
Go, well, has plenty of catching up to do with regard to Modula-3 features.
While uppercase keywords are a bit of a bummer, as in Visual Basic and most modern SQL tooling, there are options to automatically format keywords into uppercase.
Among all modern alternatives, I would say C# 11 and Swift are the closest to Modula-3, with a rich ecosystem in the backpack.
D and Nim could also be candidates, but they still lack much of the tooling needed to make them competitive in current times.
Modula-2 wasn't really a contemporary of C's. By the time it was released, C had already taken over the world. Plus, it's yet another case of something that looks good but has never really been tested. While not quite Modula-2, in the early oughts I was working on a large project that was half written in C++ and half in Ada. We're talking millions of lines of code in both languages here. The Ada code looked nice but we were cursing when we had to work with it for two reasons: we had to consult thick Ada manuals to grapple with language intricacies, and compilation times were frustratingly slow. With C++ we could spend more time thinking about the algorithms as there was less "language lawyering", and we could run more tests (ironically, C++ now suffers from both of these problems). Perhaps that's why to this day I prefer smaller languages with short compilation times (I like Clojure but dislike Scala; I like Zig but dislike Rust).
My point is that when people say that one language is technically superior to another, what they really mean is that it's superior in the technical aspects they themselves value more than the aspects where the other language is superior. This is all fine, except that these personal preferences aren't distributed equally. It's a little like the Betamax vs. VHS debate: sure, Betamax had superior picture quality, which some valued, but VHS had a superior recording time, which others valued, and the latter group was bigger.
As for C# -- strong disagree there. I think they're making the classic mistake of trying to solve every problem in the language, and soon, resulting in a haphazard collection of features, quite a few of which are anti-features, making for a pretty complicated language. For example, they have both properties and records, while in Java we figured that by adding records we'd both direct people toward a more data-oriented form of programming and at the same time make the problem of writing setters so much less annoying that it shouldn't be addressed by the language (while properties have the opposite effect of encouraging mutation). They've painted themselves into a very tight corner with async/await (the same with P/Invoke, which constrained their runtime's design, making it harder to add user-mode threads), and I think they've made a big security mistake with interpolation -- something we're trying to avoid with a substantially different design. Also, while richer languages do have a lot of fans, all else being equal more people seem to prefer languages with fewer features. Our hypothesis is that it's better to spend a few years thinking about how to avoid adding a language feature (like async/await or properties) than to spend a few months adding it.
Also, every feature you add constrains you a little in the future (every language makes this tradeoff early on when it's trying to acquire users, but once it's established you need to be more careful). That's why we try to keep the abstraction level high at the expense of a quicker and tighter fit to a particular environment. This delays some things, but we believe it keeps us more flexible to adapt to future changes. It's like having an adaptation budget that you don't want to fully spend on your current environment (I think P/Invoke and properties are examples of overfitting that pays well in the short term but makes you less adaptable in the long term). The complexity budget is another one you want to conserve. Add a language feature to make every problem easier, and over time you find yourself not only constrained, but with a pretty complex language that few want to learn.
"As far as I know it took a while to have a properly conforming Algol 68 compiler as the spec specifies behaviour, not implementation (cf. Knuth's 'Man or Boy Test')."
You nailed it. The spec specifies how the language is to behave rather than dictating its implementation. That kind of thinking was critical when hardware was as diverse as it was back then. You can add a GC if you want, but it's not assumed. You can do that with C, too, as many have.
"Also, as you pointed out, many of those languages relied on hardware support for safety."
It was often used but not required. The older languages established safety by including strong typing, bounds checks, and some interface checks by default. These knock out tons of errors. Modern languages actually have them. Some went further with custom hardware accelerating it, especially Burroughs, but that wasn't the norm.
"It is at least plausible that the progressive CPU integration of the '80s, which led to the rise to dominance of simpler and faster architectures (RISCs and even x86), left languages and OSes that relied on more complex hardware support at a disadvantage compared to C and UNIX."
It's the best hypothesis. Even Burroughs, now called Unisys, got rid of their custom CPUs for MCP/ALGOL since customers only cared about price/speed. The AS/400 did the same with its transition to POWER, based on customer demand. GCOS did the same thing. As you said, LispMs and the i432 (and BiiN's i960) died since they did the opposite. Java machines exist, with Azul's Vega 3 being friggin' awesome, but they largely didn't pan out. Azul is now recommending software solutions on regular CPUs.
As far as I see it, the market drove development along just a few variables that severely disadvantaged safe HW and SW stacks. This was probably because software engineering took a while to develop, and the market took a while to learn that other things (e.g. maintenance, security) mattered. The damage was done, though, with IBM mainframes, Wintel PCs, and Wintel/UNIX servers dominating.
For UNIX, open source and simplicity also contributed to its rise. Another aid to various products was backward compatibility with prior software or languages that are, for lack of a better word, shit. Trends like that feed into the hardware demand trend and vice versa. So it wasn't any one thing, but price/performance was a huge factor, given that all people looked at were MIPS, VUPS, MHz/GHz, FLOPS, and so on.
C was already dated compared with languages like Modula-2 from 1978; modern systems languages are kind of catching up with how the world used to look outside Bell Labs' walls.
We had a dark age of too many VMs and scripting languages, and only now are we getting back to how computing could have looked.
For example, given Anders' background, imagine how .NET would have been if it had been fully AOT-compiled and had the same low-level features as Delphi since version 1.0.
Or if C++ Builder weren't the only surviving example of RAD development with C++ before others started to build on top of the LLVM toolchain.
Except for those of us who were already coding in the 80s.
Modula-2, Turbo Pascal and Ada weren't research languages; they were well-known ones, while C was mainly a UNIX-only language, slowly spreading alongside UNIX into the enterprise.
I think the problem is that many young developers never experienced those days and think C and C++ are the only game in town for systems programming languages.
You can use value types and do manual memory management in C#, F#, VB, D, Nim, and Swift (whose reference counting is a form of GC).
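To illustrate the value-types point with Go (another GC'ed language discussed here; the same idea applies to C# structs or D/Nim value types), here is a minimal sketch, with illustrative names, of how value types avoid per-element heap allocation:

```go
package main

import "fmt"

// Point is a value type: stored inline in arrays and other structs,
// so a [N]Point is one contiguous block rather than N heap objects
// that the GC has to trace individually.
type Point struct{ X, Y float64 }

// scale receives a copy; the caller's value is untouched,
// because value types are copied on assignment and on call.
func scale(p Point, f float64) Point {
	p.X *= f
	p.Y *= f
	return p
}

func main() {
	var pts [3]Point // no per-element allocation, no GC pressure
	pts[1] = Point{X: 2, Y: 3}
	q := scale(pts[1], 10)
	fmt.Println(pts[1], q) // prints {2 3} {20 30}
}
```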
Then there are the past languages that failed to gain steam: Mesa/Cedar, Modula-2+, Modula-3, Oberon, Oberon-07, Oberon-2, Active Oberon, Eiffel, Sing#, System C#, Component Pascal, among many others.
The only thing missing is that many developers are badly taught, use new everywhere, and don't bother to fully understand all the language features.
Prior art like Wirth's languages or Modula-3 added a minimal amount of features, like exceptions or some OOP, on top of simple, efficient systems languages. They usually have a GC, but many prefer ref counting for efficiency. This seems like that kind of thinking in an interpreted language.
Just going by what's on the front page: didn't dig deep into it or anything. Interesting project.