"As far as I know it took a while to have a properly conforming Algol 68 compiler as the spec specifies behaviour, not implementation (cf. Knuth's 'Man or Boy Test')."
You nailed it. The spec specifies how the language is to behave rather than dictating its implementation. That kind of thinking was critical when hardware was as diverse as it was back then. You can add a GC if you want, but it's not assumed. You can do that with C, too, as many have.
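To make that concrete, here's a rough sketch of bolting a GC onto plain C via the Boehm-Demers-Weiser collector (assuming libgc is installed and you link with -lgc; the numbers are arbitrary):

    /* Sketch only: conservative GC added to C with the Boehm collector. */
    #include <gc.h>
    #include <stdio.h>

    int main(void) {
        GC_INIT();  /* start the collector */
        for (int i = 0; i < 1000000; i++) {
            /* allocate and never call free(); unreachable blocks are
               reclaimed behind the scenes by the collector */
            int *p = GC_MALLOC(100 * sizeof(int));
            p[0] = i;
        }
        printf("GC heap size: %lu bytes\n", (unsigned long) GC_get_heap_size());
        return 0;
    }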
"Also, as you pointed out, many of those languages relied on hardware support for safety. "
It was often used but not required. The older languages established safety by including strong typing, bounds checks, and some interface checks by default. These knock out tons of errors. Most modern languages actually have them, too. Some went further with custom hardware accelerating it, esp. Burroughs, but that wasn't the norm.
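For a rough idea of what those defaults buy, here's the same discipline hand-rolled in C; the int_array struct and checked_get helper are just names I made up for illustration, not anything from the old languages, which did this automatically on every access:

    /* Hand-rolled version of an ALGOL-family default: the array carries
       its length and every access is range-checked. */
    #include <stdio.h>
    #include <stdlib.h>

    struct int_array { size_t len; int *data; };

    static int checked_get(const struct int_array *a, size_t i) {
        if (i >= a->len) {  /* bounds check on every access */
            fprintf(stderr, "index %zu out of range (len %zu)\n", i, a->len);
            abort();
        }
        return a->data[i];
    }

    int main(void) {
        int buf[4] = {1, 2, 3, 4};
        struct int_array a = { 4, buf };
        printf("%d\n", checked_get(&a, 2));  /* fine */
        printf("%d\n", checked_get(&a, 9));  /* trapped instead of silently corrupting memory */
        return 0;
    }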
"It is at least plausible that the progressive cpu intergration of the '80s which lead to the rise to dominance of simpler and faster architectures (RISCs and even x86) left languages and OSs that realied on more complex hardware support at a disavantage compared to C and UNIX."
It's the best hypothesis. Even Burroughs, now called Unisys, got rid of their custom CPUs for MCP/ALGOL since customers only cared about price/speed. AS/400 did the same with the transition to POWER, based on customer demand. GCOS did the same thing. As you said, LispMs and the i432 (and BiiN's i960) died since they did the opposite. Java machines exist, with Azul's Vega 3's being friggin' awesome, but they largely didn't pan out; Azul is now recommending software solutions on regular CPUs.
Far as I see it, the market drove development along just a few variables that severely disadvantaged safe HW and SW stacks. This was probably because software engineering took a while to develop and the market took a while to learn that other things (e.g. maintenance, security) mattered. The damage was done, though, with IBM mainframes, Wintel PCs, and Wintel/UNIX servers dominating.
For UNIX, open source and simplicity also contributed to its rise. Another aid to various products was backward compatibility with prior software or languages that are, for lack of a better word, shit. Trends like that feed into the hardware demand trend and vice versa. So it wasn't any one thing, but price/performance was a huge factor given that all people looked at were MIPS, VUPS, MHz/GHz, FLOPS, and so on.
Regarding ALGOL (I still can't access the spec), does the standard actually provide for manual memory management, or does it only have the equivalent of malloc and no free? (I don't doubt that practical implementations had both.)
Regarding hardware safety, I believe we might see a resurgence of built-in support for safety features. CPU designers have more transistors available than they know what to do with: after adding yet another execution unit and widening the vector length yet again, they are reaching a point of diminishing returns, so they might switch to adding back these safety features.
And in fact it is already happening: W^X and virtualization can be considered to belong in this area, and more recently Intel added MPX and MPK, which are a more direct attempt at userspace security features.
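As a minimal sketch of the W^X idea on a stock Linux/x86-64 box (just mmap/mprotect from userspace, no MPX/MPK; the six bytes are a hand-assembled "return 42" stub, so treat it as illustrative only):

    /* W^X illustrated with plain mmap/mprotect: the page is writable while
       we fill it with code, then flipped to read+execute before we run it,
       so it is never writable and executable at the same time. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };  /* mov eax, 42 ; ret */

        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) return 1;

        memcpy(page, code, sizeof code);  /* write while W, not X */
        if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

        int (*fn)(void) = (int (*)(void)) page;  /* execute while X, not W */
        printf("%d\n", fn());
        return 0;
    }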
New architectures are being designed with security in mind, like the vaporware^Wupcoming Mill.