Yes, but after the initial monoliths of the mainframe era, we had a golden era of operating systems, where all of the plumbing and infrastructure to run your applications was maintained by someone else (Microsoft, Apple, Commodore, Atari, SGI, Sun, Palm, Red Hat, etc.).
Now we've come full circle back to the bad old days where you need an entire team of dedicated people and arcane knowledge just to run your application software again.
I suspect that this will continue until we reach a point where the big players out of necessity come to an agreement for a kind of "distributed POSIX" (I mean in spirit, not actual POSIX). These are exciting times living on the edge of a paradigm shift, but also frustrating!
Perhaps I should expand the history, for those who don't know. In the early 80s, QNX was the first fully multitasking operating system available for the IBM PC architecture. It was small, efficient, real-time, and somewhat idiosyncratic. It was fully ten years ahead of its time.
At the same time, in another sphere, POSIX was shaping up as a factor in the emerging Open Systems wars. It drove compatibility between big vendors like IBM, HP, and Sun (over other idiosyncratic offerings, like Apollo).
Perhaps POSIX as a user response to corporate power had value, but there was also collateral damage. As I understand the story, Canadian colleges standardized on POSIX, and so a great little Canadian OS (QNX) was left out in the cold. They had to become like the others to survive.
Conformity served to reduce innovation and developer choice.
Now fast forward to 2013, and leaving aside shared memory details, what is POSIX driving today? Don't forget that ultimately a non-POSIX OS killed them all. And don't forget that users have their own means of bubbling up new features and architectures in the Open Source projects. Their control is far beyond what users got out of POSIX in the Open Systems age.
I am beginning to think that while POSIX, Unix, and X11 might have been innovative and very useful in the '70s and '80s, they are really old now and most of the computing landscape has changed.
I feel like this generally applies to all software developers. Software and its systems have largely evolved through accretion, and that creates a strong network effect and, eventually, a taller application stack. We're continually making everything more complicated because it's just easier to use other people's work, especially when the barrier to entry for that work is low, i.e., Web/JS.
As a slight segue, I recently came across Timothy Roscoe's Usenix talk[1] on OS development on HN, and I'm starting to think that a complete rebuild of modern computing might occur when the limits of the old OS architectures become too burdensome, at which point there could be an opportunity to "fix" the application stack and try something simpler. I sincerely hope the OSes of the future do not look like Linux or even attempt POSIX compliance.
Absolutely. But every single new operating system project falls into the trap of following all the same old patterns. It seems to be very difficult to approach the problems fresh and realize that the constraints that guided all of the old design choices simply no longer exist. Today's challenges are different, but people would rather keep re-creating the limitations of the past, and then solving those nice, well-understood problems, than tackle the new ones.
For instance, today we don't have text terminals. We have high-DPI displays. So why is the first thing you reach for a text buffer? Stop it. Vector-based fonts should be step zero. Raster graphics ought to come along pretty far down the line, when you get to texture surfaces. We don't run with video units built around basic memory-mapped framebuffers any more. Our storage also isn't seek-bound in most cases, and we have enough RAM to bust the limits of even the old filesystem designs made for bulk storage. We have more now, and just like the physicists say, More Is Different.
But the next revolutionary OS will probably come along and announce its goal of POSIX compliance, and people will still trick themselves into thinking it's 'new'. I'd rather see an OS that realizes the things the Internet and the web have taught us and integrates them. What people want is search and instant-on application functionality. They don't care if it's "an app" or it's "a webpage" or whatever. They want it to work, they don't want to "install", etc. And if you can murder the browser and get back to something native, and cut the eye out of every company monetizing itself through user behavior data sales, all the better!
This all reads to me like the juggling and tricks software developers had to do back when there was no operating system available (e.g., handling memory manually). Then operating systems appeared and we could finally spend our time on other, more productive tasks. I imagine at some point something will appear that lets us just deploy an application globally so that it's extremely efficient and scales up/down as needed, without us developers/sysadmins having to deploy so many moving parts.
Hence the opinion Steve Jobs had about UNIX: NeXTSTEP's POSIX support was geared more towards bringing software in and winning DoD contracts than anything else.
I don't need such a vision; it is how it would look if all OSes end up just juggling browser instances, ChromeOS style, while a big chunk of applications runs on someone else's computer, abstracted via language runtimes.
A return to the mainframe timesharing days, and a very sad outcome for desktop computers, with only the minimum experience common to all vendors.
Graphical APIs unable to take advantage of the hardware, hardware support that takes years to get even minimal APIs, and UIs drawn by piling up divs customized with CSS and JS to mimic visual behaviours.
Wasn't that precisely supposed to be the operating system's job? One, virtualisation (processes in Unix are little virtual machines), and two, hardware resource management (file systems, time sharing, ...).
Distributed operating systems tried to do that for multiple computers, but instead we get ad-hoc crap on top of single machine OS'es. Progress!
Just lots, wherever you look. io_uring and eBPF are the new hotness, but we've had operating systems scale up from a few CPUs to hundreds or thousands in the past few decades, we've had filesystems like ZFS and BTRFS with checksums and snapshots and versions, there's namespaces and containers, hypervisors built into the OS and paravirtualization, multiqueue devices with command queue programming models, all sorts of interesting cool things.
Before someone jumps in and says we've had all these things before, including io_uring and eBPF (if you squint), scaling to hundreds of CPUs, versioning filesystems, etc.: that is true. IRIX scaled on a select few workloads on multimillion-dollar hardware, and even that fell in a heap when you looked at it wrong. The early log-structured filesystems that could do versioning and snapshots had horrific performance problems. IBM invented the hypervisor in the 1960s, but it was hardly useful outside multimillion-dollar machines for a long time. Asynchronous syscall batching was tried (in Linux, even) about 20 years ago, as were in-kernel virtual machines, etc.
So my point is not that there isn't a very long pipeline of research and ideas in computer science; it is that said pipeline is still yielding great results as things get polished, the stars align the right way, people keep standing on the shoulders of others, and the final missing idea falls into place, or the right person puts in the effort to turn an idea into code. And those things have been continually going into practical operating systems that everybody uses.
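(For anyone who hasn't poked at io_uring yet, this is roughly what "asynchronous syscall batching" looks like in practice. A minimal sketch using liburing, with most error handling trimmed; the file path and queue depth are just illustrative. Build with -luring.)

    /* Queue one read on an io_uring and reap its completion. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <liburing.h>

    int main(void)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0)              /* 8-entry submission queue */
            return 1;

        int fd = open("/etc/hostname", O_RDONLY);              /* any readable file will do */
        if (fd < 0) { perror("open"); return 1; }

        char buf[256];
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);    /* grab a submission entry */
        io_uring_prep_read(sqe, fd, buf, sizeof(buf) - 1, 0);  /* describe the read; nothing runs yet */

        io_uring_submit(&ring);                                 /* one syscall submits the whole batch */

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);                         /* wait for the completion entry */
        if (cqe->res >= 0) {
            buf[cqe->res] = '\0';
            printf("read %d bytes: %s", cqe->res, buf);
        }
        io_uring_cqe_seen(&ring, cqe);                          /* tell the kernel we consumed it */

        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }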
There is still a lot of research and innovation, but it doesn't always come in the way of completely new software projects. The cost of trying to build a new OS is simply massive. You need to be compatible with existing useful software if you want to do anything other than an appliance. Anything that provides a new paradigm shift that requires changing existing software has a huge slog ahead of it to make it successful. That said, there is tons of incremental progress in the operating systems.
I think a lot of folks have thought about the idea of a truly distributed operating system. I'm pretty sure existing operating systems will eventually evolve in that direction. You already see bits and pieces of it popping up.
That was absolutely true way back when the industry was new and there were myriad system vendors with their own operating systems.
We have learned a lot since then, and we know how to address these problems: openly specified document formats and protocols. Programming languages with portable runtimes and libraries. Open source software. Web Applications.
The world is a much different place from when you felt trapped because your VisiCalc files were on LDOS floppies that could only be read by a TRS-80 Model IV.
Have we just become too selfish for OSS? We're going to end up punishing ourselves by having to use proprietary, closed systems for everything. I don't want to compute in that world. That would be awful.
Unfortunately the requirement of "one that runs all of their software" is the one that means most new OSes are POSIX/Unix clones of some variety. There is too much human effort invested in the POSIX userspace for us to start from scratch. In fact, even Microsoft seems to be struggling with this in spite of their entrenched position.
I have no OS dev experience, but I like the idea of many smaller specialized programs that are able to communicate with each other. Another problem with large and slow-moving code bases is that things around them change, like hardware getting better/faster, so software architecture that made sense 30 years ago might not be the best solution today.
An OS should probably not be developed by a single team or company. There should be an open core/kernel, and companies could then compete and specialize in different combinations and support of systems, apps and services developed independently, where each part could be replaced.
Yep, to some degree at least. Even some more modern systems after that had similar problems, where there was little or no direct interaction anymore and the userspace/kernelspace split was the 'best' boundary there was to offer. But the same problems still persisted within one space if the other one went down.
Imagine having a fully functional app but you still can't do anything because the service 'on the other side' doesn't work. It's better because it won't crash the entire system, but it's the same because you still can't do the work you intended to do...
We've been here before. This seems to occur in cycles. It will soon burn out, and then we will see a plethora of operating systems. A period of relative stability will occur, and the cycle will begin again.
It's kinda depressing that after a quarter of a century of work, somewhere in the region of a hundred thousand dev-years, and billions of dollars of investment, the world's most used operating system fails at the most basic tasks, like reliably writing files or allocating memory.
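(To make the "reliably writing files" part concrete, here is a minimal sketch of what the textbook durable write is supposed to look like on a POSIX system; the path is only illustrative. Even this sequence hasn't always been enough: on older Linux kernels a failed fsync() could clear the error state and silently drop the dirty pages, which is what bit the PostgreSQL folks.)

    /* write() returning success is not durability: you also have to fsync()
     * the file, check the result, and fsync() the containing directory so the
     * new directory entry itself survives a crash. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "important data\n";

        int fd = open("/tmp/important.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (write(fd, msg, strlen(msg)) != (ssize_t)strlen(msg)) {
            perror("write");            /* short writes and errors must be handled too */
            return 1;
        }
        if (fsync(fd) < 0) {            /* flush file data and metadata to stable storage */
            perror("fsync");            /* if this fails, assume the data did not make it */
            return 1;
        }
        close(fd);

        int dirfd = open("/tmp", O_RDONLY);
        if (dirfd < 0) { perror("open dir"); return 1; }
        if (fsync(dirfd) < 0) {         /* make the new directory entry durable as well */
            perror("fsync dir");
            return 1;
        }
        close(dirfd);
        return 0;
    }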
That philosophy pertained to userspace tools, not the OS as a whole. And the waters have been muddied a lot (think systemd which does a lot of things and is still embraced for servers and desktop alike).
Considering the use-cases, multi-tasking operating systems are not necessarily going to escape the trend toward information appliances.
In my opinion, if the workstation does survive, then a POSIX-style OS similar to the abandoned HeliOS will become necessary again at some point... as Moore's Law is effectively dead, and a 15% context-switching overhead on existing threading and mailbox models is silly.