Ask HN: Anyone using proprietary Unix at work?
194 points by wassenaar10 | 2022-12-16 12:07:26 | 221 comments

I was born in the late 90s so by the time I got involved in technology, my introduction to Unix and Unix-flavored systems was limited to Linux and macOS. However, I've read about the history of Unix at Bell Labs, the BSD systems derived from research Unix, and the eventual commercial releases of Unix System V from AT&T itself. I also see that HP-UX, AIX, and Solaris are apparently still maintained and get releases, which suggests that they are still being used in production in some places.

I'm curious if anyone here currently works (or has very recently worked) somewhere where proprietary Unix is still used for production. If so, can you tell me what those systems are used for and why those deployments haven't been moved to an appropriate Linux distribution?

Not suggesting Linux is necessarily better for all use cases, just wondering what keeps this small number of entities clinging to closed-source Unix with presumably pricey license costs.




I use a Mac, which is a proprietary Unix.

Technically, the Darwin kernel it uses is OSS

https://github.com/apple/darwin-xnu


Can you actually recompile your kernel on a Mac? And is it functional?

I'll bet this isn't even the source code Apple uses, but rather they have a private fork with extra patches (similar to how Microsoft publishes OSS VS Code, but then uses their own proprietary version for releases).


Yes. However, many kernel modules needed to boot on a modern macOS machine are proprietary.

If an OS is partially OSS but functionally closed, what do you call that?

If I run windows with apache web server, is windows now open source?

RMS is a zealot, but it doesn't mean he's wrong...


You can, or at least could 5 years ago or so (and I bet you still could now).

However, it involved finding some random blog post on how to do it, and then even once I got it up and running there were some issues like the fans being pegged at 100%. Still, it was mostly functional.


I can speak to that via scuttlebutt from ex-Apple friends. Apparently the Intel power management code running in macOS includes proprietary code from Intel that they weren't comfortable open-sourcing, for weird reasons.

There was an attempt to make a complete FOSS OS on top of that - PureDarwin[1][2]. It looks dead now though.

[1] http://www.puredarwin.org/

[2] https://github.com/puredarwin


Not proprietary, but Oxide Computer uses a niche (open source) Unix called illumos: https://illumos.org/

Illumos is open source Solaris.

illumos has a Solaris heritage, certainly -- but has evolved quite a bit in the last ~12 years. For details of the history, see my LISA 2011 talk.[0]

[0] https://www.youtube.com/watch?v=-zRN7XLCRhc


I know next to nothing about OSs, but I'm a big fan of your talks by the by.

"Coming of Age" and the "Don't Anthropomorphize Larry Ellison" talk are especially great.


Not for almost 20 years. HIGP/SOEST used SunOS.

Yup. SCO OpenServer 5.0.something, for some partner's accounting department. Neither the OS nor the application software has been updated since the late 1990s, but if it works, it works, I guess... (to be honest: the application software is only used to run reports in response to requests from the legal department, but apparently still can't be shut down -- I ask once a year, next upcoming 'query date' in my agenda is March 2023).

This used to run on a Compaq Proliant server (huge noisy Intel 486 tower) until the end of the millennium or so, then was converted into a VM. First on VMware, then on Hyper-V, where it has been running comfortably on various hardware (Intel Dell PowerEdge, AMD SuperServer) since.

Access is the biggest issue, as the OS only supports telnet, and serial access. So ever since this has been converted to a VM, it runs on a dedicated VLAN (666, just to make sure nobody ever misunderstands the true evil underneath...), with an AD-authenticating-HTTPS-to-Telnet bridge (coded up in Visual Basic.NET using some long-long-deprecated libraries) connecting it to the outside world.

That VB.NET kludge was recently upgraded to .NET 6, in order to get TLS 1.2 support. This was surprisingly uneventful, and I'm pretty sure this abomination gets to live another decade or so.

Ah, yes, a career in IT... Always on the forefront of cutting-edge tech...

(Later edit to, like, actually answer the question: licensing costs are nonexistent: SCO is gone anyway, and we don't require any support/updates. Migrating to Linux might be an option, but is most likely going to be hugely painful, and the existing VM scenario Just Works for everyone involved. Security and such is not a real issue: only a handful of internal users have highly-restricted access via a proxy)


Will it all blow up in 2038? Do you have a plan (to kill it, upgrade it, or retire) if so? :)

That is an excellent question, one I should investigate using a VM clone one of these days. The accounting data is frozen in time, so it should not be affected, but if the system starts refusing logons or just crashing, that would not be great (and I'm pretty sure the SCO licensing management thingy would fall over, as that was a previous source of, eh, entertainment).

The plan is definitely to retire the system Real Soon Now, but with the subjects of the underlying data springing new generations with new lawyers, ensuring some kind of Y2K38 compliance might be wise...
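For reference, the exact rollover moment of a signed 32-bit time_t is easy to eyeball with GNU date on any nearby Linux box:

    $ TZ=UTC date -d @2147483647
    Tue Jan 19 03:14:07 UTC 2038

Whether the SCO userland (and that licensing thingy) survives the second after that is exactly what the VM-clone experiment would have to show.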


Rewind the system clock by 100 years and add 100 years to the data output :)

It should buy you another 100 years.


This is the way. Just make sure the hypervisor isn't jumping the clock forward.

> Access is the biggest issue, as the OS only supports telnet, and serial access.

OpenSSH works on it. This page has links to precompiled packages:

https://scosales.com/knowledge-base/how-to-install-ssh-for-s...

links:

ftp://ftp2.sco.com/pub/skunkware/osr5/vols/openssh-3.4p1-VOLS.tar

ftp://ftp2.sco.com/pub/skunkware/osr5/vols/prngd-0.9.23-VOLS.tar

ftp://ftp2.sco.com/pub/skunkware/osr5/vols/zlib-1.1.4-VOLS.tar

Probably better to grab the sources and compile them yourself if you can, though.

> Migrating to Linux might be an option, but is most likely going to be hugely painful.

Probably. At least it sounds like you only have a few users for it, so getting them to adapt to a change of software might be easier.


Maintaining access to a 20 year old ssh server will not be trivial. A modern client would usually fail to agree on encryption protocols in my experience.

It's pretty trivial. Some things are disabled by default, but still available in the latest versions of clients. The OpenSSH client will tell you what specifically the client and server failed to agree on, and you can just add an option on the client to enable one of the ciphers, kex algos, or whatever the server accepts. I've had no trouble connecting to such servers with either the latest OpenSSH client or PuTTY.

Using ancient ciphers and kex algos can end up being just as secure as telnet.

I mean, if they can compile the latest version of sshd to run on SCO OSr5, that's great! If they can't because it's no longer compatible or whatever, are you saying they may as well stay with telnet? Obviously, not using legacy software is best, but it's not like people can just snap their fingers. Software needs to be ported, people need to be trained, etc. Work and time is needed. In the meantime, using sshd seems like an easy upgrade.

On "ancient", the ciphers and kex algos used by the OSr5 sshd above were deprecated like 4 years ago. I'd like to think that among the select group of probably-not-technical people that have access, it's not exactly the same bar of technical ability to inspect the contents of a plaintext connection as that to inspect the contents of an encrypted connection that uses ciphers and kex algos deprecated a few years ago.


Considering the other options, you can add the deprecated ciphers and key exchange mechanisms with a few lines for the host in your SSH client config on the newer system.
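Something along these lines in ~/.ssh/config, for example (host name and IP made up; the exact algorithm names depend on what that ancient sshd actually offers, and ssh prints the server's list when the negotiation fails):

    Host legacy-sco
        HostName 10.66.66.10
        # re-enable the old stuff for this one host only
        KexAlgorithms +diffie-hellman-group1-sha1,diffie-hellman-group14-sha1
        HostKeyAlgorithms +ssh-rsa,ssh-dss
        PubkeyAcceptedAlgorithms +ssh-rsa
        Ciphers +aes128-cbc,3des-cbc

The "+" keeps the modern defaults and just appends the legacy entries, so the rest of your hosts are unaffected.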

> A modern client would usually fail to agree on encryption protocols in my experience.

I run into this issue frequently. Usually it's a "client soft-disabled, alter config" situation, but sometimes... just sometimes it's "spin up an old VM to use its ssh client".


You SSH to a sibling VM running under the same VMM, connect the serial port of the SSH VM to the application VM, and the clear text only runs over that virtual serial connection traveling through the VMM.

Wow, I had the same setup with a Compaq Proliant in another galaxy, a long time ago...

This takes me back! This was roughly 25 years ago so I’ll do my best to remember.

My first job in high school was at a company with the entire business running on SCO Unix. I want to say OpenServer 3, maybe? It was essentially a terminal server with dozens of Wyse 60 terminals attached.

Anyway, as a Linux enthusiast I promptly set up a RedHat 7 install on some old hardware they had lying around. IIRC it was a low-end Pentium but it could handle a PCI 100mbit ethernet card just fine.

Anyway, the goal was to get data to/from the SCO system to something with a TCP/IP stack (RedHat machine) so it could go somewhere - samba shares on the rapidly growing ethernet network, maybe even the internet!

We ended up using UUCP over serial, scripting, and cron jobs to push/pull from directories on each side. The RedHat machine was promptly connected to a 56k modem to do dial on demand and IP masquerading for the ethernet network and uploads of specifically formatted files from the SCO system via FTP to vendors and partners.

Fun times!


Why?

Fair question, although the basic answer should be obvious: the users still need access to the data! So, the question becomes more like "why not upgrade", or more specifically "why not migrate the data to something that is not so shockingly obsolete", since it's probably clear that there is no real upgrade path here (both SCO and the vendor of the accounting system are long gone).

Usually with systems of this vintage, "just dump all the data to Excel or PDF and get it over with" is a good option, but in this case both the volume (with the requirement to run queries on it) and the limited options available for export (the system can only print predefined reports, and they don't contain everything required for filtering) prohibit that.

So, next stop would usually be "reverse engineer the application data format and convert it", but the unholy collection of binary files used by the accounting software here has defied analysis: it's not Btrieve or MS-ISAM (popular semi-database formats for COBOL and BASIC apps of the time), and decompiling the binaries only yielded some generated-by-another-set-of-tools braindamage that didn't clarify anything either.

The choice then became spending huge amounts of money, or wallpapering over the tirefire and keeping it running. Unsurprisingly, the outcome was the latter, which is perfectly OK in this case, as the system is not exactly load-bearing, and actually sort of fun to maintain.


It's not Informix-4GL, is it? If it is, the DB should be available via `dbaccess`. If you don't have that executable to verify, I imagine the binary files are probably of extensions `.4ge` and `.frm`. Then, the DB is likely a directory with the extension `.dbs`, that has a pair of files per table with the extensions `.dat` and `.idx`.

No, it's definitely not Informix, Progress, Magic, or anything else commercially available that I'm aware of, although it does have all the "sure, let's disaggregate a single record across 26 files" hallmarks of a "4G" tool. But there was a lot of vendor-specific crud around at the time...

Surely only legacy data? No new data going into it?

Since you're accessing it via telnet, could you just scrape the screen? Then you could have a script that pages through the data, copying it out a screenfull at a time.
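A small expect script could drive that, something like this (login prompts, paging text and the report command are all made up here, they'd have to match whatever the real application shows):

    #!/usr/bin/expect -f
    # hypothetical telnet screen-scraping sketch
    spawn telnet legacy-box
    expect "login:"    ; send "reportuser\r"
    expect "Password:" ; send "********\r"
    log_file -a scrape.txt        ;# append everything we see from here on
    send "run-report\r"
    expect {
        "Press ENTER to continue" { send "\r"; exp_continue }
        "End of report"           { send "q\r" }
    }

Tedious, but for a frozen dataset you only have to get it right once.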

Maybe it's a multi-step migration.

From the highly obsolete format, to a stepping stone in-between which can be converted to a modern format.


With the constraints described by OP, converting to a stepping stone would still require you to fully reverse engineer the binary format.

> with an AD-authenticating-HTTPS-to-Telnet bridge

Apache Guacamole supports AD auth and (surprisingly) telnet.
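For the record, a telnet connection in Guacamole's basic user-mapping.xml looks roughly like this (host and credentials made up; the AD part would go through its LDAP auth extension rather than this static file):

    <authorize username="legacyuser" password="changeme">
        <connection name="legacy-sco">
            <protocol>telnet</protocol>
            <param name="hostname">10.66.66.10</param>
            <param name="port">23</param>
        </connection>
    </authorize>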


Man, that brings back memories. When I kept a tile store/distributor running on SCO with their whack ass Keymark database.

Good times


That's an interesting account of evolutionary biology.

Hey, you took the tools you had and combined them into something new to solve a problem. That's literally what technology is, even if it isn't sexy.

Looks like OpenServer 5 is still being marketed by UnXis/Xinuos, so you might be on the hook for any ongoing licensing costs. They released the latest update in 2018.

Not since 2002. Irix, HP/UX, Solaris.

Correction: It has been pointed out to me that I'm currently using macOS which, Darwin notwithstanding, is technically a proprietary UNIX.


I've seen huge AIX installations maintained on contract by IAM at financial institutions.

Back when I worked at OmniTI (from like 2015 to 2017), we had an in-house Illumos née Solaris distro for cloud servers named OmniOS. I didn't use it too much myself as it was somewhat legacy, but all the other devs who used it for prod debugging loved all the dtrace/zfs stuff on it.

I wouldn't call that proprietary. It was freely available, and I worked with it for a bit, until switching that project to SmartOS, which is also... not proprietary.

Both were excellent systems to work on in dev and prod.


LOL, small world. I was at OmniTI from '10-'13; I was around for the transition from Solaris 10 to OmniOS. It was an interesting system and I do wish the rest of the freenix world would learn some of the lessons that could be learned from it. (I guess in fairness they're starting to re: openzfs, ebpf, some of the smarter service management tools, et al.) That was the last time I touched proprietary unix at work, though. Even working at Oracle's AWS competitor later, everything internally was Oracle Enterprise Linux (which is to say, RHEL with the serial numbers filed off).

Pobox also had a few racks of hardware running Illumos/SmartOS as part of their older stack. It was a pleasure to use once I wrapped my head around it, which I rarely had to because it was so reliable.

I use OmniOS - it's actually not that rare to use if you want a good ZFS server. I am about to do a fresh install of it on new hardware as well. It's not legacy at all.

In the '05-'10 timeframe we had a selection of Sun machines around the engineering department running various software, and were even evaluating new Sun hardware for file servers when ZFS was new.

Later on in eh, 2015 or so I worked at a company providing backup software that was tested and worked with every niche unix hardware under the sun. Usually to support large legacy industrial companies, think defense, materials, etc. who still used the hardware.

Banks will also use these things mostly for legacy reasons. The software got written once and has been working and validated for decades, no reason to rewrite it for a different OS just because.

Price is usually not a major factor when compared to the size of the business and number of employees.


> I also see that HP-UX, AIX, and Solaris are apparently still maintained and get releases, which suggests that they are still being used in production in some places.

When I worked for $(LargeDefenseContractor) we used Solaris for a defense system we were developing. Over time the older units (based on older hardware) would be passed down to the national guard. I would not be surprised if Solaris was still being used in obscure places in the military.

Solaris 7 was a pretty awesome OS as I recall, but pretty soon Intel and AMD started supporting linux as a workable OS option for their server chips. Then linux on the cloud took off and the rest is history.


Dated info (2001/2002?) but there was Solaris use at the NSA around that time too. Seems like the military and associated organs like to use it.

I worked on a system deployed on AIX for a few years. We used it to distribute batch workloads in parallel across a cluster of machines - no other software and OS did such a thing, maybe still doesn't. The machines themselves were PowerPC RS6000's which only ran AIX. The company already had a close relationship with IBM because they'd been running mainframes for decades. They made tons of money so saving money on licence costs was not important.

> distribute batch workloads in parallel across a cluster of machines - no other software and OS did such a thing, maybe still doesn't

What am I missing?

One of my college courses ages ago was about working with MPI which we got to run on the hpc cluster.

The last time I needed to run N copies of something I asked kubernetes to do it. The time before that, I asked $cloud_vendor for N identical VMs with the same cloud-init script.

Supposedly Google's in-house stuff that kubernetes and map-reduce (the product, not the concept) are public versions of, is all about running stuff well on huge groups of machines.
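For the trivial "N identical copies" case, that's basically a one-screen Job manifest these days; a rough sketch (image and command are placeholders):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: batch-run
    spec:
      parallelism: 8     # run up to 8 pods at a time
      completions: 8     # ...until 8 have finished successfully
      backoffLimit: 2
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: registry.example.com/batch-worker:latest
            command: ["/usr/local/bin/process-chunk"]

Each pod is still just an ordinary, independent process, though.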


I can't speak for OP or their particular use case (because I don't know about it), but kubernetes is not a panacea for large scale batch processing.

For example, large banks need to securely, reliably, and very efficiently process an unfathomable amount of transactions[1]. In this case, kubernetes would be a giant waste of resources and complexity. The former one hampers throughput, the latter one means security and reliability suffer.

For people not familiar with it (me included, actually), it can be mind-boggling what throughput is achieved, and what mechanisms for reliability are in place. Not just in software, in actual hardware; this goes way beyond ECC memory.

[1] Transactions in the bank sense, not in the computer sense, because I don't want to confuse matters more. In the mainframe world for example, there is a difference between "batch processing" and "online transaction processing", but both could be applied to bank transactions. Note that I'm not advocating for the mainframe world here.


Interesting, can you give a rough order of magnitude for the txn/s throughput achieved? Would also be really interested in more info or pointers to the hardware reliability mechanisms!

MPP was the acronym for the pattern. It let us take a binary executable which would normally just be a single PID on one node and it would distribute the PID and all its resources including files across however many nodes you had configured. The software that managed this was heavily dependent on RISC processor architecture which basically meant only AIX RS6000. Google's solution is designed for commodity processors. The modern equivalents let you do something sort of similar, but you have to design for that and as far as I know you're not sharing resources like open file handles across the cluster. We took C programs and ran them as is.

Hey there! It depends on exactly how transparent you're talking, but I rewrote some Fortran and Honeywell assembly to 'C' on lowest-cost-bidder unix workstations on behalf of NASA in 91 or 92? We used NQS [1] to distribute the workload across the nodes and it worked pretty well. Well enough for NASA to retire the mainframe. The idea is hardly unique or novel - I believe DEC's clustering software allowed something similar? [2]

[1] https://gnqs.sourceforge.net/docs/papers/mnqs_papers/origina...

[2] https://www.parsec.com/wwwDocuments/ClusterLoadBalancing.pdf


Popular cross-platform proprietary tooling for this is Control-M, though I haven't personally worked with it. There are now also batch systems with executors on k8s or Mesos, and in a simplified sense Apache Airflow is often used for this.

Not myself directly, but about 5 years ago I was doing some work for one space agency and there were some Solaris clusters which were older than me.

I’m pretty certain most if not all are still running.


Yep, still running Solaris 11 on a Sun server for SDH/SONET network management. I mean it fits the bill, since both technologies are now ancient.

I can't say bad things about the whole Sun/Solaris combo though. It's rock solid and requires practically no maintenance whatsoever.

Also, since it's completely off the internet, it's not like it could be compromised in any way.


Yeah internal networks never get breached /s

A friend has his own computer business, and some of his work involves tending to AIX systems. I don't remember now what his AIX customers use their machines for, but one might have been a biology lab.

We have a number of financial services apps running on Solaris. The cost of the licenses and hardware is insignificant compared to the cost of porting those apps to something else. You'll find that as the reason most folks stay on these platforms.

We also apparently still have some AIX, but I'm not sure what it supports. AIX is still somewhat popular in financial services; probably others as well.

I know of a couple large HP/UX shops left.


> The cost of the licenses and hardware is insignificant compared to the cost of porting those apps to something else

That’s an understatement!

Earlier this year I helped our final Solaris SPARC customer move over to X86/Linux. Every year for the last 5+ years they’ve asked us to extend support for just one more year; they were almost done with the migration! In the end, we had to compile them a few pieces of software with weird configurations because they took the standard AWS Linux and, well, I have no idea what on earth they did to it.


Yep, SCO Unix that hasn't been updated in forever, for a 1990s text-based ERP system.

Allied Irish Banks is still using Solaris, AIX, and HP-UX at the same time, according to a job posting I've seen.

The last "proprietary Unix" I worked with was AIX, way back in 2006. And that was only because their primary customer was an AIX shop. The code itself would actually run on Linux.

Previous to that (late 90's, early 2000's) it was mainly Solaris. One place was fairly heterogeneous, and had Solaris, HP-UX, Digital Unix (aka OSF/1, Tru64), and a couple others.


I think that last bit might have been somewhat common at the time. I had a small job as one of the unix-y guys at a software company dealing with a very similar collection of OSs. In their case, it was simply because the software they made ran on a lot of popular (and some not so popular) OSes, so the boxes running them (and, in most cases, building the software) had to be there.

As an intern in the mid 90s I babysat a similar mix. We also had some older SunOS boxes and some Irix, but mostly Solaris. I remember the HP as being the worst to work with of the bunch - for whatever reason if one of the tests was going to be broken it was on the HP.

We had AIX at my last job in 2021 and it was the least actively terrible part of our tech stack.

There is genuinely nothing wrong with a lot of the old stuff; in many cases it's simply better than the latest and greatest React/Mongo/Whatever webshit stack that modern devs seem to emit.

This is running Solaris.

https://www.usa.philips.com/healthcare/solutions/radiation-o...

Last I heard they were desperately trying to get it on Linux. Why it isn't is because it's a huge legacy application with some very horrible hacks specific to the OS.


The last time I got an MRI, the control room had one of those iconic purple SGI O2 workstations on the desk. I guess they were pretty popular with medical imaging products.

Yes, because SGI could do the "3D stuff" before PCs

Indeed. I remember we got two of those at a TV station I worked at in 1997-1998, used mostly for fancy weather 3D effects. The weather guy was so in love with the system. Meanwhile we were still using 3/4” Beta tapes in the editing bay for reporter packages and network footage…

Yeah, Cyberknife ran on them too.

Lots of MRIs and CAT scanners still run Unix as well, even new ones. If it ain't broke etc etc, plus as a medical device it's a whole separate set of rules, testing, validation.

Indeed. IRIX on an SGI O2 was the standard recon box until about 2005.

AIX on POWER hardware, because it runs SAP, and between IBM and SAP they provide all-encompassing certification and support. Swapping out the OS would invalidate those assurances. Yes, modern SAP runs on Linux. Yes, it would be a good idea to migrate off AIX. But the system has been in place for 20 years and it will be another 5 years at least before that migration happens.

There’s a bunch of enterprisey technology that is not yet dead, but dying very slowly.


I have a client (banking sector) which "recently" (5 years ago) started with AIX since they already had POWER systems running IBM i (i.e. modern AS/400), and had to select a Unix for a new vendor's EFT system; it was reasonable to use the (almost) same hardware and provider.

Another similar client is using Solaris (Sparc) for an analogous application; they are using it since 1996, I think because Sun (Oracle) always provided an easy migration path, so the applications didn't need to be ported.

As in most medium/big enterprises, in both cases the hardware/software price is not the main decision driver, but (IMO) it is the support/SLA, compliance checklists, and overall risk management.

BTW, in these cases Linux is also used for more "internet oriented" applications.


AIX on POWER here too but planning to move to Linux on POWER everywhere as part of the P10 refresh cycle.

HANA is already Linux on POWER so it will be nice not to have the AIX/SUSE split once we fully migrate over.

We tend to run pretty up to date config wise so it's helped quite a bit while planning upgrades. Only AIX 7.3 and SLES 15 SP3. Already moved from P7 => P8 => P9.


Here as well. Because of a customer using AIX, though it'll be phased out (as planned - they went with IBM/AIX because IBM could give them up to 20 years of guaranteed support and maintenance). In the not-so-far past there were also customers with all the well-known *nix platforms, IRIX, Solaris, Tru64, but that was some years ago. AIX is still alive though. Definitely not a bad Unix, and it has some unique features for large scale systems. IBM added a lot of Linux compatibility to AIX over the years, after all they were one of the earlier supporters of Linux. Which means it's relatively easy to develop and test on Linux before moving to the main AIX system.

In 2010-2011 I worked for (major American semiconductor firm), which had some test and validation teams on Sparc workstations running Solaris, and some design, test, and validation teams on Windows workstations with remote desktop clients for virtualized RedHat.

There was a common set of C-shell-based tools used between those two environments, among other 90s-style Unix tools, like software written against the SunOS Open Look widgets, and tools written in Tcl/Tk.

I wonder if those machines are still in use!


I worked for the DOD, and we used hardened RH servers, accessed through jump boxes/Putty, and Windows machines, and at least one Solaris box, though I never had a handle on what that was for.

In around 2004 I worked for a big five company. They were using some AIX, Solaris and HP-UX next to Linux, which was still seen as that new thing. I remember having a training on some software where HP-UX workstations were wheeled in. But those were really getting old at that point.

I worked at an ISP in 2007 which was running mostly on Sun hardware and Solaris. This was because of huge discounts provided by Sun. Most devs ran Linux on their workstations. In 2014 I got to work with some guy whose previous project had been at that ISP, who was at that point desperately trying to move off Solaris because they had to start paying list price for the OS and it was much too expensive.


I used to work for Sherwin-Williams. The in-store computers run some custom *nix OS. The software that company runs on is a text based ui that hasn't changed since it was introduced in the 90s.

They released a major update in 2020 that allowed you to move windows around the screen. It was groundbreaking.

But let me tell you, this system was absolutely terrible. All the machines were full x86 desktops with no hard drive, they netbooted from the manager's computer. Why not a thin client? A mystery.

The system stored a local cache of the database, which is only superficially useful. The cache is always several days, weeks, or months out of date, depending on what data you need. Most functions require querying the database hosted at corporate HQ in Cleveland. That link is up about 90% of the time, and when it's down, every store in the country is crippled.

It crashed frequently and is fundamentally incapable of concurrent access: if an order is open on the mixing station, you cannot access that order to bill the customer, and you can't access their account at all. Frequently, the system loses track of which records are open, requiring the manager to manually override the DB lock just to bill an order.

If a store has been operating for more than a couple of years, the DB gets bloated or fragmented or something, and the entire system slows to a crawl. It takes minutes to open an order.

Which is all to say it's a bad system that cannot support their current scale of business.


That does sound like an absolute clusterfuck of software. But just in case it wasn't clear, I don't think that "custom *nix OS" is to blame at all. And as for text based UIs, they're still fantastic for things like data entry and lookup.

It just seems that the actual software running on the OS, with the text UI, seems to be profoundly terrible in your case.


Yeah, the OS itself was passable. It had the absolute bare minimum required to work, so not much could go wrong.

The ancient software also wasn't bad. After a few months learning the hotkeys and menu structure, the speed with which you can enter and process data was absolutely incredible. It had problems, but usually minor and patched in a reasonable time for corporate IT.

The real problem was their database management. I don't have any information, so I'm assuming here, but my impression is that they're using some positively ancient database software. Doing a backup of the local cache took multiple days, though it didn't lock the DB. Requests to HQ were incredibly slow, about 30 seconds to pull an account record. Larger queries like neighboring store inventory took a minute or two. Running a report on local inventory would regularly take tens of minutes, and it only had to read the local cache.

The database was a few tens of GB on disk. Granted, I don't know much about databases, but if running something like "SELECT * FROM inventory WHERE sales < 100 ORDER BY lastSaleDate" on a 30gb database takes 15 minutes, something is wrong.

There were a lot of problems we ran into on a daily basis, and almost all of them related to database functions. Particularly when a record failed to unlock, sometimes we'd have to reboot the local server, which caused all terminals in the store to reboot. That usually took a good 15 minutes.

Personally, I rather enjoyed not having Windows at work. For the most part, everything Just Works, and given the hardware, it ran ten times faster than windows would have.

My current job is a Windows development shop, and I don't have enough curses to describe the pure rage I feel every time windows does something stupid (which is approximately every three hours).


Ugly. This makes me seriously wonder whether they just did not put an index in, and unfathomably many hours have been wasted on useless full table scans for something that would have been fixed with a handful of CREATE INDEX statements? Though that's a lot of conjecture, and the real answer is probably more complex. But your examples do make me wonder...
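For the example query quoted above, hypothetically even one composite index would help:

    -- hypothetical; matches the SELECT ... WHERE sales < 100 ORDER BY lastSaleDate example
    CREATE INDEX idx_inventory_sales ON inventory (sales, lastSaleDate);

At minimum the WHERE clause becomes an index range scan instead of a 30 GB table scan (the ORDER BY may still need a sort, but on a far smaller result set).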

Likely wasn't a RDBMS at all, but flat files with maybe one key index (or perhaps none).

I really doubt that a schema like that would survive at the kind of scale SW operates at. I'm about 80% sure I saw mention of database operations in the startup/shutdown logs, but I could be misremembering.

My guess is that they bought whatever database software was popular in the early 90s and never changed.

I do know they've been slowly changing the schema over the years, increasing the number of digits in the account number, adding email fields, that kind of thing. But I doubt there's been any major upgrades.


Well, early 1990s database software wasn't awful. Talking about stuff like Sybase 4.x, roughly equivalent to early MS SQL Server, also Oracle, Informix, DB/2, etc. Indexes, query planning (perhaps with hints), cursors, concurrency, were all adequately solved problems by then.

I remember running some very large reports over multiple years of inventory movement. Disk access on the server was totally saturated for a good 20-30 minutes.

Thinking about it now, it had to have read out the entire database multiple times.

Oh yeah, these reports weren't processed on the server, either. The network link on the terminal I used would be pegged at the max rate the server could read from the disk. I never really figured out what that's about.

I guess it's trying to stream large chunks of the DB to the terminal and running the query locally? No clue.


Some genius probably thought processing queries locally would be faster than on the database server. Fail.

I worked for a vendor that SW was a customer of, and we were asked about integrating with some SW systems... what you are describing resonates.

Was it EDI? I had a problem with a vendor where my store ordered something in 2015, marked it as not received, then received it later and sold it without correcting inventory.

The vendor went into a cycle of refunding and re-billing my store for that part every few months for years.

Fortunately both our books came out even in the end, but Jesus what a stupid thing to happen.


It was ultimately all of SW. Before the Valspar purchase, SW used SugarCRM, and my team there (TAMs) were with the account and regularly out in CLE. I really like the team there from SW. One of the best customers I ever dealt with overall.

I know of a small shop that made a good living buying software packages like that, rewriting/modernizing the technology, and selling the new version back to all the existing customers. It was kind of a win for everyone -- the customers got updated, secure software, the owners of the old tech who were getting nothing out of it (this stuff is not SaaS) got some money, and some developers got work.

That sounds like a great gig tbh

That explains why they had me drive across town only to find out another store very close had the color I need.

Oh no, that's an entirely different problem. Because of supply chain issues, stores are allocated product based on how much of that product they sell. That sounds like it makes sense, but really what it means is that smaller stores slowly get less and less product until they can no longer meet local demand. There is no way to break the cycle.

This is half of the reason everyone who worked at my store walked out on the same day. The other half is that the only people who worked there were me and the manager, and it had been that way for six months.

My advice is to avoid SW these days and go to Lowe's. SW is contractually obligated to ensure that Lowe's always has inventory. But do spend a little extra money for their mid-tier product. The cheapest stuff is trash and you will regret it.


Heaps! My main clients use AIX on Power for most database workloads (DB2). I had a client that used HPUX for everything (SAP). One could Telnet into their core accounting system from the corporate network.

In my previous job (mid-2010s) I encountered a ton of AIX and Solaris, mostly at very large banks.

I'd argue Mac OS X is proprietary UNIX, but I'll admit to that being a bit pedantic.

It's definitely partially proprietary, and it is definitely a Unix operating system.

It's certified, so I would say it counts as proprietary Unix

Interestingly enough Solaris isn't certified.

At a previous employer we would run a large number of Oracle installations on Solaris, on Oracle hardware (SPARC64). I don't think that uncommon.


Solaris is UNIX 03 certified. Solaris is actually a reference implementation platform for:

- TCP/IP
- NFS
- POSIX
- XPG4
- XPG6
- SVR4


Didn't Oracle stop paying for the certifications a few years ago? Solaris certainly was certified at some point.

Yep. Two of my local colleagues were actively working with Open Group until then, fixing UNIX compliance bugs, running the test suites, producing the reports etc.

>Solaris is actually a reference implementation platform for:

Was actually. It is no longer a reference platform for TCP/IP, NFS, etc.


In the spirit of pedantry, I’ll point out it’s actually called macOS these days.

The ‘X’ and capitalization went away when they finally bumped the version number to 11.


Actually, it was changed to macOS in 2016 for 10.12 Sierra, to align the branding across Apple's various platforms (iOS/macOS/tvOS/watchOS).


It's rather hard to get rid of some old CAD systems in the automobile industry. I was part of a bigger migration almost 20 years ago, when a rather big org moved from CATIA V4, basically running on all the Unices, to the new V5 version, which also ran quite well on Windows. I know that some SGI systems still had a slight advantage when there was a lot of stuff loaded (unified memory?), but they were on their last days.

Or so they thought. Now it looks like most of the engineers moved on to the slate green pastures of Wintel, but occasionally an old format or workflow tends to pop up. I know of some software still being updated for those old V4 machines 5 years ago, but I've been out of the loop since.

Same software is used in aerospace, too. Where you're not switching to totally new models as fast, so I wonder how legacy-laden their software infrastructure is. "Who here knows both French and early 00 SGI admin?"


How about proprietary "non-Unixes"?

Anyone running VMS? RSTS/E ? Or on rare hardware, OS-32 on a PE 8/32, or MPX on any SEL 32 family? MPE on Harris ?


I have an AlphaServer in my collection, running OpenVMS. It's been a while since I've booted it up. It's very loud.

I know the problem. My AS/400e would also get much more powered on time if it wasn't so damn loud.

I'VE GOT A SUN BLADE WORKSTATION. YEAH IT'S RUNNING RIGHT NOW, HOW COULD YOU TELL?

"NO MOM, I CANNOT TURN THAT OFF. IT'S MY SERVER."

ssh'd to a live alpha server at work last year. there were some files on there I wanted to look at.

the machine has to be at least 20 years old at this point. but it feels fast. bash. find. xargs grep. command line stuff very responsive.

it felt fast back then. i do remember that.


Alphas were incredible when they first came out. The one I have is a DS10, probably also 20 years old-ish. It still feels responsive.

It was a DS-20

Part of our business-critical financial analytics software was running on VAX/VMS, then AlphaVMS decades ago. Written in DEC Pascal (let's just say - not really compatible with any other Pascal dialects I am aware of). We managed to port the whole system to Linux (before Linux became fashionable) by a godawful contraption made of Scheme, Perl, shell and elisp - it "translated" the Pascal code into a dialect understood by p2c, which in turn translated it to C. That was mucho fun!

Unironically this question could deserve its own thread.

Last time I used a Xerox digital printing press I was greeted with a "SunOS" version string when opening a shell - it being some version of Solaris. It wasn't much impressed by my Linux CLI-fu. Looking at current brochures, those still ship with it. The product is called "Freeflow Print Server"; the GUI for the job queue looked Java-based, as it also has Windows incarnations. It does the raster imaging before printing. Good times

90s to early 2000s, it was "fun" to accidentally use "killall" on the wrong machine. System V derived UNIXes really mean "kill all", not what Linux meant with it ("kill all with a certain name"), and what today is better represented by "pkill" anyway.

Also reboot, which is safe on Linux, but does an instant power cycle with no proper shutdown on others.
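Roughly, from memory:

    # Linux (psmisc): kills processes *named* httpd
    killall httpd

    # Solaris and other SysV descendants: /usr/sbin/killall takes no name --
    # it terminates (nearly) every process; shutdown uses it internally
    /usr/sbin/killall

    # what you almost always actually want, on either family
    pkill httpd

    # and the polite way to bounce a box anywhere
    shutdown -r now        # old SVR4/Solaris spelling: shutdown -i6 -g0 -y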

Good one. That as well, and the reason why I'm still first trying "shutdown -r now", and if that doesn't work cautiously type in "reboot --help" (hoping that it parses the argument and does nothing at first), even on busybox.

I know what you mean, but I still laughed at “cautiously type in ‹thing that doesn't care how cautiously you typed it if it's going to screw up›”

reboot and halt used to do the instant thing on Linux too; it was essentially a lot of drift and well-meaning fixes for desktop users (because actually invoking emergency halt or reboot might have been more useful for remote machines that you couldn't power cycle with a finger)

ORCL HW business increased YoY by 11% in the last earnings release.

The most shocking thing in this thread

Active HP-UX admin here.

The software being run is SAP R3/Oracle and there are plans to replace it, but that is not happening anytime soon due to the usual delays associated with ERP migrations.

License cost is a red herring here, especially when dealing with enterprise applications from the likes of SAP, Oracle and IBM; heck, we're probably paying as much for our SAP on SUSE subscriptions as we do for our HPUX licenses, and the real license costs are with the applications and databases.

And it's not that long ago (say 2014) that there were niches where the only real cost-effective way to get enough single-box I/O performance was to buy a non-x86 box that came with its own Unix, so there are a lot of systems out there where the hardware isn't actually old enough for a 1:1 migration to make commercial sense, and rewrites/redesigns of ERP software are risky, with most projects overrunning both time and budget by an order of magnitude, if they don't outright fail to deliver a new system.


When we ran HP systems it was because they had PickBasic

Same! Refrigerator sized HP 9000's running HP/UX and Pick.

Itanium or PA-RISC?

Itanium blades.

macOS is the most widely used UNIX on the planet.

Proprietary, isn't it?

Well, Android is Linux, does that count as "Unix"? I think there's a lot more Android world-wide than iOS/macOS/iPadOS/watchOS/tvOS/etcOS combined?

It depends on the definition of UNIX, but typically Linux doesn’t count as UNIX.

You can draw a continuous—if very torturous—line between Bell Labs’ UNIX original codebase and modern macOS/iOS.


Not really torturous, you can still find many FreeBSD 5 components in modern macOS, two decades later.

Strictly, no - the only vendors whose systems are currently certified for the UNIX trademark are Apple (macOS), IBM, HP, and SCO. And in practice the user interface and APIs are so different that it's much more accurate to say it has some UNIX-like parts, but isn't UNIX - ditto the iOS family.

No, because the userspace is based on Java and the NDK uses Android-specific APIs; only the ISO C and ISO C++ standard libraries are officially supported, and anything POSIX-related is not part of the official NDK APIs.

On rooted devices anything goes; on a random Android device your application might get killed for trying to access private APIs, meaning anything that isn't documented as an official NDK API.


I'm working (although rather superficially) with an ASML PAS5500 KrF DUV stepper. Its control workstation is based on some kind of Unix (I guess Solaris).

The stepper is a tool that is too expensive to retire, and if routinely maintained, can remain operational and happily expose wafers for more than twenty years[1]. No surprise to see it contains some ancient (by consumer electronics standards) software.

[1] https://www.asml.com/en/news/stories/2021/three-decades-of-p...

Also, I used a HP 86142B optical spectrum analyzer last year, which runs HPUX.


Solaris indeed. More recent scanner products use Linux.

Yep, legacy Tru64, AIX, Solaris. The big issue is SSH encryption.

Tangentially related since there has been a lot of talk about working with ancient systems: does anyone working with TXDOT know if they are still using mainframe systems to drive their core database and services? I worked with a quasi governmental agency about a decade ago and became pretty intimately familiar with their mainframe setup. Didn’t look like it was going away anytime soon when I was there.

Usually it's because there's a database license or something that costs even more than the OS. No one wants to be there, but it's possible to get trapped.

If it ain't broke, don't fix it.

You are kind of combining two things though: legacy systems, and proprietary systems.

There are modern proprietary systems as well. RHEL is a good example.

I'd argue it's not a "small number of entities" though. You'd be shocked by what legacy systems are running in the most important places on the planet...maybe scared. Unfortunately/Fortunately, nuclear facilities aren't running Linux Kernel 6.X

The fact is, a lot of (probably most) problems solved with a computer don't need further updates. As long as the hardware continues to function, all is well.

Something you may have not considered is that when the time does come, and the hardware does fail, I'd guess most organizations will opt -- and even go out of their way -- to source those same legacy components they had before to keep things running exactly the same instead of upgrading to a more modern solution.

I've had to do this a number of times for clients. Not long ago, I had to source an old mainboard for a system that was 20+ years old... in doing so I realized there is some good money to be made if you can source parts for systems about 20 years in the past, because the board was like $300 (this was 2017, and the board had a 33 MHz processor and like 8 MB of RAM)

If you don't have to touch these systems, count yourself lucky.

If you do touch these systems, thank you for your service.

In regards to the modern proprietary systems, there are many, but if you consider RHEL for instance, there is a lot of value for large organizations. They can reduce the number of on-hand personnel, who would probably be less efficient at solving an OS issue than a RHEL engineer. As an example, the Federal Reserve runs modern RHEL... but I'd guess if you dig deep, they have some really old stuff too...


> There are modern proprietary systems as well. RHEL is a good example.

RHEL is free software, isn't it?

(it's just that if you decide to fully use the rights given by the licenses, bye bye support and updates from Red Hat IIRC)


RHEL is not free, but the word "free" is loosely defined in the open source world. RHEL is open source, yet it is also proprietary. Yes, if you want to build from source you can, for free, but you get no access to the repos or service, and you have to strip out all references to Red Hat (trademark / copyright infringement) if you want to use it commercially. So, yeah, if you want to use it at home or to learn, it's free.

Isn't that what CentOS is?

edit: just googled, seems dead these days, but Rocky Linux is more or less the same thing


CentOS still exists, and is now owned by Red Hat. Originally, CentOS was meant to be the community ("free") version of RHEL, yet after Red Hat took it over it has since become a mid-point in the stream.

Before it was Fedora -> Red Hat

Now it is Fedora -> CentOS Stream -> Red Hat


> but the word "free" is loosely defined in open source world

The word "free" is well-defined in this context (I carefully used it in the phrase "free software"). A free software respects the 4 freedoms given in the free software definition [1]. You can run it for all purpose, study it, redistribute it, distribute your modified version.

Within the law, of course. Always. Licenses are restricted by the law.

> RHEL is open source, yet it is also proprietary

That's not possible because proprietary means "not open source" (as defined by the Open source Definition [2]). Or "not free" (as defined by the Free Software Definition).

Moreover, something is open source if and only if it's free software.

(save some anecdotal licenses that are considered open source by the OSI but not free by the FSF, but that's not relevant to RHEL).

It's exactly because RHEL is free software, or, said differently, open source, or said differently, not proprietary, that open source / free alternatives like AlmaLinux / Rocky Linux, their friends and formerly CentOS can exist, legally.

Note, open source ≠ source available (which is a necessary condition for open source but not sufficient).

> you have to strip out all references to Red Hat (trademark / copyright infringement)

Sure. That's trademark, and not related to copyright. Open source / free software licenses are related to copyright only. The licenses do offer you all the rights guaranteed by the definition of free software, of course you still have to respect the law by using those rights, including trademark.

Respecting licenses (based on copyright) and respecting brands / trademarks are two orthogonal dimensions of the matter.

You could tell me that see, you can't redistribute RHEL verbatim because of trademark so it's not free software. Wrong. It's right that you can't redistribute RHEL verbatim because of trademark, but that's not enforced by its FLOSS licenses. It's enforced by law (hence my mention of the "within the law" restriction earlier). It's subtle but important nonetheless.

Firefox has similar restrictions. Mozilla allows you to redistribute it under the Mozilla Firefox brand only if you don't patch it too much. Formerly, it was stricter than that: you could not redistribute it under the Mozilla Firefox brand if you changed anything. GNU/Linux distributions could redistribute it as Firefox even though they patched it because Mozilla explicitly allowed them to do so. That's why Debian redistributed it as Iceweasel at some point, and now as Firefox again. They first decided that they didn't like needing an express authorization, and then Mozilla requirements were relaxed, ways of doing things changed, and using the trademark was fine again [3].

> RHEL is not free

It's not free as in gratis.

It is free as in libre / free software. It's certainly not proprietary. Except for proprietary software it has in its non-free repositories.

[1] https://www.gnu.org/philosophy/free-sw.html.en#fs-definition

[2] https://opensource.org/definition

[3] https://lwn.net/Articles/676799/


Look up the actual definition of "proprietary". If a company owns it, it's proprietary...even open source software.

Open source software can cost money, and still be open source...it currently exists.


> Look up the actual definition of "proprietary". If a company owns it, it's proprietary...even open source software.

It's not what this word means in this context. Almost nobody uses "proprietary" this way to describe software, because it's used as opposed to FLOSS, and because it would be useless: a piece of software always belongs to someone unless it has reached the public domain (which applies to very few pieces of software). It always has one or several authors, working for someone else or not, so by this definition any software would be proprietary. Why call a piece of software proprietary then, if it's always the case?

It's like proprietary protocol meaning "non-standard", "specific". Proprietary has several meanings, depending on the context.

You need to use words the way they are used by everybody to be understood correctly. Or stick to your usage and you'll end up in futile debates again and again. Or try to convince people that your usage is the right one, if you have strong reasons to do this. But good luck with that. "Proprietary software" has been used with this meaning since the 80s and it's not controversial AFAIK.

> Open source software can cost money, and still be open source...it currently exists.

I know, I work at a FLOSS company.


I've used AIX, HP-UX, OEL (Oracle Enterprise Linux) and RHEL. Probably some other distros. We use it because there's a support contract and that sort of thinking is surprisingly durable. Also because it costs money to migrate.

Until last year I worked for a financial services company which was so big its owner ran for President. While I was there, they were doing a major project to move key, irreplaceable core components off of big iron Solaris and AIX machines, and onto Linux on private clouds or commodity machines. Hell, I even worked on a (tiny) portion of the project. It might be finished by now? Maybe?

Towards 2014 or 2015 my previous work (some hosting company) had some AIX, Solaris and SCO, as well as some IBM i (aka OS/400) which isn't a Unix. AFAIK they were used because of the choices of slow-moving/risk-averse big corps, mostly to run some Java software or Oracle/Postgres/Sybase databases that could just as well run on Linux.

My take on each of the OSes was:

AIX and the associated IBM stuff is kind of a mess. I encountered a bug where /etc/filesystems (fstab equivalent) was parsed differently during boot than when using the mount command manually. The focus seemed to be on the use of the menu-driven smit utility as the primary admin tool, with automation of admin tasks an afterthought. The builtin commands are often not very practical, requiring multiple steps to do things that you're used to doing in one on Linux. Installing some open-source tools is essential to sanity. Some of IBM's own tools use expect on their own software (looking at you, lpar_netboot).

SCO is clearly unmaintained stuff that looks like it dates from 30 years ago. At least it's simple to use.

Solaris had some nice features, like Zones or ZFS, but much to my dismay I couldn't play with them as I was made to install an old version of the OS as the newer version wasn't listed as supported by the version of Sybase that was to be installed on it.


The thing I loved about AIX is SMIT (System Management Interface Tool). Available both from the command line and the GUI, you could completely manage the AIX system from it, but at any point, you could also have it print out what commands it was going to run (yes, it built a shell script as you navigated through the system). I've never seen a system like it anywhere else.

i had totally forgotten about smit - good times

My last job (I left in October) still uses Solaris on the SPARC-64 architecture. It was selected because the customer, The Oligarchic Cell Phone Company, required NEBS (Network Equipment Building System) compliance (it wouldn't surprise me if it's Federally mandated) and SPARC had said support. The hardware reminds me of a freight train---slow to start, slow to stop, but once going, it keeps going and can handle a frightening amount of load. But all of the development is done on Linux and Mac OS---we just stick to POSIX and it all works out.

A lot of the gas industry (wholesale fuel distributors) are still using AIX and OpenVMS. A lot of proprietary integrations, I presume, as well as ties to very very low powered chips and devices (have you seen how slow a gas pump computer is sometimes?)

Last time I got near any sort of proprietary unix was probably AIX at IBM. It is still actively developed for their p-series POWER servers. But that was probably 7 years ago now.

Back in the 00s it was common to have to work on multiple proprietary platforms. I did a lot of platform engineering work for one product that ran across Solaris (Sparc and x86), AIX (POWER), HPUX (PA-RISC and Itanium), Linux (x86 and System Z) and Windows.

Now ... if I'm lucky I don't have to care about the platform at all, I just write lambdas in my language of choice and throw them at AWS. It's a very different world!


Not since 2012. Company ran on a big Oracle DB on very expensive Solaris machines. They started in 98 or 99 so this was a vaguely defensible choice.

Several times we looked at moving the DB to Linux/x86 but Oracle's pricing made it non-economical, or so I was told. All the app-tier servers (Java) ran Linux.

Haven't used a non-Linux system in production since leaving that company.


I heard that Boeing archived an Apollo Domain network, because some engineering documentation was developed on it.

https://en.wikipedia.org/wiki/Apollo/Domain


Off topic, but I didn't know Wikipedia allowed slashes in article titles.


Former Apollo/Domain sysadmin here, which I became at one company because I was already doing their Unix sysadmin job. I hope Boeing had somebody periodically resetting the system clocks on those, unless that Apollo/Domain bug had finally been patched after the HP takeover:

https://jim.rees.org/apollo-archive/date-bug


Hopefully. (I heard this in the early 2000s, from someone who'd worked at Apollo.)

For a while after that, I could imagine some HP account manager for Boeing being able to pull strings to get the patch, even if it came from another division. And since HP got Domain/OS onto an HP 9000/4xx post-Apollo-acquisition, I'd guess HP probably retained the code and know-how for a while.

(Though I couldn't guess how long they kept their Domain lab systems around, especially if there were business reasons to push customers to HP-UX on HP-PA, rather than gouge a handful of people who'd still pay support contracts for Domain. I've seen companies run sustaining engineering labs for a while, but then eventually discard key tooling, like the test rigs for a product, which irreversibly marks the end of that, even if the engineers are still on-staff.)

I wonder whether HPE now has the Apollo/Domain IP, or who has it?


I started in the industry when you were born, with IRIX, HP-UX (horrible) and Solaris. I was on Linux by 97/98, and I haven't seen a proprietary Unix system in production use in any workplace I've been in since about 2010.

Isn't https://oxide.computer/ continuing development of SmartOS (which is based on OpenSolaris)?

Yes, and it’s unix, but it’s not proprietary.

AIX

Luckily it had bash, so it felt like a Linux system for my scripts. There is a command to enumerate all the hardware running on it; I remember running it to see what was assigned to the LPAR the AIX instance was running on. It took 3 minutes to complete :)
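
Naming it is a guess on my part, but the usual suspects for that job on AIX are:

    prtconf       # summary of the system / LPAR configuration
    lscfg -vp     # verbose list of every device, including VPD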

So yeah, a mainframe. Telco. IBM, obviously. It was used as a massive Oracle database.


> AIX
> So yeah, a mainframe.

If it was AIX, it wasn't a mainframe. AIX runs on IBM's POWER systems, which are their "midrange" line. (There was, for a brief time, AIX for S/370 I believe... but that disappeared long ago.)

Mainframe would be a System z (the S/360->S/370->S/390->z/Arch lineage), typically running z/OS (or Linux for z!)


Which folks haven't mentioned yet, but z/OS is also a proprietary UNIX, albeit in a very strange way: it is sort of a subsystem / a different way of looking at the same things, but it is also implemented in the base control program (the kernel), so it isn't really a subsystem.

And it isn't the same as AIX. IBM has two living commercial UNIXes, which is kind of wild.


Yes, and the z/OS USS is what I call aggressively POSIX-compliant. Its mission in life is to hurt you with strict adherence to the spec, not ever help you!

If there's any optional part of POSIX, it's not in there. You only get the 'MUST' parts of the base spec. No optional extensions, etc. When you're porting over C code to z/OS USS, you are often surprised when things you assumed were just part of every POSIX implementation aren't there. Fun times for all!
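
A quick way to get a feel for how much of POSIX is merely optional is to probe the advertised feature flags - a rough sketch (variable names as glibc's getconf reports them; I'd expect several of these to come back undefined or -1 on USS):

    # Each prints the supported revision, or -1/undefined when the
    # (optional) feature isn't provided by the implementation.
    getconf _POSIX_SPAWN
    getconf _POSIX_ASYNCHRONOUS_IO
    getconf _POSIX_MESSAGE_PASSING
    getconf _POSIX_SHARED_MEMORY_OBJECTS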


I worked at Chef Software up until last year and we maintained binary distributions of the software (including ruby and lots of lower level libraries) that were built on Solaris, AIX and Windows in addition to Ubuntu, RHEL and FreeBSD.

Most customers were on Ubuntu/RHEL/Windows. There was very little on FreeBSD, AIX or Solaris. We had zero interest in HP-UX; I think that one is dead enough to be ignorable. Banks and financial services have a tendency towards AIX, while Solaris, I think, was primarily one customer with a lot of legacy. AIX and Windows were the biggest pain in the ass, but every time we tried to kill support for them, people discovered sizable contracts that had been signed with us (yeah, our tracking in Salesforce was bad).

My background is that I learned C and Unix on a Tandy 6000 back in 1989/1990, then in college used and worked on a wide variety of O/Sen (Dynix, BSD 4.3, Digital Unix 4.0, SunOS 4.1.4, Solaris 2.4+, Irix 5.x/6.x (I think), something that ran on a VAX, NetBSD and later Linux). I ported Nmap to a bunch of those and did the original GNU autoconf work on it. I've been mostly Linux since 2001 (Amazon from 2001-2006).


Is AIX functionally superior for banks and financial services? Or is it a matter of legacy software requiring AIX?

I can't really answer that, but my general impression is that they actually like it and it isn't just legacy (or at least some of them).

As somebody who likes it I think I would explain it along the lines of FreeBSD vs Linux.

AIX is a complete-package OS with some nice enterprise features, and it is modern and similar enough to make porting from Linux not too terribly difficult. Even something like RHEL is still very much in the Linux tradition of lots of pieces of independently developed software thrown together, at times haphazardly, and called an OS.

That being said, we of course run RHEL as well, but mostly for cost reasons for less important applications, as opposed to any particular dislike for AIX.

USS on z/OS is a much more difficult IBM UNIX to work with, in my opinion, and tends to generate stronger opinions from both sides of the proprietary/OSS fence. They are making progress, but the compiler and compatibility situation is not as good as on POWER/AIX.


I work in electronic design automation. When I started, Solaris on Sparcs was dominant and we supported a number of Unix flavors, but all that died long ago, Linux killed it off. The Solaris port was the last to die, and we were very happy when it went so we no longer had to support their crappy proprietary C++ compiler that, at the time, still was missing important C++98 features.

I had a Solaris workstation on my desk from 2000 to 2005, and we had an industrial-fridge-sized IBM pSeries running AIX for big linear programming problems. You could not run Linux or BSD on those machines, and you could not get close to their performance with Intel CPUs. We also had an Itanium HP box for a while, running Digital UNIX.

I work at a telco. We have many museum-grade Unices in production, mostly AIX and Solaris - we even have a Wang still running in one of our datacenters. It's largely that there isn't a business case to migrate it all to modern hardware - one day we'll no longer operate a legacy copper PSTN network, but until that day comes, we still need it all. We still have maintenance contracts on some hardware, like our Sun Fires. It's very clear that replacement parts are scavenged from whatever they can find second hand.

I recently rewrote the system we use to push user accounts and passwords to systems that don't support LDAP. It was amusing to write an app using a current-day stack on RHEL 8 that purely exists to handle these very legacy systems.

One of my favourite systems I've had to work on is running Solaris 2.5.1. Users are added to the program by editing the source code and recompiling it. How times have changed.


That makes sense; paying for maintenance would be cheaper than rewriting everything only to dump it five (or even ten) years later. Just make sure everything can be pulled out before 2038, or else you'll need to rewrite it all anyway.

It's most definitely already on its way out - but it's a slow process :) Most of the switching hardware is significantly older than I am!

Who does the maintenance? Sun/Oracle or some third party?

I love when we get telcos as customers for security assessments, because it means I get to see what weird old shit they have on their networks.

I find the sysadmins at telcos come in three flavours: the ones who will work with me to secure their cool old shit, the ones who want to get rid of it, and the ones who have fucking meltdowns if I so much as consider touching their weird old shit.

What's neat is the weird old shit usually gets support forever, whereas support for modern shit tends to be short.


Replacing something that has worked for 25 years is a good way to end up with something that doesn't last another 25 years.

I worked for a telco for 15 years. When I left, the company was starting to add load to a new undersea cable.

Turns out most of the wet plant (a fancy way of saying the equipment is submerged under water) consisted of HP-UX systems.

Submarine cables must last for 20 years minimum to be financially viable... so in 2035 they will still be running.


In my very first job, at the end of '00, we had a couple of SCO Unix systems that needed to remain active so we could keep developing software for legacy customers. They were running on Olivetti M24s from the 80s; it was a nightmare to find spare parts on eBay when something broke. There were also a couple of x86 workstations running OS/2 Warp for other customers. My former colleagues told me the SCO Unix systems are still there, and the OS/2 workstations were virtualized 3 years ago.

I worked in a semiconductor probe-and-test group and we had many older Teradynes connected to Sun computers running Solaris. There were also old mainframes running Solaris (these were slowly being sunset, but from my understanding it was challenging due to many, many cron jobs, etc.). We were tasked with keeping all systems up year round, which made any kind of migration slow.

I occasionally work with a bunch of 'em: AIX, Solaris/SunOS, HP-UX, SCO, and a couple of others at various customers' sites.

They usually are just running some closed source service that is too expensive/impractical to replace, and aren't causing enough pain and suffering to anyone so there is no business case for replacing them.

I like having them around. Sure, projects could be created to replace them with some modern webshit on Linux, but it would probably run into the tens of millions of euros, take years, and work less reliably than the shit that's been chugging along just fine for longer than I've worked in tech.


Oracle databases are still heavily deployed on Oracle servers with SPARC processors running Solaris. While Oracle databases run very well on Red Hat or Oracle Linux, Solaris really is the default platform.

Solaris is used in life-critical environments, for example medical systems or the energy sector - anything that's absolutely life-critical and must run for thousands of days without rebooting. And it's really nice for running software because of its advanced features: the runtime linker, Zones containerization, the Fault Management Architecture, and SMF. It's awesome software on awesome hardware.
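
For anyone who hasn't touched it, the day-to-day feel is something like this (a rough sketch of the common commands, not an exhaustive tour; the zone name is just a placeholder):

    # SMF: show unhealthy services with an explanation, restart one
    svcs -xv
    svcadm restart svc:/network/ssh:default

    # FMA: ask the fault manager what it has diagnosed
    fmadm faulty

    # Zones: list configured/running zones, log in to one
    zoneadm list -cv
    zlogin myzone       # "myzone" is a placeholder name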

QNX. Technically it's not a Unix…

Yes, mostly IBM's AIX and a few ... ah the name escapes me now ... the top notch graphical design workstations of the time.

The reason is maintenance contracts for very long-term systems that never get upgraded (they are ultimately replaced completely by something else).

They are similar to that old bulb in a fire station[1]: it is strictly forbidden to breathe around these systems, and if you sneeze you are fired on the spot :)

We had to move them twice between data centers. I had some popcorn with me when they were powered off, transported as if they were the Mona Lisa, and then restarted, with the sysadmins not daring to watch and asking about the flashing numbers. Good memories.

[1] https://en.wikipedia.org/wiki/Centennial_Light


Silicon Graphics workstations?

Exactly - thanks for reminding me !

At my work we’ve got a bunch of PlayStations around the office.

We used Solaris on x86_64 for a very long time, until Oracle effectively shut it down, as it was the best platform for running high-performance JVM jobs: much more predictable and observable than Linux.

But later Oracle cut the JVM/Solaris integration/QA team, and shortly after that discontinued Solaris for new hardware completely.

Linux it is now.


I work in the enterprise resource planning (ERP) space, mostly PeopleSoft. I'm around things like SAP, JD Edwards, Oracle E-Business / HCM Cloud, etc.

Proprietary Unixes are frequent and have probably made up 75% of my career, including my current massive project. Note these are modern and up to date - hardware and software usually purchased brand new for the implementation project, not inherited legacy stuff (which seems to set me apart from most respondents here).

Several reasons, but note I have a very bottom-up perspective.

1. Support. I think this is the major thing. Having a reliable long-term vendor with a pricey, well-written, steady support model is important to companies who use ERP.

2. Related is the perception and reality of stability. AIX on POWER is as proprietary as it gets. These things get rebooted once or twice a decade. A hardware upgrade to another frame is a live migration through firmware. It is not fancy or pretty, but God dammit, baby, it works. The perception is there too - that Linux scales well out but not up, that its vendor support is not at the same level, that it moves too fast and breaks things, etc.

3. Deals and contracts. There may be a legacy hardware footprint, or the client may get a package deal covering application, middleware, database and hardware.

My personal perspective? Proprietary Unix is ahead on internals, behind on shiny; boring and reliable. There's a lot to be said for distributed cheap boxes over proprietary big boxes. But I don't think modern SREs fully grok how rarely I had to deal with a hardware or OS issue or outage on these things. Anecdata, sample size = 1 + gossip, but it's just a very different mindset and, here's the trick, there's nothing inherently wrong with that mindset, even if it's not currently in vogue.

I prefer to work with Linux for a few reasons, including shiny and resume-helpful, but honestly, from a business / management perspective, a grouchy, experienced AIX sysadmin on a POWER stack makes my job a lot easier.

Edit / P.S.: Again, these are modern OSes and support a GUI and tunnelling... but I don't think anybody ever uses them. The application stack running on top is certainly modern and GUI/web, but installing and supporting the OS, database, middleware and apps is all CLI: very obscure, very efficient, very powerful.


IRIX, at my first student job at my university.

Most servers were already Linux/x86 based at the time (2012), but I vividly remember SSHing to that one machine where things felt just... different.


Believe it or not, UNIX isn't quite dead yet -- it's just resting. In certain embedded environments, we use things like what was once called BSD/386. Wind River took a UNIX and worked to make it suitable for real-time requirements.

HP-UX over serial terminals in the late 90s and early 00s. A few Solaris environments in the early 10s. After that, just Linux and FreeBSD.

Yup. HP-UX on Itanium hardware. Legacy software is the reason.

Not lately; at a previous job we worked with Linuxes, Solaris, AIX, HP-UX and Linux on 390 mainframes.

The reason I'm chiming in is to give my advice to those still dealing with these: learn vi, at least enough to edit config files. It's the only editor you can find on all of them, and you often can't install your own software.
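
The survival subset is genuinely tiny; something like this covers most config-file edits:

    vi /etc/hosts   # open the file
    # i       - enter insert mode, type your change
    # Esc     - back to command mode
    # dd      - delete the current line
    # /text   - search for "text" (n jumps to the next match)
    # u       - undo
    # :wq     - save and quit; :q! quits without saving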


Yes - the Unix subsystem on z/OS (USS), and thanks to zowe.org there is finally some open-source code as well [1][2]

[1] https://github.com/zowe/zowe-common-c [2] https://github.com/zowe/zss


Haven't used any non-Linux stuff in forever, but i have generally fond memories of working on SGI (perl and c/cgi/nsapi at weather channel), Solaris (oracle db and java web apps at riaa), and SCO (java/servlets -- and javascript/netscape/livewire -- at startup confluence software, not atlassian), aix at small energy company, etc. -- the shit was just all over the place.

and it was kind of great. kept things interesting, at a minimum.

once linux and red hat started gaining real traction in industry, i felt like losing all these high-priced unix distributions was kind of... lame.

i always had this idea, for instance, that working on/with Solaris -- i was driving this high performance _machine_ that was capable of doing almost anything, as long as i was up to the task - the Mercedes of OSs.

losing all those -- i would kind of compare it to how the English language - like the Linux OS - is taking over the world. at the same time that is happening, either as a direct result or something less than that, we're losing all these other languages. ditto biodiversity loss. ditto city gentrification / sameness / sterility. it feels wrong / unhealthy.

¯\_(ツ)_/¯

https://www.theguardian.com/news/2018/jul/27/english-languag...

what i'm saying is i miss the days of BeOS. :)

