Back in 1989 I worked as a one-man IT department for a bunch of ex-academic economists doing econometric modelling on a Digital VAX 11/750. This minicomputer ran VMS, a multi-user operating system. All users had admin rights, and each one thought they could make their models run faster by bumping up their process priority as far as it could go - which of course interfered with the realtime processes needed to manage the effective running of the computer. Unsurprisingly, this had the opposite effect to what they intended. When I discovered what was happening, I revoked their privileges, and after a system restart sanity was restored. I was thanked for finally making the system work faster.
Oh, my first internet access was through one of those in 1991 at college! Found a cool exploit that let me anonymously broadcast messages to anyone. Sure freaked out a lot of people. Was fun cause you'd get to see the effects of your action in real time because it was a bunch of people in the same room on shared terminals.
Around '84 doing seismic data. We just got a terminal in the office that would allow you to monitor the jobs running on the IBM mainframes downstairs. Completely new tech to all of us. It had a command line message capability. Because it was quite easy to send to all instead of just your recipient, one marriage ended rather suddenly. Seeing the effects of your actions with new tech in real time indeed.
Working at a hedge fund with multiple locations, the founder insisted that mail to “staff” would reach everyone at the local office and mail to “all” would reach everyone (in all offices).
One afternoon (in the middle of the trading day), we got a weird email with an empty body and a bunch of nonsense email addresses. Turns out some poor trader wrote an online dating email, but pasted/wrote the mail in the cc line, resulting in all of us learning that he thought “you don’t seem like all the other girls”, which was sent to you@, don’t@, seem@, and the dreaded all@.
I remember someone getting a bit wild with their netsend and accidentally spamming the entire school district, including the administration. We also found out that you could DoS your instructor with enough messages.
On our high-school network, the Guest account had NET SEND privileges. It was somehow less chaotic than one might expect.
We had a single shared T1 pipe for the whole district. Which was enough for email and stuff, but when web browsers got popular, it was suddenly woefully inadequate.
So I figured out I could NET SEND * SERVER ROOM POWER FAILURE - 9 MINUTES OF BATTERY REMAIN - SAVE YOUR WORK AND LOG OUT and after a flurry of traffic, the network fell to nearly-idle. I could max out the T1 with whatever I needed to do for a few minutes, then NET SEND * SERVER ROOM POWER RESTORED and nobody would be the wiser.
The admin did go check on the "flaky UPS" a few times before looking closely at the message. Had a good laugh and told me not to use it too often.
This is along the lines of my early days learning about computers at school. I saw executables were filled with weird junk when looked at in a text viewer. So I'd load a little of that junk in a file and add
CRITICAL DISK ERROR. TURN OFF SYSTEM TO AVOID CATASTROPHIC DATA LOSS
and then printed it out to a system printer students didn't have access to. So you got a page of random symbols and that error.
Little did I know the company adding more computers to the network was there that day, and the guy panicked and had the system shut down; it was down the next day too. I never did ask what happened, so as not to bring attention to myself, and this was before we had individual accounts on the system.
the key "feature" of the exploit was the ability to send messages anonymously. the unix commands "write" and "wall" allow you to send a message to any users terminal.
apparently "wall" has a switch "-n" to hide the sender
Nice, reminds me of my high school computer lab. I wrote a trojan horse in VB6 and distributed it among the lab PCs somehow, then from mine I would open and close people's CD trays, turn their monitors off, and send them to... questionable websites, where they would swear it wasn't them! Haha, good times.
A friend and I were able to phish passwords from nearly the entire school we went to with VB6 - the school (board) used active directory for logins on a shoddy network where some switches would often just drop all traffic to a random port for any length of time, meaning a PC would lose connection to the AD server at random. The kicker was that attempting a login after the connection was dropped greeted you with a "could not connect to //SCHOOL_BOARD//SCHOOL_NAME/PC_NAME" to which the solution was reboot the PC and it would work again (99% of the time, anyways). The other kicker was the background image and login domain were the same for every single computer at a single school. We exploited this; we created a full-screen/un-exitable UI with the same background image behind a form simulating the normal login screen. We would first login to our own account and run the program (there were no login limits either), at which point someone else later through the day would sit down and try to login. The credentials that got typed in were added to a .txt in my own user folder before the user rebooted the "non-functional" system. Of all the dumb shit we did, that's probably the only thing we never got caught doing, and probably because we never did anything nefarious with them.
Ha - did the same phishing thing with our RM Nimbuses in the school computer lab. Wrote a turbo pascal program that showed the same login UI as the Nimbus. Logged into my own account on all machines and ran the program. When someone "logged in", it wrote their username and pw to a text file, and rebooted the machine (which got the user to the real login screen). Never got caught for that either, but also don't remember doing anything malicious with the info - I think it was more about the fun of the challenge :)
There was a system that a bunch of students administered (I was one of the students at the time). We would occasionally prank each other. On this DEC station where memory was scarce, one guy ran emacs.
Another guy wrote a program that forked 1000 copies of itself, niced itself to 19, did a sleep(0) and then exited. As soon as a copy got any cpu time it would exit, but it never would as long as emacs was running. Meanwhile, the load (as displayed by xload) became a solid black box.
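Rendered in shell rather than whatever the original was compiled from (this shape is my guess at the prank):

    # Spawn 1000 lowest-priority processes. Each exits the moment the
    # scheduler gives it a slice, but while a hungrier process hogs the
    # CPU they all sit runnable, driving the load average sky-high.
    for i in $(seq 1000); do
        nice -n 19 sh -c 'sleep 0' &
    done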
So the emacs guy would run 'ps -ef | grep procname | xargs kill' as root.
This meant that it had to get some cpu time to handle the kill, which took longer than a sleep(0) and was largely ineffective.
The second time this prank was done, the process was named 'ema'... so the kill promptly took out all instances of emacs too.
The third time this prank was done, the process was named 'et'. That happened to also match /etc/init, and the machine rebooted rather suddenly.
Somehow I've never considered what will happen on Unix if pid 1 exits!
Even though I'm pretty sure I've run with init=/bin/sh and then typed "exit" at some point in my single-user session, I have absolutely no recollection of the results. I should try it on a few OSes and see!
Occasionally. It wasn't a daily, or even weekly, occurrence. But in all honesty, those pranks were some of the things that helped build my early sysadmin experience.
When you have nice, well-behaved users you'll not have problems that need solving. When things go awry - that's when you'll need to solve problems... sometimes even without pranks.
Before we had yp set up on the machines, we just copied the password file between them with a note "make sure you change your password on foo" since that was the one we regarded as authoritative and would copy that to bar.
One time, while adding a person to the /etc/group file for write access to the web server, someone did rcp /etc/group bar:/etc/passwd (I suspect it was muscle memory) and, well, now bar was unhappy and wouldn't let anyone log in... or even su to root to fix it. Found someone who had an open terminal and had them do a while 1 sync... and then powered the machine down and brought it back up. It wasn't happy, so it started up in single-user mode. Just needed to get the password file in there... but the terminal was a 300h, which didn't have a proper termcap entry, so vi and emacs wouldn't work. I was a mudder and knew how to use ed... so I ran ed /etc/passwd and the contents of the minimal password file were dictated to me. When done, we got it back up and then copied the passwd and group files to the proper spots.
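For anyone who has never had to drive ed blind, the repair looked roughly like this (the passwd line is invented; in the real story the contents were dictated aloud):

    # Wipe the clobbered file, type in a minimal replacement, write, quit.
    # The lone "." on its own line ends append mode.
    ed /etc/passwd
    1,$d
    a
    root::0:0:root:/:/bin/sh
    .
    w
    q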
Another time (and this was a prank), someone left themselves logged in and someone else created a directory path that was about 3000 characters long. /user/jsmith/I/will/not/leave/myself/logged/in/I/will/not/leave/myself/logged/in/ ... The problem with this is that `rm -rf` won't handle paths longer than 2048 characters long. So it didn't get removed "I'll do it later." You know what else doesn't like paths longer than 2048 characters long? fsck. So when the machine was rebooted/crashed at some point, the root volume (yea, user directories were on the root volume) failed to fsck... and failed to mount. Stuck in single user mode with the backup partition and reading the man pages for mount on the other machine we found how to force a mount without fsck and then had the guy who did the extremely long path fix it (and promise not to do it again) and got machines working.
We used to play pranks on each other, such as logging into the NeWS server on a colleague's machine and manually setting a small rotation in the transformation matrix of a terminal window that someone was typing in....
NeWS had an interactive PostScript shell and almost no security so this kind of mucking about was trivial...
Ah, good times.... we used to do similar things across the HP Apollo workstations at the place where a friend of mine worked (and I unofficially "borrowed" computing facilities for a project of my own -- though I did also contribute a substantial speed optimization to their main product, so nobody seemed to mind).
For april fools' day, on my first job as a troublesome ~16-year-old administering our university lab's firewall (academia was a different land when it comes to trust), I set up some automatic network-wide substring replacement filters for incoming HTTP responses to replace `<body` with something like `<body style="transform: rotate(0.1deg);"`. This was in the time before HTTPS was ubiquitous, so it worked on most websites.
Unfortunately, it broke some pages that lab users needed for school. I later learned one colleague wasn't able to complete a homework assignment because of my prank.
Economics is all about incentives and behaviors. Did they learn any lessons from how they acted during the episode? And hopefully publish a whole bunch of papers from it?
Had exactly the same with a bunch of developers who could change the queue priority on a mainframe for their own work. They couldn't work out for themselves that if everybody set their work to the highest priority it had no benefit to any of them. Trying to educate them failed, so we revoked access.
> All users had admin rights and each one thought that they could make their models run faster by bumping up the process priority as far as it could go - which of course interfered with the realtime processes needed to manage the effective running of the computer
It amazes me what the BIOS and OS (or OS APIs) let you do, even on modern devices.
> It amazes me what the BIOS and OS (or OS APIs) let you do, even on modern devices.
Same, but not necessarily in a negative way. I like pushing my hardware and software to the limits, becoming unable to push those limits would be pretty disappointing.
I'm not sure anyone should call VMS "modern", but that's beside the point ;)
VMS has a very comprehensive system of quotas and limits, so a "properly" configured system wouldn't suffer those issues.
And furthermore, VMS isn't intended to be used "interactively" as such. You should be submitting work to the built-in batch queues - each with various attributes that can include the priority level. This allows the system to intelligently manage work based on a comprehensive view of the entire system - something a single user in a multi-user system can't have. If you like pushing a multi-user system to its limits, you'd be impressed with what VMS could do even way back in the 1990s.
We don't typically use multi-user systems, actually - I'm pretty sure Logan meant our personal hardware. For example, we run an i5-12400F at 5.2GHz on a fairly customized Windows 11 Enterprise, both of which would become impossible if our BIOS and operating system were too much more locked down.
I mean, our motherboard has an external clock generator almost entirely dedicated to pointing and laughing at CPUs with a "locked multiplier". (It's also used for PCIe 5.0)
Not sure whether you're referring to your or our profile, but yes, we do overclock to a significant degree. Our motherboard has a setting to downgrade the CPU microcode to an older version that doesn't try to detect higher clock speeds and shut down. Clearly, stuff is already getting more and more locked down even as we enjoy our relative freedom in the present.
okay, then we don't own computers that are intended to serve multiple physical users. They may run multi-user operating systems but ours are typically configured with only one actual user*.
*ignoring all the system and service users in OSes like Windows and Linux.
I remember going along to a VAX/VMS System Administration workshop. Another bloke and I did a prank where we patched the text in the editor binary that produced a blinking "Working" message when it got busy: we substituted the "or" with "an". The workshop coordinator caught us because we forgot to do something (I can't remember what) and a login was tied to the change in the executable.
In 1993 my freshman CS class was taught in scheme. All of our assignments had to be developed and tested on some shared Digital machine running Ultrix. The scheme interpreter was kind of slow to start, especially when there were 20+ users logged in. Helpfully, our TA taught us how to ctrl-z to suspend the interpreter, then edit our program in vi, and then "fg" to get back into the interpreter.
Unfortunately the fg part of the equation was lost on about 2/3 of our class... after editing they would start another scheme instance! I recall being in the terminal lab the night one of our first assignments was due, and the machine slowed to an absolute crawl. Can't remember exactly how it was resolved but I do recall being taught how to look for classmates running two or more instances of scheme to remind them about fg. (Also not helpful to machine load: "solutions" to the 8 queens problem with infinite recursion. The real lesson here was, in later years, to not be logged in on nights when CS 401 had assignments due.)
I had a similar story with a friend in college in the 2000's. He would always hit Ctrl-z to "close" emacs when logged into the server, which would've been fine if he weren't also using screen or tmux. At some point, he was using a ridiculous amount of RAM on the server and the admins suspended his login to force him to come in.
I had a classmate who did her assignments in Ada. The compiler & linker would bring the school's Data General MV/8000 to its knees, as it swapped out other processes to make room for it. Every 30-40 minutes we would have a coffee break forced on us.
When I was first learning Linux around 2006 I somehow got the idea that Ctrl-Z was the way to exit programs. For maybe 3 years, I would just Ctrl-z my way out of programs.
Luckily I worked almost entirely over ssh, so I presume the suspended processes died with my ssh session exiting each day.
That reminds me of a story from a guy I worked with.
I'm not sure where he worked but it involved a queue of people. He said someone asked him if they could be given priority for their problem to be looked at before others in the queue. In other words jump to the head of the queue.
He said "Sure!" to the surprise of the person asking "But you do realize I will do that for anyone else who asks the same thing?"
So the person chose to remain in their place in line.
Digging into the manuals I figured out how to launch the compiler as a background process so I could still have a working system while waiting for it. Brought the whole classroom to a halt.
More digging revealed that the background priority was set well above user priority. AFAIK no malice was involved, just someone who didn't know how to set the system up and left that landmine for me to find.
I remember a variant of xroaches: it had crawling babies with diapers. Maybe they wouldn't multiply under windows but they could be just a different bitmap with the same algorithm.
Unlike many of the things people talk about on this website that I didn't experience growing up, I actually did download this on Windows Vista once. It might have been a version with malware, but I don't remember.
It's funny how many stories from earlier times boil down to "it wasn't meant to be malicious, just funny, but people didn't realize that it would multiply that much or use so many resources". See also: the Morris worm (I mean, that arguably was designed as malware, but supposedly wasn't supposed to be nearly as bad as it was).
Calling IT is the right move here. Could have been an intruder or a remote user doing something important.
It's a different relationship. The department workstations were much closer to the refrigerator or copy machine. If it's broken you don't touch it and just call somebody.
In modern money these machines were between $7,000-$10,000 each depending on configuration.
(As an aside, I've always wondered how many maxed out configuration orders they get - you know, when you kick that price up to $100k - what's the threshold where they ask if they could put someone on a plane to visit you? 10 of them?)
That sounds needlessly disruptive. It is a workstation after all. I restart mine as little as I'm allowed to and once a month sounds way too much already.
You have to close all windows (and possibly tabs in your editor), restart long running jobs you have in the background, restart your SSH sessions, lose your undo tree, lose all the variables you have loaded in your interpreter or bash, among others that I have possibly forgotten.
All recoverable, but annoying. I can't imagine doing that every day. It's fine for a home computer, but for a workstation, I just want it always on. Though these days even my personal laptop is essentially always on.
For a personal machine, it's fine to leave it always on.
> All recoverable, but annoying.
For a machine that other people are supposed to rely upon, I'd rather exercise this recovery you are talking about regularly. So I know it works, when I need it.
For a production system, I'd rather live through its first day of uptime 10,000 days in a row than set new uptime records every day. In production, you don't want to do anything for the first time if you can avoid it.
For production it's highly dependent on the business needs. But restarting the entire estate every day is a big enough hit to capacity that it may already be prohibitive without any further consideration.
Not to mention that it would require every service to be prepared to be restarted daily, which could require a more complex system than you'd need otherwise.
I doubt Google restarts all their machines once a day. Obviously not all at once, otherwise they'd have massive downtime. But anyway, Google's needs are very different than just about any other company on earth (except for a handful of others like Facebook and Amazon). So, they are usually not the best example.
Yes, once a day was an example. Google uses different frequencies. However I do remember getting automated nagging messages at Google when a service hadn't been redeployed (and thus restarted) for more than two weeks.
Google as a whole might be different from other companies, yes. But smaller departments in Google aren't that different from other companies.
Getting those messages is very different (and a lot more reasonable) from forcefully restarting them, which was the initial suggestion.
The restarting risk is normally so small, that there are several other things that are more important than constantly restarting to test that restart works. Continuous delivery, security patching and hardware failure will likely cause enough restarts anyway.
At least on older hardware, the number of reboots correlated more strongly with failure than hours powered on.
I'll readily admit this may have been apocryphal. It was a common adage when I was a child in the 80s and now that I'm actually qualified enough to suss out such a statement I've never cracked open the historical literature at archive.org on this one to actually check.
It could just be a carryover from incandescent lightbulbs (where this is true) and older cars (where this is also true). The idea of non-technical folks assuming the magical computer dust has the same problem is understandable.
One of the highest stresses on passives and power components occurs when there is an inrush of current (di/dt) or a voltage spike (dv/dt), which can occur on power cycling or plug in. So it is not a myth that hard reboots can be stressful on older hardware, but there is a certain amount of red herring because power cycling is also when an aged or diseased component is likely to show failure due to hours of service.
Modern devices and standards are able, at a low cost, to implement ripple, transient, reverse voltage, inrush limiting and the like. So failures are more isolated.
Nowadays, with stuff like USB, there is inrush limiting, reverse-voltage protection, and transient suppression, and it costs very little to implement, so failures are mostly going to be in power supplies.
I don't think that long uptimes are unix culture at all. Unix was always about being small, simple and fun to use; a place where having something now is much more valuable than being correct later. A hacker's OS. This is also where most of the sins of unix come from.
"We went to lunch afterward, and I remarked to Dennis that easily half the code I was writing in Multics was error recovery code. He said, "We left all that stuff out. If there's an error, we have this routine called panic(), and when it is called, the machine crashes, and you holler down the hall, 'Hey, reboot it.'"
I'd guess orders for these probably skew to the higher end.
If you're putting workstations in racks it's either to share them, or for power/cooling/noise reasons, and the fact that you've got a workload that justifies having those kinds of problems probably means all your other costs will still dwarf hardware.
There's usually a large premium on whatever the current largest-size DIMMs and SSDs are, and on the top 10% or so of CPUs and video cards. So I expect they sell a lot of machines that are "50%" spec (either max physical capacity with 50%-size components, or half physical capacity with 90-100% components), and a fair number of maxed-out ones, because it will often be cheaper to have one maxed-out machine than three smaller ones, and budgets don't matter except when they do.
Places that cost engineering time at $100k/hour won't blink at $100k computers if it gets the job done.
No matter how many times I see it, I always read fsck as "(for) fuck's sake" and then internally correct to "file system check." I think I've got a stressful flashbulb memory floating around in there.
I once had to troubleshoot the math department director’s PC misbehaving. It turned out that he let prime95 have every spare cycle on a core 2 duo for a decade and the machine would only boot if it had cooled to room temperature.
Looks like the project averaged about one new Mersenne Prime per year for 1996-2009, and then only 4 hits for 2010-2018 with none since 2018.
Obviously the tflops::hit difficulty ratio is ramping up as the numbers get larger, but I can't help wondering if the cryptocurrency craze dampened their work rate.
They're reporting 78,012 tflops of work done today, but my five minutes of investigation wasn't enough to find a historical chart of tflops/day and five minutes is about the limit of my curiosity on this matter for now.
When the project started, CPU frequency scaling wasn't a thing, so CPUs would run at full speed (and full power draw) 100% of the time. If you weren't making maximal use of the CPU, any remaining capacity would go to waste. Distributed computing projects could make use of that remaining capacity.
Today, CPUs are built with power efficiency in mind, and will attempt to scale down rapidly when not fully in use. Thus there is no longer such a thing as "spare CPU time". Any time spent on distributed computing projects is paid for in electricity costs. Some choose to continue anyway, but many have been disincentivized.
For a while I had a Home Assistant automation that would spin up Prime95 on a machine in my homelab when the closet it was in (in an unheated garage) got too cold. The closet also has the water meter, so it has to be kept above freezing. There's also a resistive heater, but I figured I'd rather get a bit of productive use out of those watts.
Then I realized that the computers heated the closet plenty without artificially pegging CPUs, so I didn't bother reimplementing it when I did a migration.
I don't think that's true. Variable frequency certainly helps efficiency, but like the other commenter said, HLT did exist. The CPU would use less power when told by the OS to do nothing for a short while.
It may have been replaced by a cryptocurrency indeed, for there was PrimeCoin, one of the very few that actually did something that was both productive and unprofitable (critically important for the economics of mining) with its mining cycles, and that is look for prime numbers. Although I don't remember if these were Mersenne Primes. It was one of the very earliest altcoins and by its nature was CPU bound which made it unpopular with large scale mining farms, but extremely popular with CPU cycle thieves working in clueless corporate and educational IT departments.
In the mid-2000s, there eventually came a time when the xlock (Ex-lock) screen locker disappeared from the last university workstations that still had it. People routinely got puzzled when they could not run it. It was a fun prank to tell them that Ex-Cee-lock was the replacement for it (which would, of course, run the clock application). :)
Xlocking workstations became a problem at our university. People would claim a workstation, lock it, go do something else (lunch, lecture) and then come back to their reserved workstation. So the admins added a button that you could log someone out if the screen had been locked for more than half an hour.
They didn't want to ban xlock because they cared about security.
> So the admins added a button that you could log someone out if the screen had been locked for more than half an hour.
In our CS labs the PCs re-imaged themselves on boot[0], from a choice of OS images[1], so you didn't have to worry about causing corruption of the machine by just power-cycling it to get around the locked status. This meant that locking a workstation to reserve it didn't work.
My workaround to that[2] was to set the wallpaper which displayed behind the unlock prompt to an image of a bluescreen indicating a hardware error, and move the window containing the lock prompt to the far bottom right of the screen, so it was just a single pixel and not easily noticed. Hey presto: a locked machine that no one wanted to claim by restarting, because it looked faulty. Obviously anyone with half a brain watching me unlock the machine a short while later would immediately work out the trick, so the knowledge spread soon enough and the ruse stopped being as effective. It was very effective for a while, though.
--
[0] from the same shared network drive, which was initially a problem (this was the first year that lab had been in operation) if several machines re-imaged at the same time, as head thrashing caused IO throughput to fall through the floor. Later revisions of the setup helped by tweaking cache settings and giving the server more RAM, so that the second and subsequent reads of an image in a given period would come from cache; the images were also compressed for the same reason, and to reduce the second bottleneck: the glut of traffic through the server's single 100mbit NIC.
[1] usually just Windows NT and the local Linux build, but sometimes other options were present
[2] which I used very occasionally, partly to not be a dick but mainly so as to not give the game away too quickly
Windows NT could always remotely log out the current user when part of a domain. We had tools in place for our lab administrators to log out users if they found locked workstations. With Windows 2000 you could automate this through group policy.
In high school I'd reserve workstations for my friends by unplugging the keyboard. The PC would fail to boot with "Keyboard not found, press F1 to continue" which was enough to get it designated broken and avoided.
I did this unintentionally in college once by switching the keyboard layout to Dvorak, which for some reason persisted across logins. I came back later that day to the same lab and the station I had been using was marked "Out of Order". Huh, that's weird. Sat down at the station next to it. Next day both of them were marked "Out of Order". Oh, huh. Is there something weird with the keyboard? I might know what happened...
Sort of a meta-comment about infosec.exchange, so I expect a few downvotes, but while this was a funny dad joke, it took more scanning and reading than it was worth. Mastodon UIs are very dated despite being new. I miss interdepartmental unix pranks!
This thing has a sidebar even on a mobile device, which shrinks the width of the text even more. The text is very difficult to read. You have to acknowledge that Twitter is a carefully designed and mature product, at least in terms of UX.
Mine were in the early 2000s. Back then, the computers at the lab at my uni were not very powerful, so people would do work at a Linux console, saving themselves the hassle of running a bulky X session.
Some time around 2001 I read the console_ioctl(4) manpage and found it replete with prank possibilities. I wrote little programs that would flip the console font so that all the characters were upside down; or swap capital letters with small letters, again by way of manipulating the font; or flash patterns on the keyboard LEDs; or fade to black and back by manipulating the palette.
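The keyboard-LED trick is still reproducible today without writing any C: setleds(1) wraps the same KDSETLED ioctl that manpage describes (works only on a Linux virtual console, usually as root):

    # Blink all three LEDs without toggling the real NumLock/CapsLock
    # state; -L changes only the lights, not the keyboard flags.
    while true; do
        setleds -L +num +caps +scroll; sleep 0.2
        setleds -L -num -caps -scroll; sleep 0.2
    done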
I then added a server component so that I could leave it running at an innocuously-looking terminal, wait for a victim, fire up these effects remotely from another box in the same room, and watch what happens. Fortunately, I soon discovered that the coding part was more fun than the watching-people-slip part, so I gave up on the latter.
Another prank I used to do was simulate a successful root login on these terminals by just typing in what would be printed, including the motd, at the getty login prompt, simulating newlines with tabs/spaces (and never ever pressing RET), ending in `[root@mailhost root]# `. Then, again, step back and watch what happens. Some people would curiously type in `whoami` and be puzzled why they got a password prompt; some would step back in terror without touching anything, switch terminals and email the sysadmin.
I wish I'd had Linux systems. I was doing the same type of thing with Windows networks, since you could effectively run any program as any user with task scheduling so long as they were logged into the system. Pair that with active directory and you have user info. So, knowing who was where: open iexplorer at a certain site, an innocuous word doc, etc. The most malicious case was an automated logout batch script.
People eventually caught on to the approach and tried to replicate the remote execution, but executed as themselves instead of as that user, so when the IT admins came around there was a very obvious trail to who had been running it. I stopped playing around, but eventually IT and then SWE became my profession. I sometimes wonder how it'd have gone if I'd been reprimanded, though.
Our unix (Gould GLX or something?) with dozens of terminals lacked appropriate permissions on /dev/ttyNN - so we just piped rain directly to the neighbor's terminal.
Back in the day we banned using animated xlock to lock your screen. The Sun workstations in the lab ran the X server locally and picked a random other machine to run the window manager and clients when you logged in. (Which is kind of an odd way to do it when I think about it now, but also cool that it was possible.) However, this was all running over shared 10 Mbps ethernet with probably 100 machines and only 2 or 3 segments. This all worked fine until a few people used animated xlock running remotely over the shared network.
> ran the X server local and picked a random other machine to run the window manager and clients
Are you sure they were full workstations and not more dumb terminals (just enough processing power to be an X display) with all your logins being to a central beefy server (or one of a few) rather than some random machine?
If that were the case then an animated xlock would potentially chew up an unfair amount of CPU time as well as clogging the network spitting the results out to your local X display.
> There was a program that would sort of melt your screen.
This existed for PCs, too. It was called "drip". When idle, individual characters would "drip" down your screen like raindrops, at random times, for random distances.
Another one I remember was "drain". In the very early PC days, you could add this program to the AUTOEXEC.BAT of an unsuspecting victim's computer, so that it would run at startup. It would start flashing "SYSTEM ERROR 0304-B" for a moment, then add "Water detected in disk drive A:". Another moment, then "Now draining", and it would play this gurgling sound out of the speakers (as best you could, on the speakers of the original PC). That would peter out, then "Now starting spin dry cycle", and it would play this whining sound for a bit, ramp that down, and then tell you that it was OK to use the system now.
In those days, there weren't "logins" to PCs. If you saw a PC without the normal user present, you could do anything to it.
If I recall correctly, the drive light stayed on, the drive was spinning, but the whine came from the speakers, and moved to a higher pitch partway through. It also smoothly ramped down in "RPM" (frequency) at the end, which is not a thing that the floppies could do.
In 6th grade I wrote a fake virus that pretended to format the hard drive and then left the user at a C:\> prompt. I left it on my mom's 486DX-33 (with a turbo button) for her to find on the weekend. Well, she never turned her computers on, so it promptly left my mind that evening when I went to go play at a friend's house. Fast forward to Monday morning: I get called to the office over the classroom PA. "Ooooooh grepfru_it, you're in trouuuuubleeee." I couldn't imagine what I did wrong. I get to the office and the principal says I have to urgently call my mom. So I dial the home number and my mom is frantic on the phone: "GREPFRU_IT MY COMPUTER ERASED ITSELF FROM A VIRUS! I TOLD YOU NEVER TO INSTALL GAMES ON IT". Thinking: the last game I installed was a month ago. Then it clicked: my fake virus. I laughed so hard I started crying, the principal and school administrators looking at me like I was crazy. I told my mom to eject the floppy disk and restart the computer. She immediately started laughing when she realized she hadn't checked whether a disk was in the drive. She said never do that again and hung up on me. I couldn't stop laughing all the way back to class, where I then pretended like I was getting suspended for getting into a fight (the whole class knew I was lying about that one though).
We had some Apollo Domain machines at school which could also run similar programs - I remember 'Crumble' and 'Melt' being two of them. And you could run them on other peoples' display. So we used to melt/crumble the screens of the engineering students in the next lab over. 'We' in this case had admin privileges, though, and only did it a couple of times.
If someone xhosted my machine I would run xmeltdown back to their display. I wrote in another post in the thread about how and why someone would xhost my machine.
Since I was at college in the second half of the 90s, we still had unix text consoles for reading email, so my favourite prank was to tell others in my dorm that I had worked out how to remotely log in from the dorm (we had to use a computer room back in them days!) and, with my 10-line Turbo Pascal program, created a fake login screen that looked identical to the normal one. After capturing a password, I would explain to each person that maybe it wasn't quite working, so "sorry", and they were none the wiser that they had given me their passwords.
I didn't do anything with the passwords, it was just interesting how easy it was to get away with.
Someone was discovered to be collecting passwords that way on our university's VT terminals (I'm old enough that at Uni plain text terminals were still a thing, though they were generally used just as terminals for email & such when the lab rooms full of PCs were fully occupied) by leaving what looked like a login prompt on-screen. Someone with much tech knowledge immediately saw it wasn't quite right (that is how the issue was found), but these were terminals used by the general populace, not just us CompSci students, so the vast majority of the users were not at all technical (what we might assume almost everyone knows these days was still new-fangled magic to the average student back then; for many, arriving at Uni was their first encounter with having an email account, for instance).
To my knowledge they never worked out who did it, or how long it had been going on for other than “may have been months”, because the fake login app would exit and logout after sending off the captured credential, and next time it was run it was done from one of the captured accounts, so only the very first capture would have been done by the culprit's own account (even that maybe not if they'd guessed or stolen a password by other means first). Captured credentials were sent to a popular high volume usenet newsgroup so they couldn't track who was reading the result that way. Also, no evidence of the attacker actually using the compromised credentials for anything else, so it was possibly someone “playing” to see what they could do rather than a more malicious intent.
It became standard practise (“enforced” by notices in large all caps text) to reset terminals before logging in to be more sure that was a real login prompt.
Replacing a T-connector with a broken one to sabotage unpopular classes... Or binary searching it with a terminator to save one you liked...
Learning that pinging Windows 3.1 with a big payload would BSOD it and writing a script to perform a rolling BSOD of the entire lab while sitting in the back...
Sending random insults to random spots on ttys while people read their mail using Elm...
Writing a trojan to steal and then delete the MUD accounts of the dudes hogging the only 2 computers with Internet access available to undergrads...
And being caught and let go with a not so stern warning. Simpler times.
> Replacing a T-connector with a broken one to sabotage unpopular classes
When I started working in academia they had mostly phased out 10BASE2. Every now and then we'd get reports of network being broken and would have to get out the break detector to track down where. Inevitably we'd find a student had unplugged a T-connector in a classroom, disrupting the others on the same loop.
I believe either my brother or someone he knew once wrote a program that spawned lots of child processes. He did that to test a scheduler or something like that, but it got a bit out of hand and swamped a major server in endless processes. Admins weren't pleased. But also not too upset, because they approved of students experimenting. We had pretty cool admins.
A function called ':' is defined. In its body, it calls itself twice at the same time (':|:') (piping the output of the first call into the second, which doesn't do anything useful) and sends these calls to the background ('&'). After function ':' is finished being defined, it is called.
The first call spawns two clones. Each of those spawn two more. Etc.
:() defines a new function called :
{ :|:& } is the body of the function, where we call the : function recursively, piping (|) its output to another call to :, then backgrounding the whole thing (&)
; indicates the end of a statement and the start of a new one
: and finally the last : calls the function we defined to start the chain
Essentially, each time the function is run it creates 2 new copies of itself, which each create 2 copies of themselves, etc., until your OS stops responding and crashes.
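Put back together, the one-liner being dissected here is (do not paste this into a shell you care about):

    :(){ :|:& };: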
Nowadays many shells recognize this particular fork bomb and refuse to execute it
IBM 370 mainframe, 80+ programmers, running VM/370 which creates virtual machines, one for each programmer.
I'm one of two systems programmers with "superuser" privs.
In the virtual machine you normally ran CMS but you could run anything. Some machines ran MVS.
To direct a command to the virtual machine itself you would prefix the command with a special character, which by default was # - but any chosen character could be the magic prefix. So #cp ... would be a command to the virtual machine.
Bored one day I wondered if a virtual machine could run VM itself, on the "second level". I booted it up, changed the prefix character to ! (so, !cp). I could create new virtual machines inside this new VM.
So, could a second-level virtual machine run VM? I booted it up on the "third level", changed the prefix to @ (so, @cp)...
I got 8 levels deep. So, yes, VM could run VM, could run VM, could run VM... etc.
Game over. Time to start shutting down these embedded levels.
Out of habit I typed "#cp shutdown" ... and it did. The REAL VM on the REAL machine shut down. Panic run to the machine room to push the start button on the console.
Of course the system keeps a log ... and the other systems programmer showed up at my door ... and said "don't do that again".
I don't understand. I thought # was the prefix for the level 1 VM, not for the level 0 OS (the host). If you used # to send commands to level 0, what was the prefix for level 1?
With system admin privilege, #cp shutdown works on the real machine (unless the prefix is changed).
I set the first virtual level system prefix to be !cp, second level to @cp, third level to $cp (look at the top of your number keys to see the sequence).
I SHOULD have walked backward $cp shutdown ... @cp shutdown ... !cp shutdown but habit caused me to type #cp shutdown. Sigh.
Sounds like us, used to Linux, working on Solaris. A process got stuck and we were too lazy to look up the PID, so we just called "killall procname". The machine immediately went dead.
When the sysadmin came over we learned that killall does something different on solaris and we should never use it again.
Must be 20 years ago, as Google was a thing and we had a split monitor and KVM on the one computer in the school IT classroom.
My colleague types "hello" into the Google search box from the next room. The student then typed "who's this?" Colleague types "Jimi Hendrix"... student turns the computer off at the plug :-)
I seldom mistype ls as sl, but it always makes me smile when it happens. It also doesn't bother me, because you can quickly circumvent it by sending sl to the background with ctrl+z and dealing with it later...
But now that I think about it, I wonder if you can prevent the job manager from backgrounding a task - that would be quite the addition to sl heheh.
However, that still leaves SIGQUIT on the table, which can be sent with ctrl-4 or ctrl-\. For that you'd need to disable the binding, e.g. with stty, and then handle SIGTSTP the same way I suppose - or just put the terminal in raw mode.
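A sketch of that stty approach in an interactive shell (stty sane restores the defaults):

    # Undefine the control characters that generate SIGTSTP (ctrl-z)
    # and SIGQUIT (ctrl-\ / ctrl-4) on this terminal, then run the train:
    stty susp undef quit undef
    sl
    stty sane    # put the control characters back afterwards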
In our university lab, the main reason most of the savvy users had "xhost -" in their login scripts was to avoid being shown not-so-convenient images at the wrong times.
In the early '90s, I was a fresh Computer Science undergraduate at a state university. Our computer lab was packed with Sun SPARCstation IPCs, each running SunOS. We had this rudimentary email system that everyone in the department used to communicate. The tech-savvy folks, of course, had started exploring Usenet, but for the majority of us, email was our digital universe.
One day, my group of friends and I decided to have some fun. We concocted an idea inspired by the famous 'fortune' command that prints out random adages. We wrote a simple shell script that would take a random line from a text file full of humorous and nonsensical messages we'd written, then mail it to a random user in the computer science department. The script was set up as a cron job, scheduled to send one of these messages every hour.
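In today's terms the whole prank fits in a few lines (names and paths are invented, and shuf(1) is a modern convenience; the SunOS-era original would have picked its random lines some other way):

    #!/bin/sh
    # fortune-prank.sh: mail one random message to one random victim.
    msg=$(shuf -n 1 "$HOME/prank/messages.txt")
    victim=$(shuf -n 1 "$HOME/prank/cs-logins.txt")
    echo "$msg" | mail -s "a thought for the hour" "$victim"

    # crontab entry, firing at the top of every hour:
    # 0 * * * * /home/jdoe/prank/fortune-prank.sh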
Initially, it was just a harmless prank. People found the messages funny and would often share them in the lab. The source of these messages became the talk of the department, but nobody knew where they were coming from. We took great pleasure in watching our peers and professors speculating about the mysterious sender.
However, things started to get out of hand when the Dean received an especially absurd message that read, "Why do computer scientists confuse Christmas and Halloween? Because Oct 31 == Dec 25!" He found the joke incomprehensible and thought it was some sort of cryptic message or even a potential threat.
The campus IT team was called in to investigate, and a week-long frenzy ensued as they tried to trace the source of the emails. My friends and I watched in trepidation, wondering if we'd be found out and expelled for our seemingly harmless prank.
Finally, after several sleepless nights, we decided to turn ourselves in. We went to the Dean and confessed. After an anxious silence, he started laughing. Apparently, he had been let in on the joke by one of the Computer Science professors and was waiting to see how long it would take for us to come forward. He was good-natured about the prank and found our initiative creative, although he warned us about the unintended consequences of such pranks.
Looking back, it was a fun, memorable prank that gave us a valuable lesson about the ethics of technology use. It's a story I often recount when I'm teaching my own Computer Science students about the importance of ethical conduct in the digital world.
When I worked in IT at a big company we had a fake screensaver that emulated a blue screen and boot loop situation. Nobody ever thought to press a key or move the mouse which would have cleared it, they would always go for a power cycle followed by lots of hilarious troubleshooting, before it happened again and again. Good times.
A grad school prank one of my friends pulled on another:
echo sleep 1 >> .login
was appended to the .login file of the victim's account. The victim stepped away from the terminal briefly, leaving themselves logged in, allowing the prankster to make the addition. It was days later, with over 20 sleep statements appended, before it was apparent that something was uniquely wrong with that student's login. Day after day the victim grew increasingly frustrated with the slower and slower times from initial login to active terminal. Finally, when it was getting unbearable, the prank was discovered.
I installed that on a coworker’s machine only to find out he had a bug phobia. He shrieked like a kindergartener! I had to buy him beers for weeks before he forgave me.
Someone at work played this prank on a business analyst and almost got fired. It was the early 90's, and we were a shop running HP-UX on Apollo workstations. She was an SME for the application we were developing, but was otherwise non-technical. He and she were friends, so one day he fired up xroach on her system. When she moved a window she absolutely freaked out, and wanted the guy fired. He kept his job, but I don't think they were friends after that.
Plenty of people in this thread with Apollos in their past. We had no fun on the Apollos in college; they ran DOMAIN/OS, I programmed 68xxx assembly, and I got out of there.
I did all my mischief on the Unix machines: 3B2s with dumb terminals, and diskless SPARCstation SLCs.
Back in my MSP days I had a client report that their server was slowing down after an hour. They would also use this "server" (a tower computer in a back room) to adjust inventory, print reports, etc, but after an hour of having used it, everything would slow to a crawl. It was running databases and such that other computers in the building accessed.
Every time the server slowed down, someone would go back there and restart the software, or reboot the whole computer, and it'd be fine for another hour.
Sure enough, after an hour, some fancy ass 3D screen saver came on and pegged the CPU. It was some shareware thing that someone downloaded because it looked cool. I ended up turning the screensaver off and just set the monitor to go to sleep after 10 minutes.
When I was at college back in the 80's we had access as students to the college's VAX 11/750 (an 8750 Systime clone) to work on our coding projects. The student terminals were on one half of a large divided room, the other half being used by the college IT folks. Often, if there wasn't a spare terminal on the IT admin half of the room, one or two of the IT folks would use two of the nearest terminals just over the divider.
One day, bored out of my mind waiting for my COBOL project to compile, I wondered if I could capture the sysadmin's username and password. I wrote a script using the CLI to perfectly simulate the login prompt, complete with beeps, messages and all. All it did was clear the screen and sit there waiting for a user to enter their username and password; when they did, the script would mail me said username and password, display a username/password error, then log out to the real login process.
After trying the script out on a couple of unsuspecting classmates and having a bit of anonymous tomfoolery I decided it was time to try this out for real with the sysadmins. I logged into both terminals the IT folks normally used and left the script running. A few hours later I returned and to my surprise and mild anxiety I found out that I'd captured the SYSTEM login password :o. For about a month or so I'd full control of that machine, and would re-run the script occasionally whenever the SYSTEM password changed. I told no-one and on my last day at college logged in and deleted the script, just in case (this was around the time the law in the UK was getting a bit heavy with regards to unauthorised computer access).
Combined with access to the huge set of manuals for that machine I spent a heap of time exploring and learning about VMS and no-one had a clue.
This is great. Reminds me of the BASIC program I wrote on my TI-83 to simulate the memory-reset process for algebra tests, because our teacher walked around and would run it himself.
An approach that goes back to the Rainbow Books, at least. There was some scheme to use the "break" key on the teletype for this (maybe in Multics, or OS/360 perhaps?) but I have no idea if it was ever implemented.
For those who don't remember, "break" was not an ASCII character but a literal long unmaskable pause in transmission. It couldn't be generated in software or by reading in a paper tape, nor could it be read on the host side into an input buffer, as it wasn't a character.
In MS-DOS, Ctrl+Break could be simulated through ASCII character 0x03. Literally, hit Alt+3 and it would respond the same way.
The sysadmin for my high school computer lab had written a text-mode DOS login screen that would start Windows 3.11 for workgroups after the user logged in. A few of us kids figured out we could just Ctrl+Break out of it and install/play DOOM, Descent 2, etc. He wised up and wrote a trap for the Ctrl+Break sequence, but I discovered the Alt+3 trick and we continued on our merry way. I don't think he ever figured out how we did it.
(We also hid our games inside a directory named Alt+255, which appeared as a space. A single space was not allowed as a directory name, so it felt like magic to us.)
The magic words being "Secure Attention Key" (on Multics in particular, it was specifically caught by the Front End Processor, so nothing in the OS could interfere by (for example) catching the signal and printing a fake login prompt, because it was literally a separate computer handling the request...)
This seems like a rite of passage... I did the same thing with the VAX at my school, but decided to come clean and gave the sysadmin all the passwords I gathered after one day (including several with SYSTEM privileges). They gave me my first job :-). I also made sure to grant the necessary permissions to one or two obscure accounts, so that I could regain SYSTEM when it was revoked on my "official" accounts. Fun times, and innocent too -- I never used the privileges to cause havoc.
It's funny how writing a login-screen replacement seems kind of ubiquitous among future hackers. Mine was in Visual Basic (5, iirc) for a Windows (Novell?) network. This was back in school. You could trivially change win.ini to set it up to run _before_ the real login screen. Mine would save the username and password to either a shared network drive or a local file, then display a "password error" and exit to the real login prompt.
What got me in the end was that a "friend" used the same trick and started copying people's files from their network accounts to his own. My suspicion is that when he eventually maxed out his quota, the system must've warned the network admin... a cursory look would later reveal he'd copied some teacher's thesis files, and that was a big no-no.
Eventually this incident would land me my first computer related job as a junior tech support/network admin.
I once set up a job on a co-worker's VAXstation 3100 that would run XCrab every 10 minutes. What is XCrab? It is an annoying little program: when it runs, a crab scurries across the screen, grabs the mouse pointer, and runs off the screen with it. I renamed the job PRT$SYMBIOINT so it looked like part of the print spooler; he never figured it out.
In my day we ran irc servers and MUDs on the professors workstations, which were of course tied in to the campus-wide NIS/NIS+ system, and often more lightly loaded than the big iron (that being VAX6000 or 8000, or if you were lucky a Sparc20).
I wrote some code to run on the office computer that we used to stream music from. At a random time every 24-48 hours, it would turn its volume up and say “I love you” using the Mac OS whisper voice.
In high school in 1988, a friend and I discovered a vulnerability in the netware deployment of 30 IBM PS/2 Model 30-286s in our brand new computer lab that allowed us to insert programs into the autoexec netboot sequence. Prior to that, we'd been hacking on the (brand new at the time) VGA registers and figured out how to switch from 80x25 text mode to 320x200 256-color graphics mode with no flicker or glitches, as both modes have a refresh rate of 70Hz. So he created a TSR that preloaded a digitized picture of a clown face (a popular upload on BBSes at the time) into A000:0000, and after about 4 minutes, it would display the clown face for a few frames and then immediately switch back to whatever the user was working on. We gave ourselves away because we couldn't stop laughing in the corner of the room watching all the students get very confused/horrified looks. One bonus was the comic timing of a student calling the instructor over, having him stare at the screen for 3+ minutes, turn away, then have the clown face flash when he wasn't looking.
My friend's name was Brian, one of the smartest people I've had the privilege of knowing. Ten years later, we created mobygames.com.
My gag was a bird chirp that would play every so often--typically minutes between chirps. There were several random numbers that went into making the chirp so it would be different every time and every noise it made would be shifting frequency, never a moment of a fixed tone (back when speakers usually just went BEEP.) Leave it running on a machine that wasn't being used...
Once, when doing tech support at a local hospital, one of the nurses called us stating that she had "some sort of weather report" up in a window on screen, but she couldn't click it away because the mouse would go under the window. This obviously piqued my interest, so I told her not to touch the computer until I arrived.
Since the hospital was quite big, it took me about 10 minutes to reach her workstation, where she exclaimed that "the window closed on its own a minute ago. I swear it was on there for half an hour".
That, coupled with the layout of the desk and her description of the window, led me to a hypothesis. I pressed the button on the monitor (which she had unknowingly been pressing with the corner of her keyboard), which called up the display's on-screen menu. It showed the sun (brightness) at 100%. The mouse pointer would indeed go under it.
Then, I hiked back another 10 minutes to await the next phone call.
I had to do a tech support call once where 'the screen keeps glitching out I think I have a virus' (The Net was still in theaters). I showed up, removed the boombox with two large speakers from on top of the CRT and like magic the problem was solved!
I have the misfortune of my rescued VGA CRT displaying incorrect colors in the corners, unless I either leave magnets on the screen, or flip the magnets while degaussing and remove them before using the display.
CRT gaming is fun, both for older console and PC gaming, and even modern games to a degree. The colors are richer than cheap LCDs, the scanlines and/or smoothing act as a beauty filter over visuals, and the strobed image produces unmatched motion clarity (outside of modern gaming monitors with backlight/OLED strobing modes) though it can get uncomfortable for 60fps games (or god forbid 50fps PAL). I've written about it at my blog, at https://nyanpasu64.gitlab.io/blog/crt-appearance-tv-monitor/.
I also did tech support in a hospital. I'm not sure IT is a value add in that environment, at least not as much as IT thinks it is. Nurses are busy doing things like saving lives and caring for people experiencing the worst day of their life. On top of that nurses have to deal with poorly implemented and maintained workstations. Then when they ask for help the very people who created the problem in the first place come and belittle them.
I'm not accusing you of anything. Just noting my own perception that IT people tend to be very smug and see the world from an IT-centric perspective. Nurses aren't dumb and they certainly aren't lazy. They just have better things to do with their time than deal with inconsequential IT issues.
As I recall everything in the hospital that was actually critical to providing care had some specialist that maintained the system. The computer running the MRI machine wasn't bound to Active Directory and might not even be on the network. Issues with it were routed to GE, not the guy that fixes printers.
Ha - I know where you're coming from, but, like every opinionated SOB on this site, can't help but throw in my own opinion*:
1) The parent comment seems completely neutral and non-judgmental. It reads as simply "I had this experience", essentially a set of facts (compared to the frequent-enough comments that really might warrant a response / "note of approbation").
2) Stereotypes are the epitome of judgmental thinking, language, attitudes, etc. That may sound judgmental itself, and I apologize for that - that's due, in part, to connotation "issues" with words we use (although, I can't, obviously, claim that even removing those removes all of that issue - I am literally labeling and characterizing strictly in denotation). To be fair, people, in general, can be very smug.
In my experience, there has been an "enrichment" of that in "IT", but, for example, try talking to some of the especially high-ranking surgeons, say ... or, certain professors. In particular, I've personally come across a somewhat bimodal distribution, I think. Some people who reach "very high levels of expertise" are very humble ("right-sized" - not subservient or etc.) and super nice (surprisingly willing to help and talk to even "the layman" / "novice"). Then, there are the outright "arrogant bastards".
So, to add some kind of summation - calling out "IT people" specifically, and especially the parent comment ... well, what's the point? Again - not meant to be rhetorical or aggressive ... but, I think many would agree that people in general should cut that $h1t out.
This opinion brought to you by my "Big Giant Head... remember, when you're thinking of giant heads, think [my] Big Giant Head..."**
* Which, we all know is likely to be comparable to ... something else everyone's got. Though, I'm telling myself, of course, that I've got something useful to say. "Caveat lector". :)
** With apologies to the writers and other people involved in making "3rd Rock from the Sun"
You are way too apologetic toward the original post. It was not just a set of facts. You could feel the superior attitude; it's kind of obvious in the last sentence. The person you answered did well to call it out.
I read the post on Mastodon as having a tone of amused incredulity at an unusual situation. Yes, some unusual habits of the professor may have contributed, but I don't read any negative judgement there.
Somebody got burned too many times with IT people, I think.
Every group of people that's even remotely specialized will have its very smug assholes. I've met IT people who just want to help you get your shit done, and those who look down on you for being a "stupid user". I've also met doctors who think they are God, and doctors who want to be a partner in your health journey and are totally non-judgemental (at least outwardly).
I'm sure you'd find the same in any similar field. Engineers. Pilots. Mechanics. Nurses.
And yes, everyone looks at the world from their own personal, centric view. It's the only way we, as humans, know how. After all, how would you get to your oh-so-important job if not for the road engineer, or the mechanic who built your car, or the city designer who laid out the routes you use every day?
Some are more aware of main character syndrome [1] than others... and some are just less of an asshole than others.
> They just have better things to do with their time than deal with inconsequential IT issues.
I understand a nurse shouldn't need to be a computer expert, but where to draw the line? I'm not a car guy, but I am expected to maintain a working vehicle and obey the rules of the road. I'm not a nurse, but am expected to comply with their instructions and understand to some degree what they're doing while under their care.
And because of that, all the important interfaces to cars are standardized so that anyone can operate any car. Not just for convenience or even for efficiency but for literal safety.
But the windshield wiper control might be anywhere and take any form, and because of that it's frequently gotten wrong and fumbled.
This monitor settings-menu button is just the stupid windshield wiper knob. Neither the nurse, nor the monitor manufacturer, nor the IT industry as a whole needs to do anything about it, for this particular example.
But there are infinite other examples where the random haphazard thoughtless unaccountable inconsiderate nature of everything in IT is at fault and not all the users.
The original comment didn't read very smug to me, but more generally you have a point. Most people (maybe all of us?) have much better things to do than learn the quirks of Windows. It really is the software/computer manufacturer's or maintainer's fault when things work in an unexpected way. I mean, why do we even have general-purpose computers and operating systems on machines designed to do one single job (like medical equipment)? Why does the system menu even exist in a setup where it's never needed? The only reason is that the computer and software industry is really a duct-taping industry, where stuff gets slapped together in whatever way is easiest for whoever is doing the duct-taping - not in the way that's easiest for the user or best for the actual application.
In no other engineering discipline would this shit fly; imagine building a bridge and just constructing it from rocks, old cars and whatever scrap they happened to have lying around the construction site and then fixing it on the go as some of that crap quite predictably falls apart. This is basically what all of computer/software industry is doing.
While there may be some truth in your comment, you do miss the entire point.
General-purpose computers with general-purpose operating systems on them are exactly what it sounds like: general purpose. A PC is comparable to a CNC mill in a metal workshop: the operator needs a certain base level of knowledge and skill to use the machine safely and purposefully. In fact, general-purpose computers, the "enterprise" controls on them, and end-user programs are so unimaginably good and miraculously robust that people with little to no formal training, and only a vague understanding of the operational concerns of the system, can use it to consistently yield a net positive result - even when users actively try to bend the systems in ways they are explicitly not expected to be bent. I don't know whether this is because of or despite the systems being duct-taped from scrap, but nevertheless these systems are amazingly resilient and foolproof.
> In no other engineering discipline would this shit fly; imagine building a bridge and just constructing it from rocks, old cars and whatever scrap they happened to have lying around the construction site and then fixing it on the go as some of that crap quite predictably falls apart. This is basically what all of computer/software industry is doing.
On the other hand, in no other engineering discipline are engineers and operators expected to deliver a pedestrian bridge that is movable, can quickly scale to carry a military column, and can host the opening parade before the foundation is even poured. In no other engineering discipline do you start by building the façade and figure out the internals later, based on use. The more I understand the inner workings of computer systems, the more I am amazed that they do not collapse under their own weight and remain operational.
I guess it is simply that a new discipline has suddenly appeared in the world.
And unlike in my youth, when 99% of all programmers who had ever lived were still alive, today... well, we still have lots of them alive. I still remember the first time I heard of a programmer or IT guy dying: he was on a Korean airliner shot down by the Soviet Union.
> end user programs are so unimaginably good and miraculously robust that people with little to no formal training and very vague understanding of operational concerns of the system can use the system to consistently yield net positive result
... after destroying the keyboard and swearing to kill the guy who made such a parody of a program, one in which decades of GUI experience were thrown out the window.
Yes, I can use Windows and the accompanying Microsoft software "to consistently yield net positive result", but at the expense of my mental health.
> Nurses aren't dumb and they certainly aren't lazy. They just have better things to do with their time than deal with inconsequential IT issues.
I don't want to reduce this discussion to name calling. The absolute majority of my tickets were along the lines of "user entered shit data and now they are getting shit data out, help". A minority of users would put in the effort to thoroughly document the actual state of affairs. A huge portion of users did just the bare minimum to tick a mental box of "I did a thing". If they could find a way to discharge a patient from their unit with the lowest number of clicks/keypresses, they would take it, even if it meant marking the patient as deceased (not much of a hyperbole, by the way).
> Nurses are busy doing things like saving lives and caring for people experiencing the worst day of their life.
That's the problem with medical personnel. They see the immediate act of "saving lives", e.g. putting a patient in a scanner ASAP, as their only job. All the documentation stuff is usually seen as a hindrance, sometimes as a way to cover someone's ass. Under very rare circumstances it is seen as an integral part of patient care. How do I know that? Every implementation of rule checks would be met with backlash rather than appreciation for an extra pair of robotic eyes.
This. Same as software engineers doing production support. It's easy to glorify putting out fires and hard to take credit for careful attention that prevents fires. Preventing fires extends a lot more lives.
>"user entered shit data and now they are getting shit data out, help" [...]
> If they could find a way to discharge patient from their unit with lowest number of clicks/keypresses they would do that, even if it meant marking the patient as deceased (not much of a hyperbole, by the way).
That's a classic, and not only in hospital IT. Apparently you expect the user to serve the process and not the other way around. That's not how it works. If the goal of the user (or the organisation) is to do as much of X as possible (maybe even incentivized), the process had better facilitate that, or it will be abused and/or circumvented.
> Apparently you expect the user to serve the process and not the other way around.
One of the ways to classify process control is by degrees of freedom: closed-loop, semi-closed-loop, open-loop. Anything but closed-loop processes require users to serve the process. That's by definition and there is no way to escape that fully. Sometimes you can make manual state adjustments to the process, sometimes you can "correct" aggregated data fed to outer process, sometimes you can't do anything about it.
In SCL/OL systems the user is the oracle. Every business process management system (ERP) will be at least as complex as the process it models. The more generic the system, the less context-aware it is, and thus typically the more complex.
You could argue that an SCL/OL control system should try to infer things to help users, but in fact that impedes correctness: users are less likely to enter a purposefully wrong value (a mental default, or validation-passing gibberish) than they are to accept a prefilled default value despite it being somewhat incorrect - especially if the inferred default is usually mostly correct.
A very crude example: think of a compass-based boat steering control system. It needs occasional course-correcting input to account for wind/current drift. Suppose you implement a periodic prompt for course correction. An empty prompt is more likely to be filled with a non-zero value; a default value of zero is more likely to be accepted unmodified, resulting in a more jagged course.
I don't think we are really in disagreement here, but you are talking about steering the cart through the aisle and I'm talking about taking home groceries. If your steering process is hindering my taking home groceries, I will ignore the cart.
Less farcical, you are talking about means, I am talking about ends. Ask yourself what a successful organization needs to take care of.
On the topic of forcing users (i.e. requiring them) to adhere to the process: you had better have a good explanation for why the process requires hindering the user in completing the task at hand. I know there are good reasons why we ask users to jump through this hoop or that, but we need to make sure the user knows those reasons and sees why they are not a hindrance to completing the task but part of the completed task.
> you are talking about means, I am talking about ends.
You see, these things are not necessarily separate, though many people fail to understand that. Continuing the shopping analogy: imagine a shop that sells some candy in multiple flavors. Do you need to weigh them separately when a customer takes multiple flavors? The process requires you to weigh them separately. Nothing goes visibly wrong if you don't - but the stock records drift, and eventually the managers will resupply you with a single flavor.
> but we need to make sure the user knows these reasons and sees why they are not a hindrance in completing the task but part of the completed task.
So what's the task at hand? Is it simply to sell candy to the customer, or is it to keep the shop operational and properly stocked, and then sell candy? For the shopkeeper the process may seem to start at customer entry and end at customer payment. For the shop, that is just part of a grander process.
It seems to me that you argue for single-person-scoped processes to be treated as standalone and micro-optimized, regardless of how they fit into the master process. I am trying to argue that it is the master process you have to care about.
> That's the problem with medical personnel. They see the immediate act of "saving lives", e.g. putting a patient in a scanner ASAP, as their only job. All the documentation stuff is usually seen as a hindrance, sometimes as a way to cover someone's ass.
Thankfully, we have reasonable people in management: people who understand that maintaining proper documentation (and properly shifting liability) is the one true purpose of the system, and that the "medical caring" (or whatever it's called) is simply a by-product that falls out of it naturally. If only that stupid line personnel (and they must be stupid, because why else would they have applied for such a lowly-paid job?) would understand that, and would start changing the world to match the written documents and instructions (not the other way around - that would be blasphemous).
The sad thing is that I wish you were joking.
Again, to all SW engineers out there: if there is only 1 (one) input field in the application, why (in A.D. 2023, 55 years after the Mother of All Demos was presented, according to Wikipedia) does it not already have focus, so that I don't have to click on the field before typing just to keep the characters from going somewhere else?
> If they could find a way to discharge patient from their unit with lowest number of clicks/keypresses they would do that, even if it meant marking the patient as deceased
Real programmers don't comment their code. If it was hard to write, it should be hard to read.
> Every implementation of rule checks would be met with backlash rather than appreciation for an extra pair of robotic eyes.
When you are under pressure, the last thing you want is "an extra pair of robotic eyes" which constantly gets in your way.
Or, you DO want a rule check, where it reduces the incidence of critical errors that are known (due to examination of prior failures) to be more likely when under pressure.
I see this in a lot of software. Any software person wants to solve a problem, but the solution becomes part of their ego, such that in their mind it is the cornerstone of the entire business, whether that's true or not. But the person they're solving it for just wants to get something done - they don't want software in their way. The best software is as invisible to the end user as possible.
In university, I was bored one day and added this to another student's .bashrc:
echo "sleep 1" >> ~/.bashrc
He used the Solaris SPARC machines to do his work, and every time he logged in it opened 4 terminals or so (not sure why). By the time he asked the help desk for help, it was up to a 6-minute wait after logging in before he got a prompt.
Once upon a time a (famous) CS professor reported that his email was annoyingly slow.
He was using the good ol' mailx MUA.
He never deleted emails.
He never moved emails out of his inbox.
He had been doing this for many years.
His .mbox file was many, many megabytes.
Apparently, he had never noticed the problem before because he very rarely logged out, but IIRC some system work meant he needed to do so several times just prior to his complaint.
This story is either fake or exaggerated: the number of roaches in xroach is constant, and there's a hard limit, as they are statically allocated (I checked the original source code).
The headline put me in mind of a different memory I had of the campus BBS from undergraduate days, and then I clicked through and realized that the author was actually the admin of that BBS!
Anyway, in addition to the BBS there was also an IBM mainframe (?) running VM/SP that you could connect to, and somehow that was how you got to IRC.
One night several of us spent hours chatting on IRC, and the next day we got called into the campus computing service where it was patiently explained that some professor's overnight batch job running stats had failed because it was constantly being interrupted by interactive-priority jobs ... i.e., our chat sessions. Which we should now stop.
I remember writing a passionate open letter using my vague knowledge that had trickled out about "hacker culture" in places like Berkeley where students were surely right now exploring these new things called "Usenet" and "the Internet", arguing that even though we weren't doing anything fancy like running stats for an economics paper, and even though we didn't know what any of it would actually amount to, the important thing for our education was that we had the chance to experiment with it for ourselves...
-rc roach_color
    Use the given string as the color for the bugs instead of the default "black".
-speed roach_speed
    Use the given speed for the insects instead of the default 1.0. For example, in winter the speed should be set to 0.1. In summer, 2.0 might be about right.
-roaches num_roaches
    This is the number of the little critters. Default is 10.
-squish
    Enables roach squishing. Point and shoot with any mouse button.
-rgc roach_gut_color
    Sets the color of the guts that spill out of squished roaches. We recommend yellowgreen.
I was in college in the mid-1990s. Our school had hundreds of Unix workstations from Sun, HP, DEC, IBM, and SGI. Everything was tied together with MIT's Project Athena, which used AFS (the Andrew File System), Kerberos, and the Zephyr instant messaging system.
Your home dir would be mounted as /school/login
The directory paths would be really long, like /afs/school/math/maple/maple5.3, so there were 2 commands named add and attach to mount dirs under /school.
Running add maple5.3 would give you /school/maple5.3; it would source the .environment script in that dir to set up the tool and add /school/maple5.3/bin to your PATH.
The attach command did the same thing but did NOT source the .environment file
If you needed to access another student's dir it was explicitly written in the intro computing class book to use attach and not add
I had a lot of scripts and utilities that friends would use, and they told other random people that I didn't know about them.
So of course I made a file that would be updated any time I logged in, showing my current machine name. Then I made a .environment file that would xhost + my_machine and send me a Zephyr message saying "I just added and xhosted your machine".
I would wait a few minutes and then run xflip and xmeltdown and set the -display to their machine. If they were in the same computer lab as me I would see them start to freak out. These programs basically froze your display for a few seconds while inverting the screen or causing everything to appear to melt to the bottom.
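For anyone who missed that era, here's a minimal sketch of why an xhost + (or a targeted xhost of one machine) was all it took; the display name is hypothetical, and the real xflip/xmeltdown did far nastier things than ring a bell:

    /* cc -o poke poke.c -lX11 -- a tame illustration, not the real xflip */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        /* Succeeds only if the victim xhosted our machine (or ran "xhost +"). */
        Display *dpy = XOpenDisplay("victim-machine:0");  /* hypothetical name */
        if (dpy == NULL) {
            fprintf(stderr, "their display is not open to us\n");
            return 1;
        }
        XBell(dpy, 100);       /* ring the bell on their terminal, full volume */
        XCloseDisplay(dpy);
        return 0;
    }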
For OS class we had to write a distributed game on top of the Andrew File System. Debugging a nasty crash, I managed to distill our team's code down to 10 lines that could remotely crash/reboot any chosen AFS workstation on campus. Thus I regularly emptied workstation rooms at CMU (most students just gave up after a couple of sudden computer reboots) so that CS friends could work on and finish their homework. I may have also crashed some professor workstations from time to time, idk.
For the last 25 years, at 8 different jobs, everything has been over NFS. Every company uses Unix groups, and some have used groups to manage project access. Sometimes it can take 3 days to get added to a group.
When I started college in 1993 we learned in the "Intro to Computing Environment" class our first semester how to create ACL groups.
I have a degree in computer engineering, so I understand binary, octal, and hexadecimal, but chmod 755 or 644 or whatever is not exactly intuitive.
AFS permissions are much easier to understand. When we had a group project we would all make a directory for that project and only give access to the other people in our group. We could give them read or write and everything worked great.
I know NFSv4 has ACL support but I've never seen it actually used anywhere.
Back in the day when a 4x CD-ROM burner was a luxury, we got a phone call from a customer we shipped a CD to. Would not read. Did a second burn at 1x speeds, tested on several machines, and mailed out the disk. Would not read. Burned a third and tried on every unique setup we could find on the most premium disk we could buy. Would not read.
I drove out to do the manual install a stack of media in hand. Got to the customer site - and watched in horror as they put the CD-ROM in the 5.25" floppy drive. It fits.
The other trick, aside from xroach, that we would play on professors was to take a screenshot of their desktop and set it as their wallpaper, windows open and all. Watching a professor close an app window, still see it on screen (in the wallpaper behind where it had been), and try over and over, futilely, to hit the "x" to close it on X Windows was a beautiful sight.
It wasn't our fault professors typically ran "xhost +" in order to make their lives easier.
Source: my 25 year-ago mischievous self ... Sparc. Miss those days.
Once upon a time, we had several Sun Enterprise 450s that my college used to teach Oracle to students. It was well underutilized and the hit game Quake had come out. Of course we, the IT support staff, ran a Quake server and invited all of our friends to play on it. Imagine our surprise when one of our professors we support came into our office and said "Hey our Oracle instance is very slow, can you guys take a look at it?". Whoops, we shutdown the quake server and he later sent an email "I don't know what magic you guys did, but the performance is amazing!".
Another fun one, not targeted at professors, but at our student compute lab. We had a lab of 25 Sparc Ultra 60s that was pretty well utilized. Well one day, before I became a sysadmin, I was thinking to myself "all of these servers are rsh enabled, what if I logged into all of them". So I wrote a script that would cut up an AU file (sun's audio format) into tiny parts and then wrote a program that would synchronize with each other and play a different part out of a different workstation. I vividly remember playing a screaming sound in a ring around the entire room at a low volume before playing the entire sound out of the three middle workstations at full volume a few seconds later. The lab was full. At least 15 people immediately noped out. I was sitting in the back cracking up.
Another time the sysadmins of the same computer lab left rwalld running. So I sent an rwall "The system will shutdown in 5 minutes", or whatever the shutdown message was. The professor at the time got angry - "they always do this, they perform maintenance whenever they g.d. please" - and stormed out of the room. Suddenly the professor and an angry IT administrator were peering in the door and pointing at me. They threatened to revoke all of my access, which would have caused me to fail out. I just shrugged; I knew what they were up against, and instead asked to work for them. The anger turned to surprise, and I worked there for almost 6 years before leaving for greener pastures.
In other news, I'm pretty sure I once cleaned a keyboard containing live roaches, because an engineer ate messy hamburgers over it at lunch every day without ever cleaning it. It smelled and looked like a dumpster. There was no swapping keyboards, for political and economic reasons. If you're a badass nuclear/electrical engineer, businesses tend to tolerate quite a lot.
In other other news, I once ran John the Ripper against a NIS password hash dump (`getent passwd`) from my university CS department's computer lab network. Within about 3 minutes, I had the login passwords of 5 professors, a dozen grad students, and about 40 students.
I did IT support for the CS department at university back in the 90's, and we had a professor who would email us once or twice a year asking us replace his monitor because it had gotten blurry.
We'd go in, wipe the thick layer of dust off the screen, and it'd be as good as new.
For some reason, straight raw performance and theoretical computer science are not really friends, nor is it expected that they should be. To a certain extent, I understand why. Science at a university is done for the sake of the knowledge. Since there is no time-to-market, many things established professors do or make you do are not really efficient or market-ready... But... I am HPC Cluster admin at a university, and sometimes I wish scientists were a little bit more trained in real-life CS.
Case in point: many years ago, I happened to look at the source of a project running on our cluster. It was written in Free Pascal. The argument of the professor responsible for the project was that algorithmic complexity is all that counts when thinking about runtime; compilers optimizing for a target architecture were not part of his understanding of the world. And since I had nothing to do that weekend, I reimplemented their code in C++. And yes, the resulting code was 5x faster, just from using a battle-tested optimizing compiler. It turns out a 5x speedup is relevant to a project that was projected to use 5,000,000 CPU-core hours - that's 4,000,000 core-hours saved.
I was rather young at the time, which made it all the more surprising to me. It was pretty clear to me that this rewrite would bring a noticeable performance boost. To this day, I sort of wonder how you can end up with so much theoretical knowledge that you start to completely ignore reality.
I think one of my simplest - yet most evil - pranks was to LD_PRELOAD a modified `malloc()` on a colleague's computer. Where common practice was usually to have it blow up the memory or not return enough, mine worked mostly correctly: in most cases it would allocate as requested, occasionally it would allocate slightly less than requested, and in rare cases it would blow up the memory.
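A minimal sketch of that kind of interposer, assuming Linux/glibc and the usual dlsym(RTLD_NEXT, ...) trick; the file name, probabilities, and shortfall are made-up values for illustration, not the originals:

    /* Build: gcc -shared -fPIC -o flaky_malloc.so flaky_malloc.c -ldl
     * Use:   LD_PRELOAD=./flaky_malloc.so some_program
     * Caveat: on some libcs dlsym may itself allocate on first use;
     * good enough for a prank sketch, not for production interposers. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdlib.h>

    void *malloc(size_t size)
    {
        static void *(*real_malloc)(size_t);
        if (real_malloc == NULL)   /* look up libc's malloc on first call */
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

        int roll = rand() % 1000;
        if (roll == 0)
            return NULL;                   /* rare: blow up outright       */
        if (roll < 50 && size > 16)
            return real_malloc(size - 8);  /* occasional: slightly too small */
        return real_malloc(size);          /* usually: behave correctly    */
    }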
You could argue I shouldn't be proud of that one, considering some classmates probably went crazy wondering why their systems behaved erratically, and it probably didn't help with some of their assignments. Generally our pranks at school were rather tame, and if I recall, I had reserved that particular one for only 2 guys who were extremely d-ckish. I can't even recall their names now.
Other pranks were the usual stuff: switching the keyboard mapping to Dvorak, swapping LTR to RTL, randomly shifting the clock by a few minutes (also rather mean when you need to hand in assignments, I suppose...), binding keys to the most annoying shortcuts possible, abusing unsecured X displays, opening the CD-ROM tray on ssh-accessible machines... Basic stuff.
I also once grabbed all the logins/passwords for an entire incoming class. In my defence, I didn't use the accounts for anything but changing the default passwords (so, technically and legally, I did access the accounts, I suppose). I was just making a point to IT that assigning default passwords with guessable sequences (if I recall, major + class year) was a bad idea, and that for some students it could take weeks before they'd change them, leaving accounts open to abuse (e.g. for students who dropped out, were sick the first days, or simply didn't use the labs much). IT wasn't pleased by the surge of people contacting them to ask why their passwords didn't work.
When scientific distributed computing became rather prominent, I also clustered an entire classroom (I didn't want to hit the whole school network, only rooms that were underutilized). That got noticed by an admin, but he didn't know what it was, and I just said it was one of my projects for graphics computing (which was indeed a real project for which distributed computing was authorized).