Hacker News wasn't around that whole time. But the fact that spacejam.com was still around was a big deal a few years ago (I don't remember when I first heard that).
From some googling, it looks like 2010 is when this made news (via reddit).
What's truly impressive is if you decide to creep a bit, you'll see that loads of users from that thread still post on HN pretty actively.
HN has an insanely high retention rate for being just a little news sharing site. Most people get bored, pissed off, or uninterested at some point and leave. Looking at 10 year old tweet threads or reddit comment sections is basically a graveyard in comparison. Not sure what keeps people sticking around here.
In a couple days, it'll have been 10 years for me. Crazy.
I clicked about 20 profiles, at least 70% are still active.
Actually, I think this phenomenon is probably pretty common for websites that are still rising, or at least holding steady (like both HN and Reddit). Their initial users don't just leave for no reason. Less active, perhaps.
I remember seeing that in the news when I was 16 years old. At that time the Internet was just starting in my country, so the news came via a local news broadcast. It was crazy.
Two members stayed behind. I heard recently they still respond to email.
>two group members were briefed about a side mission. They would remain on Earth – the last surviving members – and their job was to maintain the Heaven’s Gate website exactly as it was on the day the last suicides took place.
>And for two decades, the lone Gaters have diligently continued their mission, answered queries, paid bills and dealt with problems.
Alan Moore wrote a comedy comic where a traumatized Kool-Aid Man is publicly accused of involvement at Jonestown and insists on the record, in a tell-all interview, that it was the Flavor Aid man who committed the crime.
I know it's partly nostalgia, but something about both of these sites feels more fun to interact with and browse through than almost any modern website I visit. The web used to be so much fun.
Navigating these sites feels like exploring a labyrinth. I feel like I can spend an hour on those pages, clicking all the hyperlinks, trying to consume all the content they have to offer.
That’s an extremely small variation in score. Why would that make you think they’re making stuff up? Networks and servers don’t always respond with identical timings.
It's pretty darn hard to hit their page delivery targets unless you're serving from something close to their testing site. https doesn't help, because it adds round trips (www.spacejam.com doesn't support tls 1.3, and I wouldn't want pagespeed to be using a resume handshake for initial load anyway).
The main tips I can give for a high page speed that most websites don't follow: avoid large header images, make sure text is visible before custom fonts load, use minimal CSS (and/or inline the CSS for the header at the top of the HTML), and don't use blocking JavaScript, especially huge JavaScript-triggered cookie popups (the blocking JavaScript plus the big delay to Largest Contentful Paint will kill your score).
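To make that concrete, here is a minimal markup sketch of the last few tips (the font name and script path are placeholders, not anything from a real site):

    <!-- critical CSS inlined in the head instead of a blocking stylesheet -->
    <style>
      body { margin: 0; font-family: BodyFont, sans-serif; }
      @font-face {
        font-family: BodyFont;
        src: url(/fonts/body.woff2) format("woff2");
        font-display: swap; /* show fallback text immediately, swap in the custom font when it arrives */
      }
    </style>
    <!-- script deferred so it never blocks rendering or Largest Contentful Paint -->
    <script src="/js/app.js" defer></script>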
It's quite possibly literally just a bunch of static HTML files. There is not much maintenance cost there. They probably run all their static sites from the same webservers. It may very well be the same effort to keep it as to delete it.
It's running on a fairly current version of Apache, but aside from keeping the server up to date, it conceivably could be running the same setup for years.
For an organization the size of Warner Bros, it's essentially free, as they are literally doing nothing to the server for that site specifically.
However, it does look like it's running on AWS using their global accelerator (globally optimized traffic) so I assume it's sufficiently robust.
I would guess because of the trademark: keeping a bunch of static HTML files around is not much cost otherwise, and it certainly garners some attention, as this post on HN indicates.
“The site owner” is Warner Brothers; they likely have a department responsible for keeping movie-related websites up - and might well do it in-house, after the AOL acquisition. A site like this is basically free to host: a domain registration for 20 or 30 years will attract a massive discount, space on disk is probably less than 50MB, the bandwidth used is minuscule... after you set up log rotation by file size and automated domain renewal, you can basically forget it exists.
I made a few close friends there in the chat, where other hacker wannabes and philosophy neophytes would gather. The chat forum had fun weird bugs that my friends and I would play with in order to edit past posts, or obliterate each other's posts. It was a wonderful little corner of the web for a short while.
That was just one part of that great site. In 1999, the Internet still felt new and full of potential. I loved all the concept art posted there, the trailers, and finding easter eggs.
Years later I recreated the full chat for my friends, including the bugs. It
Interesting. I would have sworn that I remember “the Space Jam website is still up” being a common fun fact on the internet when I was in college 10 years ago.
It’s such a nostalgic feeling of the earlier web, back when interest groups, universities, fan pages, and web-rings ruled it. Back before it became commercialized by greedy folks who threw ads all over the place, tracked everything you did, and spammed the hell out of your inbox.
I miss the good ol' days and what the web was intended for.
One of my first projects was maintaining the site for: Looney Tunes Teaches the Internet.
The best thing about the early web was that nobody knew what it was for. So people just did things, without considering if it was "right."
Nowadays, you'd never get Bob's Labrador Page: "Hi! I'm Bob. I live in Lebanon, Kansas. I like Labrador dogs. Here are some pictures of my favorite Labradors!"
I don't really find this argument convincing because it was plenty easy to publish your own webpage. I did it at 9/10 years old and I don't want to believe that your average adult has less capability than a child.
I think we're doing a disservice by infantilizing people.
People around the world are more educated than at any point in human history [0][1], including in the ability to code.
The possibilities are still out there; it's not like people are forbidden from building their own web stack from scratch - you can still buy a VPS, bare metal, or a Raspberry Pi with a static IP or DynDNS - and just code whatever you want.
Of course, the Internet is not what it was in 1996; doing trivial things like publishing cat/dog videos and photos is easy - as it should be. The amount of content is enormous, and one can find amazing, incredible, brilliant things - maybe not necessarily at the top of the FB/IG feed, but it is still out there.
I am no more infantilizing people than pointing out that most people can’t change the oil in their car. It’s a speciality where most pay a service fee to get it done for them. And they go about their lives just fine.
Publishing a webpage with 1990s tooling is roughly in the same category of complexity.
It is far easier to pay Wix, or even better, to use one of the myriad photo sharing sites like Instagram, Flickr, or Facebook. Which is why they’re so successful, and why the internet is a far more widely used and useful platform today than it was 20 years ago. It’s a disservice and carries no virtue to insist on unnecessary complexity for those who really couldn't care less about computers or how networks work.
To my knowledge, the typical ISP of the 1990s provided free web hosting. (At least that was the case in my neck of the woods.) It may not have been much, but it was enough to put up a personal website that was not plastered with advertising.
The necessity to write your own HTML (or use tools like Dreamweaver and Frontpage) and the anything goes design mentality may have resulted in some atrocious sites, but it also made the web feel more personal. While there may be some ability to tweak the design while using a CMS, it is much more constrained and sites feel much less personal.
Yep. I took a high school computer science class back in 1995. My end of year project was a website about my MUDding adventures, and I put it up on the free web hosting from our ISP. I wish I still had it. I remember thinking it was pretty terrible even back then.
Quite different. Old homepages were more "building", less "sharing". Beyond the coding, there was planning and categorising. You put thought into the interface and structure.
Seems like a small thing, but it's the difference between being a hobby mechanic or just owning a car. Or buying a desktop vs building one. You end up with the same thing, but it "feels" like a very different endeavour.
This is rose-colored glasses IMO. The vast majority of pages were just
...
<img>
<br>
<br>
<img>
...
placed images, with no standardization.
If you want all that hobby mechanic stuff, you can do it all the same now with Firebase or Pages or whatever, just like you could with FrontPage or Dreamweaver back then.
That's true, but that small amount of effort is still about 1000x more than is required to use Instagram.
And it was really the discovery of such web pages back then that was the thrill. It really did feel like exploring an alien planet or following a treasure map of link exchanges. Each click was an investment of a couple minutes at the rate pages loaded, so you really couldn't explore every link. And browsers didn't have tabs -- you were looking at one page at a time and maybe bookmarking it for later.
The editorial and stylistic independence is what I miss.
Absolutely: there is more stuff on the internet than there was then.
But! How much of that stuff is creatively controlled by actual end users? I'd say < 10%.
The large platforms are right out - restyling Facebook?! The build-a-site platforms all look somewhat similar because form follows tooling defaults. And because of the professionalization of web technologies, laypeople are locked out from just making their own page (or at least don't believe they can).
I think that’s a beautiful way of describing the differences.
These days the web is all Ikea flat pack. It does its job, and in many cases it works really well for the price. But the individuality has gone, since people aren’t just hacking together something based on their own tastes and limited carpentry/web development skills.
While true, it also puts up a hurdle - most people wouldn't bother learning how to build a website because of how difficult it looks.
But now putting pictures of your labrador on the internet is accessible to everyone. In practice, there's thousands of times more labrador pictures on the internet now. However, it's lowered the value and uniqueness of said labrador pictures.
Nah, it feels very different because Instagram et al have gamified the whole thing. Bob's labrador page is its own space, separate in a way from the rest of the internet. Bobslabs on Instagram is implicitly competing with celebrities and 'influencers' whether Bob likes it or not, and that changes the feel.
Nostalgia factor is very real, but I don't think it captures just how novel the internet was. Communicating en masse across the world had never happened in human history. And it made you feel like an explorer of an alien planet, at least until you hit one too many "under construction" pages late at night.
So yes, the Space Jam site itself was less about wowing, but it gave a feeling of interacting with its creators on a more intimate level than other movie marketing. They were using the same tools that any one of us could use ourselves, unlike the millions spent on the movie. The Space Jam site looked much like dozens or hundreds of others from hobby coders or engineers in their free time.
And for me it's more melancholy than fun now, because it reminds me of that feeling of unbounded optimism that the early internet had.
It was very different. It was different in construction, discoverability, intent, and consumption. This isn’t nostalgia. I’m not particularly nostalgic about that time for other reasons and I was neither a kid nor a teen.
Nothing I do on Instagram can possibly make it as personal as my personal sites were. That’s not how I interact with Instagram at all and it couldn’t be even if I tried really hard. And even if I managed it, it’s not how it is offered by Instagram and not how it would be consumed.
That said, I don’t think that web is dead. It’s just a lot less discoverable and there’s a lot more noise. One of my favorite “old web” sites: https://www.fieggen.com/shoelace/ I’m not even sure it’s actually old. It just is more like the old web.
Notice the first comment (towards the bottom of the page): “Low on modern-web-BS...” There’s a qualitative difference.
The shoelace site is wild, but it got me thinking. Detractors might say that Wikipedia would fill the void for this type of information.
...but I don't think that's true. Ian's shoelace site information would instead be edited ad nauseam by a consortium of shoelace enthusiasts. Wikipedia doesn't allow for personal opinion or, in some cases, for specific things that aren't well known and can't have their history sourced properly (citation needed?).
I mean I get it. I still call it nostalgia because as you point out, it's still completely possible to do it's just a smaller overall percentage of what is on the web. I run a niche site that gets ~20k MAU:
I consider it the ultimate in no-modern-web BS. The only "modern" thing I use is GA, and even then, honestly, I'm looking at replacing it with one of those 90s counters.
I was surprised to find a great example of a site like this recently, "How to Care for Jumping Spiders": https://kozmicdreams.com/spidercare.htm , which is part of someone's broader personal home page with random bits of art, photography, and a guestbook(!). The geocities-esque design bowled me over with nostalgia... the header is an image map!!
Funny you mention tracking, one of the few modifications made to the Space Jam website at some point in the past 24 years was the addition of both Adobe and Google analytics.
I try to leave little notes and jokes in HTML source because from when I was growing up playing with computers to now I still look at the source just to see if someone was expecting me. It's not very common now, unfortunately.
We called them "swirlies" in middle school/high school. But I've never actually seen someone get one, and it could well be mostly apocryphal. And it wasn't something you sought out, it was like, you were getting bullied.
I liked this quote from the page: "Tim Berners-Lee on home page:
Q. The idea of the "home page" evolved in a different direction.
A. Yes. With all respect, the personal home page is not a private expression; it's a public billboard that people work on to say what they're interested in. That's not as interesting to me as people using it in their private lives. It's exhibitionism, if you like. Or self-expression. It's openness, and it's great in a way, it's people letting the community into their homes. But it's not really their home. They may call it a home page, but it's more like the gnome in somebody's front yard than the home itself. People don't have the tools for using the Web for their homes, or for organizing their private lives; they don't really put their scrapbooks on the Web. They don't have family Webs. There are many distributed families nowadays, especially in the high-tech fields, so it would be quite reasonable to do that, yet I don't know of any. One reason is that most people don't have the ability to publish with restricted access."
He was basically describing the concept of social networks before they existed on the web.
It's even crazier than that. That page suggests using ResEdit to modify the Netscape application to feature their spinning basketball icon rather than the standard "N".
(ResEdit was Apple's editor for data in the resource fork of HFS files, which classic Mac apps used to store their assets. Mac OS X abandoned this interesting but unusual approach in favor of the NeXT way, ".app" directory hierarchies.)
I was surprised by that too; it would be kind of like the new Trolls movie advising kids to make tweaks in the Windows registry. Hope they don't make any mistakes :D
Well that really takes me back! It was pretty cool how classic Mac apps had most images, text strings, that sort of thing in the standardized resource fork structure. Which meant that you usually could alter a bunch of things about an app's appearance and sometimes behavior by editing those resources with a standard editor.
It’s not really a message about browser compatibility. They’re explaining how to change a specific icon on a Mac. Unsurprisingly, a tutorial written for a specific system will only work on that system.
When I was in college I remember discovering the "Master Zap" website on this domain. It was a musician/software developer who made a few software odds and ends, one of those being Stomper, an analog-style drum synthesis program. I have great memories of spending hours trying to re-create each sound from a TR-808. Taught me a lot about synthesis. Also really got me writing code and learning C...
Funnily enough, I think one thing that made these old websites more interesting was how slow the web was back then.
In a way it was "animation"— I'd look at images more closely as they "scanned" into the page and notice details I don't think I would now. In a way the fact that all these pages load instantly now is a bit of a downer. Maybe because there's no anticipation any more, or maybe just because the page seems more static and unchanging.
I was webmaster for this site (and thousands of others at WB) back in 2001! I believe this was when we did the great www -> www2 migration, which was of course supposed to be temporary. In fact I think that was when we migrated from our own datacentre to AOL's but I could be getting the timing wrong.
Back then it was served from a Sun E4500 running Solaris (7?) and Netscape Enterprise Server. Netscape had been acquired by AOL which had also just bought Time Warner (that's why we moved to their datacentre) but somehow we couldn't make the internal accounting work and still had to buy server licenses.
Fun fact, unlike Apache, NES enabled the HTTP DELETE method out of the box and it had to be disabled in your config. We found that out the hard way when one of the sysadmins ran a vulnerability scanner which deleted all the websites. We were forbidden from running scans again by management.
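(For the curious: the equivalent lockdown in today's Apache 2.4 terms would look roughly like the sketch below. The NES config syntax was different, and the docroot path here is just a placeholder - this is only to illustrate the idea of disabling unused methods.)

    # Only allow read-style methods; DELETE, PUT, etc. are denied outright.
    <Directory "/var/www/htdocs">
        <LimitExcept GET POST HEAD>
            Require all denied
        </LimitExcept>
    </Directory>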
Another fun fact about NES - they were really pushing server side Javascript as the development language for the web (and mostly losing to mod_perl). Also back in 2001 but at a different place I worked with the person who had just written a book on server side js for O'Reilly - he got his advance but they didn't publish it because by the time he had finished it they considered it a "dead technology".
Our job was basically to maintain an enormous config file for the webserver which was 99% redirects because they would buy every conceivable domain name for a movie which would all redirect to the canonical one. Famously they couldn't get a hold of matrix.com and had to use whatisthematrix.com. Us sysadmins ran our own IRC server and "302" was shorthand for "let's go" - "302 to a meeting". "302" on its own was "lunchtime".
I still mention maintaining this site on my CV and LinkedIn - disappointingly I've never been asked about it in an interview. I suspect most of the people doing the interviewing these days are too young to remember it.
:) This made my day. Were you based in LA at the time? I’m out in LA now and love hearing stories like this from colleagues who’ve done engineering work in the entertainment industry.
Here's another one - Solaris had a 2GB file size limit (this was before ZFS). Which isn't as crazy as it sounds now - hard drives were 9GB at the time. So ordinarily this wasn't a problem, but when the first Harry Potter movie came out, harrypotter.com (which was being served off the same server as spacejam.com) was the most popular website on the internet and the web server log would hit the 2GB limit every couple hours and we would have to frantically move it somewhere and restart the process.
Most likely the main Oracle server which had a disk array attached. These days it's easy to forget how tricky it could be to get basic things like this working. rsync was in its infancy and I doubt we were using it. As I recall most servers didn't have ssh installed; telnet was standard. Lots of tar piped over rsh to get files from A to B.
7-8 years ago I encountered a full-on rsh internal network in a fairly big datacenter. No rsync, just rcp. It turned out that the sysadmins were too lazy to set up ssh keys on all the servers, and their manager was skilled at deflecting issues.
I like how it loads instantly! :-) I remember staring at this thing for about 2 mins watching it load over dial-up in a third world country, on a laptop my dad somehow smuggled into the country from Dubai without paying import duties. Yeah.
I like to think you can still make a fast site by sticking to the basics: write plain static HTML and CSS, avoid JS, craft your images specifically for the site, and put it behind a decent webserver with compression enabled. I'm sure Pagespeed rewards HTTPS and HTTP/2 support as well, but that's webserver config.
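The webserver half of that isn't much config either. A rough Apache sketch (assuming mod_ssl, mod_http2 and mod_deflate are loaded; the hostname and paths are placeholders) might look like:

    <VirtualHost *:443>
        ServerName example.org
        DocumentRoot /var/www/site
        # HTTP/2 with HTTP/1.1 fallback (needs mod_http2)
        Protocols h2 http/1.1
        SSLEngine on
        SSLCertificateFile /etc/ssl/example.org.pem
        SSLCertificateKeyFile /etc/ssl/example.org.key
        # compress text assets on the way out (needs mod_deflate)
        AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </VirtualHost>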
Having done it NES-style, I'm kind of glad that server-side JS didn't catch on.
Node was an unlikely event. It hit the sweet spot precisely when async started being needed, there was a runtime environment that made it viable (V8, thanks to massive investment by Google in Chrome), and a language that made it kind of natural.
is there consensus on whether Node did or did not "catch"? like, sure, it was a launch language of AWS Lambda and Netflix uses it heavily. but every backend person I know uses Go or Python or Ruby, anything other than JS. pretty much only fullstack JS people consistently pick nodejs as first choice - but that's just my POV. I'd love some more definitive numbers.
Its components are part of every developer's toolbox (a lot of really essential command line tools used to be installed with npm). I don't see many backends powered by it, but I'm mostly a Python developer and that certainly biases my sample.
Node as a tool definitely did catch on: automated testing, package management, REPL integration, bundling etc.
Node as a server/backend language: I would say the ecosystem and usage are at least as large as Go/Ruby. This is hard to gauge, but I assume/expect a factor of 2 or more. If you look for web-specific libraries, it is unlikely you'd find something for either Go or Ruby but not for Node. Python is harder to compare because it is used much more broadly.
For me the biggest use-cases for Node are: server-side/static rendering (get all the power/expression of frontend libraries on your http/build server), web-socket ease of use and the fact that there are tons of programmers who are at least familiar with the language.
And even though it is steadily declining, the most popular web-backend language is still PHP by a long shot. And this won't change until the other languages get managed hosting that is as cheap and simple and similarly ubiquitous.
It's absolutely catching on with younger developers who don't want to bother using multiple languages. Anecdotally, like 100% of developers in their 20s at my company push hard for JS.
At my first job out of college in 2008 we wrote JS for a desktop app on top of Mozilla's xulrunner platform. At the time I hadn't heard of using JS outside the browser and Node was still a few years off. It was a great experience but Xulrunner got killed by Mozilla and the company had to rewrite in C#.
Just an example of (sort of) non-proprietary, non-browser JS.
It used to be super slow. Like, you cannot imagine how slow it was.
The usual example I trot out is from when I was writing a client-side pivot table creator in the mid-2000s: as far as I can remember, with just 100 items the JS version took 20-30 seconds. I then tried it using XML/XSLT instead[1] and it was instant.
I haven't checked recently, but even a few years ago JavaScript was extremely slow with large datasets. I was mucking around with Mandelbrot generators, and JS compared to C# was like a bicycle vs a jet engine; they weren't even vaguely in the same league performance-wise. I just had a quick look at some JS-based versions and it looks like it's gotten a bit faster, but it's still slow.
[1] Awesome performance, super hard for others to understand the code. XSLT was great in some ways, but the learning curve was high.
In principle, Javascript on modern JIT can approach roughly 40-80% of the performance of C++, more or less across the board, not just in some toy benchmark. This benchmark is from 2014:
Granted, this benchmark is for generated Javascript, not "idiomatic Javascript" (if there is such a thing). Of course you can write arbitrarily slow Javascript as well. There's a lot of stuff that can throw a wrench into the JIT.
For the Firefox graph, none of them are below 35%, only three are below 40%. The average is well above 60%.
That said, there's a lot going on there. Javascript can not use SIMD or multiple threads. Some tests are heavier on C#, which is converted to C++ in the Javascript version and thus becomes "faster than native C#".
The point is not to give an exact number on how fast Javascript is compared to any other language; there's wide variance across usage and implementations. The point is to show that it's feasible for Javascript to rival C# performance.
Server side javascript didn't catch on the first time around because you couldn't create threads. Because in the early 2000's, all technical interviews asked about Java style multi-threading. At some companies, the first round of technical interviews was a very, very detailed discussion about the Java threading model. If you didn't pass, you didn't get to round two. So everybody wanted to use threads.
I did my own "JavaScript Server Pages" using Rhino (JS for Java), which compiles JS to Java bytecode. It worked great for me. I did two sites using it, then changed jobs.
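For anyone who never saw that era: the appeal is that a Rhino script can call straight into Java classes, so a "server page" could be plain JS driving Java I/O. A hypothetical sketch (illustrative only, not the poster's actual setup):

    // Runs under the Rhino shell/engine; the top-level `java` object exposes Java packages to JS.
    var out = new java.io.PrintWriter(java.lang.System.out);
    out.println("<html><body>");
    out.println("<p>Generated at " + new java.util.Date() + "</p>");
    out.println("</body></html>");
    out.flush();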
Well, the issue seems to be that the scan not only detected the vulnerability but actually triggered it, which is exactly what scans are supposed to help you avoid.
It was a different era. A lot of best practices we take for granted now as being common sense were stuff we (and I say this as myself also being an old time sysadmin) had to first learn...and often the hard way.
Plus, the web wasn't as important a business strategy back then as it obviously is now. I doubt Warner Brothers would have been willing to invest in replica dev infrastructure when "developers can write code on their desktops". I know dev infrastructure is for more than just developing code, but common concepts we take for granted like IaC, CI/CD, and config management weren't formalised or widely used back then, and servers were pets held together with duct tape and sacred rituals.
In many ways, that's what made being a sysadmin in that era fun. There wasn't any shame in hacking together a solution.
"In many ways, that's what made being a sysadmin in that era fun. There wasn't any shame in hacking together a solution."
Hah, true, and management was absolutely amazed! In the late 90s/early 2000s I worked for an independent pharmacy chain. I wrote what was basically just a proxy sitting between our dispensing systems and the central clearing networks for rx drug pricing. All it did was double-check the price on the prescription (our stores weren't applying price updates, which was a manual process at the time) and reject prescriptions that were priced wrong, with a message telling the pharmacist to apply their price update. The CEO invented an annual award to give to me for the "work", hah.
Back in 2000, it was the wild west. You actually did run tests on production, and editing production code was not as insane an idea as it is today.
I don't think anyone (at least no one I worked for at the time) ran staging or dev servers.
It was always stupid, yes. But back then we didn't have the tools and testing suites that we have today. CI/CD setups didn't exist. Git wasn't even built until 2005. The only version control solution was SVN at the time, which was released in 2001. But it was clunky and immature.
Back in 2000, launching a site update meant someone would log into the server via FTP, drag the files over, and try to "be careful" while they did it. Using passwords that were written on a sticky note, stuck to the CRT monitor's screen (password managers weren't a thing).
It is easy to forget how immature and primitive the world of web development was at this time.
Also, the internet was still so new, that every C-level executive had built their careers by running businesses in the 70s, 80s, and 90s. Back before you built websites or relied on them for any significant impact on your bottom line. So to tell an executive from that era that your website broke when you poked at it, their solution would be to stop poking at it. Security wasn't really a major concern like today, and having a website was still mostly a novelty in the eyes of most executives.
> Back in 2000, launching a site update meant someone would log into the server via FTP, drag the files over, and try to "be careful" while they did it.
Heck, I remember just compiling Java classes on the server machine itself, copying them to a production directory, and restarting the app server (Tomcat IIRC). Source control was the sysadmin running a nightly backup of the source directories.
> The only version control solution was SVN at the time
Pre SVN there was CVS, and before that was RCS. Now these didn't work the way we think about git today, but they did allow you to roll back to a known good state with some futzing about.
> they were really pushing server side Javascript as the development language for the web (and mostly losing to mod_perl).
Enterprise server-side JavaScript was the first stage of dynamic web servers that connected to relational databases. Netscape LiveWire, Sybase PowerDynamo, and Microsoft Active Server Pages (with interpreted LiveScript or VBScript) were early. The enterprise software industry switched to three tiered app server architectures with JVM/.net bytecode runtimes. Single-process, multi-threaded app/web servers were a novelty and none of the client drivers for the popular relational databases were thread safe initially.
It took some time for RESTful architectures to shake-out.
Apache (and mod_perl) was thread safe by being multi-process, single threaded. You were always limited by how many perl interpreters could fit in RAM simultaneously. Then came the era of the Java app server (Weblogic and WebSphere).
Everyone has mostly forgotten AOLserver (and TCL!). Almost everyone of that generation was influenced to use a RDBMS as a backend by Philip and Alex's Guide to Web Publishing which came out in 1998.[0] Although I never actually met anyone who used AOLserver itself! Everyone took the idea and implemented it in ASP or perl, until the world succumbed to EJBs.
I used AOLserver with OpenACS, https://openacs.org/ .
AOLserver was apparently more optimized than Apache (at the time, at least).
My manager even had us (early 2000s) take a one-week(?) bootcamp at Ars Digita HQ in Boston. Though about the only thing I remember from it was the fancy Aeron chairs.
That was also the era when DBMS locking was iffy, so competing inserts/updates would happen once sites got too busy, and everyone was encouraged to turn off transactions (or, on MySQL, just use MyISAM instead of InnoDB) and put it on XFS or even tmpfs for speed.
So many websites I remember came back up with "the last two weeks of posts are gone, sorry" or just shut down for good because it was all too corrupted to fix, and so were the backups, if they had any.
Funny you mentioned matrix.com; back between Matrix 1 and Matrix Reloaded I knew someone who knew the guy who owned thematrix.com, and apparently he was not offered enough to sell it. I guess they must have agreed at some point, because I also recall that around the time of Reloaded it started to redirect to the canonical Matrix site (which it sort of does now). Wonder what the price turned out to be.
"We were forbidden from running scans again by management"
Seems like management believes it is better to wait for a real bad actor to purposefully destroy your site than to have it done by your honest employees by accident.
I know security in 2001 was much more lax (I started in 2000), but this still shows ignorance on management's part.
The right way to handle this would be to ask your staff to ensure it is possible to restore services and to ensure you know what the tests are doing before you run them.
>Seems like management believes it is better to wait for a real bad actor to purposefully destroy your site than to have it done by your honest employees by accident.
From a politics standpoint that is completely true. Which would you rather tell your boss:
Q: Why is the website offline?
A: One of our sys admins accidentally deleted it.
OR
Q: Why is the website offline?
A: Some nation-state/teenager launched a sophisticated cyber attack; we need to increase the cybersecurity budget. It's the wild west out there!
There is a saying that only a person who does absolutely nothing never makes any mistakes.
Mistakes are a normal occurrence at a corporation. Sane managers will understand that it is not possible to have people make no mistakes. Mistakes are part of the learning process.
Now, when somebody makes a mistake what I am looking for are:
Does this person show good judgment? Were the precautions the person took reasonable?
Does the mistake show a pattern of abnormality? Some people seem to attract failure; maybe there is some underlying cause?
Is the person learning from their mistakes? Learning is expensive; if somebody made an expensive mistake, I want as much learning as possible for the expense.
Is there some kind of external factor that made the mistake possible or more likely? Usually it is possible to improve the environment to reduce the number of mistakes.
As for preventing these guys from scanning ever again, that is a bad decision, because it is likely they would never make the same mistake again. What's done is done. The scan showed there are problems with the app; now we should want to know if there are more problems, but without risking the application's stability (too much).
---
-- Do you know what the Big Co. pays for when they pay a high salary for an experienced engineer?
-- They pay for all the mistakes he/she made at her previous place.
I don't really agree with that decision, but maybe that phrasing is not 100% accurate. Maybe they just meant that they should run the scan on a local test environment and not in the production deployment. Obviously, there's value in running it against live, but at least for this particular issue, they could probably have caught it in a testing environment.
You are probably thinking in today's terms. Back then it was unlikely there was a separate test environment. For an application... probably, but not for what is a static website and an HTTP server.
They may not have had a fully fledged staging environment which was an exact copy of production. But I'm sure they ran the http server locally on their desktops to test new configuration. I was a high school hobbyist "developer" (maybe a power user would be more appropriate) at the time, and I certainly tested my pages locally before I'd upload them to Geocities. 2000 is not the bronze age :P. People were already talking about TDD then, and the concept of testing things before shipping had existed for much longer.
I think you're looking back at this from the modern perspective of cheap hardware and open source software. We didn't have Sun workstations as desktops - we were using FreeBSD on cheap Dells. Our production servers cost something like $200k each. And the http server cost money - we didn't have licenses to run copies locally. These days you can just run things in Docker and have a reproducible environment but it wasn't that easy back then.
Usually, HTTP servers would be huge bloats of configuration running on huge machines. As a sysadmin, it was not typically feasible to replicate the configuration on the local machine.
With a static website, nobody would figure you could accidentally do something harmful to it. The worst thing that could happen was that you made a configuration error, in which case you just rolled back the config.
Of course we were supposed to have backups in case the server failed. While the servers would typically use RAID arrays, we knew that would not save us against rm -fr, so there was a backup, but backups could rarely be restored instantly.
Interesting. I never thought it was so complicated to run the http server. What was the bottleneck that prevented them from running locally? The memory used by all the config?
The amount of hardcoded stuff that was specific to the machine: all the domain addresses, firewall rules, etc. There was no Git or CI/CD to keep everything in sync (easily).
> I still mention maintaining this site on my CV and LinkedIn - disappointingly I've never been asked about it in an interview. I suspect most of the people doing the interviewing these days are too young to remember it.
This is astonishing to me. I check back to see if this site is still up once every year or two just to have a smile. If you were sitting across from me in an interview I am quite sure I'd lose all pretense of professionalism and ask you about nothing else for the hour.
It isn't quite dead yet. It is just entirely Flash based so it likely won't make it to 2021. Safari has already blocked Flash completely while the other major browsers require you to manually enable it. Odds are those too will begin blocking Flash completely sometime in the next few months with Adobe set to end official support at the end of the year. So maybe add zombo.com to your Chrome whitelist and visit it one more time, because soon the infinite won't be possible.
> Another fun fact about NES - they were really pushing server side Javascript as the development language for the web
I started “web stuff” with Apache and Perl CGI, and I knew NES existed but never used or saw it myself. I had no idea “server side JavaScript” was a thing back then. That’s hilarious.
>the person who had just written a book on server side js for O'Reilly - he got his advance but they didn't publish it because by the time he had finished it they considered it a "dead technology".
Makes sense. This server-side Javascript thing will never take off. The Internet in general is really just a passing fad!
Your comment made me remember netcraft.com which was the tool we always used to see which web server and OS was running a site. There was a running joke back in the day on slashdot about "BSD is dying" as Linux started to take over, based around Netcraft which used to publish a report showing which servers were most popular. I'm glad to see they haven't changed their logo in 20 years!
Their site report used to show an entire history so you could see every time they changed servers but it doesn't look like it does anymore. Now it's running in AWS so certainly not the same Solaris server. Although those E4500s were built like tanks so it could be plugged in somewhere...
I submitted this and thought, "this is neat." Didn't expect to return to it being on the front-page nor the former webmaster to show up. The internet is a fun place.
That background story is fascinating. I wonder how many full-circles server side JavaScript has made up until now.
> Fun fact, unlike Apache, NES enabled the HTTP DELETE method out of the box and it had to be disabled in your config. We found that out the hard way when one of the sysadmins ran a vulnerability scanner which deleted all the websites. We were forbidden from running scans again by management.
Oh man, the early days were so exciting. Like that time I told my boss not to use Alexa on our admin page out of general paranoia... and a few days later a bunch of content got deleted from our mainpage because they spidered a bunch of [delete] links. I learned my lesson; secured the admin site a lil better and upgraded to delete buttons. Boss kept on using Alexa on the admin page tho.
I once went to an industry presentation where someone on that project described the workflow. The project got into a cycle where the animators would animate on first shift, rendering was done on second shift, and printing to film was done on third shift. The next morning, the director, producer, and too many studio execs would look at the rushes from the overnight rendering. Changes would be ordered, and the cycle repeated.
The scene where the "talent" is being sucked out of players had problems with the "slime" effect. Production was stuck there for weeks as a new thing was tried each day. All the versions of this, of which there were far too many, were shown to us.
Way over budget. Cost about $80 million to make, which was huge in 1996. For comparison, Goldeneye (1995) cost $60 million.
Awful film. I was listening to the radio the other day and they had a 10 year old on saying Toy Story was cringe and he preferred Space Jam. I've never been so appalled just listening to radio.
Why are you bothered by a child’s opinion on children’s movies? You’ll likely never meet the kid, and one’s sense of taste at that age is hardly definitive.
For someone that young, visuals tend to take precedence over story. I, for one, am glad to see a younger generation appreciate 2D animation (which still looks acceptable in Space Jam) over 3D (which looks dated in Toy Story).
No, but even back in 1996 you didn't have to use the URL to store the state of a user's game. They could have used a cookie (supported in Mosaic since around 1994), or used a server-side session (I think CGI::Session was a thing back then). That was a technical decision someone made. Those sorts of choices when you're speccing a web app haven't changed.
These days it would be a bit unusual to use routing to store the game state, admittedly, but that's why it's good to have older devs like 43-year-old me on the team. I actually think about these sorts of details rather than using the defaults.
It's also worth noting that storing the state in the URL shouldn't be the default. Being able to see the state by reading the address bar would be a security issue for most apps.
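To make the trade-off concrete, here's a minimal modern-JS sketch (not what a 1996 CGI game would have looked like) of the same state living in the URL versus in a cookie; the state object itself is purely illustrative:

    // Hypothetical game state.
    const state = { level: 3, score: 120 };

    // Option 1: state in the URL - visible in the address bar, bookmarkable, shareable.
    const params = new URLSearchParams(state);
    history.replaceState(null, "", "?" + params.toString());

    // Option 2: state in a cookie - hidden from the address bar, sent to the server on each request.
    document.cookie = "gameState=" + encodeURIComponent(JSON.stringify(state)) +
      "; path=/; max-age=" + 60 * 60 * 24;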
The elegance of using a static URL is that you are not really keeping the state - you instantiate all possible states and let the user traverse them according to whatever state they happen to be in (and able to see).
Probably three quarters of the people who post on HN can attribute their entire career's existence to "the modern web", so maybe there are some good things as well as bad things about it.
I like how you can tell that the HTML was done by hand. There's a parallel here, something like: handmade furniture is to hand written html as mass produced furniture is to generated html. There's a quality to it.
https://www.nytimes.com/2020/07/01/world/hong-kong-security-... Reading the last part of that in conjunction with this made me wonder... is a website safer, or at least easier to preserve, than social media? I don't doubt that a program can deep-delete social media messages, but at least one can ship one's website as an archive and it is still readable. Something Pirate Bay-like but for individuals... it is so easy to be silenced on social media compared with, well, Space Jam.
You know, back in high school when I was learning HTML + JavaScript, I was really looking forward to creating websites that took "longer" to load[1]. Because I associated that with complexity (understandably), and I associated complexity with coding professionally.
Now that I _am_ coding professionally, I just wish websites would load simple as this, with interfaces as simple as this. None of that fancy image preloading, or disappearing/reappearing navbars, or those sidebars that scrolled independently from the main page content.
Then again, what memories are those which time will not sweeten, right?
[1] Caveat: with the dial-up connections then, all it took were enough images for a site to load slow. So I wanted mine to take "longer"!
There was an obsession for a while with advanced Flash websites designing interfaces with extensive and complicated transitions and loading screens. Best example being "2advanced".
Of course it is still alive. It's one of those fixed-space-time-points that all the bloody turtles and elephants are balanced on. We take that down, who knows where and when we will end up!?
I love this! "The jamminest two minutes of trailer time that ever hit a theater. It's 7.5 megs, it's Quicktime, and it's worth it. Click the graphic to download..."