Wow, glad to see Modlabs still going! Modding my and my friends' PCs back in 2004 after reading through it and Overclockers.ru is among my fondest childhood memories.
Retro gaming is the common use case. GLIDE is something people like to run natively and 3Dfx ruled the roost long enough for a ton of games to be specially tuned to the API.
It's viable enough that clone cards have been built based on the reverse engineering efforts:
For some retro gamers any emulation, no matter how perfect, is a no-go because they don't consider it an authentic recreation of the experience. For some, even this reverse-engineered clone would be unacceptable.
I could see that if it were a software emulation or, e.g., an FPGA reimplementation, but this one is still using the original GPUs, so it would be comparable to having a modern GPU from Asus or Gigabyte instead of the reference designs from AMD or Nvidia.
It mostly has nothing to do with technical issues. It's about recapturing a feeling for days gone by, and for them part of being able to recapture that feeling is using only period-manufactured hardware.
I have a lot of nostalgia for the Playstation 2. That system occupied a lot of late nights during my college and young professional years. My memories of those games are wrapped up in those tumultuous times.
As fond as those memories may be, I just can't play emulations/remasters of those games nowadays. Why not? I'm convinced it's because the loud fan noise of the PS2 is missing without the original system. That fan blast was on before, during, and after every gaming session. It filled the silence when the game paused. It picked up when the game peaked. It was there when I had loud joyful matches of Burnout 3 with friends. And it was there during long lonely nights following a breakup. The PS2 fan's noise pierced my soul.
Nostalgia is a very fickle thing. It's never just the favorite game, or food, or jacket. It's where you were at the time, the smells in the air, the twitches around you, the people you were with, the way you felt.
What is an authentic recreation of an old game?
For one, it might just be running it on the original hardware. It may be hearing some specific electric hum or click before the system starts. It may be the way the buttons on those old controllers felt. And that may be enough.
For others, it may be playing the game with a friend lost to time. Or playing the game in one's childhood bedroom, long since demolished. Or playing the game while eating a special pizza, from a place that's long shuttered.
So if recreating the experience is as simple as getting the old hardware assembled, that sounds relatively reasonable and achievable.
Still, ultimately, I prefer not chasing ghosts. Why cling to the experiences of the past when we can make new experiences today? It's a hard lesson learned. Nostalgia bites me hard.
The Glide API was quite different from OpenGL or Direct3D. There are a lot of examples of games from that era that played well on 3dfx hardware but absolutely sucked on OGL or D3D - the first Unreal engine, for example, and games built off it like Deus Ex.
Obviously hardware has improved so much that you can play a game like Deus Ex on any hardware today and it performs perfectly well. But there are also games that run into problems with high clock rates for unrelated reasons, so for an accurate emulation you would want a clocked-down CPU and 3dfx chip running on an FPGA or something.
Also very famously, Diablo 2 ran amazingly using Glide. I had a Voodoo 5 5500 and didn't lag when the whole screen was filled with mobs, summons, fire, etc. while other people died to lag.
AIUI, Glide cards didn't really do 3d acceleration in anything like a modern sense. They accelerated the rasterization part of the rendering pipeline; they were basically very fast (for the time) 2D triangle renderers.
(Thus, reimplementing this under either OpenGL or Vulkan ought to be reasonably trivial using modern hardware.)
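Something like this, very roughly - a toy sketch only. The real wrappers (nGlide, OpenGlide, dgVoodoo) handle texturing, the color combiners, fog, LFB access and so on, and the vertex fields here are approximate, from memory:

    /* Toy Glide-on-OpenGL shim for a single flat/Gouraud triangle.
       The vertex layout below is approximate; real Glide also carries
       ooz and per-TMU texture coordinates. */
    #include <GL/gl.h>

    typedef struct { float x, y, oow, r, g, b, a; } MyGrVertex;  /* approximate */

    static void shim_grDrawTriangle(const MyGrVertex *a, const MyGrVertex *b,
                                    const MyGrVertex *c, int width, int height)
    {
        /* Glide x/y are already screen-space pixels, so map them 1:1
           with an ortho projection (top-left origin, y growing down). */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glBegin(GL_TRIANGLES);
        /* Glide colors were nominally 0..255 floats, OpenGL wants 0..1. */
        glColor4f(a->r / 255.0f, a->g / 255.0f, a->b / 255.0f, a->a / 255.0f);
        glVertex2f(a->x, a->y);
        glColor4f(b->r / 255.0f, b->g / 255.0f, b->b / 255.0f, b->a / 255.0f);
        glVertex2f(b->x, b->y);
        glColor4f(c->r / 255.0f, c->g / 255.0f, c->b / 255.0f, c->a / 255.0f);
        glVertex2f(c->x, c->y);
        glEnd();
    }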
If we're arguing semantics, I feel like these early cards are more fitting to the "3D accelerator" name than the modern ones. They were specialized for accelerating 3D graphics (which, in the end, means pushing triangles into a 2D framebuffer). Modern GPUs are more like general-purpose vector processors that happen to also be good at calculations related to 3D graphics and can (sometimes) render triangles to a framebuffer.
i'd argue that 3d-accelerated graphics from that era is still true 3d acceleration. of course, back then the pipelines were mostly fixed - nothing like programmable shaders - but you're passing in 3d coordinates, generating some matrices to determine the MVP, passing in some textures, and getting most of that "heavy lifting" of getting the pixels on the screen from the API & card.
> they were basically very fast (for the time) 2D triangle renderers.
that's still true of GPUs today :) in fact, with the exception of h264/265 encode/decode, and "AI" - that's all that graphics cards have ever done!
The point is that the 3D-geometry portion of the pipeline is still done on the CPU with these early cards. The APIs may have been slightly broader, but the hardware-accelerated part was limited to rasterization, i.e. painting 2D triangles.
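To make that split concrete, this is roughly the per-vertex work the host CPU still did before the card saw anything (a bare sketch; clipping, culling, and lighting omitted):

    /* CPU-side transform as it looked before hardware T&L: multiply by the
       model-view-projection matrix, do the perspective divide, map to the
       viewport, and only then hand screen-space x/y (plus 1/w) to the card. */

    typedef struct { float m[4][4]; } Mat4;
    typedef struct { float x, y, z, w; } Vec4;

    static Vec4 mat4_mul_vec4(const Mat4 *m, Vec4 v)
    {
        Vec4 r;
        r.x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3]*v.w;
        r.y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3]*v.w;
        r.z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3]*v.w;
        r.w = m->m[3][0]*v.x + m->m[3][1]*v.y + m->m[3][2]*v.z + m->m[3][3]*v.w;
        return r;
    }

    static void project_vertex(const Mat4 *mvp, Vec4 world,
                               float width, float height,
                               float *sx, float *sy, float *oow)
    {
        Vec4 clip = mat4_mul_vec4(mvp, world);
        float inv_w = 1.0f / clip.w;                    /* perspective divide, on the CPU */
        *sx  = (clip.x * inv_w * 0.5f + 0.5f) * width;  /* viewport mapping */
        *sy  = (1.0f - (clip.y * inv_w * 0.5f + 0.5f)) * height;
        *oow = inv_w;   /* kept for perspective-correct texturing / W-buffering */
    }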
indeed, and i agree - what i'm saying is that even today, the rasterization of triangles is still 2D :) (and necessarily will always be! well, as long as we use 2D displays..)
The GrVertex structure has z and w components too. But it’s a fair point that the x and y are specified in screen space, not world space. I had forgotten that. But the hardware does depth buffering based on the z component, so it’s still 3D.
Yeah, that's a nice "hack" the hardware offered. You can still shove in the depth after having done the screen projection so you don't need to worry about the order in which you draw the triangles. So maybe 2.5D? :-)
As I explained in a sister comment, there's a lot more going on with rasterization, even back then, than "painting 2D triangles" might imply. Yes, only the XY coordinates determine the screen location of a rendered pixel, but the Z coordinate even at that stage has a lot to do with its color (e.g. for perspective-correct texture lookup) and whether it is painted at all (Z buffering).
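For anyone curious, the trick is the standard one, sketched below with Glide-ish names (sow/tow/oow for s/w, t/w, 1/w): interpolate the w-divided quantities linearly in screen space, then do one divide per pixel to recover the true texture coordinates.

    /* Why 1/w survives into "2D" rasterization: plain linear interpolation of
       s,t across the screen-space triangle gives the classic texture warping.
       Interpolating s/w, t/w and 1/w linearly is correct, at the cost of a
       per-pixel divide. */

    typedef struct { float sow, tow, oow; } PerspAttrs;    /* s/w, t/w, 1/w */

    /* Barycentric blend of the per-vertex attributes at one pixel... */
    static PerspAttrs lerp3(PerspAttrs a, PerspAttrs b, PerspAttrs c,
                            float wa, float wb, float wc)
    {
        PerspAttrs r;
        r.sow = wa*a.sow + wb*b.sow + wc*c.sow;
        r.tow = wa*a.tow + wb*b.tow + wc*c.tow;
        r.oow = wa*a.oow + wb*b.oow + wc*c.oow;
        return r;
    }

    /* ...then the per-pixel divide recovers perspective-correct s and t. */
    static void perspective_correct(PerspAttrs p, float *s, float *t)
    {
        float w = 1.0f / p.oow;
        *s = p.sow * w;
        *t = p.tow * w;
    }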
I’m sure it can be emulated efficiently enough nowadays, but I don’t think the mapping is very direct: Glide was mostly imperative and synchronous, Vulkan is declarative and explicitly asynchronous. Both are low-level, but for hardware that works very differently.
In terms of API design and usage Glide is far closer to classic 1.x GL.
I guess Glide could be defined as a low-level API since it was defined by a single vendor's architecture instead of some more abstract machine like GL and other APIs.
Being 2D rasterizers was arguably true for some cheap early graphics chips, like that of the PS1. For Voodoos, there was no point in the pipeline after which rendering is “only 2D” in any relevant sense. Even at the time, the Z coordinate was passed to the hardware and used for important things like perspective-correct texture lookup, environment maps (fake reflections), and Z buffering - that is, right until the very last moment of committing the pixel to memory.
It’s just an internal code name; they didn’t trademark it, so someone else could use it again. See the recent case where “Mendocino” could refer to the one good Intel Celeron, or now to some low-end AMD Zen 2 chips for cheap laptops.
Back then if something was hot, it was new, cool, and coveted. Today thermal management is a huge problem so they tend not to equate products with heat.
Afaik, thermite is still available for marketing departments to adopt.
There's no one good book, necessarily. However, it's easy enough to piece together the information from Wikipedia[1] and a few articles[2]. In addition, there are a few great YouTube channels that offer commentary on specific cards/generations and comparisons: PixelPipes[3], PCRetroTech[4], LGR[5], and the various other obscure channels recommended once you go down that rabbit hole.
Not quite what you want, but Michael Abrash’s Graphics Programming Black Book is available for free online, and covers the software rendering side of computer graphics in the early-mid 90s, ending at Quake, just before 3d accelerators hit the scene.
There are a few anecdotes from hardware designers of 2D graphics cards mixed in, but the bulk of the book is about writing optimised x86 assembly (don't follow any of its advice these days) and writing fast polygon rasterisers.
If you want an architectural overview, an alternative approach is to read the websites' coverage chronologically, starting from the late 90s; it may be relatively superficial, but there's a decent amount of information:
- https://vintage3d.org/index.php: dedicated to vintage GPUs; very interesting, but I would have liked it to be more in-depth
- https://www.tomshardware.com/archive/1998/2: start of (more dedicated) GPU coverage for Tom's Hardware, but it's more awkward to navigate, and I think more superficial, than Anandtech
It's pretty sad that the modern generation has no idea that back in the day tech was cool, and “enthusiasts” were those who discussed chip architecture and texture sampling, not some video personalities discussing the color of old plastic. Well, the consumerist approach has always beaten specialist dedication.
I had some fun digging through the PowerVR Series 1 source code [0] that was released earlier in the year. It includes a simulator that you can use to piece together how the hardware itself worked, some of it certainly looks like a rough translation from the RTL...
This brings back some memories of making 3d games in the 90s. 3DFX cards were the best. I think our graphics programmer bought some 3dfx stock. It was great at the time but who wanted to support GLIDE? Our engineers. But DirectX was the obvious thing to do.
At the time we had deals with most of the 3D card manufacturers. I remember we had a press event with NVIDIA, like in 1999, with the GeForce2, and I think we had one card beforehand, I am not sure. And every computer at this event had one, with lots of journalists there to play. At the event it was the first time our frame rate wasn't smooth. Up to that point all we did was make the rendering time even in ms, so we didn't think that much about unevenness in physics or networking. Rendering was the bottleneck. But once the rendering started to get really fast we realized we had a new problem.
Happy ending: I bought stock in that NVIDIA company.
>It was great at the time but who wanted to support GLIDE? Our engineers. But DirectX was the obvious thing to do.
And OpenGL - John Carmack was extremely vocal about it. That was before the internet and Twitter, so it was interviews in computer magazines like PC Gamer.
"The graphics chip was codenamed Napalm, packed in about 14 million transistors, and was built on the 250nm process." - we have progressed by 35x in 20 years.
Moore's Law would give a 35x linear shrink, so roughly 35^2 times the transistors on a same-size square or rectangular die. But speed doesn't scale linearly with transistor count, both because there are diminishing returns per transistor, and because clock speeds have plateaued.
(Remember Moore's Law is about density, not speed. Density sometimes begets speed but not always; one big counterexample is memory access latency.)
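Back-of-the-envelope, treating "7nm" as just a stand-in for a current node (marketing node names stopped tracking real feature size long ago):

    #include <stdio.h>

    int main(void)
    {
        double linear  = 250.0 / 7.0;        /* 250nm Napalm vs a ~7nm-class node */
        double density = linear * linear;    /* Moore's law scales with area      */
        printf("linear shrink : ~%.0fx\n", linear);           /* ~36x   */
        printf("density gain  : ~%.0fx\n", density);          /* ~1280x */
        printf("14M transistors scaled: ~%.0f billion\n",
               14e6 * density / 1e9);                         /* ~18B, same die size */
        return 0;
    }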
Aw yeah, I saved up for a VooDoo 3 just for playing Tribes on my 56k modem, fall 1999 - running windows 98 on a Packard-Bell that we got from Circuit City…
I get the snark but that's peak power. Chances are that Voodoo uses more power at idle than the 4090. Modern silicon is very efficient at throttling down at idle; old GPUs would use max power all the time.
My voodoo 6000 had a Molex connector, I guess to facilitate testing in ordinary pcs.
(I didn’t realise what I had until years after I threw it out! I only picked it up because it was so impressive-looking: comically enormous board, four fans.)
Sorry - was just speaking about the possibility of buying a 6k. It’s been so long I wouldn’t trust my own memory of which board had which configuration so didn’t intend to comment on that discussion.
That's what I assumed, and it was only when I was idly googling for 3dfx stuff a few years ago that I realised it probably wasn't. The 5500s all look to have 2 chips, and are normal sized. Mine had four chips, and it was... large. Like, very large. Also had a plastic handle.
I got it from the storeroom at the video games company I was working for at the time. They were having a clear out, and we could take whatever we wanted. This was one of the things I wanted.
(Sadly they didn't let me have the Jaguar devkit, doubly annoying when I later discovered that Atari had put all their Jaguar stuff into the public domain. Mind you, I'd probably have thrown that out too, so same result in the end.)
Interesting stuff. I worked at 3dfx in the last few months of their existence. All the 6000s I saw had the external PSU. I was just an IT nerd so I wasn't involved with the cards.
There was actually not much need for drivers or firmware. 3Dfx products had a very simple and straightforward way to write commands directly to registers [1][2][3].
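To give a flavor of what "writing commands directly to registers" looked like - the names and offsets below are made up for illustration, not the real register map (that's in the linked docs):

    #include <stdint.h>

    /* Hypothetical offsets, for shape only -- see the real docs for the map. */
    enum {
        REG_VERTEX_AX    = 0x008,
        REG_VERTEX_AY    = 0x00c,
        /* ...vertices B and C, color/depth/texture gradients, mode registers... */
        REG_TRIANGLE_CMD = 0x0a0,
    };

    static volatile uint32_t *regs;   /* base of the card's memory-mapped register space */

    static void reg_write(uint32_t offset, uint32_t value)
    {
        regs[offset / 4] = value;     /* a plain 32-bit store into MMIO space */
    }

    static void draw_triangle_sketch(int16_t ax, int16_t ay)
    {
        /* Only vertex A shown; a real driver writes all three vertices plus
           the setup/gradient registers before kicking off the draw. */
        reg_write(REG_VERTEX_AX, (uint32_t)(ax << 4));   /* coordinates were fixed point */
        reg_write(REG_VERTEX_AY, (uint32_t)(ay << 4));
        /* ... */
        reg_write(REG_TRIANGLE_CMD, 0);                  /* start rasterizing */
    }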
As much as I loved 3Dfx (with a capital "D"!), by the time it was bought by nvidia it was already in decline. It did some interesting things, but clearly, they couldn't keep up with nvidia.
I actually had some hope when nvidia bought it, but unfortunately, they killed the brand, only keeping "SLI", for something that isn't even SLI (scanline interleaving).
And yeah, open source drivers, but don't you find it annoying when you only get open source drivers for subpar hardware? It is unfortunately a recurring problem.
That was my dream card for a long time. A Voodoo 2 (3D only, no 2D output, had to run through a second card, wow) overclocked to hell lasted me a good number of years until I upgraded to a GeForce MX 440 lol