The point is that the 3D-geometry portion of the pipeline is still done on the CPU with these early cards. The APIs may have been slightly broader, but the hardware-accelerated part was limited to rasterization, i.e. painting 2D triangles.
indeed, and i agree - what i'm saying is that even today, the rasterization of triangles is still 2D :) (and necessarily will always be! well, as long as we use 2D displays..)
The GrVertex structure has z and w components too. But it’s a fair point that x and y are specified in screen space, not world space. I had forgotten that. But the hardware does depth buffering based on the z component, so it’s still 3D.
Yeah, that's a nice "hack" the hardware offered. You can still shove in the depth after having done the screen projection so you don't need to worry about the order in which you draw the triangles. So maybe 2.5D? :-)
As I explained in a sister comment, there's a lot more going on with rasterization, even back then, than "painting 2D triangles" might imply. Yes, only the XY coordinates determine the screen location of a rendered pixel, but the Z coordinate even at that stage has a lot to do with its color (e.g. for perspective-correct texture lookup) and whether it is painted at all (Z buffering).