
You might be right in "most" cases, but looking at the benchmarks, there are quite a few cases that come up a lot in everyday computing where performance hits rock bottom. Video rendering went from 600 frames per second down to 200 frames per second. That is a performance loss of roughly two-thirds.



Yup. GPU pipelines are another very common example, with the notable difference that pretty much every GPU programmer will optimise for the render pipeline, which is not the case for the CPU.

Well it is, but if your CPU is getting its job done 1 ms sooner, then the GPU has an additional 1 ms to do the rendering while still hitting the same deadline. Which probably means it'll render more frames.
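
As a rough sketch of that frame-budget arithmetic (made-up numbers, and assuming a serial CPU-then-GPU frame, which real pipelined renderers complicate):

    # Hypothetical frame-budget math for a 60 Hz target (~16.7 ms per frame).
    FRAME_BUDGET_MS = 1000.0 / 60.0

    def gpu_budget_ms(cpu_time_ms):
        """Time left for the GPU once the CPU has finished its part of the frame."""
        return FRAME_BUDGET_MS - cpu_time_ms

    # Shaving 1 ms off the CPU side hands that 1 ms to the GPU.
    print(gpu_budget_ms(6.0))  # ~10.7 ms of GPU time
    print(gpu_budget_ms(5.0))  # ~11.7 ms of GPU time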

Bottlenecking just means that improving the bottlenecked thing will result in the biggest improvement. The second-biggest performance limiter can still be a significant performance limiter.
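
A toy illustration of that, assuming the frame time is simply the sum of two stages (the numbers are invented):

    # Invented numbers: total frame time as the sum of two stages.
    bottleneck_ms = 10.0  # biggest limiter
    second_ms = 6.0       # second-biggest limiter

    scenarios = {
        "baseline":             bottleneck_ms + second_ms,       # 16 ms
        "halve the bottleneck": bottleneck_ms / 2 + second_ms,   # 11 ms
        "halve the second one": bottleneck_ms + second_ms / 2,   # 13 ms
    }
    for label, ms in scenarios.items():
        print(f"{label}: {ms:.1f} ms -> {1000.0 / ms:.0f} fps")
    # Fixing the bottleneck gains the most (62 -> 91 fps), but the
    # second-biggest limiter is still worth 62 -> 77 fps on its own.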


A very good point. My worldview of performance is highly biased towards “full steam ahead” desktop graphics.

Video encoding is a bit of a special case though because the common algorithms are carefully designed for hardware acceleration. For most rendering, it doesn’t make sense to go out of your way to avoid the FPU.


I find it hard to believe that with today's dual-core, multi-GHz CPUs there can be significant speed differences in rendering a web page.

That’s bananas. I write a lot of complex WebGL shaders, I had never heard that some devices have such a clear (and early) performance cliff.

That's the part about batch rendering. Everything else there is architecture specific and typically not a problem on modern hardware anymore, unless you really force it to its limits.

Coders accidentally optimizing to reduce GPU compute load when there's GPU compute to spare and memory bandwidth is the real limit, for example.
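
A back-of-the-envelope roofline check makes that concrete; the peak figures here are made up and the kernel numbers are hypothetical:

    # Roofline-style check with made-up hardware peaks.
    PEAK_FLOPS = 10e12  # 10 TFLOP/s of GPU compute (hypothetical)
    PEAK_BW = 400e9     # 400 GB/s of memory bandwidth (hypothetical)

    def limiting_resource(flops, bytes_moved):
        """Return which resource bounds a kernel and its best-case runtime."""
        compute_s = flops / PEAK_FLOPS
        memory_s = bytes_moved / PEAK_BW
        if memory_s > compute_s:
            return "memory bandwidth", memory_s
        return "compute", compute_s

    # A kernel doing little math per byte is bandwidth-bound: trimming its
    # ALU work saves nothing, while cutting memory traffic actually helps.
    print(limiting_resource(flops=1e9, bytes_moved=2e9))
    # -> ('memory bandwidth', 0.005)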


I should have said something different and not even mentioned a specific framerate. What I meant to say is that GPUs nowadays can handle most rendering with ease, but the product in this link will need a lot more.

But rendering isn't what makes a modern computer slow. For example, Assassin's Creed isn't even that well optimized, but it maxes all threads, uses all of the GPU, and looks beautiful.

And people have tried "simple" hardware (it ends up being not simple) that the programmer (or compiler) understands before; it doesn't work.


Re-renders are incredibly cheap in the big picture. They are not a source of performance bottlenecks in 99% of real-world applications.

I'd also argue:

4) CPU speed isn't the limiting factor much of the time now. Disk, memory, network, user input, etc. are all much more impactful, honestly.

Sure, getting a Blender run down 10% is huge, but what is that time saved compared to how long setting up the render took?
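
An Amdahl-style sketch of that point, with invented numbers for the setup and render phases:

    # Invented numbers: a 10% render speedup barely moves the whole workflow.
    setup_min = 30.0   # scene setup, asset wrangling, iteration
    render_min = 10.0  # the only part the 10% win actually touches

    before = setup_min + render_min        # 40 min
    after = setup_min + render_min * 0.9   # 39 min
    print(f"overall speedup: {before / after:.3f}x")  # ~1.026x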


I think the point is that benchmarking the graphics subsystem's throughput is easiest when you look at high framerate games; otherwise CPU or just rendering time might be the dominating factor.

Kind of a weird New Years fever dream tangent. Bear with me:

Say we have a scene to render and a frame can be rendered in 10ms given the chosen game engine or graphics API or whatnot. There is a hypothetical optimum code for rendering that scene that can do it in less time. And the delta between those times is the price you pay for API abstractions, human abstractions (eg. maintainable code vs clever code), and other things.

Is this a concept with known terminology?

I was thinking tonight, “my hardware is running this game at 45FPS and I know that it could hypothetically be WAY faster if we put a ton of money and brilliant engineering behind it.”

And that got me thinking about how you would go about measuring efficiency loss or waste. We've all experienced it: two comparable games running with vastly different performance given the different technical and business choices behind them.

Furthermore, I’ve been very curious about why there isn’t a tool that can provide a concrete answer to, “what part of my PC is bottlenecking performance?” I’ve been playing a game that’s running slow and noticed GPU is never above 80% and CPU is pegged at 100% so I shuffled my computer hardware to essentially upgrade my CPU by a lot. Didn’t have any effect.
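
There is no one-click answer, but a crude version of the measurement is just sampling both sides at once. A minimal sketch, assuming an NVIDIA GPU (for nvidia-smi) and the psutil package; one thing it highlights is that a single pegged core can be the real limit even when the overall CPU number looks healthy:

    # Crude bottleneck probe: sample CPU (average and per core) and GPU utilization.
    # Assumes psutil is installed and nvidia-smi is on the PATH (NVIDIA GPUs only).
    import subprocess
    import time

    import psutil

    def sample():
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        gpu = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True,
        ).stdout.strip()
        return sum(per_core) / len(per_core), max(per_core), gpu

    while True:
        avg_cpu, worst_core, gpu = sample()
        # GPU well below 100% with one core pinned usually points at a
        # CPU-side (often single-threaded) limit, not a weak graphics card.
        print(f"cpu avg {avg_cpu:.0f}%  worst core {worst_core:.0f}%  gpu {gpu}%")
        time.sleep(1.0)

Even then, a per-frame profiler inside the game would be needed to see which system is actually eating the CPU time.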


It's not the rendering / DOM part that got 50x faster though, so that would be apples to oranges.

Exactly, it's the same with laptops. If you know anything about decent specs, don't buy something with a ridiculous bottleneck, and do a fresh install, you can find a 5+ year old laptop that is amazing for anything other than extreme workloads. I mean extreme workloads like rendering scenes, not VSCode and Slack or something.

I can't believe that rendering speed is an issue with modern CPUs.

I get that, but it is about performance once we reach the orders of magnitude by which GPU rendering outpaces CPU rendering.

This isn't 20% faster, it's the difference between seeing the image in almost finished form and interacting with it with real time responsiveness vs having to wait minutes for an image. [1]

[1]: https://www.youtube.com/watch?v=Gf_P1G_wbK8&t=0m33s


He's talking about rendering (quality), not speed.

Check out the video. In the example, rendering of an animated model goes from 0.25 fps without it, to 4 fps with it using CPU rendering, or up to 40 fps using CUDA, for the example model.

I'd say that is large enough to have the potential to fundamentally alter how people work.


All I meant is: if drawing too many things overloads the system so it can't render at 30 fps, that's fine; I build things that don't look bad if the rate drops. I can't do action games, high-speed video, or a few other things. That's fine, I can still do lots of things.
