
The good news is that 1.10 is already shaping up to be about 2x faster.



Looking forward to the 1.22x speedup! Been using Nim for a lot of those types of use cases personally.

Further, the performance improvements in 1.8 will probably make almost all apps faster anyway. Anyone counting nanoseconds already needs to do their own benchmarking to catch processor-specific regressions and the like, so probably nobody will actually see a production regression from this.

It's going to get faster. That's why it's still experimental. ;)

I'm excited. Even if it is only a 5% to 10% improvement in performance, that buys me a little more headroom on my current server setup.

I look forward to testing it out down the road.


I would also expect 10x improvements over the next year due to optimizations found throughout the stack.

That's good news. Looking forward to the performance improvements.

I'm seeing about a 50% speedup. It depends on what you mean by game-changing, but you should certainly check whether 3.11 is a drop-in replacement for whatever you are doing.
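
A minimal sketch of the kind of spot check I mean, assuming a CPython workload; hot_path here is a hypothetical stand-in for your real code:

    import timeit

    def hot_path():
        # hypothetical stand-in for your actual workload
        return sum(i * i for i in range(10_000))

    # run under each interpreter (e.g. python3.10 vs python3.11)
    # and compare the best-of-five timings
    print(min(timeit.repeat(hot_path, number=1_000, repeat=5)))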

The article has quite a few performance numbers in it. The short answer is that it's much, much faster, but we will continue to pursue improvements pretty much forever.

I think that's a very welcome improvement. With NVMe drives that read at 7 GB/s, we're now at the point where it can be hard to do anything useful with the data fast enough.

So I think good acceleration for things like compression is going to be a big help.
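
To put a rough number on it, a back-of-the-envelope sketch (the clock speed is an assumed figure, not from the article):

    # rough per-byte CPU budget when keeping pace with a 7 GB/s NVMe read
    nvme_bytes_per_s = 7e9  # 7 GB/s figure from the comment above
    cpu_hz = 4e9            # assumed 4 GHz core, for illustration

    cycles_per_byte = cpu_hz / nvme_bytes_per_s
    print(f"{cycles_per_byte:.2f} CPU cycles per byte")  # ~0.57: very little room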


Yes, even with SSDs it seems like 10x is very optimistic. It should be several orders of magnitude.

I think it will be a pretty sizeable jump (100% growth?). I expect the next 0.8 release will make a lot of noise when they leave Handlebars behind and really ramp up the rendering engine side of things.

Yep, and when the Faster CPython team gets to implementing advanced JITing (which GvR has said is planned), that'll be huge.

This is true! Although I'm also really excited at the potential speed (both for loading the model and token generation) of a 1B model for things like code completion.

Nice. Hopefully that will lead to some nice runtime performance improvements.

This is one of the things that makes me most excited about Bun. It’s much faster for certain kinds of things like this, and I hope its focus on performance spurs a positive arms race of sorts.

Hold your horses, it hasn't run for 32 minutes yet. If it succeeds in that, I expect we'll all be hearing about it.

This is a big improvement compared to the microseconds it once was. So maybe we're getting there.

Edit: removed some stuff that turned out to be fluff once I'd actually read the article. Whoops...


We have some important changes landing soon that will make insertions O(1) rather than the current O(n), so this will greatly improve performance.
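
As a generic illustration of why that complexity change matters (not the project's actual code), contrast inserting at the front of a Python list, which is O(n), with a deque, which is O(1):

    from collections import deque
    import timeit

    # generic illustration of O(n) vs O(1) front insertion; not the project's code
    lst, dq = [], deque()
    print(timeit.timeit(lambda: lst.insert(0, 0), number=100_000))  # O(n) per call
    print(timeit.timeit(lambda: dq.appendleft(0), number=100_000))  # O(1) per call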

That's just speculation. And even if it's 2x, it's still a very welcome improvement.

This is a welcome improvement to the spec! I'll be curious how long it takes to see wide adoption.