WASM gave Figma a lot of speed by default for perf-sensitive code like rendering, layout, applying styles, and materializing component instances; our GUI code is mostly React and CSS.
WASM engine performance has not been a problem for us; instead, we are constantly looking forward to improvements in the devex department: debugging, profiling, and modularization.
One of the largest challenges of the platform we face today is the heap size limit. While Chrome supports up to 4GB today, that's not yet the case for all browsers. And even with that, we are still discovering bugs in the toolchain (see this recent issue filed by one of our engineers: https://github.com/emscripten-core/emscripten/issues/20137).
The challenge of perf testing at scale in our company is helping developers detect perf regressions when they don't expect them: accidental algorithmic errors, misused caches, over-rendering of React components, dangerously inefficient CSS directives, etc.
Did you see the benchmarks? There's almost no difference between JavaScript and WASM except in one particular browser. So you're really going to take on the maintenance burden to get that better performance?
This is a cool technique, but I can just imagine the looks on my teammates' faces when I tell them it isn't React... :/
Unless your claim is that WASM isn't actually faster than JavaScript - which I haven't personally verified but seems like a pretty shaky argument - you're not really making any sense.
When you've got 10,000+ component instances on a page, tiny bits of render logic add up. You can identify whether actual JS logic is your bottleneck (and to an extent, which JS is your bottleneck) through profiling.
The most common fix is to avoid calling render functions at all where possible. These are cases where the output of the render function will be identical to the previous output - which means React won't make any DOM changes - but where the render function will itself get called anyway. You can prevent this through better dirty-checks on props and state (shouldComponentUpdate, in React's case). Though if you're not careful, even those comparisons can become a limiting factor (not often, but sometimes). Immutable data structures like those in Immutable.js and pub/sub component updates like what MobX does can help ensure comparisons don't get expensive.
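Roughly what that dirty-check looks like, as a minimal sketch with a hypothetical `Row` component (`React.memo` with a custom comparator is the function-component equivalent):

```tsx
import React from "react";

interface RowProps {
  id: string;
  label: string;
  selected: boolean;
}

class Row extends React.Component<RowProps> {
  // Cheap dirty-check: skip the render when nothing we display has changed.
  // If these props were deeply nested objects, the comparison itself could get
  // expensive, which is where immutable data / reference equality helps.
  shouldComponentUpdate(next: RowProps) {
    return next.label !== this.props.label || next.selected !== this.props.selected;
  }

  render() {
    return <li className={this.props.selected ? "selected" : ""}>{this.props.label}</li>;
  }
}
```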
Another trick is to do expensive operations like mapping over a large array ahead of time, instead of doing it on every render. Maybe you even need to perform your array transformation in-place with a for loop, to avoid allocating-and-copying. This is especially true if you do multiple array operations in a row like filtering, slicing, reducing. Memoization of data that gets re-used across renders is a generally helpful pattern.
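A rough sketch of that pattern (hypothetical `Leaderboard` component and `Item` type): memoize the derived data so the transformation only runs when the input changes, and avoid chaining several allocating array operations.

```tsx
import React, { useMemo } from "react";

interface Item { id: string; name: string; score: number }

function Leaderboard({ items }: { items: Item[] }) {
  // Recomputed only when `items` changes, not on every render.
  const topNames = useMemo(() => {
    const scored = items.filter((i) => i.score > 0);
    scored.sort((a, b) => b.score - a.score); // sorts in place, no extra copy
    return scored.slice(0, 10).map((i) => i.name);
  }, [items]);

  return <ol>{topNames.map((n) => <li key={n}>{n}</li>)}</ol>;
}
```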
Another huge factor in this case is concurrency: JavaScript runs on the same browser thread as reflow and all the rest, meaning that all of this JS logic blocks even non-JS interactions like scrolling and manifests very directly as UI jank. React rendering cannot happen in a worker thread because React requires direct access to the DOM API (element instances, etc), and worker threads cannot share memory directly with the main thread; it's message-passing only (the upcoming React Concurrency project will help alleviate this problem, but doesn't directly solve it). Rust, on the other hand, can share memory between threads, meaning that in theory (assuming Yew takes advantage of this) renders can happen in parallel. Even if they don't, WASM already lives in a separate thread from the main DOM, which does mean it will probably incur some constant message-passing overhead, but it should never block reflow. And that would go a very long way towards preventing user-facing jank.
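To make the message-passing point concrete, here's a minimal sketch (all helper names are hypothetical, and the two "files" are shown together for brevity): the heavy work runs in a Web Worker and results come back via `postMessage`, so it never blocks reflow, but each exchange pays a copy/serialization cost.

```ts
// --- main thread ---
const worker = new Worker(new URL("./layout.worker.ts", import.meta.url));

worker.onmessage = (e: MessageEvent<number[]>) => {
  applyPositionsToDom(e.data); // DOM writes still happen on the main thread
};
worker.postMessage(collectNodeData()); // data is structured-cloned, not shared

// --- layout.worker.ts ---
onmessage = (e: MessageEvent<number[][]>) => {
  postMessage(computeLayout(e.data)); // heavy computation, off the main thread
};

// Hypothetical helpers, stubbed for the sketch:
declare function applyPositionsToDom(positions: number[]): void;
declare function collectNodeData(): number[][];
declare function computeLayout(nodes: number[][]): number[];
```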
The average web app doesn't run into these problems, and usually they can be optimized around, but when you do run up against these limits any across-the-board speed improvement that raises the performance ceiling can reduce the amount of micro-optimization that's necessary, reducing the cost in developer time and likely improving readability.
> 99.9% of front-end work is only performance-critical for DOM rendering
There are two meanings of "DOM rendering" in the context of UI-as-a-function-of-state libraries: 1) generating the virtual DOM from data, and 2) modifying the real DOM to match it. Arguably there's even a 3) browser reflow as a result of those DOM changes.
You're right that 2 and 3 can't really be helped by WASM. But 1 can, and while it's not usually the bottleneck, it certainly can be. At my last company it was not terribly uncommon that fixing UI jank came down to eliminating unnecessary React render function calls because the sum total of them all - running the actual JavaScript logic - was taking too long. Assuming yew computes the virtual DOM in WASM (I don't see how it could be otherwise), the performance increase could definitely be beneficial for certain highly-complex apps.
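To make the split concrete, a rough sketch in React terms: step 1 is pure computation with no DOM involved, which is exactly the part a WASM framework can do natively.

```tsx
import { createElement } from "react";

// Step 1: rendering produces a description of the UI. React elements are
// just plain objects; this is equivalent to <li key={item.id}>{item.label}</li>.
function itemView(item: { id: string; label: string }) {
  return createElement("li", { key: item.id }, item.label);
}

// Step 2, reconciling that description against the real DOM, is the part
// that actually needs DOM APIs (and step 3, reflow, happens in the browser).
```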
To me, the benchmarks you linked show that the fastest WASM web frameworks I know (leptos, dioxus, sycamore) are competitive with, or faster than, the JS frameworks most famously associated with speed, like SolidJS and Svelte; and that they are way faster than React.
That doesn't support your statement here, so I was wondering what takeaway you meant to direct our attention to when providing that link.
Preact diffs against the real dom and is faster than Svelte in benchmarks while the library is only around 3.5kb (the minified code fits easily on a single screen). I'd note that authors from most of the frameworks represented have submitted their own optimizations, so performance isn't strictly based on the author's familiarity.
InfernoJS is massively faster than Svelte or Preact (the benchmark author had to do some rather convoluted re-writes of vanillaJS over the years to keep ahead) and it uses a virtual DOM, but has a few optimizations that React doesn't have or can't use due to its external API.
stdweb is the real thing to keep your eye on, as it seems to show that WASM can be very fast. They still haven't gotten the vanilla benchmark up to the same speed.
React's Fibers do some cool batching behind the scenes which means that large updates (especially constant ones like animations) don't have large negative impacts on user interaction. I doubt this would be possible without a vdom or something very similar to track all changes before deciding batches and update priority.
Also, remember that vdoms have gotten a lot more efficient too. Newer ones can do things like re-use existing sub-trees or recycle vdom objects (they can also recycle the associated real DOM nodes too). Preventing all that extra garbage goes a long way toward keeping those vdoms efficient.
As to Svelte in particular, you aren't "just using an equals". It actually compiles behind the scenes into a completely different set of reactive code. This gets at my biggest issue with such frameworks. Their abstractions aren't free. I have to learn their proprietary syntax and markup. I then have to learn how their compiler works behind the scenes when it's time to debug the code I wrote. When I compile a React-style component into ES7 and run it in the browser, I have to deal with webpack for imports and I have to understand that JSX is really just JS functions. Otherwise, what I wrote is what I'll see.
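For example, with the classic JSX transform, the markup really is just function calls (rough sketch):

```tsx
import React from "react";

const count = 3;

// What I write:
const badge = <span className="badge">{count}</span>;

// What the (classic) JSX transform turns it into: a plain function call.
const badgeCompiled = React.createElement("span", { className: "badge" }, count);
```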
As I said, this project was about as complex as Figma: a feature-rich collaborative editor. This designation has nothing to do with code size.
Go hook up the React dev tools to Reddit and look at how often each component renders. You'll see that components are constantly re-rendering even though nothing apparent has changed. I get that ideally React would only re-render when something has changed, but clearly it's easy to create situations where React thinks something has changed even when it hasn't.
Another good example is the Facebook website itself. It’s just so slow. I get saying Reddit developers suck, but I don’t think you can say the same about FB developers given that FB made React. React is popular because it’s a good developer experience. It’s not the best choice if you want optimal performance.
Do you have any performance numbers or benchmarks by which to measure the claims of superior performance? I’m not doubting you do see better perf - but I’m curious how this compares to other React alternatives.
ftr, when i mentioned speed i was mainly referring to WASM->DOM. So, if that ends up too slow for heavy DOM manipulation (eg, React style diffing/etc), then with any luck we can offload the DOM interactions to JS. Eg, WASM->JS->DOM
Why do you care about optimizing perf so much? React and even Angular are fast enough that you should never need to worry about any sort of perf hacking, and I would argue that if perf is the number-one concern for your use case you shouldn't be using a framework at all.
The primary thing I care about when comparing frameworks is developer productivity. Bundle size is a moderate concern, saving 50-100ms to re-render a giant table doesn't even make the list.
I use nunjucks js all the time! It really works so well and the performance is insane. We render 20-plus just in time in a React Lowdefy app with no lagging. I'd love to see a performance benchmark for a WASM version of this!
Yeah the build times were a killer for me. I did a small project with rust->wasm in the front-end, and this was a bit painful compared to a standard react workflow.
It would be really nice if this could be improved in the future.
You're missing the point he's making. Preact is leaner than React; Firefox was leaner than the Mozilla suite.
React isn't slow (yet?), but it's not hitting any universal limits; there's always scope for leaner, smaller, and faster software (yes it is, see the benchmark).
Legitimate request: can you provide some actual benchmarks or examples that demonstrate these perf issues?
Not straightforwardly (due to pseudonymity and in any case client confidentiality) but I can give you a typical example.
I've worked on quite a few web apps in recent years that have used SVGs to draw somewhat complicated and interactive visualisations. That needs a level of efficiency well beyond your basic "Here's a control, here's the underlying data in the model, there's roughly a 1:1 correspondence between them" web app UI. For example, some of the layout algorithms could take a significant fraction of a second just to run once, so typically you get a whole extra level of cache sitting between your underlying data and your React-based rendering functionality. Even then, a rerender of the whole visualisation can easily take a significant fraction of a second, just due to the number of elements involved and the relatively poor performance of browsers at handling large inline SVGs in various situations.
So, you immediately need to start isolating which parts of the SVG actually need updating in response to user events, including relatively frequent ones like mouse moves or repeated keypresses. But if many of those child components are also going to be interactive, there are potentially hundreds or thousands of event handlers flying around. If each type of interaction needs a handler that has some common information, like say where your underlying controller logic lives, and also something specific to each element, like say the ID(s) of some underlying item(s) in your data model that are related to that part of the visualisation, then either you get a lot of binding going on or you have to pass in a common function and manually call it with the extra parameters from the event handler functions for your SVG elements.
One thing you really don't want to do in this situation is have a function involved that is (a) recreated on each render of (b) a high-level component with a lot of children that will ultimately be calling that function. Doing so will inevitably mean that you need to re-render all such children in your viz instead of just the very small number that might be relevant to, say, a particular mouse move or keyboard event.
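A minimal sketch of the fix (hypothetical `Viz`/`VizNode` components and controller): keep one stable handler in the high-level component and let each memoized child close over its own ID at the leaf.

```tsx
import React, { memo, useCallback } from "react";

interface NodeData { id: string; x: number; y: number }
interface Controller { select(id: string): void }

// Memoized child: with a stable onSelect reference it only re-renders when
// its own node data changes, not on every render of the parent.
const VizNode = memo(function VizNode(
  { node, onSelect }: { node: NodeData; onSelect: (id: string) => void }
) {
  return <circle cx={node.x} cy={node.y} r={4} onClick={() => onSelect(node.id)} />;
});

function Viz({ nodes, controller }: { nodes: NodeData[]; controller: Controller }) {
  // The anti-pattern would be passing (id) => controller.select(id) inline in
  // the map below: a fresh function on every render of this high-level
  // component, which defeats memo() and re-renders every child.
  const onSelect = useCallback((id: string) => controller.select(id), [controller]);

  return (
    <svg viewBox="0 0 1000 1000">
      {nodes.map((n) => (
        <VizNode key={n.id} node={n} onSelect={onSelect} />
      ))}
    </svg>
  );
}
```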
The thing that gets me is that in this sort of environment, whether it's an SVG or a table or a list or any other part of the DOM that will contain a lot of content and probably be based on a lot of different underlying data points in your model, it's entirely predictable that you will have to focus on only narrow parts of your rendered content in response to frequent user-triggered events. So you know right from the start that you can't just bind your event handling functions at the top level and pass them down through props, adding more specific parameters as you pass them into more specific child components. That entire strategy is doomed before you write a line of code (though unfortunately it will probably work just fine if you're developing with a very small set of data and a very simple resulting visualisation, which can create a false sense of confidence if you're not careful). So you might as well plan the design for your event handling and callbacks around what you're inevitably going to need from the start. I can't actually give you hard data on the exact FPS we saw the first time we tried the bad way of doing this, but I can tell you that it was more like SPF than FPS.
Ryan did also address the `shouldComponentUpdate / PureComponent` aspect in the post already.
Honestly, the last parts of the article about ignoring handlers didn't make much sense to me anyway. I can't see why you'd ever want to pass props into a React component that weren't ultimately going to influence rendering one way or another, and therefore why you'd ever be in a position where you could safely ignore props for shouldComponentUpdate purposes yet wouldn't just remove them altogether. Certainly any assumption that event-handling on<X> props don't need to be diffed would be totally wrong in the context of a table/list/SVG/etc. where each child element is tied to some specific underlying data point(s) and has event handler functions supplied via props accordingly. That looked like another of those cases where the author is generalising from their own limited experience and failing to realise that in plenty of other situations the same assumptions won't apply.
The problem with those small benchmarks is that it's pretty easy to manually write the optimal sequence of DOM commands to get the best performance. But when you scale your front-end to millions of lines of code with many full-time engineers who may not know front-end very well, it becomes extremely hard to do it properly.
React originally was designed for developer efficiency, not performance. It is a port of XHP (in PHP), which we use at Facebook to build the entire front-end and which we're really happy with. It turns out that the virtual DOM and diff algorithms have good properties in terms of performance at scale. If you have ideas on how we can communicate this better, please let me know :)
Solid.js is even faster than Inferno, and it doesn't really use a VDOM strategy; it uses a strategy much more like Svelte's. IMO Svelte is just poorly implemented from a benchmark perspective.
In reality, most of these benchmarks are not meaningful when talking about real app performance. What's meaningful is how you do global state updates in your app. If you use a React app with hook-based context providers that unnecessarily update hundreds of components on simple changes, your perf is going to suck. If you use a React app and don't use React.memo anywhere, perf is going to suck. If you use React very carefully, are fully aware of when the vDOM is going to run, use small components that only update when their data actually changes, and ideally avoid running the vDOM 60-120 times a second for animations, performance is going to be good.
I like Solid.js because it does all this for you by nature of just using the framework. Svelte does some of this for you, so for real-world apps performance is likely to be better than React, but it doesn't do it as well as Solid, by nature of its state management strategy rather than its DOM update strategy.
The less you update, the faster your app will be. Then the DOM diffing strategy doesn't matter.
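A minimal sketch of the context side of that (hypothetical contexts and components): split one catch-all provider into narrow ones so a frequent, simple change only re-renders the consumers that actually read it.

```tsx
import React, { createContext, useContext, memo } from "react";

const ThemeContext = createContext<"light" | "dark">("light");
const CursorContext = createContext({ x: 0, y: 0 });

// memo() shields this from parent re-renders; not reading CursorContext
// means cursor updates never touch it at all.
const Toolbar = memo(function Toolbar() {
  const theme = useContext(ThemeContext);
  return <div className={`toolbar ${theme}`}>Tools</div>;
});

// Only this small subtree re-renders as the cursor moves.
function CursorReadout() {
  const { x, y } = useContext(CursorContext);
  return <span>{x}, {y}</span>;
}
```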
This looks neat, but was React itself ever the problem with slow web apps? I haven't looked into this, so correct me if I'm wrong, but my impression is that slow libraries are the bigger cause of poor performance in SPAs.
I'm not convinced that the 100kb is what's causing the perf problems, though. It's not that hard to make a React app that loads almost instantly. There are lots of huge, slow React apps, but that's due to poor engineering choices, not the framework.
I find useful comparison points are usually React, Preact, and SolidJS: React is fairly slow, but it can do pretty much everything, Preact is React with a lot of parts ripped out to provide a more efficient, slimline version, and SolidJS completely changes the rendering model and is probably the upper limit on how efficient a front-end framework written in Javascript can be.
It's very interesting to compare those with Rust-based frameworks. The big noticeable difference is that all the Rust-based frameworks require significantly more memory, even more than React, and ship a lot more total code. However, in terms of performance, they are bracketed by JavaScript frameworks: SolidJS is by far the fastest framework (although Dioxus is catching up in most areas), and React is by far the slowest.
Obviously a lot of this is to do with how they interact with the DOM. JavaScript gets that for free, but Rust and other WASM-based frameworks have to cross the JS/WASM boundary in order to read or write DOM values. When direct DOM access eventually comes, I suspect we'll see some pretty big changes in these results. But for now, it's pretty clear that the most important thing for optimisation purposes is your application's architecture, rather than the choice of language.
That describes the DBMon benchmark [0] pretty accurately. It has a hundred "components" (rows), each with several subcomponents that re-render based on state changes.
Actually it is less interesting, and indicative of mainly one aspect of web app performance: re-rendering. The JS Framework Benchmark gives a more holistic overview that includes bulk insertion, deletion, swapping, events, startup time, etc.
On this benchmark, Simulacra.js outperforms React by a wider margin.