
(a) You can use the key attribute in order to get DOM reuse. If you are looping over N keys then React is going to reuse the nodes and move them around.

(b) You can implement shouldComponentUpdate in order to have a quick way not to re-render a sub-tree if nothing changed.

(c) See (b) but we're also working on changing the internal representation of the virtual DOM to plain js objects that can be reused[1]. We were super worried about GC but it turns out that it hasn't been the bottleneck yet for our use cases.

(d) If you are writing pure React, all the actions are batched and actually, it's write-only: React almost never reads from the DOM. If you really need to read, you can do it in componentWillUpdate and write in componentDidUpdate. This will coordinate all the reads and writes properly.

A really important part of React is that by default this is reasonably fast, but most importantly, when you have bottlenecks, you can improve performance without having to do drastic architecture changes.

(1) You can implement shouldComponentUpdate at specific points and get huge speedups. We've released a perf tool that tells you where the most impactful places are[2]. If you are bold, you can go the route of using immutable data structures all over the place, like Om/the Elm example, and then you're not going to have to worry about it.
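The bail-out described in (b) and (1) boils down to a shallow comparison of props. A minimal sketch in plain JavaScript (illustrative only, not React's actual source):

```javascript
// Shallow comparison of two plain objects: reference equality per key.
// Cheap to run, and exact if the underlying data is immutable.
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(k => a[k] === b[k]);
}

// A component would bail out of re-rendering when nothing changed:
function shouldComponentUpdate(currentProps, nextProps) {
  return !shallowEqual(currentProps, nextProps);
}
```

With immutable data, this check is both fast and reliable; with mutated-in-place data, it can wrongly report "no change", which is the classic pitfall.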

(2) At any point in time, you can skip React and go back to raw DOM operations for performance critical components. This is what Atom is doing and the rest of their UI is pure React.

[1] https://github.com/reactjs/react-future/blob/master/01%20-%2... [2] http://facebook.github.io/react/docs/perf.html#perf.printwas...




> How does React know that only that sub-component needs to be re-rendered?

It doesn't. By default it will render the whole tree again. This happens in JavaScript and is probably much faster than you think.

However, when an app grows large enough, this can be a source of performance issues, which is why React provides ways you can inform it about work it shouldn't have to do, namely shouldComponentUpdate. If you can compare the data and determine that a particular sub-tree doesn't need to be updated you can implement a shouldComponentUpdate callback that does this. Immutable data that can be cheaply compared by checking reference equality is the preferred way of doing this.

More info on this was recently added to the docs: http://facebook.github.io/react/docs/advanced-performance.ht...
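The "immutable data compared by reference" idea can be sketched in plain JavaScript: updating one branch immutably gives only that branch a new reference, so a sub-tree can be dirty-checked with a single `===`.

```javascript
// Two app-state snapshots; only the "list" branch is updated.
const prev = { header: { title: "Hi" }, list: { items: [1, 2] } };

// Immutable update: spread copies the changed path, untouched
// branches keep their original references (structural sharing).
const next = { ...prev, list: { ...prev.list, items: [1, 2, 3] } };

const headerChanged = prev.header !== next.header; // false: skip that sub-tree
const listChanged   = prev.list   !== next.list;   // true: re-render it
```

A shouldComponentUpdate for the header component can therefore return false after one pointer comparison, without inspecting any of the header's contents.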


I will address these with respect to domvm [1]

> What about state within a component? How do you encapsulate that state so that other components can't muck with it?

Solved.

> What about passing data to that component?

Solved.

> What about type checking that data passed in in debug mode?

Could be solved if I rewrote in TypeScript. But this has never been a problematic issue in 10k+ LOC apps.

> What about a formal API for components to talk to one another?

Yep.

> What about knowing the explicit amount of the view tree that needs to be re-rendered based on what data changed?

Obviously, that's what virtual dom is.

In addition to all of this and much more, materialized views are independently refreshable and can be composed between disjoint components. There are async lifecycle hooks, a router, mutation observers, and ajax sugar.

All of it is 17k min, requires no special tooling or IDE to write, is isomorphic and is 2-3x faster than React.

I have no doubt that React brings a lot to the table, but I hesitate to treat it as the final word in frontend frameworks for all the above reasons.

[1] https://github.com/leeoniya/domvm


(I’m not the person you replied to, but I’ll offer some thoughts anyway.)

> for a component, why do I need the virtual dom diffing?

React allows you to build relatively large and complicated DOM trees by composing smaller components, and it is designed so you can then (re)render the entire tree as a single operation when any relevant underlying state changes, without having to manage (re)rendering each individual component within that tree manually.

This is useful because now you only need to define one absolute way to render from a given set of underlying data. You no longer need to give a relative specification for all possible transitions from one state to another, which can be a huge reduction in complexity and edge cases if you’re working on a large, complicated UI.

However, other things being equal, rerendering your entire DOM tree on the slightest change in the underlying data would become painfully slow for any system that isn’t very small. That is partly because there are costs to regenerating the DOM tree itself, as with any template-based system. However, the tightest bottleneck in today’s browsers is usually the consequential costs that result from updating the DOM in terms of regenerating layout and so on.

The virtual DOM diffing is essentially a performance optimisation that lets you mitigate those consequential costs, because now only the parts of the DOM that actually change as a result of changes in the underlying data will lead to rerendering in the browser and the costs that incurs. In other words, virtual DOM diffing isn’t the point of React, declarative/absolute rendering is, but the former is what makes the latter fast enough to be viable.

As an aside, React’s answer to the other half of the performance problem, the cost of regenerating the entire DOM tree, is the `shouldComponentUpdate` function. That lets components, including child components deep within a large tree, perform any quick tests they can to determine whether their rendered output will actually change, and to skip the rerendering if not.

That leads on to various other ideas that have become popular in connection with React, such as using immutable data models for the underlying data. With immutable underlying data, a shallow comparison between a couple of object references that can be performed in moments acts as a “dirty check”.

However, you don’t have to use React in that way. Some people are wary of relying too much on `shouldComponentUpdate`, which inherently violates the “single source of truth” principle and risks introducing bugs if its assumptions differ from those of the same component’s `render` method. Others find the arguments for building your entire data store around immutable objects, which itself carries a hefty performance overhead, less compelling.

There are certainly other reasonable architectures you could choose for supplying the underlying data to React components and triggering rerendering when that data changes, including more traditional designs where view components observe the underlying data model in some way and trigger their own rerendering on any relevant change. Going down this path is effectively using React as a reasonably efficient and composable template rendering engine, so you can still have the advantages of absolute rendering logic, and you bypass some of the potential performance problems caused by doing large-scale rerenders with React, but in return you take back some of the responsibility to set up explicit dependencies between your view components and the data they depend on. As ever, there are trade-offs, and usually there will be multiple quite different designs that will do a decent job as long as you understand their pros and cons.
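The diffing optimisation described above can be sketched as a toy function over a minimal virtual-DOM shape. This is nothing like React's actual reconciler, just an illustration of how "what changed" becomes a small list of patches to apply to the real DOM:

```javascript
// Toy diff over vnodes shaped { tag, props, children }.
function diff(oldNode, newNode, path = "root", patches = []) {
  if (oldNode === undefined) {
    patches.push({ op: "create", path, node: newNode });
  } else if (newNode === undefined) {
    patches.push({ op: "remove", path });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ op: "replace", path, node: newNode });
  } else {
    // Same tag: patch differing props, then recurse into children.
    if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props)) {
      patches.push({ op: "setProps", path, props: newNode.props });
    }
    const len = Math.max(oldNode.children.length, newNode.children.length);
    for (let i = 0; i < len; i++) {
      diff(oldNode.children[i], newNode.children[i], path + "." + i, patches);
    }
  }
  return patches;
}

const a = { tag: "div", props: {}, children: [
  { tag: "span", props: { class: "x" }, children: [] },
] };
const b = { tag: "div", props: {}, children: [
  { tag: "span", props: { class: "y" }, children: [] },
] };

const patches = diff(a, b); // only the changed <span> yields a patch
```

Even though `render` regenerated the whole tree, only one `setProps` patch touches the browser DOM, which is where the expensive layout work lives.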


This is by now diffuse cultural knowledge, but most of the ideas in React can be understood as: view = render(state). It re-renders everything all the time. But practical considerations force some optimizations. Everything else in React follows from those optimizations.

For example, if you destroy and re-create DOM all the time, then it loses critical information like user's cursor position and text selections. It is also slow to read and write to the DOM. Thus the need for an intermediate data structure, the virtual DOM.

React re-renders the virtual DOM every time the state changes. And it then diffs the previous and current ones against each other and does just the minimal number of DOM mutations to sync it up. To speed this up a bit, array elements are denoted with "key" so (I assume) there is a way to see if an element has been added or deleted.
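The "key" idea mentioned here can be sketched in a few lines: match children by key instead of by position, so an insertion at the front moves existing nodes rather than recreating them all.

```javascript
// Sketch of keyed child matching (illustrative, not React's algorithm).
function matchByKey(oldChildren, newChildren) {
  const oldByKey = new Map(oldChildren.map(c => [c.key, c]));
  return newChildren.map(c =>
    oldByKey.has(c.key)
      ? { op: "move", key: c.key }     // reuse the existing DOM node
      : { op: "create", key: c.key }   // genuinely new element
  );
}

const prevChildren = [{ key: "a" }, { key: "b" }];
const ops = matchByKey(prevChildren, [{ key: "c" }, { key: "a" }, { key: "b" }]);
// "c" is created; "a" and "b" are reused even though their indices shifted
```

Without keys, a position-based comparison would see every slot as changed and recreate all three nodes.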

But re-rendering the virtual DOM all the time can also be costly in terms of performance. Thus the next set of optimizations: React.memo+immutable data, and so on..


>If you keep all your logic super high level, then your entire component tree will re-render often.

This is not true, especially if you're following _actual_ best practices and using e.g. React.memo() and immutable state updates. React is explicitly built around the idea that you don't need to re-render every part of the DOM on every state update.
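The React.memo idea can be sketched in plain JavaScript as "cache the last render output, keyed by shallow-equal props" (a concept sketch, not React's implementation):

```javascript
// Memoize a render function on its most recent props.
function memo(render) {
  let lastProps, lastResult;
  return props => {
    const same = lastProps &&
      Object.keys(props).length === Object.keys(lastProps).length &&
      Object.keys(props).every(k => props[k] === lastProps[k]);
    if (!same) {
      lastResult = render(props);
      lastProps = props;
    }
    return lastResult;
  };
}

let renders = 0;
const Label = memo(({ text }) => { renders++; return "<span>" + text + "</span>"; });

Label({ text: "hi" });
Label({ text: "hi" });  // shallow-equal props: cached result, no re-render
Label({ text: "bye" }); // changed props: renders again
```

This is exactly why immutable state updates matter: the shallow check only works if "same reference" reliably means "same data".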


React can handle all of your items just fine depending upon usage.

(a) Using the key property gives React a way to determine the likeness of elements.

(b) Don't calculate the expensive things at render time, do them when loading or modifying state.

(c) Is related to a, but I haven't run into large problems with this personally.

(d) React does batch changes to an extent I believe.


> sounds less efficient since you'll need to re-print the entire page with every change.

Huh? No, not at all. As far as I understand, React has algorithms that replace only the HTML that changed in a DOM subtree (and that is called virtual DOM, not shadow DOM, which is a different concept).

But if you already know exactly what has changed and where to change it in the page, there is no need for more complex algorithms to kick in. Just take the pointer to your div or cell and change the content.

Bottom line: React is written in vanilla JavaScript. Can't do better than it.


You don't have to recreate the entire vdom if only one part changes. You need only recreate that component and its children (something like React's PureComponent or shouldComponentUpdate optimizations can prevent the worst cases without too much trouble).

Changing branches in a component is extremely common in code I write (very large business application with lots of rules).

Another important optimization is caching and re-using DOM nodes. With a vdom, you know exactly which properties and attributes differ from the default node. To reuse a node you need only update these properties to their new values or restore their original values. Without that tracking, it would be computationally cheaper to just create a new node.

One important example of this is our use of flyweight scroller patterns in lists. The content of the sub-tree changes, but most of the sub-components stay the same, so a vdom could keep most of the dom nodes around.
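The node-reuse point above comes down to computing an attribute delta: with the old and new attribute sets tracked by a vdom, reusing a node is just "set what changed, remove what disappeared". A minimal sketch:

```javascript
// Compute the attribute changes needed to morph a DOM node in place.
function attrPatch(oldAttrs, newAttrs) {
  const set = {};
  const remove = [];
  for (const k of Object.keys(newAttrs)) {
    if (oldAttrs[k] !== newAttrs[k]) set[k] = newAttrs[k]; // changed or added
  }
  for (const k of Object.keys(oldAttrs)) {
    if (!(k in newAttrs)) remove.push(k); // attribute disappeared
  }
  return { set, remove };
}

const patch = attrPatch(
  { class: "row", title: "old" },
  { class: "row", "data-id": "7" }
);
// patch.set = { "data-id": "7" }, patch.remove = ["title"]
```

Applying `patch` via `setAttribute`/`removeAttribute` is typically far cheaper than tearing down and recreating the node, which is the whole point of the flyweight scroller pattern.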


While the basic idea of rendering to a virtual DOM is simple, making it work well for large applications requires a lot more than the toy implementation described in the article.

In order to get adopted, React needs to coexist with existing applications/third party libraries that do manual DOM mutations. The life cycle methods are there to deal with this.

React implements its own class system that supports mixins and type annotations. The plan is to change the API to use ES6 classes (we're working to improve the standard to support all React use cases).

React re-implements its own event system in order to make it fast, memory efficient and abstract away browser differences.

Making composable components is not as straightforward as it first seems. There are a lot of edge cases, like refs and owner/parent relationships, to be handled.

Then, as you mentioned, there's the diff algorithm and batching strategies which need to be implemented in a performant and memory efficient way. And provide hooks for the user to be able to give it hints via shouldComponentUpdate.


> Yeah, I also don't think that's correct, in most cases.

Actually, stupidcar's assessment is completely correct.

> React updates the entire tree internally when anything changes, but actually only changes DOM nodes who's prop or state has changed.

Sorry, but you're mixing up a lot of concepts here!

When you tell react(-dom) to render, it recursively calls all the render() methods of your entire component hierarchy and updates the entire virtual DOM, but only mutates the actual browser's DOM for the parts that are changed.

Yes, there are ways to block a render based on prop staleness using shouldComponentUpdate/PureComponent/React.memo. These are explicit optimisations on top of the default render-everything model.

Props and State play into this story quite a bit differently. When a component-local state changes, React only renders that specific component (with everything below it). Props are passed down on every render and are by default not compared for staleness, so they don't prevent re-renders. Props and DOM updates have nothing to do with each other: props are properties of components, not the DOM! Sure, the DOM can be directly or indirectly influenced by those props.

> Maybe a side-effect of the poor equality we have in JS-land.

Yes, this is true. A lot of React's quirks are a result of JS's poor notion of equality.


What if your component has a lot of nodes? Do you re-render them all? I guess that's the reason they made React. Even if the state of a component changes, most of the DOM nodes of the rendered component stay the same, and re-rendering them all would take too long.

The PureRender and shouldComponentUpdate methods are "escapes" from the React philosophy, so I don't count them as being "React". They make the code more complicated because, with them, inconsistencies can easily arise.

Making a simple local change to a list of N items, which would normally take O(1) work, suddenly takes O(N) work, even with the PureRender and shouldComponentUpdate methods, because React will traverse the complete list to determine which items should be changed (calling shouldComponentUpdate for every item).
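The O(N) point can be made concrete with a counter: even when only one item changes, a top-down pass still asks every child whether it needs to update.

```javascript
// Count how many "should I update?" checks a top-down diff performs.
let checks = 0;
function shouldUpdate(prevItem, nextItem) {
  checks++;
  return prevItem !== nextItem; // cheap reference check
}

const prevList = [{ id: 1 }, { id: 2 }, { id: 3 }];
const nextList = prevList.slice();
nextList[1] = { id: 2, done: true }; // one local change

const toRender = nextList.filter((item, i) => shouldUpdate(prevList[i], item));
// checks === 3 (every item was asked), toRender.length === 1
```

Each individual check is cheap, but the traversal itself is still linear in the list size, which is the commenter's complaint.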

I agree with you on the DOM. Its speed needs to improve. React offers a nice abstraction which may keep the code simple, but, as explained here, has problems of its own.


1. If a component changes state and its parent doesn't, then the other children aren't considered so it would be O(1). If the parent rerenders, then yes, we would consider each child and it would take O(n).

2. Yes, it would. You would really have to go out of your way to make that happen, though. React has a feature for batching state updates together to prevent unnecessary rerenders if all of the updates occur together. Beyond that, it's rarely an issue.
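The batching idea can be sketched as a queue that is flushed with a single render. The names here (`setState`, `batch`) are hypothetical stand-ins, not React's actual internals:

```javascript
// Sketch: defer state updates made inside a batch, then render once.
let renders = 0;
let state = { count: 0 };
let queue = null;

function setState(partial) {
  if (queue) { queue.push(partial); return; } // inside a batch: defer
  state = { ...state, ...partial };
  renders++; // outside a batch: render immediately
}

function batch(fn) {
  queue = [];
  fn();
  const updates = queue;
  queue = null;
  state = Object.assign({}, state, ...updates); // later updates win
  renders++; // one render for the whole batch
}

batch(() => {
  setState({ count: 1 });
  setState({ count: 2 });
  setState({ flag: true });
});
// renders === 1, state.count === 2, state.flag === true
```

React applies this kind of batching automatically inside its event handlers, which is why multiple setState calls in one handler rarely cause multiple renders.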

If you have a particularly long list you can use a tool like react-virtualized (https://github.com/bvaughn/react-virtualized) so that only the onscreen rows are rendered at all.


You know it well (DOM, Retained | Immediate). There's nothing here about saving React. Speaking of browser apps specifically, I'd keep an eye on standards like Template Instantiation and the upcoming Symbols-as-WeakMap-keys proposal. They're close to the recent "Signal (reactivity)" trend: a way to update part of the tree more efficiently without re-rendering the whole thing. As long as we still use the DOM, it's not going to get more revolutionary than this.

Not necessarily addressing the speed portion, but if you're going to give a key to a repeated item in React, it is best to use the index instead of a unique identifier (if the overall DOM structure does not change much between renders) so that React does not destroy and recreate each item on tree change.

> That's really not how you'd write a large idiomatic React application.

That's a bit No True Scotsman for me. Just in the replies to my previous comment, there are several quite clear yet apparently contradictory views expressed about how these things are supposed to work. I doubt that anyone, probably including the React developers themselves, yet has enough experience to sensibly determine what is and isn't idiomatic or best practice here. In any case, I'm far more interested in effective results than dogma, which is why we've been doing practical investigations into what really happens when the tools are used in different ways.

> Conceptually, you should be running render() methods on the actual items which have changed, and a very small number of parent components with little-to-no DOM elements. You should also be executing a relatively small number of shouldComponentUpdate() methods which should be extremely fast to execute.

And this is where the assumptions started breaking down in our experiment.

For one thing, we have a somewhat complicated data model and a somewhat complicated UI to present it. Managing the relevant shouldComponentUpdate mechanisms was getting tedious even in our short term experiment. If you're rerendering top-down in a heavily nested UI then it seems you need to provide shouldComponentUpdate in quite a lot of places just to deal with the parent-child relationships with acceptable efficiency, and each of those implementations is a potential source of errors because now you have multiple sources of truth.

Also, for things that aren't extremely fast to execute because they rely on derived data, you wind up with questions of what intermediate results to cache and where. Caching intermediate results goes against the kind of approach much of the React community seems to advocate, hence props in getInitialState being an antipattern and all that.

Time will tell, but once the abstraction started leaking in these kinds of ways, it wasn't clear to us that trying to follow the pure, declarative approach that a lot of React advocacy promotes was really better than other strategies.

> If you are doing that, then where exactly was the performance bottleneck, and how was your alternative approach avoiding it?

Bottlenecks were as above, among others.

Avoiding them comes down to the simple principle that if each component attaches to listen to events in a modular way and sets its own state accordingly, then the default is essentially not to rerender anything and has zero overhead. As soon as you start making active decisions about what to rerender, you immediately have the overhead of those decisions to worry about, and sometimes that overhead is significant if you're, say, responding to every character typed in a text box.

If you respond at a more local level, you can choose the granularity of responses to keep the overheads more controlled, which it seems so far is effectively getting the best of both worlds.
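The "listen locally" approach described above can be sketched with a tiny event emitter: each component subscribes only to the events it cares about and re-renders only itself. The event names and API here are hypothetical, for illustration:

```javascript
// Minimal pub/sub bus: components subscribe to specific events.
function createEmitter() {
  const listeners = new Map();
  return {
    on(event, fn) {
      if (!listeners.has(event)) listeners.set(event, []);
      listeners.get(event).push(fn);
    },
    emit(event, data) {
      (listeners.get(event) || []).forEach(fn => fn(data));
    },
  };
}

const bus = createEmitter();
let headerRenders = 0, listRenders = 0;

bus.on("user:renamed", () => { headerRenders++; }); // header updates itself
bus.on("items:added",  () => { listRenders++; });   // list updates itself

bus.emit("items:added", { id: 1 });
// Only the list responded; the header did zero work.
```

The default cost of an irrelevant change is zero, at the price of wiring up the subscriptions explicitly, which is exactly the trade-off described in the comment.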

> In fact, it sounds like your second approach might be closer to a normal idiomatic React app than your first implementation. Incidentally, are you using some sort of flux architecture?

In the sense that user interactions send meaningful messages to an internal component, that component is responsible for updating some stored data accordingly, and the internal component then signals changes that other parts of the UI can respond to, yes, it's somewhat like Flux in overall architecture.


> the top-down push of data through parent-child component relationships

This is a conventional approach when working with the DOM, since the XML is organized as a tree. Plus it happens to work nicely with their intended data flow pattern. But you make an interesting point, what about UIs that don't fit nicely into the document paradigm? Are there cases where it would be better to organize UI components in a graph, for example?

> if you are working with a lot of third party components that are not react based, it can be hard (or a lot of boilerplate) to contain them within react applications.

In my experience integrating third-party UI components has been much easier than I would have guessed. The difficulty has not really been in wiring the React parts to the component's API, but in making a stateful third-party component behave in a stateless fashion.


Thanks for your thoughts.

I eventually did figure out what's going on. I did figure out that using key= solves it. I also found I could use class components and make the three labeled instances in the constructor. That gives them a long enough lifetime to stay consistent also.

I'm not trying to tell anyone else they shouldn't use react. It's great that so many find it to be so productive.

You might say I dislike the general philosophy of react. I would rather choose whether I'm using 1-way or 2-way binding than being told. I actually have come around on JSX. I think it's pretty good now.

As for being weak and contrived, I don't know what to tell you. This honestly seems like a natural way to write this. I'm a react beginner. When I ran into this, it took me some time to figure it out, but I did eventually. I suppose another part of the philosophy I don't like is entire premise of virtual DOM and reconciliation. It creates another layer of concerns the developer needs to think about. This kind of stuff should be an implementation detail. Instead, you have to remember to use an array rather than unrolling. Or just use keys everywhere. More precisely, maybe you don't need to. But I would need to.


from [1]: "In React, you simply update a component's state, and then render a new UI based on this new state. React takes care of updating the DOM for you in the most efficient way."

So you don't actually do DOM mutation yourself, nor do you "update" the DOM in response to events on models that changed. You think in terms of "pseudodom", not actual DOM, so if any state changes or an event fires you can just redraw all the pseudodom instead of worrying about finding the right place in the dom that you need to update.

[1] http://facebook.github.io/react/docs/interactivity-and-dynam...
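The "redraw all the pseudodom" mindset is just a pure function from state to markup, re-run wholesale on any change. A minimal sketch (plain strings stand in for the virtual DOM here):

```javascript
// One absolute description of the UI as a function of state.
function render(state) {
  return "<ul>" +
    state.todos.map(t => "<li>" + (t.done ? "[x] " : "") + t.text + "</li>").join("") +
    "</ul>";
}

let state = { todos: [{ text: "write", done: false }] };
const before = render(state);

// An event handler just produces new state and re-renders;
// no hunting for the right place in the DOM to mutate.
state = { todos: [...state.todos, { text: "ship", done: false }] };
const after = render(state);
```

React's job is then to turn the difference between `before` and `after` into the smallest set of real DOM mutations, so the developer never writes transition logic by hand.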

Sorry, I don't mean to derail your thread.

