Most developers have never used a virtual-dom library, especially those that find JSX offensive. Your sales pitch for your library is coming off pretty aggressive, and it's a bit off topic. No one is talking about render speed.
> If you want to compile away the virtual DOM, then the compiler has to be aware of how data changes propagate through your model
That assumes that a virtual DOM is needed for efficiency because the model exposed by a framework instance would otherwise be doing extra work. That is a red herring, for two reasons:
1. You don't need any framework, MVC or otherwise, to have a fully expressive application written in JavaScript. If there is no framework and no model, there is no extra layer to be aware of or to propagate changes through. Perhaps the vDOM provides performance improvements compared with other framework techniques, but the compiler doesn't need to be smart once those unnecessarily complex considerations are taken off the table.
2. There are only two classes of APIs exposed by the browser to JavaScript: the Web APIs and the DOM. Regardless of how clever or organized any framework is, it still ultimately interacts with the page using the same basic set of instructions as everything else. For example, a JavaScript framework isn't exposing new access points to hardware, memory, or markup artifacts that aren't already available as web standards. The performance benefit of bypassing a virtual DOM is simply due to executing fewer instructions, which means not executing a giant framework in addition to bypassing its virtual DOM.
As an analogy: I don't need to blow up a dam and change the course of a river to put out a campfire, but if I were going to put out a campfire by diverting a river, I would now have many additional performance factors to consider.
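To make point 2 concrete, here is a minimal sketch of what "the same basic set of instructions" means: updating the page with nothing but the standard DOM API. All names here are illustrative, not from any framework.

```js
// Plain DOM, no framework: the same primitives every framework
// ultimately has to call.
const label = document.createElement('span');
document.body.appendChild(label);

let count = 0;
document.body.addEventListener('click', () => {
  count += 1;
  label.textContent = String(count); // one DOM write, no diffing pass
});
```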
> So we looked for smaller alternatives, like virtual-dom and mercury. The documentation for virtual-dom was slim and we didn’t think the API for mercury was very user friendly.
Really? You found that it was better to write everything from scratch instead of modifying mercury for your use (thus making use of the very good virtual-dom library)? Do you know that the whole mercury source[1] is only 126 lines of code?
Perhaps you should also know that your library usage examples, in the end, look just like mercury usage examples.
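For reference, mercury's counter example looks roughly like this; I'm reconstructing it from memory of mercury's README, so treat the exact API details as approximate rather than authoritative:

```js
var hg = require('mercury');
var h = hg.h;

function App() {
  return hg.state({
    value: hg.value(0),
    channels: { clicks: incrementCounter } // event channels live in state
  });
}

function incrementCounter(state) {
  state.value.set(state.value() + 1);
}

App.render = function render(state) {
  return h('div.counter', [
    'Clicked ' + state.value + ' times. ',
    h('button', { 'ev-click': hg.send(state.channels.clicks) }, 'Click me')
  ]);
};

hg.app(document.body, App(), App.render);
```

Swap the names around and it reads just like the examples in the post.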
> It is not the one-size-fits-all solution you are using it as. If your site has a dynamic form, you don’t need a freaking virtual DOM.
The main benefit of a virtual DOM, AFAICT, is that it makes rendering on the server trivial without needing to jump through all the hoops that, e.g., Angular has to. Contrary to the post, this makes it ideal for such sites.
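The reason is that a virtual DOM node is just plain data, so server rendering reduces to walking a tree and emitting a string. A minimal sketch with hypothetical `h` and `renderToString` helpers (not any particular library's API):

```js
// A virtual node is plain data; it needs no browser to exist.
const h = (tag, props, ...children) => ({ tag, props: props || {}, children });

// "Server rendering" is then just a walk over that data.
function renderToString(node) {
  if (typeof node === 'string') return node;
  const attrs = Object.entries(node.props)
    .map(([key, value]) => ` ${key}="${value}"`)
    .join('');
  const inner = node.children.map(renderToString).join('');
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

renderToString(h('form', { id: 'signup' }, h('label', null, 'Email')));
// => '<form id="signup"><label>Email</label></form>'
```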
> I think you’re using DOM to refer to the entire browser, not just what’s standardized as the DOM.
No, I'm referring to DOM as Document Object Model.
> Things like creating or modifying elements will run at tens of millions per second on an old iPhone
Not in the DOM :)
> but fundamentally a web page is doing more work and there’s no way it’s going to match something which does less.
That's why I'm saying that the DOM is not fast. It's excruciatingly slow even for the most basic of things. It was, after all, designed to display a small static page with one or two images, and no amount of haphazard hacks accumulated on top of it over the years will change that. If anything, they make it much worse :)
There is a reason why the same device that can render a million objects doing complex animations and logic in under 5ms cannot guarantee smooth animations in the DOM. This is a good example: https://twitter.com/fabiospampinato/status/17495008973007301... (note: these are not complex animations and logic :) )
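Part of the slowness is structural: interleaving style writes with layout reads forces a synchronous reflow on every iteration. A small sketch of the anti-pattern (the selector is illustrative):

```js
const rows = document.querySelectorAll('.row');

// Anti-pattern: each offsetHeight read after a style write forces the
// browser to recompute layout, so N rows cost N full reflows.
rows.forEach((el) => {
  el.style.height = '20px';     // write: invalidates layout
  console.log(el.offsetHeight); // read: forces a synchronous reflow
});
```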
> I have a hard time understanding how, say, calling 500 methods per second on a DOM object, e.g. `label.innerText = someSensorValueComingFromWebsocketsVeryFast;`, which needs to trigger various events, callbacks, etc... every time could possibly be faster than modifying a pure JS object at the "incoming message rate" and then blitting that object at the screen refresh rate or something similar ?
Updating a value on the DOM 500 times per second just because you can read it 500 times per second falls, again, in the category of writing wasteful code just because it's simpler. i.e. "don't care about performance, just let the browser work".
If you compare code which is initially bad, then sure, a lot of "solutions" will be better than that.
The appropriate comparison for evaluating the value of using a virtual DOM should not be about that, but only about the part you describe as "blitting the object to the screen". You have the data, no matter how, where, or even, to some extent, how often, and you want to put it on the screen.
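Concretely, the blitting half needs no virtual DOM at all. A sketch, assuming a hypothetical `socket` delivering the sensor messages and the `label` element from the quoted example:

```js
let latest = null;       // pure JS object, updated at the message rate
let frameQueued = false; // at most one DOM write per displayed frame

socket.onmessage = (msg) => {
  latest = msg.data;     // cheap: no DOM touched here
  if (!frameQueued) {
    frameQueued = true;
    requestAnimationFrame(() => {
      label.innerText = latest; // single DOM write at refresh rate
      frameQueued = false;
    });
  }
};
```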
> You don’t avoid using JS at all but only use it where it benefits the user. For example, the users benefit from having those forms having nice validation, dynamic lookups, etc. but they don’t benefit from using a vDOM on top of the far more efficient browser DOM or loading an entire general purpose templating framework instead of just using the DOM to update an existing form.
I am sorry, are you talking about an example todo app?
Because for most apps with complex navigation, data entry, and validation, this is absolutely not true. "Just use the DOM" might be somewhat feasible with web components, but they are years late to the game.
> My point is this is not standard or even typical behavior and so you rarely see it.
It could easily become a standard if someone writes a convenient JavaScript library or composer tool to do exactly that. My point is that the underlying platform is not the limitation.
> What is it about the JS DOM that makes a DOM of N elements modelling a given app’s view have a higher memory footprint than the “DOM” (view state) of a native graphics toolkit modelling an equivalent app’s view?
For starters, because the DOM is a very inefficient representation of an app's view, primarily designed for text and simple forms, with all kinds of extra crap bolted on. Until CSS Grid, there wasn't even a proper layout engine available, and people used styling primitives meant for floating text to do UI design...
A native UI engine can implement drawing a window with a button (the raw widgets, design-wise) in a few lines of code: two rectangles, some edge shading, and some text.
The DOM has thousands of lines for all kinds of contingencies for the same thing...
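For a sense of scale, the native-style version really is a handful of drawing calls. A sketch using the canvas API as a stand-in for a native toolkit's drawing routines:

```js
const ctx = document.querySelector('canvas').getContext('2d');

// "Window": one filled rectangle.
ctx.fillStyle = '#d4d0c8';
ctx.fillRect(0, 0, 200, 120);

// "Button": a second rectangle with simple edge shading.
ctx.fillStyle = '#e8e4dc';
ctx.fillRect(40, 40, 120, 32);
ctx.strokeStyle = '#808080';
ctx.strokeRect(40, 40, 120, 32);

// Label text.
ctx.fillStyle = '#000';
ctx.font = '13px sans-serif';
ctx.fillText('OK', 90, 60);
```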
> A huge, huge, amount of effort has to go into JS engines to make them competitive with any sane language.
Most people consider Python and Ruby "sane languages", and JS has been running circles around them for years and years. Even the most trivial JS JIT beats CPython and MRI/YARV hands down.
> I'm also implying that the attitude of JS and HTML - ignore errors - seems to carry over into users of those things.
JS doesn't ignore errors. It throws exceptions for all kinds of illegal operations.
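For example, each of these throws rather than being silently ignored (run them one at a time, since the first uncaught throw stops a script):

```js
null.foo;        // TypeError: cannot read properties of null
undeclaredVar;   // ReferenceError: undeclaredVar is not defined
JSON.parse('{'); // SyntaxError: unexpected end of JSON input
new Array(-1);   // RangeError: invalid array length
```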
And error recovery is not a reason for CSS's performance problems. Of all the wrong reasons I've heard suggested for CSS's performance issues, that is one of the weirdest.
> Look at all the "shadow dom" and other hacks around terrible performance due to HTML's model.
Shadow DOM isn't really about performance. Are you thinking of virtual DOMs as implemented by React, etc.?
> I don't want a DOM when I'm porting my DAW to the browser, and I'm pretty sure you don't want me to use it that way either.
Actually, we do want you to use the DOM. Use the DOM for everything except the parts that are highly graphical, e.g. rendering waveforms, where it makes sense to use canvas. Most of a DAW is just normal controls, and we want those to be accessible, readable, and potentially extendable.
The discussion was to answer, "And why is everything just so slow!"
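The hybrid shape that suggests is roughly: ordinary, accessible DOM for the controls, canvas only for the waveform. A sketch; the element ids, `drawWaveform`, and the `audioCtx` AudioContext are hypothetical:

```js
// Ordinary DOM for the transport controls: focusable, readable, stylable.
const play = document.querySelector('#play'); // a real <button>
play.addEventListener('click', () => audioCtx.resume()); // assumed AudioContext

// Canvas only for the genuinely graphical part.
const ctx = document.querySelector('#waveform').getContext('2d');
function drawWaveform(samples) { // samples: numbers in [-1, 1]
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.beginPath();
  samples.forEach((s, i) => {
    const x = (i / samples.length) * ctx.canvas.width;
    const y = (0.5 - s * 0.5) * ctx.canvas.height;
    if (i === 0) ctx.moveTo(x, y);
    else ctx.lineTo(x, y);
  });
  ctx.stroke();
}
```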
> The web has a reputation for being slow, thanks to a traditional lack of GPU acceleration as well as 25 years of legacy code attached to its Document Object Model, that are pretty gnarly for browser vendors to deal with. That has nothing to do with JS itself.
That's not exactly true, though, as many discussions here on HN have demonstrated. The web is not slow due to, "a traditional lack of GPU acceleration." The DOM is the execution environment of JS in the browser. This isn't a discussion about JS in any other environment.
While you may have a vested interest in JS, that doesn't change its properties. Please, let's get back to the actual discussion.
> For the case of a big 'single page' JS app where most of the rendering happens from JavaScript fetched on the page, I'd guess this approach won't help that much since you have to fetch the JS from the CDN/cookieless domain anyways.
If you're taking that approach, you probably don't care much about performance anyways (I've never seen a pure JS SPA which rendered fast).
That's a poor assumption that appears to be based on your personal preferences and what you're comfortable with. I personally find that both require some learning, but I prefer the Svelte version.
>"If JavaScript is already "fast", then why the need for something like this "to get better performance today"?
This feels obtuse: obviously JS today is faster than it's ever been, but we want native C performance out of it. Do you have an in-browser VM solution today that can do complex, GPU-powered, full-3D rendering on every platform?
If not, then your VM solution is already inferior to JavaScript and will need significant time, money, and development just to catch up to JavaScript's current level of "fast".
(Using, of course, "game" performance as a benchmark for the vague term "fast")
>"The market is speaking, and it's saying that JavaScript needs to go.
What? How do you rationalize this? Every major browser supports it to its core, and most are working on dramatic improvements to their implementations. Every mobile device supports it natively. Every computer I've seen in the past decade supports it.
The market is speaking? Where? Where is the market speaking? What devices are without JavaScript?
What browsers don't support JavaScript?
I think you are mistaking the "market" for "a small subset of atypical, vocal developers".
> I don't follow that. You can't make a fast JS runtime in your previous examples (the JVM, PNaCl, .NET, etc.).
I included ones you could, such as NaCl. And I should probably also have included the native platforms (Apple, MS, Google), because they're the competition, even if their sandboxing or runtime environments aren't what one might call 'state of the art'.
At the end of the day, it's the two-tier universe that bothers me the most. You're content to foist the JS runtime on everyone but yourselves. Once you implement Firefox entirely as an application running in a JS VM, the argument that JS is good enough might carry some weight.