EDIT: I misunderstood the parent comment; it's not talking about Shadow DOM.
---
> It can act in the shadow too, allowing parallel computing. Is it the great beginning of «in browser javascript multi threading»?
Unless something has recently changed, this is not true.
Shadow DOM elements are still part of the same JavaScript execution environment as the Light DOM; there's no concurrency at that level.
If you want browser multi-threading, we've had WebWorkers for years and they keep getting better (OffscreenCanvas and module support being current examples).
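For what it's worth, here's a minimal sketch of what real multi-threading in the browser looks like with a module worker (the file names and the work itself are made up for illustration):

```js
// main.js: runs on the page's main thread
const worker = new Worker('heavy-work.js', { type: 'module' });
worker.postMessage({ n: 1e8 });
worker.onmessage = (event) => {
  console.log('computed off the main thread:', event.data);
};
```

```js
// heavy-work.js: runs on its own thread; there is no DOM access here
self.onmessage = (event) => {
  let sum = 0;
  for (let i = 0; i < event.data.n; i++) sum += i;
  self.postMessage(sum);
};
```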
Yeah, Shadow DOM was definitely introduced as an immature technology with lots of problems. Nowadays there's also Declarative Shadow DOM[1] which allows you to create Shadow DOMs without JS. It alleviates some of these problems at the cost of introducing a few more of its own. It's like a drunken stumble with two steps forward and one back.
I'm developing a browser extension that injects a little popup window onto webpages using content scripts.
The shadow DOM is critical to make sure that my extension's styles don't overwrite the page's styles and vice versa. And unlike iframes, it grants the ability to use js state management that extends beyond the shadow DOM root element. So, you can inject multiple shadow DOMs capable of "talking" to each other.
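Roughly, the content script does something like this (a sketch; the class names and styling are placeholders):

```js
// content-script.js: mount the popup inside a shadow root so page CSS
// can't style it and its CSS can't leak into the page.
const host = document.createElement('div');
document.body.appendChild(host);

const shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML = `
  <style>
    .popup { position: fixed; bottom: 1rem; right: 1rem; padding: 1rem;
             background: #fff; box-shadow: 0 2px 8px rgba(0, 0, 0, 0.3); }
  </style>
  <div class="popup">Hello from the extension</div>
`;
// Unlike an iframe, this still shares the content script's JS context,
// so multiple injected shadow roots can share state directly.
```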
Yes, and not DOM manipulation. It's the anonymous functions, closures, functional programming, scopes and contexts, etc. Pair it with Rhino and the power of "dynamic scopes" and shared scopes and you have multi-threaded programming.
It kills cross-browser compatibility and kills standards (since they're unreachable, undocumented elements that can handle input and interaction and affect other elements).
Shadow DOM is a mechanism by which a browser can inject a subtree into the DOM, but it won't be visible to JS.
This was often used for things like custom elements (inputs and such). I presume it wasn't very popular to use the shadow dom in WebKit.
As far as I recall, there's also talk about using the shadow dom for injecting foreign fragments into the DOM in a secure manner (web components), although there are probably ways to achieve this without the full shadow dom.
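You can still observe that invisibility today with the browser's built-in (user-agent) shadow trees, e.g. (assuming the page actually has such an input):

```js
// The slider track and thumb of a range input live in a user-agent shadow
// tree that page JavaScript cannot reach.
const slider = document.querySelector('input[type="range"]');
console.log(slider.shadowRoot); // null; user-agent shadow roots are closed to page scripts
```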
This is speculation: The reason Google is moving forward quickly with Shadow DOM implementation is that this polyfill is dog slow. Think about what it takes to make DOM invisible in the browser in a polyfill- overriding every DOM method to filter out shadowed nodes. I've heard reports of 10x slowdowns on DOM operations with this polyfill in place. The kicker? Web Components (specifically Custom Elements) lose almost all their encapsulation without Shadow DOM. Those of us who are betting heavy on Web Components as The Future™ are pretty anxious about this.
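To give a flavour of why that is slow (an illustrative sketch, not the actual polyfill code): every traversal API ends up wrapped so that shadowed nodes can be filtered out on every single call:

```js
// Illustrative only: roughly the shape of what such a polyfill has to do.
// Assume `shadowedNodes` is a Set the polyfill maintains of nodes that
// belong to an emulated shadow tree.
const shadowedNodes = new Set();
const nativeQSA = Element.prototype.querySelectorAll;

Element.prototype.querySelectorAll = function (selector) {
  // Run the real query, then hide anything that belongs to a shadow tree.
  // Paying this cost on every DOM call is where the ~10x slowdowns come from.
  return Array.from(nativeQSA.call(this, selector))
    .filter((node) => !shadowedNodes.has(node));
};
```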
I don't think you know what Shadow DOM means. Shadow DOM is not just another virtual DOM. It is a very specific thing built into browsers, used primarily with HTML Custom Elements.
It is not very popular. React, Vue, etc., do not use the Custom Element spec.
Shadow DOM, in particular, provides a lightweight encapsulation for DOM trees by allowing the creation of a parallel tree on an element, called a “shadow tree”, that replaces the rendering of the element without modifying the underlying DOM tree.
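Concretely (a minimal sketch), attaching a shadow root swaps what the element renders while its light DOM children stay in the document untouched:

```js
// Assume the page contains: <div id="card">light text</div>
const el = document.querySelector('#card');
el.attachShadow({ mode: 'open' });
el.shadowRoot.innerHTML = '<p>rendered instead of the light DOM</p>';

console.log(el.textContent);            // "light text"; the underlying tree is unchanged
console.log(el.shadowRoot.textContent); // "rendered instead of the light DOM"
```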
Shadow DOM is completely optional and you can leave it out if you don't need an encapsulated DOM at runtime. It's not at all welded to the implementation.
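For example, nothing stops a custom element from rendering straight into its own light DOM and never calling attachShadow (a sketch; the tag name is made up):

```js
// A custom element with no shadow root at all; encapsulation is simply skipped.
class PlainGreeting extends HTMLElement {
  connectedCallback() {
    this.innerHTML = `<p>Hello, ${this.getAttribute('name') ?? 'world'}!</p>`;
  }
}
customElements.define('plain-greeting', PlainGreeting);
```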
Did you also see the performance issues on web components that didn't use shadow dom? I noticed this performance wall as well, but I always put it down to shadow dom being heavy. Now I'm wondering if it's the instantiation itself that's heavy.
For state mgmt I've noticed the same problem and was thinking about taking a page out of desktop development's playbook: a window-level message bus that all components subscribe to and filter data from, with every state change published on that bus.
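A sketch of that idea, using the window itself as the bus via CustomEvent (all names here are invented):

```js
// Publish every state change on a window-level bus…
function publish(topic, detail) {
  window.dispatchEvent(new CustomEvent(`app:${topic}`, { detail }));
}

// …and let each component subscribe and filter for what it cares about.
class CartBadge extends HTMLElement {
  connectedCallback() {
    this._onChange = (e) => { this.textContent = String(e.detail.count); };
    window.addEventListener('app:cart-changed', this._onChange);
  }
  disconnectedCallback() {
    window.removeEventListener('app:cart-changed', this._onChange);
  }
}
customElements.define('cart-badge', CartBadge);

// Somewhere else in the app:
publish('cart-changed', { count: 3 });
```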
Is the value proposition here the CSS performance benefits of shadow DOM without needing shadow DOM? If so, it's surprising there's no direct comparison in the link.
I could actually see this being pretty compelling for easy performance wins. It's much easier to adopt a single CSS property than a different model for DOM interaction. The churn in the spec and the various implementations didn't help shadow DOM adoption at all.
It's 2020 and I've been using shadow DOM (plus web components) for 6 years in side projects. When the stack has native support it's great; it runs fast and is a pleasure to develop with. The downside is you have to be willing to swallow polyfills that degrade performance when you don't have native support.
If 2020 isn't the year of shadow DOM, maybe 2022 can be the year of shadow DOM polyfills that don't ruin performance.
My biggest concern with using Shadow DOM for things like this is that it makes it depend upon JavaScript, which I strongly prefer not to do.¹ I know there’s some research work into declarative Shadow DOM², which has the potential to resolve this, but that’s still quite some way off. As it stands, I don’t feel comfortable using Web Components with Shadow DOM for things like this, which is a pity, because it’s a delightful way to work, and this toolkit looks very good in most regards. Instead, I like Svelte’s model which essentially allows you to write things this way, but have it result in light DOM, so that you can still do server-side rendering if you desire.
¹ Requiring JavaScript harms accessibility, especially for web pages (as distinct from web apps). It makes pages take longer to load, be less spider-friendly, and be more fragile when JavaScript fails to load, whether deliberately or accidentally—and simple network failures and the like break things more often than you might imagine. I myself browse the web with JavaScript disabled via uMatrix, mostly because it speeds the web up a lot on average. Privacy improvements are a distant secondary reason. When I encounter pages that require JavaScript, I either give up on them or open them in a Private Browsing window, where I don’t have the uMatrix extension enabled; or if it’s something I expect I may be using more than once or twice, I see about more precise whitelisting in uMatrix.
I think you may be confusing Shadow DOM with Virtual DOM. Shadow DOM is more about encapsulation and isolation of independent DOMs, while Virtual DOM is about keeping an in-memory representation of the DOM to allow for more efficient rendering. Shadow DOM is a browser API, whereas Virtual DOM is implemented in frameworks.
Up until now, Shadow DOMs could only be created with JavaScript, which is a problem if the user has JS disabled or server-side rendering is desired.
Declarative Shadow DOM allows you to set all of that up with HTML markup.
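With that, the server (or static HTML) can emit the shadow tree directly as markup. A minimal sketch of what such server-rendered output could look like (the tag and helper names are made up; `shadowrootmode` is the standardized attribute name):

```js
// Hypothetical server-side helper. The <template shadowrootmode="open">
// element is turned into a real shadow root by the HTML parser, no script needed.
function renderUserCard(name) {
  return `
    <user-card>
      <template shadowrootmode="open">
        <style>h2 { margin: 0; font-size: 1rem; }</style>
        <h2>${name}</h2>
        <slot></slot>
      </template>
      <p>This light DOM content gets projected into the slot.</p>
    </user-card>`;
}
```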
Indeed, I just posted the same comment, but you were first.
The shadow dom solves the CSS modularity problem, and it also comes with better "componentization" of the web.
React is imho not the end solution; the virtual DOM and the diffing of nodes should actually be built into browsers, not into a JavaScript framework...
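As a quick illustration of that scoping (a minimal sketch): rules outside a shadow root can't reach in, and rules inside can't leak out:

```js
// Assume the page stylesheet contains: h1 { color: red; }
const host = document.createElement('div');
const root = host.attachShadow({ mode: 'open' });
root.innerHTML = `
  <style>h1 { color: green; }</style>
  <h1>Green: the page's red rule can't reach in here</h1>
`;
document.body.appendChild(host);
// Meanwhile the green rule never applies to any other <h1> on the page.
```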