The "modernness" in Nana comes at a heavy cost: callbacks are based on std::function, which means that adding a callback, e.g. as on their front page:
btn.events().click([&fm]{
fm.close();
});
will do a dynamic allocation and add an indirection, since std::function does type erasure (so you get a vtable, etc.).
Qt may look less modern, but the cost of doing a `connect` there is far smaller especially if you connect to a member function.
std::function implementations typically have a small buffer optimization, so a lambda capturing a single reference will most likely fit in there. But yes, it's quite heavyweight in general.
For a hot-path callback that might matter, but is the potential slowdown of a few microseconds from the extra indirection really a problem when reacting to button clicks?
Hard to see how that is a real problem. For one, std::function implementations usually do small-object optimization, so if all you are capturing is a pointer there is typically no allocation. Furthermore, this is a setup cost, so it's typically only paid once, and it is a UI, so there is a human in the loop anyway, which enormously relaxes the performance tolerances.
These can be in a fast path, depending on the code, but that's not the point.
Coming from embedded development, the amount of short-lived heap allocations that are so conveniently swept under the carpet with C++ is ridiculous. Even with something like slab allocators and extremely cheap allocs, there are always cases where alloc-free cycles fall on the edge of a slab and become very expensive. The less is allocated during run-time, the better.
When these are combined across the whole codebase, optimizing them away _does_ result in a noticeable speed increase and makes the whole thing behave more predictably. Regardless of their perceived per-call cost.
Web and ECMAScript standards are to blame, not Chromium. Chromium and Node are "just" implementation work, and it's not like Firefox is that much better either!
- more symbols (I guess you never hit the per-object-file symbol limit on Windows, even with /BIGOBJ; when you do, the only thing left is despair).
- most callbacks will have to copy some kind of `this` pointer, so the size of your objects increases and they stop fitting into cache lines
- when you have "normal" inheritance, the vtable of your parent object is generally already in your cache, while here there will be one vtable per object (and no, the vtables aren't optimized to be part of the object itself, so you still get that indirection even with SBO). So goodbye memory coherency.
I've been working on my own callback implementation with an inlined vtable, which has allowed me to get sometimes more than 30% perf improvement vs std::function in callback-heavy code: https://github.com/jcelerier/smallfunction/blob/master/small...
Adding? Not much. Calling? I've worked in apps where there are many thousands of callbacks firing per second, and you've got an 8 ms deadline to show data on screen.
Maybe I jumped the gun. I’m not an expert on GUI programming. In what case do thousands of GUI callbacks need to be called in one second? I would be interested to know.
Well, I'm making this comment because I had to spend months optimizing UIs which updated thousands of elements per second with this kind of mechanism. Needless to say, it doesn't scale, especially when your average user has a Core 2 Duo.
Callbacks are often not the best approach. How about "inverting" the control: have a function which first collects all the elements to act on and then acts on them? This should prevent problems with cache misses, since you perform only one kind of operation (on all elements) at a time. Where this kind of reordering is possible, that is.
(The callback approach is OOP I would say. And as always, OOP prevents us from seeing the better way to structure processing).
problems happen as soon as the elements you want to display aren't known at compile time and can't be stored in a nice, data-oriented approach, but instead come from various plug-ins that the user can download and put as DLLs in the app's folder.
Could you explain that more? "a heavy cost: callbacks are based on std::function (will do a dynamic allocation, and create an indirection)" vs. "Qt: the cost of doing a `connect` is far smaller"?? How does Qt achieve that while using string comparisons at run time? https://doc.qt.io/qt-5/signalsandslots-syntaxes.html
Correct me if I'm wrong, but these widgets are custom-drawn, making the "modern" world completely inaccessible to and unusable by the majority of disabled users.
Don't blame the GUI, blame the screen reader. It's the screen readers that are stuck in the '70s. We have software that can do facial recognition and run on a Raspberry Pi, but for some reason screen readers still need some ASCII text buffer to send to the equivalent of a Speak & Spell.
The screen reader I'm using includes modern accurate OCR. I'm very interested in this, though. How do you suggest a screen reader should recognise the often subtle differences between control types and states across every application and framework when custom controls are so common?
To add to your point, why require such sophistication if it can be avoided? I mean, we could all send bitmaps back and forth to communicate. We could literally just take pictures of hand written notes. Encoded email seems much saner.
At least on Linux, there are already a few different GUI toolkits that custom-draw their own widgets. Presumably those projects all implement some sort of standard accessibility interface -- maybe this one could do the same?
Is it really better to split on semantic/structure and styling? Specifically, do we have evidence that that is the case? I know we certainly seem to like it. Quite often, any supposed advantages of it are never realized. And instead we grow into rather heavy solutions that are so painful to work with we reach for the next framework.
"Is it really better to split on semantic/structure and styling?"
When you need to show the same UI on different devices (desktop, mobile, OSes) and media (screen, high-dpi-screen, printer) you will come up to the need of separate style definition entity almost immediately.
Like this: "Hello, <bold blue size=16>Nana C++ Library</>"
Should be just this "Hello, <em>Nana C++ Library</em>" and styles for the emphasis that can be different for each platform.
Even for the same OS and user session you will need different style systems if you want to support dark and light theming for example.
Yet having a UI nailed down to pixel grids (<bold size=16>) is a path to nowhere.
This is an appealing argument. But most of us don't actually have that. Worse, many attempts have shown greater success with just minimal styling and/or multiple versions.
Another important consumer-end benefit of splitting style/structure is that it allows clients to have custom, accessibility-oriented styles by just swapping out the "styling" content of an application/page, without needing to swap out or inject into the UI structure/code.
The benefits to developers are more "trade-offs", but I think it generally is a better raw framework to work in without significant IDE-assistance to do development.
You are probably right. But also statistically, "nobody" is disabled or needs accessibility features. That's a poor argument for running in the opposite direction from providing them.
Benevolent systems cater to the least powerful user, not the median user, and I think it's good to support benevolent systems wherever possible, because businesses are naturally biased against benevolence.
> with just minimal styling and/or multiple versions.
If an application has simple UI that can use predefined and quite limited set of system UI elements then you probably don't need to style anything. But this is not the case of the library we are discussing.
Apologies, keeping things brief on my phone often backfires.
What I meant is that most of us don't have the same content that needs to be styled differently across different devices. Majority of the time, things are easier. As you mention, use the predefined UI elements and call it a day. It is sadly hilarious how much better that would be for most applications.
For things that do have content that wants custom styling, think games and such, you are almost certainly either inventing your own abstraction language and porting that to different targets, or just creating a new implementation for each target. Yes, some folks have tried to make uber languages where you "write once, run everywhere." Those are almost always subpar compared to "write once for everywhere you care about running."
I'll say that I do agree with the argument on its merits. However, I disagree with the argument on evidence. Statistically, the style/content divide has really just increased the complexity of the code I deal with.
> you will come up to the need of separate style definition entity almost immediately
Not necessarily. As always with software design, there’re tradeoffs involved.
Separate styles work when the stuff you’re styling is content to consume, not rich GUI to edit/produce. For rich GUI that works well on every platform you need different structure, not just different styles.
Separate styles work when the devices are not too different. If they are very different, like PC and smartwatch, you probably need different structure even for readonly content.
There’re frameworks/design patterns, like MVVM for XAML, MVC for iOS and old-school web, where you can replace complete views while reusing the rest of the app.
Very nice library! But it only supports PCs. You have the same structure and the same UX (mouse + keyboard, relatively large screen) on all of them; you only need to adjust for DPI scaling and for window resizing.
About hardcoded pixel sizes, sometimes they’re OK. Apple did that in iOS, worked well for them. When they released iPhone 4, they just doubled the DPI. And when they released iPad, they made developers replace complete views and it made sense because it was much different UX.
I'm not saying the library is good. Apple has a separate UI markup language, and they have a decent UI designer in Xcode for it. That feature is IMO required for a modern GUI framework, and it's missing in Nana.
Also, GPU-based rendering is nice to have nowadays, see MS XAML or Google Flutter.
I once needed to create a rich GUI for an embedded Linux app on top of essentially raw hardware (DRM, KMS, GLES). I compiled a C shared library exposing some wrappers around the NanoVG API https://github.com/memononen/nanovg then implemented touchscreen input, controls, styles and animations in C# using .NET Core. Worked well for my requirements.
Modern C++ refers to C++14/17, etc., and the current recommendations for programming in C++, not to modern style in web design. Are you claiming there is no hope for C++ programmers wanting to write a "simple" desktop application, other than learning web design/programming just to build a functional GUI?
They also lost me at that moment. I can understand a high-level representation like QML and XAML... but mixing them like that into plain C++, and also with this XML/SGML/HTML thingy... smells wrong.
Not saying these aren't important, but you can implement these features once you've got a nice API working. You can't easily retrofit a nice API into an existing project with lots of features. So maybe they chose the right way.
Every nontrivial GUI framework eventually converges on a poor reimplementation of HTML/CSS. If you need a complex modern interface, just use Electron or the like.
This is a great sickness of the web: every website does custom styling on controls, and neglects all but the most primitive UIs. The result is a lowest common denominator: the web only really supports the click and the tap.
Consider the humble popup button: a native one can support mouse input, or arrow keys, or keyboard shortcuts, or type selection, or accessible interfaces. Take it a step further and support scripting.
But if your popup button is a div with an onClick, it won't support any of these. Even if you go the extra mile and reimplement all of them, the user won't even think to try it, because it's a "styled" snowflake. So the web sinks.
HTML is way easier because it does a hell of a lot less.
Yeah, no, it's not "cheaper devs". This is elitist gatekeeping.
HTML/CSS/JS win because at this point there's a mountain of features that you'd need to match in a native framework, it's cross-platform with rendering engine(s) that are mostly platform agnostic, and it's _actually documented_, unlike things like AppKit.
Someone learning HTML/CSS/JS can google their issue and find a solution or guide or what-have-you with relative ease. It's nowhere near the same for any other framework.
I would be quite happy if HTML/CSS/JS could match the tiny feature set of 90's RAD development, and native graphics performance, which apparently never made it onto that mountain.
I think you're right, but tables in scroll views are tricky in every context. If my understanding of the problem is correct, optimal rendering requires both an understanding of the backing data store and an estimation of row heights before loading. What about rendering via dynamic DOM interaction makes this fundamentally more difficult? Is it something inherent to the DOM, or is it simply due to a lack of effort to expose the necessary APIs to do the same kind of work you can do natively to scroll through a table smoothly?
>No, HTML and CSS are document markup languages. They are actually quite bad at UIs.
This was true 10 years ago. Now frameworks like React are at or beyond the sophistication of anything native. Practically every other UI technology implements its interfaces in an XML-esque syntax that is far less expressive and flexible than HTML, and that ends up locking you into a small, constrained ecosystem. CSS has evolved as well, and with GPU acceleration out of the box and things like grid and flexbox, all of the older issues are gone.
Nah, HTML and CSS have lots to learn from iOS and Android layouts, QML and XAML.
Where is Blend for HTML/CSS?
React might be sophisticated, but it will surely never have the performance of native GUIs, or the ecosystem of companies selling UI components.
This looks like a common C++ pattern of having a pair of overloaded methods as property accessors - foo() to get the value of foo for this object, and foo(x) to set it.
This is not about naming or formatting. This is about the prevalence of what are commonly called "setters and getters", that are a fairly reliable indicator of poorly thought-out design. You can learn something here, if you pay attention.
In a sound design, objects more usually get their attributes at construction time, and keep them until they are destroyed. When something needs to change, from outside the object, it is usually better to make another object.
Of course an object can still have mutable state, altered by member functions that do actual, useful work, but they have no use for setters and getters -- they have direct access to the member data that needs to change.
The key here is that the public member functions should be doing useful work for you. Just mutating primitive state is noise. If the object doesn't abstract anything, it isn't earning its keep.
Another indicator of bad design is public virtual functions.
> You can learn something here, if you pay attention.
I'm all for learning and debating, but this comes off awfully patronizing.
> In a sound design, objects more usually get their attributes at construction time, and keep them until they are destroyed.
Immutable objects are nice and I always try to use them when I can, but arguing that everything should be immutable or you have a design problem... that's exaggerated.
> When something needs to change, from outside the object, it is usually better to make another object.
As with everything, there's a trade-off with any solution. Allocating and copying data isn't cheap. And the fact that you just pick one way of doing things and discard anything else as "bad design" doesn't inspire confidence.
> The key here is that the public member functions should be doing useful work for you.
No, the "key" is that you are abstracting access to data. Your setter may do validation or transformation on the data. You may only offer const access to data. You may even do a defensive copy of the data. You may set breakpoints or log access to the data.
> If the object doesn't abstract anything, it isn't earning it keep.
Not all objects need to represent functionality. Some represent data. And mutating data is fundamentally what every program does. Just because you have an aversion to mutating data outside your object and you are willing to pay the price for immutable structures, doesn't mean everyone else is following a "classic anti-pattern".
> Another indicator of bad design is public virtual functions.
It's not just the naming. It's the very existence of accessor methods/properties that are the anti-pattern, or at least code smell. At a minimum, they're as smelly as exposing the object's internal state to external manipulation directly through public fields, and it's even more of a red flag if the accessors are liars and do things beyond simply getting/setting field values. (Obvious exception for DTOs and such, if you don't consider those a problem.)
No C++ malarkey for me!!