I was wondering: why aren't devs making desktop apps any more, especially since everyone is buying laptops and desktops again? With all the tooling out there, you can get cross-platform support, a native experience, and better privacy/security than a webapp.
For example, here are just two frameworks/toolkits that are easy to use for building desktop apps:
React Native for Windows and MacOS: https://microsoft.github.io/react-native-windows/
Avalonia UI: http://avaloniaui.net/
Also, if they do use frameworks like the ones mentioned above, they still tend to release the app only for mobile and then rebuild the code as a webapp, which is slower and lacks the same functionality.
Where does one even buy software anymore? The only stuff you tend to see on the shelves at a brick-and-mortar store these days is shovelware games that I assume Grandma buys to give her grandchildren something to play on the family computer when they come to visit.
Everything else is pretty much a license key you purchase online to allow you to download/register something you grabbed from the company's website.
Agreed—I love Tower, but I'm sticking with version 2. I happily paid an upgrade fee from 1 -> 2, and would be happy to pay another upgrade fee, but I'm not signing up for a subscription.
My hunch is that it's for roughly the same reasons so many people bought disposable cameras in the 80s/90s/00s.
1. Low initial investment: Disposable cameras were cheap to buy and so are most SaaS subscriptions, at least in the beginning.
2. Nearly frictionless termination: Disposable cameras go away when you're done with them, and most SaaS subscriptions can be canceled, and navigated away from, quickly.
3. Ease of use: Disposable cameras were simple point-and-shoot devices and most web apps are constrained by the expressibility of WWW UI/UX, so they often cannot easily be made as complex as their desktop counterparts.
4. No maintenance: Disposable cameras were made to be thrown away (at least from the user's POV), so users didn't have to worry about protecting and maintaining, e.g., a fancy 35mm camera. Similarly, SaaS shifts the ongoing burden of maintenance onto the company providing the service.
For desktop software…
For #1. You have to install it and, possibly, pay an up-front cost to do so. Furthermore, you have to trust that the software is not malicious.
For #2. You have to uninstall it if you no longer wish to use it and you have to trust that it will clean up its tracks and not leave e.g. broken file associations.
For #3. You have to navigate a UI built under far fewer constraints than UIs designed for the WWW, so you may have to invest time learning its ins and outs.
For #4. You (may) have to keep it up-to-date, usually if you upgrade your OS, but especially if there's a security vulnerability present in the version you're running.
This doesn't explain the mobile app boom then. There are more Windows desktop devices than the entire Apple ecosystem combined. Also, there is a higher cost for webapps, since you need to host them on your own instance. If you want an easy deployment method you can use one of the package managers out there that make deployment and upgrading easy. Hell, Windows has winget and Chocolatey now, which make deployment and updates easy.
Steve Jobs is dead, and Apple's primary responsibility is to its shareholders, not its users. Subtly and pervasively hamstringing PWAs doesn't protect users. What would it even be protecting them from? PWAs that work well offline? No, it just coerces developers into writing native apps and distributing them through Apple's App Store instead.
While being locked in to a single app store without jailbreak isn't ideal, I'd rather have a native app on my phone than some lowest-bidder web app that downloads 10MB every time and runs terribly.
If my laptop is connected to the Internet, it's fast. That isn't as universally true on mobile. (Desktops are also more reliably connected to the internet when on.)
But the performance is worse. I point to the example of a small spreadsheet. Open it in Google Sheets, and you are using 1-2 GB+ of RAM plus heavy CPU. Open it in a desktop app and maybe you will use 20 to 30 MB of RAM, with low CPU usage.
But many of us are not very heavyweight docs/slides/sheets users. For 95%+ of what I do just having a web experience on a browser that’s just a login away is an infinitely better experience than the old desktop days.
In general there’s plenty of performance to go around. For many things I don’t see a big difference between an M1 Pro MacBook and 6-year-old machines.
> the performance is worse. I point to the example of a small spreadsheet. Open it in Google Sheets, and you are using 1-2 GB+ of RAM plus heavy CPU. Open it in a desktop app and maybe you will use 20 to 30 MB of RAM, with low CPU usage.
Performance, measured in human units--e.g. time to load, refresh, et cetera--is fine. That's what ultimately matters. The customers are the humans. Not the machines.
Anecdotally, most teams I know building mobile apps are using some sort of cross-platform toolkit, unless native is really required. So you're kinda killing many birds with one stone.
Totally, I'm using Flutter for that reason. But I built a simplified SDK that can also be used from the back-end with any supported language. I'm planning on releasing an Open Source edition (which is the whole thing right now) soon: https://nexusdev.tools/
Are these genuinely useful apps or cynical efforts to clone popular games/apps and monetize via ads? If the latter, volume says more about the business model than demand for novel functionality.
I feel that it doesn't make sense to expect the exact same transition desktop went through to happen on mobile; mobile is just different and has a different set of constraints.
Mobile is about discoverability and money. iOS users are more willing to pay for apps than any other platform, and every established business wants an app so they can be on more devices.
MacOS. "Cross-platform" really just means Mac and Windows, and Mac is too hard to develop for, even if you use a cross-platform toolkit (you have to own a Mac, and you have to install the latest XCode, and you need to figure out how to navigate XCode's bewildering and buggy GUI, and you have to digitally sign your app every time you release a new version).
Meanwhile, making a webapp that will work on all platforms (including Linux, Android, and iOS, as an added bonus) is the default. Just create it somehow, and it'll work on all of them.
For some reason, people love mobile apps and hate using the browser on mobile, but the reverse is true on the desktop: Web wins, installing a native program is cumbersome.
iOS has ~50% market share and an inbuilt payment mechanism that users have a higher rate of using than they do with desktop apps, and is a platform where people have been conditioned that "there's an app for that," compared to one where people have been conditioned "Don't run that exe, it might be a virus!"
Oh, that's simple. It's impossible to make a webapp that works well on mobile. On mobile, you must be able to tap on buttons located along an edge of the screen; those are the easiest points to access. However, in a web browser, doing so will invoke the search box, the menu bar, the back button, or any number of unlucky events. So companies are forced to make a mobile app.
Sure. But what are you running linux on? Pencil and paper?
Seems like, if we follow your argument, the Mac is the best purchase because we can develop for all platforms on it.
Your argument, fundamentally, is "I bought the wrong computer." Or as you put it, "I already bought a PC and I don't want to also buy a Mac". That's very different from "Mac is too hard to develop for because you need a Mac".
Trouble is, you're conflating the distribution mechanism with the payment model. There's no reason a desktop app couldn't/can't implement a pay-as-you-go model; a one-off lifetime/per version payment model isn't the only option out there.
But people HATE pay-as-you-go installed apps. Take Word (MS Office), for example. The way I see it, I downloaded this software, it is fully available on my computer taking up space, but I have to pay you for the key just to use it? Somehow that feels different than going to a website where I don't even have access to the application without paying.
Intellectually, we know that they have to pay to develop it somehow - M$ is still a business. That doesn't help the rage that is instilled by the "free trial has ended" message.
Way more money to be made ensnaring consumers into web-based subscription software.
Also, consumers are forgetting how to buy regular software. The Windows App Store is a thing, but it still feels weird to do that on a desktop/laptop as opposed to a phone or tablet.
I think there's a huge market need for good desktop apps, many folks prefer them to online tools. Bonus: It's never been easier to build a cross-platform desktop app.
I know there's a lot of hate in here for Electron, but it truly makes cross-platform desktop app development achievable for small companies and indies.
What about Flutter? I heard so many complaints before I now hesitate to look at it. But maybe these were just teething problems and things are running more smoothly now?
Flutter is great. Web support is still not all there, but if you're looking at making apps for iOS/Android/Linux/Windows/Mac and you're not worried about it looking 100% native, it's a huge timesaver.
Is an Electron app really a desktop app? I suppose from the user's point of view it could be, but I read the OP's post as asking about native desktop apps, i.e. providing a better experience than a webapp.
To answer (my interpretation of) the question, there are still plenty of good native desktop apps for MacOS. I don't have data to back this up, but I wonder if Mac users are more willing to pay for native desktop apps than Windows or Linux users, which makes it easier for indie devs to support themselves full-time writing these niche apps.
I can't speak for all Electron apps, but Beekeeper Studio is a true desktop app yes.
All the assets (css, html, js) are bundled in the app, nothing loads from the web, it is truly 'local', and works 100% offline. It's possible to change styles based on what OS you're running on, and there's the full suite of native APIs you can call.
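In Electron terms, the offline-first setup looks roughly like this minimal sketch (illustrative only, not Beekeeper Studio's actual code):

```typescript
// Minimal Electron main process that serves only bundled, local assets.
import { app, BrowserWindow } from "electron";
import * as path from "path";

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1200, height: 800 });
  // loadFile resolves against files shipped inside the app package,
  // so nothing is fetched over the network and it works fully offline.
  win.loadFile(path.join(__dirname, "index.html"));
});

// Styling can branch on the host OS at runtime:
const osClass =
  process.platform === "darwin" ? "mac" :
  process.platform === "win32" ? "windows" : "linux";
console.log(`applying .${osClass} styles`);
```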
To weigh in on who pays for software -- I'll let you know once I've sold more copies of my paid version. My guess is that more MacOS-using individuals pay for apps, but more businesses running Windows pay for bulk licenses.
I think OP means native app. JS based Electron apps are noticeably less responsive and suck a lot more battery+RAM than native apps, easily a 10x difference.
Even very well written Electron apps consume a lot more battery and memory and are less responsive than very well written native apps. It's the nature of the technology.
Sure, I understand that. But that's not what the parent was saying. The parent was saying there have been a number of good Electron apps recently. The only Electron app I ever see anyone hold up as good is VSCode, an app that makes a herculean effort to mitigate the latency and performance problems that Electron apps usually have.
If the choice is between an Electron app and no app at all? I would rather the Electron app not lie to my face that it's a real application. No one expects a website in your browser to follow system conventions perfectly or behave like any other app on your system would. That expectation instantly and reasonably changes the moment it has its own application icon and windows, and Electron apps don't give a shit. I would rather not need to have Teams and Slack both installed and chewing up my CPU and GPU at work just because they both decided they're special enough to try and claim all of my resources.
> VSCode, an app that makes a herculean effort to mitigate the latency and performance problems that Electron apps usually have.
Can you point to the parts of the program (source) that implement such optimizations? I'm curious, in particular, about how other editors solve the performance problems.
Large parts of VSCode are implemented in C++ and run as WebAssembly to avoid the problems with Electron. An Electron app that doesn't have parts in WebAssembly can't be as good as VSCode.
Notion, Obsidian, and the like are getting pretty popular and are all Electron based. Postman was good after the Chrome App -> Electron port, though now they've bloated it to do more upsells.
Not only Electron apps, but also any desktop program made with web technologies.
As an example, let's look at Dropbox's desktop programs. On Windows it's made by mixing Qt, Python scripts, and native code (just have a look at the installation folder), resulting in both worse performance and higher RAM usage.
That's totally true. I forgot to write that Dropbox uses Qt WebEngine in combination with HTML + CSS + JS to achieve what could be done with plain Qt and nothing more.
I would argue that a desktop app is an app that does not require a remote server to render the UI. Ideally it would be also useful without access to the internet. In this instance, it should be able to load up, connect to a local db instance running on my machine and allow me to get work done.
As a user the underlying technology does not matter to me. I just need to be able to get stuff done without a WiFi or wired connection.
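Something like this sketch is what I have in mind (hypothetical, assuming a Node/Electron app using the better-sqlite3 package):

```typescript
// Hedged sketch: a desktop app working against a local, on-disk database.
// No network connection is involved at any point.
import Database from "better-sqlite3";

const db = new Database("work.sqlite"); // a plain file on my machine

db.exec("CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, title TEXT)");
db.prepare("INSERT INTO tasks (title) VALUES (?)").run("Write the report");

// Everything here works with WiFi off: the data never leaves the disk.
const tasks = db.prepare("SELECT id, title FROM tasks").all();
console.log(tasks);
```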
I don't have a hate on for Electron apps as such, and use quite a few, but they really are the worst of all worlds, particularly from the security point of view: you have all the ability of an Internet-connected web app to execute arbitrary code, but without any of the work that a full browser puts in to try and sandbox the ability to fuck up your machine.
> makes cross-platform desktop app development achievable
Personal opinion - this has never been not achievable. It just requires making it a priority.
And it's not as if creating and maintaining an Electron app is somehow free. This "not free" aspect is compounded if you care about making it fit in with the rest of the desktop environment. Which, being fair, most developers (PMs, Managers, Execs) don't, even when their customers do.
You're right that it's possible, and Electron development isn't free, but it's significantly easier for a team of two than maintaining native Windows, MacOS, and Linux apps.
One way I think of Electron is that it's fine if it's a primary app I'm using, but when it's a side thing that I have to leave open (Spotify, Slack), it can be annoying. I agree that I'd rather have native apps for better performance and resource usage, but I understand the developer's plight.
There's no developer's plight for Spotify and Slack. They both have tens of millions of clients running. They can afford to make something better and if they had any respect for their users or the planet, they would.
I think one of the biggest problems with modern app development (web and desktop) is how much developers assume that their app is your primary app (and thus can hog resources/screen space/etc). The only apps that can ever reasonably make that assumptions imo are ones with real-time interaction, e.g. games. Anything else can and will be a side app in some scenarios.
There was a great cross-platform environment called Visix Galaxy in the 1990s. You could compile your C++ source into native apps that ran on Windows NT as well as Solaris and HP-UX. It had a fantastic GUI builder similar to what is available today.
It's possibly still around: http://www.ambiencia.com/overviewgalaxytd.php . Expensive as fuck, though; if I understood the pricing page correctly, it's $1.5K for a single-developer no-support license of the cheapest version. (Dunno what the differences between versions are.)
Free Pascal / Lazarus is a FOSS cross-platform language and GUI IDE. I'd recommend trying that instead.
>I know there's a lot of hate in here for Electron, but it truly makes cross-platform desktop app development achievable for small companies and indies.
>Qt is not nearly as easy to use as Electron, especially if you're a small company and need to hire front end devs for cheap.
Sure it is.
>Also Qt GUIs look like garbage, but it's hard to quantify why.
No more garbage-like than what you're getting with Electron, and just as style-able (with arguably superior layout engines). Qt has a number of style palettes; perhaps you're used to applications which chose to use non-native/standard palettes.
I've written apps with GTK, Qt and Electron in the past. Of the three frameworks, Qt is easily the hardest (albeit "the best" for cross-platform native development). I'm not sure what your experience is with it, but I never once felt like it was easier than writing an Electron app.
As a dev with a lot of experience in JS/HTML/CSS, I find Electron makes things super easy, especially when it can be part of the regular build pipeline.
Learning Qt means figuring out a whole new stack and build chain. Maybe the result would be better, but with only a few hours a week I didn't feel like it would have been worth the time investment.
It's super powerful, but with that power comes complexity. I found it too overwhelming to use when building Rails apps. I really missed SequelPro, so I built Beekeeper Studio to scratch my own itch. Others had the same itch I guess!
I've been following for a while, and really love Beekeeper Studio!
But I've always been curious: how do you evaluate the market and your income? Would you advise joining in and developing tools for the desktop? Is it your full-time job? Does it take a lot of time?
I suspect it's mostly a change in the types of applications being written. Most of the trends today are about interconnected sharing of data or aggregating data from across sources (internet applications, for want of a better term). There aren't many new commercial applications being developed because it's inherently a single-user experience, creating or editing data locally. They are still being created in the open source community, however; non-linear video editors have come quite a way in the last few years, as have some interesting learning tools like Anki and IDE stuff like VSCode. I have added 10 or 15 applications to my list in the past years for all sorts of things, like cartoon drawing. But the desktop isn't perceived as profitable for a new application, and a lot of the tools have ended up with obnoxious DRM and charging schemes, like Adobe's. It is just not a hot area of innovation, and it's not being pursued.
I have also found that, as a consequence, a lot of modern languages are very light on GUI support, if they have any at all. Rust, Go, etc. do not have robust GUIs of their own, and you end up bridging across to other languages and working in odd ways, or with very bare-bones implementations like Fyne. On Windows at least, your best bet for a GUI remains either C/C++ with Visual Studio or .NET, and the other choices are a long way behind. We are seeing the rise of the web interface via Node and such as a result; it's cross-platform because it's just embedding a web server as a control, and it does provide a local application, but as people often point out they don't run well, they lack native feel and performance, and it's not a great experience. There are technical issues here too, but I think it's really more that few people are looking at these types of applications, and a few big companies maintain most of the big commercial applications and buy up anything new to include in their suite.
Just to piggyback off your list, the only ones I can think of are the open source, creative applications (Visual Studio Code, Godot, GIMP, the LibreOffice suite of applications, Blender, etc.).
Also, PC games... though a great majority of them are connected to the internet for multiplayer access or DRM purposes, so I'm not sure if that counts. This market seems to be falling off, though. Steam has launched a hardware-specific platform (though I doubt they'll abandon PCs).
Is there data that indicates devs aren't making desktop apps any more?
Maybe it's just the major applications (e.g. video editing, audio editing, image editing) are good enough that it's difficult to compete without substantial resources.
The user experience may be better for a well-made native app than an average web app, but the developer experience for the web has some major advantages: 1) You don't need to ask permission; 2) web apps work cross-platform; 3) updates are easier; 4) fewer obligatory updates to accommodate changes in native OS APIs/SDKs. Etc.
It's a chicken-and-egg issue. People would purchase desktop apps if they were offered. But since they are not, there is no support for them. As developers, IT enthusiasts, etc., we need to bring them to the forefront again.
> However, for some reason, many consumers will happily subscribe to SaaS apps.
Because consumers value not having to install, not having to backup, and not having to sync devices. They want immediate access to the thing they want, when they want it, on the device they want to access it on, with zero friction.
Apparently, broadband internet is sufficiently reliable (at least for download bandwidth) that consumers do not value local redundancy, and they have not experienced sufficient losses from not keeping their data on local devices, or are not cognizant of any losses.
They are. But it just so happens that now we finally have a truly universal write-once-run-everywhere VM (the browser) that makes development a million times quicker and easier, so people build for that.
As a user I absolutely do not want your cross platform 'native' app.
I want a purpose built native app that is close to the metal on that specific platform. Not cross platform. Platform specific, using all the latest apis and features of that platform.
1) Private individuals do ~everything on phone and tablet operating systems. Tellingly, those have lots of native development going on. The main exceptions are PC gaming enthusiasts, and, as one might expect, there are tons of PC games released every year. That's what happened to B2C desktop software.
2) Businesses and other organizations have a bunch of reasons to at least be OK with using web-apps or tightly-integrated-to-online-services Electron shitware, and maybe even to prefer those things to native applications, and other businesses have a bunch of reasons to want to sell them web-apps (subscription vs. single sales, especially). That's what happened to B2B desktop software.
I have a 4+ year old commercial application, Video Hub App, built with Electron and Angular (it's also charityware: $3.50 of each purchase goes to a cost-effective charity).
I don’t know if I’ve seen a cross platform UI development toolkit that produces truly accessible UIs. At least with HTML you can make a solid attempt to support various AT like screen readers.
There definitely exists a market for desktop apps. But they need to be superior (especially in the "sparks joy" department) to the open-source/multi-platform alternatives.
I'm making one as an internal tool to help our manual testers use Playwright… and I'm using Avalonia just because I want to make this thing multi-platform… so far I'm impressed.
However, it's hard. Very hard. Everyone in our industry has moved towards Electron but we've resisted it for a long time. I have always believed that native experience is the best in the long run. The downside is that it takes more time to build the same feature across two different platforms but this is a risk we're willing to take to provide a superior user experience for our customers.
Hi, can you please not block Firefox Relay for sign-ups, which I use to protect my email address? I don't trust your company with my real email address (I don't trust any company) and I think that's fair.
Another request: if you used a blocklist found on GitHub I'd really appreciate a link to it so I can make my argument for why Relay deserves to be spared.
I've passed the request to our developers. In the meantime, if you want to keep your email hidden, you can sign up using Apple and use the anonymous email option.
Because "time to ship", "availability" and "discoverability" are much more important concerns to many business than "consistency with the UI conventions of a given desktop operating system".
* "Time to ship"
Going the "native" way for desktops means doing specific work for Windows X, Windows Y, Windows Z, MacOs Whatever, than MacOs Whatever-Beta-Plus.
Even using the "write one, use everywhere" framework entices such work, because of the dirty secret no one talks about : they don't really work besides simple apps.
The choice of leaving those problems to the Chrome team is _really_ tempting.
* Availability
Popular desktop OSes have not historically integrated "app stores" or a "canonical way to install an app".
Asking them to "Install" stuff is great if they're nostalgic geeks like us ; but real people learned long ago that anything asking the right to "install" on their computer is malware from a shaddy gambling website.
* Discoverability
People don't care about your app. Really, they don't.
But they have Chrome on their computer (it came preinstalled, and they need it for Facebook), so at least you get a chance to grow your user base if you manage to put a link to your website in their Facebook feed or, if they're high-tech users, as an ad at the top of their Google search results (also known as "the internet").
The only exception, as usual, is games, because:
* they need to run native code and can't afford the dozen layers of indirection of a browser
* their users are dedicated nerds who will happily glue toxic chemical products to the CPU of their opened motherboards in order to get one more frame per second
* the install process for games exists; it's called Steam
> Asking them to "Install" stuff is great if they're nostalgic geeks like us ; but real people learned long ago that anything asking the right to "install" on their computer is malware from a shaddy gambling website.
Or just the limitations of permissions. A business team can adopt a SaaS unilaterally. Anything installed involves IT.
This is why the runtime libraries __have__ to be shipped by the OS vendor. It isn't good enough that Windows (or OSX for that matter) includes only stuff for its own OS. It should include a standard that is compatible with every other major platform too.
We don't even really get that with Electron either; all of those apps are 'fat' in that they ship the entire runtime and standard library even for a 'Hello World'.
Ok, I guess "BU" here means "business unity" (I can't remember ever seeing that term), but I can't imagine what "SSO" could be that would crack-down on people signing on internet services.
Oh, ok, that makes sense. But there are a couple of fixes. You create the account with your corporate email. You just need the password.
And that part: you're potentially giving sensitive corporate info to the SaaS app with your personal email, and you risk getting fired for that.
If that one part is real, then SSO isn't adding any power for you (or for the DSO anyway). If it isn't, then the only thing SSO changes is the password one.
If "time to ship" is actually important, than why do most companies go to the work of making three separate codebases for their applications - Android, iOS, and web?
As to "availability", desktop app stores have become more prevalent over time, not less - which means that if this actually were a contributing factor, you would see more developers and people using desktop software, not less.
And as to "discoverability", there's no mutual exclusion between a webapp and a native application - in fact, the two values add. Making both makes your application strictly more discoverable.
All of these reasons seem somewhat disconnected from reality. I'm pretty sure the reason is simpler: companies want to get the highest ratio of revenue/users to development costs that they can (regardless of the negative externalities toward users e.g. performance and UX), and the trifecta of Android+iOS+webapp (combined with the fact that consumers now accept web applications in lieu of desktop ones) allows them to do that.
> If "time to ship" is actually important, than why do most companies go to the work of making three separate codebases for their applications - Android, iOS, and web?
In my experience at a FANG company, it turns out that building and supporting three separate code bases is (counterintuitively) faster and more productive than a unified code base.
We took a stab at this using React Native and C++. It's been a couple years at least since we did this, so a disclaimer: this might not be the case today. I am also no longer with that company.
The developers hated using RN. In 80% of the use cases, everything worked OK. The other 20% of the use cases consumed 90% of the team's time, and we effectively had to hire engineers who were experts in iOS, Android, AND now React Native. Meanwhile, the total amount of time to launch a new feature did not decrease.
Our C++ path had its own problems. Building dependencies across different architectures was a nightmare at times. My organization didn’t really have C++ developers. Surprisingly this didn’t really slow us down, and we quickly worked around the learning curve. This worked for us because we weren’t writing low level system code but just business logic.
When it came time to integrate into our iOS applications and several android applications, we again ran into build issues. We had many copy and pasted CMake scripts, which no one really understood and no one really wanted to maintain and own. This was frustrating enough that we experienced significant turnover, and we lost very valuable teammates.
In hindsight, we took both of these paths inorganically. These decisions were made by senior staff engineers and upper-level management. It pissed off a lot of developers and led to a lot of turnover. I have friends at that company, and they are again thinking about going back in the same direction.
> My organization didn’t really have C++ developers. Surprisingly this didn’t really slow us down
> When it came time to integrate into our iOS applications and several android applications, we again ran into build issues. We had many copy and pasted CMake scripts, which no one really understood and no one really wanted to maintain and own
Seems like the lack of C++ developers did slow you down. C++ developers know how to make proper build scripts and wrap platform-specific code to make their codebases portable.
Sure. What I was trying to convey was that we were able to quickly write business logic and test the logic for functional correctness. We had initially assumed that writing C++ was going to be very difficult, bug-prone, and tedious. Surprisingly, that didn't slow us down. But building and integrating was extremely expensive.
Yeah, C++ isn't really that hard to write if you are experienced with unit testing in other languages. The big hurdle comes from managing the build system, since it integrates with the language and there is no default way to just build and run things; this is very different from most modern programming languages. Which is why, for team projects, you shouldn't use C++ unless you have a person experienced with C++ who can set that up and manage it properly.
Indeed, for mobile there are different scenarios: if your mobile app is the core driver of your service, then your problem is having it on the Play Store and App Store.
The "cross platform mobile" APIs are less than stellar,but maybe you can get by with one codebase. Otherwise two is just a requirement.
On the other hand, if mobile is just an afterthought, then a webapp with a few media queries is the reasonable choice.
> consistency with the UI conventions of a given desktop operating system
Desktop conventions sort of exist for macOS but definitely do not exist for Windows or Linux.
I won’t go into Linux because I don’t want to deal with that energy right now, but Windows is an absolute mess:
- WPF
- WinUI (2 and 3, totally different)
- WinForms
- MFC/Etc
- WebView2 (basically electron but being promoted as first class for some reason)
Even within Microsoft's own apps they can't agree on UI conventions (e.g. are things round or square - Maps and other built-ins use both).
On web you have to totally create UI from scratch, which usually means bootstrap or your company’s design system. Turns out windows desktop is basically the same, because apps have to create their own look from scratch - even Microsoft’s own native apps (Office, Visual Studio, etc) don’t use out-of-the-box UI.
Of course, for new work Microsoft is mostly using Electron or WebView2, which should be telling (Teams, VS Code, etc). On the developer side, they're pushing Blazor, which relies on Electron or WebView2 for the desktop story.
> WebView2 (basically electron but being promoted as first class for some reason)
Rendering a webview using the OS should have much lower disk & memory overhead than bundling chrome + node. In theory, there's no reason it should be higher than the cost of a tab in a browser. I'd love to see some benchmarks though.
> If "time to ship" is actually important, than why do most companies go to the work of making three separate codebases for their applications - Android, iOS, and web?
They often don't. Loads of companies ship on iOS first and Android later. There are also widely used frameworks for building your app in javascript so it is the same code on both iOS and Android just living in WebViews.
Touch is wildly different as an interaction paradigm compared to mouse + keyboard. So that's the reason for the mobile app.
You'll make your life much more difficult doing an Android app in a language that's not Java or Kotlin, or doing an iOS app in a language that's not Swift or Objective-C, so that's why most apps which are the same between Android and iOS are mainly web views.
> If "time to ship" is actually important, than why do most companies go to the work of making three separate codebases for their applications - Android, iOS, and web?
Because if you develop for the web and handle the REST/other APIs well, it can be trivial to work on the mobile frontends that connect the interfaces (whether on Android or iOS). It becomes more a matter of plugging the right pipes.
Whereas the backend for desktop apps across systems may be completely different; you may create more work to handle the idiosyncrasies of different OSes.
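For instance, a single platform-agnostic API client (a hypothetical sketch; the endpoint and types here are made up) can be shared by the web, iOS, and Android frontends, so each platform only adds its own UI:

```typescript
// Hypothetical shared API client: the same code can back a web frontend,
// a React Native app, or an Electron shell, since fetch exists in all three.
export interface Invoice {
  id: string;
  total: number;
}

export async function fetchInvoices(
  baseUrl: string,
  token: string
): Promise<Invoice[]> {
  const res = await fetch(`${baseUrl}/api/invoices`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return (await res.json()) as Invoice[];
}
```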
>Even using the "write one, use everywhere" framework entices such work, because of the dirty secret no one talks about : they don't really work besides simple apps.
Do you have examples? I'm not using many apps I guess.
Because there are barely any dedicated desktop software developers and there are a million web developers.
Now if developer tooling wasn't trash, the transition from one to the other wouldn't be a giant hurdle. But, it is trash, so, we are where we are.
For example I work as an iOS developer but I've done desktop (macOS) software and prefer it. You know how many job postings there are for macOS devs? Zero.
But you're kind of right. Another company would be trying to build this as a web app, but the sensitivity and security required for dealing with healthcare data makes it a no-brainer to build a desktop app.
>but the sensitivity and security required for dealing with healthcare data
I'm curious why you think this. Disclaimer: I have worked for a cloud-based EHR system as well as a HIPAA-compliant mobile/web app. We haven't had any issues with sensitivity or security (obviously following best practices). A password on a physical computer only goes so far; I would venture to say that our systems are *more* secure than a desktop app.
Purely based on the companies I've worked for in the past, and how difficult it was to get approval for web based (SaaS) apps over desktop apps.
Although, in one case they were happy to use an Enterprise web-based solution, but the feeling I got was that there was a year-long buy-in process that ticked all the boxes before that got approved. Everyone's worries put to rest and such.
I build a desktop data IDE that helps you query files, APIs, databases; script the results; and graph/export. It was important to me to be desktop-first so that you can try it out at work without needing to go through the rigmarole required of a SaaS data analysis product. It'd be as hard to get permission for as installing Sublime or DBeaver.
If I got funding or this becomes sustainable, I'd be interested in building native versions.
I'm currently creating a forthcoming desktop app for Windows and Linux, a special kind of project management software. It's in early alpha but already has features that no web app could possibly have such as loading a directory tree with hundreds of thousands of files within 700 milliseconds (and it's not even optimized yet).
I think PC is becoming the new workstation while mobile is becoming the new consumer PC.
So it makes sense to make new native applications only for productivity and development (think IDEs, for example). Of course, it's not a fixed rule.
I use Flutter so I make mobile apps and get desktop apps for free. Otherwise I wouldn't bother investing the effort for Windows/Mac/Linux when we have the web.
That is exactly how MS used to have it. I think they had the numbers on how many people actually said yes, and I suspect it was terrible. So they forced the issue, unfortunately not for the better. I wonder if they are really wasting resources, though. Would a small variance really hurt what they think is going on? Do they really need a million-plus samples? If I remember my stats classes, once you get past 1k or so you're well into diminishing-returns territory.
It's 100% native Objective-C code on the client, plus a mixed Objective-C and Python pipeline for maintenance and backend - around 60 kloc in total. The other major components are a 50 kloc config file and an 800k-row database.
In addition to being fully desktop native, we also don't sell subscriptions.
> why aren't devs making desktop apps any more
We have a unique insight into the market, having seen every Mac app published in the past 20 years. I don't think your assertion is correct; there are a lot of desktop apps around. There are a few thousand apps under active development, and the majority of them are actually native, not Electron (we count 25,000 native Sparkle feed URLs versus 4,400 Electron feeds). I'd phrase the question the other way: why are there still so many native apps when most end users would never spend a dollar on software? I believe there are quite a few hobbyists out there, developing for fun or in hopes of being able to make a profit later on. That further contributes to the expectation that software needs to be free, further weakening an already very weak market.
Just wanted to say that I love MacUpdater! Its focus on usability and getting the app to function with so many types of updaters is what makes it a "classic macOS app" that is always a must-have for me.
I work exclusively on desktop apps nowadays, creating https://lowtechguys.com and https://lunar.fyi and I can say there are a few very annoying issues with this desktop ecosystem.
1. Auto-updating
2. Payments and licensing
3. Friction on trying the app
If you think about it, a web app can be instantly used by loading a URL, is automatically updated with a simple deployment, and you can just slap a Stripe Checkout on it and have payments.
On desktop, all those things have to be solved by hand with various combinations of frameworks/libraries.
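For comparison, Electron developers at least get off-the-shelf pieces for the auto-update part; a minimal sketch using the electron-updater library looks like this (illustrative only - my own apps are native macOS, where something like Sparkle plays this role instead):

```typescript
// Hedged sketch: auto-updates in an Electron app via electron-updater.
// The update feed itself is configured at build/publish time.
import { app } from "electron";
import { autoUpdater } from "electron-updater";

app.whenReady().then(() => {
  // Checks the configured feed, downloads in the background, and
  // shows a notification when an update is ready to install.
  autoUpdater.checkForUpdatesAndNotify();
});
```

Native apps have to assemble an equivalent of this, plus licensing and payments, by hand.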
The macOS App Store for example solves all the above issues, but its sandbox limitations can make an app impossible. Lunar for example is not allowed on the App Store, and my rcmd app switcher is still missing window switching because App Store restricts apps from fetching windows of other apps.
Of course, all those issues can be seen as advantages by other people:
1. Immutable code (no auto updating means the dev can't break or rip out features from a working app)
2. Free apps (licensing being so hard, a lot of small apps are released for free because the devs have capitulated on the payment front)
3. Works offline (installing apps is not that hard really)
There are still a ton being developed in specialty domains (even searching "Qt C++" on job boards returns a decent number of postings), but not much that's consumer-focused; a lot of those users have moved to mobile and the web.
A lot of spaces are totally dominated by entrenched players (Office).
During the 2000s, Windows (and probably Mac, but I didn't use it back then) made it way more difficult than it should be to quickly write GUI apps and allow them to be run on multiple platforms. For obvious reasons, given the time, it was just another way to lock developers and their users onto a platform. Given this environment, the market shifted towards the web as a way of handling this, and now it just doesn't make a lot of sense to revert back to OS-specific APIs.
I write mostly desktop and command line tools for my day job.
While there's benefits to writing a web app which can do everything for a user, the main desktop app I develop and share with colleagues sits on top of user permissions that they have to negotiate themselves.
The reason for it: compliance. Having a self-service tool which just automates work a user can do themselves using their existing permissions, instead of a centrally managed one with a service account used for the work, is less likely to be killed off because of compliance issues, or identified as "shadow IT", or just outright denied the right to exist because I can't get a corporate sponsor to host it.
My personal projects I also prefer writing the same way, with scripts, command-line tools, or desktop apps. For example: I wanted to download every episode of a podcast in one go recently, and discovered that most podcatchers suck these days and the one I used to use is broken on Win10. So I banged out a script in an hour or so to do the job, rather than use some half-functional, bloated mobile or web app.
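The whole thing amounts to something like this (a reconstructed sketch in Node, not my actual script; the feed URL is a placeholder):

```typescript
// Sketch: fetch a podcast RSS feed and download every audio enclosure.
// Crude regex parsing is fine for a one-off script like this.
import { writeFile } from "node:fs/promises";
import * as path from "node:path";

async function downloadAllEpisodes(feedUrl: string, outDir: string) {
  const xml = await (await fetch(feedUrl)).text();
  const urls = [...xml.matchAll(/<enclosure[^>]+url="([^"]+)"/g)].map(m => m[1]);
  for (const url of urls) {
    const name = path.basename(new URL(url).pathname);
    const audio = await (await fetch(url)).arrayBuffer();
    await writeFile(path.join(outDir, name), Buffer.from(audio));
    console.log(`saved ${name}`);
  }
}

downloadAllEpisodes("https://example.com/podcast/feed.xml", ".").catch(console.error);
```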
I think a lot of people touched on it, but when choosing a market for your product as a small team starting out you likely don't choose desktop. Unless your users are primarily on desktop (e.g. pc gaming, software development). These days web and mobile are where the market is at.
It's expensive (time and money) to build apps for multiple platforms even if you use cross platform toolkits like React Native. The tooling just isn't there to make it easy enough to share code between platforms and create compelling products. It generally doesn't make business sense.
I'm working on this problem. Leave a reply and I'll invite you to test when I launch. ;)
Is this true? Desktop apps have far more power to create privacy or security concerns than web apps do. The web sandbox isn't perfect by any means, but it's very good and far stricter than anything else on desktop.
We all get annoyed by websites that set tracking cookies and send telemetry, for example, but there's a lot more telemetry & tracking in most desktop apps that simply gets ignored - Firefox's DLTOKEN id for each download being just one recent example: https://www.ghacks.net/2022/03/17/each-firefox-download-has-....
For this same reason you see a mobile app for so many websites when there is no point, and so much pushing to get you to use them: browsers don't allow that much data collection.
Desktop apps can be worse for privacy if they're unsandboxed. But if the app isn't actually malware then they are better, because they process data locally instead of uploading it to a server to process. Even with the best intentions in the world, those SaaS services are getting hacked left right and centre these days and that's before you take national borders/sovereignty issues and spying into account.
What's needed is to go beyond the browser - we should be able to execute many kinds of apps inside portable sandboxes, not just HTML.
> Desktop apps can be worse for privacy if they're unsandboxed.
Sure, but the point is that you have no idea if a binary is sandboxed before you execute it.
> because they process data locally instead of uploading it to a server to process
No, they do whatever they want, which includes regularly transmitting all sorts of data over the internet, with far less restrictions and visibility compared to a browser.
> these SaaS services are getting hacked left right and centre these days and that's before you take national borders/sovereignty issues and spying into account.
And individual users get hacked far more often, primarily by being tricked into downloading and executing an arbitrary binary.
With app stores you do (or can) know that. But yes a default non-store distributed app must be trusted.
In many cases that's fine because there are lots of ways to get trust other than sandboxes. MS Word or first party Apple apps don't really need to be sandboxed other than as defence-in-depth because they're from trusted brands, for example. Steam doesn't sandbox games as far as I'm aware and in reality nobody cares because Valve keeps scams and malware off the store.
"No, they do whatever they want, which includes regularly transmitting all sorts of data over the internet, with far less restrictions and visibility compared to a browser."
The browser transmits every detail of every interaction by design. Meanwhile it's not actually true that desktop apps constantly upload data. Many don't. The norms are totally different: not only can they operate telemetry free by their nature, even when companies do collect this they almost always let you opt-in or out. That's why telemetry in Windows is a constant source of controversy whereas for websites everyone has given up caring a long time ago - even the "best" privacy approaches on the web still generate detailed server logs of every button press and page navigation.
"And individual users get hacked far more often, primarily by being tricked into downloading and executing an arbitrary binary."
No, not any more. Maybe in 2003 but even many years ago this "tricking" was mostly about piracy. Nowadays? I don't even remember the last time I was asked to clean someone's computer of viruses. The average user has been impacted far worse by data leaks, hacked email and social media accounts, bank phishing etc in the past decade and that's not because desktop apps went away (they did not - games alone are an industry bigger than Hollywood).
I'm not arguing against sandboxes. We should have a great cross platform mechanism for sandboxing non-web apps. But the web vs non-web privacy discussion is way, way more complex than just "web=good".
> Is this true? Desktop apps have far more power to create privacy or security concerns than web apps do. The web sandbox isn't perfect by any means, but it's very good and far stricter than anything else on desktop
This isn’t inherent.
If every program could run in its own container then this wouldn't matter.
There's zero interest by anyone in doing so on the desktop, since the money is in selling services.
The most important reason, I think, is that the web has progressively become good enough for the majority of use cases. However, there are still a few areas where good desktop apps are needed.
As someone who once worked on desktop apps professionally, I've been asking myself this same question a lot over the last couple years. I think there are a few reasons.
- Mobile completely eclipses desktop in active userbase. With the pandemic, desktop has made gains, because people have been stuck at home, but nobody expects that to last forever, and even with the gains, mobile is far more popular. Compound this with the fact that, for years, mobile experiences have been held back because developers were on desktop, even when most of their customers were not. It's difficult or impossible to develop on mobile, and a pain to test on a separate device, so most developers worked and tested on desktop, leading to things being too small or non-functional on mobile. This gave rise to the mobile-first push, mostly from managers and developers who understood the market importance of mobile. In many organizations, I suspect it was such a slog to get everyone on board, I doubt they want to risk undoing that effort by creating new desktop apps.
- Desktop frameworks have not just stagnated, they've degraded. This is most true on macOS, least true on Windows. Even as Apple has invested in updating AppKit, the results have been uneven. There are still some controls using cells, and some controls that were updated to use views, but then haven't been updated to match newer framework conventions. At the same time, new bugs get introduced in every major OS, making desktop apps buggier over time, and forcing app developers to constantly keep on top of things.
Apple has tried to hit this from multiple angles, from running iOS apps on Mac, to SwiftUI, but I think it's telling that SwiftUI works best on Watch, then on iOS, and finally on Mac. The desktop is the lowest priority for that framework. Catalyst and running iOS apps are largely throwing in the towel, replacing desktop development with mobile development, but even then, they're not perfect.
Linux has faced less extreme versions of these issues with both GTK and Qt, though at least these aren't strictly tied to OS version. There are also issues around licensing drama with Qt, and the fact that selling software on Linux is very difficult.
On Windows, Win32 and Winforms remain relatively stable, if somewhat unpleasant to work with. WPF and WinUI seem to always have some issues, not to mention WinUI is limited in what it can do, which is why Microsoft eventually opened their app store up to Win32 applications. But it's not always clear which framework you should be using for a new project.
There are, as you pointed out, always cross-platform frameworks like React Native, but (not to disparage those who are developing them) they just don't match up to true native frameworks once you get past very simple apps. There are issues with cross-platform design, of course, but also with performance. These issues can be overcome in cross-platform apps (see VS Code, Discord), but I don't think I've seen a cross-platform app that was "good enough" except for Electron apps.
- The desktop is in a weird place, design and quality wise. Traditionally, the table stakes have been higher for desktop apps. You needed to have more features and more polish. You needed to be compatible with other desktop software and more hardware peripherals. There's also always been the fact that OS paradigms have traditionally been different. If you ship a cross-platform app that did things the Windows way, Mac users would complain, and vice-versa.
But that's becoming less important as more people migrate OSs more often, and platform conventions are becoming both less important and more homogenized, especially as users who grew up on mobile start using desktops for the first time. That's part of the reason VS Code can get away with so many Windows-style keyboard shortcuts on the Mac.
However, it's still an issue. Just look at the 1Password beta forums right now to see how many contradictory changes are being requested now that 1Password shares a UI design between desktop platforms.
- Important Web apps have gotten to a place where they're good enough. See: Figma vs. Sketch. Performance and polish-wise, Sketch beats Figma hands down, but Figma is good enough, and it makes collaboration much easier than Sketch. It's cross-platform, and since it's a Web app, it's easier to get your collaborators to use it, since all they have to do is click a link you send them.
There's a similar scenario with Google Docs. Native versions of Office are still technically superior, and Office 365 exists, but most college students and younger are using Google Docs. It's free, it works well enough, and it's easy to collaborate. If you try to write an entire book in it, you're quickly going to get bogged down with performance problems. Seriously, word processors from the 1990s outperform Google Docs once you get beyond a few hundred pages, but few people push it that hard. It's not uncommon to see college students with top-of-the-line MacBook Pros that have exactly zero third-party apps or personal files on the device. They're nothing but Google Docs and social media terminals.
Desktop conventions and integrations are still important to a large but slowly shrinking subset of users, but foreign to a large but growing subset of users. It's tough to know exactly how to design a desktop app to appeal to a large enough audience.
- The market. Desktop app stores have tried to push desktop software along the same pricing trajectory as mobile, but have only succeeded in splitting the market. There are lots of Mac users who won't download any software outside the App Store, but the App Store on the desktop sucks. The sandboxing requirements make a lot of software impossible, the pricing is in a downward spiral, as on Mobile, but unlike Mobile, few people actually browse the app store, so it doesn't lead to any discoverability. The story on Windows is similar but less extreme. Microsoft has opened their app store up so basically any app can be on there, but there's not really an incentive to put apps there, and people don't browse it.
There's no good places people go to browse for desktop software anymore, perhaps because of a lack of interest, and that can make deciding where to sell your software and how to promote it a difficult decision.
Maybe at big companies it's hard to convince the team that a desktop application is the way to go, but individual developers can do what they want and many are making desktop apps.
I'm using a new, still-in-development electronics CAD software called Horizon EDA which is a desktop application written in C++. It's nice and snappy and I never have had any inclination to do PCB layout on my phone.
Because Apple and Microsoft and Linux all have their own proprietary UI libraries, and their own Look And Feel, and it's just not worth implementing for both/all unless you are super large. And then, if you are super large, then you actually want to create your own Look And Feel.
So the web is like a second language. It's a second Look And Feel for users, and users appear to be comfortable learning the native UI LAF and the Web LAF. For developers, it's a second language in that it's JS/TS/CSS that works across all platforms.
It's just easier and cheaper to write for the Web LAF than it is to write for all proprietary LAFs and the results are pretty good, sometimes better.
Third-party stuff like Java, Avalonia (cross-platform WPF), and Qt just suck in my experience. Might as well just use Electron.
Because supporting 4 platforms is harder than supporting 1.
Even a boring business application will have users across 4 platforms: Windows, MacOS, iOS, and Android. It’s easier to point them all to a URL in a browser than to debug issues on an end user PC.
> Because supporting 4 platforms is harder than supporting 1.
True, and yet completely irrelevant, because every moderate-size app (and even many small ones) has at least three clients already: iOS, Android, and web (which may then be bundled into a desktop pseudo-app using Electron), and so they already have three different platforms to support and end-user devices to debug, in the exact scenario you said they're trying to avoid.
If you're using Electron, you're offloading much of that work to someone else. You're still effectively doing the whole "write once, run everywhere" thing.
Yes, an alternative answer to OPs question is that, literally, desktop apps do exist. I don't think they literally thought desktop apps don't exist, though.
> supporting 4 platforms is harder than supporting 1
and
> It’s easier to point them all to a URL in a browser than to debug issues on an end user PC
...except that neither of those are valid, because (1) companies already support more than a single platform and (2) they already implement non-web clients for iOS and Android, so the debugging ease argument doesn't apply either.
Companies have already shown that they're willing to invest in n+1 platforms to get more profit - that is, they've already decided that 3 is better than 1, so why would 7 (or 4) not be better than 3?
People get upset if you try to charge them a monthly fee for a desktop app. Yet they are happy to rent a SaaS solution.
Also, lower support costs. Making your app work on everyone's computer is surprisingly annoying, because some of your users will have Trojans, damaged system files, or an aggressive IT department.
>>People get upset if you try to charge them a monthly fee for a desktop app. Yet they are happy to rent a SaaS solution.
So don't. Set a fair price and sell me a license. I don't want you in my checking account. I don't want you looking over my shoulder. I find the privileges which purveyors of web/phone apps have arrogated to themselves to be creepy for the most part.
Then again, I'm pretty sure I've turned into a grumpy old man.
I just founded a company where we're building a cross-platform native desktop app. Perhaps given we're on HN we can scope this to ask why aren't any _startups_ building native desktop apps?
It's something I wonder about as well. We're building real-time performance critical software, so we don't have much of an alternative. Given these constraints, we also ruled out Electron, Avalonia, React Native early on.
We're using Qt Quick which doesn't get nearly the love it should. I was a web developer for 5 years in a past life, and I'm pretty blown away by how pleasant and well-designed Qt Quick's QML language is. One of our team members created Canonic, https://www.canonic.com to explore how the web might look if QML was used as the document markup for the web.
The popular opinion around Qt Quick is that it is best suited for mobile or embedded projects with a dynamic UI, animations, etc. But over the last few years, it has really become a great desktop solution – to the point where Qt put Widgets into maintenance mode and is focusing efforts on Qt Quick across desktop, mobile and embedded targets.
With Qt 6, the GUI is drawn using native graphics acceleration: Metal on macOS, Direct3D 11 on Windows, Vulkan on Linux. This makes it really easy to bring in a texture you're drawing in some other piece of code outside of Qt. As a result, the QtMultimedia framework in Qt 6 is zero-copy on most platforms with the FFmpeg backend: frames get decoded by a GPU HW decoder if one is available, and the resulting texture can be read directly by Qt Quick and rendered by the display server without ever being copied. I don't think there's a single other cross-platform framework out there that achieves the same level of usability, performance, and easy access to platform-native APIs.
Here are just a few non-trivial desktop apps that come to mind using Qt Quick:
I definitely see Qt and Qt Quick technologies as a competitive advantage for us. We can develop the frontend quickly with QML/JavaScript. We get fully graphics-accelerated GUI performance. Our app idles at 100MB of RAM under an average use case, which is basically equivalent to what a running instance of Finder on macOS uses, and a full 1/5 of what Discord, Slack, Steam, etc. use.
If you want to build real-time, high performance, native desktop apps, we're hiring. Email in profile.
In engineering land, I want to point out that 90%+ of the applications I use are actually desktop applications. Perhaps the majority of developers work on web apps (questionable), but certainly the majority of tools I, my team, and colleagues use are desktop apps.
- Editors for code and text
- Presentation and writing tools
- CAD software
- Music software
- Schematic / EE software
- Voice/video chat applications
- All IM clients
- Web browsers for casual reading
- All games
In B2B or management land, most of the applications are, indeed, web-based.
- Google Drive
- Internal business tools @ my place of employment.
Still, I "use" those much less than 10% of the time, and most of them are simply web forms.
Many even prefer email or similar clients to be desktop apps. This seems like an important part of the conversation that is being left out: the more complex the task that the app is handling, and the more memory-intensive, the better fit it is for a desktop app.
That said, as web browsers get better at memory management and safe hardware access (e.g. WebGL / WebGPU) I expect this list of tools that "must" be thick to dwindle.
> Many even prefer email or similar clients to be desktop apps.
"Many" ?"even"? You make that sound like "many, as opposed to most", and "even, i.e. an exception".
I haven't had a job yet -- at least not this century, can't quite recall before -- where the standard e-mail client was anything other than the Outlook desktop client. The Web UI, if in use at all, has always been distinctly secondary, "If the real Outlook doesn't work, try this"-style. (Hmm, how old is Outlook? Must have been something else in the '90s, at least the early ones...? And as a TA at uni before that it was a TUI thingy from IBM, but then OTOH maybe that doesn't count as a "real job".)
> I expect this list of tools that "must" be thick to dwindle.
The reason daily-use tools must be "thick" is that in such tools you want a nice UI with some flow to it, so that it becomes in effect invisible. And fucking Web pages that masquerade as "applications" just don't have or do any of that. So let's hope this mania ends soon and your list, far from dwindling, starts to expand again.
> Still, I "use" those much less than 10% of the time, and most of them are simply web forms.
90% of them are various clunky ticket systems, (at least) one of which will invariably be JIRA. Preferably at least three or four of them in the same <10k staff corporation to do slightly different but basically similar things, which could all be done in any one of them.
All the things mentioned in other comments are valid, but for me the easiest and biggest one is this:
SaaS gives you recurring revenue that a one-time install does not; with desktop software, even if you release a new version, people may or may not buy it. So it is far easier to earn money with web apps than with desktop apps.
For certain complex tasks (e.g. image/video/audio editing), desktop has advantages. But for the bulk of what most people do on their computers, web apps are good enough and much more convenient. Convenience wins.
It's the complete opposite, and it's one of the reasons I prefer to use a comparable web app over a native desktop app. Installing an app requires way more trust than using a web app that runs inside a heavily restricted sandbox.
As for why I also prefer developing web apps: the available tools make web development more productive. Checking changes to the source is near-instant by simply pressing F5 in the browser, and the dev tools in browsers are some of the best you can get for debugging.
As a user, I am unsure that is a positive. There are near-daily stories about how feature X of a program was removed/hidden/etc. for seemingly no reason. Using an offline-only program ensures some UI developer does not get to swap buttons on me for "engagement".
I think most folks allow auto-updates because we want new features and security fixes. So, while you're technically correct, I think the practical result is that it's no different on this axis than a web app.
This is why I am still using an arse-old version of Mailwasher - the newer versions started sharing data with the mothership, including your eMail passwords, just so it could transparently sync that data to their iOS & Android apps.
My passwords are between me and my eMail server. No-one else should get that data, for any reason whatsoever.
Even on iOS, which keeps apps in a fairly restrictive sandbox and puts them through some kind of review process, I wouldn't trust an app like Facebook's or Reddit's not to be gathering some other kind of information to use somehow.
There is a reason Reddit insists on you downloading their app.
Necessarily networked applications, on the other hand, where the entire purpose is to interact with people and data that are geographically distributed, are perfect for taking place in a browser, and yes, I agree that I trust the browser much more than someone's native app. For applications that don't need to be networked, though, give me native.
I think there are other possible motivations, too. Not to say that you are wrong (you aren't), and not that this necessarily applies to Reddit specifically, but there is something to be said for having your service's icon implanted on the user's homescreen, possibly being one of the first things they see every time they open their phone. That builds muscle memory for opening your app, making it that much easier to hop on and start doomscrolling. Having an app also makes it easier to drive engagement using notifications.
I think it's eye opening how unsuccessful web notifications have been on desktop. It's my belief that most people would not turn on notifications for most apps which send them if Android prompted like web notifications (I think iOS also defaults to allowing notifications without prompting for permission, but I'm not sure).
There isn't even an option on Android to block notifications unless explicitly allowed in settings; each time you install a new app, you need to turn off its notifications.
iOS does not allow notifications by default. Apps must display a system pop-up to request permission to display notifications, and the option is there to deny permission.
Whether users have already fallen into the same cycle as cookie consent pop-ups and, more recently, GDPR pop-ups, just clicking "allow" by default because it's the easier option, is a different question.
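To make the contrast concrete, here is roughly what the web's opt-in flow looks like. A minimal TypeScript sketch, assuming a browser that supports the Notification API; the helper name is made up:

    // Nothing is shown until the user explicitly grants permission
    // via the browser's prompt.
    async function notify(title: string, body: string): Promise<void> {
      if (!("Notification" in window)) return; // unsupported browser

      if (Notification.permission === "default") {
        await Notification.requestPermission(); // triggers the prompt
      }
      if (Notification.permission === "granted") {
        new Notification(title, { body });
      }
    }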
I just think the entire concept of desktop-targeted notifications is superfluous, so it doesn't surprise me that it is unpopular. If I want a notification about something, I probably want it to go to my phone. Sure, it is maybe nice to also be able to browse my phone notifications on my desktop, but I don't need two actual "notifications", as it isn't like I don't have my phone near me 24/7. What I'd really want is for the "notification" to happen on my phone and then to be able to remotely browse a synchronized list of notifications (preferably end-to-end encrypted to the running phone, not going through a server) from my desktop... but registering for a notification only on my desktop, or even my laptop, seems like an extremely niche value proposition.
It's obviously true. Executing random binaries from the internet directly on your machine is clearly much less secure than executing a js script in an extremely hardened and restricted browser sandbox.
Which isn't true given all the access the latest HTML5 apps want to your HDD, webcam, etc. They can take over your system silently, and you don't need to click on anything.
> They can take over your system silently and you don't need to click on anything
No, this is totally wrong. All browsers require explicit permission to grant access to hardware resources, they cannot "take over your system silently"... unlike an arbitrary binary.
This is absurd. A binary reviewed and vetted by a Linux distro is really unlikely to contain spyware, unlike 90% of webpages. The web is a well-known security dumpster fire.
Additionally, it's false that desktop applications are not sandboxed. On the contrary, the sandbox implemented around an application can be way more fine-grained than a browser's. Firejail is a good example.
Browsers are behemoths and you can look up for yourself how many vulnerabilities they have and also the SLOC count.
Edit: silent downvotes? Leave it to HN to believe that webshit is more secure than desktop applications. This is material for /r/ShitHNSays
I can't downvote you, but an arbitrary binary is unequivocally a much bigger security and privacy threat than a JS script executed in the browser; this is an indisputable fact. My guess is that you're getting downvoted because you're confidently espousing an opinion that any security expert would easily disabuse you of, if you're willing to listen.
> This is absurd. A binary reviewed and vetted by a Linux distro is really unlikely to contain spyware
What's absurd is your special pleading of a Linux distro review to conclude that arbitrary code execution is more secure than a JS script. This is wrong on so many levels. The comparison is also specious because you're comparing a curated repository to arbitrary JS on the internet. You are also woefully misinformed if you think that "Linux distro review" precludes the existence of your vaguely defined "spyware": arbitrary binaries (unlike JS scripts) have unrestricted socket access and quite regularly emit all kinds of telemetry over the internet.
> Additionally, it's false that desktop applications are not sandboxed. On the contrary, the sandbox implemented around an application can be way more fine-grained than a browser's. Firejail is a good example.
You have no idea whether or not an arbitrary binary is sandboxed before you execute it, so it is capable of literally anything; that is not true of an arbitrary JS script, which is always sandboxed.
> Browsers are behemoths and you can look up for yourself how many vulnerabilities they have and also the SLOC count.
The top browsers are literally the most hardened sandboxes in the history of computing, and far more vulnerabilities are exposed through the uncountable ecosystem of arbitrary binaries than through browsers; many of those are never patched, and when they are, the patches often aren't received by users who don't upgrade. Additionally, the vast majority of browser vulnerabilities are of a modest threat level, with the higher-threat ones usually discovered by sophisticated security research firms and safely patched before ever being exploited in the wild.
> This is material for /r/ShitHNSays
Indeed. Try submitting this thread and see how that turns out for you.
Do you execute shell scripts that are curled from the internet?
Why not? The reason is probably the same one other people have in mind when they argue that they don't want to install desktop apps anymore.
They don't trust those apps, because the security model in place doesn't live up to their expectations. Most users don't use OpenSnitch, SELinux, or Firejail because those tools, honestly, suck for normal users.
We need to make app sandboxing easier, GUI-driven, and as simple as the Android settings app (when it comes to approachability).
The dumpster of config fatigue that is SELinux is just a bad joke, and nobody will ever be able to use that tool correctly without first making thousands of mistakes.
We have to build better profilers that use reasonable sandboxes by default and can generate a config automatically for end users.
The useless tech that is Flatpak/snap/AppImage is pretty much not what it promised initially: nowadays it bundles a microkernel, shared libraries, and everything the app needs ... but cannot even protect my user profile folder from the app I'm running.
How is installing a binary supposed to be more secure than a web page? A binary can do largely whatever it wants, especially for an average person that will grant it all the permissions it requests.
You are making a surprisingly good case for browser apps. With binaries, you're restricted to a tiny fraction of available apps that are carefully vetted to _reduce_ the chance of security issues, whereas with browser apps, you can run any untrusted web page. A malicious binary can spy on pretty much any private and sensitive data on your PC, while a malicious web page can only do some fingerprinting.
No it's not. To use a web app I have to have an internet connection to even start the app. Anything I click on that loads a new resource sends data to the server about what I'm doing. I can unplug from the net, install desktop software and run it without it being able to send anything anywhere.
The average user and use case typically has an internet connection, so you're talking about edge-case scenarios. In a regular scenario, a native app can send whatever it wants about your system to a server, including your sensitive photos and private information.
> Anything I click on that loads a new resource sends data to the server about what I'm doing.
Nothing stops native apps from doing that. Plenty require you to have an internet connection and coerce you into telemetry. The difference is that one can only send data that the browser allows it to have, while the other can read all your sensitive data on disk.
Or living in a rural area with crappy Internet. I've installed wireless Internet in places where the people are wealthy but could only get a crappy DSL line. An Internet connected app, web or native, was a pain in the butt to use.
Mobile users can have problems too. I still come across dead zones in network coverage that render some apps useless and I'm not even outside of town.
I am not at all concerned about an application developer having access to data about my usage of his app. (In fact I have a hard time empathizing with people who are so concerned).
I am extremely concerned about an application developer having the run of other data which has nothing to do with his app, which is always the case on a general purpose computer, except insofar as it’s been made into a walled garden.
Every single thing you just mentioned both occurs and doesn't occur with both web and native apps, depending on what the app was built to do:
- Web apps can require an internet connection, or work entirely without one
- Native apps can require an internet connection, or work entirely without one
- Web apps may or may not inform a server about what you're doing; the same goes for native apps
- You can unplug from the net, install desktop software (a native app), and run it without it being able to send anything anywhere; or it may refuse to run offline; or it may run fine offline and then send everything once you plug back in
- All three of those outcomes are equally possible for desktop-installed web apps
I think your statement on the security of desktop apps is a tad misinformed.
Desktop apps do have to adhere to the same system security permissions that the browser does. Web apps can be even more intrusive than a desktop app because you're constantly sending signals to a central set of servers along with a unique browser fingerprint. You also lose control of updates and would be completely unaware of new tracking dependencies being injected. The data you create with a web app is and always will be property of the company that manages it.
Desktop apps being packaged into Flatpak is a good start toward sandboxing desktop apps, imo, and touches on your concerns as well.
> Desktop apps do have to adhere to the same system security permissions that the browser does. Web apps can be even more intrusive than a desktop app because you're constantly sending signals to a central set of servers along with a unique browser fingerprint. You also lose control of updates and would be completely unaware of new tracking dependencies being injected. The data you create with a web app is and always will be property of the company that manages it.
A browser is just an arbitrary binary; a desktop app is going to be capable of anything a browser is, and much, much more.
> Desktop apps do have to adhere to the same system security permissions that the browser does.
A desktop app on an average user's PC has access to most files on disk, including sensitive data. While a browser has that too, apps running inside a browser's sandbox do not.
Yes, it's possible for web apps to integrate things like FullStory that let devs monitor people like a citizen of Zalem. But local GUIs are doing that too these days. For instance, someone posted a Show HN thread a few months ago of a terminal GUI they built that had FullStory integrated. The author said mea culpa and removed it, since all he probably wanted to do was fix bugs. But my point is that everything creepy browsers are able to do, local apps can now do too -- and then some. On the other hand, local apps can be positively the most secure, and they're the foundation on which big companies are built. But what distinguishes the apps that empower you from the ones that disempower you isn't obvious, so I'll explain how I do it.
The question people always ask is how can we build a technology that makes being evil impossible? Like sandboxing. And that's usually the wrong question to be asking, because it's a people problem, not a technology problem. What we need is transparency. The ability to monitor the monitors. If you can empirically see what a local app is actually doing, then you can say for certain that it's more trustworthy than anything else. So how do we do that?
Well, for starters, here's a 1KLOC tutorial (similar to Antirez's Kilo editor or Linenoise) on how to build your own version of the `strace` command in pure C. https://github.com/jart/cosmopolitan/blob/master/tool/build/... If you monitor the system interfaces, then you don't need to read the source code. It's analogous to watching someone's behavior rather than reading their DNA. Another good tool that might help you secure your digital spaces is Blinkenlights. Right now it only supports static binaries, but if you have one of those and you want to learn more about how it works, then you can watch what it does with memory in real time. https://justine.lol/blinkenlights/
This is the same philosophy behind Chrome's incredible debugger GUI (which is something that sadly local electron-like apps have the power to take away) because transparency is the bedrock of trust. It's always surprised me that more people haven't been focusing on building tools like this. It also makes me sad when certain open source apps (which shall remain nameless because I don't want to be flamed) go out of their way to make strace output incomprehensible.
I believe the rationale here is, sure, you need to trust that the vendor isn't giving you malware, but for most software makers that have existed for longer than a few months, you have a reasonable track record to grant that level of trust. On the other hand, trusting that your data won't be exfiltrated somehow is much harder because of how widespread the practice is. But at least desktop software can, in principle, run without network access. A web app cannot.
Beyond that, you can also, in principle, run a desktop app in a sandbox and/or audit it in some way to observe its behavior and assure yourself it isn't malicious, and only then use it on a sensitive host. A web app, in contrast, can't even be guaranteed to serve you the same version of itself from one second to the next, let alone guarantee that any change made to it between requests wasn't malicious. The sandboxing done by the browser is the only protection you have (short of only ever visiting the web at all from a sandboxed host).
Then get into applications like basic photo processing and document editing. Sure, it can be done server-side via the web, but the only way for that to be possible is for you to upload all of your photos across the network to the server, along with all the threat models that entails. If the software is running on your host, your data can stay on your host. I'm assuming here that pure Javascript is still not really used for compute-intensive applications and things like doc-to-pdf conversion and photo smoothing accomplished via web apps mostly still relies on server-side processing.
Browser sandboxes are among the best in the world. Chrome has an entire team focused on sandboxing the browser nonstop. Every single tab is sandboxed from the next, too.
You mean like “trusting” that Zoom won’t secretly install a web server on your computer that reinstalls itself when you uninstall it, or trusting that Chrome won’t corrupt your Mac when you turn off System Integrity Protection?
If you’re using a web app, your data lives on someone else’s server. Not on a computer you can physically control or even—perish the thought—disconnect from the internet.
Oh cool, the app that handles all my private financial and/or health records runs on the internet and keeps my information in a datacenter in northern Virginia, but at least it can’t get access to the data on my local drive, which by the way there isn’t any because all of that data is in another web app and stored in quite possibly the exact same datacenter!
But there is no need for the app to do that, really; there are APIs like localStorage and IndexedDB in place that allow you to do all this client-side. Browsers have come a long way from being just HTML renderers.
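For example, a small app could keep its data entirely client-side with just a few lines. A rough TypeScript sketch; the "notes" key and record shape are invented for illustration:

    // Persist state in the browser itself: survives reloads, works
    // offline, and involves no server round-trip.
    type Note = { id: number; text: string };

    function saveNotes(notes: Note[]): void {
      localStorage.setItem("notes", JSON.stringify(notes));
    }

    function loadNotes(): Note[] {
      const raw = localStorage.getItem("notes");
      return raw ? (JSON.parse(raw) as Note[]) : [];
    }

(localStorage is synchronous and string-only, hence the JSON encoding; IndexedDB is the heavier-duty option for structured or large data.)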
There's no need for web apps to store all of your data in the cloud, but almost all of them in the real world do it anyway. Users have been conditioned to expect that if they log into the same web app from two different devices that their data will still be there, and this can be a huge convenience, but it's also less private and less secure than what we had with desktop apps.
I would say that consumer purchasing behavior has veered away from desktop software to mobile applications (due to not having another choice of app distribution) and subscription based services.
In the past, an app like Spotify may have been a desktop app that somebody developed and then sold in a retail box to manage a music library and generate playlists; now they give away the app for free to draw people into the subscription model.
It also does not help that Microsoft has been fumbling and rebooting its desktop app story repeatedly for about a decade now. The last bastion of good desktop apps has been on macOS, and now the cracks are starting to form in that dam as well with even developers like 1Password switching to multi-platform Electron and SaaS. (I actually like the new version of 1Password, but it's a canary for the state of desktop)
Microsoft has repeatedly bungled their app store on desktop - no idea how it's still as bad as it is. You would think they might see it as a huge cash cow. Not that I am rooting for a more app-store-based Windows - a standard .MSI is pretty damn easy and convenient. But I think a lot of average end users might prefer something as seamless as the macOS App Store or the Android Play Store - ESPECIALLY if it controlled auto-updates in a convenient and similar manner.
They are starting to get there with big official apps, and you can even install Python from the Windows Store now - but they need to just pick a golden path for developers to make applications for everything from Grubhub to sports gambling, and then turn them loose.
Microsoft doesn't even want you to develop indie Windows applications anymore. They are pushing Windows S mode, where web apps just work and anything else has to come through the Windows app store. The solution to that extra friction is to just go web-based.
IMVHO, because of the sheep effect: the Big & Powerful push for web (cr)apps because that's what they need for profit, and countless others think that following them is a good idea...
Beside that, the desktop was equally abandoned by the giants, reduced to a modern dumb terminal with the generic name "endpoint", just a bootloader for a WebVM improperly named a browser for legacy reasons. So developing for the desktop means developing for something uncertain: classic widget-based UIs have proven to be limited and are nowadays also derailed and crappy, while the modern document UI essentially exists only in Emacs (and in classic, now abandoned, systems) and in web/notebook UIs with an uncertain future. So, sheep effects aside, many think "if I develop a web app today, its code can be combined with or injected into something else tomorrow", while if I choose Qt, who knows what can happen; same for GTK, same for non-portable UI libraries (Windows, OSX), and classic Tk is almost abandoned...
We must rediscover classic desktop tech, but that means rediscovering desktops: not as dumb terminals, but as a single, unified system with everything readily available to the user, like classic Smalltalk workstations or today's Emacs, where the web-app notion of "composability" exists in a far superior form, directly available to the user. Such a move can only be made by a community or a giant. Since we no longer have a real FLOSS community (most are just unpaid voluntary workers for some giant, simply because nowadays most are on someone else's computer, the so-called "cloud"), and the giants have the opposite interest, it's unfortunately unlikely.
Everyone has come back to laptops and desktops just because, with the push to remote work in the COVID scenario, they needed something to really work or study with, and so it became clear that mobile crap is unusable for that. The mere fact that too many still choose laptops (which are essentially craptops), not because they travel but as desktop replacements, clearly shows one thing: people have not learned the lesson. They choose "computers" just because they need them, but most still fail to understand that WFH does not mean putting a craptop, perhaps a 14" one, somewhere in the house; it means dedicating a place, possibly a silent and lockable one (just to avoid the judge's cat, the MP who had sex on a live stream, ...), PER HUMAN BEING, in their own house, with a proper office setup.
We can hope for a real comeback of the desktop if and when WFH switches from an emergency measure, chosen to make companies pay less by shifting costs to workers, into a stable and mature way of working. Perhaps then enough people will realize what they really need, and the demand will be big enough to force a change...
You've got shops which make Mac apps, Windows apps, and both, and it's three fairly different cultures. The "both" shops are mostly large and well established, Macs have a lot of indie devs, and Windows has a deep culture of B2B apps that is chugging along just fine. Not to mention PC games, which are certainly desktop apps, but I consider them in their own category and figure games aren't what you meant either.
People are building desktop software and have been doing that all along. It never went away; people just think that it did. Also, there are mature and solid toolkits for desktop software. Why not use those? I would never use a shit app toolkit with its origins in web or browser applications to build desktop software. I have used applications built with web toolkits, and the user experience and performance are second class.
Not sure I would say they are shit toolkits, but yes, I also generally prefer a native desktop application. For some applications, though, where latency and GUI responsiveness are not super critical, I think it is probably a big advantage to have one web-toolkit-based piece of software that can run on anything - especially for smaller dev shops.
Most desktop toolkits by now support mobile platforms too, so why would you use a web toolkit? If your app needs to be able to deploy on the web, it probably has to be completely rearchitected anyway.
We've been thinking about this and working on it a lot lately. I hope to announce our efforts on HN soon (company is in stealth mode and isn't launched yet).
There's really many different aspects to this that you can't do justice to in an HN comment. But firstly let's observe that web vs non-web isn't deeply fundamental. Steve Jobs himself tried to push web apps on mobile devs and they rejected it. Many other people have tried to push mobile web apps and generally, failed. Users wanted apps built with non-web technologies.
So why hasn't this happened on the desktop? I think it's a combination of:
- No sandbox. In turn this means, IT departments are scared of desktop apps and have to whitelist them but many IT departments are bureaucratic and slow, so this can be painful. This is exacerbated by:
- Poor deployment technologies in general, with a poor understanding of what IT departments need. On macOS there's no software update system out of the box except the App Store, which comes with massive downsides. On Windows, IT departments generally need some sort of consistent package management system, but Windows apps all invent their own, because up until MSIX arrived recently, Windows did not provide what was needed. Most apps are thus painful to (re)package and control. Witness the screams of Windows network admins about Electron - it's a disaster for them. In effect, every Electron app (using Squirrel) installs into the roaming $HOME, meaning that when users roam, massive apps get copied around, slowing down login. This is all wrong, but the Electron distribution systems are unmaintained, so it never gets fixed.
- Cross platform desktop UI toolkits aren't that great IMO especially for devs who want to avoid C++. Your best option is JavaFX, but it's not well known and doesn't have a wealthy tech firm who can subsidize its development (any more). Still, the open source community around it has been growing and the tech/capabilities are very solid.
That said, the alternative is HTML5 which isn't a UI toolkit at all. So, a lot of this is comparing apples and oranges.
- Conflation of unrelated aspects, e.g. a lot of people associate desktop apps with pay-per-upgrade. You can do that with desktop apps, but you don't have to; you can implement subscription pricing too, cf. JetBrains or lots of mobile-first SaaS firms. You actually have a lot more flexibility around pricing because your costs are more or less fixed.
You could go on - the web is an evolved technology with a form of evolutionary robustness that takes a lot of work to even analyze, let alone outcompete.
Nonetheless, I feel strongly that the software industry should:
a. Analyze why we like the web as a technology.
b. Use that analysis to do better than it.
Here's a controversial opinion for you: HTML5 is reaching end-of-life as a technology. Google lavishly funds Chrome and they don't seem to know what to do with that money beyond try and turn HTML5 into a spec for an entire operating system, except it's one that ignores the entire history of OS design. The sheer size of the spec means it's become unimplementable! Not even Microsoft or Apple can fully implement it anymore meaning it's more or less Google's plaything. The openness was one of the big reasons to like the web; IMO that's no longer a competitive advantage especially as many competing platforms e.g. JVM+JavaFX are themselves open source and actually have multi-vendor communities around them, which Blink does not.
Meanwhile, despite the vast size of the spec, the platform itself doesn't really meet people's needs and hasn't innovated on the core tech. It's stuck in a local minimum, which kills progress. As a consequence, many websites these days are layering pre-processors and alternative languages on top that compile down to the "machine code" of HTML/CSS/JS. Trivial example: the popularity of Markdown is more or less driven by the failure of HTML to be an ideal appearance-neutral markup format. Yet browsers can't render Markdown sites natively.
We need new projects that try to build post-web technologies, ones that take the ideas that work (markup, sandboxing, a portable browser etc) and throws out or upgrades the ideas that aren't working.
I believe we had another interesting thread on this recently here on HN.
I think the main reason is that mobile apps often provide a more immediate return on investment, especially apps designed for the consumer space. You can potentially track much more and then sell that data, ping people with notifications wherever they are, and there is a huge userbase of both technically savvy and less technically savvy users.
I’ve had issues where the user has run out of hard drive space and now the app randomly crashes. Or when the antivirus scanning software locks down the DLL and the app hangs. Or the user wants to just make a quick change on their iPad while traveling on a plane.
Yes, I imagine the Windows security model, where things stop working at random for no good reason (like "not enough people use that software"), is an important factor here.
Craft [0] is a Notion [1] (note-taking app) competitor that decided to go desktop-first rather than web-first. Their desktop app is beautiful, but now they are implementing the web app and it is obvious that there is a lot of feature disparity with the desktop version. To me, it's pretty telling of why the smart decision is often to go web-first.
I guess it might make sense for Craft because they are in a very crowded space with a large potential market and being native-first is a differentiator over their competitors. I think in most cases though, it doesn't make sense to go native-first.
Being web-first means that you can ship a consistent UI on every platform using tools that most devs are familiar with.
- You code once, it works everywhere.
- There is nothing to install, no updates to download.
- Lots of web devs means more devs to choose from when hiring.
The accessibility of the web platform makes it a lowest common denominator. Desktop does add a ton of benefits, though. Some of them are intangible, like "feels more solid". But there are a ton of tangible benefits too.
- Easily accessible on the dock or start menu. Once installed then less likely to be forgotten.
- Native notifications.
- More refined UI. Not just one of many tabs.
- Access to native APIs that aren't available inside the web sandbox.
- Alt-tab or Command-tab.
- Quick access, i.e. can live in the menubar/tray or can be assigned to a global keyboard shortcut which pops up the app quickly.
But... Electron can provide all of that! So, it's mighty tempting to start with a web app and then pepper in some desktop functionality later by wrapping your web app using Electron.
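As a rough sketch of how that wrapping can look, here is a minimal Electron main process in TypeScript. Tray, Notification, and globalShortcut are real Electron modules; the URL, icon path, and shortcut are placeholders, and this is an illustration rather than production code:

    import { app, BrowserWindow, Tray, Notification, globalShortcut } from "electron";

    let tray: Tray; // keep a reference so the tray icon isn't garbage-collected

    app.whenReady().then(() => {
      const win = new BrowserWindow({ width: 1000, height: 700 });
      win.loadURL("https://example.com/app"); // the existing web app

      tray = new Tray("trayIcon.png"); // lives in the menubar/tray

      // Quick access: a global shortcut that pops the window up from anywhere
      globalShortcut.register("CommandOrControl+Shift+Space", () => win.show());

      new Notification({ title: "Ready", body: "Now running on the desktop" }).show();
    });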
* Disclaimer: I'm working on this problem with ToDesktop [2] so factor my bias into your comments. :)
Having tried both Craft and Notion, I think your comparison is absolutely leaving out a number of factors in favor of the web-first solution.
Granted, maybe I have my own biases towards native applications, but to not list the performance and native system integrations as benefits of desktop over web seems criminal to me.
>- You code once, it works everywhere.
I realize this may be a controversial opinion, but this was a stupid dream when I was a kid, and it's still a stupid dream now. Write once, run everywhere means you write a piece of software that sucks everywhere.
>- There is nothing to install, no updates to download.
Ah yes, because everyone loves change. Especially unforecasted change right under their feet, with no way to revert to a previous version. No downsides here!
>But... Electron can provide all of that! So, it's mighty tempting to start with a web app and then pepper in some desktop functionality later by wrapping your web app using Electron.
It certainly is tempting, but the easy way out is seldom the right one.
It's very common to see Electron apps overriding or not implementing native behaviors. Right-click menus, menu bar items, common global shortcuts: none of that can be taken for granted in an Electron app, but it mostly comes "for free" in a native app. You can certainly get some of this in a web/Electron app by using native forms and fields and not writing custom CSS/JS monstrosities for everything, but that so rarely seems to be the case these days. Why is Teams so special that its right-click menu shouldn't act like the right-click menu in Explorer or Firefox? Why does Slack use a native right-click menu for messages, but not when I right-click on a channel in the sidebar?
>Some of it is intangible like "feels more solid".
Some of these "intangible" benefits are certainly hard to quantify, but it can absolutely be a death-by-1000-cuts type of thing when behavior you're used to in literally every other application suddenly no longer works in an Electron app. For example, I have a global keyboard shortcut on my Mac to Enter/Exit Full Screen mode. It doesn't work in Notion. Why? Because Notion (and other Electron apps like Slack) call the menu option something different. And yet, if I assign a keyboard shortcut to that different action, it works in Slack, but Notion just silently eats the input like nothing happened. Small things like this are table stakes for applications feeling good, and when one application doesn't work like the rest of your system, it feels bad. This is absolutely exacerbated when you have a website posing as a desktop app, because it's trying to trick you into thinking it's a real application.
Why do users no longer seem to care about desktop applications? It's because most of the people making them stopped giving a shit long before users did, and users are just rolling with the punches. No one is happy that everything is slower now though. Non-technical users may just not know how to explain why they're frustrated with everything.
A lot of people will give a list of technical reasons (e.g. native platforms are "bad"), but in my opinion those tend to be exaggerated to a considerable degree. Cocoa/AppKit, Win32, etc. aren't sexy or buzzwordy, but they are deeply capable, and in the hands of a knowledgeable developer they can save a lot of time by virtue of there being no need to write a bunch of widgets from scratch (or contend with all the caveats and gotchas of a bunch of half-baked widgets from third-party libraries).
I think the biggest driver is actually a shift in software licensing models. Web apps are eminently more compatible with subscriptions and SaaS: unlike with one-time purchases, access is easily revoked or diminished, the software can't be pirated, and user data living in some datacenter somewhere makes it a cinch to keep customers locked in. In short, it strips control from the user while giving more to the developer. This by far has greater implications for developer profit margins than the tech stack alone does.
There are too many platforms now. Back in the day people would buy into the windows or mac ecosystem and it was all they had. You could build only a windows desktop app and sell it and have enough happy customers to make a living. Nowadays people expect your thing to be on their laptop and their tablet and their phone, so in practice you have to be on windows, android and iOS at a minimum, and probably macOS too, and linux for brownie points. And people often have windows and mac devices and expect you to be on both. Oh, and you better sync their data, so you need a server also.
So you need at least a server codebase, a mobile codebase (or two), and then a desktop codebase. Having to build at least three codebases instead of one is why you see fewer solo developers making a product and earning a living from it. And then for that desktop codebase, you can either build it twice for windows and mac, or you can just build it as a web app and maybe wrap it in electron for easy platform integration. The other cross-platform approaches don’t seem all that much more productive than web dev, so most people stick to web technologies for serving the desktop.
Because they unlearned how to do it. Their managers unlearned how to even think about a product being shipped and installed on many different clients that are (gasp) not under our control.
Seriously, it's a loss of skills. Everyone is doing the "single installation target" SaaS nowadays, so naturally most developers lost (or never acquired) the skill to think in terms of installable software. I even met managers that had visible problems to contemplate a world in which different customers would run different versions of our product at the same time.
As a side effect of this loss of skill, software that actually has to be installed nowadays tends to come with ever worse installation methods. It's not unusual to see the installation method for software A make it impossible to install or run B.
Honestly, I don't think this is the reason. I'm a web developer and have more than once chosen the web as a target simply because it made more sense from every angle. There was just no incentive to build a desktop app that wouldn't offer anything the user couldn't get from a web app. There are plenty of use cases where this isn't true, but for a lot of apps it is going to be the case.
This is so true. I have met developers who couldn’t even visualize any application without a server.
At one point, I was building a simple command-line CRM. It was running off a local JSON file. I just had basic CRUD functions and loaded the whole file into memory during execution.
When I explained this, so many of my developer friends kept trying to fit in a server. The concept of no web API, no database, no Docker was so foreign to them. Or they just thought I was lazy for not using all that.
There are many reasons web applications are more attractive. Web applications can't be pirated as the user does not receive the program. Users don't want desktop applications, as it's more convenient to login to a web application with a Google or Facebook account than it is to install a native application. A web application works on any device and most people only have a smartphone.
For me the biggest aspect of the shift is simply that web and mobile apps became the larger standard to follow, with a higher effective install base.
In the 80s and 90s there was definitely interest in consumer software, but the applications often struggled to be useful precisely because information was bottled up on the local machine, and so you had to think in terms of publishing to analog, which in turn made everything consolidate around the few apps that did those things at a professional level.
Once you got into web based experiences the possibilities for consumers skyrocketed since the information transmission could go "all-digital" - you could post on forums, write a blog, buy and sell goods, publish video and so forth. And by doing it web-first you were writing to a standard with broad implementation.
After the rise of the smartphone the picture has shifted again towards experiences that are consumer internet focused, but using the device that most people have, versus the one that has the most power. There are video editing apps on mobile now, and they're actually reasonably powerful, if not quite appropriate for pro work. Drawing and writing is likewise very accessible on mobile with the small investments of a capacitive pen or keyboard. There's a definite sense of never having to leave mobile if you stay completely in the zone of working with standardized apps to publish online.
The desktop itself has been increasingly ceded to open source projects, for better or for worse. The biggest applications of yesteryear have competition, albeit often still far behind in certain categories. The smaller niche ones often get absorbed into a library and therefore are something you can script to taste if you have a little bit of programming knowledge - much like how microcomputer BASICs worked back in the day, just with more package management cruft.
And if you go in this latter direction for your work, then you aren't writing to a standard, you're writing for a user of one and wholly defining your "information life" on your own terms - and therefore you don't need too much in the way of cross-platform compatibility goop or integration polish, and can probably build off of a terminal, browser window or game engine to provide the UI.
I have to wonder if some of the comments here are satirical regarding security being better on the web.
Nothing screams secure like allowing anyone with an internet connection globally to poke and prod at your duck-typed interpreted language that has 91,324 dependencies for a TODO app.
- Most users, regardless of whether they purport to care about their privacy or security, don't fundamentally understand either concept thoroughly enough to appreciate what they're missing.
- Users are lazy (prefer web app over installation process, or even just taking the time to download a compiled portable binary). Devs are lazy ("why make 3 or 4 clients when we can make 1?"), on the basis that humanity is lazy. A desire to preserve energy and "take the easy way" is hard-coded into our DNA. It's ostensibly a good thing in most circumstances, just not this one.
- Younger people (a considerable chunk of my own generation included) are increasingly not functionally computer literate[1]. While virtually everyone is smartphone literate these days, I believe the days are numbered in which the laptop and desktop computer are used by anyone other than students, technology professionals, hobbyists, and serious PC gamers.
I understand that these hypotheses taken together are a pretty bleak outlook for the form factor of the personal computer that many of us deeply cherish and couldn't imagine life without, but we are the exception in broader society at this point, not the norm[2].
- Cross platform UI is hard. (I'm learning to use Avalonia, and that's getting better but for now it feels pretty hard to get into, especially for someone coming from outside the XAML/dotnet ecosystem)
- Downloading an app presents significant friction to users. If you can solve it with a web app, you'll get a much wider reach (though converting that to paying customers might be a challenge)
- Platforms are making it HARD (signing, lots of "run this app from an unsafe source? open anyways?") It's doable if you already have a large, established app but just throwing together little utility apps is much harder than it used to be.
- Cloud Data. Users expect their projects and data to be available on the cloud. Platforms offer good ways to do this (like CloudKit) but there's no cross platform way to do this. Also this means that you're going to have to deal with auth/login at some level, which is a pain.
- No marketplace. If you build a standalone app, with no subscription nonsense, where are you going to sell it? Build your own store page and payment processor?
- The elves have left for the undying lands. The golden days where an app could exist as a standalone thing is just over. Our world is so interconnected that apps need to be connected to lots of other things -- even standalone apps end up talking to a bunch of apis. This interconnectedness is significantly easier in a browser or server-based app, even though it's quite possible in a desktop app.
I wrote and write a lot of software for internal users at my company.
What is easier?
- Test on firefox, ie, chrome, safari
or
- Test on windows 10, windows 11, 3 different versions of osx, and whateverthehell the CEO has installed at home?
Oh, and what about updating? Oh man that's a fun one. I was once part of a company with an auto-updating java installation. It was great, and we spent a non-insignificant amount of time making that not break every time, and it still broke depending on the machine privileges. We also had to make it work on 4 different versions of java because admins weren't willing to upgrade anything without a 6-month cycle.
What feature would I get? Desktop notification? Okay that's one (ish). Oh buuuut then a client would want to run it on their Android. And iPhone. And on that iPad. And what about on their fucking Kindle.
I think I'll stick to web :P And with React and other modern frameworks and sockets, there's very little that my SPA can't do.
It seems there are new apps for GNOME once in a while and it makes me super happy. I am thinking myself of writing one. They look gorgeous with the latest GTK.
I think good desktop apps are infinitely better than good web apps.
However, I pretty much only build web apps. Here's some reasons why:
* Getting pixel-perfect design is SO MUCH FASTER & EASIER on the web (and using web-first tech like React Native, in my experience, has largely just met a stigma of not-really-native and sometimes suffers on perf for the sake of design). [FWIW I've been doing "web" stacks for almost 20 years, but I've also done C# for ~10, Java for ~5, and C++ for ~5]
* Even if you're not doing anything particularly resource-intensive, it's way easier to target a single machine spec (your server) and know everything just works, rather than worrying about resource-related bugs on lower-end/oddball machines, antivirus, hardware issues (out of ram? out of HD space? etc)
* Similarly, shouldering heavy load/cost on a server lets my apps be usable by more people who might not have up-to-date machines
* Barrier-of-entry is significantly lower on web apps: someone can click a link and immediately demo whatever I made, rather than worrying about marketing and landing pages trying to convince someone that it's safe and OK to download/execute a random binary
* Web apps are a single codebase that's also usable on mobile, tablets, and other devices without having to worry about even more build targets as long as your design is reasonably responsive. I know these "native" libs also technically support that, but it seems there's always custom build configuration per device and/or lower-level per-platform code for e.g. IO/native functionality
That said, I wish desktop apps were easier to design for. I could probably get over all the other "downsides" if I could make native apps that looked and felt as smooth & nice as web ones.
I write desktop apps all the time and want to learn how to write more, but the apps I write are (A) for my own use and/or (B) for helping me build my Web site for my startup! So, there is irony here: I write desktop apps to help me build a Web app!
Word to some in the computer industry: Yes, shrink-wrapped, downloadable, installable apps have problems, but a partial solution is end-user programming! Or, all over the world, kids are being taught how to program: Wellllll, they can program and write their own apps, just as macros, scripts, DLLs, or EXEs. And they don't have to go through some install process or worry about computer security! Ah, the security they invade is their own!
But, yes, more with virtual machines, sandboxes, etc. would be good.
One old approach to security was to give a hierarchical file system directory to a user/app and then say: you can do anything you want inside this directory and nothing outside it (or only somewhat restricted things outside, e.g., write-only, read-only, etc.). That's an old idea that long worked great. My understanding is that the Windows file system NTFS (New Technology File System) has some or all of this functionality ready to go as a means of sandboxing.
IBM's virtual machine (CP67/CMS: the control program for the 360/67 plus the conversational monitor system) gave the user a directory and a virtual machine and said: go for it, do anything you want in this virtual machine, even write machine code using privileged instructions, even write and run a whole operating system. It was safe, secure, and worked great. In fact, you could run CP67 on CP67 on CP67; once, as a demonstration, CP67 was run 7 levels deep! The reason CP67/CMS existed was as a tool for developing operating systems, but it also made a terrific time-sharing system. Anything done back in those days would be just trivial to do today: trivial in the amount of code, how much memory it would need, how fast it could start a new machine, how little overhead the virtualization would add, how to have the file system securely offer the new machine what it needed, etc.
Developing a server rendered (traditional) web app is one subjective order of magnitude easier than a desktop one (I did both professionally in the 90s). I believe that mobile is as hard as desktop but I have little direct experience there. SPAs are about as hard as desktop apps. I have some experience on that.
Web apps are orders of magnitude easier to distribute and update.
Hence customers moved to the web and developers followed. The only big market for desktop apps is games IMHO.
Then if I build a web app for myself (I did) I can use it instantly on my phone, my tablet, my laptop. Should I build native apps? No thank you.
On macOS, I feel like I install more new desktop apps than I have the time to play with (thanks Setapp). I barely use web apps day-to-day. When I have to, I will create a wrapper app for them and/or interact with them through Raycast. To me the landscape of new/actively developed apps feels alive and well.
A thing I've noticed lately is that a good chunk of the new apps I install on iOS show up with compatible versions in my Purchased list on the Mac App Store. That's probably due to Mac Catalyst and a little bit due to SwiftUI cross-platform. Most catalyst apps are in the same quality category as decent Electron apps. I still prefer those (both electron and catalyst) over a web app.
We've built a UI builder design tool, in Cocoa and SwiftUI, for the Mac. The content you produce with it is meant to be consumed (using our SDKs) on mobile, but the authoring tool itself is a native desktop app. Of course the limitation of this is our app doesn't run on Linux or Windows, but our target user persona is highly likely to have a Mac.
We've made use of the built-in Document-based apps stuff (NSDocument and friends) in AppKit and it works nicely as a good desktop app citizen.
I've personally wanted to work on a desktop app for years (after growing up during the 90s and admiring the work of many desktop software vendors). I also think that, given how almost all productivity work is done on desktop computers, the native software space is rather underserved.
It's interesting how much of the built-in frameworks on desktop are oriented around a document paradigm, for productivity software. Both AppKit/Cocoa on the Mac and the venerable MFC on Windows are great examples of this.
Actually building a non-trivial desktop app involved our team getting up to speed on the programming paradigms that have evolved over the last 30 years on the desktop. It really is materially different from the mobile apps many of us worked on previously.
Because desktop apps are harder to make money with.
They are easy to crack / pirate.
Also, a monthly subscription is harder to enforce for a desktop app (you would still need a web service somewhere to check whether the subscription is active).
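As a rough sketch of what that check involves, the desktop app has to make a round trip like the following (TypeScript, Node 18+ for global fetch; the endpoint, key format, and response shape are all hypothetical placeholders):

    interface LicenseStatus {
      valid: boolean;
      expiresAt: string; // e.g. "2025-01-31"
    }

    export async function checkSubscription(licenseKey: string): Promise<boolean> {
      try {
        const res = await fetch("https://api.example.com/license/validate", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ key: licenseKey }),
        });
        if (!res.ok) return false; // unknown or revoked key
        const status = (await res.json()) as LicenseStatus;
        return status.valid;
      } catch {
        // Offline: failing open or closed here is a product decision.
        return false;
      }
    }

And anything enforced purely on the client like this can, of course, be patched out, which is the cracking problem mentioned above.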
It is, on the other hand, very hard to pirate software that only runs on some server. It is also quite easy to enforce a monthly subscription if you simply hide the app behind a login screen.
Additionally, your webapp holds the user's data hostage, so the user cannot switch to a competitor.
So basically, I think webapps are a dark pattern. Users of course prefer desktop apps, but for the reasons above there is hardly any incentive to build them.
Will this change?
I think so, yes. Eventually this whole SaaS bubble will burst, because:
- the Chrome filesystem API will make building a desktop-class app as easy as a webapp (see the sketch after this list)
- open-source and crowd-funded software will pay the bill for those making desktop apps, and users will be more than willing to switch.
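To make the first point concrete, here is a minimal sketch of that API (the File System Access API shipped in Chromium browsers). Depending on your TypeScript version, the DOM typings for it may need to come from a package such as @types/wicg-file-system-access.

    // Read a local file the user picks, then write straight back to it
    // on disk: no upload, no download, much like a desktop editor.
    async function appendNoteToLocalFile(): Promise<void> {
      // The browser hands back a handle to the real file, not a copy.
      const [handle] = await window.showOpenFilePicker();

      const file = await handle.getFile();
      const text = await file.text();

      const writable = await handle.createWritable();
      await writable.write(text + "\nEdited in the browser.");
      await writable.close();
    }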
When will this happen?
If we are lucky, within 5 years, but more likely 20 or 30 years.
So, you start with the idea that people prefer making web apps essentially because of "money". Then you conclude that desktop apps will eventually go back to being the favorite, because they'll become as easy to make as web apps. Doesn't that imply that ease of development, distribution, and maintenance are (the?) current major factors?
The economics of web apps is a pretty straight line and may not point to any particularly greedy trend. If you make a web app and want to serve it, someone has to foot the bill (bandwidth, storage, security, maintenance, etc.). Also, people will keep asking for improvements and fixes. If you want to offer nicer things, you may want to hire a UI/UX dev. If the app becomes popular, at some point you'll have no choice but to generate some income. That means you can finance the app through either donations (which you can pull off if you're super popular, like Lichess), ads (if you're reasonably popular, like Photopea), or subscriptions (if you're in a niche, like many of the one-person SaaS businesses out there).
Also, your target audience may express its preference for one monetization model over others. If you build a web competitor to the Salesforce suite, you might be surprised to see many of your prospective users frown when they ask how much it costs and you reply "donations".
I sure hope so. Because if not, that will mean everything is on "the cloud" -- and the main thing to know about "the cloud" is that there is no such thing, it's just someone else's machine. And if everything is on that, the owner of that machine will in practice have become Big Brother. It doesn't matter if there's just one of him or some oligarchy of a few of them.
Whether it be the Federation or the Empire; MicroGoog, OraFace, or Applazon; or any combination thereof -- Big Brother will inevitably be evil because A) power corrupts (where is "Don't be evil" now?), and B) the whole idea of a "Big Brother" with absolute power is evil.
Web apps are just better. They run on anything that has a web browser in it. Web devs, as well as users, don't have to deal with many of the problems related to native software.
I would like to highlight a valuable insight from phtrivier:
"But they have chrome on their computer (it came preinstalled, and they need it for Facebook), so at least you get a chance to grow your user base if you manage to put a link to your website in your facebook feed, or, if they're high-tech users, as an ad on the top of their google search result (also knows as "the internet".)"
As a developer of a PC-oriented app, you have basically two options:
a) desktop app
b) web app
Option a) is obviously much better in terms of UX, performance, complexity, etc. You have (had) tons of decent options, you can choose the language you like, you have native support for many OS operations, and so on. But the downside is distribution – you have to compile Windows/Linux/Mac versions for different platforms (32-bit/64-bit/Intel/ARM/etc.). It's painful, hard, and slow, and whenever you have a tiny update, you go through this whole process again.
Option b) essentially allows you to distribute the app immediately. Your clients just need to open a URL and that's it. In 80% of cases this is exactly what's needed. All the bells and whistles of distribution platforms like the App Store are just a burden for the majority of apps from small/medium software shops. Of course, you'll have to sacrifice a lot - performance and safety, most of all. Even worse, as the web ecosystem is just a huge pile of hacks on top of hacks, you're forced to use the worst "framework" for UI development: literally a typesetting engine from the 80s (HTML), some hackish global-classes thing for styling, and a deeply underthought scripting language called Mocha (renamed to JavaScript later). So the quality of web apps is typically many times worse and the dev experience is terrible, but it's the new norm these days.
Fortunately, there is progress. We have Flutter these days, which is the best thing that has happened in the UI development world in the past 20 years. I build decent desktop apps with it and get native iOS/Android and web versions as a bonus.
This is exactly it. To make matters worse, the fraction of developers capable of building GUI tools is decreasing. You may have an awesome cross-platform GUI app in C++ Qt4 but good luck finding skilled developers willing to spend their time maintaining it.
How about: MacOS and Linux are hugely overrepresented among developers, but all the customers (who aren’t on mobile) use Windows.
We don’t want to switch to Windows ourselves, and we also don’t want to foreclose on the Windows audience.
Finally I don’t think it’s the case that very many of today’s web applications would even make sense as single-user desktop applications. They are fundamentally structured ways of communicating with other people, not of interacting strictly with one’s own data. The server part is essential.
User here (self-employed computer technician); I'm not a programmer.
I have a beautiful little accounting program written for macOS called Corona. It took me two years to find it, because all of the web app-based accounting software crowded it out from visibility. My needs are simple and I don't need to share that data with any other device.
Definitely don't want my company accounting data held hostage by anyone other than me. No way to know if the hosting company is selling my data behind my back, no matter what they say in print. Besides, I only had to pay once for Corona, vs. paying constantly for cloud based SaaS. QuickBooks is another story...
For me, in this case, desktop software is much better than a web app. My laptop is older and Keep It (native app) is far better in performance and usability than Evernote (v.10x Electron app) was. Not everyone has a newer laptop.
Our primary application is a CRUD-ish desktop application, serving a niche. We have large corporations with hundreds of active users down to single-person entities.
The key factor is that nobody really wants to host anything anymore. Server updates, firewalls, backups, user access management, SANs, networks... nobody wants to deal with that stuff. They just want to enter their data, send and receive some messages, and get some PDFs.
So, we're trying to get a cloud-based version of our application going. Certainly won't be easy given the abysmal state of web app UI design compared to what we're used to.
It's easier to develop and distribute.
The web has by a long shot the richest ecosystem for UI development.
There are tons of comparatively cheap developers available.
Browsers leak the least amount of the underlying platform through.
Part of the reason is also that cross platform alternatives all come with massive caveats. The ecosystems are tiny, except for maybe Qt. Their cross platform abstractions are very leaky. They are often far more difficult to use.
Distribution of desktop apps is difficult and users are content with web apps. Asking someone to run your untrusted code is a big ask. Big downloads and lengthy installation procedures further raise the bar.
Users also don't expect desktop apps anymore. Not having a mobile app can seriously inhibit your product, but if it works mostly fine as a web app from a desktop, or in a wrapper like Electron, few will complain.
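For a sense of scale, a "wrapper like Electron" can be as small as the sketch below; the URL is a placeholder for whatever web app is being wrapped.

    // main.ts: a minimal Electron wrapper around an existing web app.
    import { app, BrowserWindow } from "electron";

    function createWindow(): void {
      const win = new BrowserWindow({ width: 1200, height: 800 });
      win.loadURL("https://example.com");
    }

    app.whenReady().then(createWindow);

    // Platform convention: quit on close everywhere except macOS.
    app.on("window-all-closed", () => {
      if (process.platform !== "darwin") app.quit();
    });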
For the Apple ecosystem, SwiftUI is looking pretty good. I wrote an app that works on iOS, iPadOS, and macOS by taking a ten hour online class to learn what I needed. Swift is also a very robust language and platform.
Right now, more users have switched to doing things on their phones and tablets, so that is what developers are building.
If enough people are willing to buy a desktop-type app, then developers will sell it to them. But I think the mobile ecosystem offers a better range of possibilities -- you can charge for your app, you can go ad-supported, or you can give your app away (and have in-app purchases or extract value another way). With desktop, you have to charge or give your application away -- there are no ads or in-app purchases.
“In-app purchases” are kind of common in desktop apps. They have a basic free version which constantly nags you to buy a subscription or the premium license which then unlocks more features. And lots of desktop apps have ads.
I personally prefer desktop apps. I find most of them smoother, and more intuitive given that you get used to the controls/widgets of a particular OS and most apps will have some similarities in their GUI. But web apps are so much easier to handle, and you get less of the headaches of OS compatibility, package management, dependencies, security, etc.
First of all: it's not just a dichotomy between Windows and MacOS.
In fact, even if their share is still very low, there is a significant number of other operating systems out there, used by hundreds or thousands of users, and those OSes are often covered poorly or not at all by cross-platform tooling.
Linux is the most prominent example, not only in standard Linux distributions, which are highly adopted at least by developers and tech enthusiasts, but even in machines like the Chromebook, which is in fact a special Linux distribution of its own.
The web, on the contrary, is very well supported by any and every operating system out there, because it would be a big shame not to support the web these days. So web apps come in.
As for cross-platform tooling, it is often slow and poorly supported in special use cases and scenarios, so in practice you will end up needing to rewrite the desktop app natively, which means rewriting one hybrid app into at least three native ones -- not really a deal I would make.
For a sample use case of when not to build hybrid apps, look at the Microsoft Teams app, which is slow and poorly designed and developed. This happened because a typical native use case was ported to the web; it works well enough in the browser, but the underlying cross-platform framework is not designed to support those use cases, so it really slows the app down.
Last but not least, you still have to make a web app even if you make a desktop app, because marketing; and you surely still need a mobile app, for the same reason. So the webapp is, for most use cases, the first thing you make and the most universal app you can distribute to your users.
So this is, IMHO, the motivation around the lack of native (really native) desktop apps. These are my 2c.
Is any explanation needed beyond "You really rake in the cash with SaaS and nobody pirates it"?
It's not like devs own the companies. And even if they did, SaaS is a steady paycheck and a load of far more interesting problems that devs seem to love.