Can anyone fill me in on how this compares to Java's Project Loom approach, where they have said a hard no to coloring functions? What are the advantages and disadvantages of async/await vs. introducing a green-thread-like approach to these problems?
Implementing async/await is only a compiler change, so it can be added rather easily on top of almost any language: the compiler transforms the code into a state machine.
Coroutines (Java Loom's, Go's, Scheme's, etc.) require being able to serialize and deserialize part of the stack (move the stack onto the heap and vice versa), so they're hard or impossible to implement on unmanaged runtimes like C or Objective-C.
In C, you can take the address of (a pointer to) something on the stack using &, but with a coroutine mechanism the addresses of parts of the stack are not constant.
If you have a managed runtime, you rewrite those kinds of pointers when you copy parts of the stack back and forth.
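To make the state-machine point concrete, here is a hand-written approximation in Swift (purely illustrative; the real compiler lowering looks different): locals that live across a suspension point become stored state, and the body is split into steps between suspension points.

```swift
// Hand-rolled sketch of what "compile async/await to a state machine" means.
final class DownloadStateMachine {
    private enum State {
        case start
        case suspendedAwaitingBytes(requestID: Int)   // locals kept across the await
        case finished(byteCount: Int)
    }
    private var state: State = .start

    /// Runs until the next suspension point (or to completion).
    /// `resumeValue` is the result delivered by whatever was being awaited.
    func step(resumeValue: Int? = nil) {
        switch state {
        case .start:
            let requestID = 42                         // code before the first `await`
            state = .suspendedAwaitingBytes(requestID: requestID)
        case .suspendedAwaitingBytes(let requestID):
            let bytes = resumeValue ?? 0               // code after the `await` resumes here
            state = .finished(byteCount: requestID + bytes)
        case .finished:
            break                                      // nothing left to run
        }
    }
}
```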
async/await is more flexible in some sense. If you run the tasks on a thread pool, it's mostly equivalent to green threads.
But if you run the tasks on a single thread, you get cooperative multi-tasking. You can access mutable shared state without using locks, as long as you don't use "await" while the shared state is in an inconsistent state.
For user interfaces this is a huge advantage: you can run all your code on the UI thread, but the UI stays responsive while awaiting tasks.
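As a rough sketch of that UI-thread pattern (hypothetical names, with Task.sleep standing in for real work): mutations need no locks as long as the shared state is consistent whenever the function suspends.

```swift
import Foundation

@MainActor
final class Feed {
    private var items: [String] = []
    private var isLoading = false   // shared mutable state, no lock needed

    func reload() async {
        // Freely mutate: nothing else runs on this thread until we suspend.
        isLoading = true

        // Suspension point: other UI-thread tasks can interleave here, so the
        // shared state must be consistent *before* this await.
        try? await Task.sleep(nanoseconds: 200_000_000)   // stand-in for a network call

        items = ["a", "b", "c"]
        isLoading = false
    }
}
```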
How does UI code work in Loom if you need, say, a single render thread to issue GL commands? Wouldn't you need a native thread (making Loom moot), or is there some other trick at play?
If you have multiple virtual threads issuing non-atomic commands that must be run together and not interwoven, how does the scheduler know when it's ok to yield one thread to another without explicit support?
How can you set a shader and then draw a mesh each in two virtual threads without possibly interweaving their execution, for example?
>For user interfaces this is huge advantage: you can run all your code on the UI thread; but the UI stays responsive while awaiting tasks.
... assuming any other tasks/coroutines running on the UI thread are being cooperative and not doing bad things like waiting on synchronous functions or otherwise hogging the UI thread too much. Done right, it's a big performance and maintainability win over multithreading, but when done poorly, it can result in large variance in latency/responsiveness of any individual task.
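A small sketch of the failure mode being described (illustrative names only): a suspending call keeps the UI thread free, while a blocking call stalls every other task scheduled on it.

```swift
import Foundation

@MainActor
final class Dashboard {
    var status = "idle"

    func refresh() async {
        status = "loading"

        // Cooperative: this *suspends*, so the UI thread is free to run
        // other tasks (and keep the UI responsive) while we wait.
        try? await Task.sleep(nanoseconds: 500_000_000)

        // Uncooperative: a synchronous wait (or any long-running synchronous
        // call) holds the UI thread hostage; every other task scheduled on
        // it is stalled behind this line.
        Thread.sleep(forTimeInterval: 0.5)

        status = "done"
    }
}
```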
async/await is really just another formulation of linked stack frames, where control is explicitly yielded. At the end of the day, the thread stack state needs to be reified somehow, and all of these approaches are equivalent in power. The only difference is that now you have a type system that delineates between functions that can and cannot be shuffled between kernel threads.
The benefits that Go (and potentially Loom) provide are with the scheduler. When Go code calls into other Go code, it's fast because preemption points can be inserted by the compiler. The goroutine is parked when it is blocked (on I/O or some foreign function), and in the slow path this logic is executed on a new kernel thread.
Although the Go approach involves more overhead in the slow path, wrangling blocking code to work with the scheduler has cross-cutting implications for library design. That is, I don't have to worry that a library I import that does file I/O will pin my goroutine to a blocked thread by making a blocking syscall.
Async/await provides a mechanism for explicit context switching between threads. This is useful when you might need to run code on specific native threads or other native tasks outside the runtime.
It seems like it would be much harder or impossible in languages that work to hide these details. How do you go from implicit blocking to explicit native thread usage?
At least Loom seems to be keeping native threads so Java code can simply fall back to that, I guess.
One way to solve it is to introduce a concept like a nursery (a la structured concurrency) and when setting it up you instruct it to create a new thread that is only used for those particular tasks.
This of course requires a scheduler that can handle hierarchical control of execution rather than an undifferentiated mass of tasks, but nurseries provide other benefits as well.
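Swift's own take on the nursery idea, task groups from the structured-concurrency proposal, captures the hierarchical-control part (though not the dedicated-thread part). A rough sketch, assuming the withThrowingTaskGroup API and Foundation's async URLSession:

```swift
import Foundation

// Each child task is owned by the group: the function cannot return until
// they have all finished, and errors/cancellation propagate through the
// hierarchy instead of leaking into an unstructured pool of tasks.
func fetchAll(_ urls: [URL]) async throws -> [Data] {
    try await withThrowingTaskGroup(of: Data.self) { group in
        for url in urls {
            group.addTask {
                let (data, _) = try await URLSession.shared.data(from: url)
                return data
            }
        }
        var results: [Data] = []
        for try await data in group {
            results.append(data)
        }
        return results
    }
}
```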
In Loom we can make some trade-offs based on our knowledge of the Java stack and the standard library.
1. We know that no JVM frame contains a raw pointer to anywhere else in the stack.
2. We know which stack frames are JVM frames, and which are native frames.
3. We know that there are unlikely to be native frames in the portion of the stack between the point where we yield control and the point we want to yield to.
4. We can change the standard library to check if we are in our new lightweight thread or not and act appropriately.
Knowing these, we can avoid function coloring, push the complex management of green threads into the standard library, reuse existing task-scheduler code to manage these virtual threads, and work to make thread-local variables etc. work seamlessly with this new concurrency abstraction.
This puts us in a very different design space. We can move a portion of the stack to the heap when a thread is unmounted, and we can change things about the GC and other internal systems to make them aware of this new thing that can be in the heap.
This type of approach would be much harder in a language like Swift that tends to coexist with C and other languages that use raw pointers or a non-moving GC, so I think the question is not which is the better approach, but which is the better approach within your language ecosystem.
If Loom ever comes out :p. At this point I am starting to wonder if this is a case of "the right thing" versus "worse is better", and maybe we will just never see the light of Loom.
Java is at a stage where it's clear it will continue to be a popular platform and language for decades to come. It can afford to do "the right thing" because the question it's facing is not how to attract more developers now, but how to avoid mistakes its numerous developers will pay for, and build solid foundations for many years to come. It's a common difference between young and established languages, and, naturally, Java made its own fair share of design mistakes when it was young.
Of course, there are technical differences that make certain design choices easier or harder in different languages. As aardvark179 mentions above, Java doesn't have pointers into the stack, and intermediate FFI frames are extremely rare (also due to the platform's established and large ecosystem).
(I'm the technical lead for OpenJDK's Project Loom)
I'm very excited about Loom. I've also been waiting three years for it, and am probably getting impatient; it worries me that there still isn't any scheduled release date. In the meantime, as a user of Java, I've seen no framework able to develop around a fiber or coroutine model, whereas in other languages, like C#, the TAP model has had ample time to mature, for libraries and frameworks to transition to it, and for best practices to emerge. On Java we're still waiting.
Now, I actually still believe Java is doing the right thing here, and if it succeeds, "the right thing" will prove more right in the long run. And you make a good point that Java is already popular and can probably afford to take a really long time to bring a better async story to the table.
But lately I've been wondering: could Loom fail? It still seems somewhat experimental; is there any chance it won't ship, that after all the hard work it doesn't get merged into OpenJDK? Maybe it turns out to be too much hassle, breaks too much user code, is too difficult to bring to Graal, or fails to be natively compiled by Substrate, etc.
Hopefully not, and I'm sure you're more aware than anyone of the risks here. And I trust you and the team working on Loom. But sometimes, seeing what's happening and being done in C# and other languages, I do stop and wonder if worse is better. Still, when Loom does arrive (if ever?), I'll probably be happy that this was the route Java took.
All languages end up with simple concurrency primitives such as async/await.
No one takes the next steps and introduces the high-level primitives you actually need to work with actors and concurrency in a sane manner: monitors, messages, supervisor trees. Erlang has been around for thirty years, people.
FTA: “This is a common pattern: a class with a private queue and some properties that should only be accessed on the queue. We replace this manual queue management with an actor class
[…]
Things to note about this example:
- Declaring a class to be an actor is similar to giving a class a private queue and synchronizing all access to its private state through that queue.
- Because this synchronization is now understood by the compiler, you cannot forget to use the queue to protect state: the compiler will ensure that you are running on the queue in the class's methods, and it will prevent you from accessing the state outside those methods.
- Because the compiler is responsible for doing this, it can be smarter about optimizing away synchronization, like when a method starts by calling an async function on a different actor.”
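A minimal before/after sketch of the pattern the quoted passage describes (illustrative; note that the Swift syntax that eventually shipped is `actor`, whereas the roadmap text says "actor class"):

```swift
import Foundation

// Before: the manual pattern the article describes, a private queue
// guarding mutable state by convention only.
final class CounterWithQueue {
    private let queue = DispatchQueue(label: "counter")
    private var value = 0

    func increment(completion: @escaping (Int) -> Void) {
        queue.async {
            self.value += 1
            completion(self.value)
        }
    }
}

// After: the actor version, where the compiler enforces what the queue
// convention only suggested.
actor Counter {
    private var value = 0

    func increment() -> Int {
        value += 1
        return value
    }
}
```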
Monitors, linking, supervision trees, VM-level introspection into the state of the actors, distribution primitives that make actor identity non-local across clusters, actor cancellation (like kill -KILL), graceful actor shutdown, sane ID serialization (how easy is it for me to serialize an actor ID, put it on a Kafka queue, and have it come back in a response so I can route the response back to the actor), etc., etc., etc.
Additionally, if you go to Elixir, they've implemented async/await on top of the not-really-"actors" of the Erlang VM in its standard library, and it's super easy to use and understand. Arguably easier than async/yield/await as it exists in most languages that use async as a coroutine serialization primitive.
No idea why you're getting downvoted, but you're right. One of the reasons why Erlang allows for monitors, supervision trees, and many other niceties is precisely that the VM is built that way: processes are isolated. Even if one process suddenly dies, the VM will take care of cleanup and will notify any monitoring processes, etc.
async/await are not primitives. Mutexes, semaphores, atomic counts etc... those are true primitives in multithreading and they have been around forever (since the 70s at least).
I feel the Swift language design is amateur hour at its best: trying to reinvent the wheel, only to end up where it started but with worse overall usability. Just rearranging chairs. Eight years later, the Objective-C + GCD combo is still better at multithreading.
In comparison, Java has had decent multithreading support since version 1.1 (one year after its release), and it had NIO by JDK 1.4 and full modern multithreading by JDK 1.5.
Swift is a couple of years behind because it is trying too hard to be cool and different.
> async/await are not primitives. Mutexes, semaphores, atomic counts etc... those are true primitives in multithreading
async/await (a way to model concurrency) and mutexes/semaphores/etc. (a way to safely share data) belong to separate categories, and one does not preclude the use of the other, especially if your coroutines are allowed to run on different threads.
I don’t think they meant one precludes the other, and “modeling concurrency” definitely is a “sharing data” problem. In other words, you would have to build a nicer concurrency abstraction out of a lower level primitive at some point.
You can definitely find iOS/macOS developers who prefer Objective-C, but they're in the minority. The vast majority would say that Swift is way, way more usable than Objective-C. Obj-C still has some advantages (dynamism being a big one), but for most tasks Swift makes developing both easier and safer.
Coming from several years of writing Obj-C, Swift has certainly been a net benefit for me. Writing it well does require a bit of a shift in one's way of thinking (writing "Obj-C in Swift" is a recipe for pain) but I find that once that hurdle is cleared it's a very productive language to work with.
OK, OP was somewhat remiss in saying "no-one". But you have, I think, missed his point.
Instead, substitute "few mainstream language designers" and it stands up. By mainstream I mean Java, JavaScript/TypeScript, C#, C, C++, Python and such. Most have introduced async/await. None has meaningfully gone beyond that as far as I'm aware. Erlang's concurrency model is a refreshingly simple, consistent mental model compared to the mishmash of concurrency features provided by the mainstream. In Erlang it's as simple as: do I need concurrency? If not, write regular functions; if so, spawn a process. Compare that with the mainstream decision tree (do I need concurrency?):
- No: regular functions.
- Yes: are there only a few, and/or do I need strong isolation?
  - Yes: use OS-level processes.
  - No: do I want the OS to take care of scheduling / preemption?
    - Yes: use threads.
    - No: use async/await. Is there a chance that my async operations will be scheduled across multiple OS threads?
      - No: get a speed boost from no scheduling overhead, but remember to yield during any long-running actions.
      - Yes: build my own likely-buggy, half-baked scheduler.
Oh, and as a bonus: run back up the entire call stack to make all functions that call mine async.
And that's before we get to error handling. I'd take Erlang supervision trees _every day_ over trying to figure out which nested async callback function generated an exception.
Does Swift's concurrency plan include thread-safety guarantees? I would like to see a higher-level language than Rust that offers a similar guarantee: "Thread safety isn't just documentation; it's law."
These small mistakes are starting to add up with Swift. They should really nip these things in the bud instead of piling onto the inconsistencies. It's better to make bold decisions now, while you know they're right, than to change them 10 years from now when everyone is already used to them, which is what some older languages are dealing with now.
Why not just have it be `async func refreshPlayers() { }`?
That's like saying: Could you imagine having a function with many arguments and trying to find the return type?
Swift already uses the space at the end of the function declaration for things like throws and generic constraints. I personally don't see an issue with where it is, other than that I also write a lot of JavaScript and the context switching between languages might take a couple of seconds.
Well, you can instantly tell it's an async func, and it reads like English. Why do you think this would be a mistake? I'm just looking for decent reasoning.
Since Swift strives for clarity at the call-site, you will definitely still have that property. Readers will know it's async because it will say `let x = await loadContent()` (the same way they know it could return an error because it has to start with `try`).
If, rather, you're looking at the function definition, you already need to read the whole thing. You need to read what the parameters are, whether or not it throws, what its return type is, etc. So putting the effects (like throwing or async) after the name makes it IMO much more scannable. Because the first thing I want to know is the name. I only care about those implementation details once I know I'm looking at the right thing.
There's also no universal consistency between languages. Rust put `.await` after the function invocation rather than as a prefix keyword. C++ used `co_await`, `co_yield`, and `co_return` as the keywords to avoid breaking lots of people's code. Swift should put the keyword where it makes sense from a consistency standpoint with Swift, which in this case is with `throws`.
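For reference, a sketch of both sites with the placement under discussion (refreshPlayers is just the example name from upthread):

```swift
struct Player { let name: String }

// Declaration site: the `async` effect sits in the same slot as `throws`,
// before the return type, rather than prefixing the declaration.
func refreshPlayers() async throws -> [Player] {
    [Player(name: "example")]
}

// Call site: the reader still sees both effects up front.
func showPlayers() async {
    let players = (try? await refreshPlayers()) ?? []
    print(players.map(\.name))
}
```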
I agree that it feels like the language maintainers are backed into corners and cannot correct old mistakes.
Which feels strange coming from Apple. Google showed how to handle this with Go: write a tool that updates the code from version "x" to version "y" instead of being beholden to source-compatibility issues in situations like this.
True, it has something like that (which works to varying levels of success depending on the project). The excuse for not fixing weird syntax oddities like this, though, is breaking source compatibility, and my point was that tooling should solve that.
Essentially the tooling in the Apple ecosystem is rough to work with and feels poorly thought out - it's as if the teams work independently and then mash their stuff together right before going live.
If all private state is managed by a serial queue, doesn't that mean you can easily deadlock yourself? Will Swift statically reject deadlocks on reentrance?
Queues are a dynamic concept; which queue you’re on is a runtime property. For this reason you may be familiar with assertions to check if a function is running on some queue.
Actors, however, are a static concept: we know at compile time which actor is local and whether we have the right one active. So the check for whether you're on the right actor happens at compile time.
You can think of it as if queues are part of the type system, and the compiler can work out statically what queue is used by any code, and so it can label an entire call tree’s queues by control flow analysis.
Because of this you wouldn't dispatch onto the same queue twice and deadlock. Rather, the compiler would see that the correct actor is already local in the call tree and there's nothing to do, so the dispatch is elided. It would only “switch queues” if it needed a new one.
Besides avoiding the deadlock issue, the other advantage of this is that the dispatch gets optimized out if you have several calls on the same actor.
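A small sketch of that point (illustrative names): calls between methods of the same actor are plain synchronous calls, so there is no dispatch onto the actor's own "queue" to deadlock on and no redundant hop.

```swift
actor Ledger {
    private var balance = 0

    func deposit(_ amount: Int) {
        balance += amount
    }

    func depositTwice(_ amount: Int) {
        // Already executing on this actor: these are plain calls, with no
        // `await`, no re-dispatch onto a queue, and no chance of deadlocking
        // on "our own queue".
        deposit(amount)
        deposit(amount)
    }
}
```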
Yikes. Swift is becoming horribly complex and opaque. Yet another set of @shoehornedKeywords... Colored functions...
Let me ramble a bit here.
Apple’s platforms had always put users before developers, that’s why they were so successful. They’ve built the best user experience by far using nothing but a simple Smalltalk-esque language from the 80s. Look at their frameworks: CoreAudio, CoreAnimation, UIKit... Tremendously powerful, and the best in their league - in terms of possibilities, not necessarily developer experience! Others were shoving garbage-collected virtual machines in their devices, piling abstraction over abstraction, “fluent APIs” (remember those?); meanwhile Apple built an empire with this quirky dynamic language (plus some C++ under the hood), where you had to do manual reference counting as late as 2012! Developers were livid. “How am I supposed to program in this weird language?” “AutoLayout? Why can’t it be just like CSS or something??”
But Apple didn’t give a shit, they knew their platform, their frameworks were the best, period. And whiny devs had to adapt to gain access to the richest cohort of users in the computing landscape.
With Swift and SwiftUI we seem to be heading towards a different future, and I’m yet to be convinced that it’s the right one.
What makes you think "They’ve built the best user experience by far"?
I can't even say "best" experience, let alone "by far".
Apple is notoriously anti consumer on everything from hardware to software.
I don't know what you are talking about. They had a fun signup screen when you first boot your iPhone. I have no idea what you are talking about for macOS.
I think their primary reason for success was their advertising/marketing.
> Apple is notoriously anti consumer on everything from hardware to software.
Yet millions of people continue to happily give them money without being forced to, unlike say Microsoft or Google that you just cannot avoid even if you don't own any of their products.
I was a Windows/PC user for most of my life before I switched to Macs and in my experience Apple has been consistently more consumer-friendly than Microsoft or Google.
In fact my first foray into the Apple ecosystem was an iPad that I bought as a gift for someone, and fell in love with it so much I wanted to develop for iOS and bought my very first Mac for that. So far nothing has made me consider switching to Microsoft or Google.
The only thing I hate about Apple is their poor documentation and lack of official support on their own forums.
As someone who has used both, I think you are under a psychological spell.
It's not like you could leave Apple; you are locked in.
It may be hard to put into words why you feel that way (which is why your post never mentioned anything about quality).
As mentioned, opening the iPhone box and giving Apple my personal information felt fantastic. The actual performance and quality were extremely disappointing.
> As someone who has used both, I think you are under a psychological spell.
One could say the same for you, but that won't get us anywhere.
> It's not like you could leave Apple; you are locked in.
I have lived most of my life without Apple and I know how to do it again.
Tell me, how do you avoid Google or Microsoft?
I was happily using GitHub before Microsoft bought it.
I was happily using YouTube before Google bought it and made it worse.
I can't avoid feeding my data to Google even if I don't use any of their services, through their Ads, Analytics etc. and who knows what kind of shadow profiles they build from just my proximity to their other users.
Now, every time I sign into YouTube I have to delete all Google cookies to avoid being automatically signed into Search.
> As mentioned, opening the iPhone box and giving Apple my personal information felt fantastic. The actual performance and quality were extremely disappointing.
I've been using the iPhone since the 5S and honestly, every time I have to handle someone's Android (e.g. to show them how to do something quicker than telling them), I want to hand it back as soon as possible.
Auto Layout vs CSS is just a trade-off on which layout algorithm you want. Constraint-solver layout systems (Auto Layout is built on Cassowary) tend to be more computationally expensive than flexbox, actually, but more powerful. Yes, the original programming API was horrible, but that was a quirk of how Apple does API review, and they were mostly expecting you to use IB to do your Auto Layout; stuff like SnapKit put a sane API on top of it soon after.
TBH Obj-C, once you looked past its ugly syntax, is a better developer experience than Swift today if you're making large-ish apps. Today the debugger is still buggy, build times are still way slower, and Swift codebases cause stutters and unresponsiveness inside Xcode to this day. And it was way worse back in the Swift v1-3 days.
TBH I think Swift could solve a lot of its build-speed issues if it removed type inference beyond a very basic set and brought back fine-grained, file-level importing like you have with C and Java. You'd still get 95% of the benefits, and the couple of features you'd lose are ones that IDEs like IntelliJ with Java have shown to be no big deal.
Stuff like Flutter and Kotlin Multiplatform might make a lot of this moot, though, and Swift will be relegated to an iOS compatibility layer and a nicer-looking C++.
They are mixing up their priorities. SwiftUI is catering to the mediocre webdev demographic, with a lot of opaque magic and language-level (!) hacks only for the sake of a “friendly” React-like API. In terms of features and performance it’s way behind UIKit, yet Apple is pushing it for the new iOS 14 home screen widgets.
It’s basically the usual story with post-Jobs Apple: catering for the masses while neglecting the pros - but this time it’s about developers instead of consumers.
Sure, I’m familiar with Conal Elliott’s work. But other than its “elegance”, I don’t hold reactive programming in particularly high regard, especially when implemented on top of non-pure languages. I consider FRP a “clever” technique - and while “clever” techniques make programming a fun puzzle, they can be detrimental to performance and end-user experience.
We have a finite cleverness budget; if you spend it on your code, then your product will suffer. If you build cleverness into the platform SDKs (hello SwiftUI! hello J2EE design pattern hell!), then every product on your platform will suffer.
I haven't seen any cleverness in reactive code that one wouldn't encounter in a non-reactive codebase. It is true that one may need to learn new abstractions, but a more imperative coding style requires lots of glue code, much of it of inconsistent quality. You'll need ramp-up time no matter which way you go. Building UIs involves the same problems and trade-offs whether you use reactive programming techniques or not.
Conal Elliott's work, while super interesting, does not cover the entirety of reactive programming. I found this[1] write-up to be helpful in defining what reactive programming is and its value. The author is a member of the team working on Swift Concurrency.
And SwiftUI is compatible with anything from UIKit, so it actually doesn't hold anyone back who wants to try to do something non-trivial. If you watch the demos, particularly from the most recent WWDC, you'll see it actually makes quite complex behavior much more doable. Does that reduce the value of esoteric framework knowledge? Perhaps it does, but that means engineers now have more time to spend on the product, not less.
Funny, I started with iOS development in 2008. I haven't done much recently, but laying things out with UIKit always felt pretty sensible to me. CSS, not so much. CSS was always the "WTF" of development. Maybe it's better now with Flexbox.
Glad to see a mention of CoreAudio for once in a public forum.
No matter the value for money or compatibility, I will never leave macOS for Windows because of it. I bought a hugely powerful Windows PC in 2018 and its audio performance is awful compared to a 2011 MacBook, and my 2019 MacBook is on another level.
Windows' lack of attention to audio drivers is frankly staggering. CoreAudio 'just works'.
This really depends on your needs. For music production, things are different on Windows. Because of latency, studio-level hardware vendors completely skip Microsoft's audio stack and use something called ASIO that talks directly to the card. So you kinda need external audio hardware even if you're on the go and want to do some low-latency audio, or you use hacks like Asio4all to achieve low latency. On macOS, CoreAudio has low latency even if you use the built-in 3.5mm audio jack.
I have significant professional experience in audio engineering so let’s skip the petty rhetorical questions.
Windows audio drivers are so poor that they’re bypassed completely and basically all major hardware producers offer either ASIO drivers on windows or native CoreAudio drivers.
Using the same high quality external sound card, I get better performance on macOS with latency and buffer than I do on a windows machine with significantly ‘better’ hardware.
I know full well sound hardware makes a difference, and macOS and CoreAudio interfaces with said hardware significantly better than windows. There’s a reason producers of pro audio interfaces write their drivers natively for macOS and use ASIO for windows.
I’m talking about pro audio production tools here, not playing a few mp3s.
> They’ve built the best user experience by far...
...in your opinion. In the opinions of people who manage IT for the vast, vast majority of businesses around the world, where most of the serious software users exist, that title belongs to Microsoft Windows.
Apple is so overtly anti-consumer that I just can’t take opinions like this seriously. And Swift is a trash language with a shitty developer experience, just like Obj-C was before it. They couldn’t even get strings right. Nobody wants it outside of Apple’s little bubble.
What developers? Web developers trying to do native stuff, maybe.
Give me a native project over Web stuff any day of the week and I'm up for it, even though I've been doing Web-related projects since 1999.
Objective-C and Cocoa is great compared with the chaos of npm, yeoman, babel, node, deno, coffeescript, Typescript, bucklescript, angular, react, ember, dojo, prototype.js, jQuery, Vue, svelte, scss, and whatever someone comes up next month to improve their CV.
I am so happy that thanks to WebAssembly I am getting Flash back.
I’m so excited to see them embracing actors. I don’t use Swift and likely won’t in the future but I really hope this decision bleeds over into other mainstream languages.
I find a lot of joy watching the JavaScript community and languages like Swift and Kotlin meet in the middle. Especially when you start looking at TypeScript. Even Java has become tolerable these days.
From a Rustacean perspective I'm wondering why they aren't copying Rust's Send/Sync traits. They work so well: they're purely a type-system thing, without being tied to any coroutine runtime. And they allow the creation of multi-threaded primitives, like actors, queues, locks, async, etc., in libraries without baking them into the language.
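For comparison, a marker-type approach in Swift reads roughly like the sketch below (this uses the Sendable protocol that Swift later adopted; treat the details as illustrative rather than as part of the roadmap under discussion):

```swift
// A value type whose members are all Sendable can be shared across
// concurrency domains; the compiler checks the conformance.
struct Score: Sendable {
    let points: Int
}

// A class with unsynchronized mutable state does not qualify, so the
// compiler can reject sending it across concurrency boundaries (e.g. into
// a @Sendable closure) unless you add synchronization yourself.
final class Cache {
    var entries: [String: Int] = [:]
}
```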
> Some libraries require you to use a specific runtime because they rely on runtime internals to provide a solid API for you to use or wrap their own API around an existing runtime. One example is the actix_web web framework, which wraps its own API around tokio.
So if you want to use two libraries that follow this approach, you are now faced with the hurdle of combining two async/await runtimes in one application.
We had this discussion before. Please don't repeat wrong statements over and over. C++ is in no way different from Rust in this regard - both just provide a coroutine transformation and a future type, but no built-in executor. And there is nothing more interoperable about the C++ solution than about Rust's.
Actually, Rust's might even be better, since there is at least a well-defined interface that executors should follow (through the `Waker` type). The C++ executor interface discussion is probably older than the coroutines proposal, but AFAIK nothing has been standardized yet.
Is async/await more suitable for user interfaces?