The other thing I noticed is that they are wrapping every Task.Delay(…) inside a Task.Run(…) call. That shouldn't be necessary: the inner task will yield back to the caller when it hits the delay anyway, so there's no need to queue it with an immediate yield as if it were background or CPU-bound work.
The Run() call likely creates a more complex state machine and uses more memory than necessary (two tasks instead of just one for each delay); dropping it should reduce the overhead.
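A minimal sketch of the difference (illustrative only, not their actual code):

```csharp
using System.Threading.Tasks;

class DelayDemo
{
    static async Task Main()
    {
        // Wrapping the delay queues an extra work item on the thread pool:
        // two tasks (the Run wrapper plus the delay itself) where one would do.
        await Task.Run(() => Task.Delay(500));

        // The plain await yields back to the caller as soon as it hits the
        // incomplete delay; only one task is allocated and no thread is held.
        await Task.Delay(500);
    }
}
```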
You have a thread pool with X threads and you dispatch Y tasks. If X of them each occupy a thread for 5 minutes, the remaining Y - X tasks are delayed by 5 minutes despite low CPU utilization.
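A sketch of that scenario, assuming a deliberately small pool so the effect shows up quickly (the thread counts are illustrative, and `SetMaxThreads` won't accept values below the processor count on every machine):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class StarvationDemo
{
    static void Main()
    {
        // Pin the pool to a handful of threads (X = 4 here).
        ThreadPool.SetMinThreads(4, 4);
        ThreadPool.SetMaxThreads(4, 4);

        // The first X tasks occupy every pool thread for a long time.
        var blockers = Enumerable.Range(0, 4)
            .Select(_ => Task.Run(() => Thread.Sleep(TimeSpan.FromMinutes(5))))
            .ToArray();

        // This cheap task now sits in the queue despite the CPU being idle,
        // until the pool decides to inject another thread.
        var queued = Task.Run(() => Console.WriteLine("finally ran"));
        queued.Wait();
    }
}
```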
Because blocking on tasks is a deadlock waiting to happen [0], and creates UI hiccups when the task takes longer than 16ms to complete.
In other words, the conditions you must satisfy to use Task.Wait or Task.Result are very complicated. They depend on, and create, global constraints on the entire program, so you should avoid them whenever possible.
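The classic shape of that deadlock, as a sketch assuming a UI-style `SynchronizationContext` (a WinForms/WPF event handler; `LoadAsync` and `Button_Click` are made up for illustration):

```csharp
using System;
using System.Threading.Tasks;

class Page
{
    async Task<string> LoadAsync()
    {
        // The continuation after this await wants to resume on the captured UI context.
        await Task.Delay(1000);
        return "done";
    }

    void Button_Click(object sender, EventArgs e)
    {
        // .Result blocks the single UI thread; the continuation above needs that
        // same thread to resume, so neither side can make progress.
        var text = LoadAsync().Result;
        Console.WriteLine(text);
    }
}
```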
Yes, I meant referenced in the code, not necessarily called during execution. It makes sense to do it at the beginning if the delay is referenced. I guess it was just a small optimization that they decided to do without, then.
If the task is a long-running, expensive task, it's going to block the main thread from being responsive to user input. If it's a quick task, then you don't need to schedule it. I struggle to see the place for a medium-length task: long enough that it needs to be deferred, but short enough that it could just as well have been executed immediately.
> Scheduling a task in setInterval will run the task every interval, regardless of whether the previous task has finished or not. This in turn will starve your hardware if the function cannot finish before the next execution begins.
That only applies to async work performed inside the callback. Once the CPU is starved, or if the work performed inside the callback takes longer than the interval, then setInterval will start to skip.
Apologies, but it seems you have gotten the wrong impression (or maybe I did a poor job of explaining).
It has never been a big issue in the first place, because by now everyone knows not to 'Thread.Sleep(500)' or 'File.ReadAllBytes' in methods that can be executed by the thread pool, and to use 'await Task.Delay(500)' or 'await File.ReadAllBytesAsync' instead. And even then, you would run into thread pool starvation only under load and only when quickly exhausting the newly spawned threads. It is a relatively niche problem, not the main cornerstone of runtime design that some make it out to be.
Also, .NET 6 is old news; it was released on Nov 8, 2021, and is the deployment target for many enterprise projects nowadays.
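For what it's worth, the before/after from the point above as a rough sketch (`LoadBlocking` / `LoadAsync` are made-up names):

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class Handlers
{
    // Ties up a pool thread for the whole 500 ms plus the disk read.
    public byte[] LoadBlocking(string path)
    {
        Thread.Sleep(500);
        return File.ReadAllBytes(path);
    }

    // Releases the pool thread at each await and only borrows one again
    // to run the continuations, so the pool stays available under load.
    public async Task<byte[]> LoadAsync(string path)
    {
        await Task.Delay(500);
        return await File.ReadAllBytesAsync(path);
    }
}
```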
It's a cooperative yield point, so while the code containing the `await` is logically waiting, the underlying thread/core is released and "stolen" by the standard scheduler for other work; the method is then resumed when the operations it's waiting on complete or are cancelled.
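A small sketch of that handoff: the method returns to its caller at the `await`, and the continuation is resumed later on whatever thread the scheduler picks (in a console app there's no `SynchronizationContext`, so it's typically a pool thread):

```csharp
using System;
using System.Threading.Tasks;

class YieldPointDemo
{
    static async Task Main()
    {
        Console.WriteLine($"before await: thread {Environment.CurrentManagedThreadId}");

        // No thread sits blocked on this delay; control returns to the caller here.
        await Task.Delay(1000);

        Console.WriteLine($"after await:  thread {Environment.CurrentManagedThreadId}");
    }
}
```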
Interesting. I suppose that makes sense, to let them speed up execution... I am a bit curious how it'd affect concurrent behavior based on time changes (e.g. if it's cpu-bound for longer than the sleep, does the wake still occur in the middle? Or does it wait to advance time until everything's blocked? What if you runtime.Yield? etc?), but that's certainly an efficient option for a public execution environment like this.
It sounds like this sort of problem comes from using a cooperative scheduler to implement concurrency of arbitrary routines rather than control flow. I haven't been in a situation in which it would even be possible for something to yield less often than I expect, because I expect it to run until it yields. Similarly I don't often find that subroutines return too infrequently because I expect them to run until they return.
This library is probably nice for the places I would otherwise use threads.
Doesn't matter. Unless you do TaskFactory.StartNew or Task.Run, it's not going to try to schedule work on another thread. Everything is happening in the main thread in your code.
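A quick way to see that (sketch): an async method starts running synchronously on the calling thread, and nothing gets handed to another thread unless you explicitly ask for it.

```csharp
using System;
using System.Threading.Tasks;

class SameThreadDemo
{
    static async Task Main()
    {
        Console.WriteLine($"caller thread: {Environment.CurrentManagedThreadId}");

        // DoWorkAsync executes synchronously on this same thread up to its first
        // incomplete await; calling it does not schedule anything elsewhere.
        var pending = DoWorkAsync();
        await pending;
    }

    static async Task DoWorkAsync()
    {
        Console.WriteLine($"inside method: {Environment.CurrentManagedThreadId}");
        await Task.Delay(10);
    }
}
```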
>The "hell" rears its head when you need to coordinate multiple such sequences that are running concurrently.
This situation is why I wrote DelayedOp: https://github.com/osuushi/DelayedOp . Of all the little reusable code bits I've written, this one has ended up in more of my projects than any other.
Basically, a DelayedOp is a lightweight manager that runs a callback after its "wait" and "ok" calls have balanced. It ends up being equivalent to _.after(), but it's more natural to use than manually counting your concurrent operations, and it has some conveniences for debugging, so that if a callback never fires, you can find out why.
I seem to have misremembered: the odd one out was WhenAny, and the issue is that it crashes when given the empty set instead of waiting forever (which is fine if you're dealing with a specific number of tasks, but not fine if you're actually using it for a dynamic number of tasks).
I could have sworn it was WaitAll, but heck - clearly not! Sorry :-).
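For reference, the asymmetry in question (quick sketch):

```csharp
using System;
using System.Threading.Tasks;

class WhenAnyDemo
{
    static async Task Main()
    {
        var tasks = Array.Empty<Task>();

        // Task.WhenAll over an empty set simply completes immediately...
        await Task.WhenAll(tasks);

        // ...but Task.WhenAny throws on an empty set rather than waiting forever,
        // which bites when the number of tasks is dynamic and can drop to zero.
        try
        {
            await Task.WhenAny(tasks);
        }
        catch (ArgumentException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
```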
Those solutions raise the low-priority task to the higher circle, which breaks the deadlock but doesn't fix the latency issue.
The low-priority task was low-priority for a reason: it wasn't supposed to soak up CPU cycles needed for higher-priority tasks. Those solutions do exactly that - making a long-running background task run ahead of low-latency, more-critical tasks increases their latency (it delays their responses to their time-sensitive events).
Does the spec explicitly say how tasks vs. microtasks are timed? I was under the impression that it was up to the implementation and not relevant.
To add something more, I would really like to see an example where this actually ruins a program's execution.
Thread.yield() is very weird, and something I'd not advise using aside from a spin lock with backoff. It's OS-dependent, and it may just bounce to the OS and back, continuing the current execution.