
Probably because there is no need for it? If WaitGroup just waits for all tasks to finish before continuing, map/reduce should do the trick.



I seem to have misremembered, the odd one out was WhenAny, and the issue is that it crashes when given the empty set, instead of waiting forever (which is fine if you're dealing with a specific number of tasks, but not fine if you're actually using it for a dynamic number of tasks).

I could have sworn it was WaitAll, but heck - clearly not! Sorry :-).
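
A minimal C# sketch of the difference, assuming .NET's documented Task.WhenAll/WhenAny behaviour: WhenAll over an empty set completes immediately, while WhenAny throws instead of waiting forever.

    // Sketch only; relies on the documented behaviour of WhenAll/WhenAny.
    using System;
    using System.Threading.Tasks;

    class WhenAnyEmptyDemo
    {
        static async Task Main()
        {
            var none = Array.Empty<Task>();

            // WhenAll over an empty set completes immediately.
            await Task.WhenAll(none);
            Console.WriteLine("WhenAll(empty) completed");

            try
            {
                // WhenAny over an empty set throws rather than waiting forever,
                // which bites you when the number of tasks is dynamic and can be zero.
                await Task.WhenAny(none);
            }
            catch (ArgumentException e)
            {
                Console.WriteLine("WhenAny(empty) threw: " + e.Message);
            }
        }
    }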


Because blocking on tasks is a deadlock waiting to happen [0], and creates UI hiccups when the task takes longer than 16ms to complete.

In other words, the conditions you must satisfy to use Task.Wait or Task.Result safely are very complicated. They depend on, and impose, global constraints across the entire program, so you should avoid using them whenever possible.

0: http://blog.stephencleary.com/2012/07/dont-block-on-async-co...
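
For illustration, the classic sync-over-async deadlock looks roughly like this; a sketch assuming a UI SynchronizationContext (WinForms/WPF), with made-up handler names.

    // Sketch only; assumes a UI SynchronizationContext (WinForms/WPF).
    async Task<string> GetDataAsync()
    {
        await Task.Delay(1000);   // continuation wants to resume on the UI thread
        return "data";
    }

    void Button_Click(object sender, EventArgs e)
    {
        // .Result blocks the UI thread, but the awaited continuation needs that
        // same thread to resume, so neither side can make progress: deadlock.
        var data = GetDataAsync().Result;
    }

    async void Button_Click_Fixed(object sender, EventArgs e)
    {
        // Awaiting instead of blocking keeps the UI thread free while the task
        // runs, so there is no deadlock and no >16ms hiccup.
        var data = await GetDataAsync();
    }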


Guarantee is a strong term. If you have an unbounded number of processes running, there are no guarantees on how long it takes before any given one is scheduled.

Similarly, with cooperative multi-tasking as in Gevent, you can manipulate scheduling to try to provide better guarantees about wait times. It's just... you can't ignore the problem.


The code in the linked article has a bug: it doesn't wait for the queued tasks to complete, so the program will exit while people are still waiting for their haircuts - or worse, while the barber's actually cutting some hair.

You can fix this by using dispatch_group_async instead of dispatch_async, then after the loop (before releasing the queue) call dispatch_group_wait to make sure all the queued blocks have completed.

This plus creating and releasing the group only adds three lines of code; the rest of the author's algorithm is correct as far as I can tell.

I guess it just goes to show that even with Grand Central you still have to be a little careful.


IMHO, the best (in the sense of comprehensibility) fix to this is to never wait indefinitely. Waits should always have a timeout and on timeout all preconditions should be rechecked.
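
A hedged C# sketch of that pattern, with made-up names: wait with a timeout and re-check the precondition in a loop rather than blocking indefinitely.

    // Illustrative only; the field and method names are made up.
    private readonly object _gate = new object();
    private bool _ready;

    void WaitUntilReady(TimeSpan timeout)
    {
        lock (_gate)
        {
            while (!_ready)
            {
                // Monitor.Wait returns false on timeout; either way we loop back
                // and re-check the precondition instead of waiting forever, so a
                // missed pulse or stale state can't hang the thread for good.
                Monitor.Wait(_gate, timeout);
            }
        }
    }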

The only guess I'd have is to protect the system against infinite-loop tasks, but I don't remember any other runtime caring, and a task which never terminates seems easier to diagnose than one which disappears on you.

Apparently the tasks are "joined" automatically before the enclosing "Async do" block is allowed to finish.

Importantly it doesn't do this for dispatch_async/dispatch_semaphore/dispatch_block_wait.

If you want to avoid an expensive operation in cases where the result is no longer needed, then surely you'd want to cancel the task as soon as you know that, not at some indeterminate time when the GC runs. So I don't think this behavior makes sense even for that scenario.
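
As a rough sketch, explicit cancellation makes the timing deterministic; ExpensiveOperationAsync here is a hypothetical method that observes the token.

    // Sketch; ExpensiveOperationAsync is hypothetical and must honour the token.
    var cts = new CancellationTokenSource();
    var work = ExpensiveOperationAsync(cts.Token);

    // ... as soon as we learn the result is no longer needed:
    cts.Cancel();   // stops the work now, not whenever the GC happens to run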

It is async: the operation is delayed until the processor has nothing important to do.

The other thing I noticed is that they are wrapping every Task.Delay(…) inside a Task.Run(…) call. That should not be necessary. The inner task will yield back to the caller when it hits the delay call anyway. There is no need to queue it with an immediate yield as if it were background or CPU-bound work.

The Run() call might be creating a more complex state machine and using more memory than necessary (2 tasks vs just 1 for each delay). The memory usage might be even lower if the Run() call is dropped.
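
Roughly, the two forms compare like this (a sketch; the method names are made up):

    // Queues a thread-pool work item whose only job is to start the delay.
    Task Wrapped()   => Task.Run(async () => await Task.Delay(1000));

    // Awaiting Task.Delay already yields back to the caller immediately, so the
    // extra Task.Run wrapper (and its extra task/state machine) buys nothing.
    Task Unwrapped() => Task.Delay(1000);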


Why do you need to join?

Let's say I write a task that updates a progress bar as an infinite loop, and let it be gc'ed on program exit, without ever joining it. What's wrong with that design? I can, of course, modify the task to check a flag that indicates program completion, and exit when it's set. But does this extra complexity help the code quality in any way?

Or suppose I spawn a task to warm up some cache (to reduce latency when it's used). It would be nice if it completes before the cache is hit, but surely not at the cost of blocking the main program. I just fire-and-forget that task. If it executes only after the cache was hit, it will realize that, and become a no-op. Why would I want to join it at the end? It may not be free (if the cache was never hit, why would I want to warm it up now that the main program is exiting?).
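
Something like this sketch, where every name is hypothetical: the warm-up task checks whether the cache has already been filled on demand and quietly turns itself into a no-op.

    // Hypothetical names throughout; fire-and-forget warm-up sketch.
    static int _cacheFilled;   // set to 1 by whichever path builds the cache first

    static void WarmCacheInBackground()
    {
        _ = Task.Run(() =>
        {
            // If a cache miss already forced the cache to be built, warming it
            // again is pointless, so the task becomes a no-op.
            if (Interlocked.CompareExchange(ref _cacheFilled, 1, 0) != 0)
                return;
            BuildExpensiveCache();   // hypothetical helper
        });
    }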


If you are running Task.WaitAll on a UI thread you are likely to create a deadlock.

Doesn't that support it?

> Wait-freedom means that ... Each operation is executed in a bounded number of steps.


You can use `wait` to wait for jobs to finish.

    some_command &
    some_other_command &
    wait

In one of the snippets, there is a 10-second sleep inside a transaction to add a delay between polls when there are no tasks.

Shouldn't that be outside the transaction?
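
A hedged sketch of the suggested fix, with the database calls entirely made up: commit (or roll back) the transaction first, then sleep, so no locks or connection state are held during the 10-second idle period.

    // All names here are hypothetical; the shape is what matters.
    while (true)
    {
        bool gotTask;
        using (var tx = connection.BeginTransaction())
        {
            gotTask = TryClaimTask(tx);   // hypothetical polling query
            tx.Commit();
        }

        // Sleep outside the transaction so nothing is held open while idle.
        if (!gotTask)
            Thread.Sleep(TimeSpan.FromSeconds(10));
    }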


When there's a heavy amount of updates, CAS loop updates will start to fail, so they need to be retried more and more as contention increases.

Fetch-and-add is totally wait-free and has a much better upper bound: it always succeeds on the first try.
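
For a simple counter, the contrast looks roughly like this in C# (sketch only):

    // CAS loop: under contention the compare-exchange can observe a stale value
    // and fail, so a given thread may retry an unbounded number of times.
    static void CasIncrement(ref int counter)
    {
        int seen;
        do
        {
            seen = Volatile.Read(ref counter);
        } while (Interlocked.CompareExchange(ref counter, seen + 1, seen) != seen);
    }

    // Fetch-and-add: one hardware atomic that always succeeds on the first try,
    // which is what gives it the wait-free bound.
    static void FetchAddIncrement(ref int counter)
    {
        Interlocked.Increment(ref counter);
    }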


That must mean that it blocks and waits for the result. That doesn't "solve" the problem though.

>limiting the synchronous APIs to just workers seems like one too many layers of indirection

An unresponsive script is slowing this window down - kill process or wait?

