
Consider std::experimental::when_any (http://en.cppreference.com/w/cpp/experimental/when_any). The following is just a naive, simplified implementation:

```cpp
#include <future>

template<typename Iterator>
auto when_any(Iterator first, Iterator last)
{
    while (true) {
        for (auto pos = first; pos != last; ++pos) {
            if (pos->is_ready()) {
                return std::move(*pos);
            }
        }
    }
}
```

I am not satisfied with this, because it busy-polls in an infinite loop.

Is there a way to avoid busy polling?

  • This is more of a wait_for_any than when_any. Commented Jun 4, 2017 at 15:44
  • @BasileStarynkevitch, busy polling means the thread executing when_any will keep busy, rather than entering a waiting state and being notified when any future is ready. Commented Jun 4, 2017 at 15:51
  • It may be worth looking at condition variables. Commented Jun 4, 2017 at 16:00
  • @Galik how are condition variables going to help here? Commented Jun 4, 2017 at 16:01
  • @Galik I am aware. I don't see how you would use them here. Commented Jun 4, 2017 at 16:04

3 Answers


A polling-free version would launch one thread per future and have each of them signal a condition variable, recording which future is ready.

Then you "leak" those threads until their futures are ready, while returning the fact that one is ready.

This sucks. But no polling.

To do better, you need a future with a continuation you can set (and ideally remove). Then you just ask the futures to notify you when done, then wait. This requires modifying or writing your own future.

This is one of the reasons both continuations and when_any are proposed for standardization. You need them in the future.

Now if you have your own system, you can base it on a thread-safe queue delivering results rather than futures, implemented via condition variables. This requires cooperation at the point of "future" creation.

```cpp
#include <condition_variable>
#include <future>
#include <memory>
#include <mutex>
#include <vector>

struct many_waiter_t {
    std::mutex m;
    std::condition_variable cv;
    std::vector<std::size_t> index;

    std::unique_lock<std::mutex> lock() {
        return std::unique_lock<std::mutex>(m);
    }
    // Blocks (no spinning) until some index has been delivered.
    std::size_t wait() {
        auto l = lock();
        cv.wait(l, [this]{ return !index.empty(); });
        auto r = index.back();
        index.pop_back();
        return r;
    }
    void set(std::size_t N) {
        {
            auto l = lock();
            index.push_back(N);
        }
        cv.notify_one();
    }
};

template<class T>
std::future<T> add_waiter(std::future<T> f, std::size_t i,
                          std::shared_ptr<many_waiter_t> waiter) {
    // launch::async is required: a deferred task would never run,
    // and the waiter would never be signaled.
    return std::async(std::launch::async,
        [f = std::move(f), waiter, i]() mutable {
            auto r = f.get();
            waiter->set(i);
            return r;
        });
}
```

Consuming an array of futures fs, we can generate a new array of futures f2s and a waiter, such that the waiter can be waited on, without spinning, until some future is ready, and the f2s correspond to the original fs.

You can repeatedly wait on the waiter until the f2s are all ready.


6 Comments

How are you going to recover futures other than the ready one when you have threads blocked on them?
@yurikilochek Those threads produce futures in turn that are made ready when the future they are waiting on is made ready. In effect, you are wrapping future<foo> in a future<foo> that also triggers a (shared) condition variable before itself becomes ready.
Okay, that would work, but a typical use case of processing a result, removing the ready future from the set, and waiting on the rest, in a loop, would spawn an insane number of threads.
@yuri I said it sucked. You can do a bit better with a singleton if you can distinguish futures by their state.
@yurikilochek Toy implementation added.

Not really, futures without continuations are of very limited usefulness.

If you are forced to do this and to use std::future, I suggest smarter polling via .wait_for() with increasing timeouts.



I have posted an implementation of when_any over on CodeReview. As Yakk said in his answer,

To do better, you need to have a future with a continuation you can set (and remove ideally). Then you just ask the futures to notify you when done, then wait. This requires modifying or writing your own future.

So my implementation relies on future::then(), and the gist of it is:

```cpp
namespace detail {

template<class... Futures>
struct when_any_shared_state {
    promise<tuple<Futures...>> m_promise;
    tuple<Futures...> m_tuple;
    std::atomic<bool> m_done;
    std::atomic<bool> m_count_to_two;
    when_any_shared_state(promise<tuple<Futures...>> p)
        : m_promise(std::move(p)), m_done(false), m_count_to_two(false) {}
};

} // namespace detail

template<class... Futures>
auto when_any(Futures... futures) -> future<tuple<Futures...>>
{
    using shared_state = detail::when_any_shared_state<Futures...>;
    using R = tuple<Futures...>;
    promise<R> p;
    future<R> result = p.get_future();
    auto sptr = make_shared<shared_state>(std::move(p));
    auto satisfy_combined_promise = [sptr](auto f) {
        if (sptr->m_done.exchange(true) == false) {
            // We won; whoever arrives second at the count-to-two
            // (us or the setup code below) fires the promise.
            if (sptr->m_count_to_two.exchange(true)) {
                sptr->m_promise.set_value(std::move(sptr->m_tuple));
            }
        }
        return f.get();
    };
    sptr->m_tuple =
        tuple<Futures...>(futures.then(satisfy_combined_promise)...);
    if (sptr->m_count_to_two.exchange(true)) {
        sptr->m_promise.set_value(std::move(sptr->m_tuple));
    }
    return result;
}
```

You attach a continuation to each incoming future (using then). This continuation holds a shared_ptr to a shared state. The shared state holds a count-to-one (m_done) and a count-to-two (m_count_to_two). Each continuation that executes will increment the count-to-one; if it's the winner, it will also increment the count-to-two. The main thread will also increment the count-to-two after it finishes setting up all this stuff. As soon as the count-to-two has reached 2 (indicating that the main thread finished setting up and at least one continuation has executed), we'll call set_value on the promise corresponding to when_any's return future. Ta-da!

1 Comment

Sometimes the answer to a C++ question is just "ask Arthur" :) Nice solution.
