A Small C++11 ThreadPool Update

Over on GitHub I uploaded a more advanced version of the ThreadPool class from the Thread Pool with C++11 entry. It allows you to get std::futures as return values from the enqueue method.

Example usage:

    #include <future>
    #include <iostream>
    #include <string>
    
    #include "ThreadPool.h" // the ThreadPool class from the github repository
    
    int main()
    {
        // create a pool with four worker threads
        ThreadPool pool(4);
    
        // enqueue work; each call returns a std::future for the task's result
        std::future<std::string> str =
            pool.enqueue<std::string>(
                []()
                {
                    return "hello world";
                }
            );
    
        std::future<int> x =
            pool.enqueue<int>(
                []()
                {
                    return 42;
                }
            );
    
        // block until both results are available
        std::cout << str.get() << ' '
                  << x.get() << std::endl;
    }

10 thoughts on “A Small C++11 ThreadPool Update”

  1. Hi!
    FYI, the last example on GitHub always ends for me with a Segmentation fault: 11 (OS X, gcc 4.7.2).
    But anyway, the example given never stops printing hello world. That's not what I expected.
    I have a set, and for each element in the set I would like to run a task in a separate thread (as many threads as I have CPUs); when a task completes in one of the threads, another one should start. Is your example a good way to handle this? Thank you.

  2. Hi again, never mind the question, I found my answer: I had a hidden loop in my code all along. The ThreadPool as proposed is perfect for my use case.

  3. Hello, I wanted to thank you for such a great example of modern C++ programming that you’ve created with your ThreadPool class. I’d like to create a programming assignment in which multiple threads work cooperatively on a large number of tasks.

    The duration of the tasks can be anywhere from just a few milliseconds up to a couple of minutes, or even forever (that is, they might fail to complete at all).

    The problem I run into when trying to use your nice ThreadPool for this is that futures for long-running tasks block the speedier ones from returning promptly to the caller. The practical result is that the caller waits around with no results coming back (even though many tasks are already finished), and then a deluge of backlogged results suddenly appears once the long-running task finally completes and stops blocking the rest of the container of futures.

    To demonstrate, I created a simple example based on the one you provided; hopefully my version highlights the problem I’m encountering. The thread sleep times are actually too short by an order of magnitude or two for a good number of the tasks, but hopefully the example demonstrates my point anyway.

    I hope this all makes sense, and that I’m communicating myself clearly.

    So, I’ve linked my example to you in the URL above.

    I would welcome communicating with you over email if you are able to.

    Thanks again, and have a great day!

    • I think the issue here is that going through the futures sequentially and calling .get() on them results in the blocking behavior. Sadly I haven’t really found a clean way to wait for any future in a collection to become ready. You can always iterate over the futures, call wait_for with a zero duration, and only take those that report a “ready” status, as in the sketch below.
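
      A rough sketch of that polling approach, assuming the enqueue<T>(...) form from the example above (the task bodies and counts are purely illustrative):

          #include <chrono>
          #include <future>
          #include <iostream>
          #include <thread>
          #include <vector>

          #include "ThreadPool.h"

          int main()
          {
              ThreadPool pool(4);

              // enqueue a batch of tasks and keep their futures
              std::vector<std::future<int>> futures;
              for(int i = 0; i < 8; ++i)
                  futures.push_back(pool.enqueue<int>([i]() { return i * i; }));

              // poll instead of blocking on the futures in submission order
              while(!futures.empty()) {
                  for(auto it = futures.begin(); it != futures.end(); ) {
                      if(it->wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
                          std::cout << it->get() << '\n'; // ready: consume and drop it
                          it = futures.erase(it);
                      } else {
                          ++it;
                      }
                  }
                  // brief sleep so the loop does not busy-spin
                  std::this_thread::sleep_for(std::chrono::milliseconds(10));
              }
          }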

      • I regularly achieve this by creating a class that wraps a mutex/condition protected queue. Worker threads that complete insert a future into the queue and then call a callback function that notifies blocked thread(s) that data is ready and the next ready future can be popped; see the sketch below. It works best with a single-pointer atomic queue, as callbacks can then insert efficiently on completion without another mutex.

        Nice simple threadpool btw. Like it.
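
        Roughly the shape of the wrapper I mean, as a minimal sketch (all names here are illustrative, and the atomic-queue optimisation is left out):

            #include <condition_variable>
            #include <mutex>
            #include <queue>

            // minimal sketch of a mutex/condition protected "completion" queue;
            // finished tasks push their result here and notify a waiting consumer
            template<class T>
            class ReadyQueue {
            public:
                void push(T value)
                {
                    {
                        std::lock_guard<std::mutex> lock(mutex_);
                        queue_.push(std::move(value));
                    }
                    condition_.notify_one(); // wake a thread blocked in pop()
                }

                T pop()
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    condition_.wait(lock, [this]{ return !queue_.empty(); });
                    T value = std::move(queue_.front());
                    queue_.pop();
                    return value;
                }

            private:
                std::mutex mutex_;
                std::condition_variable condition_;
                std::queue<T> queue_;
            };

        A finished task pushes its result (or its ready future) into such a queue as its last step, so a consumer popping from it receives results in completion order rather than submission order.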

  4. Hi,

    very nice pool, but is there any possibility to let the main routine wait until all tasks from the pool are done?

    • In the version that is on GitHub the destructor waits for all tasks to finish. With that version you could also just wait for all returned futures to finish (call .wait() on all of them); see the sketch below. I’m unsure about having something like ThreadPool::wait(), because it brings up questions such as what happens when one thread waits on the pool to finish while another one enqueues new tasks.
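
      For example, a minimal sketch of the wait-on-all-futures approach (the task bodies and counts are just placeholders):

          #include <future>
          #include <vector>

          #include "ThreadPool.h"

          int main()
          {
              ThreadPool pool(4);

              std::vector<std::future<int>> futures;
              for(int i = 0; i < 100; ++i)
                  futures.push_back(pool.enqueue<int>([i]() { return i; }));

              // block until every enqueued task has finished
              for(auto &f : futures)
                  f.wait();
          }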

  5. Hi,

    I have heavily modified your thread pool implementation. I’m the GitHub user ‘n00btime’, who submitted the std::result_of patch. I couldn’t find a more direct way to contact you. I’d like to include this modified version in a library that I plan to license under the 3-clause BSD license. I read the permissive license you posted, but not being a lawyer, I wanted to ask your permission as well. So long as I reproduce your license in the source (alongside the 3-clause BSD text) and make it clear that this is not the original work, is this OK?

    I’d also be happy to share the source changes with you.

  6. Hi, while reading your code I was wondering: if the ‘task’ throws an exception, the thread itself will die. Should I add some try/catch block to avoid this situation, perhaps something like the sketch below?
    Thanks a lot.
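
    To illustrate, this is the kind of wrapping I have in mind (run_task is just a made-up stand-in, not your actual worker code):

        #include <functional>
        #include <iostream>
        #include <stdexcept>

        // illustrative only: guard the invocation so a throwing task
        // cannot terminate the thread that runs it
        void run_task(const std::function<void()> &task)
        {
            try {
                task();
            } catch(const std::exception &e) {
                std::cerr << "task threw: " << e.what() << '\n';
            } catch(...) {
                std::cerr << "task threw an unknown exception\n";
            }
        }

        int main()
        {
            run_task([]() { throw std::runtime_error("boom"); }); // the caller survives
        }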