Created: 8 years, 9 months ago by turnidge
Modified: 8 years, 9 months ago
CC: reviews_dartlang.org, vm-dev_dartlang.org, Søren Gjesse
Visibility: Public
Description: Implement ThreadPool.
Committed: https://code.google.com/p/dart/source/detail?r=5484
Patch Sets 1-8
Total comments: 30
Patch Set 9
Total comments: 6
Patch Sets 10-11
Total comments: 16
Patch Set 12
Total comments: 23
Patch Set 13
Total comments: 1
Messages
Total messages: 11 (0 generated)
Ok, this is an implementation of a thread pool. It differs from Soren's implementation in that it has a monitor per thread for notification and that it dynamically grows and shrinks. Let me know what you think.
https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
File runtime/vm/thread_pool.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:60: void ThreadPool::RemoveWorkerFromList(Worker* worker, Worker** head) {
ASSERT(worker != NULL && head != NULL);

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:80: void ThreadPool::AddWorkerToList(Worker* worker, Worker** head) {
ASSERT(worker != NULL && head != NULL);

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:91: ThreadPool::Worker* ThreadPool::GetIdleWorker() {
Missing MutexLocker ml(mutex_); ?

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:95: worker = new Worker(this);
ASSERT(worker != NULL);

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:111: ASSERT(!done_);
Instead of asserting, shouldn't you just check for this and return? A racing worker thread could enter this after done_ was set to true; it will add itself to the idle worker list. It will also be modifying the running list concurrently with the code shutting down the thread pool.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:120: ASSERT(!done_);
Instead of asserting for !done_ here, shouldn't you just check for this and return true? A racing worker thread could enter this after done_ was set to true; we will then not find it in the list and return false, causing the thread to go back into the wait loop.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:198: }
Do we need this initial while loop? There is an implicit assumption with this loop that a task will be provided when a thread is started.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:219: ml.Wait(FLAG_worker_timeout_millis);
Monitor::WaitResult result = ml.Wait(...);
and you could check if (result == Monitor::kTimedOut) below. The OS::GetCurrentTimeMillis() then only needs to be done if task_ == NULL, for the spurious wakeup case, to compute the remaining time to wait.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
File runtime/vm/thread_pool.h (right):

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:14: class Worker;
Why is this forward declaration needed? I don't see any reference to it before the actual declaration below.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:43: uint64_t workers_stopped() const { return workers_stopped_; }
workers_active() and workers_idle() would also be useful stats.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:63: Monitor* monitor_;
Why not use a value object for the monitor?

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:70: Worker* prev_;  // Protected by ThreadPool::mutex_
Do we need a doubly-linked list for this? A singly-linked list may be sufficient. The most frequent operations are going to be add to head and remove from head.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:90: Mutex* mutex_;
Why not use a value object for the mutex?

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:91: bool done_;
Would 'bool shutting_down_;' be more readable? Using done_ both in the thread pool and the worker class makes it hard to read.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
File runtime/vm/thread_pool_test.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool_test.cc:67: }
100 threads are going to hang around for the default time in the background? Why not stop the thread pool here?
https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
File runtime/vm/thread_pool.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:60: void ThreadPool::RemoveWorkerFromList(Worker* worker, Worker** head) {
On 2012/03/06 05:07:00, asiva wrote:
> ASSERT(worker != NULL && head != NULL);

Function removed, but I added the assert in some similar places.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:80: void ThreadPool::AddWorkerToList(Worker* worker, Worker** head) {
On 2012/03/06 05:07:00, asiva wrote:
> ASSERT(worker != NULL && head != NULL);

Ditto above.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:91: ThreadPool::Worker* ThreadPool::GetIdleWorker() {
On 2012/03/06 05:07:00, asiva wrote:
> Missing MutexLocker ml(mutex_); ?

We are already holding the lock. The only caller is in ThreadPool::Run().

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:95: worker = new Worker(this);
On 2012/03/06 05:07:00, asiva wrote:
> ASSERT(worker != NULL);

Should I be doing this on every "new" then?

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:111: ASSERT(!done_);
On 2012/03/06 05:07:00, asiva wrote:
> Instead of asserting shouldn't you just check for this and return.
>
> A racing worker thread could enter this after done_ was set to true it will
> add itself to the idle worker list. It will also be modifying the running
> list concurrently with the guy shutting down the threadpool

Fixed.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:120: ASSERT(!done_);
On 2012/03/06 05:07:00, asiva wrote:
> Instead of asserting for !done_ here shouldn't you just check for this and
> return true;
>
> A racing worker thread could enter this after done_ was set to true and we
> will not find it in the list and return false causing the thread to go back
> into the wait loop.

Fixed.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:198: }
On 2012/03/06 05:07:00, asiva wrote:
> Do we need this initial while loop, there is an implicit assumption with
> this loop that a task will be provided when a thread is started.

I moved the call to Thread::Start to be under the lock, so I was able to get rid of this loop.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:219: ml.Wait(FLAG_worker_timeout_millis);
On 2012/03/06 05:07:00, asiva wrote:
> Monitor::WaitResult result = ml.Wait(...);
>
> and you could check if (result == Monitor::kTimedOut) below.
>
> The OS::GetCurrentTimeMillis() then only needs to be done if task_ == NULL
> for the spurious wakeup case to compute the remaining time to wait.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
File runtime/vm/thread_pool.h (right):

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:14: class Worker;
On 2012/03/06 05:07:00, asiva wrote:
> Why is this forward declaration needed? I don't see any reference to it
> before the actual declaration below.

I used to have a task->worker pointer, but I don't anymore. Removed.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:43: uint64_t workers_stopped() const { return workers_stopped_; }
On 2012/03/06 05:07:00, asiva wrote:
> workers_active() and workers_idle() would also be useful stats.

Added workers_running() and workers_idle().

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:63: Monitor* monitor_;
On 2012/03/06 05:07:00, asiva wrote:
> Why not use a value object for the monitor?

Done.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:70: Worker* prev_;  // Protected by ThreadPool::mutex_
On 2012/03/06 05:07:00, asiva wrote:
> Do we need a doubly linked list for this? a singly linked list maybe
> sufficient. Most frequent operations are going to be add to head and remove
> from head.

Changed the code to use two singly-linked lists.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:90: Mutex* mutex_;
On 2012/03/06 05:07:00, asiva wrote:
> why not use a valueobject for the mutex?

Done.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool.h:91: bool done_;
On 2012/03/06 05:07:00, asiva wrote:
> would 'bool shutting_down_;' be more readable?
> Using done_ both in the threadpool and the worker class makes it hard to read.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
File runtime/vm/thread_pool_test.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/12008/runtime/vm/thread_p...
runtime/vm/thread_pool_test.cc:67: }
On 2012/03/06 05:07:00, asiva wrote:
> 100 threads are going to hang around for default time in the background?
>
> Why not stop the threadpool here?

The destructor stops the threadpool.
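The switch from a doubly-linked worker list to two singly-linked lists, mentioned in the reply above, can be sketched as follows. Worker, AddToList, and RemoveFromList are illustrative names under that assumption, not the VM's actual code; the point is that add-to-head is O(1) and removal only needs a walk, which fits the review's observation that the frequent operations happen at the head:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for the VM's Worker; only the intrusive next pointer matters here.
struct Worker {
  Worker* next = nullptr;
};

// Add to head: O(1), the common case when a worker becomes idle.
void AddToList(Worker** head, Worker* w) {
  w->next = *head;
  *head = w;
}

// Removal walks the list using a pointer-to-pointer, so the head case and
// the middle case are handled uniformly. Returns true if 'w' was unlinked.
bool RemoveFromList(Worker** head, Worker* w) {
  for (Worker** cur = head; *cur != nullptr; cur = &(*cur)->next) {
    if (*cur == w) {
      *cur = w->next;
      w->next = nullptr;
      return true;
    }
  }
  return false;
}
```

A prev_ pointer (and the doubly-linked bookkeeping that goes with it) buys O(1) removal from the middle, but if removals are rare or head-heavy the single pointer per worker is simpler to keep consistent under the pool's mutex.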
Just a few quick things I noted. More to come...
-Ivan

https://chromiumcodereview.appspot.com/9581039/diff/15001/runtime/vm/thread_p...
File runtime/vm/thread_pool.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/15001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:89: while (current != NULL) {
Isn't current guaranteed to be NULL here?

https://chromiumcodereview.appspot.com/9581039/diff/15001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:123: // We only want to remove a worker if it is idle.
Comment out of place?

https://chromiumcodereview.appspot.com/9581039/diff/15001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:128: // We found this worker in the idle list. Remove it.
Comment seems out of place.
Uploaded a new version with some fixes.

https://chromiumcodereview.appspot.com/9581039/diff/15001/runtime/vm/thread_p...
File runtime/vm/thread_pool.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/15001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:89: while (current != NULL) {
On 2012/03/07 01:06:05, Ivan Posva wrote:
> Isn't current guaranteed to be NULL here?

Oops, that broke in my most recent changes. Fixed.

https://chromiumcodereview.appspot.com/9581039/diff/15001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:123: // We only want to remove a worker if it is idle.
On 2012/03/07 01:06:05, Ivan Posva wrote:
> Comment out of place?

Comment removed, here and below.

https://chromiumcodereview.appspot.com/9581039/diff/15001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:128: // We found this worker in the idle list. Remove it.
On 2012/03/07 01:06:05, Ivan Posva wrote:
> Comment seems out of place.

Comment removed.
https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
File runtime/vm/thread_pool.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:35: worker = GetIdleWorker();
Why not treat the new thread creation case differently from the case where you really find an idle thread? That would simplify the main worker loop, as you would not have to deal with a racing notify.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:37: // Release ThreadPool::mutex_ before calling Worker functions.
ASSERT(worker != NULL);

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:129: current->all_next_ = current->all_next_->all_next_;
current->all_next_ = worker->all_next_;
may be more compact.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:152: current->idle_next_ = current->idle_next_->idle_next_;
current->idle_next_ = worker->idle_next_;
may be more compact.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:244: }
Threads which are in the idle list will not remove themselves from the idle list when the thread pool is shut down.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:256: }

  MonitorLocker ml(&monitor_);
  ASSERT(task_ != NULL);
  while (state_ != kDone) {
    timeout = FLAG_worker_timeout_millis;
    while (task_ == NULL) {
      if (state_ != kIdle) {
        pool_->SetIdle(this);
      }
      idle_start = OS::GetCurrentTimeMillis();
      result = ml.Wait(timeout);
      if (state_ == kDone) return;
      if (task_ == NULL) {
        timeout -= (OS::GetCurrentTimeMillis() - idle_start);
        if (result == kTimeout || timeout <= 0) {
          ASSERT(FLAG_worker_timeout_millis > 0);
          pool_->ReleaseIdleWorker(this);
          return;
        }
      }
    }
    ASSERT(state_ == kRunning);
    task = task_;
    task_ = NULL;
    monitor_.Exit();
    task->Run();
    delete task;
    monitor_.Enter();
  }

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:274: ASSERT(worker->state_ == kDone && worker->idle_next_ == NULL);
Also (worker->all_next_ == NULL)?

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
File runtime/vm/thread_pool.h (right):

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.h:77: bool done_;
I feel started_ and done_ can be subsumed by the state_ field:
state_ == kInvalid (or renamed to kNotStarted) implies started_ == false;
state_ == kDone implies done_ == true.
Ok. I've replaced state_ with a single boolean, owned_, which says whether the worker is owned by the thread pool. started_ is gone as well. Other cleanups and corrections. Thanks for your careful reviews!

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
File runtime/vm/thread_pool.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:35: worker = GetIdleWorker();
On 2012/03/09 18:36:29, asiva wrote:
> Why not treat the new thread creation case different from the case where you
> really find an idle thread. That would simplify the main worker loop as you
> would not have to deal with a racing notify.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:37: // Release ThreadPool::mutex_ before calling Worker functions.
On 2012/03/09 18:36:29, asiva wrote:
> ASSERT(worker != NULL);

Done.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:129: current->all_next_ = current->all_next_->all_next_;
On 2012/03/09 18:36:29, asiva wrote:
> current->all_next_ = worker->all_next_;
> maybe more compact.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:152: current->idle_next_ = current->idle_next_->idle_next_;
On 2012/03/09 18:36:29, asiva wrote:
> current->idle_next_ = worker->idle_next_;
> maybe more compact.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:244: }
On 2012/03/09 18:36:29, asiva wrote:
> Threads which are in the idle list will not remove themselves from the idle
> list when the threadpool is shutdown.

Discussed offline -- already removed from the idle list in this case.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:256: }
On 2012/03/09 18:36:29, asiva wrote:
> MonitorLocker ml(&monitor_);
> ASSERT(task_ != NULL);
> while (state_ != kDone) {
>   timeout = FLAG_worker_timeout_millis;
>   while (task_ == NULL) {
>     if (state_ != kIdle) {
>       pool_->SetIdle(this);
>     }
>     idle_start = OS::GetCurrentTimeMillis();
>     result = ml.Wait(timeout);
>     if (state_ == kDone) return;
>     if (task_ == NULL) {
>       timeout -= (OS::GetCurrentTimeMillis() - idle_start);
>       if (result == kTimeout || timeout <= 0) {
>         ASSERT(FLAG_worker_timeout_millis > 0);
>         pool_->ReleaseIdleWorker(this);
>         return;
>       }
>     }
>   }
>   ASSERT(state_ == kRunning);
>   task = task_;
>   task_ = NULL;
>   monitor_.Exit();
>   task->Run();
>   delete task;
>   monitor_.Enter();
> }

Discussed offline.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:274: ASSERT(worker->state_ == kDone && worker->idle_next_ == NULL);
On 2012/03/09 18:36:29, asiva wrote:
> Also (worker->all_next_ == NULL)?

Done.

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
File runtime/vm/thread_pool.h (right):

https://chromiumcodereview.appspot.com/9581039/diff/20001/runtime/vm/thread_p...
runtime/vm/thread_pool.h:77: bool done_;
On 2012/03/09 18:36:29, asiva wrote:
> I feel started_ and done_ can be subsumed by the state_ field
> state == kInvalid or renamed to kNotStarted implies started == false;
> state == kDone implies done_ == true;

From offline: started_ is now gone. Thanks. We are keeping done_ because we can't access pool_ after it might have been deleted.
LGTM

This implementation has worker threads taking the global thread pool lock after every task is done in order to add themselves to the idle list. This makes me wonder why this implementation would be more efficient than a simpler implementation which has worker threads waiting on the thread pool lock for tasks.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
File runtime/vm/thread_pool.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:59: // Call StartThread after we've assigned the first t
first task.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:191: count_idle_--;
Will these counts get updated when we are doing a thread pool shutdown?

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
File runtime/vm/thread_pool_test.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool_test.cc:67: }
Shutdown of the thread pool is done as part of the destructor, but the test does not verify that all the threads did indeed shut down. Would it be better if you did an explicit shutdown of the thread pool and then checked the count_stopped, count_running, and count_idle values?
LGTM with comments.
-Ivan

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.cc
File runtime/vm/thread_pool.cc (right):

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.cc#n...
runtime/vm/thread_pool.cc:31: {
// Grab ThreadPool::mutex_ before touching queues and other ThreadPool state.

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.cc#n...
runtime/vm/thread_pool.cc:59: // Call StartThread after we've assigned the first t
Comment cut off.

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.cc#n...
runtime/vm/thread_pool.cc:89: Worker* next = current->all_next_;
There is unprotected access to fields marked as protected by ThreadPool::mutex_ in this block. Please comment why that is fine.

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.cc#n...
runtime/vm/thread_pool.cc:146: worker->owned_ = false;
worker->pool_ = NULL; ?

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.cc#n...
runtime/vm/thread_pool.cc:217: { // NOLINT
NOLINT?

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.cc#n...
runtime/vm/thread_pool.cc:256: }
remove

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.cc#n...
runtime/vm/thread_pool.cc:273: ASSERT(task_ == NULL);
Move up after the monitor enter.

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.h
File runtime/vm/thread_pool.h (right):

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.h#ne...
runtime/vm/thread_pool.h:49: // Starts the thread for the worker. Call Run() first.
The comment "Call Run() first." is unclear.

http://codereview.chromium.org/9581039/diff/27001/runtime/vm/thread_pool.h#ne...
runtime/vm/thread_pool.h:53: void Run(Task* task);
Maybe SetTask(task)?
https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
File runtime/vm/thread_pool.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:31: {
On 2012/03/14 18:51:26, Ivan Posva wrote:
> // Grab ThreadPool::mutex_ before touching queues and other ThreadPool state.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:59: // Call StartThread after we've assigned the first t
On 2012/03/14 17:36:09, asiva wrote:
> first task.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:89: Worker* next = current->all_next_;
On 2012/03/14 18:51:26, Ivan Posva wrote:
> There is unprotected access to fields marked as protected by
> ThreadPool::mutex_ in this block. Please comment why that is fine.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:146: worker->owned_ = false;
On 2012/03/14 18:51:26, Ivan Posva wrote:
> worker->pool_ = NULL; ?

Done.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:191: count_idle_--;
On 2012/03/14 17:36:09, asiva wrote:
> Will these counts get updated when we are doing a thread pool shutdown?

count_started_ == the number of workers started. This is fine.

count_stopped_ == the number of workers where we have set owned_ = false. It doesn't count when they actually stop. I was missing a decrement of this in Shutdown() -- added it now. I have also added an assertion that started == stopped in Shutdown.

count_idle_ == the number of workers in the idle_list_. This gets pegged to 0 in Shutdown.

count_running_ == count_started_ - (count_stopped_ + count_idle_). I have pegged this to 0 in Shutdown.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:217: { // NOLINT
On 2012/03/14 18:51:26, Ivan Posva wrote:
> NOLINT?

Yeah. The linter seems to be confused that the { is the start of the function body or something (because of the ")" on the previous line), and it complained about it being on its own line.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:256: }
On 2012/03/14 18:51:26, Ivan Posva wrote:
> remove

Done.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.cc:273: ASSERT(task_ == NULL);
On 2012/03/14 18:51:26, Ivan Posva wrote:
> Move up after the monitor enter.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
File runtime/vm/thread_pool.h (right):

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.h:49: // Starts the thread for the worker. Call Run() first.
On 2012/03/14 18:51:26, Ivan Posva wrote:
> This comment "Call Run() first." is unclear.

Done.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool.h:53: void Run(Task* task);
On 2012/03/14 18:51:26, Ivan Posva wrote:
> Maybe SetTask(task)?

Done.

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
File runtime/vm/thread_pool_test.cc (right):

https://chromiumcodereview.appspot.com/9581039/diff/27001/runtime/vm/thread_p...
runtime/vm/thread_pool_test.cc:67: }
On 2012/03/14 17:36:09, asiva wrote:
> Shutdown of the thread pool is done as part of the destructor but the test
> does not verify that all the threads did indeed shutdown.
>
> Would it be better if you did an explicit shutdown of the thread pool and
> then checked the count_stopped, count_running, count_idle values.

I've added a test that workers eventually shut themselves down.
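The counter bookkeeping described above reduces to one invariant: running workers are those started but neither stopped nor idle. A minimal sketch of that relationship follows; PoolCounts is an illustrative stand-in, not the VM's code, and it uses signed integers, which sidesteps the signed/unsigned comparison problem raised later in the review:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative counter bookkeeping for a thread pool. Signed types are
// deliberate: the derived 'running' value is a difference of counts.
struct PoolCounts {
  int64_t started = 0;  // Workers ever started.
  int64_t stopped = 0;  // Workers released by the pool (owned_ == false).
  int64_t idle = 0;     // Workers currently on the idle list.

  // Derived stat: workers currently running tasks.
  int64_t running() const { return started - (stopped + idle); }
};
```

At shutdown, idle is pegged to 0 and started must equal stopped, which forces running() to 0 as well.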
I added a private ThreadPool::exit_monitor_ which is only set/used during testing and I'm using this monitor to notify the test just before workers exit.
https://chromiumcodereview.appspot.com/9581039/diff/32002/runtime/vm/thread_p...
File runtime/vm/thread_pool.h (right):

https://chromiumcodereview.appspot.com/9581039/diff/32002/runtime/vm/thread_p...
runtime/vm/thread_pool.h:39: uint64_t workers_running() const { return count_running_; }
I missed this in the first go-around. Please do not use unsigned integers for counts, as this will cause problems when comparing to signed values, as you noticed yourself.