From mboxrd@z Thu Jan 1 00:00:00 1970
From: Fam Zheng <famz@redhat.com>
Date: Wed, 27 Aug 2014 10:49:13 +0800
Message-Id: <1409107756-5967-6-git-send-email-famz@redhat.com>
In-Reply-To: <1409107756-5967-1-git-send-email-famz@redhat.com>
References: <1409107756-5967-1-git-send-email-famz@redhat.com>
Subject: [Qemu-devel] [PATCH v3 5/8] thread-pool: Implement .cancel_async
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi

The .cancel_async callback reuses the first half of .cancel: try to
steal the request from the queue if it has not been picked up by a
worker thread yet.  If the steal succeeds, mark the element THREAD_DONE
with ret set to -ECANCELED, so that thread_pool_completion_bh calls the
cb with -ECANCELED.  If the request is already being processed, do
nothing, as we know the normal completion will happen in the future.

Unlike .cancel, .cancel_async never blocks waiting for the worker.

Signed-off-by: Fam Zheng <famz@redhat.com>
---
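A side note for reviewers who have not seen this trick before: the steal
only works because the semaphore wait is non-blocking, so it can be
attempted with pool->lock held and the element is guaranteed to stay
THREAD_QUEUED while we decide.  Below is a minimal standalone sketch of
the same idea, assuming plain POSIX semaphores and BSD <sys/queue.h>
macros instead of QEMU's qemu_sem_timedwait()/QTAILQ; every name in it
(work_item, try_cancel_queued, and so on) is invented for illustration
and is not part of this patch or the QEMU tree.

/*
 * Illustrative sketch only: "steal a queued item with a non-blocking
 * semaphore wait", using plain POSIX/BSD primitives.  All identifiers
 * here are made up and do not exist in QEMU.
 */
#include <errno.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/queue.h>

enum item_state { ITEM_QUEUED, ITEM_ACTIVE, ITEM_DONE };

struct work_item {
    enum item_state state;
    int ret;
    TAILQ_ENTRY(work_item) reqs;
};

static TAILQ_HEAD(, work_item) request_list =
    TAILQ_HEAD_INITIALIZER(request_list);
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t sem;                 /* counts queued items, like pool->sem */

/*
 * Called with 'lock' held.  Collapses the thread_pool_cancel_from_queue()
 * check and the thread_pool_cancel_async() bookkeeping into one function
 * for brevity.
 */
static bool try_cancel_queued(struct work_item *item)
{
    /*
     * sem_trywait() succeeds only if a queued "ticket" is still
     * unconsumed, i.e. no worker has claimed this item.  Because it
     * never blocks, we can keep holding the lock, so the item is
     * guaranteed to stay ITEM_QUEUED while we decide.
     */
    if (item->state == ITEM_QUEUED && sem_trywait(&sem) == 0) {
        TAILQ_REMOVE(&request_list, item, reqs);
        item->state = ITEM_DONE;
        item->ret = -ECANCELED;   /* completion path reports cancellation */
        return true;
    }
    return false;                 /* already picked up; let it finish */
}

int main(void)
{
    struct work_item it = { .state = ITEM_QUEUED };

    sem_init(&sem, 0, 0);
    TAILQ_INSERT_TAIL(&request_list, &it, reqs);
    sem_post(&sem);               /* "submit": one ticket for the workers */

    pthread_mutex_lock(&lock);
    bool stolen = try_cancel_queued(&it);
    pthread_mutex_unlock(&lock);

    printf("stolen=%d state=%d ret=%d\n", (int)stolen, (int)it.state, it.ret);
    sem_destroy(&sem);
    return 0;
}

It builds with something like "cc -pthread sketch.c".  A successful
sem_trywait() consumes the wakeup that would otherwise let a worker
dequeue the item, which, together with holding the lock and re-checking
the state, is what makes the removal safe.  If the trywait fails, a
worker already owns (or is about to own) the item, and the cancel path
leaves it alone, as .cancel_async does.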
 thread-pool.c | 44 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/thread-pool.c b/thread-pool.c
index 23888dc..9cb7a1d 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -202,6 +202,39 @@ restart:
     }
 }
 
+/* With elem->pool->lock held */
+static bool thread_pool_cancel_from_queue(ThreadPoolElement *elem)
+{
+    if (elem->state == THREAD_QUEUED &&
+        /* No thread has yet started working on elem. we can try to "steal"
+         * the item from the worker if we can get a signal from the
+         * semaphore. Because this is non-blocking, we can do it with
+         * the lock taken and ensure that elem will remain THREAD_QUEUED.
+         */
+        qemu_sem_timedwait(&elem->pool->sem, 0) == 0) {
+        QTAILQ_REMOVE(&elem->pool->request_list, elem, reqs);
+        qemu_bh_schedule(elem->pool->completion_bh);
+        return true;
+    }
+    return false;
+}
+
+static void thread_pool_cancel_async(BlockDriverAIOCB *acb)
+{
+    ThreadPoolElement *elem = (ThreadPoolElement *)acb;
+    ThreadPool *pool = elem->pool;
+
+    trace_thread_pool_cancel(elem, elem->common.opaque);
+
+    qemu_mutex_lock(&pool->lock);
+    if (thread_pool_cancel_from_queue(elem)) {
+        elem->state = THREAD_DONE;
+        elem->ret = -ECANCELED;
+    }
+
+    qemu_mutex_unlock(&pool->lock);
+}
+
 static void thread_pool_cancel(BlockDriverAIOCB *acb)
 {
     ThreadPoolElement *elem = (ThreadPoolElement *)acb;
@@ -210,16 +243,8 @@ static void thread_pool_cancel(BlockDriverAIOCB *acb)
     trace_thread_pool_cancel(elem, elem->common.opaque);
 
     qemu_mutex_lock(&pool->lock);
-    if (elem->state == THREAD_QUEUED &&
-        /* No thread has yet started working on elem. we can try to "steal"
-         * the item from the worker if we can get a signal from the
-         * semaphore. Because this is non-blocking, we can do it with
-         * the lock taken and ensure that elem will remain THREAD_QUEUED.
-         */
-        qemu_sem_timedwait(&pool->sem, 0) == 0) {
-        QTAILQ_REMOVE(&pool->request_list, elem, reqs);
+    if (thread_pool_cancel_from_queue(elem)) {
         elem->state = THREAD_CANCELED;
-        qemu_bh_schedule(pool->completion_bh);
     } else {
         pool->pending_cancellations++;
         while (elem->state != THREAD_CANCELED && elem->state != THREAD_DONE) {
@@ -234,6 +259,7 @@ static void thread_pool_cancel(BlockDriverAIOCB *acb)
 static const AIOCBInfo thread_pool_aiocb_info = {
     .aiocb_size = sizeof(ThreadPoolElement),
     .cancel = thread_pool_cancel,
+    .cancel_async = thread_pool_cancel_async,
 };
 
 BlockDriverAIOCB *thread_pool_submit_aio(ThreadPool *pool,
-- 
2.1.0