Date: Thu, 6 Jan 2011 16:13:15 +0530
From: Arun R Bharadwaj
Reply-To: arun@linux.vnet.ibm.com
To: Stefan Hajnoczi
Cc: aliguori@us.ibm.com, qemu-devel@nongnu.org, aneesh.kumar@linux.vnet.ibm.com
Subject: [Qemu-devel] Re: [PATCH 06/13] Threadlet: Add dequeue_work threadlet API
Message-ID: <20110106104315.GC17489@linux.vnet.ibm.com>
In-Reply-To: <20110105195545.GD9821@stefanha-thinkpad.localdomain>
References: <20110104052627.15887.43436.stgit@localhost6.localdomain6>
 <20110104052739.15887.60037.stgit@localhost6.localdomain6>
 <20110105195545.GD9821@stefanha-thinkpad.localdomain>
List-Id: qemu-devel.nongnu.org

* Stefan Hajnoczi [2011-01-05 19:55:46]:

> On Tue, Jan 04, 2011 at 10:57:39AM +0530, Arun R Bharadwaj wrote:
> > @@ -574,33 +574,39 @@ static void paio_remove(struct qemu_paiocb *acb)
> >      }
> >  }
> >
> > -static void paio_cancel(BlockDriverAIOCB *blockacb)
> > +/**
> > + * dequeue_work: Cancel a task queued on the global queue.
> > + * @work: Contains the information of the task that needs to be cancelled.
> > + *
> > + * Returns: 0 if the task is successfully cancelled.
> > + *          1 otherwise.
> > + */
> > +static int dequeue_work(ThreadletWork *work)
> >  {
> > -    struct qemu_paiocb *acb = (struct qemu_paiocb *)blockacb;
> > -    int active = 0;
> > +    int ret = 1;
> >
> >      qemu_mutex_lock(&globalqueue.lock);
> > -    if (!acb->active) {
> > -        QTAILQ_REMOVE(&globalqueue.request_list, &acb->work, node);
> > -        acb->ret = -ECANCELED;
> > -    } else if (acb->ret == -EINPROGRESS) {
> > -        active = 1;
> > -    }
> > +    QTAILQ_REMOVE(&globalqueue.request_list, work, node);
> > +    ret = 0;
> >      qemu_mutex_unlock(&globalqueue.lock);
> >
> > -    qemu_mutex_lock(&aiocb_mutex);
> > -    if (!active) {
> > -        acb->ret = -ECANCELED;
> > -    } else {
> > -        while (acb->ret == -EINPROGRESS) {
> > -            /*
> > -             * fail safe: if the aio could not be canceled,
> > -             * we wait for it
> > -             */
> > -            qemu_cond_wait(&aiocb_completion, &aiocb_mutex);
> > +    return ret;
>
> It always succeeds? Why bother with the ret local variable?
>

Yes, I'll remove this.
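Though actually, for the cancellation in paio_cancel below to work,
dequeue_work needs to be able to fail once a worker thread has already
picked the work item up. So rather than dropping ret, something along
these lines may be needed (completely untested sketch; names are taken
from this patch, and it walks the request list under globalqueue.lock so
that cancelling already-dequeued work fails):

/*
 * Untested sketch: report success only if @work was still sitting on
 * the global queue.  Once a worker has dequeued it, cancellation fails
 * and the caller must wait for the work to complete instead.
 */
static int dequeue_work(ThreadletWork *work)
{
    ThreadletWork *w;
    int ret = 1;

    qemu_mutex_lock(&globalqueue.lock);
    QTAILQ_FOREACH(w, &globalqueue.request_list, node) {
        if (w == work) {
            /* Still queued, so no worker has seen it: safe to remove */
            QTAILQ_REMOVE(&globalqueue.request_list, work, node);
            ret = 0;
            break;
        }
    }
    qemu_mutex_unlock(&globalqueue.lock);

    return ret;
}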
> > +}
> > +
> > +static void paio_cancel(BlockDriverAIOCB *blockacb)
> > +{
> > +    struct qemu_paiocb *acb = (struct qemu_paiocb *)blockacb;
> > +    if (!acb->active) {
> > +        if (dequeue_work(&acb->work) != 0) {
> > +            /* Wait for running work item to complete */
> > +            qemu_mutex_lock(&aiocb_mutex);
> > +            while (acb->ret == -EINPROGRESS) {
> > +                qemu_cond_wait(&aiocb_completion, &aiocb_mutex);
> > +            }
> > +            qemu_mutex_unlock(&aiocb_mutex);
> > +        }
> >          }
> >      }
> > -    qemu_mutex_unlock(&aiocb_mutex);
> > +
> >      paio_remove(acb);
>
> I'm not convinced this function works. If the request is active in a
> worker thread and paio_cancel() is called then we invoke paio_remove().
>

True. So can we do this: since we have a separate patch that removes the
active field ([PATCH 7/13]), fold patch 7 and this patch into a single
patch? That way correctness is maintained, because we really do wait for
the active work to complete via the while (acb->ret == -EINPROGRESS)
loop. (Rough sketch at the end of this mail.)

-arun

> Stefan
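P.S. Roughly what I have in mind for the folded patch (completely
untested sketch; it assumes patch 7 has removed the active field, so
cancellation relies only on dequeue_work failing for in-flight work
plus the aiocb_completion condition variable):

static void paio_cancel(BlockDriverAIOCB *blockacb)
{
    struct qemu_paiocb *acb = (struct qemu_paiocb *)blockacb;

    if (dequeue_work(&acb->work) == 0) {
        /* Still queued: no worker ever saw it, so mark it cancelled */
        qemu_mutex_lock(&aiocb_mutex);
        acb->ret = -ECANCELED;
        qemu_mutex_unlock(&aiocb_mutex);
    } else {
        /* A worker picked it up: wait for the request to finish */
        qemu_mutex_lock(&aiocb_mutex);
        while (acb->ret == -EINPROGRESS) {
            qemu_cond_wait(&aiocb_completion, &aiocb_mutex);
        }
        qemu_mutex_unlock(&aiocb_mutex);
    }

    paio_remove(acb);
}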