From mboxrd@z Thu Jan 1 00:00:00 1970
Sender: Paolo Bonzini
Message-ID: <541AA311.4080606@redhat.com>
Date: Thu, 18 Sep 2014 11:17:05 +0200
From: Paolo Bonzini
References: <1411007799-23199-1-git-send-email-famz@redhat.com> <1411007799-23199-3-git-send-email-famz@redhat.com>
In-Reply-To: <1411007799-23199-3-git-send-email-famz@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 2/3] scsi: Introduce scsi_req_cancel_async
To: Fam Zheng , qemu-devel@nongnu.org
Cc: Kevin Wolf , Stefan Hajnoczi

On 18/09/2014 04:36, Fam Zheng wrote:
> +    QTAILQ_FOREACH_SAFE(ref, &req->cancel_deps, next, next) {
> +        SCSIRequest *r = ref->req;
> +        assert(r->cancel_dep_count);
> +        r->cancel_dep_count--;
> +        if (!r->cancel_dep_count && req->bus->info->cancel_dep_complete) {
> +            req->bus->info->cancel_dep_complete(r);
> +        }
> +        QTAILQ_REMOVE(&req->cancel_deps, ref, next);
> +        scsi_req_unref(r);
> +        g_free(ref);
> +    }

I think there is one problem here. scsi_req_cancel_async can actually be
synchronous if you're unlucky (because bdrv_aio_cancel_async can be
synchronous too, for example in the case of a linux-aio AIOCB).
So you could end up calling cancel_dep_complete even if the caller
intends to cancel more requests. I think it's better to track the count
in virtio-scsi instead. You can initialize it similarly to
bdrv_aio_multiwrite:

    /* Run the aio requests. */
    mcb->num_requests = num_reqs;
    for (i = 0; i < num_reqs; i++) {
        bdrv_co_aio_rw_vector(bs, reqs[i].sector, reqs[i].qiov,
                              reqs[i].nb_sectors, reqs[i].flags,
                              multiwrite_cb, mcb, true);
    }

    return 0;

and decrement the count on every call to the notifier. This is
independent of the choice to make bdrv_aio_cancel_async semantics
stricter. In the case of bdrv_aio_multiwrite, we know that
bdrv_co_aio_rw_vector is never synchronous, but the code is simply nicer
that way. :)

Paolo
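For illustration, here is a minimal, self-contained sketch of the counting
pattern suggested above: set the counter to the total number of requests
*before* starting any cancellation, so a notifier that fires synchronously
(the linux-aio case) can never drive the count to zero while more cancels
are still pending. The names (TMFReq, cancel_cb, fake_cancel_async) are
hypothetical and not QEMU API; this is not the virtio-scsi implementation,
just the shape of the fix.

```c
/* Sketch of the up-front counting pattern; all names are illustrative. */
#include <assert.h>

typedef struct TMFReq {
    int remaining;   /* outstanding cancellations still to complete */
    int completed;   /* set once every cancellation has finished */
} TMFReq;

/* Notifier called once per cancelled request; it may run synchronously,
 * from inside the cancel call itself. */
static void cancel_cb(TMFReq *tmf)
{
    assert(tmf->remaining > 0);
    if (--tmf->remaining == 0) {
        tmf->completed = 1;   /* all dependencies drained */
    }
}

/* Stand-in for an "async" cancel that happens to complete immediately,
 * mimicking the linux-aio AIOCB case mentioned above. */
static void fake_cancel_async(TMFReq *tmf)
{
    cancel_cb(tmf);
}

static void cancel_all(TMFReq *tmf, int num_reqs)
{
    int i;

    /* Initialize the count before launching any cancel, the same way
     * bdrv_aio_multiwrite sets mcb->num_requests before the loop.
     * A per-iteration increment instead would let a synchronous
     * callback see the count hit zero after the first request. */
    tmf->remaining = num_reqs;
    tmf->completed = 0;
    for (i = 0; i < num_reqs; i++) {
        fake_cancel_async(tmf);
    }
}
```

Even though every fake cancel here completes synchronously, completed is
only set after the last one, because remaining starts at num_reqs.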