From: Peter Lieven
To: Stefan Hajnoczi
Cc: kwolf@redhat.com, jcody@redhat.com, jsnow@redhat.com, qemu-devel@nongnu.org, qemu-block@nongnu.org
Date: Tue, 27 Oct 2015 11:58:55 +0100
Message-ID: <562F58EF.9050709@kamp.de>
In-Reply-To: <20151026103949.GA20111@stefanha-x1.localdomain>
Subject: Re: [Qemu-devel] [PATCH 3/4] ide: add support for cancelable read requests

On 26.10.2015 at 11:39, Stefan Hajnoczi wrote:
> On Mon, Oct 12, 2015 at 02:27:24PM +0200, Peter Lieven wrote:
>> this patch adds a new aio readv compatible function which copies
>> all data through a bounce buffer. The benefit is that these requests
>> can be flagged as canceled to avoid guest memory corruption when
>> a canceled request is completed by the backend at a later stage.
>>
>> If an IDE protocol wants to use this function it has to pipe
>> all read requests through ide_readv_cancelable and it may then
>> enable requests_cancelable in the IDEState.
>>
>> If this state is enabled we can avoid the blocking blk_drain_all
>> in case of a BMDMA reset.
>>
>> Currently only read operations are cancelable thus we can only
>> use this logic for read-only devices.
> Naming is confusing here. Requests are already "cancelable" using
> bdrv_aio_cancel().
>
> Please use a different name, for example "orphan" requests. These are
> requests that QEMU still knows about but the guest believes are
> complete. Or maybe "IDEBufferedRequest" since data is transferred
> through a bounce buffer.
>
>> Signed-off-by: Peter Lieven
>> ---
>>  hw/ide/core.c     | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  hw/ide/internal.h | 16 ++++++++++++++++
>>  hw/ide/pci.c      | 42 ++++++++++++++++++++++++++++--------------
>>  3 files changed, 98 insertions(+), 14 deletions(-)
>>
>> diff --git a/hw/ide/core.c b/hw/ide/core.c
>> index 317406d..24547ce 100644
>> --- a/hw/ide/core.c
>> +++ b/hw/ide/core.c
>> @@ -561,6 +561,59 @@ static bool ide_sect_range_ok(IDEState *s,
>>      return true;
>>  }
>>
>> +static void ide_readv_cancelable_cb(void *opaque, int ret)
>> +{
>> +    IDECancelableRequest *req = opaque;
>> +    if (!req->canceled) {
>> +        if (!ret) {
>> +            qemu_iovec_from_buf(req->org_qiov, 0, req->buf, req->org_qiov->size);
>> +        }
>> +        req->org_cb(req->org_opaque, ret);
>> +    }
>> +    QLIST_REMOVE(req, list);
>> +    qemu_vfree(req->buf);
>> +    qemu_iovec_destroy(&req->qiov);
>> +    g_free(req);
>> +}
>> +
>> +#define MAX_CANCELABLE_REQS 16
>> +
>> +BlockAIOCB *ide_readv_cancelable(IDEState *s, int64_t sector_num,
>> +                                 QEMUIOVector *iov, int nb_sectors,
>> +                                 BlockCompletionFunc *cb, void *opaque)
>> +{
>> +    BlockAIOCB *aioreq;
>> +    IDECancelableRequest *req;
>> +    int c = 0;
>> +
>> +    QLIST_FOREACH(req, &s->cancelable_requests, list) {
>> +        c++;
>> +    }
>> +    if (c > MAX_CANCELABLE_REQS) {
>> +        return NULL;
>> +    }
> A BH is probably needed here to schedule a cb(-EIO) call since this
> function isn't supposed to return NULL if it's a direct replacement for
> blk_aio_readv().
You mean something like:

    acb = qemu_aio_get(&bdrv_em_aiocb_info, bs, cb, opaque);
    acb->bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_aio_bh_cb, acb);
    acb->ret = -EIO;
    qemu_bh_schedule(acb->bh);
    return &acb->common;

>
>> +
>> +    req = g_new0(IDECancelableRequest, 1);
>> +    qemu_iovec_init(&req->qiov, 1);
> It saves a g_new() call if you add a struct iovec field to
> IDECancelableRequest and use qemu_iovec_init_external() instead of
> qemu_iovec_init().
>
> The qemu_iovec_destroy() calls must be dropped when an external struct
> iovec is used.
>
> The qemu_iovec_init_external() call must be moved after the
> qemu_blockalign() and struct iovec setup below.

Okay.

>
>> +    req->buf = qemu_blockalign(blk_bs(s->blk), iov->size);
>> +    qemu_iovec_add(&req->qiov, req->buf, iov->size);
>> +    req->org_qiov = iov;
>> +    req->org_cb = cb;
>> +    req->org_opaque = opaque;
>> +
>> +    aioreq = blk_aio_readv(s->blk, sector_num, &req->qiov, nb_sectors,
>> +                           ide_readv_cancelable_cb, req);
>> +    if (aioreq == NULL) {
>> +        qemu_vfree(req->buf);
>> +        qemu_iovec_destroy(&req->qiov);
>> +        g_free(req);
>> +    } else {
>> +        QLIST_INSERT_HEAD(&s->cancelable_requests, req, list);
>> +    }
>> +
>> +    return aioreq;
>> +}
>> +
>> static void ide_sector_read(IDEState *s);
>>
>> static void ide_sector_read_cb(void *opaque, int ret)
>> @@ -805,6 +858,7 @@ void ide_start_dma(IDEState *s, BlockCompletionFunc *cb)
>>      s->bus->retry_unit = s->unit;
>>      s->bus->retry_sector_num = ide_get_sector(s);
>>      s->bus->retry_nsector = s->nsector;
>> +    s->bus->s = s;
> How is 's' different from 'unit' and 'retry_unit'?
>
> The logic for switching between units is already a little tricky since
> the guest can write to the hardware registers while requests are
> in-flight.
>
> Please don't duplicate "active unit" state, that increases the risk of
> inconsistencies.
>
> Can you use idebus_active_if() to get an equivalent IDEState pointer
> without storing s?

That should be possible.
>
>>     if (s->bus->dma->ops->start_dma) {
>>         s->bus->dma->ops->start_dma(s->bus->dma, s, cb);
>>     }
>> diff --git a/hw/ide/internal.h b/hw/ide/internal.h
>> index 05e93ff..ad188c2 100644
>> --- a/hw/ide/internal.h
>> +++ b/hw/ide/internal.h
>> @@ -343,6 +343,16 @@ enum ide_dma_cmd {
>> #define ide_cmd_is_read(s) \
>>     ((s)->dma_cmd == IDE_DMA_READ)
>>
>> +typedef struct IDECancelableRequest {
>> +    QLIST_ENTRY(IDECancelableRequest) list;
>> +    QEMUIOVector qiov;
>> +    uint8_t *buf;
>> +    QEMUIOVector *org_qiov;
>> +    BlockCompletionFunc *org_cb;
>> +    void *org_opaque;
> Please don't shorten names, original_* is clearer than org_*.

Ok.

>
>> +    bool canceled;
>> +} IDECancelableRequest;
>> +
>> /* NOTE: IDEState represents in fact one drive */
>> struct IDEState {
>>     IDEBus *bus;
>> @@ -396,6 +406,8 @@ struct IDEState {
>>     BlockAIOCB *pio_aiocb;
>>     struct iovec iov;
>>     QEMUIOVector qiov;
>> +    QLIST_HEAD(, IDECancelableRequest) cancelable_requests;
>> +    bool requests_cancelable;
>>     /* ATA DMA state */
>>     int32_t io_buffer_offset;
>>     int32_t io_buffer_size;
>> @@ -468,6 +480,7 @@ struct IDEBus {
>>     uint8_t retry_unit;
>>     int64_t retry_sector_num;
>>     uint32_t retry_nsector;
>> +    IDEState *s;
>> };
>>
>> #define TYPE_IDE_DEVICE "ide-device"
>> @@ -572,6 +585,9 @@ void ide_set_inactive(IDEState *s, bool more);
>> BlockAIOCB *ide_issue_trim(BlockBackend *blk,
>>     int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
>>     BlockCompletionFunc *cb, void *opaque);
>> +BlockAIOCB *ide_readv_cancelable(IDEState *s, int64_t sector_num,
>> +    QEMUIOVector *iov, int nb_sectors,
>> +    BlockCompletionFunc *cb, void *opaque);
>>
>> /* hw/ide/atapi.c */
>> void ide_atapi_cmd(IDEState *s);
>> diff --git a/hw/ide/pci.c b/hw/ide/pci.c
>> index d31ff88..5587183 100644
>> --- a/hw/ide/pci.c
>> +++ b/hw/ide/pci.c
>> @@ -240,21 +240,35 @@ void bmdma_cmd_writeb(BMDMAState *bm, uint32_t val)
>>     /* Ignore writes to SSBM if it keeps the old value */
>>     if ((val & BM_CMD_START) != (bm->cmd & BM_CMD_START)) {
>>         if (!(val & BM_CMD_START)) {
>> -            /*
>> -             * We can't cancel Scatter Gather DMA in the middle of the
>> -             * operation or a partial (not full) DMA transfer would reach
>> -             * the storage so we wait for completion instead (we beahve
>> -             * like if the DMA was completed by the time the guest trying
>> -             * to cancel dma with bmdma_cmd_writeb with BM_CMD_START not
>> -             * set).
>> -             *
>> -             * In the future we'll be able to safely cancel the I/O if the
>> -             * whole DMA operation will be submitted to disk with a single
>> -             * aio operation with preadv/pwritev.
>> -             */
>>             if (bm->bus->dma->aiocb) {
>> -                blk_drain_all();
>> -                assert(bm->bus->dma->aiocb == NULL);
>> +                if (bm->bus->s && bm->bus->s->requests_cancelable) {
>> +                    /*
>> +                     * If the used IDE protocol supports request cancelation we
>> +                     * can flag requests as canceled here and disable DMA.
>> +                     * The IDE protocol used MUST use ide_readv_cancelable for all
>> +                     * read operations and then subsequently can enable this code
>> +                     * path. Currently this is only supported for read-only
>> +                     * devices.
>> +                     */
>> +                    IDECancelableRequest *req;
>> +                    QLIST_FOREACH(req, &bm->bus->s->cancelable_requests, list) {
>> +                        if (!req->canceled) {
>> +                            req->org_cb(req->org_opaque, -ECANCELED);
>> +                        }
>> +                        req->canceled = true;
>> +                    }
>> +                } else {
>> +                    /*
>> +                     * We can't cancel Scatter Gather DMA in the middle of the
>> +                     * operation or a partial (not full) DMA transfer would reach
>> +                     * the storage so we wait for completion instead (we beahve
>> +                     * like if the DMA was completed by the time the guest trying
>> +                     * to cancel dma with bmdma_cmd_writeb with BM_CMD_START not
>> +                     * set).
>> +                     */
>> +                    blk_drain_all();
>> +                    assert(bm->bus->dma->aiocb == NULL);
> This assertion applies in both branches of the if statement, it could be
> moved after the if statement.

Right. As pointed out in my comment to your question about write/discard, I
think it should be feasible to use buffered readv requests for all read-only
IDE devices.
The only thing I'm unsure about is reopening. A reopen seems to only flush
the device, not drain all requests.

Peter