From: Fam Zheng <famz@redhat.com>
To: Peter Lieven <pl@kamp.de>
Cc: kwolf@redhat.com, qemu-block@nongnu.org, stefanha@gmail.com,
jcody@redhat.com, qemu-devel@nongnu.org, jsnow@redhat.com
Subject: Re: [Qemu-devel] [PATCH V3 5/6] ide: enable buffered requests for ATAPI devices
Date: Thu, 12 Nov 2015 19:25:00 +0800 [thread overview]
Message-ID: <20151112112500.GT4082@ad.usersys.redhat.com> (raw)
In-Reply-To: <1446799373-6144-6-git-send-email-pl@kamp.de>
On Fri, 11/06 09:42, Peter Lieven wrote:
> Signed-off-by: Peter Lieven <pl@kamp.de>
> ---
> hw/ide/atapi.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/hw/ide/atapi.c b/hw/ide/atapi.c
> index 29fd131..2f6d018 100644
> --- a/hw/ide/atapi.c
> +++ b/hw/ide/atapi.c
> @@ -190,8 +190,8 @@ static int cd_read_sector(IDEState *s, void *buf)
>      block_acct_start(blk_get_stats(s->blk), &s->acct,
>                       4 * BDRV_SECTOR_SIZE, BLOCK_ACCT_READ);
>
> -    blk_aio_readv(s->blk, (int64_t)s->lba << 2, &s->qiov, 4,
> -                  cd_read_sector_cb, s);
> +    ide_buffered_readv(s, (int64_t)s->lba << 2, &s->qiov, 4,
> +                       cd_read_sector_cb, s);
>
>      s->status |= BUSY_STAT;
>      return 0;
> @@ -424,9 +424,9 @@ static void ide_atapi_cmd_read_dma_cb(void *opaque, int ret)
>      s->bus->dma->iov.iov_len = n * 4 * 512;
>      qemu_iovec_init_external(&s->bus->dma->qiov, &s->bus->dma->iov, 1);
>
> -    s->bus->dma->aiocb = blk_aio_readv(s->blk, (int64_t)s->lba << 2,
> -                                       &s->bus->dma->qiov, n * 4,
> -                                       ide_atapi_cmd_read_dma_cb, s);
> +    s->bus->dma->aiocb = ide_buffered_readv(s, (int64_t)s->lba << 2,
> +                                            &s->bus->dma->qiov, n * 4,
> +                                            ide_atapi_cmd_read_dma_cb, s);
IIRC the DMA aiocb is still going to be drained in bmdma_cmd_writeb, so why do
we need the bounce buffer here?
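
For reference, the cancel path I have in mind looks roughly like the sketch
below. This is simplified from memory, not the exact hw/ide/pci.c code; the
point is only that clearing BM_CMD_START drains the outstanding aiocb instead
of cancelling it mid-transfer:

    static void bmdma_cmd_writeb(BMDMAState *bm, uint32_t val)
    {
        /* Only act when the guest toggles the start/stop bit. */
        if ((val & BM_CMD_START) != (bm->cmd & BM_CMD_START)) {
            if (!(val & BM_CMD_START)) {
                /* Cancelling in the middle of a scatter/gather transfer
                 * could leave a partial transfer visible, so wait for
                 * the in-flight request to complete instead. */
                blk_drain_all();
                /* After the drain the completion callback should have
                 * run and cleared the aiocb. */
                assert(bm->bus->dma->aiocb == NULL);
            }
        }

        bm->cmd = val & 0x09;
    }

So by the time the guest sees the DMA engine stopped, the read has already
completed into its buffer, which is why I'm asking what the bounce buffer
buys us on this path.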
>      return;
>
>  eot:
> --
> 1.9.1
>
>
Thread overview: 17+ messages
2015-11-06 8:42 [Qemu-devel] [PATCH V3 0/6] ide: avoid main-loop hang on CDROM/NFS failure Peter Lieven
2015-11-06 8:42 ` [Qemu-devel] [PATCH V3 1/6] ide/atapi: make PIO read requests async Peter Lieven
2015-11-09 23:35 ` John Snow
2015-11-06 8:42 ` [Qemu-devel] [PATCH V3 2/6] block: add blk_abort_aio_request Peter Lieven
2015-11-12 8:17 ` Fam Zheng
2015-11-06 8:42 ` [Qemu-devel] [PATCH V3 3/6] ide: add support for IDEBufferedRequest Peter Lieven
2015-11-12 9:57 ` Fam Zheng
2015-11-12 10:21 ` Peter Lieven
2015-11-06 8:42 ` [Qemu-devel] [PATCH V3 4/6] ide: orphan all buffered requests on DMA cancel Peter Lieven
2015-11-12 8:27 ` Fam Zheng
2015-11-12 8:45 ` Peter Lieven
2015-11-06 8:42 ` [Qemu-devel] [PATCH V3 5/6] ide: enable buffered requests for ATAPI devices Peter Lieven
2015-11-12 11:25 ` Fam Zheng [this message]
2015-11-12 11:42 ` Peter Lieven
2015-11-06 8:42 ` [Qemu-devel] [PATCH V3 6/6] ide: enable buffered requests for PIO read requests Peter Lieven
2015-11-12 11:33 ` [Qemu-devel] [PATCH V3 0/6] ide: avoid main-loop hang on CDROM/NFS failure Fam Zheng
2015-11-12 11:46 ` Peter Lieven