From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH] honor IDE_DMA_BUF_SECTORS
Date: Thu, 26 Mar 2009 11:45:59 +0000 [thread overview]
Message-ID: <49CB6AF7.3080604@eu.citrix.com> (raw)
In-Reply-To: <49CB5FA0.10101@redhat.com>
Avi Kivity wrote:
> If cpu_physical_memory_map() returns NULL, then dma-helpers.c will stop
> collecting sg entries and submit the I/O. Tuning that will control how
> vectored requests are submitted.
>
I understand your suggestion now, something like:
---
diff --git a/dma-helpers.c b/dma-helpers.c
index 96a120c..6c43b97 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -96,6 +96,11 @@ static void dma_bdrv_cb(void *opaque, int ret)
     while (dbs->sg_cur_index < dbs->sg->nsg) {
         cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
         cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
+        if (dbs->iov.size + cur_len > DMA_LIMIT) {
+            cur_len = DMA_LIMIT - dbs->iov.size;
+            if (cur_len <= 0)
+                break;
+        }
         mem = cpu_physical_memory_map(cur_addr, &cur_len, !dbs->is_write);
         if (!mem)
             break;
---
would work for me.
However, it is difficult to put that code inside cpu_physical_memory_map
itself, since I don't have any reference there to link together all the
mapping requests that belong to the same DMA transfer.
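For illustration, the current interface looks roughly like this; the
second prototype is purely hypothetical, just to show the kind of
per-transfer handle that would be needed:

    /* Existing interface: each mapping request stands alone, so there
     * is nothing identifying the DMA transfer it belongs to. */
    void *cpu_physical_memory_map(target_phys_addr_t addr,
                                  target_phys_addr_t *plen, int is_write);

    /* Hypothetical variant -- not something I am proposing, only to
     * illustrate what enforcing a per-transfer limit inside the mapping
     * code would require: */
    void *cpu_physical_memory_map_bounded(void *dma_transfer, /* made up */
                                          target_phys_addr_t addr,
                                          target_phys_addr_t *plen,
                                          int is_write);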
> If your problem is specifically with the bdrv_aio_rw_vector bounce
> buffer, then note that this is a temporary measure until vectored aio is
> in place, through preadv/pwritev and/or linux-aio IO_CMD_PREADV. You
> should either convert to that when it is merged, or implement request
> splitting in bdrv_aio_rw_vector.
>
> Can you explain your problem in more detail?
My problem is that my block driver has a size limit for read and write
operations.
When preadv/pwritev are in place I could limit the transfer size
directly in raw_aio_preadv/pwritev, but I would also have to update the
iovector size field to reflect that, and I think that is a little bit ugly.
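Roughly what I mean (raw_aio_preadv is not merged yet, so the function,
the MAX_REQUEST_SECTORS limit and the shape of the code below are all
assumptions, a sketch only):

    /* Inside a hypothetical raw_aio_preadv(), whose signature would
     * presumably mirror bdrv_aio_readv(); MAX_REQUEST_SECTORS stands
     * for the block driver's limit and is made up for this sketch. */
    if (nb_sectors > MAX_REQUEST_SECTORS) {
        /* Clamping the sector count is easy enough... */
        nb_sectors = MAX_REQUEST_SECTORS;
        /* ...but qiov->size and the individual iov entries still
         * describe the full, unclamped request, so they would have to
         * be shrunk here as well to stay consistent -- that is the
         * part I find ugly. */
    }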
Thread overview: 29+ messages
2009-03-25 13:45 [Qemu-devel] [PATCH] honor IDE_DMA_BUF_SECTORS Stefano Stabellini
2009-03-25 15:22 ` Avi Kivity
2009-03-25 16:19 ` Stefano Stabellini
2009-03-25 16:45 ` Avi Kivity
2009-03-25 16:50 ` Stefano Stabellini
2009-03-25 17:47 ` Stefano Stabellini
2009-03-26 10:23 ` Avi Kivity
2009-03-26 10:31 ` Stefano Stabellini
2009-03-26 10:57 ` Avi Kivity
2009-03-26 11:45 ` Stefano Stabellini [this message]
2009-03-26 12:10 ` Avi Kivity
2009-03-26 12:28 ` Stefano Stabellini
2009-03-26 12:47 ` Samuel Thibault
2009-03-26 12:58 ` Avi Kivity
2009-03-26 15:30 ` Samuel Thibault
2009-03-26 18:32 ` Avi Kivity
2009-03-26 18:48 ` Samuel Thibault
2009-03-26 19:40 ` Avi Kivity
2009-03-26 23:18 ` Samuel Thibault
2009-03-27 9:52 ` Avi Kivity
2009-03-27 10:32 ` Samuel Thibault
2009-03-27 10:53 ` Avi Kivity
2009-03-27 13:45 ` Samuel Thibault
2009-03-26 22:42 ` Christoph Hellwig
2009-03-26 23:22 ` Samuel Thibault
2009-03-27 10:02 ` Avi Kivity
2009-03-27 10:36 ` Samuel Thibault
2009-03-27 10:58 ` Avi Kivity
2009-03-25 16:46 ` Samuel Thibault