From: Anthony Liguori <anthony@codemonkey.ws>
To: Avi Kivity <avi@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
chrisw@redhat.com, qemu-devel@nongnu.org, kvm@vger.kernel.org,
Gerd Hoffmann <kraxel@redhat.com>
Subject: Re: [Qemu-devel] Re: [PATCH 2 of 5] add can_dma/post_dma for direct IO
Date: Sun, 14 Dec 2008 13:10:43 -0600
Message-ID: <49455A33.207@codemonkey.ws>
In-Reply-To: <4944A1B5.5080300@redhat.com>
Avi Kivity wrote:
> Anthony Liguori wrote:
>>>
>>> There are N users of this code, all of which would need to cope with
>>> the failure. Or there could be one user (dma.c) which handles the
>>> failure and the bouncing.
>>>
>>
>> N should be small long term. It should only be for places that would
>> interact directly with CPU memory. This would be the PCI BUS, the
>> ISA BUS, some speciality devices, and possibly virtio (although you
>> could argue it should go through the PCI BUS).
>
> Fine, then let's rename it pci-dma.c.
Better, but then it should be part of pci.c. I've thought quite a bit
about it, and I'm becoming less convinced that this sort of API is going
to be helpful.
I was thinking that we need to make one minor change to the map API I
proposed: it should return the mapped size as an output parameter and
take a flag indicating whether the caller can handle partial mappings.
The effect would be that you never bounce RAM, which means you can also
quite accurately bound the maximum amount of bouncing (it should be
proportional to the amount of MMIO memory that's registered).
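Something along these lines (just a sketch; the names and exact types
are for illustration, not a final API):

  /* Map guest memory for direct access.  *len is the requested length
   * on input and the length actually mapped on output.  If
   * allow_partial is set, the caller promises to cope with a short
   * mapping; otherwise the call either maps the full range or fails. */
  void *map(target_phys_addr_t addr, target_phys_addr_t *len,
            int is_write, int allow_partial);

  /* Release a mapping.  access_len says how much was actually
   * transferred, so dirty tracking and bounce write-back stay exact. */
  void unmap(void *buffer, target_phys_addr_t len,
             int is_write, target_phys_addr_t access_len);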
For virtio, I think the only change we would make is to replace the
existing bouncing code with calls to map()/unmap() for each
scatter/gather array. You may need to cope with a mapped scatter/gather
list that is larger than the input vector, but that's easy.
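Roughly (hand-waving the details; sg/addr come straight from the
virtqueue element, and sg_split() is an invented helper that inserts a
new iovec entry for the unmapped tail):

  for (i = 0; i < num; i++) {
      target_phys_addr_t len = sg[i].iov_len;
      sg[i].iov_base = map(addr[i], &len, is_write, 1 /* partial ok */);
      if (len < sg[i].iov_len)
          sg_split(sg, addr, &num, i, len);  /* remainder maps next pass */
  }
  ...
  /* once the request completes */
  for (i = 0; i < num; i++)
      unmap(sg[i].iov_base, sg[i].iov_len, is_write, sg[i].iov_len);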
This means you always map complete requests, which is where I think it
fundamentally differs from what you propose. However, this is safe
because you should be able to guarantee that you can always bounce the
MMIO memory at least once. The only place where you could run into
trouble is if the same MMIO page is mapped multiple times in a single
scatter/gather list. You could, in theory, cache MMIO bounces, but
that's probably unnecessary.
So virtio would not need this dma API.
I think the same is true of IDE/SCSI. I think we can build code that
relies on being able to completely map a scatter/gather list, as long as
we make the map function sufficiently smart. As long as our bounce pool
is as large as the registered MMIO memory, we can always be safe if
we're sufficiently clever.
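To make "sufficiently smart" concrete (hypothetical helper names):

  /* A map that falls back to the bounce pool for MMIO.  Because the
   * pool is kept at least as large as all registered MMIO, the
   * fallback can never fail for a well-formed request. */
  void *smart_map(target_phys_addr_t addr, target_phys_addr_t *len,
                  int is_write)
  {
      if (addr_is_ram(addr, *len))
          return ram_ptr(addr);                 /* zero-copy path */
      return bounce_map(addr, len, is_write);   /* pool-backed path */
  }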
Xen is a more difficult case, but I'm perfectly willing to punt that to
them to figure out.
Regards,
Anthony Liguori