From: Paul Brook <paul@codesourcery.com>
To: qemu-devel@nongnu.org
Cc: kvm-devel@lists.sourceforge.net,
Marcelo Tosatti <marcelo@kvack.org>,
Anthony Liguori <aliguori@us.ibm.com>,
Aurelien Jarno <aurelien@aurel32.net>
Subject: Re: [Qemu-devel] [PATCH 2/6] PCI DMA API (v2)
Date: Mon, 7 Apr 2008 10:44:41 -0500
Message-ID: <200804071044.42048.paul@codesourcery.com>
In-Reply-To: <1207368175-19476-2-git-send-email-aliguori@us.ibm.com>
> +/* Return a new IOVector that's a subset of the passed in IOVector. It should
> + * be freed with qemu_free when you are done with it. */
> +IOVector *iovector_trim(const IOVector *iov, size_t offset, size_t size);
Using qemu_free directly seems like a bad idea. I guess we're likely to want to
switch to a different memory allocation scheme in the future.
The comment is also potentially misleading, because iovector_new() doesn't
mention anything about having to free the vector.
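A dedicated free routine would avoid both problems. A minimal sketch of what I
mean (iovector_free is an invented name, nothing in the patch):

/* Hypothetical helper: callers release vectors through one routine instead
 * of calling qemu_free directly, so the allocation scheme can change later
 * without touching every caller. */
void iovector_free(IOVector *iov)
{
    qemu_free(iov);
}

Then iovector_new()/iovector_trim() can just say "free with iovector_free()"
and the allocator stays an implementation detail.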
> +int bdrv_readv(BlockDriverState *bs, int64_t sector_num,
>...
> + size = iovector_size(iovec);
> + buffer = qemu_malloc(size);
This concerns me for two reasons:
(a) I'm always suspicious about the performance implications of using malloc on
a hot path.
(b) The size of the buffer is unbounded. I'd expect multi-megabyte transfers to
be common, and gigabyte-sized operations are plausible.
At a minimum you need a comment acknowledging that these issues have been
considered.
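If the bounce buffer stays, copying through a fixed-size buffer in bounded
chunks would at least address (b). A rough sketch only, with
copy_buffer_to_iovector as an invented helper and the chunk size picked
arbitrarily:

/* Rough sketch (not from the patch): read through a fixed-size bounce
 * buffer instead of allocating the whole transfer at once. */
#define DMA_BOUNCE_BYTES (64 * 1024)   /* arbitrary; multiple of 512 */

static int bdrv_readv_bounced(BlockDriverState *bs, int64_t sector_num,
                              IOVector *iovec)
{
    uint8_t buffer[DMA_BOUNCE_BYTES];
    size_t total = iovector_size(iovec);
    size_t done = 0;

    while (done < total) {
        size_t len = total - done;

        if (len > DMA_BOUNCE_BYTES)
            len = DMA_BOUNCE_BYTES;
        /* assumes the transfer is a whole number of 512-byte sectors */
        if (bdrv_read(bs, sector_num + done / 512, buffer, len / 512) < 0)
            return -1;
        /* invented helper: scatter len bytes into the vector at offset done */
        copy_buffer_to_iovector(iovec, done, buffer, len);
        done += len;
    }
    return 0;
}

That doesn't fix (a), but it puts a hard bound on the allocation regardless of
how large the guest's request is.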
> +void *cpu_map_physical_page(target_phys_addr_t addr)
> + /* DMA'ing to MMIO, just skip */
> + phys_offset = cpu_get_physical_page_desc(addr);
> + if ((phys_offset & ~TARGET_PAGE_MASK) != IO_MEM_RAM)
> + return NULL;
This is not OK. It's fairly common for smaller devices to use a separate DMA
engine that writes to an MMIO region. You also never check the return value of
this function, so it will crash qemu.
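At the very least the call sites need to handle the NULL. A sketch of what
that looks like, assuming the function returns a host pointer to the start of
the page (or NULL for MMIO) and the access doesn't cross a page boundary:

/* Sketch only: check the NULL return instead of dereferencing it, and fall
 * back to the slow path, which also copes with MMIO targets. */
static void dma_write_to_guest(target_phys_addr_t addr, const uint8_t *buf,
                               int len)
{
    void *page = cpu_map_physical_page(addr);

    if (page == NULL) {
        /* MMIO (or otherwise unmappable): go through the normal memory ops */
        cpu_physical_memory_write(addr, (uint8_t *)buf, len);
        return;
    }
    memcpy((uint8_t *)page + (addr & ~TARGET_PAGE_MASK), buf, len);
}

Even then, silently skipping the copy for MMIO is wrong for devices that DMA
into another device's registers; the fallback has to actually perform the
access.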
> +void pci_device_dma_unmap(PCIDevice *s, const IOVector *orig,
This function should not exist. Dirty bits should be set by the memcpy
routines.
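i.e. whatever routine copies data into guest memory should update the dirty
bitmap (and cope with MMIO) itself. A rough sketch of the shape I mean,
assuming the IOVector carries (base, len) pairs in sg[]/num fields, which may
not match the patch exactly:

/* Rough sketch (field and function names assumed): copy into each element
 * of the vector through the normal memory ops, which already mark RAM pages
 * dirty, so no separate unmap/dirty pass is needed afterwards. */
static void dma_copy_to_iovector(const IOVector *iov, const uint8_t *buf,
                                 size_t len)
{
    size_t done = 0;
    int i;

    for (i = 0; i < iov->num && done < len; i++) {
        size_t n = iov->sg[i].len;

        if (n > len - done)
            n = len - done;
        cpu_physical_memory_write(iov->sg[i].base, (uint8_t *)(buf + done), n);
        done += n;
    }
}

If a fast path memcpy's straight into mapped RAM instead, it should set the
dirty bits at the point of the copy, not in a later unmap call.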
Paul
Thread overview: 15+ messages
2008-04-05 4:02 [Qemu-devel] [PATCH 1/6] Use ram_addr_t for cpu_get_physical_page_desc (v2) Anthony Liguori
2008-04-05 4:02 ` [Qemu-devel] [PATCH 2/6] PCI DMA API (v2) Anthony Liguori
2008-04-05 4:02 ` [Qemu-devel] [PATCH 3/6] virtio for QEMU (v2) Anthony Liguori
2008-04-05 4:02 ` [Qemu-devel] [PATCH 4/6] virtio network driver (v2) Anthony Liguori
2008-04-05 4:02 ` [Qemu-devel] [PATCH 5/6] virtio block " Anthony Liguori
2008-04-05 4:02 ` [Qemu-devel] [PATCH 6/6] virtio balloon " Anthony Liguori
2008-04-06 6:57 ` [Qemu-devel] [PATCH 2/6] PCI DMA API (v2) Blue Swirl
2008-04-06 15:22 ` [kvm-devel] " Anthony Liguori
2008-04-06 17:01 ` andrzej zaborowski
2008-04-06 17:46 ` Anthony Liguori
2008-04-07 0:40 ` Paul Brook
2008-04-07 8:32 ` andrzej zaborowski
2008-04-07 15:44 ` Paul Brook [this message]
2008-04-07 3:41 ` [Qemu-devel] [PATCH 1/6] Use ram_addr_t for cpu_get_physical_page_desc (v2) Paul Brook
2008-04-07 13:22 ` Anthony Liguori