From: "Blue Swirl" <blauwirbel@gmail.com>
To: qemu-devel <qemu-devel@nongnu.org>
Subject: [Qemu-devel] Re: Faster, generic IO/DMA model with vectored AIO?
Date: Sat, 27 Oct 2007 19:53:16 +0300
Message-ID: <f43fc5580710270953h2b06c7abqce0a71c0c5a31425@mail.gmail.com>
In-Reply-To: <f43fc5580710270556j15805369x334879e501a48e06@mail.gmail.com>
[-- Attachment #1: Type: text/plain, Size: 1077 bytes --]
On 10/27/07, Blue Swirl <blauwirbel@gmail.com> wrote:
> I changed Slirp output to use vectored IO to avoid the slowdown from
> memcpy (see the patch for the work in progress; it gives a small
> performance improvement). But then I got the idea that using AIO would
> be nice at the outgoing end of the network IO processing. In fact, a
> vectored AIO model could even be used for generic DMA! The benefit
> is that no buffering or copying should be needed.
I made a sketch of the API; please have a look at the attached patch.
> Each stage would translate the IO list and callback as needed, and only
> the final stage would perform the IO or memcpy. This would be used in
> each stage of the chain memory<->IOMMU<->device<->SLIRP<->host network
> device. Of course some kind of host support for vectored AIO for these
> devices is required. On the target side, devices that can do
> scatter/gather DMA would benefit most.
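
For illustration only, one intermediate stage, say an IOMMU sitting between
the device bus and memory, might look roughly like the sketch below with the
proposed API: it only rewrites the addresses in the vector and forwards it
north, so no data is touched until the final stage. iommu_translate(), the
iommu_state fields and the handler name are invented placeholders, not part
of the patch.

typedef struct {
    qemu_bus *north;      /* next bus towards CPU/memory */
    void *pagetable;      /* whatever state the translation really needs */
} iommu_state;

/* Placeholder for the real IOMMU page table lookup. */
static target_phys_addr_t iommu_translate(iommu_state *s,
                                          target_phys_addr_t addr)
{
    return addr;
}

/* Handler for the write direction: rewrite the destination addresses and
   forward both vectors; the data itself is never copied here. */
static void iommu_write_handler(void *opaque,
                                const struct qemu_iovec *dst_vector,
                                int dst_count,
                                const struct qemu_iovec *src_vector,
                                int src_count)
{
    iommu_state *s = opaque;
    struct qemu_iovec dst[dst_count];   /* VLA just to keep the sketch short */
    int i;

    for (i = 0; i < dst_count; i++) {
        dst[i].iov_base = iommu_translate(s, dst_vector[i].iov_base);
        dst[i].iov_len = dst_vector[i].iov_len;
    }
    bus_write_north(s->north, dst, dst_count, src_vector, src_count);
}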
Inside Qemu the vectors would use target physical addresses (struct
qemu_iovec), but at some point along the chain they would be translated
into host pointers suitable for real AIO.
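
To make that last step concrete, the final stage could look roughly like the
sketch below. It flushes a translated vector with plain writev() instead of
real AIO to keep things short, and phys_to_host() stands in for whatever
guest-physical-to-host-pointer translation we end up with; both are
placeholders, not part of the patch.

#include <sys/uio.h>

/* Placeholder: map a guest physical address to a host pointer. */
void *phys_to_host(target_phys_addr_t addr);

/* Final stage: convert the target-physical vector to a host struct iovec
   and issue one vectored write.  A real implementation would submit
   vectored AIO instead (e.g. lio_listio) and call the
   DMADriverCompletionFunc when the transfer completes. */
static ssize_t flush_vector(int fd, const struct qemu_iovec *vec, int count)
{
    struct iovec host_iov[count];   /* VLA just to keep the sketch short */
    int i;

    for (i = 0; i < count; i++) {
        host_iov[i].iov_base = phys_to_host(vec[i].iov_base);
        host_iov[i].iov_len = vec[i].iov_len;
    }
    return writev(fd, host_iov, count);
}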
[-- Attachment #2: gdma_aiov.diff --]
[-- Type: text/x-diff; name=gdma_aiov.diff, Size: 3246 bytes --]
Index: qemu/vl.h
===================================================================
--- qemu.orig/vl.h 2007-10-27 14:58:09.000000000 +0000
+++ qemu/vl.h 2007-10-27 16:51:40.000000000 +0000
@@ -746,6 +746,85 @@
#include "hw/irq.h"
+/* Generic DMA API */
+
+typedef void DMADriverCompletionFunc(void *opaque, int ret);
+
+struct qemu_iovec {
+    target_phys_addr_t iov_base;
+    size_t iov_len;
+};
+
+typedef struct qemu_bus qemu_bus;
+
+typedef struct DMADriverAIOCB {
+    void *opaque;
+    int type;
+    int nent;
+    struct aiocb **aiocb;
+    DMADriverCompletionFunc *cb;
+    struct DMADriverAIOCB *next;
+} DMADriverAIOCB;
+
+typedef void DMARWHandler(void *opaque,
+                          const struct qemu_iovec *dst_vector,
+                          int dst_count,
+                          const struct qemu_iovec *src_vector,
+                          int src_count);
+
+qemu_bus *bus_init(unsigned int bus_bits, DMARWHandler north_handler,
+                   void *north_handler_opaque, DMARWHandler south_handler,
+                   void *south_handler_opaque);
+
+/* Direction CPU->bridge->device/memory */
+void bus_rw_south(qemu_bus *bus,
+                  const struct qemu_iovec *dst_vector,
+                  int dst_count,
+                  const struct qemu_iovec *src_vector,
+                  int src_count,
+                  int is_write);
+
+static inline void bus_read_south(qemu_bus *bus,
+                                  const struct qemu_iovec *dst_vector,
+                                  int dst_count,
+                                  const struct qemu_iovec *src_vector,
+                                  int src_count)
+{
+    bus_rw_south(bus, dst_vector, dst_count, src_vector, src_count, 0);
+}
+static inline void bus_write_south(qemu_bus *bus,
+                                   const struct qemu_iovec *dst_vector,
+                                   int dst_count,
+                                   const struct qemu_iovec *src_vector,
+                                   int src_count)
+{
+    bus_rw_south(bus, dst_vector, dst_count, src_vector, src_count, 1);
+}
+/* From device towards CPU/memory (DMA) */
+void bus_rw_north(qemu_bus *bus,
+                  const struct qemu_iovec *dst_vector,
+                  int dst_count,
+                  const struct qemu_iovec *src_vector,
+                  int src_count,
+                  int is_write);
+
+static inline void bus_read_north(qemu_bus *bus,
+                                  const struct qemu_iovec *dst_vector,
+                                  int dst_count,
+                                  const struct qemu_iovec *src_vector,
+                                  int src_count)
+{
+    bus_rw_north(bus, dst_vector, dst_count, src_vector, src_count, 0);
+}
+static inline void bus_write_north(qemu_bus *bus,
+                                   const struct qemu_iovec *dst_vector,
+                                   int dst_count,
+                                   const struct qemu_iovec *src_vector,
+                                   int src_count)
+{
+    bus_rw_north(bus, dst_vector, dst_count, src_vector, src_count, 1);
+}
+
/* ISA bus */
extern target_phys_addr_t isa_mem_base;
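
For what it's worth, here is a rough usage example of the proposed calls
from the device side: a scatter/gather-capable device model gathers the
buffer addresses from its descriptor ring into a qemu_iovec list and hands
the whole transfer to the bus in one call. The descriptor layout and all
names below are invented for illustration and are not part of the patch.

#define SKETCH_NB_DESC 16

/* Invented descriptor layout, for illustration only. */
struct sketch_desc {
    target_phys_addr_t buf_addr;   /* guest physical buffer address */
    size_t buf_len;
};

struct sketch_nic {
    qemu_bus *bus;                 /* bus handle from bus_init() */
    struct sketch_desc ring[SKETCH_NB_DESC];
    int n_desc;
};

/* DMA a received packet (already described as a qemu_iovec list) into the
   guest buffers; the device model itself never copies the data. */
static void sketch_nic_receive(struct sketch_nic *s,
                               const struct qemu_iovec *pkt, int pkt_count)
{
    struct qemu_iovec dst[SKETCH_NB_DESC];
    int i;

    for (i = 0; i < s->n_desc; i++) {
        dst[i].iov_base = s->ring[i].buf_addr;
        dst[i].iov_len = s->ring[i].buf_len;
    }
    bus_write_north(s->bus, dst, s->n_desc, pkt, pkt_count);
}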