From: Gerd Hoffmann <kraxel@redhat.com>
To: Hannes Reinecke <hare@suse.de>
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org,
virtualization@lists.linux-foundation.org
Subject: Re: [Qemu-devel] [PATCH 0/4] megaraid_sas HBA emulation
Date: Wed, 28 Oct 2009 14:58:33 +0100 [thread overview]
Message-ID: <4AE84E09.7090002@redhat.com> (raw)
In-Reply-To: <4AE822E7.2010108@redhat.com>
Hi,
> From a really quick view fixing up the data xfer code paths doesn't
> look too bad. Think I'll give it a try.
Oh well. The interface was pretty obviously designed for the ESP, which
is the oldest SCSI adapter in qemu ...
ESP: There is no scatter-gather support in the hardware, so for large
reads/writes there are quite a few switches between the OS and the ESP:
the OS says "dma the next sectors to this location" via ioports, the ESP
does it and raises an IRQ when done, then the next round starts. The
existing callback mechanism models that pretty closely.
USB: streams the data in small packets (smaller than sector size, 64
bytes IIRC). The current interface works well enough here.
LSI: Jumps through quite a few hoops to work with the existing
interface. The current emulation reads one LSI script command at a time
and does reads/writes in small pieces like the ESP. I think it could do
a lot better: parse the LSI scripts into scatter lists and submit larger
requests. Maybe even have multiple requests in flight at the same time.
That probably means turning the LSI script parsing code upside down
though.
MEGASAS: I guess you have scatter lists at hand and want to submit them
directly to the block layer for zero-copy block I/O.
So, where to go from here?
I'm tempted to zap the complete read-in-pieces logic. For read/write
transfers, storage must be passed that everything fits into. The
completion callback is called on command completion and nothing else.
I think we'll need two modes here: xfer from/to a host-allocated bounce
buffer (linear buffer) and xfer from/to guest memory (scatter list).
That means (emulated) hardware without scatter-gather support must use
the bounce buffer mode and can't do zero-copy I/O. I don't think this is
a big problem though. Lots of small I/O requests don't perform very
well, so one big request filling the bounce buffer and then a memcpy()
from/to guest memory will most likely be faster anyway.
comments?
Gerd
Thread overview: 19+ messages
2009-10-27 15:26 [Qemu-devel] [PATCH 0/4] megaraid_sas HBA emulation Hannes Reinecke
2009-10-27 16:47 ` Gerd Hoffmann
2009-10-28 8:11 ` Hannes Reinecke
2009-10-28 8:20 ` Avi Kivity
2009-10-28 8:40 ` Christoph Hellwig
2009-10-28 10:54 ` Gerd Hoffmann
2009-10-28 13:58 ` Gerd Hoffmann [this message]
2009-10-28 19:25 ` Hannes Reinecke
2009-10-29 4:37 ` Christoph Hellwig
2009-10-29 8:47 ` Gerd Hoffmann
2009-10-29 12:57 ` Gerd Hoffmann
2009-10-29 14:57 ` Christoph Hellwig
2009-10-29 15:14 ` Anthony Liguori
2009-10-29 15:15 ` Christoph Hellwig
2009-10-29 15:25 ` Anthony Liguori
2009-10-30 8:55 ` Gerd Hoffmann
2009-10-30 8:12 ` Hannes Reinecke
2009-11-03 21:03 ` Gerd Hoffmann
2009-11-11 1:49 ` Paul Brook