qemu-devel.nongnu.org archive mirror
From: Gerd Hoffmann <kraxel@redhat.com>
To: Paul Brook <paul@codesourcery.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [sneak preview] major scsi overhaul
Date: Mon, 16 Nov 2009 22:50:25 +0100	[thread overview]
Message-ID: <4B01C921.80902@redhat.com> (raw)
In-Reply-To: <200911161853.34668.paul@codesourcery.com>

On 11/16/09 19:53, Paul Brook wrote:
>> We can also limit the amount of host memory we allow the guest to
>> consume, so uncooperative guests can't push the host into swap.  This is
>> not implemented today, indicating that it hasn't been a problem so far.
>
> Capping the amount of memory required for a transfer *is* implemented, in both
> LSI and virtio-blk.  The exception being SCSI passthrough where the kernel API
> makes it impossible.

I was talking about scsi-generic.  There is no option to reject 
excessively large requests, and this has not been a problem so far.
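Such a size cap, if it were added, would amount to a single check before allocating a host buffer for the request.  A minimal sketch (hypothetical names and limit, not the actual scsi-generic code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-request cap; the real limit would be a tunable. */
#define MAX_REQUEST_BYTES (4 * 1024 * 1024)

/* Return true when a request's transfer length should be rejected
 * (e.g. by signalling CHECK CONDITION to the guest) instead of
 * allocating a host bounce buffer for it. */
bool request_too_large(size_t xfer_len)
{
    return xfer_len > MAX_REQUEST_BYTES;
}
```

With zerocopy the check matters even less, since no host buffer is allocated in the first place.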

>>    And with zerocopy it will be even less of a problem, as we don't need
>> host memory to buffer the data ...
>
> zero-copy isn't possible in many cases. You must handle the other cases
> gracefully.

I haven't yet found a guest OS where lsi can't do zerocopy.
Name one where it doesn't work and I'll have a look.

>>> Disconnecting on the first DMA request (after switching to a data phase
>>> and transferring zero bytes) is bizarre behavior, but probably allowable.
>>
>> The new lsi code doesn't.  The old code could do that under certain
>> circumstances.  And what is bizarre about that?  A real hard drive will
>> most likely do exactly that on reads (unless it has the data cached and
>> can start the transfer instantly).
>
> No. The old code goes directly from the command phase to the message
> (disconnect) phase.

Hmm, well.  It switches from DI / DO (data-in / data-out) to MI 
(message-in) before the guest runs again, so the guest will not notice 
the switch ...

>>> However by my reading DMA transfers must be performed synchronously by
>>> the SCRIPTS engine, so you need to do a lot of extra checking to prove
>>> that you can safely continue execution without actually performing the
>>> transfer.
>>
>> I'll happily add a 'strict' mode which does data transfers synchronously
>> in case any compatibility issues show up.
>>
>> Such a mode would be slower of course.  We'll have to either do the I/O
>> in lots of little chunks or lose zerocopy.  Large transfers + memcpy is
>> probably the faster option.
>
> But as you agreed above, large transfers+memcpy is not a realistic option
> because it can have excessive memory requirements.

This "large" refers to normal request sizes (which are large compared to 
page-sized scatter list entries).  Submitting a 64k request as a single 
I/O and then doing a memcpy is most likely faster than submitting 
sixteen 4k I/O requests one after another.  Buffering would be no 
problem here.
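The bounce-buffer variant boils down to one large read into a contiguous host buffer, followed by memcpy into the guest's page-sized scatter list.  A sketch (the sg_entry struct is a stand-in for QEMU's real scatter-gather types, not its actual API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical scatter-gather entry. */
struct sg_entry {
    void  *base;
    size_t len;
};

/* Copy a completed large read out of the bounce buffer into the
 * guest's scatter list.  The alternative is one host I/O per entry,
 * which avoids the copy but costs 16 submissions for a 64k request
 * made of 4k entries.  Returns the number of bytes delivered. */
size_t deliver_bounced(const void *bounce, size_t total,
                       struct sg_entry *sg, size_t nsg)
{
    size_t off = 0;
    for (size_t i = 0; i < nsg && off < total; i++) {
        size_t n = sg[i].len < total - off ? sg[i].len : total - off;
        memcpy(sg[i].base, (const char *)bounce + off, n);
        off += n;
    }
    return off;
}
```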

But I still don't expect problems with zerocopy.  And zerocopy has no 
noticeable host memory requirements, even for excessively large requests.
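That is because zerocopy hands the guest's scatter list straight to the kernel as an iovec array; the only host allocation is the array itself.  A minimal sketch using preadv (again with a hypothetical sg_entry stand-in, not QEMU's actual DMA helpers):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>

/* Hypothetical scatter-gather entry. */
struct sg_entry {
    void  *base;
    size_t len;
};

/* Zero-copy read: build an iovec array pointing at guest memory and
 * issue one vectored read.  Host memory use is a few bytes per sg
 * entry, regardless of how large the transfer is. */
ssize_t zerocopy_read(int fd, off_t offset, struct sg_entry *sg, int nsg)
{
    struct iovec *iov = calloc(nsg, sizeof(*iov));
    ssize_t ret;

    if (!iov) {
        return -1;
    }
    for (int i = 0; i < nsg; i++) {
        iov[i].iov_base = sg[i].base;
        iov[i].iov_len  = sg[i].len;
    }
    ret = preadv(fd, iov, nsg, offset);
    free(iov);
    return ret;
}
```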

cheers,
   Gerd


Thread overview: 43+ messages
2009-11-06 23:09 [Qemu-devel] [sneak preview] major scsi overhaul Gerd Hoffmann
2009-11-07 15:22 ` Blue Swirl
2009-11-09  9:08   ` Gerd Hoffmann
2009-11-09 12:37     ` Avi Kivity
2009-11-09 13:03       ` Gerd Hoffmann
2009-11-09 13:17         ` Avi Kivity
2009-11-09 13:39           ` Gerd Hoffmann
2009-11-09 13:48             ` Avi Kivity
2009-11-09 20:38       ` Blue Swirl
2009-11-09 21:25         ` Gerd Hoffmann
2009-11-11  4:06 ` Paul Brook
2009-11-11  9:41   ` Gerd Hoffmann
2009-11-11 14:13     ` Paul Brook
2009-11-11 15:26       ` Gerd Hoffmann
2009-11-11 16:38         ` Paul Brook
2009-11-16 16:35           ` Gerd Hoffmann
2009-11-16 18:53             ` Paul Brook
2009-11-16 21:50               ` Gerd Hoffmann [this message]
2009-11-24 11:59               ` Gerd Hoffmann
2009-11-24 13:51                 ` Paul Brook
2009-11-25 16:37                   ` Gerd Hoffmann
2009-11-26  7:31                     ` Hannes Reinecke
2009-11-26  8:25                       ` Gerd Hoffmann
2009-11-26 10:57                         ` Hannes Reinecke
2009-11-26 11:04                           ` Gerd Hoffmann
2009-11-26 11:20                             ` Hannes Reinecke
2009-11-26 14:21                               ` Gerd Hoffmann
2009-11-26 14:27                                 ` Hannes Reinecke
2009-11-26 14:37                                   ` Gerd Hoffmann
2009-11-26 15:50                                     ` Hannes Reinecke
2009-11-27 11:08                                       ` Gerd Hoffmann
2009-12-02 13:47                                         ` Gerd Hoffmann
2009-12-07  8:28                                           ` Hannes Reinecke
2009-12-07  8:50                                             ` Gerd Hoffmann
2009-11-16 19:08             ` Ryan Harper
2009-11-16 20:40               ` Gerd Hoffmann
2009-11-16 21:45                 ` Ryan Harper
2009-11-11 11:21 ` [Qemu-devel] " Gerd Hoffmann
2009-11-11 11:52   ` Hannes Reinecke
2009-11-11 13:02     ` Gerd Hoffmann
2009-11-11 13:30       ` Hannes Reinecke
2009-11-11 14:37         ` Gerd Hoffmann
2009-11-12  9:54           ` Hannes Reinecke
