From: Stefan Hajnoczi <stefanha@redhat.com>
To: Mike Christie <michael.christie@oracle.com>
Cc: linux-scsi@vger.kernel.org, mdean@redhat.com, mst@redhat.com,
yvugenfi@redhat.com, virtualization@lists.linux-foundation.org,
target-devel@vger.kernel.org, pbonzini@redhat.com
Subject: Re: [PATCH v2 0/2] vhost-scsi: Fix IO hangs when using windows
Date: Thu, 13 Jul 2023 10:03:42 -0400
Message-ID: <20230713140342.GA310002@fedora>
In-Reply-To: <41d2d3aa-b537-686a-f2b4-1e0a861cebc8@oracle.com>
On Wed, Jul 12, 2023 at 11:05:11AM -0500, Mike Christie wrote:
> On 7/12/23 9:26 AM, Stefan Hajnoczi wrote:
> > On Tue, Jul 11, 2023 at 04:01:22PM -0500, Mike Christie wrote:
> >> On 7/11/23 1:34 PM, Stefan Hajnoczi wrote:
> >>> On Sun, Jul 09, 2023 at 03:28:57PM -0500, Mike Christie wrote:
> >>>> The following patches were made over Linus's tree and fix an issue
> >>>> where Windows guests send iovecs with offsets/lengths that result
> >>>> in IOs that are not aligned to 512 bytes. The LIO layer then sends
> >>>> them to Linux's FS/block layer, which requires 512-byte alignment,
> >>>> so depending on the FS/block driver being used we get IO errors or
> >>>> hung IO.
> >>>>
> >>>> The following patches have vhost-scsi detect when Windows sends these
> >>>> IOs and copy them to a bounce buffer. They also do some cleanup in
> >>>> the related code.
> >>>
> >>> Hang on, virtio-scsi is a SCSI HBA and READs/WRITEs submitted must
> >>> follow the usual constraints on SCSI block limits. Would Windows send
> >>> mis-aligned I/O to a non-virtio-scsi SCSI HBA?
> >>
> >> It's like Linux, where you can configure settings like that.
> >>
> >>> Are you sure this is not a bug in the Windows guest driver where block
> >>> limits are being misconfigured?
> >>
> >> From what our Windows dev told us, the guest drivers like the ones here:
> >>
> >> https://github.com/virtio-win
> >>
> >> don't set the Windows AlignmentMask to 512. They tried that, and it
> >> resulted in the Windows crash dump mechanism crashing because it doesn't
> >> like the hard alignment requirement.
> >>
> >> We thought other apps would have trouble as well, so we tried to add
> >> bounce buffer support to the Windows driver, but I think people thought
> >> it was going to be uglier than this patch and might also hurt performance
> >> in the normally aligned case. There were some Windows driver/layering
> >> and buffer/cmd details that I don't fully understand and took their word
> >> on, because I don't know a lot about Windows.
> >>
> >> In the end we still have to add checks to vhost-scsi to protect against
> >> bad drivers, so we thought we might as well just add bounce buffer support
> >> to vhost-scsi.
> >
> > CCing virtio-win developers so they can confirm how the vioscsi driver
> > is supposed to handle request alignment.
> >
> > My expectation is that the virtio-scsi device will fail mis-aligned I/O
> > requests.
>
> I don't think you can just change the driver's behavior to fail now,
> because apps send mis-aligned IO and it works as long as they have fewer
> than 256 bio vecs.
>
> We see mis-aligned IOs during boot and also from random non-Windows apps.
> If we just started failing them, it would be a regression: apps would no
> longer work, or the OS would fail to start up.

I was wrong:

The virtio-scsi specification contains no alignment requirements for I/O
buffers. It is fine for the driver to submit iovecs with any memory
alignment.

The QEMU code allocates a bounce buffer if the iovecs submitted by the
driver do not match the minimum alignment requirements on the host (e.g.
O_DIRECT requirements).
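
The shape of that check is roughly the following (a minimal sketch
rather than the actual QEMU or vhost-scsi code; iov_is_aligned() is a
made-up helper name for illustration):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

/*
 * Return true if every segment's base address and length satisfy the
 * required alignment; otherwise the request needs a bounce buffer.
 */
static bool iov_is_aligned(const struct iovec *iov, int iovcnt, size_t align)
{
    for (int i = 0; i < iovcnt; i++) {
        if ((uintptr_t)iov[i].iov_base % align != 0 ||
            iov[i].iov_len % align != 0)
            return false;
    }
    return true;
}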
It makes sense that vhost_scsi needs to use a bounce buffer in cases
where the underlying storage has stricter memory alignment requirements.
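
The fallback itself is conceptually just a copy into a suitably aligned
buffer before submission, along these lines (again only an illustrative
sketch under assumed names, not the vhost-scsi patch; for reads the
backend would fill the bounce buffer and the data would then be copied
back into the guest's iovecs):

#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/*
 * Linearize a misaligned iovec into a freshly allocated buffer that
 * meets the backend's alignment requirement (align must be a power of
 * two). The caller submits the returned buffer and frees it when the
 * I/O completes.
 */
static void *bounce_linearize(const struct iovec *iov, int iovcnt,
                              size_t total_len, size_t align)
{
    void *buf = NULL;
    size_t off = 0;

    /* Round the allocation size up to a multiple of the alignment. */
    if (posix_memalign(&buf, align, (total_len + align - 1) & ~(align - 1)))
        return NULL;

    for (int i = 0; i < iovcnt; i++) {
        memcpy((char *)buf + off, iov[i].iov_base, iov[i].iov_len);
        off += iov[i].iov_len;
    }
    return buf;
}
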
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>