From: "Michael S. Tsirkin" <mst@redhat.com>
To: Laszlo Ersek <lersek@redhat.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>,
qemu-devel@nongnu.org, Eugenio Perez Martin <eperezma@redhat.com>,
German Maglione <gmaglione@redhat.com>,
Liu Jiang <gerry@linux.alibaba.com>,
Sergio Lopez Pascual <slp@redhat.com>,
Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH 7/7] vhost-user: call VHOST_USER_SET_VRING_ENABLE synchronously
Date: Wed, 4 Oct 2023 12:30:14 -0400
Message-ID: <20231004122927-mutt-send-email-mst@kernel.org>
In-Reply-To: <67502261-0e48-60e1-f5d4-10f7f3bd164e@redhat.com>
On Wed, Oct 04, 2023 at 12:15:48PM +0200, Laszlo Ersek wrote:
> On 10/3/23 17:55, Stefan Hajnoczi wrote:
> > On Tue, 3 Oct 2023 at 10:41, Michael S. Tsirkin <mst@redhat.com> wrote:
> >>
> >> On Sun, Aug 27, 2023 at 08:29:37PM +0200, Laszlo Ersek wrote:
> >>> (1) The virtio-1.0 specification
> >>> <http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html> writes:
> >>>
> >>>> 3 General Initialization And Device Operation
> >>>> 3.1 Device Initialization
> >>>> 3.1.1 Driver Requirements: Device Initialization
> >>>>
> >>>> [...]
> >>>>
> >>>> 7. Perform device-specific setup, including discovery of virtqueues for
> >>>> the device, optional per-bus setup, reading and possibly writing the
> >>>> device’s virtio configuration space, and population of virtqueues.
> >>>>
> >>>> 8. Set the DRIVER_OK status bit. At this point the device is “live”.
> >>>
> >>> and
> >>>
> >>>> 4 Virtio Transport Options
> >>>> 4.1 Virtio Over PCI Bus
> >>>> 4.1.4 Virtio Structure PCI Capabilities
> >>>> 4.1.4.3 Common configuration structure layout
> >>>> 4.1.4.3.2 Driver Requirements: Common configuration structure layout
> >>>>
> >>>> [...]
> >>>>
> >>>> The driver MUST configure the other virtqueue fields before enabling the
> >>>> virtqueue with queue_enable.
> >>>>
> >>>> [...]
> >>>
> >>> These together mean that the following sub-sequence of steps is valid for
> >>> a virtio-1.0 guest driver (sketched in code after the steps below):
> >>>
> >>> (1.1) set "queue_enable" for the needed queues as the final part of device
> >>> initialization step (7),
> >>>
> >>> (1.2) set DRIVER_OK in step (8),
> >>>
> >>> (1.3) immediately start sending virtio requests to the device.
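> >>>
> >>> (Illustration only, not from the spec: a minimal sketch of that
> >>> sub-sequence as a guest driver might issue it. The common config
> >>> fields follow virtio-1.0, but the mmio_*() accessors and
> >>> "notify_addr" are hypothetical stand-ins:)
> >>>
> >>>     /* Step (7), final part: select and enable the queue. */
> >>>     mmio_write16(&common_cfg->queue_select, 0);
> >>>     mmio_write16(&common_cfg->queue_enable, 1);       /* (1.1) */
> >>>
> >>>     /* Step (8): set DRIVER_OK -- the device is now "live". */
> >>>     mmio_write8(&common_cfg->device_status,
> >>>                 mmio_read8(&common_cfg->device_status)
> >>>                 | 0x04 /* DRIVER_OK */);              /* (1.2) */
> >>>
> >>>     /* (1.3): nothing in the spec forbids kicking right away. */
> >>>     mmio_write16(notify_addr, 0);         /* notify virtqueue 0 */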
> >>>
> >>> (2) When vhost-user is enabled, and the VHOST_USER_F_PROTOCOL_FEATURES
> >>> special virtio feature is negotiated, then virtio rings start in disabled
> >>> state, according to
> >>> <https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#ring-states>.
> >>> In this case, explicit VHOST_USER_SET_VRING_ENABLE messages are needed for
> >>> enabling vrings.
> >>>
> >>> Therefore setting "queue_enable" from the guest (1.1) is a *control plane*
> >>> operation, which travels from the guest through QEMU to the vhost-user
> >>> backend, using a unix domain socket.
> >>>
> >>> Whereas sending a virtio request (1.3) is a *data plane* operation, which
> >>> bypasses QEMU -- it travels from the guest to the vhost-user backend via
> >>> eventfd.
> >>>
> >>> This means that steps (1.1) and (1.3) travel through different channels,
> >>> and their relative order can be reversed, as perceived by the vhost-user
> >>> backend.
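> >>>
> >>> (Illustration only: the shapes of the two channels. The message
> >>> layout follows the vhost-user spec, but "write_all", the anonymous
> >>> struct, and the fd variables are stand-ins:)
> >>>
> >>>     /* Control plane: QEMU turns the guest's "queue_enable" write
> >>>        into a vhost-user message on the unix domain socket. */
> >>>     struct {
> >>>         uint32_t request, flags, size;
> >>>         struct { uint32_t index, num; } state;
> >>>     } msg = {
> >>>         .request = 18,            /* VHOST_USER_SET_VRING_ENABLE */
> >>>         .flags   = 0x1,           /* protocol version 1 */
> >>>         .size    = 8,             /* payload: two u32 fields */
> >>>         .state   = { .index = 0, .num = 1 },   /* enable ring 0 */
> >>>     };
> >>>     write_all(unix_sock_fd, &msg, sizeof(msg));
> >>>
> >>>     /* Data plane: the guest's kick is a bare eventfd increment
> >>>        that reaches the backend without passing through QEMU. */
> >>>     uint64_t one = 1;
> >>>     write(kick_eventfd, &one, sizeof(one));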
> >>>
> >>> That's exactly what happens when OVMF's virtiofs driver (VirtioFsDxe) runs
> >>> against the Rust-language virtiofsd version 1.7.2. (Which uses version
> >>> 0.10.1 of the vhost-user-backend crate, and version 0.8.1 of the vhost
> >>> crate.)
> >>>
> >>> Namely, when VirtioFsDxe binds a virtiofs device, it goes through the
> >>> device initialization steps (i.e., control plane operations), and
> >>> immediately sends a FUSE_INIT request too (i.e., performs a data plane
> >>> operation). In the Rust-language virtiofsd, this creates a race between
> >>> two components that run *concurrently*, i.e., in different threads or
> >>> processes (a sketch of the race follows their descriptions below):
> >>>
> >>> - Control plane, handling vhost-user protocol messages:
> >>>
> >>> The "VhostUserSlaveReqHandlerMut::set_vring_enable" method
> >>> [crates/vhost-user-backend/src/handler.rs] handles
> >>> VHOST_USER_SET_VRING_ENABLE messages, and updates each vring's "enabled"
> >>> flag according to the message processed.
> >>>
> >>> - Data plane, handling virtio / FUSE requests:
> >>>
> >>> The "VringEpollHandler::handle_event" method
> >>> [crates/vhost-user-backend/src/event_loop.rs] handles the incoming
> >>> virtio / FUSE request, consuming the virtio kick at the same time. If
> >>> the vring's "enabled" flag is set, the virtio / FUSE request is
> >>> processed genuinely. If the vring's "enabled" flag is clear, then the
> >>> virtio / FUSE request is discarded.
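> >>>
> >>> (Illustration only -- the shape of the race, transliterated to C;
> >>> virtiofsd's actual code is the Rust cited above:)
> >>>
> >>>     #include <stdatomic.h>
> >>>     #include <stdint.h>
> >>>     #include <unistd.h>
> >>>
> >>>     struct vring { atomic_bool enabled; };
> >>>
> >>>     /* Control plane thread ("set_vring_enable"): */
> >>>     static void set_vring_enable(struct vring *vr, int enable)
> >>>     {
> >>>         atomic_store(&vr->enabled, enable != 0);
> >>>     }
> >>>
> >>>     /* Data plane thread ("handle_event"); if this observes
> >>>        "enabled" before the store above lands, the request is
> >>>        lost: */
> >>>     static void handle_event(struct vring *vr, int kick_fd)
> >>>     {
> >>>         uint64_t cnt;
> >>>         read(kick_fd, &cnt, sizeof cnt); /* consume the kick... */
> >>>         if (!atomic_load(&vr->enabled)) {
> >>>             return;                /* ...but discard FUSE_INIT */
> >>>         }
> >>>         /* ...genuinely process the virtio / FUSE request... */
> >>>     }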
> >>>
> >>> Note that OVMF enables the queue *first*, and sends FUSE_INIT *second*.
> >>> However, if the data plane processor in virtiofsd wins the race, then it
> >>> sees the FUSE_INIT *before* the control plane processor took notice of
> >>> VHOST_USER_SET_VRING_ENABLE and green-lit the queue for the data plane
> >>> processor. Therefore the latter drops FUSE_INIT on the floor, and goes
> >>> back to waiting for further virtio / FUSE requests with epoll_wait.
> >>> Meanwhile OVMF is stuck waiting for the FUSE_INIT response -- a deadlock.
> >>>
> >>> The deadlock is not deterministic. OVMF hangs infrequently during first
> >>> boot. However, it hangs almost every time during reboots from the UEFI
> >>> shell.
> >>>
> >>> The race can be "reliably masked" by inserting a very small delay -- a
> >>> single debug message -- at the top of "VringEpollHandler::handle_event",
> >>> i.e., just before the data plane processor checks the "enabled" field of
> >>> the vring. That delay suffices for the control plane processor to act upon
> >>> VHOST_USER_SET_VRING_ENABLE.
> >>>
> >>> We can deterministically prevent the race in QEMU, by blocking OVMF inside
> >>> step (1.1) -- i.e., in the write to the "queue_enable" register -- until
> >>> VHOST_USER_SET_VRING_ENABLE actually *completes*. That way OVMF's VCPU
> >>> cannot advance to the FUSE_INIT submission before virtiofsd's control
> >>> plane processor takes notice of the queue being enabled.
> >>>
> >>> Wait for VHOST_USER_SET_VRING_ENABLE completion (sketched below) by:
> >>>
> >>> - setting the NEED_REPLY flag on VHOST_USER_SET_VRING_ENABLE, and waiting
> >>> for the reply, if the VHOST_USER_PROTOCOL_F_REPLY_ACK vhost-user feature
> >>> has been negotiated, or
> >>>
> >>> - performing a separate VHOST_USER_GET_FEATURES *exchange*, which requires
> >>> a backend response regardless of VHOST_USER_PROTOCOL_F_REPLY_ACK.
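> >>>
> >>> (Illustration only: a sketch of the synchronous enable, with names
> >>> modeled on QEMU's "hw/virtio/vhost-user.c"; treat the exact
> >>> helpers and their signatures as assumptions, not as the patch
> >>> itself:)
> >>>
> >>>     bool reply_ack = virtio_has_feature(dev->protocol_features,
> >>>                          VHOST_USER_PROTOCOL_F_REPLY_ACK);
> >>>
> >>>     msg.hdr.request = VHOST_USER_SET_VRING_ENABLE;
> >>>     if (reply_ack) {
> >>>         msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
> >>>     }
> >>>     vhost_user_write(dev, &msg, NULL, 0);
> >>>
> >>>     if (reply_ack) {
> >>>         /* Block until the backend acks SET_VRING_ENABLE. */
> >>>         process_message_reply(dev, &msg);
> >>>     } else {
> >>>         /* No REPLY_ACK: force a round-trip instead. GET_FEATURES
> >>>            must be answered, so the enable is processed first. */
> >>>         uint64_t features;
> >>>         vhost_user_get_features(dev, &features);
> >>>     }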
> >>>
> >>> Cc: "Michael S. Tsirkin" <mst@redhat.com> (supporter:vhost)
> >>> Cc: Eugenio Perez Martin <eperezma@redhat.com>
> >>> Cc: German Maglione <gmaglione@redhat.com>
> >>> Cc: Liu Jiang <gerry@linux.alibaba.com>
> >>> Cc: Sergio Lopez Pascual <slp@redhat.com>
> >>> Cc: Stefano Garzarella <sgarzare@redhat.com>
> >>> Signed-off-by: Laszlo Ersek <lersek@redhat.com>
> >>
> >>
> >> So you want me to hold on to this patch 7/7 for now?
> >> And maybe merge the rest of the patchset?
> >
> > Up to Laszlo, but I wanted to mention that I support merging this
> > patch series. A ring has not been enabled/disabled until the back-end
> > replies, so I think this patch series makes sense.
>
> Sorry, I didn't get to see this part of the discussion yesterday, and
> now I see that Michael has gone ahead with a PR that contains v2 of this
> set. The night before yesterday I posted v3
> <https://patchwork.ozlabs.org/project/qemu-devel/cover/20231002203221.17241-1-lersek@redhat.com/>,
> with commit message updates / improvements only (based on feedback), so
> please merge that one.
>
> Thanks!
> Laszlo
OK. I'll need to do another PR soonish since a bunch of patchsets
which I wanted in this PR had issues and I had to drop them.
v3 will be there.
--
MST