From: Hanna Czenczek <hreitz@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Eugenio Perez Martin <eperezma@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
qemu-devel@nongnu.org, virtio-fs@redhat.com,
German Maglione <gmaglione@redhat.com>,
Anton Kuchin <antonkuchin@yandex-team.ru>,
Juan Quintela <quintela@redhat.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH 2/4] vhost-user: Interface for migration state transfer
Date: Wed, 19 Apr 2023 13:24:29 +0200
Message-ID: <e8459c3d-7716-b62d-ac8a-b50dd65e7059@redhat.com>
In-Reply-To: <CAJSP0QWbGQ9BaXDGMgasfk=qWt1DKHxcE=rK9BeuotQvQUuomw@mail.gmail.com>
On 19.04.23 13:21, Stefan Hajnoczi wrote:
> On Wed, 19 Apr 2023 at 07:10, Hanna Czenczek <hreitz@redhat.com> wrote:
>> On 18.04.23 09:54, Eugenio Perez Martin wrote:
>>> On Mon, Apr 17, 2023 at 9:21 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>>> On Mon, 17 Apr 2023 at 15:08, Eugenio Perez Martin <eperezma@redhat.com> wrote:
>>>>> On Mon, Apr 17, 2023 at 7:14 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>>>>> On Thu, Apr 13, 2023 at 12:14:24PM +0200, Eugenio Perez Martin wrote:
>>>>>>> On Wed, Apr 12, 2023 at 11:06 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>>>>>>> On Tue, Apr 11, 2023 at 05:05:13PM +0200, Hanna Czenczek wrote:
>>>>>>>>> So-called "internal" virtio-fs migration refers to transporting the
>>>>>>>>> back-end's (virtiofsd's) state through qemu's migration stream. To do
>>>>>>>>> this, we need to be able to transfer virtiofsd's internal state to and
>>>>>>>>> from virtiofsd.
>>>>>>>>>
>>>>>>>>> Because virtiofsd's internal state will not be too large, we believe it
>>>>>>>>> is best to transfer it as a single binary blob after the streaming
>>>>>>>>> phase. Because this method should be useful to other vhost-user
>>>>>>>>> implementations, too, it is introduced as a general-purpose addition to
>>>>>>>>> the protocol, not limited to vhost-user-fs.
>>>>>>>>>
>>>>>>>>> These are the additions to the protocol:
>>>>>>>>> - New vhost-user protocol feature VHOST_USER_PROTOCOL_F_MIGRATORY_STATE:
>>>>>>>>> This feature signals support for transferring state, and is added so
>>>>>>>>> that migration can fail early when the back-end has no support.
>>>>>>>>>
>>>>>>>>> - SET_DEVICE_STATE_FD function: Front-end and back-end negotiate a pipe
>>>>>>>>> over which to transfer the state. The front-end sends an FD to the
>>>>>>>>> back-end, into/from which the back-end can write/read its state, and
>>>>>>>>> the back-end can decide to either use that FD, or reply with a
>>>>>>>>> different FD of its own, overriding the front-end's choice.
>>>>>>>>> The front-end creates a simple pipe to transfer the state, but maybe
>>>>>>>>> the back-end already has an FD into/from which it has to write/read
>>>>>>>>> its state, in which case it will want to override the simple pipe.
>>>>>>>>> Conversely, maybe in the future we find a way to have the front-end
>>>>>>>>> get an immediate FD for the migration stream (in some cases), in which
>>>>>>>>> case we will want to send this to the back-end instead of creating a
>>>>>>>>> pipe.
>>>>>>>>> Hence the negotiation: If one side has a better idea than a plain
>>>>>>>>> pipe, we will want to use that.
>>>>>>>>>
>>>>>>>>> - CHECK_DEVICE_STATE: After the state has been transferred through the
>>>>>>>>> pipe (the end indicated by EOF), the front-end invokes this function
>>>>>>>>> to verify success. There is no in-band way (through the pipe) to
>>>>>>>>> indicate failure, so we need to check explicitly.
>>>>>>>>>
>>>>>>>>> Once the transfer pipe has been established via SET_DEVICE_STATE_FD
>>>>>>>>> (which includes establishing the direction of transfer and migration
>>>>>>>>> phase), the sending side writes its data into the pipe, and the reading
>>>>>>>>> side reads it until it sees an EOF. Then, the front-end will check for
>>>>>>>>> success via CHECK_DEVICE_STATE, which on the destination side includes
>>>>>>>>> checking for integrity (i.e. errors during deserialization).
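>>>>>>>>>
>>>>>>>>> As a rough illustration (not part of this patch; the helper names
>>>>>>>>> below are hypothetical, and error handling is omitted), the
>>>>>>>>> front-end's save path could look like this:
>>>>>>>>>
>>>>>>>>>     int pfd[2], reply_fd, ret;
>>>>>>>>>     ssize_t len;
>>>>>>>>>     uint8_t buf[4096];
>>>>>>>>>
>>>>>>>>>     pipe(pfd);
>>>>>>>>>     /* Offer the write end to the back-end; it may reply with an FD of
>>>>>>>>>      * its own that replaces our pipe */
>>>>>>>>>     reply_fd = set_device_state_fd(dev,
>>>>>>>>>                                    VHOST_TRANSFER_STATE_DIRECTION_SAVE,
>>>>>>>>>                                    VHOST_TRANSFER_STATE_PHASE_STOPPED,
>>>>>>>>>                                    pfd[1]);
>>>>>>>>>     close(pfd[1]);
>>>>>>>>>     if (reply_fd >= 0) {
>>>>>>>>>         close(pfd[0]);
>>>>>>>>>         pfd[0] = reply_fd;
>>>>>>>>>     }
>>>>>>>>>     /* The back-end writes its state, then closes its end; read until
>>>>>>>>>      * EOF and append everything to the migration stream */
>>>>>>>>>     while ((len = read(pfd[0], buf, sizeof(buf))) > 0) {
>>>>>>>>>         qemu_put_buffer(f, buf, len);
>>>>>>>>>     }
>>>>>>>>>     close(pfd[0]);
>>>>>>>>>     /* No in-band error channel, so ask the back-end explicitly */
>>>>>>>>>     ret = check_device_state(dev);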
>>>>>>>>>
>>>>>>>>> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
>>>>>>>>> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
>>>>>>>>> ---
>>>>>>>>> include/hw/virtio/vhost-backend.h |  24 +++++
>>>>>>>>> include/hw/virtio/vhost.h         |  79 ++++++++++++++++
>>>>>>>>> hw/virtio/vhost-user.c            | 147 ++++++++++++++++++++++++++++++
>>>>>>>>> hw/virtio/vhost.c                 |  37 ++++++++
>>>>>>>>> 4 files changed, 287 insertions(+)
>>>>>>>>>
>>>>>>>>> diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
>>>>>>>>> index ec3fbae58d..5935b32fe3 100644
>>>>>>>>> --- a/include/hw/virtio/vhost-backend.h
>>>>>>>>> +++ b/include/hw/virtio/vhost-backend.h
>>>>>>>>> @@ -26,6 +26,18 @@ typedef enum VhostSetConfigType {
>>>>>>>>>      VHOST_SET_CONFIG_TYPE_MIGRATION = 1,
>>>>>>>>>  } VhostSetConfigType;
>>>>>>>>>
>>>>>>>>> +typedef enum VhostDeviceStateDirection {
>>>>>>>>> +    /* Transfer state from back-end (device) to front-end */
>>>>>>>>> +    VHOST_TRANSFER_STATE_DIRECTION_SAVE = 0,
>>>>>>>>> +    /* Transfer state from front-end to back-end (device) */
>>>>>>>>> +    VHOST_TRANSFER_STATE_DIRECTION_LOAD = 1,
>>>>>>>>> +} VhostDeviceStateDirection;
>>>>>>>>> +
>>>>>>>>> +typedef enum VhostDeviceStatePhase {
>>>>>>>>> +    /* The device (and all its vrings) is stopped */
>>>>>>>>> +    VHOST_TRANSFER_STATE_PHASE_STOPPED = 0,
>>>>>>>>> +} VhostDeviceStatePhase;
>>>>>>>> vDPA has:
>>>>>>>>
>>>>>>>> /* Suspend a device so it does not process virtqueue requests anymore
>>>>>>>>  *
>>>>>>>>  * After the return of ioctl the device must preserve all the necessary state
>>>>>>>>  * (the virtqueue vring base plus the possible device specific states) that is
>>>>>>>>  * required for restoring in the future. The device must not change its
>>>>>>>>  * configuration after that point.
>>>>>>>>  */
>>>>>>>> #define VHOST_VDPA_SUSPEND _IO(VHOST_VIRTIO, 0x7D)
>>>>>>>>
>>>>>>>> /* Resume a device so it can resume processing virtqueue requests
>>>>>>>>  *
>>>>>>>>  * After the return of this ioctl the device will have restored all the
>>>>>>>>  * necessary states and it is fully operational to continue processing the
>>>>>>>>  * virtqueue descriptors.
>>>>>>>>  */
>>>>>>>> #define VHOST_VDPA_RESUME _IO(VHOST_VIRTIO, 0x7E)
>>>>>>>>
>>>>>>>> I wonder if it makes sense to import these into vhost-user so that the
>>>>>>>> difference between kernel vhost and vhost-user is minimized. It's okay
>>>>>>>> if one of them is ahead of the other, but it would be nice to avoid
>>>>>>>> overlapping/duplicated functionality.
>>>>>>>>
>>>>>>> That's what I had in mind in the first versions. I proposed VHOST_STOP
>>>>>>> instead of VHOST_VDPA_STOP for this very reason. Later it did change
>>>>>>> to SUSPEND.
>>>>>>>
>>>>>>> In my opinion, it is generally better to make the interface less
>>>>>>> parametrized and to trust the messages and their semantics. In other
>>>>>>> words, instead of
>>>>>>> vhost_set_device_state_fd_op(VHOST_TRANSFER_STATE_PHASE_STOPPED), send
>>>>>>> the equivalent of the VHOST_VDPA_SUSPEND vhost-user command
>>>>>>> individually.
>>>>>>>
>>>>>>> Another way to apply this is with the "direction" parameter. Maybe it
>>>>>>> is better to split it into "set_state_fd" and "get_state_fd"?
>>>>>>>
>>>>>>> In that case, reusing the ioctls as vhost-user messages would be ok.
>>>>>>> But that puts this proposal further from the VFIO code, which uses
>>>>>>> "migration_set_state(state)", an approach that may be better when the
>>>>>>> number of states is high.
>>>>>> Hi Eugenio,
>>>>>> Another question about vDPA suspend/resume:
>>>>>>
>>>>>> /* Host notifiers must be enabled at this point. */
>>>>>> void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev, bool vrings)
>>>>>> {
>>>>>>     int i;
>>>>>>
>>>>>>     /* should only be called after backend is connected */
>>>>>>     assert(hdev->vhost_ops);
>>>>>>     event_notifier_test_and_clear(
>>>>>>         &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier);
>>>>>>     event_notifier_test_and_clear(&vdev->config_notifier);
>>>>>>
>>>>>>     trace_vhost_dev_stop(hdev, vdev->name, vrings);
>>>>>>
>>>>>>     if (hdev->vhost_ops->vhost_dev_start) {
>>>>>>         hdev->vhost_ops->vhost_dev_start(hdev, false);
>>>>>>         ^^^ SUSPEND ^^^
>>>>>>     }
>>>>>>     if (vrings) {
>>>>>>         vhost_dev_set_vring_enable(hdev, false);
>>>>>>     }
>>>>>>     for (i = 0; i < hdev->nvqs; ++i) {
>>>>>>         vhost_virtqueue_stop(hdev,
>>>>>>                              vdev,
>>>>>>                              hdev->vqs + i,
>>>>>>                              hdev->vq_index + i);
>>>>>>         ^^^ fetch virtqueue state from kernel ^^^
>>>>>>     }
>>>>>>     if (hdev->vhost_ops->vhost_reset_status) {
>>>>>>         hdev->vhost_ops->vhost_reset_status(hdev);
>>>>>>         ^^^ reset device^^^
>>>>>>
>>>>>> I noticed the QEMU vDPA code resets the device in vhost_dev_stop() ->
>>>>>> vhost_reset_status(). The device's migration code runs after
>>>>>> vhost_dev_stop() and the state will have been lost.
>>>>>>
>>>>> vhost_virtqueue_stop saves the vq state (indexes, vring base) in the
>>>>> qemu VirtIONet device model. This happens for all vhost backends.
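>>>>>
>>>>> For reference, the relevant part of vhost_virtqueue_stop() looks
>>>>> roughly like this (simplified from qemu's hw/virtio/vhost.c):
>>>>>
>>>>>     struct vhost_vring_state state = { .index = vhost_vq_index };
>>>>>
>>>>>     /* fetch the ring index from the backend... */
>>>>>     r = dev->vhost_ops->vhost_get_vring_base(dev, &state);
>>>>>     /* ...and store it in the emulated device model, from where the
>>>>>      * regular migration code picks it up */
>>>>>     virtio_queue_set_last_avail_idx(vdev, idx, state.num);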
>>>>>
>>>>> Regarding state like the mac or mq configuration: SVQ intercepts the
>>>>> CVQ for the VM's entire run time, so it can track all of that state in
>>>>> the device model too.
>>>>>
>>>>> When a migration actually occurs, all the front-end state is migrated
>>>>> like that of a regular emulated device. Routing all of the state
>>>>> through qemu in a normalized way is what leaves open the possibility of
>>>>> cross-backend migrations, etc.
>>>>>
>>>>> Does that answer your question?
>>>> I think you're confirming that changes would be necessary in order for
>>>> vDPA to support the save/load operation that Hanna is introducing.
>>>>
>>> Yes, this first iteration was centered on net, with an eye on block,
>>> where state can be routed through the classical emulated devices. This
>>> is how vhost-kernel and vhost-user have classically done it, and it
>>> allows cross-backend migration, avoids modifying qemu's migration
>>> state, etc.
>>>
>>> Introducing this opaque state to qemu, which must be fetched after the
>>> suspend and not before, requires changes to the vhost protocol, as
>>> discussed previously.
>>>
>>>>>> It looks like vDPA changes are necessary in order to support stateful
>>>>>> devices even though QEMU already uses SUSPEND. Is my understanding
>>>>>> correct?
>>>>>>
>>>>> Changes are required elsewhere, as the code to restore the state
>>>>> properly on the destination has not been merged.
>>>> I'm not sure what you mean by elsewhere?
>>>>
>>> I meant that for vdpa *net* devices the changes are required not in the
>>> vdpa ioctls, but mostly in qemu.
>>>
>>> If you meant stateful as "it must have a state blob that is opaque to
>>> qemu", then I think the straightforward action is to fetch the state
>>> blob at about the same time as the vq indexes. But yes, changes (at
>>> least a new ioctl) are needed for that.
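>>>
>>> Purely as an illustration (no such ioctl exists today), it could be
>>> shaped like the SUSPEND/RESUME definitions quoted above:
>>>
>>>     /* hypothetical: fetch the opaque device state blob, only valid
>>>      * after SUSPEND has returned */
>>>     #define VHOST_VDPA_GET_STATE _IOWR(VHOST_VIRTIO, 0x7F, \
>>>                                        struct vhost_vdpa_state)
>>>
>>> where struct vhost_vdpa_state would carry a user buffer pointer and its
>>> size.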
>>>
>>>> I'm asking about vDPA ioctls. Right now the sequence is SUSPEND and
>>>> then VHOST_VDPA_SET_STATUS 0.
>>>>
>>>> In order to save device state from the vDPA device in the future, it
>>>> will be necessary to defer the VHOST_VDPA_SET_STATUS 0 call so that
>>>> the device state can be saved before the device is reset.
>>>>
>>>> Does that sound right?
>>>>
>>> The split between suspend and reset was added recently for that very
>>> reason. In all virtio devices, the front-end is initialized before the
>>> back-end, so I don't think it is a good idea to defer the back-end
>>> cleanup. Especially since we have already established that the state is
>>> small enough not to need iterative migration, from virtiofsd's point of
>>> view.
>>>
>>> If fetching that state at the same time as the vq indexes is not valid,
>>> could it follow the same model as the "in-flight descriptors"?
>>> vhost-user tracks those in a shared memory region [1]. This allows qemu
>>> to survive vhost-user SW back-end crashes, and it does not forbid
>>> cross-backend live migration, as all the information needed to recover
>>> them is there.
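>>>
>>> For context, such a region is just fd-backed memory that both sides
>>> map; roughly (the in-flight case negotiates the fd via
>>> VHOST_USER_GET_INFLIGHT_FD / VHOST_USER_SET_INFLIGHT_FD):
>>>
>>>     /* one side allocates an fd-backed area of the agreed size... */
>>>     int fd = memfd_create("state-region", MFD_CLOEXEC | MFD_ALLOW_SEALING);
>>>     ftruncate(fd, size);
>>>     /* ...the fd travels over the vhost-user socket as ancillary data,
>>>      * and both sides map the same pages, so no copies are needed */
>>>     void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
>>>                       fd, 0);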
>>>
>>> For hw devices this is not convenient, as it occupies PCI bandwidth. So
>>> a possibility is to synchronize this memory region only after a
>>> synchronization point, be it the SUSPEND call or GET_VRING_BASE. HW
>>> devices are not going to crash in the software sense, so all use cases
>>> remain the same to qemu, and the shared memory information is
>>> recoverable after vhost_dev_stop.
>>>
>>> Does that sound reasonable to virtiofsd? To offer a shared memory
>>> region where it dumps the state, maybe only after the
>>> set_state(STATE_PHASE_STOPPED)?
>> I don’t think we need the set_state() call, necessarily, if SUSPEND is
>> mandatory anyway.
>>
>> As for the shared memory, the RFC before this series used shared memory,
>> so it’s possible, yes. But “shared memory region” can mean a lot of
>> things – it sounds like you’re saying the back-end (virtiofsd) should
>> provide it to the front-end, is that right? That could work like this:
>>
>> On the source side:
>>
>> S1. SUSPEND goes to virtiofsd
>> S2. virtiofsd maybe double-checks that the device is stopped, then
>> serializes its state into a newly allocated shared memory area[1]
>> S3. virtiofsd responds to SUSPEND
>> S4. front-end requests shared memory, virtiofsd responds with a handle,
>> maybe already closes its reference
>> S5. front-end saves state, closes its handle, freeing the SHM
>>
>> [1] Maybe virtiofsd can determine the serialized state’s size up front;
>> then it can immediately allocate this area and serialize directly into
>> it. Maybe it can’t; then we’ll need a bounce buffer. Not really a
>> fundamental problem, but there are limitations around what you can do
>> with serde implementations in Rust…
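>>
>> In code, the source-side sequence above could look roughly like this
>> (suspend() and get_state_shm() are hypothetical placeholders for
>> whatever messages we end up with):
>>
>>     suspend(dev);                           /* S1-S3 */
>>     int shm_fd = get_state_shm(dev);        /* S4: back-end hands over an FD */
>>     struct stat st;
>>     fstat(shm_fd, &st);                     /* the back-end sized the area */
>>     void *state = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, shm_fd, 0);
>>     qemu_put_be64(f, st.st_size);
>>     qemu_put_buffer(f, state, st.st_size);  /* S5: into the migration stream */
>>     munmap(state, st.st_size);
>>     close(shm_fd);                          /* drop our reference, freeing
>>                                              * the SHM once unused */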
>>
>> On the destination side:
>>
>> D1. Optional SUSPEND goes to virtiofsd that hasn’t yet done much;
>> virtiofsd would serialize its empty state into an SHM area, and respond
>> to SUSPEND
>> D2. front-end reads state from migration stream into an SHM it has allocated
>> D3. front-end supplies this SHM to virtiofsd, which discards its
>> previous area, and now uses this one
>> D4. RESUME goes to virtiofsd, which deserializes the state from the SHM
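>>
>> And the destination side, mirroring it (set_state_shm() and resume()
>> again being hypothetical, and assuming the stream carries a size
>> prefix):
>>
>>     size_t size = qemu_get_be64(f);         /* state size from the stream */
>>     int shm_fd = memfd_create("virtiofsd-state", MFD_CLOEXEC);
>>     ftruncate(shm_fd, size);
>>     void *state = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
>>                        shm_fd, 0);
>>     qemu_get_buffer(f, state, size);        /* D2: read state into the SHM */
>>     set_state_shm(dev, shm_fd);             /* D3: hand it to virtiofsd */
>>     resume(dev);                            /* D4: virtiofsd deserializes */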
>>
>> Couple of questions:
>>
>> A. Stefan suggested D1, but it does seem wasteful now. Still, if
>> SUSPEND implies serializing a state (even an empty one), and the state
>> is to be transferred through SHM, this is what would need to be done.
>> So maybe we should skip SUSPEND on the destination?
>> B. You described that the back-end should supply the SHM, which works
>> well on the source. On the destination, only the front-end knows how
>> big the state is, so I’ve decided above that it should allocate the SHM
>> (D2) and provide it to the back-end. Is that feasible, or is it
>> important (e.g. for real hardware) that the back-end supplies the SHM?
>> (In which case the front-end would need to tell the back-end how big the
>> state SHM needs to be.)
> How does this work for iterative live migration?
Right, probably not at all. :)
Hanna