From: Jason Wang <jasowang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>,
Cindy Lu <lulu@redhat.com>,
Stefano Garzarella <sgarzare@redhat.com>,
qemu-level <qemu-devel@nongnu.org>,
Laurent Vivier <lvivier@redhat.com>,
Juan Quintela <quintela@redhat.com>
Subject: Re: Emulating device configuration / max_virtqueue_pairs in vhost-vdpa and vhost-user
Date: Thu, 2 Feb 2023 11:44:57 +0800
Message-ID: <7c076123-42e2-a041-2b5d-95d1afd82143@redhat.com>
In-Reply-To: <CAJaqyWcmxwKSVLY7sDTmYwLdzhVV78XDa5M4FAvmHq4X2Kin8Q@mail.gmail.com>
On 2023/2/1 19:48, Eugenio Perez Martin wrote:
> On Wed, Feb 1, 2023 at 12:20 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>> On Wed, Feb 01, 2023 at 12:14:18PM +0100, Maxime Coquelin wrote:
>>> Thanks Eugenio for working on this.
>>>
>>> On 1/31/23 20:10, Eugenio Perez Martin wrote:
>>>> Hi,
>>>>
>>>> The current approach of offering an emulated CVQ to the guest and
>>>> mapping the commands to vhost-user does not scale well:
>>>> * Some devices already offer a CVQ, so the transformation is redundant.
>>>> * There is no support for commands with variable length (RSS?)
>>>>
>>>> We can solve both of these by offering the CVQ through vhost-user the
>>>> same way vhost-vdpa does. With this approach qemu needs to track the
>>>> commands, for the same reason as vhost-vdpa: qemu needs to track the
>>>> device status for live migration. vhost-user should reuse the same SVQ
>>>> code for this, so we avoid duplication.
>>>>
>>>> One of the challenges here is to know which virtqueue to shadow /
>>>> isolate. The vhost-user device may not have the same number of queues
>>>> as the device frontend:
>>>> * The former depends on the actual vhost-user device; qemu currently
>>>> fetches it with VHOST_USER_GET_QUEUE_NUM.
>>>> * The latter is set by the netdev queues= cmdline parameter in qemu.
>>>>
>>>> For the device, the CVQ is the last virtqueue it offers, but for the
>>>> guest it is the last one advertised in config space.
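>>>>
>>>> For example (illustrative numbers): if the backend reports 8 queue
>>>> pairs via VHOST_USER_GET_QUEUE_NUM but the cmdline sets queues=4,
>>>> the backend's CVQ lives at vq index 16, while the guest, reading
>>>> max_virtqueue_pairs=4 from config space, expects it at index 8.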
>>>>
>>>> Creating a new vhost-user command to decrease that maximum number of
>>>> queues may be an option. But we can do it without adding more
>>>> commands, by remapping the CVQ index at virtqueue setup. I think it
>>>> should be doable using (struct vhost_dev).vq_index and maybe a few
>>>> adjustments here and there.
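>>>>
>>>> A minimal sketch of the remapping (illustrative only; the helper
>>>> name and its plumbing into (struct vhost_dev).vq_index are not
>>>> actual QEMU code):
>>>>
>>>>     /* Map a guest-visible vq index to the backend's vq index.
>>>>      * For virtio-net with N queue pairs, data vqs 0..2N-1 keep
>>>>      * their index; only the CVQ (the last vq on both sides) moves. */
>>>>     static unsigned vhost_user_remap_vq(unsigned guest_idx,
>>>>                                         unsigned guest_pairs,
>>>>                                         unsigned backend_pairs)
>>>>     {
>>>>         if (guest_idx == 2 * guest_pairs) {   /* guest CVQ */
>>>>             return 2 * backend_pairs;         /* backend CVQ */
>>>>         }
>>>>         return guest_idx;
>>>>     }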
>>>>
>>>> Thoughts?
>>> I am fine with both proposals.
>>> I think index remapping will require a bit more rework in the DPDK
>>> Vhost-user library, but nothing insurmountable.
>>>
>>> I am currently working on a PoC adding support for VDUSE in the DPDK
>>> Vhost library, and recently added control queue support. We can reuse it
>>> if we want to prototype your proposal.
>>>
>>> Maxime
>>>
>>>> Thanks!
>>>>
>>
>> technically the backend knows how many vqs there are, the last one is
>> cvq... not sure we need full-blown remapping ...
>>
> The number of queues may not be the same between the cmdline and the device.
>
> If the vhost-user device cmdline asks for more queues than the device
> offers, qemu will print an error. But the reverse (to offer the same
> number of queues as the device has, or fewer) is valid at the
> moment.
>
> If we add the cvq with this scheme, the cvq index will not be the same
> between the guest and the device.
>
> vhost-vdpa totally ignores the queues parameter, so we're losing the
> opportunity to offer a consistent config space in the event of a
> migration. I suggest we act the same way as I'm proposing here for
> vhost-user, so that:
> * QEMU can block the migration in case the destination cannot offer
> the same number of queues.
> * The guest will not see the config space change under its feet.
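>
> Something like the following could register the blocker (a rough
> sketch: the helper and where it is called from are made up, but
> migrate_add_blocker() is QEMU's existing mechanism for this):
>
>     #include "qapi/error.h"
>     #include "migration/blocker.h"
>
>     static Error *n_queues_blocker;
>
>     /* Called once the backend's queue pair count is known. */
>     static int vhost_net_check_n_queues(unsigned backend_pairs,
>                                         unsigned frontend_pairs,
>                                         Error **errp)
>     {
>         if (backend_pairs != frontend_pairs) {
>             error_setg(&n_queues_blocker,
>                        "vhost: backend offers %u queue pairs, "
>                        "frontend configured with %u",
>                        backend_pairs, frontend_pairs);
>             /* Refuse to migrate rather than change the guest's
>              * config space across the migration. */
>             return migrate_add_blocker(n_queues_blocker, errp);
>         }
>         return 0;
>     }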
As we discussed in the past, it would be easier to fail the device
initialization in this case.
Thanks
>
> Now there are other fields in the config space for sure (mtu, rss
> size, etc.), but I think the most complex case is the number of
> queues, because of the CVQ.
>
> Is that clearer?
>
> Thanks!
>