From: Hawkins Jiawei <yin31149@gmail.com>
To: jasowang@redhat.com, mst@redhat.com, eperezma@redhat.com
Cc: qemu-devel@nongnu.org, yin31149@gmail.com, 18801353760@163.com
Subject: [PATCH v3 0/8] vdpa: Send all CVQ state load commands in parallel
Date: Wed, 19 Jul 2023 15:53:45 +0800 [thread overview]
Message-ID: <cover.1689748694.git.yin31149@gmail.com> (raw)
This patchset allows QEMU to delay polling and checking the device
used buffer until either the SVQ is full or the control commands'
shadow buffers are full, instead of polling and checking immediately
after sending each SVQ control command. This lets QEMU send all the
SVQ control commands in parallel, which yields a noticeable
performance improvement.
I use the vp_vdpa device to simulate a vdpa device, and create 4094
VLANs in the guest to build a test environment for sending multiple
CVQ state load commands. This patch series reduces the load time for
about 4099 CVQ state load commands from 10023 us to 8697 us, a saving
of roughly 0.32 us per command.
Note that this patch series should be applied on top of the
patch "Vhost-vdpa Shadow Virtqueue VLAN support" at [1].
[1]. https://lists.gnu.org/archive/html/qemu-devel/2023-07/msg03719.html
Test Steps
==========
1. regression testing using vp-vdpa device
- For L0 guest, boot QEMU with two virtio-net-pci net devices with the
`ctrl_vq`, `ctrl_rx`, `ctrl_rx_extra` features on, using a command line like:
-device virtio-net-pci,disable-legacy=on,disable-modern=off,
iommu_platform=on,mq=on,ctrl_vq=on,guest_announce=off,
indirect_desc=off,queue_reset=off,ctrl_rx=on,ctrl_rx_extra=on,...
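A fuller, hypothetical shape of that L0 command line, for readers who want
to reproduce the setup; the machine type, vIOMMU, disk, tap backends and
sizing below are illustrative and not prescribed by this series:
```bash
# Hypothetical L0 invocation; only the virtio-net-pci properties shown in the
# fragment above are required by this test, everything else is illustrative.
qemu-system-x86_64 \
    -M q35,accel=kvm,kernel-irqchip=split -smp 4 -m 8G \
    -device intel-iommu,intremap=on,device-iotlb=on,snoop-control=on \
    -drive file=L1.qcow2,format=qcow2 \
    -netdev tap,id=net0,queues=2,vhost=on \
    -device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,mq=on,ctrl_vq=on,guest_announce=off,indirect_desc=off,queue_reset=off,ctrl_rx=on,ctrl_rx_extra=on \
    -netdev tap,id=net1,queues=2,vhost=on \
    -device virtio-net-pci,netdev=net1,disable-legacy=on,disable-modern=off,iommu_platform=on,mq=on,ctrl_vq=on,guest_announce=off,indirect_desc=off,queue_reset=off,ctrl_rx=on,ctrl_rx_extra=on
```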
- For L1 guest, apply the patch series, compile the source code, and
start QEMU with two vdpa devices in svq mode, with the `ctrl_vq`,
`ctrl_rx`, `ctrl_rx_extra` features enabled, using a command line like:
-netdev type=vhost-vdpa,x-svq=true,...
-device virtio-net-pci,mq=on,guest_announce=off,ctrl_vq=on,
ctrl_rx=on,ctrl_rx_extra=on...
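Before those vhost-vdpa netdevs can be opened, the two virtio-net PCI
functions inside the L1 guest need to be bound to the vp_vdpa driver and
exposed as vdpa devices. A minimal sketch of one common way to do this
(the PCI address and device name are illustrative, not part of this series):
```bash
# Inside the L1 guest: rebind one virtio-net PCI function to vp_vdpa and
# create a vdpa device on top of it; repeat for the second function.
# 0000:00:04.0 and vdpa0 are illustrative names.
modprobe vp_vdpa
modprobe vhost_vdpa
echo 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
echo vp_vdpa > /sys/bus/pci/devices/0000:00:04.0/driver_override
echo 0000:00:04.0 > /sys/bus/pci/drivers_probe
vdpa dev add name vdpa0 mgmtdev pci/0000:00:04.0
ls /dev/vhost-vdpa-*    # the vhost-vdpa character device(s) should now exist
```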
- For L2 source guest, run the following bash command:
```bash
#!/bin/bash
# Create 10 * 10 * 7 = 700 macvlan interfaces on top of eth0.
for idx1 in {0..9}
do
  for idx2 in {0..9}
  do
    for idx3 in {0..6}
    do
      ip link add macvlan$idx1$idx2$idx3 link eth0 \
        address 4a:30:10:19:$idx1$idx2:1$idx3 type macvlan mode bridge
      ip link set macvlan$idx1$idx2$idx3 up
    done
  done
done
```
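The script above creates 10 * 10 * 7 = 700 macvlan interfaces; a quick
sanity check before migrating:
```bash
# Count the macvlan interfaces created by the script above; expect 700.
ip -brief link | grep -c '^macvlan'
```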
- Execute the live migration in L2 source monitor
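For example, from the HMP monitor of the L2 source VM, assuming the
destination QEMU runs in the same L1 guest and was started with the matching
-incoming option (the URI is illustrative):
```
(qemu) migrate -d tcp:127.0.0.1:4444
(qemu) info migrate
```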
- Result
* with this series, QEMU should not trigger any error or warning.
2. perf testing using vp-vdpa device
- For L0 guest, boot QEMU with two virtio-net-pci net devices with the
`ctrl_vq`, `ctrl_vlan` features on, using a command line like:
-device virtio-net-pci,disable-legacy=on,disable-modern=off,
iommu_platform=on,mq=on,ctrl_vq=on,guest_announce=off,
indirect_desc=off,queue_reset=off,ctrl_vlan=on,...
- For L1 guest, apply the patch series, then apply an additional
patch to record the load time in microseconds, as follows:
```diff
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 6b958d6363..501b510fd2 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -295,7 +295,10 @@ static int vhost_net_start_one(struct vhost_net *net,
     }
 
     if (net->nc->info->load) {
+        int64_t start_us = g_get_monotonic_time();
         r = net->nc->info->load(net->nc);
+        error_report("vhost_vdpa_net_load() = %ld us",
+                     g_get_monotonic_time() - start_us);
         if (r < 0) {
             goto fail;
         }
```
- For L1 guest, compile the code, and start QEMU with two vdpa devices
in svq mode, with the `ctrl_vq`, `ctrl_vlan` features enabled, using a
command line like:
-netdev type=vhost-vdpa,x-svq=true,...
-device virtio-net-pci,mq=on,guest_announce=off,ctrl_vq=on,
ctrl_vlan=on...
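A fuller, hypothetical shape of that L1 command line; the vhostdev paths
come from the vdpa devices created earlier, and everything not shown in the
fragment above (machine type, disk, sizing) is illustrative:
```bash
# Hypothetical L1 invocation, assuming nested KVM is available; only the
# netdev/device properties shown in the fragment above are required.
qemu-system-x86_64 \
    -M q35,accel=kvm -smp 4 -m 4G \
    -drive file=L2.qcow2,format=qcow2 \
    -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0,x-svq=true \
    -device virtio-net-pci,netdev=vdpa0,mq=on,guest_announce=off,ctrl_vq=on,ctrl_vlan=on \
    -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-1,id=vdpa1,x-svq=true \
    -device virtio-net-pci,netdev=vdpa1,mq=on,guest_announce=off,ctrl_vq=on,ctrl_vlan=on
```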
- For L2 source guest, run the following bash command:
```bash
#!/bin/bash
# Create a VLAN interface for every valid VLAN ID (1-4094) on top of eth0.
for idx in {1..4094}
do
  ip link add link eth0 name vlan$idx type vlan id $idx
done
```
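The script covers every valid VLAN ID (1-4094); a quick check that all
interfaces were created:
```bash
# Count the VLAN interfaces created by the script above; expect 4094.
ip -brief link | grep -c '^vlan'
```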
- Wait for some time, then execute the live migration in the L2 source monitor
- Result
* with this series, QEMU should not trigger any warning
or error except something like "vhost_vdpa_net_load() = 8697 us"
* without this series, QEMU should not trigger any warning
or error except something like "vhost_vdpa_net_load() = 10023 us"
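For reference, the per-command saving quoted at the top of this letter
follows directly from those two numbers:
```bash
# (10023 us - 8697 us) / 4099 commands ~= 0.32 us saved per command
echo "scale=2; (10023 - 8697) / 4099" | bc
```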
ChangeLog
=========
v3:
- refactor vhost_svq_poll() to accept cmds_in_flight,
suggested by Jason and Eugenio
- refactor vhost_vdpa_net_cvq_add() so that the control command buffers
are no longer tied to `s->cvq_cmd_out_buffer` and `s->status`, allowing
it to be reused, suggested by Eugenio
- poll and check only when the SVQ is full or the control commands'
shadow buffers are full
v2: https://lore.kernel.org/all/cover.1683371965.git.yin31149@gmail.com/
- recover accidentally deleted lines
- remove extra newline
- refactor `need_poll_len` to `cmds_in_flight`
- return -EINVAL when vhost_svq_poll() returns 0 or the check
on buffers written by the device fails
- change the type of `in_cursor`, and refactor the
code for updating the cursor
- return directly when vhost_vdpa_net_load_{mac,mq}()
returns a failure in vhost_vdpa_net_load()
v1: https://lore.kernel.org/all/cover.1681732982.git.yin31149@gmail.com/
Hawkins Jiawei (8):
vhost: Add argument to vhost_svq_poll()
vdpa: Use iovec for vhost_vdpa_net_cvq_add()
vhost: Expose vhost_svq_available_slots()
vdpa: Avoid using vhost_vdpa_net_load_*() outside
vhost_vdpa_net_load()
vdpa: Check device ack in vhost_vdpa_net_load_rx_mode()
vdpa: Move vhost_svq_poll() to the caller of vhost_vdpa_net_cvq_add()
vdpa: Add cursors to vhost_vdpa_net_loadx()
vdpa: Send cvq state load commands in parallel
hw/virtio/vhost-shadow-virtqueue.c | 38 ++--
hw/virtio/vhost-shadow-virtqueue.h | 3 +-
net/vhost-vdpa.c | 354 ++++++++++++++++++-----------
3 files changed, 249 insertions(+), 146 deletions(-)
--
2.25.1
Thread overview: 30+ messages
2023-07-19 7:53 Hawkins Jiawei [this message]
2023-07-19 7:53 ` [PATCH v3 1/8] vhost: Add argument to vhost_svq_poll() Hawkins Jiawei
2023-08-18 15:08 ` Eugenio Perez Martin
2023-08-20 2:20 ` Hawkins Jiawei
2023-07-19 7:53 ` [PATCH v3 2/8] vdpa: Use iovec for vhost_vdpa_net_cvq_add() Hawkins Jiawei
2023-08-18 15:23 ` Eugenio Perez Martin
2023-08-20 2:33 ` Hawkins Jiawei
2023-07-19 7:53 ` [PATCH v3 3/8] vhost: Expose vhost_svq_available_slots() Hawkins Jiawei
2023-08-18 15:24 ` Eugenio Perez Martin
2023-07-19 7:53 ` [PATCH v3 4/8] vdpa: Avoid using vhost_vdpa_net_load_*() outside vhost_vdpa_net_load() Hawkins Jiawei
2023-08-18 15:39 ` Eugenio Perez Martin
2023-08-20 2:45 ` Hawkins Jiawei
2023-07-19 7:53 ` [PATCH v3 5/8] vdpa: Check device ack in vhost_vdpa_net_load_rx_mode() Hawkins Jiawei
2023-08-18 15:41 ` Eugenio Perez Martin
2023-07-19 7:53 ` [PATCH v3 6/8] vdpa: Move vhost_svq_poll() to the caller of vhost_vdpa_net_cvq_add() Hawkins Jiawei
2023-08-18 15:48 ` Eugenio Perez Martin
2023-08-20 2:52 ` Hawkins Jiawei
2023-07-19 7:53 ` [PATCH v3 7/8] vdpa: Add cursors to vhost_vdpa_net_loadx() Hawkins Jiawei
2023-07-19 7:53 ` [PATCH v3 8/8] vdpa: Send cvq state load commands in parallel Hawkins Jiawei
2023-08-18 17:27 ` Eugenio Perez Martin
2023-08-20 3:34 ` Hawkins Jiawei
2023-07-19 9:11 ` [PATCH v3 0/8] vdpa: Send all CVQ " Michael S. Tsirkin
2023-07-19 12:35 ` Hawkins Jiawei
2023-07-19 12:44 ` Lei Yang
2023-07-19 15:24 ` Hawkins Jiawei
2023-07-19 22:54 ` Lei Yang
2023-07-20 8:53 ` Lei Yang
2023-07-20 10:32 ` Hawkins Jiawei
2023-07-19 12:46 ` Michael S. Tsirkin
2023-07-19 15:11 ` Hawkins Jiawei