From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Parav Pandit <parav@mellanox.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Juan Quintela <quintela@redhat.com>,
Markus Armbruster <armbru@redhat.com>,
virtualization@lists.linux-foundation.org,
Harpreet Singh Anand <hanand@xilinx.com>,
Xiao W Wang <xiao.w.wang@intel.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Eli Cohen <eli@mellanox.com>, Michael Lilja <ml@napatech.com>,
Stefano Garzarella <sgarzare@redhat.com>
Subject: [RFC v3 05/29] virtio: Add VIRTIO_F_QUEUE_STATE
Date: Wed, 19 May 2021 18:28:39 +0200 [thread overview]
Message-ID: <20210519162903.1172366-6-eperezma@redhat.com> (raw)
In-Reply-To: <20210519162903.1172366-1-eperezma@redhat.com>

Implementation of the device state capability RFC:
https://lists.oasis-open.org/archives/virtio-comment/202012/msg00005.html

With this capability, the vdpa device can have its avail index reset,
so it can start consuming from the shadow virtqueue (SVQ), which starts
with state 0. Another approach would be to make SVQ start forwarding
from the state the device was in when it was stopped, but this device
capability is needed at the destination of live migration anyway.
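
For illustration, the driver-side ordering this implies is sketched
below. This is not part of this patch: cfg stands for the driver's
mapped struct virtio_pci_common_cfg (assuming a kernel header updated
with the same queue_avail_state field), and vp_iowrite16() is the
Linux virtio-pci accessor. The avail state has to be written before
queue_enable, since the device applies it at enable time (see the
virtio_pci_common_write hunk below):

    /* Sketch: restore the avail state of queue qid, then enable it.
     * queue_avail_state is the common cfg field this patch adds. */
    vp_iowrite16(qid, &cfg->queue_select);
    vp_iowrite16(0, &cfg->queue_avail_state); /* SVQ starts from 0 */
    vp_iowrite16(1, &cfg->queue_enable);      /* state applied here */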

The use case is to test SVQ with a virtio-pci vdpa device (vp_vdpa)
under nested virtualization: spawn an L0 qemu with a virtio-net
device, use the vp_vdpa driver to handle it in the guest, and then
spawn an L1 qemu using that vdpa device. When the L1 qemu asks the
device to set a new state through the vdpa ioctl, vp_vdpa should set
each queue's state through the virtio register
VIRTIO_PCI_COMMON_Q_AVAIL_STATE.

Since this is only for testing vhost-vdpa, it is added here before
being proposed to the kernel. No effort is made to check that the
device can actually change its state or its layout, or whether the
device supports changing state at all; these checks will be added in
the future. Also, a modified version of vp_vdpa that allows setting
these fields in the PCI config is needed; a rough sketch of its
set_vq_state callback follows.
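
The sketch below is hypothetical: vp_vdpa_to_mdev() is an assumed
accessor, and struct vdpa_vq_state is assumed to carry a single
avail_index, as it does at the time of writing:

    static int vp_vdpa_set_vq_state(struct vdpa_device *vdpa, u16 qid,
                                    const struct vdpa_vq_state *state)
    {
        struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vdpa);

        /* Select the queue, then write its avail state through the
         * common cfg field added by this series. */
        vp_iowrite16(qid, &mdev->common->queue_select);
        vp_iowrite16(state->avail_index, &mdev->common->queue_avail_state);
        return 0;
    }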

TODO: Check that the feature has been negotiated, and split the
virtio-pci config changes.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
hw/virtio/virtio-pci.h | 1 +
include/hw/virtio/virtio.h | 4 +++-
include/standard-headers/linux/virtio_config.h | 3 +++
include/standard-headers/linux/virtio_pci.h | 2 ++
hw/virtio/virtio-pci.c | 9 +++++++++
5 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
index d7d5d403a9..69e34449cd 100644
--- a/hw/virtio/virtio-pci.h
+++ b/hw/virtio/virtio-pci.h
@@ -115,6 +115,7 @@ typedef struct VirtIOPCIQueue {
   uint32_t desc[2];
   uint32_t avail[2];
   uint32_t used[2];
+  uint16_t state;
 } VirtIOPCIQueue;
 
 struct VirtIOPCIProxy {
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index c2c7cee993..dfcc7d8350 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -289,7 +289,9 @@ typedef struct VirtIORNGConf VirtIORNGConf;
     DEFINE_PROP_BIT64("iommu_platform", _state, _field, \
                       VIRTIO_F_IOMMU_PLATFORM, false), \
     DEFINE_PROP_BIT64("packed", _state, _field, \
-                      VIRTIO_F_RING_PACKED, false)
+                      VIRTIO_F_RING_PACKED, false), \
+    DEFINE_PROP_BIT64("save_restore_q_state", _state, _field, \
+                      VIRTIO_F_QUEUE_STATE, true)
 
 hwaddr virtio_queue_get_desc_addr(VirtIODevice *vdev, int n);
 bool virtio_queue_enabled_legacy(VirtIODevice *vdev, int n);
diff --git a/include/standard-headers/linux/virtio_config.h b/include/standard-headers/linux/virtio_config.h
index 22e3a85f67..59fad3eb45 100644
--- a/include/standard-headers/linux/virtio_config.h
+++ b/include/standard-headers/linux/virtio_config.h
@@ -90,4 +90,7 @@
  * Does the device support Single Root I/O Virtualization?
  */
 #define VIRTIO_F_SR_IOV			37
+
+/* Device supports save and restore of virtqueue state */
+#define VIRTIO_F_QUEUE_STATE		40
 #endif /* _LINUX_VIRTIO_CONFIG_H */
diff --git a/include/standard-headers/linux/virtio_pci.h b/include/standard-headers/linux/virtio_pci.h
index db7a8e2fcb..c8d9802a87 100644
--- a/include/standard-headers/linux/virtio_pci.h
+++ b/include/standard-headers/linux/virtio_pci.h
@@ -164,6 +164,7 @@ struct virtio_pci_common_cfg {
 	uint32_t queue_avail_hi;	/* read-write */
 	uint32_t queue_used_lo;		/* read-write */
 	uint32_t queue_used_hi;		/* read-write */
+	uint16_t queue_avail_state;	/* read-write */
 };
 
 /* Fields in VIRTIO_PCI_CAP_PCI_CFG: */
@@ -202,6 +203,7 @@ struct virtio_pci_cfg_cap {
 #define VIRTIO_PCI_COMMON_Q_AVAILHI	44
 #define VIRTIO_PCI_COMMON_Q_USEDLO	48
 #define VIRTIO_PCI_COMMON_Q_USEDHI	52
+#define VIRTIO_PCI_COMMON_Q_AVAIL_STATE	56
 
 #endif /* VIRTIO_PCI_NO_MODERN */
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 883045a223..ddb6fff098 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1216,6 +1216,9 @@ static uint64_t virtio_pci_common_read(void *opaque, hwaddr addr,
     case VIRTIO_PCI_COMMON_Q_USEDHI:
         val = proxy->vqs[vdev->queue_sel].used[1];
         break;
+    case VIRTIO_PCI_COMMON_Q_AVAIL_STATE:
+        val = virtio_queue_get_last_avail_idx(vdev, vdev->queue_sel);
+        break;
     default:
         val = 0;
     }
@@ -1298,6 +1301,8 @@ static void virtio_pci_common_write(void *opaque, hwaddr addr,
                        proxy->vqs[vdev->queue_sel].avail[0],
                        ((uint64_t)proxy->vqs[vdev->queue_sel].used[1]) << 32 |
                        proxy->vqs[vdev->queue_sel].used[0]);
+            virtio_queue_set_last_avail_idx(vdev, vdev->queue_sel,
+                                            proxy->vqs[vdev->queue_sel].state);
             proxy->vqs[vdev->queue_sel].enabled = 1;
         } else {
             virtio_error(vdev, "wrong value for queue_enable %"PRIx64, val);
@@ -1321,6 +1326,9 @@ static void virtio_pci_common_write(void *opaque, hwaddr addr,
     case VIRTIO_PCI_COMMON_Q_USEDHI:
         proxy->vqs[vdev->queue_sel].used[1] = val;
         break;
+    case VIRTIO_PCI_COMMON_Q_AVAIL_STATE:
+        proxy->vqs[vdev->queue_sel].state = val;
+        break;
     default:
         break;
     }
@@ -1900,6 +1908,7 @@ static void virtio_pci_reset(DeviceState *qdev)
         proxy->vqs[i].desc[0] = proxy->vqs[i].desc[1] = 0;
         proxy->vqs[i].avail[0] = proxy->vqs[i].avail[1] = 0;
         proxy->vqs[i].used[0] = proxy->vqs[i].used[1] = 0;
+        proxy->vqs[i].state = 0;
     }
 
     if (pci_is_express(dev)) {
--
2.27.0