From: Wei Wang <wei.w.wang@intel.com>
To: marcandre.lureau@gmail.com, mst@redhat.com, stefanha@redhat.com,
pbonzini@redhat.com, qemu-devel@nongnu.org,
virtio-dev@lists.oasis-open.org
Cc: Wei Wang <wei.w.wang@intel.com>
Subject: [Qemu-devel] [RESEND Patch v1 16/37] vhost-pci-slave/msg: VHOST_USER_SET_VRING_NUM
Date: Mon, 19 Dec 2016 13:58:51 +0800 [thread overview]
Message-ID: <1482127152-84732-17-git-send-email-wei.w.wang@intel.com> (raw)
In-Reply-To: <1482127152-84732-1-git-send-email-wei.w.wang@intel.com>
The vhost-user protocol does not have a message that tells the slave how
many virtqueues the master device has. The slave side implementation
therefore uses a list to manage the virtqueue info sent from the master.
SET_VRING_NUM is the first piece of virtqueue info passed from the
master, so the slave allocates a node when receiving this message and
inserts it at the head of the list. Subsequent virtqueue info (e.g.
base, addresses) is then recorded in the head node.

The list of virtqueue info is later packed together and delivered to the
driver via the controlq.
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
hw/virtio/vhost-pci-slave.c | 28 ++++++++++++++++++++++++++++
include/hw/virtio/vhost-pci-slave.h | 7 +++++++
2 files changed, 35 insertions(+)
diff --git a/hw/virtio/vhost-pci-slave.c b/hw/virtio/vhost-pci-slave.c
index 9d42566..77a2f68 100644
--- a/hw/virtio/vhost-pci-slave.c
+++ b/hw/virtio/vhost-pci-slave.c
@@ -30,6 +30,7 @@ static void vp_slave_cleanup(void)
{
int ret;
uint32_t i, nregions;
+    PeerVqNode *pvq_node, *pvq_node_next;
nregions = vp_slave->pmem_msg.nregions;
for (i = 0; i < nregions; i++) {
@@ -39,6 +40,13 @@ static void vp_slave_cleanup(void)
}
memory_region_del_subregion(vp_slave->bar_mr, vp_slave->sub_mr + i);
}
+
+    if (!QLIST_EMPTY(&vp_slave->pvq_list)) {
+        QLIST_FOREACH_SAFE(pvq_node, &vp_slave->pvq_list, node, pvq_node_next)
+            g_free(pvq_node);
+    }
+ QLIST_INIT(&vp_slave->pvq_list);
+ vp_slave->pvq_num = 0;
}
static int vp_slave_write(CharBackend *chr_be, VhostUserMsg *msg)
@@ -194,6 +202,20 @@ static int vp_slave_set_mem_table(VhostUserMsg *msg, int *fds, int fd_num)
return 0;
}
+static void vp_slave_alloc_pvq_node(void)
+{
+ PeerVqNode *pvq_node = g_malloc0(sizeof(PeerVqNode));
+ QLIST_INSERT_HEAD(&vp_slave->pvq_list, pvq_node, node);
+ vp_slave->pvq_num++;
+}
+
+static void vp_slave_set_vring_num(VhostUserMsg *msg)
+{
+ PeerVqNode *pvq_node = QLIST_FIRST(&vp_slave->pvq_list);
+
+ pvq_node->vring_num = msg->payload.u64;
+}
+
static int vp_slave_can_read(void *opaque)
{
return VHOST_USER_HDR_SIZE;
@@ -260,6 +282,10 @@ static void vp_slave_read(void *opaque, const uint8_t *buf, int size)
fd_num = qemu_chr_fe_get_msgfds(chr_be, fds, sizeof(fds) / sizeof(int));
vp_slave_set_mem_table(&msg, fds, fd_num);
break;
+ case VHOST_USER_SET_VRING_NUM:
+ vp_slave_alloc_pvq_node();
+ vp_slave_set_vring_num(&msg);
+ break;
default:
error_report("vhost-pci-slave does not support msg request = %d",
msg.request);
@@ -295,6 +321,8 @@ int vhost_pci_slave_init(QemuOpts *opts)
vp_slave->feature_bits = 1ULL << VHOST_USER_F_PROTOCOL_FEATURES;
vp_slave->bar_mr = NULL;
vp_slave->sub_mr = NULL;
+ QLIST_INIT(&vp_slave->pvq_list);
+ vp_slave->pvq_num = 0;
qemu_chr_fe_init(&vp_slave->chr_be, chr, &error_abort);
qemu_chr_fe_set_handlers(&vp_slave->chr_be, vp_slave_can_read,
vp_slave_read, vp_slave_event,
diff --git a/include/hw/virtio/vhost-pci-slave.h b/include/hw/virtio/vhost-pci-slave.h
index 03e23eb..fe4824c 100644
--- a/include/hw/virtio/vhost-pci-slave.h
+++ b/include/hw/virtio/vhost-pci-slave.h
@@ -5,6 +5,11 @@
#include "exec/memory.h"
#include "standard-headers/linux/vhost_pci_net.h"
+typedef struct PeerVqNode {
+ uint32_t vring_num;
+ QLIST_ENTRY(PeerVqNode) node;
+} PeerVqNode;
+
typedef struct VhostPCISlave {
CharBackend chr_be;
uint16_t dev_type;
@@ -16,6 +21,8 @@ typedef struct VhostPCISlave {
void *mr_map_base[MAX_GUEST_REGION];
uint64_t mr_map_size[MAX_GUEST_REGION];
struct peer_mem_msg pmem_msg;
+ uint16_t pvq_num;
+ QLIST_HEAD(, PeerVqNode) pvq_list;
} VhostPCISlave;
extern VhostPCISlave *vp_slave;
--
2.7.4