* [Qemu-devel] [PATCH v11 1/7] vhost-user: use VHOST_USER_XXX macro for switch statement
2015-09-23 4:19 [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Yuanhan Liu
@ 2015-09-23 4:19 ` Yuanhan Liu
2015-09-24 10:05 ` Marcel Apfelbaum
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 2/7] vhost-user: add protocol feature negotiation Yuanhan Liu
` (6 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-23 4:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Yuanhan Liu, jasowang, Changchun.ouyang, mst
So that we can let vhost_user_call handle extended requests,
such as VHOST_USER_GET/SET_PROTOCOL_FEATURES, instead of invoking
vhost_user_read/write and constructing the message again ourselves.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
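
(As a reading aid: the point of keying the switch on VHOST_USER_* values is
that a caller can then hand a native vhost-user request straight to
vhost_user_call. A minimal sketch of that calling pattern follows; the helper
name is hypothetical, error handling is reduced to the return value, and the
request constant used here is only introduced in patch 2/7 of this series.)

    /* Sketch only: a protocol-feature query becomes just another
     * vhost_user_call, with no hand-built VhostUserMsg and no direct
     * vhost_user_read/vhost_user_write. */
    static int vhost_user_get_protocol_features(struct vhost_dev *dev,
                                                unsigned long long *features)
    {
        return vhost_user_call(dev, VHOST_USER_GET_PROTOCOL_FEATURES, features);
    }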
---
hw/virtio/vhost-user.c | 38 ++++++++++++++++++++++----------------
1 file changed, 22 insertions(+), 16 deletions(-)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index e7ab829..13677ac 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -193,27 +193,33 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
- msg_request = vhost_user_request_translate(request);
+ /* only translate vhost ioctl requests */
+ if (request > VHOST_USER_MAX) {
+ msg_request = vhost_user_request_translate(request);
+ } else {
+ msg_request = request;
+ }
+
msg.request = msg_request;
msg.flags = VHOST_USER_VERSION;
msg.size = 0;
- switch (request) {
- case VHOST_GET_FEATURES:
+ switch (msg_request) {
+ case VHOST_USER_GET_FEATURES:
need_reply = 1;
break;
- case VHOST_SET_FEATURES:
- case VHOST_SET_LOG_BASE:
+ case VHOST_USER_SET_FEATURES:
+ case VHOST_USER_SET_LOG_BASE:
msg.u64 = *((__u64 *) arg);
msg.size = sizeof(m.u64);
break;
- case VHOST_SET_OWNER:
- case VHOST_RESET_OWNER:
+ case VHOST_USER_SET_OWNER:
+ case VHOST_USER_RESET_OWNER:
break;
- case VHOST_SET_MEM_TABLE:
+ case VHOST_USER_SET_MEM_TABLE:
for (i = 0; i < dev->mem->nregions; ++i) {
struct vhost_memory_region *reg = dev->mem->regions + i;
ram_addr_t ram_addr;
@@ -246,30 +252,30 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
break;
- case VHOST_SET_LOG_FD:
+ case VHOST_USER_SET_LOG_FD:
fds[fd_num++] = *((int *) arg);
break;
- case VHOST_SET_VRING_NUM:
- case VHOST_SET_VRING_BASE:
+ case VHOST_USER_SET_VRING_NUM:
+ case VHOST_USER_SET_VRING_BASE:
memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
msg.size = sizeof(m.state);
break;
- case VHOST_GET_VRING_BASE:
+ case VHOST_USER_GET_VRING_BASE:
memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
msg.size = sizeof(m.state);
need_reply = 1;
break;
- case VHOST_SET_VRING_ADDR:
+ case VHOST_USER_SET_VRING_ADDR:
memcpy(&msg.addr, arg, sizeof(struct vhost_vring_addr));
msg.size = sizeof(m.addr);
break;
- case VHOST_SET_VRING_KICK:
- case VHOST_SET_VRING_CALL:
- case VHOST_SET_VRING_ERR:
+ case VHOST_USER_SET_VRING_KICK:
+ case VHOST_USER_SET_VRING_CALL:
+ case VHOST_USER_SET_VRING_ERR:
file = arg;
msg.u64 = file->index & VHOST_USER_VRING_IDX_MASK;
msg.size = sizeof(m.u64);
--
1.9.0
* Re: [Qemu-devel] [PATCH v11 1/7] vhost-user: use VHOST_USER_XXX macro for switch statement
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 1/7] vhost-user: use VHOST_USER_XXX macro for switch statement Yuanhan Liu
@ 2015-09-24 10:05 ` Marcel Apfelbaum
2015-09-24 10:18 ` Michael S. Tsirkin
0 siblings, 1 reply; 22+ messages in thread
From: Marcel Apfelbaum @ 2015-09-24 10:05 UTC (permalink / raw)
To: Yuanhan Liu, qemu-devel; +Cc: jasowang, Changchun.ouyang, mst
On 09/23/2015 07:19 AM, Yuanhan Liu wrote:
> [patch quoted in full with no inline comments; trimmed]
--
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
* Re: [Qemu-devel] [PATCH v11 1/7] vhost-user: use VHOST_USER_XXX macro for switch statement
2015-09-24 10:05 ` Marcel Apfelbaum
@ 2015-09-24 10:18 ` Michael S. Tsirkin
2015-09-24 10:27 ` Marcel Apfelbaum
0 siblings, 1 reply; 22+ messages in thread
From: Michael S. Tsirkin @ 2015-09-24 10:18 UTC (permalink / raw)
To: Marcel Apfelbaum; +Cc: jasowang, Yuanhan Liu, qemu-devel, Changchun.ouyang
On Thu, Sep 24, 2015 at 01:05:11PM +0300, Marcel Apfelbaum wrote:
> On 09/23/2015 07:19 AM, Yuanhan Liu wrote:
> > [patch quoted in full with no inline comments; trimmed]
>
>
> --
> Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
The -- is the signature separator.
Please don't put text after that - I almost missed it.
* Re: [Qemu-devel] [PATCH v11 1/7] vhost-user: use VHOST_USER_XXX macro for switch statement
2015-09-24 10:18 ` Michael S. Tsirkin
@ 2015-09-24 10:27 ` Marcel Apfelbaum
0 siblings, 0 replies; 22+ messages in thread
From: Marcel Apfelbaum @ 2015-09-24 10:27 UTC (permalink / raw)
To: Michael S. Tsirkin, Marcel Apfelbaum
Cc: jasowang, Yuanhan Liu, Changchun.ouyang, qemu-devel
On 09/24/2015 01:18 PM, Michael S. Tsirkin wrote:
> On Thu, Sep 24, 2015 at 01:05:11PM +0300, Marcel Apfelbaum wrote:
>> On 09/23/2015 07:19 AM, Yuanhan Liu wrote:
>>> [patch quoted in full with no inline comments; trimmed]
>>
>>
>> --
>> Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
>
> The -- is the signature separator.
> Please don't put text after that - I almost missed it.
>
Ooops, I'll be more careful next time (I did it already for this series...)
Thanks,
Marcel
>
* [Qemu-devel] [PATCH v11 2/7] vhost-user: add protocol feature negotiation
2015-09-23 4:19 [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Yuanhan Liu
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 1/7] vhost-user: use VHOST_USER_XXX macro for switch statement Yuanhan Liu
@ 2015-09-23 4:19 ` Yuanhan Liu
2015-09-24 10:13 ` Marcel Apfelbaum
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 3/7] vhost: rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE Yuanhan Liu
` (5 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-23 4:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Yuanhan Liu, jasowang, Changchun.ouyang, mst
From: "Michael S. Tsirkin" <mst@redhat.com>
Support a separate bitmask for vhost-user protocol features,
and messages to get/set protocol features.
Invoke them at init.
No features are defined yet.
[ leverage vhost_user_call for request handling -- Yuanhan Liu ]
Signed-off-by: Michael S. Tsirkin <address@hidden>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
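
(As a reading aid: the negotiation this patch adds boils down to the handshake
sketched below on the master side. It is a condensed rewrite of the
vhost_user_init() hunk further down, not a verbatim copy, and the helper name
is hypothetical.)

    /* Sketch: probe the slave, and only if it advertises
     * VHOST_USER_F_PROTOCOL_FEATURES negotiate the separate
     * protocol-feature bitmask. */
    static int negotiate_protocol_features(struct vhost_dev *dev)
    {
        unsigned long long features;

        if (vhost_user_call(dev, VHOST_USER_GET_FEATURES, &features) < 0) {
            return -1;
        }
        if (!virtio_has_feature(features, VHOST_USER_F_PROTOCOL_FEATURES)) {
            return 0;   /* older slave: nothing to negotiate */
        }
        dev->backend_features |= 1ULL << VHOST_USER_F_PROTOCOL_FEATURES;

        if (vhost_user_call(dev, VHOST_USER_GET_PROTOCOL_FEATURES, &features) < 0) {
            return -1;
        }
        /* Keep only the protocol bits QEMU understands, then ack them. */
        dev->protocol_features = features & VHOST_USER_PROTOCOL_FEATURE_MASK;
        return vhost_user_call(dev, VHOST_USER_SET_PROTOCOL_FEATURES,
                               &dev->protocol_features);
    }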
---
docs/specs/vhost-user.txt | 37 +++++++++++++++++++++++++++++++++++++
hw/net/vhost_net.c | 2 ++
hw/virtio/vhost-user.c | 31 +++++++++++++++++++++++++++++++
include/hw/virtio/vhost.h | 1 +
4 files changed, 71 insertions(+)
diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 650bb18..70da3b1 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -113,6 +113,7 @@ message replies. Most of the requests don't require replies. Here is a list of
the ones that do:
* VHOST_GET_FEATURES
+ * VHOST_GET_PROTOCOL_FEATURES
* VHOST_GET_VRING_BASE
There are several messages that the master sends with file descriptors passed
@@ -127,6 +128,13 @@ in the ancillary data:
If Master is unable to send the full message or receives a wrong reply it will
close the connection. An optional reconnection mechanism can be implemented.
+Any protocol extensions are gated by protocol feature bits,
+which allows full backwards compatibility on both master
+and slave.
+As older slaves don't support negotiating protocol features,
+a feature bit was dedicated for this purpose:
+#define VHOST_USER_F_PROTOCOL_FEATURES 30
+
Message types
-------------
@@ -138,6 +146,8 @@ Message types
Slave payload: u64
Get from the underlying vhost implementation the features bitmask.
+ Feature bit VHOST_USER_F_PROTOCOL_FEATURES signals slave support for
+ VHOST_USER_GET_PROTOCOL_FEATURES and VHOST_USER_SET_PROTOCOL_FEATURES.
* VHOST_USER_SET_FEATURES
@@ -146,6 +156,33 @@ Message types
Master payload: u64
Enable features in the underlying vhost implementation using a bitmask.
+ Feature bit VHOST_USER_F_PROTOCOL_FEATURES signals slave support for
+ VHOST_USER_GET_PROTOCOL_FEATURES and VHOST_USER_SET_PROTOCOL_FEATURES.
+
+ * VHOST_USER_GET_PROTOCOL_FEATURES
+
+ Id: 15
+ Equivalent ioctl: VHOST_GET_FEATURES
+ Master payload: N/A
+ Slave payload: u64
+
+ Get the protocol feature bitmask from the underlying vhost implementation.
+ Only legal if feature bit VHOST_USER_F_PROTOCOL_FEATURES is present in
+ VHOST_USER_GET_FEATURES.
+ Note: slave that reported VHOST_USER_F_PROTOCOL_FEATURES must support
+ this message even before VHOST_USER_SET_FEATURES was called.
+
+ * VHOST_USER_SET_PROTOCOL_FEATURES
+
+ Id: 16
+ Ioctl: VHOST_SET_FEATURES
+ Master payload: u64
+
+ Enable protocol features in the underlying vhost implementation.
+ Only legal if feature bit VHOST_USER_F_PROTOCOL_FEATURES is present in
+ VHOST_USER_GET_FEATURES.
+ Note: slave that reported VHOST_USER_F_PROTOCOL_FEATURES must support
+ this message even before VHOST_USER_SET_FEATURES was called.
* VHOST_USER_SET_OWNER
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 1d76b94..9d32d76 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -152,8 +152,10 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
net->dev.backend_features = qemu_has_vnet_hdr(options->net_backend)
? 0 : (1ULL << VHOST_NET_F_VIRTIO_NET_HDR);
net->backend = r;
+ net->dev.protocol_features = 0;
} else {
net->dev.backend_features = 0;
+ net->dev.protocol_features = 0;
net->backend = -1;
}
net->nc = options->net_backend;
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 13677ac..7fe35c6 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -24,6 +24,8 @@
#include <linux/vhost.h>
#define VHOST_MEMORY_MAX_NREGIONS 8
+#define VHOST_USER_F_PROTOCOL_FEATURES 30
+#define VHOST_USER_PROTOCOL_FEATURE_MASK 0x0ULL
typedef enum VhostUserRequest {
VHOST_USER_NONE = 0,
@@ -41,6 +43,8 @@ typedef enum VhostUserRequest {
VHOST_USER_SET_VRING_KICK = 12,
VHOST_USER_SET_VRING_CALL = 13,
VHOST_USER_SET_VRING_ERR = 14,
+ VHOST_USER_GET_PROTOCOL_FEATURES = 15,
+ VHOST_USER_SET_PROTOCOL_FEATURES = 16,
VHOST_USER_MAX
} VhostUserRequest;
@@ -206,11 +210,13 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
switch (msg_request) {
case VHOST_USER_GET_FEATURES:
+ case VHOST_USER_GET_PROTOCOL_FEATURES:
need_reply = 1;
break;
case VHOST_USER_SET_FEATURES:
case VHOST_USER_SET_LOG_BASE:
+ case VHOST_USER_SET_PROTOCOL_FEATURES:
msg.u64 = *((__u64 *) arg);
msg.size = sizeof(m.u64);
break;
@@ -308,6 +314,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
switch (msg_request) {
case VHOST_USER_GET_FEATURES:
+ case VHOST_USER_GET_PROTOCOL_FEATURES:
if (msg.size != sizeof(m.u64)) {
error_report("Received bad msg size.");
return -1;
@@ -333,10 +340,34 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
static int vhost_user_init(struct vhost_dev *dev, void *opaque)
{
+ unsigned long long features;
+ int err;
+
assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
dev->opaque = opaque;
+ err = vhost_user_call(dev, VHOST_USER_GET_FEATURES, &features);
+ if (err < 0) {
+ return err;
+ }
+
+ if (virtio_has_feature(features, VHOST_USER_F_PROTOCOL_FEATURES)) {
+ dev->backend_features |= 1ULL << VHOST_USER_F_PROTOCOL_FEATURES;
+
+ err = vhost_user_call(dev, VHOST_USER_GET_PROTOCOL_FEATURES, &features);
+ if (err < 0) {
+ return err;
+ }
+
+ dev->protocol_features = features & VHOST_USER_PROTOCOL_FEATURE_MASK;
+ err = vhost_user_call(dev, VHOST_USER_SET_PROTOCOL_FEATURES,
+ &dev->protocol_features);
+ if (err < 0) {
+ return err;
+ }
+ }
+
return 0;
}
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index dd51050..6467c73 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -47,6 +47,7 @@ struct vhost_dev {
unsigned long long features;
unsigned long long acked_features;
unsigned long long backend_features;
+ unsigned long long protocol_features;
bool started;
bool log_enabled;
unsigned long long log_size;
--
1.9.0
* Re: [Qemu-devel] [PATCH v11 2/7] vhost-user: add protocol feature negotiation
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 2/7] vhost-user: add protocol feature negotiation Yuanhan Liu
@ 2015-09-24 10:13 ` Marcel Apfelbaum
2015-09-24 11:25 ` Yuanhan Liu
2015-09-24 11:29 ` Yuanhan Liu
0 siblings, 2 replies; 22+ messages in thread
From: Marcel Apfelbaum @ 2015-09-24 10:13 UTC (permalink / raw)
To: Yuanhan Liu, qemu-devel; +Cc: jasowang, Changchun.ouyang, mst
On 09/23/2015 07:19 AM, Yuanhan Liu wrote:
> From: "Michael S. Tsirkin" <mst@redhat.com>
>
> Support a separate bitmask for vhost-user protocol features,
> and messages to get/set protocol features.
>
> Invoke them at init.
>
> No features are defined yet.
>
> [ leverage vhost_user_call for request handling -- Yuanhan Liu ]
>
> Signed-off-by: Michael S. Tsirkin <address@hidden>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
> docs/specs/vhost-user.txt | 37 +++++++++++++++++++++++++++++++++++++
> hw/net/vhost_net.c | 2 ++
> hw/virtio/vhost-user.c | 31 +++++++++++++++++++++++++++++++
> include/hw/virtio/vhost.h | 1 +
> 4 files changed, 71 insertions(+)
>
> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> index 650bb18..70da3b1 100644
> --- a/docs/specs/vhost-user.txt
> +++ b/docs/specs/vhost-user.txt
> @@ -113,6 +113,7 @@ message replies. Most of the requests don't require replies. Here is a list of
> the ones that do:
>
> * VHOST_GET_FEATURES
> + * VHOST_GET_PROTOCOL_FEATURES
> * VHOST_GET_VRING_BASE
>
> There are several messages that the master sends with file descriptors passed
> @@ -127,6 +128,13 @@ in the ancillary data:
> If Master is unable to send the full message or receives a wrong reply it will
> close the connection. An optional reconnection mechanism can be implemented.
>
> +Any protocol extensions are gated by protocol feature bits,
> +which allows full backwards compatibility on both master
> +and slave.
> +As older slaves don't support negotiating protocol features,
> +a feature bit was dedicated for this purpose:
> +#define VHOST_USER_F_PROTOCOL_FEATURES 30
> +
> Message types
> -------------
>
> @@ -138,6 +146,8 @@ Message types
> Slave payload: u64
>
> Get from the underlying vhost implementation the features bitmask.
> + Feature bit VHOST_USER_F_PROTOCOL_FEATURES signals slave support for
> + VHOST_USER_GET_PROTOCOL_FEATURES and VHOST_USER_SET_PROTOCOL_FEATURES.
>
> * VHOST_USER_SET_FEATURES
>
> @@ -146,6 +156,33 @@ Message types
> Master payload: u64
>
> Enable features in the underlying vhost implementation using a bitmask.
> + Feature bit VHOST_USER_F_PROTOCOL_FEATURES signals slave support for
> + VHOST_USER_GET_PROTOCOL_FEATURES and VHOST_USER_SET_PROTOCOL_FEATURES.
> +
> + * VHOST_USER_GET_PROTOCOL_FEATURES
> +
> + Id: 15
> + Equivalent ioctl: VHOST_GET_FEATURES
VHOST_USER_GET_PROTOCOL_FEATURES does not have an equivalent ioctl
> + Master payload: N/A
> + Slave payload: u64
> +
> + Get the protocol feature bitmask from the underlying vhost implementation.
> + Only legal if feature bit VHOST_USER_F_PROTOCOL_FEATURES is present in
> + VHOST_USER_GET_FEATURES.
> + Note: slave that reported VHOST_USER_F_PROTOCOL_FEATURES must support
> + this message even before VHOST_USER_SET_FEATURES was called.
> +
> + * VHOST_USER_SET_PROTOCOL_FEATURES
> +
> + Id: 16
> + Ioctl: VHOST_SET_FEATURES
Same here
> + Master payload: u64
> +
> + Enable protocol features in the underlying vhost implementation.
> + Only legal if feature bit VHOST_USER_F_PROTOCOL_FEATURES is present in
> + VHOST_USER_GET_FEATURES.
> + Note: slave that reported VHOST_USER_F_PROTOCOL_FEATURES must support
> + this message even before VHOST_USER_SET_FEATURES was called.
>
> * VHOST_USER_SET_OWNER
>
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index 1d76b94..9d32d76 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -152,8 +152,10 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
> net->dev.backend_features = qemu_has_vnet_hdr(options->net_backend)
> ? 0 : (1ULL << VHOST_NET_F_VIRTIO_NET_HDR);
> net->backend = r;
> + net->dev.protocol_features = 0;
> } else {
> net->dev.backend_features = 0;
> + net->dev.protocol_features = 0;
> net->backend = -1;
> }
Maybe protocol_features assignment should be outside the if clause.
(assigned to 0 in both cases)
> [remainder of the patch quoted with no further comments; trimmed]
The above comments can be addressed on top of this series because
they do not affect the correctness of the patch.
--
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
* Re: [Qemu-devel] [PATCH v11 2/7] vhost-user: add protocol feature negotiation
2015-09-24 10:13 ` Marcel Apfelbaum
@ 2015-09-24 11:25 ` Yuanhan Liu
2015-09-24 11:29 ` Yuanhan Liu
1 sibling, 0 replies; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-24 11:25 UTC (permalink / raw)
To: Marcel Apfelbaum; +Cc: jasowang, mst, qemu-devel, Changchun.ouyang
On Thu, Sep 24, 2015 at 01:13:24PM +0300, Marcel Apfelbaum wrote:
> On 09/23/2015 07:19 AM, Yuanhan Liu wrote:
> > [quoted patch context trimmed up to the hunk commented on below]
> >+ * VHOST_USER_GET_PROTOCOL_FEATURES
> >+
> >+ Id: 15
> >+ Equivalent ioctl: VHOST_GET_FEATURES
>
> VHOST_USER_GET_PROTOCOL_FEATURES does not have an equivalent ioctl
Oops, I didn't notice that. Will fix it soon, and thank you for the review!
--yliu
> [remainder of the quote, with no further comments; trimmed]
* Re: [Qemu-devel] [PATCH v11 2/7] vhost-user: add protocol feature negotiation
2015-09-24 10:13 ` Marcel Apfelbaum
2015-09-24 11:25 ` Yuanhan Liu
@ 2015-09-24 11:29 ` Yuanhan Liu
1 sibling, 0 replies; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-24 11:29 UTC (permalink / raw)
To: Marcel Apfelbaum; +Cc: jasowang, mst, qemu-devel, Changchun.ouyang
On Thu, Sep 24, 2015 at 01:13:24PM +0300, Marcel Apfelbaum wrote:
> >diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> >index 1d76b94..9d32d76 100644
> >--- a/hw/net/vhost_net.c
> >+++ b/hw/net/vhost_net.c
> >@@ -152,8 +152,10 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
> > net->dev.backend_features = qemu_has_vnet_hdr(options->net_backend)
> > ? 0 : (1ULL << VHOST_NET_F_VIRTIO_NET_HDR);
> > net->backend = r;
> >+ net->dev.protocol_features = 0;
> > } else {
> > net->dev.backend_features = 0;
> >+ net->dev.protocol_features = 0;
> > net->backend = -1;
> > }
>
> Maybe protocol_features assignment should be outside the if clause.
> (assigned to 0 in both cases)
Yeah, we could do that. However, it seems it would take more effort
handling patch conflicts while rebasing than it's worth. Therefore
I will keep it as is.
--yliu
* [Qemu-devel] [PATCH v11 3/7] vhost: rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE
2015-09-23 4:19 [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Yuanhan Liu
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 1/7] vhost-user: use VHOST_USER_XXX macro for switch statement Yuanhan Liu
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 2/7] vhost-user: add protocol feature negotiation Yuanhan Liu
@ 2015-09-23 4:19 ` Yuanhan Liu
2015-09-24 10:18 ` Marcel Apfelbaum
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 4/7] vhost-user: add VHOST_USER_GET_QUEUE_NUM message Yuanhan Liu
` (4 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-23 4:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Yuanhan Liu, jasowang, Changchun.ouyang, mst
Quote from Michael:
We really should rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE.
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
docs/specs/vhost-user.txt | 4 ++--
hw/net/vhost_net.c | 2 +-
hw/virtio/vhost-user.c | 6 +++---
linux-headers/linux/vhost.h | 2 +-
tests/vhost-user-test.c | 2 +-
5 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 70da3b1..ccbbcbb 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -194,10 +194,10 @@ Message types
as an owner of the session. This can be used on the Slave as a
"session start" flag.
- * VHOST_USER_RESET_OWNER
+ * VHOST_USER_RESET_DEVICE
Id: 4
- Equivalent ioctl: VHOST_RESET_OWNER
+ Equivalent ioctl: VHOST_RESET_DEVICE
Master payload: N/A
Issued when a new connection is about to be closed. The Master will no
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 9d32d76..b7d29b7 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -287,7 +287,7 @@ static void vhost_net_stop_one(struct vhost_net *net,
} else if (net->nc->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
for (file.index = 0; file.index < net->dev.nvqs; ++file.index) {
const VhostOps *vhost_ops = net->dev.vhost_ops;
- int r = vhost_ops->vhost_call(&net->dev, VHOST_RESET_OWNER,
+ int r = vhost_ops->vhost_call(&net->dev, VHOST_RESET_DEVICE,
NULL);
assert(r >= 0);
}
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 7fe35c6..9cb2f52 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -32,7 +32,7 @@ typedef enum VhostUserRequest {
VHOST_USER_GET_FEATURES = 1,
VHOST_USER_SET_FEATURES = 2,
VHOST_USER_SET_OWNER = 3,
- VHOST_USER_RESET_OWNER = 4,
+ VHOST_USER_RESET_DEVICE = 4,
VHOST_USER_SET_MEM_TABLE = 5,
VHOST_USER_SET_LOG_BASE = 6,
VHOST_USER_SET_LOG_FD = 7,
@@ -98,7 +98,7 @@ static unsigned long int ioctl_to_vhost_user_request[VHOST_USER_MAX] = {
VHOST_GET_FEATURES, /* VHOST_USER_GET_FEATURES */
VHOST_SET_FEATURES, /* VHOST_USER_SET_FEATURES */
VHOST_SET_OWNER, /* VHOST_USER_SET_OWNER */
- VHOST_RESET_OWNER, /* VHOST_USER_RESET_OWNER */
+ VHOST_RESET_DEVICE, /* VHOST_USER_RESET_DEVICE */
VHOST_SET_MEM_TABLE, /* VHOST_USER_SET_MEM_TABLE */
VHOST_SET_LOG_BASE, /* VHOST_USER_SET_LOG_BASE */
VHOST_SET_LOG_FD, /* VHOST_USER_SET_LOG_FD */
@@ -222,7 +222,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
break;
case VHOST_USER_SET_OWNER:
- case VHOST_USER_RESET_OWNER:
+ case VHOST_USER_RESET_DEVICE:
break;
case VHOST_USER_SET_MEM_TABLE:
diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
index ead86db..14a0160 100644
--- a/linux-headers/linux/vhost.h
+++ b/linux-headers/linux/vhost.h
@@ -78,7 +78,7 @@ struct vhost_memory {
#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)
/* Give up ownership, and reset the device to default values.
* Allows subsequent call to VHOST_OWNER_SET to succeed. */
-#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)
+#define VHOST_RESET_DEVICE _IO(VHOST_VIRTIO, 0x02)
/* Set up/modify memory layout */
#define VHOST_SET_MEM_TABLE _IOW(VHOST_VIRTIO, 0x03, struct vhost_memory)
diff --git a/tests/vhost-user-test.c b/tests/vhost-user-test.c
index 75fedf0..e301db7 100644
--- a/tests/vhost-user-test.c
+++ b/tests/vhost-user-test.c
@@ -58,7 +58,7 @@ typedef enum VhostUserRequest {
VHOST_USER_GET_FEATURES = 1,
VHOST_USER_SET_FEATURES = 2,
VHOST_USER_SET_OWNER = 3,
- VHOST_USER_RESET_OWNER = 4,
+ VHOST_USER_RESET_DEVICE = 4,
VHOST_USER_SET_MEM_TABLE = 5,
VHOST_USER_SET_LOG_BASE = 6,
VHOST_USER_SET_LOG_FD = 7,
--
1.9.0
* Re: [Qemu-devel] [PATCH v11 3/7] vhost: rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 3/7] vhost: rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE Yuanhan Liu
@ 2015-09-24 10:18 ` Marcel Apfelbaum
0 siblings, 0 replies; 22+ messages in thread
From: Marcel Apfelbaum @ 2015-09-24 10:18 UTC (permalink / raw)
To: Yuanhan Liu, qemu-devel; +Cc: jasowang, Changchun.ouyang, mst
On 09/23/2015 07:19 AM, Yuanhan Liu wrote:
> Quote from Michael:
>
> We really should rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE.
I suggest changing this to a better commit message :)
You can take anything from the mail thread discussion about it.
>
> Suggested-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Besides that:
--
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
> [patch quoted in full with no further comments; trimmed]
* [Qemu-devel] [PATCH v11 4/7] vhost-user: add VHOST_USER_GET_QUEUE_NUM message
2015-09-23 4:19 [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Yuanhan Liu
` (2 preceding siblings ...)
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 3/7] vhost: rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE Yuanhan Liu
@ 2015-09-23 4:19 ` Yuanhan Liu
2015-09-24 10:25 ` Marcel Apfelbaum
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 5/7] vhost: introduce vhost_backend_get_vq_index method Yuanhan Liu
` (3 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-23 4:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Yuanhan Liu, jasowang, Changchun.ouyang, mst
This is for querying how many queues the backend supports if it has mq
support (when the VHOST_USER_PROTOCOL_F_MQ flag is set in the queried
protocol features).
vhost_net_get_max_queues() is the interface to export that value, and
to tell whether the backend supports the number of queues the user
requested, which is done in the following patch.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
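
(As a reading aid: the intended consumer of vhost_net_get_max_queues() looks
roughly like the sketch below; the real check is added by a later patch in
this series, and the helper name and "queues" parameter here are
illustrative.)

    /* Sketch only: reject a user-requested queue count that the backend
     * cannot provide. */
    static bool backend_supports_queues(VHostNetState *net, int queues)
    {
        return (uint64_t)queues <= vhost_net_get_max_queues(net);
    }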
---
v11: define a dummy vhost_net_get_max_queues when !CONFIG_VHOST_NET.
---
docs/specs/vhost-user.txt | 11 +++++++++++
hw/net/vhost_net.c | 12 ++++++++++++
hw/virtio/vhost-user.c | 15 ++++++++++++++-
include/hw/virtio/vhost.h | 1 +
include/net/vhost_net.h | 1 +
5 files changed, 39 insertions(+), 1 deletion(-)
diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index ccbbcbb..43db9b4 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -301,3 +301,14 @@ Message types
Bits (0-7) of the payload contain the vring index. Bit 8 is the
invalid FD flag. This flag is set when there is no file descriptor
in the ancillary data.
+
+ * VHOST_USER_GET_QUEUE_NUM
+
+ Id: 17
+ Equivalent ioctl: N/A
+ Master payload: N/A
+ Slave payload: u64
+
+ Query how many queues the backend supports. This request should be
+ sent only when VHOST_USER_PROTOCOL_F_MQ is set in queried protocol
+ features by VHOST_USER_GET_PROTOCOL_FEATURES.
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index b7d29b7..f663e5a 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -122,6 +122,11 @@ void vhost_net_ack_features(struct vhost_net *net, uint64_t features)
vhost_ack_features(&net->dev, vhost_net_get_feature_bits(net), features);
}
+uint64_t vhost_net_get_max_queues(VHostNetState *net)
+{
+ return net->dev.max_queues;
+}
+
static int vhost_net_get_fd(NetClientState *backend)
{
switch (backend->info->type) {
@@ -144,6 +149,8 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
goto fail;
}
+ net->dev.max_queues = 1;
+
if (backend_kernel) {
r = vhost_net_get_fd(options->net_backend);
if (r < 0) {
@@ -414,6 +421,11 @@ VHostNetState *get_vhost_net(NetClientState *nc)
return vhost_net;
}
#else
+uint64_t vhost_net_get_max_queues(VHostNetState *net)
+{
+ return 1;
+}
+
struct vhost_net *vhost_net_init(VhostNetOptions *options)
{
error_report("vhost-net support is not compiled in");
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 9cb2f52..694fde5 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -25,7 +25,9 @@
#define VHOST_MEMORY_MAX_NREGIONS 8
#define VHOST_USER_F_PROTOCOL_FEATURES 30
-#define VHOST_USER_PROTOCOL_FEATURE_MASK 0x0ULL
+#define VHOST_USER_PROTOCOL_FEATURE_MASK 0x1ULL
+
+#define VHOST_USER_PROTOCOL_F_MQ 0
typedef enum VhostUserRequest {
VHOST_USER_NONE = 0,
@@ -45,6 +47,7 @@ typedef enum VhostUserRequest {
VHOST_USER_SET_VRING_ERR = 14,
VHOST_USER_GET_PROTOCOL_FEATURES = 15,
VHOST_USER_SET_PROTOCOL_FEATURES = 16,
+ VHOST_USER_GET_QUEUE_NUM = 17,
VHOST_USER_MAX
} VhostUserRequest;
@@ -211,6 +214,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
switch (msg_request) {
case VHOST_USER_GET_FEATURES:
case VHOST_USER_GET_PROTOCOL_FEATURES:
+ case VHOST_USER_GET_QUEUE_NUM:
need_reply = 1;
break;
@@ -315,6 +319,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
switch (msg_request) {
case VHOST_USER_GET_FEATURES:
case VHOST_USER_GET_PROTOCOL_FEATURES:
+ case VHOST_USER_GET_QUEUE_NUM:
if (msg.size != sizeof(m.u64)) {
error_report("Received bad msg size.");
return -1;
@@ -366,6 +371,14 @@ static int vhost_user_init(struct vhost_dev *dev, void *opaque)
if (err < 0) {
return err;
}
+
+ /* query the max queues we support if backend supports Multiple Queue */
+ if (dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ)) {
+ err = vhost_user_call(dev, VHOST_USER_GET_QUEUE_NUM, &dev->max_queues);
+ if (err < 0) {
+ return err;
+ }
+ }
}
return 0;
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 6467c73..c3758f3 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -48,6 +48,7 @@ struct vhost_dev {
unsigned long long acked_features;
unsigned long long backend_features;
unsigned long long protocol_features;
+ unsigned long long max_queues;
bool started;
bool log_enabled;
unsigned long long log_size;
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index 840d4b1..8db723e 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -13,6 +13,7 @@ typedef struct VhostNetOptions {
void *opaque;
} VhostNetOptions;
+uint64_t vhost_net_get_max_queues(VHostNetState *net);
struct vhost_net *vhost_net_init(VhostNetOptions *options);
int vhost_net_start(VirtIODevice *dev, NetClientState *ncs, int total_queues);
--
1.9.0
* Re: [Qemu-devel] [PATCH v11 4/7] vhost-user: add VHOST_USER_GET_QUEUE_NUM message
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 4/7] vhost-user: add VHOST_USER_GET_QUEUE_NUM message Yuanhan Liu
@ 2015-09-24 10:25 ` Marcel Apfelbaum
0 siblings, 0 replies; 22+ messages in thread
From: Marcel Apfelbaum @ 2015-09-24 10:25 UTC (permalink / raw)
To: Yuanhan Liu, qemu-devel; +Cc: jasowang, Changchun.ouyang, mst
On 09/23/2015 07:19 AM, Yuanhan Liu wrote:
> This is for querying how many queues the backend supports if it has mq
> support(when VHOST_USER_PROTOCOL_F_MQ flag is set from the quried
/s/quried/queried
Only if you plan to send another version, we can fix it on top.
> protocol features).
> [remainder of the patch quoted with no further comments; trimmed]
--
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [Qemu-devel] [PATCH v11 5/7] vhost: introduce vhost_backend_get_vq_index method
2015-09-23 4:19 [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Yuanhan Liu
` (3 preceding siblings ...)
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 4/7] vhost-user: add VHOST_USER_GET_QUEUE_NUM message Yuanhan Liu
@ 2015-09-23 4:19 ` Yuanhan Liu
2015-09-23 4:20 ` [Qemu-devel] [PATCH v11 6/7] vhost-user: add multiple queue support Yuanhan Liu
` (2 subsequent siblings)
7 siblings, 0 replies; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-23 4:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Yuanhan Liu, jasowang, Changchun.ouyang, mst
Subtracting the base (dev->vq_index) from the idx for vhost-kernel, and
then adding it back for vhost-user, doesn't seem right. This introduces
a new method, vhost_backend_get_vq_index(), for getting the right vq
index for subsequent vhost message calls.
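As an illustration only (not part of the patch), the difference between the
two backends boils down to the following standalone sketch, assuming the
usual two vrings per queue pair:
    #include <assert.h>

    /* dev->vq_index is the absolute index of the device's first vring */
    struct dev { int vq_index; int nvqs; };

    /* vhost-kernel: each vhost_dev has its own kernel fd, so vring
     * indices are relative to the device (0 .. nvqs - 1) */
    static int kernel_get_vq_index(struct dev *d, int idx)
    {
        assert(idx >= d->vq_index && idx < d->vq_index + d->nvqs);
        return idx - d->vq_index;
    }

    /* vhost-user: all queues share one connection, so the slave needs
     * the absolute vring index to tell queues apart */
    static int user_get_vq_index(struct dev *d, int idx)
    {
        assert(idx >= d->vq_index && idx < d->vq_index + d->nvqs);
        return idx;
    }

    int main(void)
    {
        struct dev d = { .vq_index = 4, .nvqs = 2 };   /* queue pair 2 */

        assert(kernel_get_vq_index(&d, 5) == 1);   /* relative */
        assert(user_get_vq_index(&d, 5) == 5);     /* absolute */
        return 0;
    }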
Suggested-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
---
hw/virtio/vhost-backend.c | 10 +++++++++-
hw/virtio/vhost-user.c | 12 ++++++++++--
hw/virtio/vhost.c | 15 ++++++---------
include/hw/virtio/vhost-backend.h | 2 ++
4 files changed, 27 insertions(+), 12 deletions(-)
diff --git a/hw/virtio/vhost-backend.c b/hw/virtio/vhost-backend.c
index 4d68a27..72d1392 100644
--- a/hw/virtio/vhost-backend.c
+++ b/hw/virtio/vhost-backend.c
@@ -42,11 +42,19 @@ static int vhost_kernel_cleanup(struct vhost_dev *dev)
return close(fd);
}
+static int vhost_kernel_get_vq_index(struct vhost_dev *dev, int idx)
+{
+ assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
+
+ return idx - dev->vq_index;
+}
+
static const VhostOps kernel_ops = {
.backend_type = VHOST_BACKEND_TYPE_KERNEL,
.vhost_call = vhost_kernel_call,
.vhost_backend_init = vhost_kernel_init,
- .vhost_backend_cleanup = vhost_kernel_cleanup
+ .vhost_backend_cleanup = vhost_kernel_cleanup,
+ .vhost_backend_get_vq_index = vhost_kernel_get_vq_index,
};
int vhost_set_backend_type(struct vhost_dev *dev, VhostBackendType backend_type)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 694fde5..5018fd6 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -393,9 +393,17 @@ static int vhost_user_cleanup(struct vhost_dev *dev)
return 0;
}
+static int vhost_user_get_vq_index(struct vhost_dev *dev, int idx)
+{
+ assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
+
+ return idx;
+}
+
const VhostOps user_ops = {
.backend_type = VHOST_BACKEND_TYPE_USER,
.vhost_call = vhost_user_call,
.vhost_backend_init = vhost_user_init,
- .vhost_backend_cleanup = vhost_user_cleanup
- };
+ .vhost_backend_cleanup = vhost_user_cleanup,
+ .vhost_backend_get_vq_index = vhost_user_get_vq_index,
+};
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index a08c36b..7a7812d 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -719,7 +719,7 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
{
hwaddr s, l, a;
int r;
- int vhost_vq_index = idx - dev->vq_index;
+ int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, idx);
struct vhost_vring_file file = {
.index = vhost_vq_index
};
@@ -728,7 +728,6 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
};
struct VirtQueue *vvq = virtio_get_queue(vdev, idx);
- assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
vq->num = state.num = virtio_queue_get_num(vdev, idx);
r = dev->vhost_ops->vhost_call(dev, VHOST_SET_VRING_NUM, &state);
@@ -822,12 +821,12 @@ static void vhost_virtqueue_stop(struct vhost_dev *dev,
struct vhost_virtqueue *vq,
unsigned idx)
{
- int vhost_vq_index = idx - dev->vq_index;
+ int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, idx);
struct vhost_vring_state state = {
.index = vhost_vq_index,
};
int r;
- assert(idx >= dev->vq_index && idx < dev->vq_index + dev->nvqs);
+
r = dev->vhost_ops->vhost_call(dev, VHOST_GET_VRING_BASE, &state);
if (r < 0) {
fprintf(stderr, "vhost VQ %d ring restore failed: %d\n", idx, r);
@@ -1066,17 +1065,15 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
{
struct VirtQueue *vvq = virtio_get_queue(vdev, n);
int r, index = n - hdev->vq_index;
+ struct vhost_vring_file file;
- assert(n >= hdev->vq_index && n < hdev->vq_index + hdev->nvqs);
-
- struct vhost_vring_file file = {
- .index = index
- };
if (mask) {
file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
} else {
file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
}
+
+ file.index = hdev->vhost_ops->vhost_backend_get_vq_index(hdev, n);
r = hdev->vhost_ops->vhost_call(hdev, VHOST_SET_VRING_CALL, &file);
assert(r >= 0);
}
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index e472f29..e1dfc6d 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -24,12 +24,14 @@ typedef int (*vhost_call)(struct vhost_dev *dev, unsigned long int request,
void *arg);
typedef int (*vhost_backend_init)(struct vhost_dev *dev, void *opaque);
typedef int (*vhost_backend_cleanup)(struct vhost_dev *dev);
+typedef int (*vhost_backend_get_vq_index)(struct vhost_dev *dev, int idx);
typedef struct VhostOps {
VhostBackendType backend_type;
vhost_call vhost_call;
vhost_backend_init vhost_backend_init;
vhost_backend_cleanup vhost_backend_cleanup;
+ vhost_backend_get_vq_index vhost_backend_get_vq_index;
} VhostOps;
extern const VhostOps user_ops;
--
1.9.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [Qemu-devel] [PATCH v11 6/7] vhost-user: add multiple queue support
2015-09-23 4:19 [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Yuanhan Liu
` (4 preceding siblings ...)
2015-09-23 4:19 ` [Qemu-devel] [PATCH v11 5/7] vhost: introduce vhost_backend_get_vq_index method Yuanhan Liu
@ 2015-09-23 4:20 ` Yuanhan Liu
2015-09-23 6:56 ` Yuanhan Liu
2015-09-23 4:20 ` [Qemu-devel] [PATCH v11 7/7] vhost-user: add a new message to disable/enable a specific virt queue Yuanhan Liu
2015-09-24 10:03 ` [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Marcel Apfelbaum
7 siblings, 1 reply; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-23 4:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Yuanhan Liu, mst, jasowang, Changchun.ouyang, Changchun Ouyang
From: Changchun Ouyang <changchun.ouyang@intel.com>
This patch is initially based on a patch from Nikolay Nikolaev.
This patch adds vhost-user multiple queue support, by creating an nc
and vhost_net pair for each queue.
QEMU exits if it finds that the backend can't support the number of requested
queues (given with the queues=# option). The max number is queried by a
new message, VHOST_USER_GET_QUEUE_NUM, which is sent only when the protocol
feature VHOST_USER_PROTOCOL_F_MQ has been negotiated first.
The max queue check is done at the vhost-user initiation stage. We initiate
one queue first, which, in the meantime, also fetches the max_queues the
backend supports.
In an older version, it was reported that some messages were sent more times
than necessary. Here we came to an agreement with Michael that we could
categorize vhost-user messages into 2 types: non-vring specific messages,
which should be sent only once, and vring specific messages, which should
be sent per queue.
Here I introduced a helper function, vhost_user_one_time_request(), which
lists the following messages as non-vring specific messages:
VHOST_USER_SET_OWNER
VHOST_USER_RESET_DEVICE
VHOST_USER_SET_MEM_TABLE
VHOST_USER_GET_QUEUE_NUM
These messages are simply ignored when they are not being sent for the first
time.
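Purely as an illustration (not part of this series; only the feature bit
values match the ones defined in the patches below), the overall queue-count
check amounts to something like:
    #include <stdbool.h>
    #include <stdint.h>

    #define VHOST_USER_F_PROTOCOL_FEATURES 30
    #define VHOST_USER_PROTOCOL_F_MQ        0

    /* Can 'requested' queue pairs be used, given what the slave advertised?
     * max_queues is the reply to VHOST_USER_GET_QUEUE_NUM (1 if never sent). */
    static bool mq_supported(uint64_t features, uint64_t protocol_features,
                             uint64_t max_queues, unsigned requested)
    {
        if (!(features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES))) {
            max_queues = 1;     /* no protocol features, so no MQ extension */
        } else if (!(protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ))) {
            max_queues = 1;     /* MQ not negotiated, GET_QUEUE_NUM not sent */
        }
        return requested <= max_queues;
    }

    int main(void)
    {
        /* slave with protocol features, F_MQ set, 8 queue pairs max */
        return mq_supported(1ULL << 30, 1ULL << 0, 8, 4) ? 0 : 1;
    }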
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
v11: invoke qmp_set_link() directly -- suggested by Jason Wang
v10: don't treat VHOST_USER_SET/GET_[PROTOCOL]_FEATURES as one time
requests, as the two feature bits need to be stored per-device.
v9: as suggested by Jason Wang, we could invoke qemu_chr_add_handlers()
once only, and invoke qemu_find_net_clients_except() in the handler
to gather all related ncs, for initiating all queues. This addresses
a hidden bug in an older version in a more QEMU-like way.
v8: set net->dev.vq_index correctly inside vhost_net_init() based on the
value from net->nc->queue_index.
---
docs/specs/vhost-user.txt | 15 +++++
hw/net/vhost_net.c | 10 ++--
hw/virtio/vhost-user.c | 22 ++++++++
hw/virtio/vhost.c | 5 +-
net/vhost-user.c | 141 ++++++++++++++++++++++++++++++----------------
qapi-schema.json | 6 +-
qemu-options.hx | 5 +-
7 files changed, 147 insertions(+), 57 deletions(-)
diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 43db9b4..cfc9d41 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -135,6 +135,21 @@ As older slaves don't support negotiating protocol features,
a feature bit was dedicated for this purpose:
#define VHOST_USER_F_PROTOCOL_FEATURES 30
+Multiple queue support
+----------------------
+
+Multiple queue is treated as a protocol extension, hence the slave has to
+implement protocol features first. The multiple queues feature is supported
+only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set:
+#define VHOST_USER_PROTOCOL_F_MQ 0
+
+The max number of queues the slave supports can be queried with message
+VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
+requested queues is bigger than that.
+
+As all queues share one connection, the master uses a unique index for each
+queue in the sent message to identify a specified queue.
+
Message types
-------------
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index f663e5a..ad82b5c 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -148,8 +148,11 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
fprintf(stderr, "vhost-net requires net backend to be setup\n");
goto fail;
}
+ net->nc = options->net_backend;
net->dev.max_queues = 1;
+ net->dev.nvqs = 2;
+ net->dev.vqs = net->vqs;
if (backend_kernel) {
r = vhost_net_get_fd(options->net_backend);
@@ -164,11 +167,10 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
net->dev.backend_features = 0;
net->dev.protocol_features = 0;
net->backend = -1;
- }
- net->nc = options->net_backend;
- net->dev.nvqs = 2;
- net->dev.vqs = net->vqs;
+ /* vhost-user needs vq_index to initiate a specific queue pair */
+ net->dev.vq_index = net->nc->queue_index * net->dev.nvqs;
+ }
r = vhost_dev_init(&net->dev, options->opaque,
options->backend_type);
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 5018fd6..e42fde6 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -187,6 +187,19 @@ static int vhost_user_write(struct vhost_dev *dev, VhostUserMsg *msg,
0 : -1;
}
+static bool vhost_user_one_time_request(VhostUserRequest request)
+{
+ switch (request) {
+ case VHOST_USER_SET_OWNER:
+ case VHOST_USER_RESET_DEVICE:
+ case VHOST_USER_SET_MEM_TABLE:
+ case VHOST_USER_GET_QUEUE_NUM:
+ return true;
+ default:
+ return false;
+ }
+}
+
static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
void *arg)
{
@@ -207,6 +220,15 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
msg_request = request;
}
+ /*
+ * For non-vring specific requests, like VHOST_USER_SET_MEM_TABLE,
+ * we just need send it once in the first time. For later such
+ * request, we just ignore it.
+ */
+ if (vhost_user_one_time_request(msg_request) && dev->vq_index != 0) {
+ return 0;
+ }
+
msg.request = msg_request;
msg.flags = VHOST_USER_VERSION;
msg.size = 0;
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 7a7812d..c0ed5b2 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -874,8 +874,9 @@ static void vhost_eventfd_del(MemoryListener *listener,
static int vhost_virtqueue_init(struct vhost_dev *dev,
struct vhost_virtqueue *vq, int n)
{
+ int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, n);
struct vhost_vring_file file = {
- .index = n,
+ .index = vhost_vq_index,
};
int r = event_notifier_init(&vq->masked_notifier, 0);
if (r < 0) {
@@ -926,7 +927,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
}
for (i = 0; i < hdev->nvqs; ++i) {
- r = vhost_virtqueue_init(hdev, hdev->vqs + i, i);
+ r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
if (r < 0) {
goto fail_vq;
}
diff --git a/net/vhost-user.c b/net/vhost-user.c
index 93dcecd..8f354eb 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -14,6 +14,7 @@
#include "sysemu/char.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
+#include "qmp-commands.h"
typedef struct VhostUserState {
NetClientState nc;
@@ -39,37 +40,77 @@ static int vhost_user_running(VhostUserState *s)
return (s->vhost_net) ? 1 : 0;
}
-static int vhost_user_start(VhostUserState *s)
+static void vhost_user_stop(int queues, NetClientState *ncs[])
{
- VhostNetOptions options;
-
- if (vhost_user_running(s)) {
- return 0;
- }
+ VhostUserState *s;
+ int i;
- options.backend_type = VHOST_BACKEND_TYPE_USER;
- options.net_backend = &s->nc;
- options.opaque = s->chr;
+ for (i = 0; i < queues; i++) {
+ assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
- s->vhost_net = vhost_net_init(&options);
+ s = DO_UPCAST(VhostUserState, nc, ncs[i]);
+ if (!vhost_user_running(s)) {
+ continue;
+ }
- return vhost_user_running(s) ? 0 : -1;
+ if (s->vhost_net) {
+ vhost_net_cleanup(s->vhost_net);
+ s->vhost_net = NULL;
+ }
+ }
}
-static void vhost_user_stop(VhostUserState *s)
+static int vhost_user_start(int queues, NetClientState *ncs[])
{
- if (vhost_user_running(s)) {
- vhost_net_cleanup(s->vhost_net);
+ VhostNetOptions options;
+ VhostUserState *s;
+ int max_queues;
+ int i;
+
+ options.backend_type = VHOST_BACKEND_TYPE_USER;
+
+ for (i = 0; i < queues; i++) {
+ assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
+
+ s = DO_UPCAST(VhostUserState, nc, ncs[i]);
+ if (vhost_user_running(s)) {
+ continue;
+ }
+
+ options.net_backend = ncs[i];
+ options.opaque = s->chr;
+ s->vhost_net = vhost_net_init(&options);
+ if (!s->vhost_net) {
+ error_report("failed to init vhost_net for queue %d\n", i);
+ goto err;
+ }
+
+ if (i == 0) {
+ max_queues = vhost_net_get_max_queues(s->vhost_net);
+ if (queues > max_queues) {
+ error_report("you are asking more queues than "
+ "supported: %d\n", max_queues);
+ goto err;
+ }
+ }
}
- s->vhost_net = 0;
+ return 0;
+
+err:
+ vhost_user_stop(i + 1, ncs);
+ return -1;
}
static void vhost_user_cleanup(NetClientState *nc)
{
VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
- vhost_user_stop(s);
+ if (s->vhost_net) {
+ vhost_net_cleanup(s->vhost_net);
+ s->vhost_net = NULL;
+ }
+
qemu_purge_queued_packets(nc);
}
@@ -95,59 +136,61 @@ static NetClientInfo net_vhost_user_info = {
.has_ufo = vhost_user_has_ufo,
};
-static void net_vhost_link_down(VhostUserState *s, bool link_down)
-{
- s->nc.link_down = link_down;
-
- if (s->nc.peer) {
- s->nc.peer->link_down = link_down;
- }
-
- if (s->nc.info->link_status_changed) {
- s->nc.info->link_status_changed(&s->nc);
- }
-
- if (s->nc.peer && s->nc.peer->info->link_status_changed) {
- s->nc.peer->info->link_status_changed(s->nc.peer);
- }
-}
-
static void net_vhost_user_event(void *opaque, int event)
{
- VhostUserState *s = opaque;
+ const char *name = opaque;
+ NetClientState *ncs[MAX_QUEUE_NUM];
+ VhostUserState *s;
+ Error *err = NULL;
+ int queues;
+ queues = qemu_find_net_clients_except(name, ncs,
+ NET_CLIENT_OPTIONS_KIND_NIC,
+ MAX_QUEUE_NUM);
+ s = DO_UPCAST(VhostUserState, nc, ncs[0]);
switch (event) {
case CHR_EVENT_OPENED:
- vhost_user_start(s);
- net_vhost_link_down(s, false);
+ if (vhost_user_start(queues, ncs) < 0) {
+ exit(1);
+ }
+ qmp_set_link(name, true, &err);
error_report("chardev \"%s\" went up", s->chr->label);
break;
case CHR_EVENT_CLOSED:
- net_vhost_link_down(s, true);
- vhost_user_stop(s);
+ qmp_set_link(name, true, &err);
+ vhost_user_stop(queues, ncs);
error_report("chardev \"%s\" went down", s->chr->label);
break;
}
+
+ if (err) {
+ error_report_err(err);
+ }
}
static int net_vhost_user_init(NetClientState *peer, const char *device,
- const char *name, CharDriverState *chr)
+ const char *name, CharDriverState *chr,
+ int queues)
{
NetClientState *nc;
VhostUserState *s;
+ int i;
- nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
+ for (i = 0; i < queues; i++) {
+ nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
- snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
- chr->label);
+ snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
+ i, chr->label);
- s = DO_UPCAST(VhostUserState, nc, nc);
+ /* We don't provide a receive callback */
+ nc->receive_disabled = 1;
+ nc->queue_index = i;
- /* We don't provide a receive callback */
- s->nc.receive_disabled = 1;
- s->chr = chr;
+ s = DO_UPCAST(VhostUserState, nc, nc);
+ s->chr = chr;
+ }
- qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
+ qemu_chr_add_handlers(chr, NULL, NULL, net_vhost_user_event, (void*)name);
return 0;
}
@@ -226,6 +269,7 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
int net_init_vhost_user(const NetClientOptions *opts, const char *name,
NetClientState *peer, Error **errp)
{
+ int queues;
const NetdevVhostUserOptions *vhost_user_opts;
CharDriverState *chr;
@@ -243,6 +287,7 @@ int net_init_vhost_user(const NetClientOptions *opts, const char *name,
return -1;
}
+ queues = vhost_user_opts->has_queues ? vhost_user_opts->queues : 1;
- return net_vhost_user_init(peer, "vhost_user", name, chr);
+ return net_vhost_user_init(peer, "vhost_user", name, chr, queues);
}
diff --git a/qapi-schema.json b/qapi-schema.json
index 527690d..582a817 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -2481,12 +2481,16 @@
#
# @vhostforce: #optional vhost on for non-MSIX virtio guests (default: false).
#
+# @queues: #optional number of queues to be created for multiqueue vhost-user
+# (default: 1) (Since 2.5)
+#
# Since 2.1
##
{ 'struct': 'NetdevVhostUserOptions',
'data': {
'chardev': 'str',
- '*vhostforce': 'bool' } }
+ '*vhostforce': 'bool',
+ '*queues': 'int' } }
##
# @NetClientOptions
diff --git a/qemu-options.hx b/qemu-options.hx
index 7e147b8..328404c 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1990,13 +1990,14 @@ The hubport netdev lets you connect a NIC to a QEMU "vlan" instead of a single
netdev. @code{-net} and @code{-device} with parameter @option{vlan} create the
required hub automatically.
-@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off]
+@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off][,queues=n]
Establish a vhost-user netdev, backed by a chardev @var{id}. The chardev should
be a unix domain socket backed one. The vhost-user uses a specifically defined
protocol to pass vhost ioctl replacement messages to an application on the other
end of the socket. On non-MSIX guests, the feature can be forced with
-@var{vhostforce}.
+@var{vhostforce}. Use 'queues=@var{n}' to specify the number of queues to
+be created for multiqueue vhost-user.
Example:
@example
--
1.9.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [Qemu-devel] [PATCH v11 6/7] vhost-user: add multiple queue support
2015-09-23 4:20 ` [Qemu-devel] [PATCH v11 6/7] vhost-user: add multiple queue support Yuanhan Liu
@ 2015-09-23 6:56 ` Yuanhan Liu
2015-09-24 5:34 ` Jason Wang
0 siblings, 1 reply; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-23 6:56 UTC (permalink / raw)
To: qemu-devel; +Cc: jasowang, Changchun Ouyang, Changchun.ouyang, mst
On Wed, Sep 23, 2015 at 12:20:00PM +0800, Yuanhan Liu wrote:
> From: Changchun Ouyang <changchun.ouyang@intel.com>
>
[...]
> static void net_vhost_user_event(void *opaque, int event)
> {
> - VhostUserState *s = opaque;
> + const char *name = opaque;
> + NetClientState *ncs[MAX_QUEUE_NUM];
> + VhostUserState *s;
> + Error *err = NULL;
> + int queues;
>
> + queues = qemu_find_net_clients_except(name, ncs,
> + NET_CLIENT_OPTIONS_KIND_NIC,
> + MAX_QUEUE_NUM);
> + s = DO_UPCAST(VhostUserState, nc, ncs[0]);
> switch (event) {
> case CHR_EVENT_OPENED:
> - vhost_user_start(s);
> - net_vhost_link_down(s, false);
> + if (vhost_user_start(queues, ncs) < 0) {
> + exit(1);
> + }
> + qmp_set_link(name, true, &err);
> error_report("chardev \"%s\" went up", s->chr->label);
> break;
> case CHR_EVENT_CLOSED:
> - net_vhost_link_down(s, true);
> - vhost_user_stop(s);
> + qmp_set_link(name, true, &err);
> + vhost_user_stop(queues, ncs);
Sigh... A silly Copy & Paste typo. Here is the updated patch:
--yliu
---8<---
From 3793772867024dfabb9eb8eb5c53ae6714f88618 Mon Sep 17 00:00:00 2001
From: Changchun Ouyang <changchun.ouyang@intel.com>
Date: Tue, 15 Sep 2015 14:28:38 +0800
Subject: [PATCH v11 6/7] vhost-user: add multiple queue support
This patch is initially based on a patch from Nikolay Nikolaev.
This patch adds vhost-user multiple queue support, by creating an nc
and vhost_net pair for each queue.
QEMU exits if it finds that the backend can't support the number of requested
queues (given with the queues=# option). The max number is queried by a
new message, VHOST_USER_GET_QUEUE_NUM, which is sent only when the protocol
feature VHOST_USER_PROTOCOL_F_MQ has been negotiated first.
The max queue check is done at the vhost-user initiation stage. We initiate
one queue first, which, in the meantime, also fetches the max_queues the
backend supports.
In an older version, it was reported that some messages were sent more times
than necessary. Here we came to an agreement with Michael that we could
categorize vhost-user messages into 2 types: non-vring specific messages,
which should be sent only once, and vring specific messages, which should
be sent per queue.
Here I introduced a helper function, vhost_user_one_time_request(), which
lists the following messages as non-vring specific messages:
VHOST_USER_SET_OWNER
VHOST_USER_RESET_DEVICE
VHOST_USER_SET_MEM_TABLE
VHOST_USER_GET_QUEUE_NUM
These messages are simply ignored when they are not being sent for the first
time.
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
v11: invoke qmp_set_link() directly -- suggested by Jason Wang
v10: don't treat VHOST_USER_SET/GET_[PROTOCOL]_FEATURES as one time
requests, as the two feature bits need to be stored per-device.
v9: as suggested by Jason Wang, we could invoke qemu_chr_add_handlers()
once only, and invoke qemu_find_net_clients_except() in the handler
to gather all related ncs, for initiating all queues. This addresses
a hidden bug in an older version in a more QEMU-like way.
v8: set net->dev.vq_index correctly inside vhost_net_init() based on the
value from net->nc->queue_index.
---
docs/specs/vhost-user.txt | 15 +++++
hw/net/vhost_net.c | 10 ++--
hw/virtio/vhost-user.c | 22 ++++++++
hw/virtio/vhost.c | 5 +-
net/vhost-user.c | 141 ++++++++++++++++++++++++++++++----------------
qapi-schema.json | 6 +-
qemu-options.hx | 5 +-
7 files changed, 147 insertions(+), 57 deletions(-)
diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 43db9b4..cfc9d41 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -135,6 +135,21 @@ As older slaves don't support negotiating protocol features,
a feature bit was dedicated for this purpose:
#define VHOST_USER_F_PROTOCOL_FEATURES 30
+Multiple queue support
+----------------------
+
+Multiple queue is treated as a protocol extension, hence the slave has to
+implement protocol features first. The multiple queues feature is supported
+only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set:
+#define VHOST_USER_PROTOCOL_F_MQ 0
+
+The max number of queues the slave supports can be queried with message
+VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
+requested queues is bigger than that.
+
+As all queues share one connection, the master uses a unique index for each
+queue in the sent message to identify a specified queue.
+
Message types
-------------
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index f663e5a..ad82b5c 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -148,8 +148,11 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
fprintf(stderr, "vhost-net requires net backend to be setup\n");
goto fail;
}
+ net->nc = options->net_backend;
net->dev.max_queues = 1;
+ net->dev.nvqs = 2;
+ net->dev.vqs = net->vqs;
if (backend_kernel) {
r = vhost_net_get_fd(options->net_backend);
@@ -164,11 +167,10 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
net->dev.backend_features = 0;
net->dev.protocol_features = 0;
net->backend = -1;
- }
- net->nc = options->net_backend;
- net->dev.nvqs = 2;
- net->dev.vqs = net->vqs;
+ /* vhost-user needs vq_index to initiate a specific queue pair */
+ net->dev.vq_index = net->nc->queue_index * net->dev.nvqs;
+ }
r = vhost_dev_init(&net->dev, options->opaque,
options->backend_type);
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 5018fd6..e42fde6 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -187,6 +187,19 @@ static int vhost_user_write(struct vhost_dev *dev, VhostUserMsg *msg,
0 : -1;
}
+static bool vhost_user_one_time_request(VhostUserRequest request)
+{
+ switch (request) {
+ case VHOST_USER_SET_OWNER:
+ case VHOST_USER_RESET_DEVICE:
+ case VHOST_USER_SET_MEM_TABLE:
+ case VHOST_USER_GET_QUEUE_NUM:
+ return true;
+ default:
+ return false;
+ }
+}
+
static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
void *arg)
{
@@ -207,6 +220,15 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
msg_request = request;
}
+ /*
+ * For non-vring specific requests, like VHOST_USER_SET_MEM_TABLE,
+ * we just need send it once in the first time. For later such
+ * request, we just ignore it.
+ */
+ if (vhost_user_one_time_request(msg_request) && dev->vq_index != 0) {
+ return 0;
+ }
+
msg.request = msg_request;
msg.flags = VHOST_USER_VERSION;
msg.size = 0;
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 7a7812d..c0ed5b2 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -874,8 +874,9 @@ static void vhost_eventfd_del(MemoryListener *listener,
static int vhost_virtqueue_init(struct vhost_dev *dev,
struct vhost_virtqueue *vq, int n)
{
+ int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, n);
struct vhost_vring_file file = {
- .index = n,
+ .index = vhost_vq_index,
};
int r = event_notifier_init(&vq->masked_notifier, 0);
if (r < 0) {
@@ -926,7 +927,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
}
for (i = 0; i < hdev->nvqs; ++i) {
- r = vhost_virtqueue_init(hdev, hdev->vqs + i, i);
+ r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
if (r < 0) {
goto fail_vq;
}
diff --git a/net/vhost-user.c b/net/vhost-user.c
index 93dcecd..1abec97 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -14,6 +14,7 @@
#include "sysemu/char.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
+#include "qmp-commands.h"
typedef struct VhostUserState {
NetClientState nc;
@@ -39,37 +40,77 @@ static int vhost_user_running(VhostUserState *s)
return (s->vhost_net) ? 1 : 0;
}
-static int vhost_user_start(VhostUserState *s)
+static void vhost_user_stop(int queues, NetClientState *ncs[])
{
- VhostNetOptions options;
-
- if (vhost_user_running(s)) {
- return 0;
- }
+ VhostUserState *s;
+ int i;
- options.backend_type = VHOST_BACKEND_TYPE_USER;
- options.net_backend = &s->nc;
- options.opaque = s->chr;
+ for (i = 0; i < queues; i++) {
+ assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
- s->vhost_net = vhost_net_init(&options);
+ s = DO_UPCAST(VhostUserState, nc, ncs[i]);
+ if (!vhost_user_running(s)) {
+ continue;
+ }
- return vhost_user_running(s) ? 0 : -1;
+ if (s->vhost_net) {
+ vhost_net_cleanup(s->vhost_net);
+ s->vhost_net = NULL;
+ }
+ }
}
-static void vhost_user_stop(VhostUserState *s)
+static int vhost_user_start(int queues, NetClientState *ncs[])
{
- if (vhost_user_running(s)) {
- vhost_net_cleanup(s->vhost_net);
+ VhostNetOptions options;
+ VhostUserState *s;
+ int max_queues;
+ int i;
+
+ options.backend_type = VHOST_BACKEND_TYPE_USER;
+
+ for (i = 0; i < queues; i++) {
+ assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
+
+ s = DO_UPCAST(VhostUserState, nc, ncs[i]);
+ if (vhost_user_running(s)) {
+ continue;
+ }
+
+ options.net_backend = ncs[i];
+ options.opaque = s->chr;
+ s->vhost_net = vhost_net_init(&options);
+ if (!s->vhost_net) {
+ error_report("failed to init vhost_net for queue %d\n", i);
+ goto err;
+ }
+
+ if (i == 0) {
+ max_queues = vhost_net_get_max_queues(s->vhost_net);
+ if (queues > max_queues) {
+ error_report("you are asking more queues than "
+ "supported: %d\n", max_queues);
+ goto err;
+ }
+ }
}
- s->vhost_net = 0;
+ return 0;
+
+err:
+ vhost_user_stop(i + 1, ncs);
+ return -1;
}
static void vhost_user_cleanup(NetClientState *nc)
{
VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
- vhost_user_stop(s);
+ if (s->vhost_net) {
+ vhost_net_cleanup(s->vhost_net);
+ s->vhost_net = NULL;
+ }
+
qemu_purge_queued_packets(nc);
}
@@ -95,59 +136,61 @@ static NetClientInfo net_vhost_user_info = {
.has_ufo = vhost_user_has_ufo,
};
-static void net_vhost_link_down(VhostUserState *s, bool link_down)
-{
- s->nc.link_down = link_down;
-
- if (s->nc.peer) {
- s->nc.peer->link_down = link_down;
- }
-
- if (s->nc.info->link_status_changed) {
- s->nc.info->link_status_changed(&s->nc);
- }
-
- if (s->nc.peer && s->nc.peer->info->link_status_changed) {
- s->nc.peer->info->link_status_changed(s->nc.peer);
- }
-}
-
static void net_vhost_user_event(void *opaque, int event)
{
- VhostUserState *s = opaque;
+ const char *name = opaque;
+ NetClientState *ncs[MAX_QUEUE_NUM];
+ VhostUserState *s;
+ Error *err = NULL;
+ int queues;
+ queues = qemu_find_net_clients_except(name, ncs,
+ NET_CLIENT_OPTIONS_KIND_NIC,
+ MAX_QUEUE_NUM);
+ s = DO_UPCAST(VhostUserState, nc, ncs[0]);
switch (event) {
case CHR_EVENT_OPENED:
- vhost_user_start(s);
- net_vhost_link_down(s, false);
+ if (vhost_user_start(queues, ncs) < 0) {
+ exit(1);
+ }
+ qmp_set_link(name, true, &err);
error_report("chardev \"%s\" went up", s->chr->label);
break;
case CHR_EVENT_CLOSED:
- net_vhost_link_down(s, true);
- vhost_user_stop(s);
+ qmp_set_link(name, false, &err);
+ vhost_user_stop(queues, ncs);
error_report("chardev \"%s\" went down", s->chr->label);
break;
}
+
+ if (err) {
+ error_report_err(err);
+ }
}
static int net_vhost_user_init(NetClientState *peer, const char *device,
- const char *name, CharDriverState *chr)
+ const char *name, CharDriverState *chr,
+ int queues)
{
NetClientState *nc;
VhostUserState *s;
+ int i;
- nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
+ for (i = 0; i < queues; i++) {
+ nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
- snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
- chr->label);
+ snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
+ i, chr->label);
- s = DO_UPCAST(VhostUserState, nc, nc);
+ /* We don't provide a receive callback */
+ nc->receive_disabled = 1;
+ nc->queue_index = i;
- /* We don't provide a receive callback */
- s->nc.receive_disabled = 1;
- s->chr = chr;
+ s = DO_UPCAST(VhostUserState, nc, nc);
+ s->chr = chr;
+ }
- qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
+ qemu_chr_add_handlers(chr, NULL, NULL, net_vhost_user_event, (void*)name);
return 0;
}
@@ -226,6 +269,7 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
int net_init_vhost_user(const NetClientOptions *opts, const char *name,
NetClientState *peer, Error **errp)
{
+ int queues;
const NetdevVhostUserOptions *vhost_user_opts;
CharDriverState *chr;
@@ -243,6 +287,7 @@ int net_init_vhost_user(const NetClientOptions *opts, const char *name,
return -1;
}
+ queues = vhost_user_opts->has_queues ? vhost_user_opts->queues : 1;
- return net_vhost_user_init(peer, "vhost_user", name, chr);
+ return net_vhost_user_init(peer, "vhost_user", name, chr, queues);
}
diff --git a/qapi-schema.json b/qapi-schema.json
index 527690d..582a817 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -2481,12 +2481,16 @@
#
# @vhostforce: #optional vhost on for non-MSIX virtio guests (default: false).
#
+# @queues: #optional number of queues to be created for multiqueue vhost-user
+# (default: 1) (Since 2.5)
+#
# Since 2.1
##
{ 'struct': 'NetdevVhostUserOptions',
'data': {
'chardev': 'str',
- '*vhostforce': 'bool' } }
+ '*vhostforce': 'bool',
+ '*queues': 'int' } }
##
# @NetClientOptions
diff --git a/qemu-options.hx b/qemu-options.hx
index 7e147b8..328404c 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1990,13 +1990,14 @@ The hubport netdev lets you connect a NIC to a QEMU "vlan" instead of a single
netdev. @code{-net} and @code{-device} with parameter @option{vlan} create the
required hub automatically.
-@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off]
+@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off][,queues=n]
Establish a vhost-user netdev, backed by a chardev @var{id}. The chardev should
be a unix domain socket backed one. The vhost-user uses a specifically defined
protocol to pass vhost ioctl replacement messages to an application on the other
end of the socket. On non-MSIX guests, the feature can be forced with
-@var{vhostforce}.
+@var{vhostforce}. Use 'queues=@var{n}' to specify the number of queues to
+be created for multiqueue vhost-user.
Example:
@example
--
1.9.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [Qemu-devel] [PATCH v11 6/7] vhost-user: add multiple queue support
2015-09-23 6:56 ` Yuanhan Liu
@ 2015-09-24 5:34 ` Jason Wang
2015-09-24 5:57 ` Yuanhan Liu
0 siblings, 1 reply; 22+ messages in thread
From: Jason Wang @ 2015-09-24 5:34 UTC (permalink / raw)
To: Yuanhan Liu, qemu-devel; +Cc: mst, Changchun.ouyang, Changchun Ouyang
On 09/23/2015 02:56 PM, Yuanhan Liu wrote:
> On Wed, Sep 23, 2015 at 12:20:00PM +0800, Yuanhan Liu wrote:
>> From: Changchun Ouyang <changchun.ouyang@intel.com>
>>
> [...]
>> static void net_vhost_user_event(void *opaque, int event)
>> {
>> - VhostUserState *s = opaque;
>> + const char *name = opaque;
>> + NetClientState *ncs[MAX_QUEUE_NUM];
>> + VhostUserState *s;
>> + Error *err = NULL;
>> + int queues;
>>
>> + queues = qemu_find_net_clients_except(name, ncs,
>> + NET_CLIENT_OPTIONS_KIND_NIC,
>> + MAX_QUEUE_NUM);
>> + s = DO_UPCAST(VhostUserState, nc, ncs[0]);
>> switch (event) {
>> case CHR_EVENT_OPENED:
>> - vhost_user_start(s);
>> - net_vhost_link_down(s, false);
>> + if (vhost_user_start(queues, ncs) < 0) {
>> + exit(1);
>> + }
>> + qmp_set_link(name, true, &err);
>> error_report("chardev \"%s\" went up", s->chr->label);
>> break;
>> case CHR_EVENT_CLOSED:
>> - net_vhost_link_down(s, true);
>> - vhost_user_stop(s);
>> + qmp_set_link(name, true, &err);
>> + vhost_user_stop(queues, ncs);
> Sigh... A silly Copy & Paste typo. Here is the udpated patch:
>
> --yliu
>
> ---8<---
> From 3793772867024dfabb9eb8eb5c53ae6714f88618 Mon Sep 17 00:00:00 2001
> From: Changchun Ouyang <changchun.ouyang@intel.com>
> Date: Tue, 15 Sep 2015 14:28:38 +0800
> Subject: [PATCH v11 6/7] vhost-user: add multiple queue support
>
> This patch is initially based a patch from Nikolay Nikolaev.
>
> This patch adds vhost-user multiple queue support, by creating a nc
> and vhost_net pair for each queue.
>
> Qemu exits if find that the backend can't support the number of requested
> queues (by providing queues=# option). The max number is queried by a
> new message, VHOST_USER_GET_QUEUE_NUM, and is sent only when protocol
> feature VHOST_USER_PROTOCOL_F_MQ is present first.
>
> The max queue check is done at vhost-user initiation stage. We initiate
> one queue first, which, in the meantime, also gets the max_queues the
> backend supports.
>
> In older version, it was reported that some messages are sent more times
> than necessary. Here we came an agreement with Michael that we could
> categorize vhost user messages to 2 types: non-vring specific messages,
> which should be sent only once, and vring specific messages, which should
> be sent per queue.
>
> Here I introduced a helper function vhost_user_one_time_request(), which
> lists following messages as non-vring specific messages:
>
> VHOST_USER_SET_OWNER
> VHOST_USER_RESET_DEVICE
> VHOST_USER_SET_MEM_TABLE
> VHOST_USER_GET_QUEUE_NUM
>
> For above messages, we simply ignore them when they are not sent the first
> time.
>
> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>
> ---
> v11: inovke qmp_set_link() directly -- Suggested by Jason Wang
>
> v10: don't treat VHOST_USER_SET/GET_[PROTOCOL]_FEATURES as one time
> request, as the two feature bits need to be stored at per-device.
>
> v9: per suggested by Jason Wang, we could invoke qemu_chr_add_handlers()
> once only, and invoke qemu_find_net_clients_except() at the handler
> to gather all related ncs, for initiating all queues. Which addresses
> a hidden bug in older verion in a more QEMU-like way.
>
> v8: set net->dev.vq_index correctly inside vhost_net_init() based on the
> value from net->nc->queue_index.
> ---
> docs/specs/vhost-user.txt | 15 +++++
> hw/net/vhost_net.c | 10 ++--
> hw/virtio/vhost-user.c | 22 ++++++++
> hw/virtio/vhost.c | 5 +-
> net/vhost-user.c | 141 ++++++++++++++++++++++++++++++----------------
> qapi-schema.json | 6 +-
> qemu-options.hx | 5 +-
> 7 files changed, 147 insertions(+), 57 deletions(-)
Some nitpicks and comments. If you plan to send another version, please
consider fixing them.
Reviewed-by: Jason Wang <jasowang@redhat.com>
Btw, it's better to extend the current vhost-user unit test to support
multiqueue. Please consider doing this on top of this series.
Thanks
>
> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> index 43db9b4..cfc9d41 100644
> --- a/docs/specs/vhost-user.txt
> +++ b/docs/specs/vhost-user.txt
> @@ -135,6 +135,21 @@ As older slaves don't support negotiating protocol features,
> a feature bit was dedicated for this purpose:
> #define VHOST_USER_F_PROTOCOL_FEATURES 30
>
> +Multiple queue support
> +----------------------
> +
> +Multiple queue is treated as a protocol extension, hence the slave has to
> +implement protocol features first. The multiple queues feature is supported
> +only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set:
> +#define VHOST_USER_PROTOCOL_F_MQ 0
> +
> +The max number of queues the slave supports can be queried with message
> +VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
> +requested queues is bigger than that.
> +
> +As all queues share one connection, the master uses a unique index for each
> +queue in the sent message to identify a specified queue.
> +
> Message types
> -------------
>
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index f663e5a..ad82b5c 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -148,8 +148,11 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
> fprintf(stderr, "vhost-net requires net backend to be setup\n");
> goto fail;
> }
> + net->nc = options->net_backend;
>
> net->dev.max_queues = 1;
> + net->dev.nvqs = 2;
> + net->dev.vqs = net->vqs;
>
> if (backend_kernel) {
> r = vhost_net_get_fd(options->net_backend);
> @@ -164,11 +167,10 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
> net->dev.backend_features = 0;
> net->dev.protocol_features = 0;
> net->backend = -1;
> - }
> - net->nc = options->net_backend;
>
> - net->dev.nvqs = 2;
> - net->dev.vqs = net->vqs;
> + /* vhost-user needs vq_index to initiate a specific queue pair */
> + net->dev.vq_index = net->nc->queue_index * net->dev.nvqs;
Better use vhost_net_set_vq_index() here.
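(One possible shape of that helper, purely as an illustration -- the name
follows the suggestion above, the body is an assumption, not code from the
thread:)
    /* hypothetical helper; mirrors the open-coded assignment above */
    static void vhost_net_set_vq_index(struct vhost_net *net, int vq_index)
    {
        net->dev.vq_index = vq_index;
    }

    /* the call site in vhost_net_init() would then read: */
    vhost_net_set_vq_index(net, net->nc->queue_index * net->dev.nvqs);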
> + }
>
> r = vhost_dev_init(&net->dev, options->opaque,
> options->backend_type);
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index 5018fd6..e42fde6 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -187,6 +187,19 @@ static int vhost_user_write(struct vhost_dev *dev, VhostUserMsg *msg,
> 0 : -1;
> }
>
> +static bool vhost_user_one_time_request(VhostUserRequest request)
> +{
> + switch (request) {
> + case VHOST_USER_SET_OWNER:
> + case VHOST_USER_RESET_DEVICE:
> + case VHOST_USER_SET_MEM_TABLE:
> + case VHOST_USER_GET_QUEUE_NUM:
> + return true;
> + default:
> + return false;
> + }
> +}
> +
> static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
> void *arg)
> {
> @@ -207,6 +220,15 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
> msg_request = request;
> }
>
> + /*
> + * For non-vring specific requests, like VHOST_USER_SET_MEM_TABLE,
> + * we just need send it once in the first time. For later such
> + * request, we just ignore it.
> + */
> + if (vhost_user_one_time_request(msg_request) && dev->vq_index != 0) {
> + return 0;
> + }
Looks like vhost_user_dev_request() is a better name here.
> +
> msg.request = msg_request;
> msg.flags = VHOST_USER_VERSION;
> msg.size = 0;
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 7a7812d..c0ed5b2 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -874,8 +874,9 @@ static void vhost_eventfd_del(MemoryListener *listener,
> static int vhost_virtqueue_init(struct vhost_dev *dev,
> struct vhost_virtqueue *vq, int n)
> {
> + int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, n);
> struct vhost_vring_file file = {
> - .index = n,
> + .index = vhost_vq_index,
> };
> int r = event_notifier_init(&vq->masked_notifier, 0);
> if (r < 0) {
> @@ -926,7 +927,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> }
>
> for (i = 0; i < hdev->nvqs; ++i) {
> - r = vhost_virtqueue_init(hdev, hdev->vqs + i, i);
> + r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
> if (r < 0) {
> goto fail_vq;
> }
> diff --git a/net/vhost-user.c b/net/vhost-user.c
> index 93dcecd..1abec97 100644
> --- a/net/vhost-user.c
> +++ b/net/vhost-user.c
> @@ -14,6 +14,7 @@
> #include "sysemu/char.h"
> #include "qemu/config-file.h"
> #include "qemu/error-report.h"
> +#include "qmp-commands.h"
>
> typedef struct VhostUserState {
> NetClientState nc;
> @@ -39,37 +40,77 @@ static int vhost_user_running(VhostUserState *s)
> return (s->vhost_net) ? 1 : 0;
> }
>
> -static int vhost_user_start(VhostUserState *s)
> +static void vhost_user_stop(int queues, NetClientState *ncs[])
> {
> - VhostNetOptions options;
> -
> - if (vhost_user_running(s)) {
> - return 0;
> - }
> + VhostUserState *s;
> + int i;
>
> - options.backend_type = VHOST_BACKEND_TYPE_USER;
> - options.net_backend = &s->nc;
> - options.opaque = s->chr;
> + for (i = 0; i < queues; i++) {
> + assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
>
> - s->vhost_net = vhost_net_init(&options);
> + s = DO_UPCAST(VhostUserState, nc, ncs[i]);
> + if (!vhost_user_running(s)) {
> + continue;
> + }
>
> - return vhost_user_running(s) ? 0 : -1;
> + if (s->vhost_net) {
> + vhost_net_cleanup(s->vhost_net);
> + s->vhost_net = NULL;
> + }
> + }
> }
>
> -static void vhost_user_stop(VhostUserState *s)
> +static int vhost_user_start(int queues, NetClientState *ncs[])
> {
> - if (vhost_user_running(s)) {
> - vhost_net_cleanup(s->vhost_net);
> + VhostNetOptions options;
> + VhostUserState *s;
> + int max_queues;
> + int i;
> +
> + options.backend_type = VHOST_BACKEND_TYPE_USER;
> +
> + for (i = 0; i < queues; i++) {
> + assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
> +
> + s = DO_UPCAST(VhostUserState, nc, ncs[i]);
> + if (vhost_user_running(s)) {
> + continue;
> + }
> +
> + options.net_backend = ncs[i];
> + options.opaque = s->chr;
> + s->vhost_net = vhost_net_init(&options);
> + if (!s->vhost_net) {
> + error_report("failed to init vhost_net for queue %d\n", i);
> + goto err;
> + }
> +
> + if (i == 0) {
> + max_queues = vhost_net_get_max_queues(s->vhost_net);
> + if (queues > max_queues) {
> + error_report("you are asking more queues than "
> + "supported: %d\n", max_queues);
> + goto err;
> + }
> + }
> }
>
> - s->vhost_net = 0;
> + return 0;
> +
> +err:
> + vhost_user_stop(i + 1, ncs);
> + return -1;
> }
>
> static void vhost_user_cleanup(NetClientState *nc)
> {
> VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
>
> - vhost_user_stop(s);
> + if (s->vhost_net) {
> + vhost_net_cleanup(s->vhost_net);
> + s->vhost_net = NULL;
> + }
> +
> qemu_purge_queued_packets(nc);
> }
>
> @@ -95,59 +136,61 @@ static NetClientInfo net_vhost_user_info = {
> .has_ufo = vhost_user_has_ufo,
> };
>
> -static void net_vhost_link_down(VhostUserState *s, bool link_down)
> -{
> - s->nc.link_down = link_down;
> -
> - if (s->nc.peer) {
> - s->nc.peer->link_down = link_down;
> - }
> -
> - if (s->nc.info->link_status_changed) {
> - s->nc.info->link_status_changed(&s->nc);
> - }
> -
> - if (s->nc.peer && s->nc.peer->info->link_status_changed) {
> - s->nc.peer->info->link_status_changed(s->nc.peer);
> - }
> -}
> -
> static void net_vhost_user_event(void *opaque, int event)
> {
> - VhostUserState *s = opaque;
> + const char *name = opaque;
> + NetClientState *ncs[MAX_QUEUE_NUM];
> + VhostUserState *s;
> + Error *err = NULL;
> + int queues;
>
> + queues = qemu_find_net_clients_except(name, ncs,
> + NET_CLIENT_OPTIONS_KIND_NIC,
> + MAX_QUEUE_NUM);
> + s = DO_UPCAST(VhostUserState, nc, ncs[0]);
> switch (event) {
> case CHR_EVENT_OPENED:
> - vhost_user_start(s);
> - net_vhost_link_down(s, false);
> + if (vhost_user_start(queues, ncs) < 0) {
> + exit(1);
How about report a specific error instead of just abort here?
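(For example -- illustrative only, not code from this series:)
    if (vhost_user_start(queues, ncs) < 0) {
        /* say which netdev and how many queues failed before giving up */
        error_report("vhost-user: failed to start '%s' with %d queue(s)",
                     name, queues);
        exit(1);
    }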
> + }
> + qmp_set_link(name, true, &err);
> error_report("chardev \"%s\" went up", s->chr->label);
> break;
> case CHR_EVENT_CLOSED:
> - net_vhost_link_down(s, true);
> - vhost_user_stop(s);
> + qmp_set_link(name, false, &err);
> + vhost_user_stop(queues, ncs);
> error_report("chardev \"%s\" went down", s->chr->label);
> break;
> }
> +
> + if (err) {
> + error_report_err(err);
> + }
> }
>
> static int net_vhost_user_init(NetClientState *peer, const char *device,
> - const char *name, CharDriverState *chr)
> + const char *name, CharDriverState *chr,
> + int queues)
> {
> NetClientState *nc;
> VhostUserState *s;
> + int i;
>
> - nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> + for (i = 0; i < queues; i++) {
> + nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
>
> - snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> - chr->label);
> + snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
> + i, chr->label);
>
> - s = DO_UPCAST(VhostUserState, nc, nc);
> + /* We don't provide a receive callback */
> + nc->receive_disabled = 1;
> + nc->queue_index = i;
>
> - /* We don't provide a receive callback */
> - s->nc.receive_disabled = 1;
> - s->chr = chr;
> + s = DO_UPCAST(VhostUserState, nc, nc);
> + s->chr = chr;
> + }
>
> - qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> + qemu_chr_add_handlers(chr, NULL, NULL, net_vhost_user_event, (void*)name);
>
> return 0;
> }
> @@ -226,6 +269,7 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
> int net_init_vhost_user(const NetClientOptions *opts, const char *name,
> NetClientState *peer, Error **errp)
> {
> + int queues;
> const NetdevVhostUserOptions *vhost_user_opts;
> CharDriverState *chr;
>
> @@ -243,6 +287,7 @@ int net_init_vhost_user(const NetClientOptions *opts, const char *name,
> return -1;
> }
>
> + queues = vhost_user_opts->has_queues ? vhost_user_opts->queues : 1;
>
> - return net_vhost_user_init(peer, "vhost_user", name, chr);
> + return net_vhost_user_init(peer, "vhost_user", name, chr, queues);
> }
> diff --git a/qapi-schema.json b/qapi-schema.json
> index 527690d..582a817 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -2481,12 +2481,16 @@
> #
> # @vhostforce: #optional vhost on for non-MSIX virtio guests (default: false).
> #
> +# @queues: #optional number of queues to be created for multiqueue vhost-user
> +# (default: 1) (Since 2.5)
> +#
> # Since 2.1
> ##
> { 'struct': 'NetdevVhostUserOptions',
> 'data': {
> 'chardev': 'str',
> - '*vhostforce': 'bool' } }
> + '*vhostforce': 'bool',
> + '*queues': 'int' } }
>
> ##
> # @NetClientOptions
> diff --git a/qemu-options.hx b/qemu-options.hx
> index 7e147b8..328404c 100644
> --- a/qemu-options.hx
> +++ b/qemu-options.hx
> @@ -1990,13 +1990,14 @@ The hubport netdev lets you connect a NIC to a QEMU "vlan" instead of a single
> netdev. @code{-net} and @code{-device} with parameter @option{vlan} create the
> required hub automatically.
>
> -@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off]
> +@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off][,queues=n]
>
> Establish a vhost-user netdev, backed by a chardev @var{id}. The chardev should
> be a unix domain socket backed one. The vhost-user uses a specifically defined
> protocol to pass vhost ioctl replacement messages to an application on the other
> end of the socket. On non-MSIX guests, the feature can be forced with
> -@var{vhostforce}.
> +@var{vhostforce}. Use 'queues=@var{n}' to specify the number of queues to
> +be created for multiqueue vhost-user.
>
> Example:
> @example
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [Qemu-devel] [PATCH v11 6/7] vhost-user: add multiple queue support
2015-09-24 5:34 ` Jason Wang
@ 2015-09-24 5:57 ` Yuanhan Liu
2015-09-24 6:15 ` Jason Wang
0 siblings, 1 reply; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-24 5:57 UTC (permalink / raw)
To: Jason Wang, mst; +Cc: qemu-devel, Changchun.ouyang, Changchun Ouyang
On Thu, Sep 24, 2015 at 01:34:31PM +0800, Jason Wang wrote:
>
>
> Some nitpicks and comments. If you plan to send another version, please
> consider to fix them.
I will, but I'd like to hold a while before getting more comments from
Michael; I don't want to repost a whole new version too often for minor
changes.
BTW, Michael, is this patchset ready for merge? If not, please
kindly tell me what is wrong so that I can fix it. TBH, I'd really
like to see it merged soon, as I'm gonna be on holiday for
two weeks starting from this Saturday. I could still reply to emails, even
make patches during that time, but I guess I'll only have limited time for that.
>
> Reviewed-by: Jason Wang <jasowang@redhat.com>
Thank you!
>
> Btw, it's better to extend current vhost-user unit test to support
> multiqueue. Please consider this on top this series.
Yeah, I will. But, may I do that later? (And if possible, after the
merge?)
>
> >
> > diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> > index 43db9b4..cfc9d41 100644
> > --- a/docs/specs/vhost-user.txt
> > +++ b/docs/specs/vhost-user.txt
> > @@ -135,6 +135,21 @@ As older slaves don't support negotiating protocol features,
> > a feature bit was dedicated for this purpose:
> > #define VHOST_USER_F_PROTOCOL_FEATURES 30
> >
> > +Multiple queue support
> > +----------------------
> > +
> > +Multiple queue is treated as a protocol extension, hence the slave has to
> > +implement protocol features first. The multiple queues feature is supported
> > +only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set:
> > +#define VHOST_USER_PROTOCOL_F_MQ 0
> > +
> > +The max number of queues the slave supports can be queried with message
> > +VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
> > +requested queues is bigger than that.
> > +
> > +As all queues share one connection, the master uses a unique index for each
> > +queue in the sent message to identify a specified queue.
> > +
> > Message types
> > -------------
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index f663e5a..ad82b5c 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -148,8 +148,11 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
> > fprintf(stderr, "vhost-net requires net backend to be setup\n");
> > goto fail;
> > }
> > + net->nc = options->net_backend;
> >
> > net->dev.max_queues = 1;
> > + net->dev.nvqs = 2;
> > + net->dev.vqs = net->vqs;
> >
> > if (backend_kernel) {
> > r = vhost_net_get_fd(options->net_backend);
> > @@ -164,11 +167,10 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
> > net->dev.backend_features = 0;
> > net->dev.protocol_features = 0;
> > net->backend = -1;
> > - }
> > - net->nc = options->net_backend;
> >
> > - net->dev.nvqs = 2;
> > - net->dev.vqs = net->vqs;
> > + /* vhost-user needs vq_index to initiate a specific queue pair */
> > + net->dev.vq_index = net->nc->queue_index * net->dev.nvqs;
>
> Better use vhost_net_set_vq_index() here.
I could do that.
>
> > + }
> >
> > r = vhost_dev_init(&net->dev, options->opaque,
> > options->backend_type);
> > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > index 5018fd6..e42fde6 100644
> > --- a/hw/virtio/vhost-user.c
> > +++ b/hw/virtio/vhost-user.c
> > @@ -187,6 +187,19 @@ static int vhost_user_write(struct vhost_dev *dev, VhostUserMsg *msg,
> > 0 : -1;
> > }
> >
> > +static bool vhost_user_one_time_request(VhostUserRequest request)
> > +{
> > + switch (request) {
> > + case VHOST_USER_SET_OWNER:
> > + case VHOST_USER_RESET_DEVICE:
> > + case VHOST_USER_SET_MEM_TABLE:
> > + case VHOST_USER_GET_QUEUE_NUM:
> > + return true;
> > + default:
> > + return false;
> > + }
> > +}
> > +
> > static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
> > void *arg)
> > {
> > @@ -207,6 +220,15 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
> > msg_request = request;
> > }
> >
> > + /*
> > + * For non-vring specific requests, like VHOST_USER_SET_MEM_TABLE,
> > + * we just need send it once in the first time. For later such
> > + * request, we just ignore it.
> > + */
> > + if (vhost_user_one_time_request(msg_request) && dev->vq_index != 0) {
> > + return 0;
> > + }
>
> Looks like vhost_user_dev_request() is a better name here.
Yeah, indeed. Thanks.
>
> > +
> > msg.request = msg_request;
> > msg.flags = VHOST_USER_VERSION;
> > msg.size = 0;
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 7a7812d..c0ed5b2 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -874,8 +874,9 @@ static void vhost_eventfd_del(MemoryListener *listener,
> > static int vhost_virtqueue_init(struct vhost_dev *dev,
> > struct vhost_virtqueue *vq, int n)
> > {
> > + int vhost_vq_index = dev->vhost_ops->vhost_backend_get_vq_index(dev, n);
> > struct vhost_vring_file file = {
> > - .index = n,
> > + .index = vhost_vq_index,
> > };
> > int r = event_notifier_init(&vq->masked_notifier, 0);
> > if (r < 0) {
> > @@ -926,7 +927,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > }
> >
> > for (i = 0; i < hdev->nvqs; ++i) {
> > - r = vhost_virtqueue_init(hdev, hdev->vqs + i, i);
> > + r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
> > if (r < 0) {
> > goto fail_vq;
> > }
> > diff --git a/net/vhost-user.c b/net/vhost-user.c
> > index 93dcecd..1abec97 100644
> > --- a/net/vhost-user.c
> > +++ b/net/vhost-user.c
> > @@ -14,6 +14,7 @@
> > #include "sysemu/char.h"
> > #include "qemu/config-file.h"
> > #include "qemu/error-report.h"
> > +#include "qmp-commands.h"
> >
> > typedef struct VhostUserState {
> > NetClientState nc;
> > @@ -39,37 +40,77 @@ static int vhost_user_running(VhostUserState *s)
> > return (s->vhost_net) ? 1 : 0;
> > }
> >
> > -static int vhost_user_start(VhostUserState *s)
> > +static void vhost_user_stop(int queues, NetClientState *ncs[])
> > {
> > - VhostNetOptions options;
> > -
> > - if (vhost_user_running(s)) {
> > - return 0;
> > - }
> > + VhostUserState *s;
> > + int i;
> >
> > - options.backend_type = VHOST_BACKEND_TYPE_USER;
> > - options.net_backend = &s->nc;
> > - options.opaque = s->chr;
> > + for (i = 0; i < queues; i++) {
> > + assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
> >
> > - s->vhost_net = vhost_net_init(&options);
> > + s = DO_UPCAST(VhostUserState, nc, ncs[i]);
> > + if (!vhost_user_running(s)) {
> > + continue;
> > + }
> >
> > - return vhost_user_running(s) ? 0 : -1;
> > + if (s->vhost_net) {
> > + vhost_net_cleanup(s->vhost_net);
> > + s->vhost_net = NULL;
> > + }
> > + }
> > }
> >
> > -static void vhost_user_stop(VhostUserState *s)
> > +static int vhost_user_start(int queues, NetClientState *ncs[])
> > {
> > - if (vhost_user_running(s)) {
> > - vhost_net_cleanup(s->vhost_net);
> > + VhostNetOptions options;
> > + VhostUserState *s;
> > + int max_queues;
> > + int i;
> > +
> > + options.backend_type = VHOST_BACKEND_TYPE_USER;
> > +
> > + for (i = 0; i < queues; i++) {
> > + assert (ncs[i]->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER);
> > +
> > + s = DO_UPCAST(VhostUserState, nc, ncs[i]);
> > + if (vhost_user_running(s)) {
> > + continue;
> > + }
> > +
> > + options.net_backend = ncs[i];
> > + options.opaque = s->chr;
> > + s->vhost_net = vhost_net_init(&options);
> > + if (!s->vhost_net) {
> > + error_report("failed to init vhost_net for queue %d\n", i);
> > + goto err;
> > + }
> > +
> > + if (i == 0) {
> > + max_queues = vhost_net_get_max_queues(s->vhost_net);
> > + if (queues > max_queues) {
> > + error_report("you are asking more queues than "
> > + "supported: %d\n", max_queues);
> > + goto err;
> > + }
> > + }
> > }
> >
> > - s->vhost_net = 0;
> > + return 0;
> > +
> > +err:
> > + vhost_user_stop(i + 1, ncs);
> > + return -1;
> > }
> >
> > static void vhost_user_cleanup(NetClientState *nc)
> > {
> > VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
> >
> > - vhost_user_stop(s);
> > + if (s->vhost_net) {
> > + vhost_net_cleanup(s->vhost_net);
> > + s->vhost_net = NULL;
> > + }
> > +
> > qemu_purge_queued_packets(nc);
> > }
> >
> > @@ -95,59 +136,61 @@ static NetClientInfo net_vhost_user_info = {
> > .has_ufo = vhost_user_has_ufo,
> > };
> >
> > -static void net_vhost_link_down(VhostUserState *s, bool link_down)
> > -{
> > - s->nc.link_down = link_down;
> > -
> > - if (s->nc.peer) {
> > - s->nc.peer->link_down = link_down;
> > - }
> > -
> > - if (s->nc.info->link_status_changed) {
> > - s->nc.info->link_status_changed(&s->nc);
> > - }
> > -
> > - if (s->nc.peer && s->nc.peer->info->link_status_changed) {
> > - s->nc.peer->info->link_status_changed(s->nc.peer);
> > - }
> > -}
> > -
> > static void net_vhost_user_event(void *opaque, int event)
> > {
> > - VhostUserState *s = opaque;
> > + const char *name = opaque;
> > + NetClientState *ncs[MAX_QUEUE_NUM];
> > + VhostUserState *s;
> > + Error *err = NULL;
> > + int queues;
> >
> > + queues = qemu_find_net_clients_except(name, ncs,
> > + NET_CLIENT_OPTIONS_KIND_NIC,
> > + MAX_QUEUE_NUM);
> > + s = DO_UPCAST(VhostUserState, nc, ncs[0]);
> > switch (event) {
> > case CHR_EVENT_OPENED:
> > - vhost_user_start(s);
> > - net_vhost_link_down(s, false);
> > + if (vhost_user_start(queues, ncs) < 0) {
> > + exit(1);
>
> How about reporting a specific error instead of just aborting here?
There is an error report at vhost_user_start():
error_report("you are asking more queues than "
"supported: %d\n", max_queues);
Or, do you mean something else?
--yliu
>
> > + }
> > + qmp_set_link(name, true, &err);
> > error_report("chardev \"%s\" went up", s->chr->label);
> > break;
> > case CHR_EVENT_CLOSED:
> > - net_vhost_link_down(s, true);
> > - vhost_user_stop(s);
> > + qmp_set_link(name, false, &err);
> > + vhost_user_stop(queues, ncs);
> > error_report("chardev \"%s\" went down", s->chr->label);
> > break;
> > }
> > +
> > + if (err) {
> > + error_report_err(err);
> > + }
> > }
> >
> > static int net_vhost_user_init(NetClientState *peer, const char *device,
> > - const char *name, CharDriverState *chr)
> > + const char *name, CharDriverState *chr,
> > + int queues)
> > {
> > NetClientState *nc;
> > VhostUserState *s;
> > + int i;
> >
> > - nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > + for (i = 0; i < queues; i++) {
> > + nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> >
> > - snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> > - chr->label);
> > + snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
> > + i, chr->label);
> >
> > - s = DO_UPCAST(VhostUserState, nc, nc);
> > + /* We don't provide a receive callback */
> > + nc->receive_disabled = 1;
> > + nc->queue_index = i;
> >
> > - /* We don't provide a receive callback */
> > - s->nc.receive_disabled = 1;
> > - s->chr = chr;
> > + s = DO_UPCAST(VhostUserState, nc, nc);
> > + s->chr = chr;
> > + }
> >
> > - qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > + qemu_chr_add_handlers(chr, NULL, NULL, net_vhost_user_event, (void*)name);
> >
> > return 0;
> > }
> > @@ -226,6 +269,7 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
> > int net_init_vhost_user(const NetClientOptions *opts, const char *name,
> > NetClientState *peer, Error **errp)
> > {
> > + int queues;
> > const NetdevVhostUserOptions *vhost_user_opts;
> > CharDriverState *chr;
> >
> > @@ -243,6 +287,7 @@ int net_init_vhost_user(const NetClientOptions *opts, const char *name,
> > return -1;
> > }
> >
> > + queues = vhost_user_opts->has_queues ? vhost_user_opts->queues : 1;
> >
> > - return net_vhost_user_init(peer, "vhost_user", name, chr);
> > + return net_vhost_user_init(peer, "vhost_user", name, chr, queues);
> > }
> > diff --git a/qapi-schema.json b/qapi-schema.json
> > index 527690d..582a817 100644
> > --- a/qapi-schema.json
> > +++ b/qapi-schema.json
> > @@ -2481,12 +2481,16 @@
> > #
> > # @vhostforce: #optional vhost on for non-MSIX virtio guests (default: false).
> > #
> > +# @queues: #optional number of queues to be created for multiqueue vhost-user
> > +# (default: 1) (Since 2.5)
> > +#
> > # Since 2.1
> > ##
> > { 'struct': 'NetdevVhostUserOptions',
> > 'data': {
> > 'chardev': 'str',
> > - '*vhostforce': 'bool' } }
> > + '*vhostforce': 'bool',
> > + '*queues': 'int' } }
> >
> > ##
> > # @NetClientOptions
> > diff --git a/qemu-options.hx b/qemu-options.hx
> > index 7e147b8..328404c 100644
> > --- a/qemu-options.hx
> > +++ b/qemu-options.hx
> > @@ -1990,13 +1990,14 @@ The hubport netdev lets you connect a NIC to a QEMU "vlan" instead of a single
> > netdev. @code{-net} and @code{-device} with parameter @option{vlan} create the
> > required hub automatically.
> >
> > -@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off]
> > +@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off][,queues=n]
> >
> > Establish a vhost-user netdev, backed by a chardev @var{id}. The chardev should
> > be a unix domain socket backed one. The vhost-user uses a specifically defined
> > protocol to pass vhost ioctl replacement messages to an application on the other
> > end of the socket. On non-MSIX guests, the feature can be forced with
> > -@var{vhostforce}.
> > +@var{vhostforce}. Use 'queues=@var{n}' to specify the number of queues to
> > +be created for multiqueue vhost-user.
> >
> > Example:
> > @example
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [Qemu-devel] [PATCH v11 6/7] vhost-user: add multiple queue support
2015-09-24 5:57 ` Yuanhan Liu
@ 2015-09-24 6:15 ` Jason Wang
0 siblings, 0 replies; 22+ messages in thread
From: Jason Wang @ 2015-09-24 6:15 UTC (permalink / raw)
To: Yuanhan Liu, mst
Cc: Markus Armbruster, Changchun Ouyang, qemu-devel, Changchun.ouyang
On 09/24/2015 01:57 PM, Yuanhan Liu wrote:
> On Thu, Sep 24, 2015 at 01:34:31PM +0800, Jason Wang wrote:
>>
>> Some nitpicks and comments. If you plan to send another version, please
>> consider fixing them.
> I will, but I'd like to hold off for a while until I get more comments from
> Michael; I don't want to repost a whole new version too often for minor
> changes.
>
> BTW, Michael, is this patchset ready for merge? If not, please
> kindly tell me what is wrong so that I can fix it. TBH, I'd really
> like to see it merged soon, as I'm going on holiday for two weeks
> starting this Saturday. I could still reply to emails, and even
> make patches during that time, but I will only have limited time for it.
>
>> Reviewed-by: Jason Wang <jasowang@redhat.com>
> Thank you!
>
>> Btw, it's better to extend the current vhost-user unit test to support
>> multiqueue. Please consider this on top of this series.
> Yeah, I will. But, may I do that later? (And if possible, after the
> merge?)
>
Yes, of course :)
>>> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
>>> index 43db9b4..cfc9d41 100644
>>> --- a/docs/specs/vhost-user.txt
>>> +++ b/docs/specs/vhost-user.txt
>>> @@ -135,6 +135,21 @@ As older slaves don't support negotiating protocol features,
>>> a feature bit was dedicated for this purpose:
>>> #define VHOST_USER_F_PROTOCOL_FEATURES 30
>>>
[...]
>>>
>>> static void vhost_user_cleanup(NetClientState *nc)
>>> {
>>> VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
>>>
>>> - vhost_user_stop(s);
>>> + if (s->vhost_net) {
>>> + vhost_net_cleanup(s->vhost_net);
>>> + s->vhost_net = NULL;
>>> + }
>>> +
>>> qemu_purge_queued_packets(nc);
>>> }
>>>
>>> @@ -95,59 +136,61 @@ static NetClientInfo net_vhost_user_info = {
>>> .has_ufo = vhost_user_has_ufo,
>>> };
>>>
>>> -static void net_vhost_link_down(VhostUserState *s, bool link_down)
>>> -{
>>> - s->nc.link_down = link_down;
>>> -
>>> - if (s->nc.peer) {
>>> - s->nc.peer->link_down = link_down;
>>> - }
>>> -
>>> - if (s->nc.info->link_status_changed) {
>>> - s->nc.info->link_status_changed(&s->nc);
>>> - }
>>> -
>>> - if (s->nc.peer && s->nc.peer->info->link_status_changed) {
>>> - s->nc.peer->info->link_status_changed(s->nc.peer);
>>> - }
>>> -}
>>> -
>>> static void net_vhost_user_event(void *opaque, int event)
>>> {
>>> - VhostUserState *s = opaque;
>>> + const char *name = opaque;
>>> + NetClientState *ncs[MAX_QUEUE_NUM];
>>> + VhostUserState *s;
>>> + Error *err = NULL;
>>> + int queues;
>>>
>>> + queues = qemu_find_net_clients_except(name, ncs,
>>> + NET_CLIENT_OPTIONS_KIND_NIC,
>>> + MAX_QUEUE_NUM);
>>> + s = DO_UPCAST(VhostUserState, nc, ncs[0]);
>>> switch (event) {
>>> case CHR_EVENT_OPENED:
>>> - vhost_user_start(s);
>>> - net_vhost_link_down(s, false);
>>> + if (vhost_user_start(queues, ncs) < 0) {
>>> + exit(1);
>> How about reporting a specific error instead of just aborting here?
> There is an error report at vhost_user_start():
>
> error_report("you are asking more queues than "
> "supported: %d\n", max_queues);
>
> Or, do you mean something else?
>
>
> --yliu
Not sure, but we have error_abort and error_fatal. Markus may know more
about this.
[...]
^ permalink raw reply [flat|nested] 22+ messages in thread
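(For reference, a rough sketch of what routing the start failure through QEMU's Error API, the error_abort/error_fatal mentioned above, could look like. The wrapper and caller functions below are hypothetical illustrations, not part of the series, which keeps the plain exit(1).)

/*
 * Sketch only: propagate the vhost_user_start() failure through the
 * Error API instead of calling exit(1) directly. The wrapper below is
 * hypothetical; error_setg()/error_fatal are the existing QEMU APIs.
 */
#include "qapi/error.h"

static void vhost_user_start_checked(int queues, NetClientState *ncs[],
                                     Error **errp)
{
    if (vhost_user_start(queues, ncs) < 0) {
        error_setg(errp, "vhost-user: failed to start %d queue(s)", queues);
    }
}

static void example_caller(int queues, NetClientState *ncs[])
{
    /* &error_fatal prints the message and exits;
     * &error_abort would abort() instead. */
    vhost_user_start_checked(queues, ncs, &error_fatal);
}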
* [Qemu-devel] [PATCH v11 7/7] vhost-user: add a new message to disable/enable a specific virt queue.
2015-09-23 4:19 [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Yuanhan Liu
` (5 preceding siblings ...)
2015-09-23 4:20 ` [Qemu-devel] [PATCH v11 6/7] vhost-user: add multiple queue support Yuanhan Liu
@ 2015-09-23 4:20 ` Yuanhan Liu
2015-09-24 5:43 ` Jason Wang
2015-09-24 10:03 ` [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Marcel Apfelbaum
7 siblings, 1 reply; 22+ messages in thread
From: Yuanhan Liu @ 2015-09-23 4:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Yuanhan Liu, mst, jasowang, Changchun.ouyang, Changchun Ouyang
From: Changchun Ouyang <changchun.ouyang@intel.com>
Add a new message, VHOST_USER_SET_VRING_ENABLE, to enable or disable
a specific virt queue, which is similar to attaching/detaching a queue
for a tap device.
The virtio driver in the guest doesn't have to use the maximum number of
virt queue pairs; it can enable any number of virt queues, ranging from
1 to the maximum number of virt queue pairs.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
v11: fix typo
---
docs/specs/vhost-user.txt | 12 +++++++++++-
hw/net/vhost_net.c | 18 ++++++++++++++++++
hw/net/virtio-net.c | 8 ++++++++
hw/virtio/vhost-user.c | 19 +++++++++++++++++++
include/hw/virtio/vhost-backend.h | 2 ++
include/net/vhost_net.h | 2 ++
6 files changed, 60 insertions(+), 1 deletion(-)
diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index cfc9d41..4eadad1 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -148,7 +148,9 @@ VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
requested queues is bigger than that.
As all queues share one connection, the master uses a unique index for each
-queue in the sent message to identify a specified queue.
+queue in the sent message to identify a specified queue. One queue pair
+is enabled initially. More queues are enabled dynamically, by sending
+message VHOST_USER_SET_VRING_ENABLE.
Message types
-------------
@@ -327,3 +329,11 @@ Message types
Query how many queues the backend supports. This request should be
sent only when VHOST_USER_PROTOCOL_F_MQ is set in queried protocol
features by VHOST_USER_GET_PROTOCOL_FEATURES.
+
+ * VHOST_USER_SET_VRING_ENABLE
+
+ Id: 18
+ Equivalent ioctl: N/A
+ Master payload: vring state description
+
+ Signal slave to enable or disable corresponding vring.
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index ad82b5c..2bce891 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -422,6 +422,19 @@ VHostNetState *get_vhost_net(NetClientState *nc)
return vhost_net;
}
+
+int vhost_set_vring_enable(NetClientState *nc, int enable)
+{
+ VHostNetState *net = get_vhost_net(nc);
+ const VhostOps *vhost_ops = net->dev.vhost_ops;
+
+ if (vhost_ops->vhost_backend_set_vring_enable) {
+ return vhost_ops->vhost_backend_set_vring_enable(&net->dev, enable);
+ }
+
+ return 0;
+}
+
#else
uint64_t vhost_net_get_max_queues(VHostNetState *net)
{
@@ -472,4 +485,9 @@ VHostNetState *get_vhost_net(NetClientState *nc)
{
return 0;
}
+
+int vhost_set_vring_enable(NetClientState *nc, int enable)
+{
+ return 0;
+}
#endif
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index f72eebf..916222f 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -406,6 +406,10 @@ static int peer_attach(VirtIONet *n, int index)
return 0;
}
+ if (nc->peer->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
+ vhost_set_vring_enable(nc->peer, 1);
+ }
+
if (nc->peer->info->type != NET_CLIENT_OPTIONS_KIND_TAP) {
return 0;
}
@@ -421,6 +425,10 @@ static int peer_detach(VirtIONet *n, int index)
return 0;
}
+ if (nc->peer->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
+ vhost_set_vring_enable(nc->peer, 0);
+ }
+
if (nc->peer->info->type != NET_CLIENT_OPTIONS_KIND_TAP) {
return 0;
}
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index e42fde6..b11c0d2 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -48,6 +48,7 @@ typedef enum VhostUserRequest {
VHOST_USER_GET_PROTOCOL_FEATURES = 15,
VHOST_USER_SET_PROTOCOL_FEATURES = 16,
VHOST_USER_GET_QUEUE_NUM = 17,
+ VHOST_USER_SET_VRING_ENABLE = 18,
VHOST_USER_MAX
} VhostUserRequest;
@@ -290,6 +291,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
case VHOST_USER_SET_VRING_NUM:
case VHOST_USER_SET_VRING_BASE:
+ case VHOST_USER_SET_VRING_ENABLE:
memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
msg.size = sizeof(m.state);
break;
@@ -406,6 +408,22 @@ static int vhost_user_init(struct vhost_dev *dev, void *opaque)
return 0;
}
+static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
+{
+ struct vhost_vring_state state = {
+ .index = dev->vq_index,
+ .num = enable,
+ };
+
+ assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
+
+ if (!(dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ))) {
+ return -1;
+ }
+
+ return vhost_user_call(dev, VHOST_USER_SET_VRING_ENABLE, &state);
+}
+
static int vhost_user_cleanup(struct vhost_dev *dev)
{
assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
@@ -428,4 +446,5 @@ const VhostOps user_ops = {
.vhost_backend_init = vhost_user_init,
.vhost_backend_cleanup = vhost_user_cleanup,
.vhost_backend_get_vq_index = vhost_user_get_vq_index,
+ .vhost_backend_set_vring_enable = vhost_user_set_vring_enable,
};
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index e1dfc6d..3a0f6e2 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -25,6 +25,7 @@ typedef int (*vhost_call)(struct vhost_dev *dev, unsigned long int request,
typedef int (*vhost_backend_init)(struct vhost_dev *dev, void *opaque);
typedef int (*vhost_backend_cleanup)(struct vhost_dev *dev);
typedef int (*vhost_backend_get_vq_index)(struct vhost_dev *dev, int idx);
+typedef int (*vhost_backend_set_vring_enable)(struct vhost_dev *dev, int enable);
typedef struct VhostOps {
VhostBackendType backend_type;
@@ -32,6 +33,7 @@ typedef struct VhostOps {
vhost_backend_init vhost_backend_init;
vhost_backend_cleanup vhost_backend_cleanup;
vhost_backend_get_vq_index vhost_backend_get_vq_index;
+ vhost_backend_set_vring_enable vhost_backend_set_vring_enable;
} VhostOps;
extern const VhostOps user_ops;
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index 8db723e..0188c4d 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -28,4 +28,6 @@ bool vhost_net_virtqueue_pending(VHostNetState *net, int n);
void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
int idx, bool mask);
VHostNetState *get_vhost_net(NetClientState *nc);
+
+int vhost_set_vring_enable(NetClientState * nc, int enable);
#endif
--
1.9.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
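(To illustrate the new message from the slave's point of view, here is a rough sketch of how a backend might act on VHOST_USER_SET_VRING_ENABLE. The structures below are invented for the example; only the payload semantics, a vring state whose index selects the ring and whose num acts as the enable flag, come from the spec text above.)

/*
 * Illustrative slave-side handling of VHOST_USER_SET_VRING_ENABLE.
 * example_dev/example_vring_state are made-up types, not QEMU or spec
 * definitions.
 */
#include <stdbool.h>

struct example_vring_state {
    unsigned int index;   /* which vring */
    unsigned int num;     /* 1 = enable, 0 = disable */
};

struct example_dev {
    unsigned int nr_vrings;
    bool enabled[16];
};

static int example_handle_vring_enable(struct example_dev *dev,
                                       const struct example_vring_state *state)
{
    if (state->index >= dev->nr_vrings) {
        return -1;                            /* reject out-of-range indexes */
    }
    dev->enabled[state->index] = state->num != 0;
    return 0;                                 /* start/stop serving that ring */
}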
* Re: [Qemu-devel] [PATCH v11 7/7] vhost-user: add a new message to disable/enable a specific virt queue.
2015-09-23 4:20 ` [Qemu-devel] [PATCH v11 7/7] vhost-user: add a new message to disable/enable a specific virt queue Yuanhan Liu
@ 2015-09-24 5:43 ` Jason Wang
0 siblings, 0 replies; 22+ messages in thread
From: Jason Wang @ 2015-09-24 5:43 UTC (permalink / raw)
To: Yuanhan Liu, qemu-devel; +Cc: Changchun Ouyang, Changchun.ouyang, mst
On 09/23/2015 12:20 PM, Yuanhan Liu wrote:
> From: Changchun Ouyang <changchun.ouyang@intel.com>
>
> Add a new message, VHOST_USER_SET_VRING_ENABLE, to enable or disable
> a specific virt queue, which is similar to attaching/detaching a queue
> for a tap device.
>
> The virtio driver in the guest doesn't have to use the maximum number of
> virt queue pairs; it can enable any number of virt queues, ranging from
> 1 to the maximum number of virt queue pairs.
>
> Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
>
> ---
> v11: fix typo
> ---
> docs/specs/vhost-user.txt | 12 +++++++++++-
> hw/net/vhost_net.c | 18 ++++++++++++++++++
> hw/net/virtio-net.c | 8 ++++++++
> hw/virtio/vhost-user.c | 19 +++++++++++++++++++
> include/hw/virtio/vhost-backend.h | 2 ++
> include/net/vhost_net.h | 2 ++
> 6 files changed, 60 insertions(+), 1 deletion(-)
>
> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> index cfc9d41..4eadad1 100644
> --- a/docs/specs/vhost-user.txt
> +++ b/docs/specs/vhost-user.txt
> @@ -148,7 +148,9 @@ VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
> requested queues is bigger than that.
>
> As all queues share one connection, the master uses a unique index for each
> -queue in the sent message to identify a specified queue.
> +queue in the sent message to identify a specified queue. One queue pair
> +is enabled initially. More queues are enabled dynamically, by sending
> +message VHOST_USER_SET_VRING_ENABLE.
>
> Message types
> -------------
> @@ -327,3 +329,11 @@ Message types
> Query how many queues the backend supports. This request should be
> sent only when VHOST_USER_PROTOCOL_F_MQ is set in queried protocol
> features by VHOST_USER_GET_PROTOCOL_FEATURES.
> +
> + * VHOST_USER_SET_VRING_ENABLE
> +
> + Id: 18
> + Equivalent ioctl: N/A
> + Master payload: vring state description
> +
> + Signal slave to enable or disable corresponding vring.
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index ad82b5c..2bce891 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -422,6 +422,19 @@ VHostNetState *get_vhost_net(NetClientState *nc)
>
> return vhost_net;
> }
> +
> +int vhost_set_vring_enable(NetClientState *nc, int enable)
> +{
> + VHostNetState *net = get_vhost_net(nc);
> + const VhostOps *vhost_ops = net->dev.vhost_ops;
> +
> + if (vhost_ops->vhost_backend_set_vring_enable) {
> + return vhost_ops->vhost_backend_set_vring_enable(&net->dev, enable);
> + }
> +
> + return 0;
> +}
> +
> #else
> uint64_t vhost_net_get_max_queues(VHostNetState *net)
> {
> @@ -472,4 +485,9 @@ VHostNetState *get_vhost_net(NetClientState *nc)
> {
> return 0;
> }
> +
> +int vhost_set_vring_enable(NetClientState *nc, int enable)
> +{
> + return 0;
> +}
> #endif
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index f72eebf..916222f 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -406,6 +406,10 @@ static int peer_attach(VirtIONet *n, int index)
> return 0;
> }
>
> + if (nc->peer->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
> + vhost_set_vring_enable(nc->peer, 1);
> + }
> +
> if (nc->peer->info->type != NET_CLIENT_OPTIONS_KIND_TAP) {
> return 0;
> }
> @@ -421,6 +425,10 @@ static int peer_detach(VirtIONet *n, int index)
> return 0;
> }
>
> + if (nc->peer->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
> + vhost_set_vring_enable(nc->peer, 0);
> + }
> +
> if (nc->peer->info->type != NET_CLIENT_OPTIONS_KIND_TAP) {
> return 0;
> }
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index e42fde6..b11c0d2 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -48,6 +48,7 @@ typedef enum VhostUserRequest {
> VHOST_USER_GET_PROTOCOL_FEATURES = 15,
> VHOST_USER_SET_PROTOCOL_FEATURES = 16,
> VHOST_USER_GET_QUEUE_NUM = 17,
> + VHOST_USER_SET_VRING_ENABLE = 18,
> VHOST_USER_MAX
> } VhostUserRequest;
>
> @@ -290,6 +291,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
>
> case VHOST_USER_SET_VRING_NUM:
> case VHOST_USER_SET_VRING_BASE:
> + case VHOST_USER_SET_VRING_ENABLE:
> memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
> msg.size = sizeof(m.state);
> break;
> @@ -406,6 +408,22 @@ static int vhost_user_init(struct vhost_dev *dev, void *opaque)
> return 0;
> }
>
> +static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
> +{
> + struct vhost_vring_state state = {
> + .index = dev->vq_index,
> + .num = enable,
> + };
> +
> + assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
> +
> + if (!(dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ))) {
> + return -1;
> + }
> +
> + return vhost_user_call(dev, VHOST_USER_SET_VRING_ENABLE, &state);
> +}
> +
> static int vhost_user_cleanup(struct vhost_dev *dev)
> {
> assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
> @@ -428,4 +446,5 @@ const VhostOps user_ops = {
> .vhost_backend_init = vhost_user_init,
> .vhost_backend_cleanup = vhost_user_cleanup,
> .vhost_backend_get_vq_index = vhost_user_get_vq_index,
> + .vhost_backend_set_vring_enable = vhost_user_set_vring_enable,
> };
> diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> index e1dfc6d..3a0f6e2 100644
> --- a/include/hw/virtio/vhost-backend.h
> +++ b/include/hw/virtio/vhost-backend.h
> @@ -25,6 +25,7 @@ typedef int (*vhost_call)(struct vhost_dev *dev, unsigned long int request,
> typedef int (*vhost_backend_init)(struct vhost_dev *dev, void *opaque);
> typedef int (*vhost_backend_cleanup)(struct vhost_dev *dev);
> typedef int (*vhost_backend_get_vq_index)(struct vhost_dev *dev, int idx);
> +typedef int (*vhost_backend_set_vring_enable)(struct vhost_dev *dev, int enable);
>
> typedef struct VhostOps {
> VhostBackendType backend_type;
> @@ -32,6 +33,7 @@ typedef struct VhostOps {
> vhost_backend_init vhost_backend_init;
> vhost_backend_cleanup vhost_backend_cleanup;
> vhost_backend_get_vq_index vhost_backend_get_vq_index;
> + vhost_backend_set_vring_enable vhost_backend_set_vring_enable;
> } VhostOps;
>
> extern const VhostOps user_ops;
> diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> index 8db723e..0188c4d 100644
> --- a/include/net/vhost_net.h
> +++ b/include/net/vhost_net.h
> @@ -28,4 +28,6 @@ bool vhost_net_virtqueue_pending(VHostNetState *net, int n);
> void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
> int idx, bool mask);
> VHostNetState *get_vhost_net(NetClientState *nc);
> +
> +int vhost_set_vring_enable(NetClientState * nc, int enable);
> #endif
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support
2015-09-23 4:19 [Qemu-devel] [PATCH v11 0/7] vhost-user multiple queue support Yuanhan Liu
` (6 preceding siblings ...)
2015-09-23 4:20 ` [Qemu-devel] [PATCH v11 7/7] vhost-user: add a new message to disable/enable a specific virt queue Yuanhan Liu
@ 2015-09-24 10:03 ` Marcel Apfelbaum
7 siblings, 0 replies; 22+ messages in thread
From: Marcel Apfelbaum @ 2015-09-24 10:03 UTC (permalink / raw)
To: Yuanhan Liu, qemu-devel; +Cc: jasowang, Changchun.ouyang, mst
On 09/23/2015 07:19 AM, Yuanhan Liu wrote:
> Hi,
>
> Here is the updated patch set for enabling vhost-user multiple queue. And I did
> proper and formal testing this time.
>
> This patch set introduces 2 more vhost user messages: VHOST_USER_GET_QUEUE_NUM,
> for querying how many queues the backend supports, and VHOST_USER_SET_VRING_ENABLE,
> for enabling/disabling a specific virt queue.
>
> Both of the new messages are treated as vhost protocol extensions,
> which is why Michael's patch "vhost-user: add protocol feature
> negotiation" is also included here.
>
> Patches 1-5 are all preparation work for actually enabling multiple queues.
>
> Patch 6 is the major patch for enabling multiple queues; it also tries
> to address two major concerns from Michael: no feedback from the backend
> if it can't support the requested number of queues, and all messages
> being sent N times. It also fixes a hidden bug.
>
> Patch 7 introduces the VHOST_USER_SET_VRING_ENABLE message, to enable
> or disable a specific vring.
>
> v11: - typo fixes pointed out by Eric
>
> - define a dummy vhost_net_get_max_queues() when !CONFIG_VHOST_NET.
>
> - As suggested by Jason Wang, invoke qmp_set_link() directly to remove
> duplicate code.
>
>
> v10: - typo fixes pointed out by Eric
>
> - [PATCH 6]: don't treat VHOST_USER_SET/GET_[PROTOCOL]_FEATURES as
> one-time requests, as the two feature bits need to be stored
> per device.
>
> v9: - As suggested by Jason Wang, patch 5 introduces a new vhost
> backend method: vhost_backend_get_vq_index().
>
> - Use qemu_find_net_clients_except() in net/vhost-user.c for
> gathering all related ncs so that we can register the chr dev
> event handler once, which was also suggested by Jason Wang.
>
>
> Thanks.
>
> --yliu
>
> ---
> Changchun Ouyang (2):
> vhost-user: add multiple queue support
> vhost-user: add a new message to disable/enable a specific virt queue.
>
> Michael S. Tsirkin (1):
> vhost-user: add protocol feature negotiation
>
> Yuanhan Liu (4):
> vhost-user: use VHOST_USER_XXX macro for switch statement
> vhost: rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE
> vhost-user: add VHOST_USER_GET_QUEUE_NUM message
> vhost: introduce vhost_backend_get_vq_index method
>
> docs/specs/vhost-user.txt | 77 ++++++++++++++++++++-
> hw/net/vhost_net.c | 44 ++++++++++--
> hw/net/virtio-net.c | 8 +++
> hw/virtio/vhost-backend.c | 10 ++-
> hw/virtio/vhost-user.c | 139 +++++++++++++++++++++++++++++++------
> hw/virtio/vhost.c | 20 +++---
> include/hw/virtio/vhost-backend.h | 4 ++
> include/hw/virtio/vhost.h | 2 +
> include/net/vhost_net.h | 3 +
> linux-headers/linux/vhost.h | 2 +-
> net/vhost-user.c | 141 +++++++++++++++++++++++++-------------
> qapi-schema.json | 6 +-
> qemu-options.hx | 5 +-
> tests/vhost-user-test.c | 2 +-
> 14 files changed, 371 insertions(+), 92 deletions(-)
>
Thanks a lot for your work!
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
^ permalink raw reply [flat|nested] 22+ messages in thread
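(A toy illustration of the queue-count feedback mentioned in the cover letter: the master compares the requested queue count against what VHOST_USER_GET_QUEUE_NUM reports. query_queue_num() below merely stands in for the real message exchange and is not a QEMU function.)

/*
 * Toy sketch of the queue-count validation added in patch 6.
 * query_queue_num() is a stand-in for sending VHOST_USER_GET_QUEUE_NUM
 * and reading the reply.
 */
#include <stdio.h>

static int query_queue_num(void)
{
    return 8;   /* pretend the backend advertised 8 queue pairs */
}

int main(void)
{
    int requested = 16;
    int max = query_queue_num();

    if (requested > max) {
        fprintf(stderr, "you are asking more queues than supported: %d\n", max);
        return 1;
    }
    printf("configuring %d queue pairs\n", requested);
    return 0;
}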