qemu-devel.nongnu.org archive mirror
* [PULL 0/8] Net patches
@ 2022-09-27  7:30 Jason Wang
  2022-09-27  7:30 ` [PULL 1/8] e1000e: set RX desc status with DD flag in a separate operation Jason Wang
                   ` (8 more replies)
  0 siblings, 9 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha; +Cc: Jason Wang

The following changes since commit 99d6b11b5b44d7dd64f4cb1973184e40a4a174f8:

  Merge tag 'pull-target-arm-20220922' of https://git.linaro.org/people/pmaydell/qemu-arm into staging (2022-09-26 13:38:26 -0400)

are available in the git repository at:

  https://github.com/jasowang/qemu.git tags/net-pull-request

for you to fetch changes up to bf769f742c3624952f125b303878a77ea870c156:

  virtio: del net client if net_init_tap_one failed (2022-09-27 15:14:37 +0800)

----------------------------------------------------------------

----------------------------------------------------------------
Ding Hui (1):
      e1000e: set RX desc status with DD flag in a separate operation

Eugenio Pérez (6):
      vdpa: Make VhostVDPAState cvq_cmd_in_buffer control ack type
      vdpa: extract vhost_vdpa_net_load_mac from vhost_vdpa_net_load
      vdpa: Add vhost_vdpa_net_load_mq
      vdpa: validate MQ CVQ commands
      virtio-net: Update virtio-net curr_queue_pairs in vdpa backends
      vdpa: Allow MQ feature in SVQ

lu zhipeng (1):
      virtio: del net client if net_init_tap_one failed

 hw/net/e1000e_core.c |  53 ++++++++++++++++++++++-
 hw/net/virtio-net.c  |  17 +++-----
 net/tap.c            |  18 +++++---
 net/vhost-vdpa.c     | 119 +++++++++++++++++++++++++++++++++++++--------------
 4 files changed, 157 insertions(+), 50 deletions(-)

-- 
2.7.4




* [PULL 1/8] e1000e: set RX desc status with DD flag in a separate operation
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
@ 2022-09-27  7:30 ` Jason Wang
  2022-09-27  7:30 ` [PULL 2/8] vdpa: Make VhostVDPAState cvq_cmd_in_buffer control ack type Jason Wang
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha; +Cc: Ding Hui, Jason Wang

From: Ding Hui <dinghui@sangfor.com.cn>

Like commit 034d00d48581 ("e1000: set RX descriptor status in
a separate operation"), the same issue exists in e1000e, where it
can cause lost packets or stop packet delivery to a VM running DPDK.

Apply a similar fix to e1000e.
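
Guests such as DPDK's polling-mode driver poll the DD ("descriptor
done") bit to decide when the rest of a RX descriptor is valid, so the
descriptor must not become visible with DD set before its other fields
are. A minimal sketch of the two-step write-back pattern the fix
applies (simplified from the legacy-descriptor case in the patch
below):

    uint8_t status = d->status;          /* full status, may have DD set  */

    d->status &= ~E1000_RXD_STAT_DD;     /* 1) write descriptor, DD clear */
    pci_dma_write(dev, addr, desc, len);

    if (status & E1000_RXD_STAT_DD) {    /* 2) then set DD on its own     */
        d->status = status;
        pci_dma_write(dev, addr + offsetof(struct e1000_rx_desc, status),
                      &status, sizeof(status));
    }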

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/402
Signed-off-by: Ding Hui <dinghui@sangfor.com.cn>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/e1000e_core.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 52 insertions(+), 1 deletion(-)

diff --git a/hw/net/e1000e_core.c b/hw/net/e1000e_core.c
index 82aa61f..fc9cdb4 100644
--- a/hw/net/e1000e_core.c
+++ b/hw/net/e1000e_core.c
@@ -1364,6 +1364,57 @@ struct NetRxPkt *pkt, const E1000E_RSSInfo *rss_info,
     }
 }
 
+static inline void
+e1000e_pci_dma_write_rx_desc(E1000ECore *core, dma_addr_t addr,
+                             uint8_t *desc, dma_addr_t len)
+{
+    PCIDevice *dev = core->owner;
+
+    if (e1000e_rx_use_legacy_descriptor(core)) {
+        struct e1000_rx_desc *d = (struct e1000_rx_desc *) desc;
+        size_t offset = offsetof(struct e1000_rx_desc, status);
+        uint8_t status = d->status;
+
+        d->status &= ~E1000_RXD_STAT_DD;
+        pci_dma_write(dev, addr, desc, len);
+
+        if (status & E1000_RXD_STAT_DD) {
+            d->status = status;
+            pci_dma_write(dev, addr + offset, &status, sizeof(status));
+        }
+    } else {
+        if (core->mac[RCTL] & E1000_RCTL_DTYP_PS) {
+            union e1000_rx_desc_packet_split *d =
+                (union e1000_rx_desc_packet_split *) desc;
+            size_t offset = offsetof(union e1000_rx_desc_packet_split,
+                wb.middle.status_error);
+            uint32_t status = d->wb.middle.status_error;
+
+            d->wb.middle.status_error &= ~E1000_RXD_STAT_DD;
+            pci_dma_write(dev, addr, desc, len);
+
+            if (status & E1000_RXD_STAT_DD) {
+                d->wb.middle.status_error = status;
+                pci_dma_write(dev, addr + offset, &status, sizeof(status));
+            }
+        } else {
+            union e1000_rx_desc_extended *d =
+                (union e1000_rx_desc_extended *) desc;
+            size_t offset = offsetof(union e1000_rx_desc_extended,
+                wb.upper.status_error);
+            uint32_t status = d->wb.upper.status_error;
+
+            d->wb.upper.status_error &= ~E1000_RXD_STAT_DD;
+            pci_dma_write(dev, addr, desc, len);
+
+            if (status & E1000_RXD_STAT_DD) {
+                d->wb.upper.status_error = status;
+                pci_dma_write(dev, addr + offset, &status, sizeof(status));
+            }
+        }
+    }
+}
+
 typedef struct e1000e_ba_state_st {
     uint16_t written[MAX_PS_BUFFERS];
     uint8_t cur_idx;
@@ -1600,7 +1651,7 @@ e1000e_write_packet_to_guest(E1000ECore *core, struct NetRxPkt *pkt,
 
         e1000e_write_rx_descr(core, desc, is_last ? core->rx_pkt : NULL,
                            rss_info, do_ps ? ps_hdr_len : 0, &bastate.written);
-        pci_dma_write(d, base, &desc, core->rx_desc_len);
+        e1000e_pci_dma_write_rx_desc(core, base, desc, core->rx_desc_len);
 
         e1000e_ring_advance(core, rxi,
                             core->rx_desc_len / E1000_MIN_RX_DESC_LEN);
-- 
2.7.4




* [PULL 2/8] vdpa: Make VhostVDPAState cvq_cmd_in_buffer control ack type
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
  2022-09-27  7:30 ` [PULL 1/8] e1000e: set RX desc status with DD flag in a separate operation Jason Wang
@ 2022-09-27  7:30 ` Jason Wang
  2022-09-27  7:30 ` [PULL 3/8] vdpa: extract vhost_vdpa_net_load_mac from vhost_vdpa_net_load Jason Wang
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha; +Cc: Eugenio Pérez, Jason Wang

From: Eugenio Pérez <eperezma@redhat.com>

This allows us to simplify the code. Rename the buffer to "status"
while we're at it.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 6ce68fc..535315c 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -35,7 +35,9 @@ typedef struct VhostVDPAState {
     VHostNetState *vhost_net;
 
     /* Control commands shadow buffers */
-    void *cvq_cmd_out_buffer, *cvq_cmd_in_buffer;
+    void *cvq_cmd_out_buffer;
+    virtio_net_ctrl_ack *status;
+
     bool started;
 } VhostVDPAState;
 
@@ -158,7 +160,7 @@ static void vhost_vdpa_cleanup(NetClientState *nc)
     struct vhost_dev *dev = &s->vhost_net->dev;
 
     qemu_vfree(s->cvq_cmd_out_buffer);
-    qemu_vfree(s->cvq_cmd_in_buffer);
+    qemu_vfree(s->status);
     if (dev->vq_index + dev->nvqs == dev->vq_index_end) {
         g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
     }
@@ -310,7 +312,7 @@ static int vhost_vdpa_net_cvq_start(NetClientState *nc)
         return r;
     }
 
-    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer,
+    r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->status,
                                vhost_vdpa_net_cvq_cmd_page_len(), true);
     if (unlikely(r < 0)) {
         vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
@@ -327,7 +329,7 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
 
     if (s->vhost_vdpa.shadow_vqs_enabled) {
         vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
-        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_in_buffer);
+        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->status);
     }
 }
 
@@ -340,7 +342,7 @@ static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
         .iov_len = out_len,
     };
     const struct iovec in = {
-        .iov_base = s->cvq_cmd_in_buffer,
+        .iov_base = s->status,
         .iov_len = sizeof(virtio_net_ctrl_ack),
     };
     VhostShadowVirtqueue *svq = g_ptr_array_index(s->vhost_vdpa.shadow_vqs, 0);
@@ -396,7 +398,7 @@ static int vhost_vdpa_net_load(NetClientState *nc)
             return dev_written;
         }
 
-        return *((virtio_net_ctrl_ack *)s->cvq_cmd_in_buffer) != VIRTIO_NET_OK;
+        return *s->status != VIRTIO_NET_OK;
     }
 
     return 0;
@@ -491,8 +493,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
         goto out;
     }
 
-    memcpy(&status, s->cvq_cmd_in_buffer, sizeof(status));
-    if (status != VIRTIO_NET_OK) {
+    if (*s->status != VIRTIO_NET_OK) {
         return VIRTIO_NET_ERR;
     }
 
@@ -549,9 +550,9 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
                                             vhost_vdpa_net_cvq_cmd_page_len());
         memset(s->cvq_cmd_out_buffer, 0, vhost_vdpa_net_cvq_cmd_page_len());
-        s->cvq_cmd_in_buffer = qemu_memalign(qemu_real_host_page_size(),
-                                            vhost_vdpa_net_cvq_cmd_page_len());
-        memset(s->cvq_cmd_in_buffer, 0, vhost_vdpa_net_cvq_cmd_page_len());
+        s->status = qemu_memalign(qemu_real_host_page_size(),
+                                  vhost_vdpa_net_cvq_cmd_page_len());
+        memset(s->status, 0, vhost_vdpa_net_cvq_cmd_page_len());
 
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
-- 
2.7.4




* [PULL 3/8] vdpa: extract vhost_vdpa_net_load_mac from vhost_vdpa_net_load
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
  2022-09-27  7:30 ` [PULL 1/8] e1000e: set RX desc status with DD flag in a separate operation Jason Wang
  2022-09-27  7:30 ` [PULL 2/8] vdpa: Make VhostVDPAState cvq_cmd_in_buffer control ack type Jason Wang
@ 2022-09-27  7:30 ` Jason Wang
  2022-09-27  7:30 ` [PULL 4/8] vdpa: Add vhost_vdpa_net_load_mq Jason Wang
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha; +Cc: Eugenio Pérez, Jason Wang

From: Eugenio Pérez <eperezma@redhat.com>

Since there may be many commands we need to issue to load the NIC
state, let's split them into individual functions.
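
The new vhost_vdpa_net_load_cmd() helper (added below) copies a
virtio_net_ctrl_hdr plus payload into the CVQ out buffer, submits the
command, and returns a negative value on error. As a purely
hypothetical sketch of how a future per-command loader would build on
it (SOME_CTRL_CLASS, SOME_CTRL_CMD and the function name are
illustrative, not part of this series):

    static int vhost_vdpa_net_load_something(VhostVDPAState *s,
                                             const void *data, size_t size)
    {
        ssize_t dev_written = vhost_vdpa_net_load_cmd(s, SOME_CTRL_CLASS,
                                                      SOME_CTRL_CMD,
                                                      data, size);
        if (unlikely(dev_written < 0)) {
            return dev_written;
        }

        /* Command reached the device; fail unless it acked OK */
        return *s->status != VIRTIO_NET_OK;
    }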

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 62 ++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 40 insertions(+), 22 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 535315c..e799e74 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -365,12 +365,47 @@ static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
     return vhost_svq_poll(svq);
 }
 
+static ssize_t vhost_vdpa_net_load_cmd(VhostVDPAState *s, uint8_t class,
+                                       uint8_t cmd, const void *data,
+                                       size_t data_size)
+{
+    const struct virtio_net_ctrl_hdr ctrl = {
+        .class = class,
+        .cmd = cmd,
+    };
+
+    assert(data_size < vhost_vdpa_net_cvq_cmd_page_len() - sizeof(ctrl));
+
+    memcpy(s->cvq_cmd_out_buffer, &ctrl, sizeof(ctrl));
+    memcpy(s->cvq_cmd_out_buffer + sizeof(ctrl), data, data_size);
+
+    return vhost_vdpa_net_cvq_add(s, sizeof(ctrl) + data_size,
+                                  sizeof(virtio_net_ctrl_ack));
+}
+
+static int vhost_vdpa_net_load_mac(VhostVDPAState *s, const VirtIONet *n)
+{
+    uint64_t features = n->parent_obj.guest_features;
+    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
+        ssize_t dev_written = vhost_vdpa_net_load_cmd(s, VIRTIO_NET_CTRL_MAC,
+                                                  VIRTIO_NET_CTRL_MAC_ADDR_SET,
+                                                  n->mac, sizeof(n->mac));
+        if (unlikely(dev_written < 0)) {
+            return dev_written;
+        }
+
+        return *s->status != VIRTIO_NET_OK;
+    }
+
+    return 0;
+}
+
 static int vhost_vdpa_net_load(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
-    const struct vhost_vdpa *v = &s->vhost_vdpa;
+    struct vhost_vdpa *v = &s->vhost_vdpa;
     const VirtIONet *n;
-    uint64_t features;
+    int r;
 
     assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
 
@@ -379,26 +414,9 @@ static int vhost_vdpa_net_load(NetClientState *nc)
     }
 
     n = VIRTIO_NET(v->dev->vdev);
-    features = n->parent_obj.guest_features;
-    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
-        const struct virtio_net_ctrl_hdr ctrl = {
-            .class = VIRTIO_NET_CTRL_MAC,
-            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
-        };
-        char *cursor = s->cvq_cmd_out_buffer;
-        ssize_t dev_written;
-
-        memcpy(cursor, &ctrl, sizeof(ctrl));
-        cursor += sizeof(ctrl);
-        memcpy(cursor, n->mac, sizeof(n->mac));
-
-        dev_written = vhost_vdpa_net_cvq_add(s, sizeof(ctrl) + sizeof(n->mac),
-                                             sizeof(virtio_net_ctrl_ack));
-        if (unlikely(dev_written < 0)) {
-            return dev_written;
-        }
-
-        return *s->status != VIRTIO_NET_OK;
+    r = vhost_vdpa_net_load_mac(s, n);
+    if (unlikely(r < 0)) {
+        return r;
     }
 
     return 0;
-- 
2.7.4




* [PULL 4/8] vdpa: Add vhost_vdpa_net_load_mq
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
                   ` (2 preceding siblings ...)
  2022-09-27  7:30 ` [PULL 3/8] vdpa: extract vhost_vdpa_net_load_mac from vhost_vdpa_net_load Jason Wang
@ 2022-09-27  7:30 ` Jason Wang
  2022-09-27  7:30 ` [PULL 5/8] vdpa: validate MQ CVQ commands Jason Wang
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha; +Cc: Eugenio Pérez, Jason Wang

From: Eugenio Pérez <eperezma@redhat.com>

In the same way as with the MAC address, restore the expected number of
queue pairs at the device's start.
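
For reference, the control command this sends is the spec-defined
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET request: a two-byte header followed by
a little-endian queue-pair count, acknowledged with a single status
byte (layout per the virtio spec, matching the structures used below):

    struct virtio_net_ctrl_hdr {       /* common CVQ command header       */
        uint8_t class;                 /* VIRTIO_NET_CTRL_MQ              */
        uint8_t cmd;                   /* VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET */
    };

    struct virtio_net_ctrl_mq {        /* command-specific payload        */
        uint16_t virtqueue_pairs;      /* little-endian, cpu_to_le16()    */
    };

    /* The device replies with one virtio_net_ctrl_ack byte:
     * VIRTIO_NET_OK on success, VIRTIO_NET_ERR otherwise. */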

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index e799e74..3950e4f 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -400,6 +400,28 @@ static int vhost_vdpa_net_load_mac(VhostVDPAState *s, const VirtIONet *n)
     return 0;
 }
 
+static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
+                                  const VirtIONet *n)
+{
+    struct virtio_net_ctrl_mq mq;
+    uint64_t features = n->parent_obj.guest_features;
+    ssize_t dev_written;
+
+    if (!(features & BIT_ULL(VIRTIO_NET_F_MQ))) {
+        return 0;
+    }
+
+    mq.virtqueue_pairs = cpu_to_le16(n->curr_queue_pairs);
+    dev_written = vhost_vdpa_net_load_cmd(s, VIRTIO_NET_CTRL_MQ,
+                                          VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &mq,
+                                          sizeof(mq));
+    if (unlikely(dev_written < 0)) {
+        return dev_written;
+    }
+
+    return *s->status != VIRTIO_NET_OK;
+}
+
 static int vhost_vdpa_net_load(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
@@ -418,6 +440,10 @@ static int vhost_vdpa_net_load(NetClientState *nc)
     if (unlikely(r < 0)) {
         return r;
     }
+    r = vhost_vdpa_net_load_mq(s, n);
+    if (unlikely(r)) {
+        return r;
+    }
 
     return 0;
 }
-- 
2.7.4




* [PULL 5/8] vdpa: validate MQ CVQ commands
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
                   ` (3 preceding siblings ...)
  2022-09-27  7:30 ` [PULL 4/8] vdpa: Add vhost_vdpa_net_load_mq Jason Wang
@ 2022-09-27  7:30 ` Jason Wang
  2022-09-27  7:30 ` [PULL 6/8] virtio-net: Update virtio-net curr_queue_pairs in vdpa backends Jason Wang
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha; +Cc: Eugenio Pérez, Jason Wang

From: Eugenio Pérez <eperezma@redhat.com>

This ensures we can update the device model properly before forwarding
the command to the device.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 3950e4f..c6cbe2f 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -486,6 +486,15 @@ static bool vhost_vdpa_net_cvq_validate_cmd(const void *out_buf, size_t len)
                           __func__, ctrl.cmd);
         };
         break;
+    case VIRTIO_NET_CTRL_MQ:
+        switch (ctrl.cmd) {
+        case VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET:
+            return true;
+        default:
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid mq cmd %u\n",
+                          __func__, ctrl.cmd);
+        };
+        break;
     default:
         qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid control class %u\n",
                       __func__, ctrl.class);
-- 
2.7.4




* [PULL 6/8] virtio-net: Update virtio-net curr_queue_pairs in vdpa backends
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
                   ` (4 preceding siblings ...)
  2022-09-27  7:30 ` [PULL 5/8] vdpa: validate MQ CVQ commands Jason Wang
@ 2022-09-27  7:30 ` Jason Wang
  2022-09-27  7:30 ` [PULL 7/8] vdpa: Allow MQ feature in SVQ Jason Wang
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha
  Cc: Eugenio Pérez, Si-Wei Liu, Jason Wang

From: Eugenio Pérez <eperezma@redhat.com>

This case was returned as an error before. Instead, simply update the
corresponding field so QEMU can send it in the migration data.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Si-Wei Liu <si-wei.liu@oracle.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/virtio-net.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index dd0d056..63a8332 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1412,19 +1412,14 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         return VIRTIO_NET_ERR;
     }
 
-    /* Avoid changing the number of queue_pairs for vdpa device in
-     * userspace handler. A future fix is needed to handle the mq
-     * change in userspace handler with vhost-vdpa. Let's disable
-     * the mq handling from userspace for now and only allow get
-     * done through the kernel. Ripples may be seen when falling
-     * back to userspace, but without doing it qemu process would
-     * crash on a recursive entry to virtio_net_set_status().
-     */
+    n->curr_queue_pairs = queue_pairs;
     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
-        return VIRTIO_NET_ERR;
+        /*
+         * Avoid updating the backend for a vdpa device: We're only interested
+         * in updating the device model queues.
+         */
+        return VIRTIO_NET_OK;
     }
-
-    n->curr_queue_pairs = queue_pairs;
     /* stop the backend before changing the number of queue_pairs to avoid handling a
      * disabled queue */
     virtio_net_set_status(vdev, vdev->status);
-- 
2.7.4




* [PULL 7/8] vdpa: Allow MQ feature in SVQ
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
                   ` (5 preceding siblings ...)
  2022-09-27  7:30 ` [PULL 6/8] virtio-net: Update virtio-net curr_queue_pairs in vdpa backends Jason Wang
@ 2022-09-27  7:30 ` Jason Wang
  2022-09-27  7:30 ` [PULL 8/8] virtio: del net client if net_init_tap_one failed Jason Wang
  2022-09-27 18:40 ` [PULL 0/8] Net patches Stefan Hajnoczi
  8 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha; +Cc: Eugenio Pérez, Jason Wang

From: Eugenio Pérez <eperezma@redhat.com>

Finally, enable SVQ with the MQ feature.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/vhost-vdpa.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index c6cbe2f..4bc3fd0 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -94,6 +94,7 @@ static const uint64_t vdpa_svq_device_features =
     BIT_ULL(VIRTIO_NET_F_MRG_RXBUF) |
     BIT_ULL(VIRTIO_NET_F_STATUS) |
     BIT_ULL(VIRTIO_NET_F_CTRL_VQ) |
+    BIT_ULL(VIRTIO_NET_F_MQ) |
     BIT_ULL(VIRTIO_F_ANY_LAYOUT) |
     BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR) |
     BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
-- 
2.7.4




* [PULL 8/8] virtio: del net client if net_init_tap_one failed
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
                   ` (6 preceding siblings ...)
  2022-09-27  7:30 ` [PULL 7/8] vdpa: Allow MQ feature in SVQ Jason Wang
@ 2022-09-27  7:30 ` Jason Wang
  2022-09-27 18:40 ` [PULL 0/8] Net patches Stefan Hajnoczi
  8 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-09-27  7:30 UTC (permalink / raw)
  To: qemu-devel, peter.maydell, stefanha; +Cc: lu zhipeng, Jason Wang

From: lu zhipeng <luzhipeng@cestc.cn>

If the net tap initializes successfully but then fails during network
card hot-plugging, the net-tap client remains behind, so clean it up.

Signed-off-by: lu zhipeng <luzhipeng@cestc.cn>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 net/tap.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/net/tap.c b/net/tap.c
index b3ddfd4..e203d07 100644
--- a/net/tap.c
+++ b/net/tap.c
@@ -686,7 +686,7 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
     tap_set_sndbuf(s->fd, tap, &err);
     if (err) {
         error_propagate(errp, err);
-        return;
+        goto failed;
     }
 
     if (tap->has_fd || tap->has_fds) {
@@ -726,12 +726,12 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
                 } else {
                     warn_report_err(err);
                 }
-                return;
+                goto failed;
             }
             if (!g_unix_set_fd_nonblocking(vhostfd, true, NULL)) {
                 error_setg_errno(errp, errno, "%s: Can't use file descriptor %d",
                                  name, fd);
-                return;
+                goto failed;
             }
         } else {
             vhostfd = open("/dev/vhost-net", O_RDWR);
@@ -743,11 +743,11 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
                     warn_report("tap: open vhost char device failed: %s",
                                 strerror(errno));
                 }
-                return;
+                goto failed;
             }
             if (!g_unix_set_fd_nonblocking(vhostfd, true, NULL)) {
                 error_setg_errno(errp, errno, "Failed to set FD nonblocking");
-                return;
+                goto failed;
             }
         }
         options.opaque = (void *)(uintptr_t)vhostfd;
@@ -760,11 +760,17 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
             } else {
                 warn_report(VHOST_NET_INIT_FAILED);
             }
-            return;
+            goto failed;
         }
     } else if (vhostfdname) {
         error_setg(errp, "vhostfd(s)= is not valid without vhost");
+        goto failed;
     }
+
+    return;
+
+failed:
+    qemu_del_net_client(&s->nc);
 }
 
 static int get_fds(char *str, char *fds[], int max)
-- 
2.7.4




* Re: [PULL 0/8] Net patches
  2022-09-27  7:30 [PULL 0/8] Net patches Jason Wang
                   ` (7 preceding siblings ...)
  2022-09-27  7:30 ` [PULL 8/8] virtio: del net client if net_init_tap_one failed Jason Wang
@ 2022-09-27 18:40 ` Stefan Hajnoczi
  8 siblings, 0 replies; 10+ messages in thread
From: Stefan Hajnoczi @ 2022-09-27 18:40 UTC (permalink / raw)
  To: Jason Wang; +Cc: qemu-devel, peter.maydell, stefanha, Jason Wang

Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/7.2 for any user-visible changes.
