* [Qemu-devel] [PATCH 0/3] net: drop implicit peer from offload API
From: Stefan Hajnoczi @ 2014-02-20 11:14 UTC
  To: qemu-devel; +Cc: Vincenzo Maffione

This series is based on my net tree, which already has Vincenzo's "Add netmap
backend offloadings support" patch series merged.

After merging the series I realized we were bypassing the net.h API and
directly accessing nc->info->... in some cases.  This series cleans that
up, at the cost of moving the ->peer dereference back up to offload API
callers.

I think that's the right thing to do to make net.h APIs consistent (the other
functions don't have implicit ->peer) and avoid bypassing the API.
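
For illustration, here is how a typical call site changes (taken from
the virtio-net hunk in patch 1):

    /* before: the offload API dereferenced nc->peer internally */
    n->has_vnet_hdr = qemu_peer_has_vnet_hdr(nc);

    /* after: the caller passes the peer explicitly */
    n->has_vnet_hdr = qemu_has_vnet_hdr(nc->peer);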

Stefan Hajnoczi (3):
  net: remove implicit peer from offload API
  vhost_net: use offload API instead of bypassing it
  virtio-net: use qemu_get_queue() where possible

 hw/net/vhost_net.c  |  6 +++---
 hw/net/virtio-net.c | 12 ++++++------
 hw/net/vmxnet3.c    | 18 +++++++++---------
 include/net/net.h   | 14 +++++++-------
 net/net.c           | 36 ++++++++++++++++++------------------
 5 files changed, 43 insertions(+), 43 deletions(-)

-- 
1.8.5.3


* [Qemu-devel] [PATCH 1/3] net: remove implicit peer from offload API
From: Stefan Hajnoczi @ 2014-02-20 11:14 UTC
  To: qemu-devel; +Cc: Vincenzo Maffione

The virtio_net offload APIs are used on the NIC's peer (i.e. the tap
device).  These APIs were defined to implicitly use nc->peer, saving
the caller the trouble of dereferencing it.

This wasn't ideal because:
1. There are callers who have the peer but not the NIC.  Currently they
   are forced to bypass the API and access peer->info->... directly.
2. The rest of the net.h API uses nc, not nc->peer, so it is
   inconsistent.

This patch pushes nc->peer back up to callers.
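
A sketch of case 1 above, anticipating the vhost_net fix in the next
patch (lines taken from that hunk):

    /* before: bypassing net.h and reaching into the client's info struct */
    if (!backend->info->has_vnet_hdr_len(backend,
                              sizeof(struct virtio_net_hdr_mrg_rxbuf))) {
        net->dev.features &= ~(1 << VIRTIO_NET_F_MRG_RXBUF);
    }

    /* after: the same query through the net.h API */
    if (!qemu_has_vnet_hdr_len(backend,
                               sizeof(struct virtio_net_hdr_mrg_rxbuf))) {
        net->dev.features &= ~(1 << VIRTIO_NET_F_MRG_RXBUF);
    }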

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/net/virtio-net.c | 12 ++++++------
 hw/net/vmxnet3.c    | 18 +++++++++---------
 include/net/net.h   | 14 +++++++-------
 net/net.c           | 36 ++++++++++++++++++------------------
 4 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index cda8c75..9218a09 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -325,7 +325,7 @@ static void peer_test_vnet_hdr(VirtIONet *n)
         return;
     }
 
-    n->has_vnet_hdr = qemu_peer_has_vnet_hdr(nc);
+    n->has_vnet_hdr = qemu_has_vnet_hdr(nc->peer);
 }
 
 static int peer_has_vnet_hdr(VirtIONet *n)
@@ -338,7 +338,7 @@ static int peer_has_ufo(VirtIONet *n)
     if (!peer_has_vnet_hdr(n))
         return 0;
 
-    n->has_ufo = qemu_peer_has_ufo(qemu_get_queue(n->nic));
+    n->has_ufo = qemu_has_ufo(qemu_get_queue(n->nic)->peer);
 
     return n->has_ufo;
 }
@@ -357,8 +357,8 @@ static void virtio_net_set_mrg_rx_bufs(VirtIONet *n, int mergeable_rx_bufs)
         nc = qemu_get_subqueue(n->nic, i);
 
         if (peer_has_vnet_hdr(n) &&
-            qemu_peer_has_vnet_hdr_len(nc, n->guest_hdr_len)) {
-            qemu_peer_set_vnet_hdr_len(nc, n->guest_hdr_len);
+            qemu_has_vnet_hdr_len(nc->peer, n->guest_hdr_len)) {
+            qemu_set_vnet_hdr_len(nc->peer, n->guest_hdr_len);
             n->host_hdr_len = n->guest_hdr_len;
         }
     }
@@ -459,7 +459,7 @@ static uint32_t virtio_net_bad_features(VirtIODevice *vdev)
 
 static void virtio_net_apply_guest_offloads(VirtIONet *n)
 {
-    qemu_peer_set_offload(qemu_get_subqueue(n->nic, 0),
+    qemu_set_offload(qemu_get_subqueue(n->nic, 0)->peer,
             !!(n->curr_guest_offloads & (1ULL << VIRTIO_NET_F_GUEST_CSUM)),
             !!(n->curr_guest_offloads & (1ULL << VIRTIO_NET_F_GUEST_TSO4)),
             !!(n->curr_guest_offloads & (1ULL << VIRTIO_NET_F_GUEST_TSO6)),
@@ -1540,7 +1540,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     peer_test_vnet_hdr(n);
     if (peer_has_vnet_hdr(n)) {
         for (i = 0; i < n->max_queues; i++) {
-            qemu_peer_using_vnet_hdr(qemu_get_subqueue(n->nic, i), true);
+            qemu_using_vnet_hdr(qemu_get_subqueue(n->nic, i)->peer, true);
         }
         n->host_hdr_len = sizeof(struct virtio_net_hdr);
     } else {
diff --git a/hw/net/vmxnet3.c b/hw/net/vmxnet3.c
index 0524684..5be807c 100644
--- a/hw/net/vmxnet3.c
+++ b/hw/net/vmxnet3.c
@@ -1290,12 +1290,12 @@ static void vmxnet3_update_features(VMXNET3State *s)
               s->lro_supported, rxcso_supported,
               s->rx_vlan_stripping);
     if (s->peer_has_vhdr) {
-        qemu_peer_set_offload(qemu_get_queue(s->nic),
-                        rxcso_supported,
-                        s->lro_supported,
-                        s->lro_supported,
-                        0,
-                        0);
+        qemu_set_offload(qemu_get_queue(s->nic)->peer,
+                         rxcso_supported,
+                         s->lro_supported,
+                         s->lro_supported,
+                         0,
+                         0);
     }
 }
 
@@ -1885,7 +1885,7 @@ static bool vmxnet3_peer_has_vnet_hdr(VMXNET3State *s)
 {
     NetClientState *nc = qemu_get_queue(s->nic);
 
-    if (qemu_peer_has_vnet_hdr(nc)) {
+    if (qemu_has_vnet_hdr(nc->peer)) {
         return true;
     }
 
@@ -1933,10 +1933,10 @@ static void vmxnet3_net_init(VMXNET3State *s)
     s->lro_supported = false;
 
     if (s->peer_has_vhdr) {
-        qemu_peer_set_vnet_hdr_len(qemu_get_queue(s->nic),
+        qemu_set_vnet_hdr_len(qemu_get_queue(s->nic)->peer,
             sizeof(struct virtio_net_hdr));
 
-        qemu_peer_using_vnet_hdr(qemu_get_queue(s->nic), 1);
+        qemu_using_vnet_hdr(qemu_get_queue(s->nic)->peer, 1);
     }
 
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
diff --git a/include/net/net.h b/include/net/net.h
index 7b25394..8166345 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -132,13 +132,13 @@ ssize_t qemu_send_packet_async(NetClientState *nc, const uint8_t *buf,
 void qemu_purge_queued_packets(NetClientState *nc);
 void qemu_flush_queued_packets(NetClientState *nc);
 void qemu_format_nic_info_str(NetClientState *nc, uint8_t macaddr[6]);
-bool qemu_peer_has_ufo(NetClientState *nc);
-bool qemu_peer_has_vnet_hdr(NetClientState *nc);
-bool qemu_peer_has_vnet_hdr_len(NetClientState *nc, int len);
-void qemu_peer_using_vnet_hdr(NetClientState *nc, bool enable);
-void qemu_peer_set_offload(NetClientState *nc, int csum, int tso4, int tso6,
-                           int ecn, int ufo);
-void qemu_peer_set_vnet_hdr_len(NetClientState *nc, int len);
+bool qemu_has_ufo(NetClientState *nc);
+bool qemu_has_vnet_hdr(NetClientState *nc);
+bool qemu_has_vnet_hdr_len(NetClientState *nc, int len);
+void qemu_using_vnet_hdr(NetClientState *nc, bool enable);
+void qemu_set_offload(NetClientState *nc, int csum, int tso4, int tso6,
+                      int ecn, int ufo);
+void qemu_set_vnet_hdr_len(NetClientState *nc, int len);
 void qemu_macaddr_default_if_unset(MACAddr *macaddr);
 int qemu_show_nic_models(const char *arg, const char *const *models);
 void qemu_check_nic_model(NICInfo *nd, const char *model);
diff --git a/net/net.c b/net/net.c
index 173673c..912991b 100644
--- a/net/net.c
+++ b/net/net.c
@@ -378,59 +378,59 @@ void qemu_foreach_nic(qemu_nic_foreach func, void *opaque)
     }
 }
 
-bool qemu_peer_has_ufo(NetClientState *nc)
+bool qemu_has_ufo(NetClientState *nc)
 {
-    if (!nc->peer || !nc->peer->info->has_ufo) {
+    if (!nc || !nc->info->has_ufo) {
         return false;
     }
 
-    return nc->peer->info->has_ufo(nc->peer);
+    return nc->info->has_ufo(nc);
 }
 
-bool qemu_peer_has_vnet_hdr(NetClientState *nc)
+bool qemu_has_vnet_hdr(NetClientState *nc)
 {
-    if (!nc->peer || !nc->peer->info->has_vnet_hdr) {
+    if (!nc || !nc->info->has_vnet_hdr) {
         return false;
     }
 
-    return nc->peer->info->has_vnet_hdr(nc->peer);
+    return nc->info->has_vnet_hdr(nc);
 }
 
-bool qemu_peer_has_vnet_hdr_len(NetClientState *nc, int len)
+bool qemu_has_vnet_hdr_len(NetClientState *nc, int len)
 {
-    if (!nc->peer || !nc->peer->info->has_vnet_hdr_len) {
+    if (!nc || !nc->info->has_vnet_hdr_len) {
         return false;
     }
 
-    return nc->peer->info->has_vnet_hdr_len(nc->peer, len);
+    return nc->info->has_vnet_hdr_len(nc, len);
 }
 
-void qemu_peer_using_vnet_hdr(NetClientState *nc, bool enable)
+void qemu_using_vnet_hdr(NetClientState *nc, bool enable)
 {
-    if (!nc->peer || !nc->peer->info->using_vnet_hdr) {
+    if (!nc || !nc->info->using_vnet_hdr) {
         return;
     }
 
-    nc->peer->info->using_vnet_hdr(nc->peer, enable);
+    nc->info->using_vnet_hdr(nc, enable);
 }
 
-void qemu_peer_set_offload(NetClientState *nc, int csum, int tso4, int tso6,
+void qemu_set_offload(NetClientState *nc, int csum, int tso4, int tso6,
                           int ecn, int ufo)
 {
-    if (!nc->peer || !nc->peer->info->set_offload) {
+    if (!nc || !nc->info->set_offload) {
         return;
     }
 
-    nc->peer->info->set_offload(nc->peer, csum, tso4, tso6, ecn, ufo);
+    nc->info->set_offload(nc, csum, tso4, tso6, ecn, ufo);
 }
 
-void qemu_peer_set_vnet_hdr_len(NetClientState *nc, int len)
+void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
 {
-    if (!nc->peer || !nc->peer->info->set_vnet_hdr_len) {
+    if (!nc || !nc->info->set_vnet_hdr_len) {
         return;
     }
 
-    nc->peer->info->set_vnet_hdr_len(nc->peer, len);
+    nc->info->set_vnet_hdr_len(nc, len);
 }
 
 int qemu_can_send_packet(NetClientState *sender)
-- 
1.8.5.3


* [Qemu-devel] [PATCH 2/3] vhost_net: use offload API instead of bypassing it
From: Stefan Hajnoczi @ 2014-02-20 11:14 UTC
  To: qemu-devel; +Cc: Vincenzo Maffione

There is no need to access backend->info->has_vnet_hdr() and friends
anymore.  Use the qemu_has_vnet_hdr() API instead.
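
Besides consistency, the net.h wrapper also guards against a missing
client or callback, which direct ->info access does not.  For
reference, the helper as it reads after the previous patch:

    bool qemu_has_vnet_hdr(NetClientState *nc)
    {
        if (!nc || !nc->info->has_vnet_hdr) {
            return false;
        }

        return nc->info->has_vnet_hdr(nc);
    }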

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/net/vhost_net.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index c90b9ec..a1de2f4 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -106,7 +106,7 @@ struct vhost_net *vhost_net_init(NetClientState *backend, int devfd,
         goto fail;
     }
     net->nc = backend;
-    net->dev.backend_features = backend->info->has_vnet_hdr(backend) ? 0 :
+    net->dev.backend_features = qemu_has_vnet_hdr(backend) ? 0 :
         (1 << VHOST_NET_F_VIRTIO_NET_HDR);
     net->backend = r;
 
@@ -117,8 +117,8 @@ struct vhost_net *vhost_net_init(NetClientState *backend, int devfd,
     if (r < 0) {
         goto fail;
     }
-    if (!backend->info->has_vnet_hdr_len(backend,
-                              sizeof(struct virtio_net_hdr_mrg_rxbuf))) {
+    if (!qemu_has_vnet_hdr_len(backend,
+                               sizeof(struct virtio_net_hdr_mrg_rxbuf))) {
         net->dev.features &= ~(1 << VIRTIO_NET_F_MRG_RXBUF);
     }
     if (~net->dev.features & net->dev.backend_features) {
-- 
1.8.5.3


* [Qemu-devel] [PATCH 3/3] virtio-net: use qemu_get_queue() where possible
From: Stefan Hajnoczi @ 2014-02-20 11:14 UTC
  To: qemu-devel; +Cc: Vincenzo Maffione

qemu_get_queue(n->nic) is shorthand for qemu_get_subqueue(n->nic, 0).
Use the shorthand where possible.
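
For reference, a minimal sketch of the helper (the real definition
lives in net/net.c):

    NetClientState *qemu_get_queue(NICState *nic)
    {
        return qemu_get_subqueue(nic, 0);
    }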

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/net/virtio-net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 9218a09..3c0342e 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -459,7 +459,7 @@ static uint32_t virtio_net_bad_features(VirtIODevice *vdev)
 
 static void virtio_net_apply_guest_offloads(VirtIONet *n)
 {
-    qemu_set_offload(qemu_get_subqueue(n->nic, 0)->peer,
+    qemu_set_offload(qemu_get_queue(n->nic)->peer,
             !!(n->curr_guest_offloads & (1ULL << VIRTIO_NET_F_GUEST_CSUM)),
             !!(n->curr_guest_offloads & (1ULL << VIRTIO_NET_F_GUEST_TSO4)),
             !!(n->curr_guest_offloads & (1ULL << VIRTIO_NET_F_GUEST_TSO6)),
-- 
1.8.5.3


* Re: [Qemu-devel] [PATCH 0/3] net: drop implicit peer from offload API
From: Vincenzo Maffione @ 2014-02-20 11:58 UTC
  To: Stefan Hajnoczi; +Cc: qemu-devel

Hello,
  It looks ok to me.

Cheers
  Vincenzo


-- 
Vincenzo Maffione
