From: Maxime Coquelin
Subject: [PATCH v2] vhost: avoid memory barriers when no descriptors dequeued
Date: Tue, 23 Oct 2018 12:07:10 +0200
Message-ID: <20181023100710.14739-1-maxime.coquelin@redhat.com>
To: dev@dpdk.org, tiwei.bie@intel.com, jfreimann@redhat.com, zhihong.wang@intel.com
Cc: Maxime Coquelin, stable@dpdk.org
List-Id: DPDK patches and discussions

In both the split and packed dequeue paths, the flush_shadow_used_ring
and vhost_vring_call variants get called even if no packets have been
dequeued, and so no descriptor updates happened. This has an impact on
the CPU pipeline, as memory barriers are used in these functions.

This patch skips calling these functions when no descriptors have been
dequeued.

The performance gain with split ring when dequeue zero-copy is disabled
should be null, but it should be noticeable with packed ring or with
dequeue zero-copy enabled.

Fixes: ae999ce49dcb ("vhost: add Tx support for packed ring")
Fixes: 915cf9404225 ("vhost: use shadow used ring in dequeue path")
Cc: stable@dpdk.org

Signed-off-by: Maxime Coquelin
---
Changes in v2:
- Fix shadow_used_idx reset in error path (Tiwei)

 lib/librte_vhost/virtio_net.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 1c31c0562..8ad30c94a 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1360,8 +1360,10 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			}
 		}
 
-		flush_shadow_used_ring_split(dev, vq);
-		vhost_vring_call_split(dev, vq);
+		if (likely(vq->shadow_used_idx)) {
+			flush_shadow_used_ring_split(dev, vq);
+			vhost_vring_call_split(dev, vq);
+		}
 	}
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
@@ -1440,8 +1442,10 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		do_data_copy_dequeue(vq);
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
-		flush_shadow_used_ring_split(dev, vq);
-		vhost_vring_call_split(dev, vq);
+		if (likely(vq->shadow_used_idx)) {
+			flush_shadow_used_ring_split(dev, vq);
+			vhost_vring_call_split(dev, vq);
+		}
 	}
 
 	return i;
@@ -1476,8 +1480,10 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			}
 		}
 
-		flush_shadow_used_ring_packed(dev, vq);
-		vhost_vring_call_packed(dev, vq);
+		if (likely(vq->shadow_used_idx)) {
+			flush_shadow_used_ring_packed(dev, vq);
+			vhost_vring_call_packed(dev, vq);
+		}
 	}
 
 	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
@@ -1555,8 +1561,10 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		do_data_copy_dequeue(vq);
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
-		flush_shadow_used_ring_packed(dev, vq);
-		vhost_vring_call_packed(dev, vq);
+		if (likely(vq->shadow_used_idx)) {
+			flush_shadow_used_ring_packed(dev, vq);
+			vhost_vring_call_packed(dev, vq);
+		}
 	}
 
 	return i;
-- 
2.17.1
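
For readers who are not familiar with the vhost shadow used ring, the
following standalone C sketch illustrates the pattern the patch applies:
publish the pending entries behind a release barrier and kick the peer
only when descriptors were actually dequeued, so both the barrier and the
notification are skipped on empty runs. It is not DPDK code; the
shadow_ring structure and the flush_shadow()/notify_peer()/complete_dequeue()
helpers are hypothetical stand-ins for flush_shadow_used_ring_*() and
vhost_vring_call_*().

/*
 * Standalone sketch (not the DPDK implementation): only flush the shadow
 * entries and notify the other side when descriptors were actually
 * dequeued. All names below are hypothetical.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct shadow_ring {
	uint16_t shadow_used_idx;	/* pending, not yet published entries */
	uint16_t used_idx;		/* index the other side polls */
};

static void flush_shadow(struct shadow_ring *r)
{
	/* Order the entry stores before exposing the new used index. */
	atomic_thread_fence(memory_order_release);
	r->used_idx = (uint16_t)(r->used_idx + r->shadow_used_idx);
	r->shadow_used_idx = 0;
}

static void notify_peer(const struct shadow_ring *r)
{
	/* Placeholder for the eventfd kick done by vhost_vring_call_*(). */
	printf("notify: used_idx=%u\n", (unsigned int)r->used_idx);
}

static void complete_dequeue(struct shadow_ring *r)
{
	/* The point of the patch: no barrier, no kick when nothing was dequeued. */
	if (r->shadow_used_idx) {
		flush_shadow(r);
		notify_peer(r);
	}
}

int main(void)
{
	struct shadow_ring r = { 0, 0 };

	complete_dequeue(&r);	/* empty run: nothing happens */

	r.shadow_used_idx = 4;	/* pretend four descriptors were dequeued */
	complete_dequeue(&r);	/* flushes and notifies once */

	return 0;
}

In the actual code, vq->shadow_used_idx plays the role of shadow_used_idx
above, and the likely() hint in the patch tells the compiler that a
non-empty burst is the expected case.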