From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tiwei Bie
Subject: [PATCH 0/5] Fixes and enhancements for Tx path in Virtio PMD
Date: Tue, 19 Feb 2019 18:59:46 +0800
Message-ID: <20190219105951.31046-1-tiwei.bie@intel.com>
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
List-Id: DPDK patches and discussions

Below is a quick (unofficial) performance test (macfwd loop, 64B packets) for the packed ring optimizations in this series, run on an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz platform:

w/o this series:
  packed ring normal/in-order: ~10.4 Mpps

w/ this series:
  packed ring normal:   ~10.9 Mpps
  packed ring in-order: ~11.3 Mpps

In this test, we need to make sure that the vhost side is fast enough, so 4 forwarding cores are used on the vhost side and 1 forwarding core on the virtio side.
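For reference, the relative gains implied by the numbers above can be computed directly (a quick sketch; the Mpps figures are the ones quoted in this cover letter, rounded to one decimal):

```python
# Throughput figures (Mpps) quoted in the cover letter
# for the 64B macfwd test on Xeon Gold 6140.
baseline = 10.4   # packed ring normal/in-order, without this series
normal = 10.9     # packed ring normal, with this series
in_order = 11.3   # packed ring in-order, with this series

# Percentage improvement relative to the baseline.
gain_normal = (normal / baseline - 1) * 100
gain_in_order = (in_order / baseline - 1) * 100

print(f"normal:   +{gain_normal:.1f}%")    # roughly +4.8%
print(f"in-order: +{gain_in_order:.1f}%")  # roughly +8.7%
```

So the series buys roughly 5% on the normal packed ring path and close to 9% on the in-order path in this particular setup.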
vhost side:

./x86_64-native-linuxapp-gcc/app/testpmd \
        -l 13,14,15,16,17 \
        --socket-mem 1024,0 \
        --file-prefix=vhost \
        --vdev=net_vhost0,iface=/tmp/vhost0,queues=4 \
        -- \
        --forward-mode=mac \
        -i \
        --rxq=4 \
        --txq=4 \
        --nb-cores 4

virtio side:

./x86_64-native-linuxapp-gcc/app/testpmd \
        -l 8,9,10,11,12 \
        --socket-mem 1024,0 \
        --single-file-segments \
        --file-prefix=virtio-user \
        --vdev=virtio_user0,path=/tmp/vhost0,queues=4,in_order=1,packed_vq=1 \
        -- \
        --forward-mode=mac \
        -i \
        --rxq=4 \
        --txq=4 \
        --nb-cores 1

Tiwei Bie (5):
  net/virtio: fix Tx desc cleanup for packed ring
  net/virtio: fix in-order Tx path for split ring
  net/virtio: fix in-order Tx path for packed ring
  net/virtio: introduce a helper for clearing net header
  net/virtio: optimize xmit enqueue for packed ring

 drivers/net/virtio/virtio_ethdev.c |   4 +-
 drivers/net/virtio/virtio_rxtx.c   | 203 ++++++++++++++++++++---------
 2 files changed, 146 insertions(+), 61 deletions(-)

-- 
2.17.1
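Since both instances are launched with `-i`, forwarding has to be started from the testpmd prompt. A typical interactive sequence (standard testpmd commands, not specific to this series) looks like:

    testpmd> start
    testpmd> show port stats all
    testpmd> stop
    testpmd> quit

The Mpps numbers above would then be read from the Rx/Tx pps counters reported by `show port stats all` on the virtio side.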