From: Wei Wang
Subject: [PATCH 6/6] Vhost-pci RFC: Experimental Results
Date: Sun, 29 May 2016 07:36:35 +0800
Message-ID: <1464478595-146533-7-git-send-email-wei.w.wang@intel.com>
References: <1464478595-146533-1-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1464478595-146533-1-git-send-email-wei.w.wang@intel.com>
Cc: Wei Wang
To: kvm@vger.kernel.org, qemu-devel@nongnu.org, virtio-comment@lists.oasis-open.org, virtio-dev@lists.oasis-open.org, mst@redhat.com, stefanha@redhat.com, pbonzini@redhat.com

Signed-off-by: Wei Wang
---
 Results | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 Results

diff --git a/Results b/Results
new file mode 100644
index 0000000..7402826
--- /dev/null
+++ b/Results
@@ -0,0 +1,18 @@
+We have built a basic vhost-pci based inter-VM communication framework for
+network packet transmission. To measure how throughput scales as more VMs are
+chained to stream packets, we chain 2 to 5 VMs and follow the vsperf test
+methodology proposed by OPNFV, as shown in Fig. 2. The first VM is assigned a
+passthrough physical NIC to inject packets from an external packet generator,
+and the last VM is assigned a passthrough physical NIC to eject packets back
+to the generator. A layer-2 forwarding module in each VM forwards incoming
+packets from NIC1 (the injection NIC) to NIC2 (the ejection NIC). In the
+traditional setup, NIC2 is a virtio-net device connected to the vhost-user
+backend in OVS. With our proposed solution, NIC2 is a vhost-pci device, which
+copies packets directly to the next VM. The packet generator implements the
+RFC 2544 throughput test, which searches for the highest rate that can be
+sustained at a 0% packet loss rate.
+
+Fig. 3 shows the scalability test results. In the vhost-user case, a
+significant throughput drop (40%~55%) occurs when 4 and 5 VMs are chained
+together. The vhost-pci based inter-VM communication scales well, with no
+significant throughput drop, as more VMs are chained together.
-- 
1.8.3.1
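
Note: for readers unfamiliar with the setup, the layer-2 forwarding module in
each chained VM can be thought of as a simple poll-and-forward loop between the
injection NIC and the ejection NIC. The sketch below is only an illustration,
assuming a DPDK-style forwarder (as vsperf commonly uses); the burst size and
the omitted EAL/port initialization are simplifications, and this is not the
actual module used in the tests.

/*
 * Minimal sketch of a layer-2 forwarding loop between the injection NIC
 * (rx_port) and the ejection NIC (tx_port). Assumes a DPDK-based forwarder;
 * EAL and port setup are omitted, and no MAC rewriting or statistics are done.
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void l2fwd_loop(uint16_t rx_port, uint16_t tx_port)
{
	struct rte_mbuf *bufs[BURST_SIZE];

	for (;;) {
		/* Poll a burst of packets from the injection NIC. */
		uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
		if (nb_rx == 0)
			continue;

		/* Forward the burst unchanged to the ejection NIC. */
		uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

		/* Drop whatever the TX queue could not accept. */
		for (uint16_t i = nb_tx; i < nb_rx; i++)
			rte_pktmbuf_free(bufs[i]);
	}
}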
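
Similarly, the RFC 2544 throughput test run by the external packet generator
amounts to a zero-loss search over the offered rate. The sketch below shows the
idea only; send_at_rate() and count_received() are hypothetical generator
hooks, and the trial duration and search resolution are arbitrary example
values, not the settings used in the measurements.

/*
 * Sketch of an RFC 2544-style zero-loss throughput search: binary-search the
 * offered rate and keep the highest rate at which no packets are lost.
 */
#include <stdint.h>

extern uint64_t send_at_rate(double rate_mpps, double seconds); /* packets sent */
extern uint64_t count_received(void);                           /* packets back */

static double rfc2544_zero_loss_rate_mpps(double line_rate_mpps)
{
	double lo = 0.0, hi = line_rate_mpps, best = 0.0;

	while (hi - lo > 0.01) {            /* 0.01 Mpps resolution (example) */
		double rate = (lo + hi) / 2.0;
		uint64_t sent = send_at_rate(rate, 60.0);   /* 60 s trial */
		uint64_t recv = count_received();

		if (recv == sent) {
			/* Zero loss at this rate: remember it, try higher. */
			best = rate;
			lo = rate;
		} else {
			/* Packets were lost: back off. */
			hi = rate;
		}
	}
	return best;
}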