From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stephen Hemminger
Subject: Re: Poor SRIOV performance with ESXi Linux guest
Date: Wed, 2 Sep 2015 15:31:04 -0700
Message-ID: <20150902153104.65a7d70d@urahara>
To: Ale Mansoor
Cc: "dev@dpdk.org"
List-Id: patches and discussions about DPDK

On Wed, 2 Sep 2015 22:18:27 +0000
Ale Mansoor wrote:

> Getting less than 100 packets per second throughput between VFs under my
> Fedora FC20 VM running under ESXi 6.0 with DPDK l2fwd
> (run as ./l2fwd -c 0xf -n 4 -- -p 0x3 -T 1)

That is many orders of magnitude less than expected.

> Questions:
> ----------
>
> Q1) Is DPDK + SRIOV under ESXi supposed to use the igb_uio driver or the
> vfio-pci driver inside the Linux guest OS?

You have to use igb_uio; there is no emulated IOMMU in ESX.

> Q2) What is the expected l2fwd performance when running DPDK under the
> Linux guest OS under ESXi with SRIOV?

Depends on many things. With SRIOV you should reach 10 Mpps or more.
Did you try running Linux on bare metal on the same hardware first?

> Q3) Any idea what may be preventing the vfio-pci driver from binding to
> the VFs inside the guest instance?

vfio-pci needs an IOMMU, which is not available in the guest.

> Q4) Why is igb_uio performing so poorly?

Don't blame igb_uio. It is probably something in the system or VMware.
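For reference, binding a VF to igb_uio inside the guest usually looks roughly like the following sketch. The PCI address 0000:0b:00.0 is a placeholder (substitute your VF's address from lspci), and the script and module paths vary with the DPDK version and build directory:

```shell
# Load the kernel UIO framework, then the igb_uio module built with DPDK.
# The .ko path is an example; it depends on your DPDK build target.
modprobe uio
insmod build/kmod/igb_uio.ko

# Unbind the VF from its kernel driver and bind it to igb_uio.
# 0000:0b:00.0 is a placeholder PCI address for the VF.
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:0b:00.0

# Confirm the VF now shows up under "drv=igb_uio".
./tools/dpdk_nic_bind.py --status
```

These commands need root and real SR-IOV hardware, so treat them as a template rather than something to paste verbatim.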