From: Matthew Hall
Subject: Re: VMXNET3 on vmware, ping delay
Date: Thu, 25 Jun 2015 08:18:34 -0700
Message-ID: <20150625151834.GA29296@mhcomputing.net>
In-Reply-To: <792CF0A6B0883C45AF8C719B2ECA946E42B2430F@DEMUMBX003.nsn-intra.net>
To: "Vass, Sandor (Nokia - HU/Budapest)"
Cc: "dev@dpdk.org"

On Thu, Jun 25, 2015 at 09:14:53AM +0000, Vass, Sandor (Nokia - HU/Budapest) wrote:
> According to my understanding each packet should go
> through BR as fast as possible, but it seems that the rte_eth_rx_burst
> retrieves packets only when there are at least 2 packets on the RX queue of
> the NIC. At least most of the times as there are cases (rarely - according
> to my console log) when it can retrieve 1 packet also and sometimes only 3
> packets can be retrieved...

By default DPDK is optimized for throughput, not latency. Try a test with
heavier traffic.

There is also some work going on right now on a DPDK interrupt-driven mode,
which will behave more like traditional Ethernet drivers than like the
current polling-mode drivers.

Though I'm not an expert on it, there are also a number of ways to optimize
for latency, which hopefully others here can discuss... or you could search
the archives, the web site, or the Intel tuning documentation.

Matthew.
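
For reference, the basic receive path is just a busy poll; a minimal
forwarding loop in the style of the DPDK skeleton example looks roughly like
the sketch below. rte_eth_rx_burst() is non-blocking and returns whatever is
already sitting in the RX ring, anywhere from 0 up to the burst size, so any
batching you see is more likely a property of the NIC/driver and of how often
the loop runs than of the API waiting for a second packet. BURST_SIZE, the
port/queue ids, and the exact integer types (which vary across DPDK releases)
are illustrative, and this has not been run against the vmxnet3 PMD:

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32	/* illustrative; smaller bursts can shave latency */

/*
 * Minimal poll-mode forwarding loop. rte_eth_rx_burst() does not wait
 * for a minimum number of packets; it returns 0..BURST_SIZE packets,
 * whatever is currently available on the queue.
 */
static void
forward_loop(uint16_t rx_port, uint16_t tx_port)
{
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb_rx, nb_tx, i;

	for (;;) {
		nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
		if (nb_rx == 0)
			continue;	/* nothing pending; poll again */

		nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

		/* Free anything the TX queue could not accept. */
		for (i = nb_tx; i < nb_rx; i++)
			rte_pktmbuf_free(bufs[i]);
	}
}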
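
The interrupt-driven mode mentioned above works roughly along the lines of
the rx-interrupt pattern used by the l3fwd-power example, sketched below. It
assumes a DPDK release where the RX interrupt API is available and a PMD that
supports it (which may not include vmxnet3); the port/queue ids are
illustrative:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/*
 * Sketch of the RX interrupt pattern: arm the queue interrupt, sleep
 * until traffic shows up, then fall back to busy polling. Requires
 * intr_conf.rxq = 1 in the port configuration at setup time.
 */
static void
sleep_until_rx(uint16_t port, uint16_t queue)
{
	struct rte_epoll_event event;

	/* Normally done once at setup: attach the queue's interrupt to
	 * this lcore's epoll instance. */
	rte_eth_dev_rx_intr_ctl_q(port, queue, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	rte_eth_dev_rx_intr_enable(port, queue);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1 /* block */);
	rte_eth_dev_rx_intr_disable(port, queue);
}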