From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon
Subject: Re: [PATCH v2 01/13] virtio: Introduce config RTE_VIRTIO_INC_VECTOR
Date: Fri, 18 Dec 2015 11:41:29 +0100
Message-ID: <3336335.A6HdtgA8Ga@xps13>
References: <1450098032-21198-1-git-send-email-sshukla@mvista.com> <20151217152435.3c733ac1@xeon-e3>
To: Stephen Hemminger
Cc: dev@dpdk.org
List-Id: patches and discussions about DPDK

2015-12-18 09:52, Xie, Huawei:
> On 12/18/2015 7:25 AM, Stephen Hemminger wrote:
> > On Thu, 17 Dec 2015 17:32:38 +0530
> > Santosh Shukla wrote:
> >
> >> On Mon, Dec 14, 2015 at 6:30 PM, Santosh Shukla wrote:
> >>> virtio_recv_pkts_vec and the other virtio vector APIs are written for
> >>> SSE/AVX instructions. For arm64 in particular, a virtio vector
> >>> implementation does not exist yet (todo).
> >>>
> >>> So the virtio PMD driver won't build for targets like i686 and arm64.
> >>> With RTE_VIRTIO_INC_VECTOR=n, the driver can be built for
> >>> non-SSE/AVX targets and will work in non-vectored virtio mode.
> >>>
> >>> Signed-off-by: Santosh Shukla
> >>> ---
> >> Ping?
> >>
> >> Any review / comment on this patch would be much appreciated. Thanks.
> > The patches I posted (and which were ignored by Intel) to support
> > indirect descriptors and any-layout should give a much bigger
> > performance gain than all this low-level SSE bit twiddling.
> Hi Stephen:
> We only did the SSE twiddling for RX, which almost doubles the performance
> compared to the normal path in the virtio/vhost performance test case.
> Enabling the indirect and any-layout features is mostly for TX. We also
> did some optimization for the single-segment, non-offload case in TX,
> without using SSE, which also gives a ~60% performance improvement in
> Qian's results. My optimization is mostly for the single-segment,
> non-offload case, which I call simple rx/tx.
> I plan to add a virtio/vhost performance benchmark so that we can easily
> measure the performance difference for each patch.
>
> The indirect and any-layout features are useful for multi-segment
> transmitted packet mbufs. I acked your patch the first time around and
> thought it had been applied. I don't understand why you say it was
> ignored by Intel.

There was an error and Stephen never replied nor pinged about it:
	http://dpdk.org/ml/archives/dev/2015-October/026984.html
It happens.

Reminder: it is the responsibility of the author to get patches reviewed
and accepted. Please let's avoid useless blaming.
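For readers following along: a minimal sketch of how the proposed option
would be used, assuming the config name from the patch subject
(RTE_VIRTIO_INC_VECTOR) and the usual DPDK config/common_* convention of
that era. The target name is only an example for a non-SSE/AVX build.

    # Generate a build config for a non-x86 target (example target name),
    # then disable the vectored virtio path so the PMD builds without
    # SSE/AVX intrinsics:
    make config T=arm64-armv8a-linuxapp-gcc
    sed -i 's/^CONFIG_RTE_VIRTIO_INC_VECTOR=y/CONFIG_RTE_VIRTIO_INC_VECTOR=n/' \
        build/.config
    make

With the option set to n, virtio_recv_pkts_vec is compiled out and the
driver falls back to the non-vectored RX/TX paths.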