From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755588AbdESQg5 (ORCPT ); Fri, 19 May 2017 12:36:57 -0400
Received: from mx1.redhat.com ([209.132.183.28]:37422 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751260AbdESQgu
	(ORCPT ); Fri, 19 May 2017 12:36:50 -0400
DMARC-Filter: OpenDMARC Filter v1.3.2 mx1.redhat.com 693CB624B3
Authentication-Results: ext-mx10.extmail.prod.ext.phx2.redhat.com;
	dmarc=none (p=none dis=none) header.from=redhat.com
Authentication-Results: ext-mx10.extmail.prod.ext.phx2.redhat.com;
	spf=pass smtp.mailfrom=mst@redhat.com
DKIM-Filter: OpenDKIM Filter v2.11.0 mx1.redhat.com 693CB624B3
Date: Fri, 19 May 2017 19:36:49 +0300
From: "Michael S. Tsirkin"
To: Jason Wang
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next V5 0/9] vhost_net rx batch dequeuing
Message-ID: <20170519193636-mutt-send-email-mst@kernel.org>
References: <1494994485-12994-1-git-send-email-jasowang@redhat.com>
	<20170517235738-mutt-send-email-mst@kernel.org>
	<0d1dbf31-32c8-34b4-d8e8-48d04f2fc205@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0d1dbf31-32c8-34b4-d8e8-48d04f2fc205@redhat.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.39]); Fri, 19 May 2017 16:36:50 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 19, 2017 at 02:27:16PM +0800, Jason Wang wrote:
> 
> 
> On May 18, 2017 at 04:59, Michael S. Tsirkin wrote:
> > On Wed, May 17, 2017 at 12:14:36PM +0800, Jason Wang wrote:
> > > This series tries to implement rx batching for vhost-net. This is
> > > done by batching the dequeuing from the skb_array exported by the
> > > underlying socket and passing the skb back through msg_control to
> > > finish the userspace copy. This is also a prerequisite for further
> > > batching implementations on the rx path.
> > > 
> > > Tests show at most a 7.56% improvement in rx pps on top of batch
> > > zeroing, and no obvious changes in TCP_STREAM/TCP_RR results.
> > > 
> > > Please review.
> > > 
> > > Thanks
> > 
> > A surprisingly large gain for such a simple change. It would be nice
> > to understand better why this helps - in particular, does the optimal
> > batch size change if the ring is bigger or smaller?
> 
> Will test, just want to confirm. You mean the virtio ring, not
> tx_queue_len here?
> 
> Thanks

Exactly.

Thanks, MST