From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v6 0/3] skb_array: array based FIFO for skbs
To: David Miller , mst@redhat.com
References: <1464785601-3074-1-git-send-email-mst@redhat.com> <20160601.215112.20935437120251822.davem@davemloft.net>
Cc: linux-kernel@vger.kernel.org, eric.dumazet@gmail.com, netdev@vger.kernel.org, rostedt@goodmis.org, brouer@redhat.com, kvm@vger.kernel.org
From: Jason Wang
Message-ID: <574FEA8D.1020508@redhat.com>
Date: Thu, 2 Jun 2016 16:13:01 +0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0
MIME-Version: 1.0
In-Reply-To: <20160601.215112.20935437120251822.davem@davemloft.net>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 2016-06-02 12:51, David Miller wrote:
> From: "Michael S. Tsirkin"
> Date: Wed, 1 Jun 2016 15:54:34 +0300
>
>> This is in response to the proposal by Jason to make the tun
>> rx packet queue lockless using a circular buffer.
>> My testing seems to show that, at least for the common use case
>> in networking, which isn't lockless, a circular buffer
>> with indices does not perform that well: each index access
>> causes a cache line to bounce between CPUs, and index accesses
>> cause stalls due to the data dependency.
>>
>> By comparison, an array of pointers where NULL means invalid
>> and !NULL means valid can be updated without messing with barriers
>> at all and does not have this issue.
>>
>> On the flip side, large queues may cause cache pressure:
>> tun has a queue of 1000 entries by default, and that's 8K of pointers.
>> At this point I'm not sure this can be solved efficiently.
>> The correct solution might be sizing the queues appropriately.
>>
>> Here's an implementation of this idea: it can be used more
>> or less wherever sk_buff_head can be used, except that you need
>> to know the queue size in advance.
> ...
>
> I have no fundamental issues with this piece of infrastructure, but when
> it gets included I want this series to include at least one use case.
>
> This can be an adaptation of Jason's tun rx packet queue changes, or
> something similar.
>
> Thanks.

Right, I'm working on using skb array for tun and will post the patch
in the coming days.

Thanks
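For readers following along, the pointer-array idea quoted above can be sketched in userspace C11 atomics. This is not the actual skb_array code from the patchset, just an illustrative single-producer/single-consumer ring under the scheme Michael describes: a NULL slot means empty, a non-NULL slot means valid, so the producer and consumer each poll only their own slot and never share an index cache line. The names (`ptr_ring_demo`, `ring_produce`, `ring_consume`, `RING_SIZE`) are made up for this sketch; the kernel version would use smp_load_acquire()/smp_store_release() and pad the producer and consumer fields to separate cache lines.

```c
#include <stdatomic.h>
#include <stddef.h>

#define RING_SIZE 16  /* queue size must be known in advance */

struct ptr_ring_demo {
    _Atomic(void *) queue[RING_SIZE]; /* NULL = empty slot */
    int producer;  /* touched only by the producer side */
    int consumer;  /* touched only by the consumer side */
};

/* Returns 0 on success, -1 if the ring is full.  The producer only
 * inspects its own slot: a NULL there means the consumer has already
 * drained it, so no consumer index is ever read. */
static int ring_produce(struct ptr_ring_demo *r, void *ptr)
{
    if (atomic_load_explicit(&r->queue[r->producer],
                             memory_order_acquire))
        return -1;
    /* Release store publishes the entry (and whatever it points to). */
    atomic_store_explicit(&r->queue[r->producer], ptr,
                          memory_order_release);
    if (++r->producer >= RING_SIZE)
        r->producer = 0;
    return 0;
}

/* Returns the oldest entry, or NULL if the ring is empty. */
static void *ring_consume(struct ptr_ring_demo *r)
{
    void *ptr = atomic_load_explicit(&r->queue[r->consumer],
                                     memory_order_acquire);
    if (!ptr)
        return NULL;
    /* Clearing the slot hands it back to the producer. */
    atomic_store_explicit(&r->queue[r->consumer], NULL,
                          memory_order_release);
    if (++r->consumer >= RING_SIZE)
        r->consumer = 0;
    return ptr;
}
```

Note how this sketch also exhibits the cache-pressure trade-off from the quote: the array itself is RING_SIZE pointers (1000 entries at 8 bytes each is the 8K mentioned for tun), which is the price paid for keeping the hot indices private to each side.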