From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1424404AbcBQXBl (ORCPT ); Wed, 17 Feb 2016 18:01:41 -0500
Received: from wtarreau.pck.nerim.net ([62.212.114.60]:17106 "EHLO 1wt.eu" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1424341AbcBQXBi (ORCPT ); Wed, 17 Feb 2016 18:01:38 -0500
Date: Wed, 17 Feb 2016 23:59:58 +0100
From: Willy Tarreau 
To: Gregory CLEMENT 
Cc: "David S. Miller" , linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Thomas Petazzoni , Florian Fainelli , Jason Cooper , Andrew Lunn , Sebastian Hesselbarth , linux-arm-kernel@lists.infradead.org, Lior Amsalem , Nadav Haklai , Marcin Wojtas , Simon Guinot , Russell King - ARM Linux , Timor Kardashov , Sebastian Careba 
Subject: Re: [PATCH v2 net-next 0/8] API set for HW Buffer management
Message-ID: <20160217225958.GA31113@1wt.eu>
References: <1455636823-14470-1-git-send-email-gregory.clement@free-electrons.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1455636823-14470-1-git-send-email-gregory.clement@free-electrons.com>
User-Agent: Mutt/1.4.2.3i
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Gregory,

On Tue, Feb 16, 2016 at 04:33:35PM +0100, Gregory CLEMENT wrote:
> Hello,
>
> A few weeks ago I sent a proposal for an API set for HW buffer
> management; for a better view of the motivation behind this API, see
> the cover letter of that proposal:
> http://thread.gmane.org/gmane.linux.kernel/2125152
>
> Since that version I have taken Florian's review into account:
> - The hardware buffer management helpers are no longer built by
>   default and now depend on a hidden config symbol which has to be
>   selected by the driver if needed.
> - hwbm_pool_refill() and hwbm_pool_add() now receive a gfp_t as
>   argument, allowing the caller to specify the flags it needs.
> - buf_num is now tested to ensure there is no wrapping.
> - A spinlock has been added to protect hwbm_pool_add() in
>   SMP or irq context.
>
> I also used pr_warn instead of pr_debug in case of errors.
>
> I fixed the mvneta implementation by returning the buffer to the pool
> at various places instead of ignoring it.
>
> Regarding the series itself, I tried to make it easier to merge:
> - Squashed "bus: mvebu-mbus: Fix size test for
>   mvebu_mbus_get_dram_win_info" into "bus: mvebu-mbus: provide api for
>   obtaining IO and DRAM window information".
> - Added my Signed-off-by on all the patches as submitter of the
>   series.
> - Renamed the dts patches with the pattern "ARM: dts: platform:".
> - Removed the patch "ARM: mvebu: enable SRAM support in
>   mvebu_v7_defconfig" from this series, as it has already been
>   applied.
> - Modified the order of the patches.
>
> To ease testing, the branch mvneta-BM-framework-v2 is available at
> git@github.com:MISL-EBU-System-SW/mainline-public.git.

Well, I tested this patch series on top of the latest master (from
today) on my fresh new ClearFog board. I compared carefully with and
without the patch set. My workload was haproxy receiving connections
and forwarding them to my PC via the same port. I tested both with
short connections (HTTP GET of an empty file) and long ones (1 MB or
more). No trouble was detected at all, which is pretty good.

I noticed a very tiny performance drop which is more noticeable on
short connections (high packet rates): my forwarded connection rate
went down from 17500/s to 17300/s. But I have not yet checked what can
be tuned when using the BM, nor did I compare CPU usage. I remember
having run some tests in the past, I guess on the XP-GP board, where
the BM could save a significant amount of CPU and improve cache
efficiency, so if that is the case here, we don't really care about a
possible 1% performance drop. I'll try to provide more results as time
permits.
In the meantime, if you want (or plan to submit a next batch), feel
free to add a Tested-by: Willy Tarreau .

Cheers,
Willy