From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 7 Apr 2022 21:25:50 +0200
From: Andrew Lunn
To: Ilias Apalodimas
Cc: Lorenzo Bianconi, netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
	davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	thomas.petazzoni@bootlin.com, jbrouer@redhat.com, jdamato@fastly.com
Subject: Re: [RFC net-next 2/2] net: mvneta: add support for page_pool_get_stats
MIME-Version: 1.0
Content-Type: text/plain;
	charset=us-ascii
X-Mailing-List: netdev@vger.kernel.org

On Thu, Apr 07, 2022 at 09:35:52PM +0300, Ilias Apalodimas wrote:
> Hi Andrew,
>
> On Thu, 7 Apr 2022 at 21:25, Andrew Lunn wrote:
> >
> > > +static void mvneta_ethtool_pp_stats(struct mvneta_port *pp, u64 *data)
> > > +{
> > > +	struct page_pool_stats stats = {};
> > > +	int i;
> > > +
> > > +	for (i = 0; i < rxq_number; i++) {
> > > +		struct page_pool *page_pool = pp->rxqs[i].page_pool;
> > > +		struct page_pool_stats pp_stats = {};
> > > +
> > > +		if (!page_pool_get_stats(page_pool, &pp_stats))
> > > +			continue;
> > > +
> > > +		stats.alloc_stats.fast += pp_stats.alloc_stats.fast;
> > > +		stats.alloc_stats.slow += pp_stats.alloc_stats.slow;
> > > +		stats.alloc_stats.slow_high_order +=
> > > +			pp_stats.alloc_stats.slow_high_order;
> > > +		stats.alloc_stats.empty += pp_stats.alloc_stats.empty;
> > > +		stats.alloc_stats.refill += pp_stats.alloc_stats.refill;
> > > +		stats.alloc_stats.waive += pp_stats.alloc_stats.waive;
> > > +		stats.recycle_stats.cached += pp_stats.recycle_stats.cached;
> > > +		stats.recycle_stats.cache_full +=
> > > +			pp_stats.recycle_stats.cache_full;
> > > +		stats.recycle_stats.ring += pp_stats.recycle_stats.ring;
> > > +		stats.recycle_stats.ring_full +=
> > > +			pp_stats.recycle_stats.ring_full;
> > > +		stats.recycle_stats.released_refcnt +=
> > > +			pp_stats.recycle_stats.released_refcnt;
> >
> > We should be trying to remove this sort of code from the driver, and
> > put it all in the core. It wants to be something more like:
> >
> >	struct page_pool_stats stats = {};
> >	int i;
> >
> >	for (i = 0; i < rxq_number; i++) {
> >		struct page_pool *page_pool = pp->rxqs[i].page_pool;
> >
> >		if (!page_pool_get_stats(page_pool, &stats))
> >			continue;
> >
> >	page_pool_ethtool_stats_get(data, &stats);
> >
> > Let page_pool_get_stats() do the accumulate as it puts values in stats.
>
> Unless I misunderstand this, I don't think that's doable in page pool.
> That means page pool is aware of what stats to accumulate per driver,
> and I certainly don't want anything driver specific to creep in there.
> The driver knows the number of pools it is using and can gather
> them all together.

I agree that the driver knows the number of pools. For mvneta, there is
one per RX queue, which is this part of my suggestion:

> >	for (i = 0; i < rxq_number; i++) {
> >		struct page_pool *page_pool = pp->rxqs[i].page_pool;

However, it has no idea about the stats themselves. They are purely a
construct of the page pool. Hence the next part of my suggestion: ask
the page pool for the stats, placing them into stats and doing the
accumulate at the same time:

> >		if (!page_pool_get_stats(page_pool, &stats))
> >			continue;

and now that we have the accumulated stats, turn them into ethtool
format:

> >	page_pool_ethtool_stats_get(data, &stats);

Where do you see any driver knowledge required in either
page_pool_get_stats() or page_pool_ethtool_stats_get()?

	Andrew