netdev.vger.kernel.org archive mirror
From: Jakub Kicinski <kuba@kernel.org>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
	"David S. Miller" <davem@davemloft.net>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	Eric Dumazet <edumazet@google.com>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Joe Damato <jdamato@fastly.com>,
	Leon Romanovsky <leon@kernel.org>,
	Paolo Abeni <pabeni@redhat.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Simon Horman <horms@kernel.org>, Tariq Toukan <tariqt@nvidia.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Yunsheng Lin <linyunsheng@huawei.com>
Subject: Re: [PATCH net-next v2 2/5] page_pool: Add per-queue statistics.
Date: Fri, 7 Mar 2025 08:59:46 -0800	[thread overview]
Message-ID: <20250307085946.244ddeb6@kernel.org> (raw)
In-Reply-To: <20250307165046.qZAH0XkD@linutronix.de>

On Fri, 7 Mar 2025 17:50:46 +0100 Sebastian Andrzej Siewior wrote:
> On 2025-03-07 08:11:35 [-0800], Jakub Kicinski wrote:
> > On Fri,  7 Mar 2025 12:57:19 +0100 Sebastian Andrzej Siewior wrote:  
> > > The mlx5 driver supports per-channel statistics. To make this support
> > > generic, a template is needed to fill in the individual channel/queue.
> > > 
> > > Provide page_pool_ethtool_stats_get_strings_mq() to fill the strings for
> > > multiple queues.  
> > 
> > Sorry to say this is useless as a common helper; you should move it 
> > to mlx5.
> > 
> > The page pool stats have a standard interface, they are exposed over
> > netlink. If my grep-fu isn't failing me no driver uses the exact
> > strings mlx5 uses. "New drivers" are not supposed to add these stats
> > to ethtool -S, and should just steer users towards the netlink stats.
> > 
> > IOW mlx5 is and will remain the only user of this helper forever.  
> 
> Okay, so per-queue stats is not something other/new drivers are
> interested in?
> The strings are the same except for the rx%d_ prefix, but yes, this
> makes it unique.
> The mlx5 folks seem to be the only ones interested in this. The veth
> driver, for instance, iterates over real_num_rx_queues and adds all
> per-queue stats into one counter. It could also expose these per queue,
> as it does with xdp_packets for instance. But then it uses the
> rx_queue_%d prefix…

What I'm saying is they are already available per queue, via netlink,
with no driver work necessary. See tools/net/ynl/samples/page-pool.c
The mlx5 stats predate the standard interface.
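
[ For context: the standard interface can be exercised straight from the
  kernel tree with no driver changes. Roughly, assuming a kernel new
  enough to carry the netdev netlink family spec and CAP_NET_ADMIN: ]

```shell
# Dump per-page-pool stats over netlink via the generic YNL CLI;
# the compiled C sample lives at tools/net/ynl/samples/page-pool.c.
./tools/net/ynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --dump page-pool-stats-get
```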

> I don't care, I just intended to provide some generic facility so we
> don't have every driver rolling its own thing. I have no problem moving
> this to the mlx5 driver.

Thanks, and sorry for not catching the conversation earlier.


Thread overview: 11+ messages
2025-03-07 11:57 [PATCH net-next v2 0/5] page_pool: Convert stats to u64_stats_t Sebastian Andrzej Siewior
2025-03-07 11:57 ` [PATCH net-next v2 1/5] page_pool: Provide an empty page_pool_stats for disabled stats Sebastian Andrzej Siewior
2025-03-13 14:40   ` Ilias Apalodimas
2025-03-07 11:57 ` [PATCH net-next v2 2/5] page_pool: Add per-queue statistics Sebastian Andrzej Siewior
2025-03-07 16:11   ` Jakub Kicinski
2025-03-07 16:50     ` Sebastian Andrzej Siewior
2025-03-07 16:59       ` Jakub Kicinski [this message]
2025-03-07 17:05         ` Sebastian Andrzej Siewior
2025-03-07 11:57 ` [PATCH net-next v2 3/5] mlx5: Use generic code for page_pool statistics Sebastian Andrzej Siewior
2025-03-07 11:57 ` [PATCH net-next v2 4/5] page_pool: Convert page_pool_recycle_stats to u64_stats_t Sebastian Andrzej Siewior
2025-03-07 11:57 ` [PATCH net-next v2 5/5] page_pool: Convert page_pool_alloc_stats " Sebastian Andrzej Siewior
