From: Andrew Lunn <andrew@lunn.ch>
To: Joe Damato <jdamato@fastly.com>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
thomas.petazzoni@bootlin.com, linux@armlinux.org.uk,
jbrouer@redhat.com, ilias.apalodimas@linaro.org
Subject: Re: [PATCH net-next] net: mvneta: add support for page_pool_get_stats
Date: Thu, 7 Apr 2022 01:36:01 +0200
Message-ID: <Yk4j4YCVuOLK/1uE@lunn.ch>
In-Reply-To: <20220406230136.GA96269@fastly.com>
On Wed, Apr 06, 2022 at 04:01:37PM -0700, Joe Damato wrote:
> On Wed, Apr 06, 2022 at 04:02:44PM +0200, Lorenzo Bianconi wrote:
> > > > +static void mvneta_ethtool_update_pp_stats(struct mvneta_port *pp,
> > > > +                                           struct page_pool_stats *stats)
> > > > +{
> > > > +        int i;
> > > > +
> > > > +        memset(stats, 0, sizeof(*stats));
> > > > +        for (i = 0; i < rxq_number; i++) {
> > > > +                struct page_pool *page_pool = pp->rxqs[i].page_pool;
> > > > +                struct page_pool_stats pp_stats = {};
> > > > +
> > > > +                if (!page_pool_get_stats(page_pool, &pp_stats))
> > > > +                        continue;
> > > > +
> > > > +                stats->alloc_stats.fast += pp_stats.alloc_stats.fast;
> > > > +                stats->alloc_stats.slow += pp_stats.alloc_stats.slow;
> > > > +                stats->alloc_stats.slow_high_order +=
> > > > +                        pp_stats.alloc_stats.slow_high_order;
> > > > +                stats->alloc_stats.empty += pp_stats.alloc_stats.empty;
> > > > +                stats->alloc_stats.refill += pp_stats.alloc_stats.refill;
> > > > +                stats->alloc_stats.waive += pp_stats.alloc_stats.waive;
> > > > +                stats->recycle_stats.cached += pp_stats.recycle_stats.cached;
> > > > +                stats->recycle_stats.cache_full +=
> > > > +                        pp_stats.recycle_stats.cache_full;
> > > > +                stats->recycle_stats.ring += pp_stats.recycle_stats.ring;
> > > > +                stats->recycle_stats.ring_full +=
> > > > +                        pp_stats.recycle_stats.ring_full;
> > > > +                stats->recycle_stats.released_refcnt +=
> > > > +                        pp_stats.recycle_stats.released_refcnt;
> > >
> > > Am I right in saying that these are all software stats? They are also
> > > generic for any receive queue using the page pool?
> >
> > Yes, these stats are accounted for by the kernel, so they are sw stats, but
> > I guess the xdp ones are sw as well, right?
> >
> > >
> > > It seems odd that the driver is doing the addition here. Why not pass stats
> > > into page_pool_get_stats() and let it do the summing? That would also make
> > > it easier to add additional statistics later.
> > >
> > > I'm also wondering if ethtool -S is even the correct API. It should be
> > > for hardware-dependent statistics, those which change between
> > > implementations, whereas these statistics should be generic. Maybe
> > > they should live in /sys/class/net/ethX/statistics/, with the driver
> > > itself not involved at all and the page pool code implementing them?
> >
> > I do not have a strong opinion on it, but I can see an issue for some drivers
> > (e.g. mvpp2, iirc) where page_pools are not specific to each net_device but
> > are shared between multiple ports, so maybe it is better to let the driver
> > decide how to report them. What do you think?
>
> When I implemented this API, the feedback was essentially that the
> drivers should be responsible for reporting the stats of their active
> page_pool structures; this is why the first driver to use it (mlx5)
> consumes the API and outputs the stats via ethtool -S.
>
> I have no strong preference, either, but I think that exposing the stats
> via an API for the drivers to consume is less tricky; the driver knows
> which page_pools are active and which pool is associated with which
> RX-queue, and so on.
>
> If there is general consensus for a different approach amongst the
> page_pool maintainers, I am happy to implement it.
If we keep this in the drivers, it would be good to move as much of the
code as possible into the core, to keep cut/paste to a minimum. We want
the same strings for every driver, for example, and as it stands adding
a new counter looks painful, since you would need to touch every driver
using the page pool.
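Something along these lines, untested and just to show the shape; the
real helper would also need to handle the recycle counters:

        /* Core side: sum one pool's counters into a caller supplied
         * struct, so drivers do not open code the per field additions.
         * Sketch only, not the current implementation.
         */
        bool page_pool_get_stats(struct page_pool *pool,
                                 struct page_pool_stats *stats)
        {
                if (!stats)
                        return false;

                stats->alloc_stats.fast += pool->alloc_stats.fast;
                stats->alloc_stats.slow += pool->alloc_stats.slow;
                stats->alloc_stats.slow_high_order +=
                        pool->alloc_stats.slow_high_order;
                stats->alloc_stats.empty += pool->alloc_stats.empty;
                stats->alloc_stats.refill += pool->alloc_stats.refill;
                stats->alloc_stats.waive += pool->alloc_stats.waive;
                /* ... same pattern for the recycle_stats fields ... */

                return true;
        }

The loop in mvneta then shrinks to:

        memset(stats, 0, sizeof(*stats));
        for (i = 0; i < rxq_number; i++)
                page_pool_get_stats(pp->rxqs[i].page_pool, stats);
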
Maybe even consider adding ETH_SS_PAGE_POOL. You can then put
page_pool_get_sset_count() and page_pool_get_sset_strings() as helpers
in the core, and the driver just needs to implement the get_stats()
part, again with a helper in the core which can do most of the work.
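With invented names and guessed signatures, the core side could be
roughly:

        /* In the page pool code, so the count and strings stay
         * identical for every driver. Illustrative prototypes only.
         */
        int page_pool_get_sset_count(void);
        void page_pool_get_sset_strings(u8 *data);
        void page_pool_get_ethtool_stats(u64 *data,
                                         const struct page_pool_stats *stats);

and a driver would only route the new string set to those helpers, for
example in its .get_sset_count():

        /* foo_stats is a stand-in for the driver's own stats array. */
        switch (sset) {
        case ETH_SS_STATS:
                return ARRAY_SIZE(foo_stats);
        case ETH_SS_PAGE_POOL:
                return page_pool_get_sset_count();
        default:
                return -EOPNOTSUPP;
        }

That way a new page pool counter only means touching the core, not
every driver.
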
Andrew
Thread overview: 9+ messages
2022-04-05 20:32 [PATCH net-next] net: mvneta: add support for page_pool_get_stats Lorenzo Bianconi
2022-04-06 12:49 ` Andrew Lunn
2022-04-06 12:53 ` Ilias Apalodimas
2022-04-06 13:05 ` Lorenzo Bianconi
2022-04-06 13:38 ` Andrew Lunn
2022-04-06 14:02 ` Lorenzo Bianconi
2022-04-06 23:01 ` Joe Damato
2022-04-06 23:36 ` Andrew Lunn [this message]
2022-04-07 16:48 ` Lorenzo Bianconi