netdev.vger.kernel.org archive mirror
From: Andrew Lunn <andrew@lunn.ch>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
	netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
	davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	thomas.petazzoni@bootlin.com, jbrouer@redhat.com,
	jdamato@fastly.com
Subject: Re: [RFC net-next 2/2] net: mvneta: add support for page_pool_get_stats
Date: Thu, 7 Apr 2022 21:25:50 +0200	[thread overview]
Message-ID: <Yk86vuqcCOZVxgOe@lunn.ch> (raw)
In-Reply-To: <CAC_iWjKttCb-oDk27vb_Ar58qLN8vY_1cFbGtLB+YUMXtTX8nw@mail.gmail.com>

On Thu, Apr 07, 2022 at 09:35:52PM +0300, Ilias Apalodimas wrote:
> Hi Andrew,
> 
> On Thu, 7 Apr 2022 at 21:25, Andrew Lunn <andrew@lunn.ch> wrote:
> >
> > > +static void mvneta_ethtool_pp_stats(struct mvneta_port *pp, u64 *data)
> > > +{
> > > +     struct page_pool_stats stats = {};
> > > +     int i;
> > > +
> > > +     for (i = 0; i < rxq_number; i++) {
> > > +             struct page_pool *page_pool = pp->rxqs[i].page_pool;
> > > +             struct page_pool_stats pp_stats = {};
> > > +
> > > +             if (!page_pool_get_stats(page_pool, &pp_stats))
> > > +                     continue;
> > > +
> > > +             stats.alloc_stats.fast += pp_stats.alloc_stats.fast;
> > > +             stats.alloc_stats.slow += pp_stats.alloc_stats.slow;
> > > +             stats.alloc_stats.slow_high_order +=
> > > +                     pp_stats.alloc_stats.slow_high_order;
> > > +             stats.alloc_stats.empty += pp_stats.alloc_stats.empty;
> > > +             stats.alloc_stats.refill += pp_stats.alloc_stats.refill;
> > > +             stats.alloc_stats.waive += pp_stats.alloc_stats.waive;
> > > +             stats.recycle_stats.cached += pp_stats.recycle_stats.cached;
> > > +             stats.recycle_stats.cache_full +=
> > > +                     pp_stats.recycle_stats.cache_full;
> > > +             stats.recycle_stats.ring += pp_stats.recycle_stats.ring;
> > > +             stats.recycle_stats.ring_full +=
> > > +                     pp_stats.recycle_stats.ring_full;
> > > +             stats.recycle_stats.released_refcnt +=
> > > +                     pp_stats.recycle_stats.released_refcnt;
> >
> > We should be trying to remove this sort of code from the driver, and
> > put it all in the core.  It wants to be something more like:
> >
> >         struct page_pool_stats stats = {};
> >         int i;
> >
> >         for (i = 0; i < rxq_number; i++) {
> >                 struct page_pool *page_pool = pp->rxqs[i].page_pool;
> >
> >                 if (!page_pool_get_stats(page_pool, &stats))
> >                         continue;
> >         }
> >
> >         page_pool_ethtool_stats_get(data, &stats);
> >
> > Let page_pool_get_stats() do the accumulate as it puts values in stats.
> 
> Unless I misunderstand this, I don't think that's doable in page pool.
> That means page pool is aware of what stats to accumulate per driver
> and I certainly don't want anything driver specific to creep in there.
> The driver knows the number of pools it is using and can gather
> them all together.

I agree that the driver knows about the number of pools. For mvneta,
there is one per RX queue, which is this part of my suggestion:

> >         for (i = 0; i < rxq_number; i++) {
> >                 struct page_pool *page_pool = pp->rxqs[i].page_pool;
> >

However, it has no idea about the stats themselves; they are purely a
construct of the page pool. Hence the next part of my suggestion: ask
the page pool for the stats and place them into stats, doing the
accumulation at the same time:

> >                 if (!page_pool_get_stats(page_pool, &stats))
> >                         continue;

And now that we have the accumulated stats, turn them into ethtool format:

> >         page_pool_ethtool_stats_get(data, &stats);

Where do you see any driver knowledge required in either
page_pool_get_stats() or page_pool_ethtool_stats_get()?

      Andrew


Thread overview: 9+ messages
2022-04-07 16:55 [RFC net-next 0/2] Add page_pool_stats ethtool APIs Lorenzo Bianconi
2022-04-07 16:55 ` [RFC net-next 1/2] net: page_pool: introduce ethtool stats Lorenzo Bianconi
2022-04-08  3:30   ` Jakub Kicinski
2022-04-08  8:11     ` Lorenzo Bianconi
2022-04-07 16:55 ` [RFC net-next 2/2] net: mvneta: add support for page_pool_get_stats Lorenzo Bianconi
2022-04-07 18:25   ` Andrew Lunn
2022-04-07 18:35     ` Ilias Apalodimas
2022-04-07 19:25       ` Andrew Lunn [this message]
2022-04-07 20:19         ` Ilias Apalodimas
