netdev.vger.kernel.org archive mirror
* [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
@ 2022-04-05 20:52 Lorenzo Bianconi
  2022-04-06 23:15 ` Joe Damato
  2022-04-09 17:20 ` Joe Damato
  0 siblings, 2 replies; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-04-05 20:52 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, kuba, pabeni, jbrouer, ilias.apalodimas,
	jdamato

Add missing recycle stats to page_pool_put_page_bulk routine.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 net/core/page_pool.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 1943c0f0307d..4af55d28ffa3 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -36,6 +36,12 @@
 		this_cpu_inc(s->__stat);						\
 	} while (0)
 
+#define recycle_stat_add(pool, __stat, val)						\
+	do {										\
+		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;	\
+		this_cpu_add(s->__stat, val);						\
+	} while (0)
+
 bool page_pool_get_stats(struct page_pool *pool,
 			 struct page_pool_stats *stats)
 {
@@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
 #else
 #define alloc_stat_inc(pool, __stat)
 #define recycle_stat_inc(pool, __stat)
+#define recycle_stat_add(pool, __stat, val)
 #endif
 
 static int page_pool_init(struct page_pool *pool,
@@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	/* Bulk producer into ptr_ring page_pool cache */
 	page_pool_ring_lock(pool);
 	for (i = 0; i < bulk_len; i++) {
-		if (__ptr_ring_produce(&pool->ring, data[i]))
-			break; /* ring full */
+		if (__ptr_ring_produce(&pool->ring, data[i])) {
+			/* ring full */
+			recycle_stat_inc(pool, ring_full);
+			break;
+		}
 	}
+	recycle_stat_add(pool, ring, i);
 	page_pool_ring_unlock(pool);
 
 	/* Hopefully all pages was return into ptr_ring */
-- 
2.35.1



* Re: [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
  2022-04-05 20:52 [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk Lorenzo Bianconi
@ 2022-04-06 23:15 ` Joe Damato
  2022-04-07  7:43   ` Lorenzo Bianconi
  2022-04-07 20:14   ` Ilias Apalodimas
  2022-04-09 17:20 ` Joe Damato
  1 sibling, 2 replies; 8+ messages in thread
From: Joe Damato @ 2022-04-06 23:15 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, lorenzo.bianconi, davem, kuba, pabeni, jbrouer,
	ilias.apalodimas

On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> Add missing recycle stats to page_pool_put_page_bulk routine.

Thanks for proposing this change. I did miss this path when adding
stats.

I'm sort of torn on this. It almost seems that we might want to track
bulking events separately as their own stat.
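
For illustration only, here is a minimal sketch of what a separate bulking
stat could look like, assuming the current page_pool_recycle_stats layout;
the "bulk" field and its placement are hypothetical and not part of this
patch or of the existing code:

struct page_pool_recycle_stats {
	u64 cached;
	u64 cache_full;
	u64 ring;
	u64 ring_full;
	u64 released_refcnt;
	u64 bulk;	/* hypothetical: one increment per bulk recycle call */
};

	/* at the top of page_pool_put_page_bulk() */
	recycle_stat_inc(pool, bulk);

The new field would also have to be copied out in page_pool_get_stats()
to become visible to drivers.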

Maybe Ilias has an opinion on this; I did implement the stats, but I'm not
a maintainer of the page_pool so I'm not sure what I think matters all
that much ;) 

> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
>  net/core/page_pool.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 1943c0f0307d..4af55d28ffa3 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -36,6 +36,12 @@
>  		this_cpu_inc(s->__stat);						\
>  	} while (0)
>  
> +#define recycle_stat_add(pool, __stat, val)						\
> +	do {										\
> +		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;	\
> +		this_cpu_add(s->__stat, val);						\
> +	} while (0)
> +
>  bool page_pool_get_stats(struct page_pool *pool,
>  			 struct page_pool_stats *stats)
>  {
> @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
>  #else
>  #define alloc_stat_inc(pool, __stat)
>  #define recycle_stat_inc(pool, __stat)
> +#define recycle_stat_add(pool, __stat, val)
>  #endif
>  
>  static int page_pool_init(struct page_pool *pool,
> @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  	/* Bulk producer into ptr_ring page_pool cache */
>  	page_pool_ring_lock(pool);
>  	for (i = 0; i < bulk_len; i++) {
> -		if (__ptr_ring_produce(&pool->ring, data[i]))
> -			break; /* ring full */
> +		if (__ptr_ring_produce(&pool->ring, data[i])) {
> +			/* ring full */
> +			recycle_stat_inc(pool, ring_full);
> +			break;
> +		}
>  	}
> +	recycle_stat_add(pool, ring, i);

If we do go with this approach (instead of adding bulking-specific stats),
we might want to replicate this change in __page_pool_alloc_pages_slow; we
currently only count the single allocation returned by the slow path, but
the rest of the pages which refilled the cache are not counted.
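
As a rough sketch of what that replication might need, assuming a helper
that mirrors the existing alloc_stat_inc() (which updates the pool-private,
non-percpu alloc_stats); neither the name nor the hunk below is part of
this patch:

/* hypothetical counterpart of recycle_stat_add() for the allocation side */
#define alloc_stat_add(pool, __stat, val)	(pool->alloc_stats.__stat += (val))

/* plus an empty stub next to the other no-op macros when
 * CONFIG_PAGE_POOL_STATS is disabled
 */
#define alloc_stat_add(pool, __stat, val)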

>  	page_pool_ring_unlock(pool);
>  
>  	/* Hopefully all pages was return into ptr_ring */
> -- 
> 2.35.1
> 


* Re: [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
  2022-04-06 23:15 ` Joe Damato
@ 2022-04-07  7:43   ` Lorenzo Bianconi
  2022-04-07 20:14   ` Ilias Apalodimas
  1 sibling, 0 replies; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-04-07  7:43 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, lorenzo.bianconi, davem, kuba, pabeni, jbrouer,
	ilias.apalodimas

> On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> > Add missing recycle stats to page_pool_put_page_bulk routine.
> 
> Thanks for proposing this change. I did miss this path when adding
> stats.
> 
> I'm sort of torn on this. It almost seems that we might want to track
> bulking events separately as their own stat.
> 
> Maybe Ilias has an opinion on this; I did implement the stats, but I'm not
> a maintainer of the page_pool so I'm not sure what I think matters all
> that much ;) 
> 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> >  net/core/page_pool.c | 15 +++++++++++++--
> >  1 file changed, 13 insertions(+), 2 deletions(-)
> > 
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 1943c0f0307d..4af55d28ffa3 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -36,6 +36,12 @@
> >  		this_cpu_inc(s->__stat);						\
> >  	} while (0)
> >  
> > +#define recycle_stat_add(pool, __stat, val)						\
> > +	do {										\
> > +		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;	\
> > +		this_cpu_add(s->__stat, val);						\
> > +	} while (0)
> > +
> >  bool page_pool_get_stats(struct page_pool *pool,
> >  			 struct page_pool_stats *stats)
> >  {
> > @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
> >  #else
> >  #define alloc_stat_inc(pool, __stat)
> >  #define recycle_stat_inc(pool, __stat)
> > +#define recycle_stat_add(pool, __stat, val)
> >  #endif
> >  
> >  static int page_pool_init(struct page_pool *pool,
> > @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> >  	/* Bulk producer into ptr_ring page_pool cache */
> >  	page_pool_ring_lock(pool);
> >  	for (i = 0; i < bulk_len; i++) {
> > -		if (__ptr_ring_produce(&pool->ring, data[i]))
> > -			break; /* ring full */
> > +		if (__ptr_ring_produce(&pool->ring, data[i])) {
> > +			/* ring full */
> > +			recycle_stat_inc(pool, ring_full);
> > +			break;
> > +		}
> >  	}
> > +	recycle_stat_add(pool, ring, i);
> 
> If we do go with this approach (instead of adding bulking-specific stats),
> we might want to replicate this change in __page_pool_alloc_pages_slow; we
> currently only count the single allocation returned by the slow path, but
> the rest of the pages which refilled the cache are not counted.

Hi Joe,

do you mean to add an event like "bulk_ring_refill" and just count one for
it? I guess a "bulk_ring_refill" event is just a ring refill of "n" pages,
so I think it is more meaningful to increment the ring refill counter by "n".
What do you think?

Regards,
Lorenzo

> 
> >  	page_pool_ring_unlock(pool);
> >  
> >  	/* Hopefully all pages was return into ptr_ring */
> > -- 
> > 2.35.1
> > 



* Re: [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
  2022-04-06 23:15 ` Joe Damato
  2022-04-07  7:43   ` Lorenzo Bianconi
@ 2022-04-07 20:14   ` Ilias Apalodimas
  2022-04-09  5:22     ` Joe Damato
  1 sibling, 1 reply; 8+ messages in thread
From: Ilias Apalodimas @ 2022-04-07 20:14 UTC (permalink / raw)
  To: Joe Damato
  Cc: Lorenzo Bianconi, netdev, lorenzo.bianconi, davem, kuba, pabeni,
	jbrouer

Hi Joe,

On Thu, 7 Apr 2022 at 02:15, Joe Damato <jdamato@fastly.com> wrote:
>
> On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> > Add missing recycle stats to page_pool_put_page_bulk routine.
>
> Thanks for proposing this change. I did miss this path when adding
> stats.
>
> I'm sort of torn on this. It almost seems that we might want to track
> bulking events separately as their own stat.
>
> Maybe Ilias has an opinion on this; I did implement the stats, but I'm not
> a maintainer of the page_pool so I'm not sure what I think matters all
> that much ;)

It does.  In fact I think people that actually use the stats for
something have a better understanding on what's useful and what's not.
OTOH page_pool_put_page_bulk() is used on the XDP path for now but it
ends up returning pages on a for loop.  So personally I think we are
fine without it. The page will be either returned to the ptr_ring
cache or be free'd and we account for both of those.

However looking at the code I noticed another issue.
__page_pool_alloc_pages_slow() increments the 'slow' stat by one. But
we are not only allocating a single page in there we allocate nr_pages
and we feed all of them but one to the cache.  So imho here we should
bump the slow counter appropriately.  The next allocations will
probably be served from the cache and they will get their own proper
counters.
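
Purely as an illustration of where such a bump could go, assuming
__page_pool_alloc_pages_slow() still returns the last page it pops from
alloc.cache and currently does a single alloc_stat_inc(pool, slow) there,
and reusing the hypothetical alloc_stat_add() helper sketched earlier in
the thread:

	/* Return last page */
	if (likely(pool->alloc.count > 0)) {
		page = pool->alloc.cache[--pool->alloc.count];
		/* hypothetical: credit all nr_pages obtained from the bulk
		 * page allocator, not only the one handed back to the caller
		 */
		alloc_stat_add(pool, slow, nr_pages);
	} else {
		page = NULL;
	}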

>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> >  net/core/page_pool.c | 15 +++++++++++++--
> >  1 file changed, 13 insertions(+), 2 deletions(-)
> >
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 1943c0f0307d..4af55d28ffa3 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -36,6 +36,12 @@
> >               this_cpu_inc(s->__stat);                                                \
> >       } while (0)
> >
> > +#define recycle_stat_add(pool, __stat, val)                                          \
> > +     do {                                                                            \
> > +             struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;       \
> > +             this_cpu_add(s->__stat, val);                                           \
> > +     } while (0)
> > +
> >  bool page_pool_get_stats(struct page_pool *pool,
> >                        struct page_pool_stats *stats)
> >  {
> > @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
> >  #else
> >  #define alloc_stat_inc(pool, __stat)
> >  #define recycle_stat_inc(pool, __stat)
> > +#define recycle_stat_add(pool, __stat, val)
> >  #endif
> >
> >  static int page_pool_init(struct page_pool *pool,
> > @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> >       /* Bulk producer into ptr_ring page_pool cache */
> >       page_pool_ring_lock(pool);
> >       for (i = 0; i < bulk_len; i++) {
> > -             if (__ptr_ring_produce(&pool->ring, data[i]))
> > -                     break; /* ring full */
> > +             if (__ptr_ring_produce(&pool->ring, data[i])) {
> > +                     /* ring full */
> > +                     recycle_stat_inc(pool, ring_full);
> > +                     break;
> > +             }
> >       }
> > +     recycle_stat_add(pool, ring, i);
>
> If we do go with this approach (instead of adding bulking-specific stats),
> we might want to replicate this change in __page_pool_alloc_pages_slow; we
> currently only count the single allocation returned by the slow path, but
> the rest of the pages which refilled the cache are not counted.

Ah yes we are saying the same thing here

Thanks
/Ilias
>
> >       page_pool_ring_unlock(pool);
> >
> >       /* Hopefully all pages was return into ptr_ring */
> > --
> > 2.35.1
> >


* Re: [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
  2022-04-07 20:14   ` Ilias Apalodimas
@ 2022-04-09  5:22     ` Joe Damato
  2022-04-09 10:22       ` Ilias Apalodimas
  0 siblings, 1 reply; 8+ messages in thread
From: Joe Damato @ 2022-04-09  5:22 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: Lorenzo Bianconi, netdev, lorenzo.bianconi, davem, kuba, pabeni,
	jbrouer

On Thu, Apr 07, 2022 at 11:14:15PM +0300, Ilias Apalodimas wrote:
> Hi Joe,
> 
> On Thu, 7 Apr 2022 at 02:15, Joe Damato <jdamato@fastly.com> wrote:
> >
> > On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> > > Add missing recycle stats to page_pool_put_page_bulk routine.
> >
> > Thanks for proposing this change. I did miss this path when adding
> > stats.
> >
> > I'm sort of torn on this. It almost seems that we might want to track
> > bulking events separately as their own stat.
> >
> > Maybe Ilias has an opinion on this; I did implement the stats, but I'm not
> > a maintainer of the page_pool so I'm not sure what I think matters all
> > that much ;)
> 
> It does.  In fact I think people that actually use the stats for
> something have a better understanding on what's useful and what's not.
> OTOH page_pool_put_page_bulk() is used on the XDP path for now but it
> ends up returning pages on a for loop.  So personally I think we are
> fine without it. The page will be either returned to the ptr_ring
> cache or be free'd and we account for both of those.
> 
> However looking at the code I noticed another issue.
> __page_pool_alloc_pages_slow() increments the 'slow' stat by one. But
> we are not only allocating a single page in there we allocate nr_pages
> and we feed all of them but one to the cache.  So imho here we should
> bump the slow counter appropriately.  The next allocations will
> probably be served from the cache and they will get their own proper
> counters.

After thinking about this a bit more... I'm not sure.

__page_pool_alloc_pages_slow increments slow by 1 because that one page is
returned to the user via the slow path. The side-effect of landing in the
slow path is that nr_pages-1 pages will be fed into the cache... but not
necessarily allocated to the driver.

As you mention, follow up allocations will count them properly as fast path
allocations.

It might be OK as it is. If we add nr_pages to the number of slow allocs
(even though they were never actually allocated as far as the user is
concerned), it may be a bit confusing -- essentially double counting those
allocations as both slow and fast.
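
To make the double-counting concern concrete (illustrative numbers only,
assuming a bulk refill of 64 pages per slow-path call):

	one slow-path call:          1 page to the driver, 63 pages cached
	counting nr_pages as slow:   slow += 64
	later hits on those pages:   fast += 63
	reported allocations:        64 + 63 = 127, for 64 pages actually
	                             handed to the driver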

So, I think Lorenzo's original patch is correct as is and my comment on it
about __page_pool_alloc_pages_slow was wrong.

> >
> > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > > ---
> > >  net/core/page_pool.c | 15 +++++++++++++--
> > >  1 file changed, 13 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > > index 1943c0f0307d..4af55d28ffa3 100644
> > > --- a/net/core/page_pool.c
> > > +++ b/net/core/page_pool.c
> > > @@ -36,6 +36,12 @@
> > >               this_cpu_inc(s->__stat);                                                \
> > >       } while (0)
> > >
> > > +#define recycle_stat_add(pool, __stat, val)                                          \
> > > +     do {                                                                            \
> > > +             struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;       \
> > > +             this_cpu_add(s->__stat, val);                                           \
> > > +     } while (0)
> > > +
> > >  bool page_pool_get_stats(struct page_pool *pool,
> > >                        struct page_pool_stats *stats)
> > >  {
> > > @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
> > >  #else
> > >  #define alloc_stat_inc(pool, __stat)
> > >  #define recycle_stat_inc(pool, __stat)
> > > +#define recycle_stat_add(pool, __stat, val)
> > >  #endif
> > >
> > >  static int page_pool_init(struct page_pool *pool,
> > > @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> > >       /* Bulk producer into ptr_ring page_pool cache */
> > >       page_pool_ring_lock(pool);
> > >       for (i = 0; i < bulk_len; i++) {
> > > -             if (__ptr_ring_produce(&pool->ring, data[i]))
> > > -                     break; /* ring full */
> > > +             if (__ptr_ring_produce(&pool->ring, data[i])) {
> > > +                     /* ring full */
> > > +                     recycle_stat_inc(pool, ring_full);
> > > +                     break;
> > > +             }
> > >       }
> > > +     recycle_stat_add(pool, ring, i);
> >
> > If we do go with this approach (instead of adding bulking-specific stats),
> > we might want to replicate this change in __page_pool_alloc_pages_slow; we
> > currently only count the single allocation returned by the slow path, but
> > the rest of the pages which refilled the cache are not counted.
> 
> Ah yes we are saying the same thing here
> 
> Thanks
> /Ilias
> >
> > >       page_pool_ring_unlock(pool);
> > >
> > >       /* Hopefully all pages was return into ptr_ring */
> > > --
> > > 2.35.1
> > >


* Re: [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
  2022-04-09  5:22     ` Joe Damato
@ 2022-04-09 10:22       ` Ilias Apalodimas
  2022-04-09 17:17         ` Lorenzo Bianconi
  0 siblings, 1 reply; 8+ messages in thread
From: Ilias Apalodimas @ 2022-04-09 10:22 UTC (permalink / raw)
  To: Joe Damato
  Cc: Lorenzo Bianconi, netdev, lorenzo.bianconi, davem, kuba, pabeni,
	jbrouer

Hi Joe,

On Sat, 9 Apr 2022 at 08:22, Joe Damato <jdamato@fastly.com> wrote:
>
> On Thu, Apr 07, 2022 at 11:14:15PM +0300, Ilias Apalodimas wrote:
> > Hi Joe,
> >
> > On Thu, 7 Apr 2022 at 02:15, Joe Damato <jdamato@fastly.com> wrote:
> > >
> > > On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> > > > Add missing recycle stats to page_pool_put_page_bulk routine.
> > >
> > > Thanks for proposing this change. I did miss this path when adding
> > > stats.
> > >
> > > I'm sort of torn on this. It almost seems that we might want to track
> > > bulking events separately as their own stat.
> > >
> > > Maybe Ilias has an opinion on this; I did implement the stats, but I'm not
> > > a maintainer of the page_pool so I'm not sure what I think matters all
> > > that much ;)
> >
> > It does.  In fact I think people that actually use the stats for
> > something have a better understanding on what's useful and what's not.
> > OTOH page_pool_put_page_bulk() is used on the XDP path for now but it
> > ends up returning pages on a for loop.  So personally I think we are
> > fine without it. The page will be either returned to the ptr_ring
> > cache or be free'd and we account for both of those.
> >
> > However looking at the code I noticed another issue.
> > __page_pool_alloc_pages_slow() increments the 'slow' stat by one. But
> > we are not only allocating a single page in there we allocate nr_pages
> > and we feed all of them but one to the cache.  So imho here we should
> > bump the slow counter appropriately.  The next allocations will
> > probably be served from the cache and they will get their own proper
> > counters.
>
> After thinking about this a bit more... I'm not sure.
>
> __page_pool_alloc_pages_slow increments slow by 1 because that one page is
> returned to the user via the slow path. The side-effect of landing in the
> slow path is that nr_pages-1 pages will be fed into the cache... but not
> necessarily allocated to the driver.

Well they are in the cache *because* we allocated them from the slow path.

>
> As you mention, follow up allocations will count them properly as fast path
> allocations.
>
> It might be OK as it is. If we add nr_pages to the number of slow allocs
> (even though they were never actually allocated as far as the user is
> concerned), it may be a bit confusing -- essentially double counting those
> allocations as both slow and fast.

Those allocations didn't magically appear in the fast cache.  At
least once in the lifetime of the driver you allocated some packets.
Shouldn't that be reflected in the stats?  The recycle stats
basically mean "how many of the original slow-path-allocated packets
did I manage to feed from my cache", don't they?

>
> So, I think Lorenzo's original patch is correct as is and my comment on it
> about __page_pool_alloc_pages_slow was wrong.

Me too, I think we need Lorenzo's additions regardless.

Thanks
/Ilias
>
> > >
> > > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > > > ---
> > > >  net/core/page_pool.c | 15 +++++++++++++--
> > > >  1 file changed, 13 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > > > index 1943c0f0307d..4af55d28ffa3 100644
> > > > --- a/net/core/page_pool.c
> > > > +++ b/net/core/page_pool.c
> > > > @@ -36,6 +36,12 @@
> > > >               this_cpu_inc(s->__stat);                                                \
> > > >       } while (0)
> > > >
> > > > +#define recycle_stat_add(pool, __stat, val)                                          \
> > > > +     do {                                                                            \
> > > > +             struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;       \
> > > > +             this_cpu_add(s->__stat, val);                                           \
> > > > +     } while (0)
> > > > +
> > > >  bool page_pool_get_stats(struct page_pool *pool,
> > > >                        struct page_pool_stats *stats)
> > > >  {
> > > > @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
> > > >  #else
> > > >  #define alloc_stat_inc(pool, __stat)
> > > >  #define recycle_stat_inc(pool, __stat)
> > > > +#define recycle_stat_add(pool, __stat, val)
> > > >  #endif
> > > >
> > > >  static int page_pool_init(struct page_pool *pool,
> > > > @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> > > >       /* Bulk producer into ptr_ring page_pool cache */
> > > >       page_pool_ring_lock(pool);
> > > >       for (i = 0; i < bulk_len; i++) {
> > > > -             if (__ptr_ring_produce(&pool->ring, data[i]))
> > > > -                     break; /* ring full */
> > > > +             if (__ptr_ring_produce(&pool->ring, data[i])) {
> > > > +                     /* ring full */
> > > > +                     recycle_stat_inc(pool, ring_full);
> > > > +                     break;
> > > > +             }
> > > >       }
> > > > +     recycle_stat_add(pool, ring, i);
> > >
> > > If we do go with this approach (instead of adding bulking-specific stats),
> > > we might want to replicate this change in __page_pool_alloc_pages_slow; we
> > > currently only count the single allocation returned by the slow path, but
> > > the rest of the pages which refilled the cache are not counted.
> >
> > Ah yes we are saying the same thing here
> >
> > Thanks
> > /Ilias
> > >
> > > >       page_pool_ring_unlock(pool);
> > > >
> > > >       /* Hopefully all pages was return into ptr_ring */
> > > > --
> > > > 2.35.1
> > > >


* Re: [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
  2022-04-09 10:22       ` Ilias Apalodimas
@ 2022-04-09 17:17         ` Lorenzo Bianconi
  0 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-04-09 17:17 UTC (permalink / raw)
  To: davem, kuba, pabeni
  Cc: Joe Damato, Lorenzo Bianconi, netdev, jbrouer, ilias.apalodimas

> Hi Joe,
> 
> On Sat, 9 Apr 2022 at 08:22, Joe Damato <jdamato@fastly.com> wrote:
> >
> > On Thu, Apr 07, 2022 at 11:14:15PM +0300, Ilias Apalodimas wrote:
> > > Hi Joe,
> > >
> > > On Thu, 7 Apr 2022 at 02:15, Joe Damato <jdamato@fastly.com> wrote:
> > > >
> > > > On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> > > > > Add missing recycle stats to page_pool_put_page_bulk routine.
> > > >
> > > > Thanks for proposing this change. I did miss this path when adding
> > > > stats.
> > > >
> > > > I'm sort of torn on this. It almost seems that we might want to track
> > > > bulking events separately as their own stat.
> > > >
> > > > Maybe Ilias has an opinion on this; I did implement the stats, but I'm not
> > > > a maintainer of the page_pool so I'm not sure what I think matters all
> > > > that much ;)
> > >
> > > It does.  In fact I think people that actually use the stats for
> > > something have a better understanding on what's useful and what's not.
> > > OTOH page_pool_put_page_bulk() is used on the XDP path for now but it
> > > ends up returning pages on a for loop.  So personally I think we are
> > > fine without it. The page will be either returned to the ptr_ring
> > > cache or be free'd and we account for both of those.
> > >
> > > However looking at the code I noticed another issue.
> > > __page_pool_alloc_pages_slow() increments the 'slow' stat by one. But
> > > we are not only allocating a single page in there we allocate nr_pages
> > > and we feed all of them but one to the cache.  So imho here we should
> > > bump the slow counter appropriately.  The next allocations will
> > > probably be served from the cache and they will get their own proper
> > > counters.
> >
> > After thinking about this a bit more... I'm not sure.
> >
> > __page_pool_alloc_pages_slow increments slow by 1 because that one page is
> > returned to the user via the slow path. The side-effect of landing in the
> > slow path is that nr_pages-1 pages will be fed into the cache... but not
> > necessarily allocated to the driver.
> 
> Well they are in the cache *because* we allocated them from the slow path.
> 
> >
> > As you mention, follow up allocations will count them properly as fast path
> > allocations.
> >
> > It might be OK as it is. If we add nr_pages to the number of slow allocs
> > (even though they were never actually allocated as far as the user is
> > concerned), it may be a bit confusing -- essentially double counting those
> > allocations as both slow and fast.
> 
> Those allocations didn't magically appear in the fast cache.  At
> least once in the lifetime of the driver you allocated some packets.
> Shouldn't that be reflected in the stats?  The recycle stats
> basically mean "how many of the original slow-path-allocated packets
> did I manage to feed from my cache", don't they?
> 
> >
> > So, I think Lorenzo's original patch is correct as is and my comment on it
> > about __page_pool_alloc_pages_slow was wrong.
> 
> Me too, I think we need Lorenzo's additions regardless.

Hi Dave, Jakub and Paolo,

since we agreed this patch is fine as-is and is not related to the ongoing
discussion, and since the patch is marked as "changes requested" in patchwork,
do I need to repost or is it ok to apply the current version?

Regards,
Lorenzo

> 
> Thanks
> /Ilias
> >
> > > >
> > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > > > > ---
> > > > >  net/core/page_pool.c | 15 +++++++++++++--
> > > > >  1 file changed, 13 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > > > > index 1943c0f0307d..4af55d28ffa3 100644
> > > > > --- a/net/core/page_pool.c
> > > > > +++ b/net/core/page_pool.c
> > > > > @@ -36,6 +36,12 @@
> > > > >               this_cpu_inc(s->__stat);                                                \
> > > > >       } while (0)
> > > > >
> > > > > +#define recycle_stat_add(pool, __stat, val)                                          \
> > > > > +     do {                                                                            \
> > > > > +             struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;       \
> > > > > +             this_cpu_add(s->__stat, val);                                           \
> > > > > +     } while (0)
> > > > > +
> > > > >  bool page_pool_get_stats(struct page_pool *pool,
> > > > >                        struct page_pool_stats *stats)
> > > > >  {
> > > > > @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
> > > > >  #else
> > > > >  #define alloc_stat_inc(pool, __stat)
> > > > >  #define recycle_stat_inc(pool, __stat)
> > > > > +#define recycle_stat_add(pool, __stat, val)
> > > > >  #endif
> > > > >
> > > > >  static int page_pool_init(struct page_pool *pool,
> > > > > @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> > > > >       /* Bulk producer into ptr_ring page_pool cache */
> > > > >       page_pool_ring_lock(pool);
> > > > >       for (i = 0; i < bulk_len; i++) {
> > > > > -             if (__ptr_ring_produce(&pool->ring, data[i]))
> > > > > -                     break; /* ring full */
> > > > > +             if (__ptr_ring_produce(&pool->ring, data[i])) {
> > > > > +                     /* ring full */
> > > > > +                     recycle_stat_inc(pool, ring_full);
> > > > > +                     break;
> > > > > +             }
> > > > >       }
> > > > > +     recycle_stat_add(pool, ring, i);
> > > >
> > > > If we do go with this approach (instead of adding bulking-specific stats),
> > > > we might want to replicate this change in __page_pool_alloc_pages_slow; we
> > > > currently only count the single allocation returned by the slow path, but
> > > > the rest of the pages which refilled the cache are not counted.
> > >
> > > Ah yes we are saying the same thing here
> > >
> > > Thanks
> > > /Ilias
> > > >
> > > > >       page_pool_ring_unlock(pool);
> > > > >
> > > > >       /* Hopefully all pages was return into ptr_ring */
> > > > > --
> > > > > 2.35.1
> > > > >
> 



* Re: [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk
  2022-04-05 20:52 [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk Lorenzo Bianconi
  2022-04-06 23:15 ` Joe Damato
@ 2022-04-09 17:20 ` Joe Damato
  1 sibling, 0 replies; 8+ messages in thread
From: Joe Damato @ 2022-04-09 17:20 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, lorenzo.bianconi, davem, kuba, pabeni, jbrouer,
	ilias.apalodimas

On Tue, Apr 05, 2022 at 10:52:55PM +0200, Lorenzo Bianconi wrote:
> Add missing recycle stats to page_pool_put_page_bulk routine.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
>  net/core/page_pool.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 1943c0f0307d..4af55d28ffa3 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -36,6 +36,12 @@
>  		this_cpu_inc(s->__stat);						\
>  	} while (0)
>  
> +#define recycle_stat_add(pool, __stat, val)						\
> +	do {										\
> +		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats;	\
> +		this_cpu_add(s->__stat, val);						\
> +	} while (0)
> +
>  bool page_pool_get_stats(struct page_pool *pool,
>  			 struct page_pool_stats *stats)
>  {
> @@ -63,6 +69,7 @@ EXPORT_SYMBOL(page_pool_get_stats);
>  #else
>  #define alloc_stat_inc(pool, __stat)
>  #define recycle_stat_inc(pool, __stat)
> +#define recycle_stat_add(pool, __stat, val)
>  #endif
>  
>  static int page_pool_init(struct page_pool *pool,
> @@ -566,9 +573,13 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  	/* Bulk producer into ptr_ring page_pool cache */
>  	page_pool_ring_lock(pool);
>  	for (i = 0; i < bulk_len; i++) {
> -		if (__ptr_ring_produce(&pool->ring, data[i]))
> -			break; /* ring full */
> +		if (__ptr_ring_produce(&pool->ring, data[i])) {
> +			/* ring full */
> +			recycle_stat_inc(pool, ring_full);
> +			break;
> +		}
>  	}
> +	recycle_stat_add(pool, ring, i);
>  	page_pool_ring_unlock(pool);
>  
>  	/* Hopefully all pages was return into ptr_ring */
> -- 
> 2.35.1
> 

Thanks for doing this!

Reviewed-by: Joe Damato <jdamato@fastly.com>


Thread overview: 8+ messages
2022-04-05 20:52 [PATCH net-next] page_pool: Add recycle stats to page_pool_put_page_bulk Lorenzo Bianconi
2022-04-06 23:15 ` Joe Damato
2022-04-07  7:43   ` Lorenzo Bianconi
2022-04-07 20:14   ` Ilias Apalodimas
2022-04-09  5:22     ` Joe Damato
2022-04-09 10:22       ` Ilias Apalodimas
2022-04-09 17:17         ` Lorenzo Bianconi
2022-04-09 17:20 ` Joe Damato
