From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: Paul Menzel <pmenzel@molgen.mpg.de>,
Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Larysa Zaremba <larysa.zaremba@intel.com>,
netdev@vger.kernel.org, Alexander Duyck <alexanderduyck@fb.com>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
linux-kernel@vger.kernel.org,
Alexander Lobakin <aleksander.lobakin@intel.com>,
Yunsheng Lin <linyunsheng@huawei.com>,
Michal Kubiak <michal.kubiak@intel.com>,
intel-wired-lan@lists.osuosl.org,
David Christensen <drc@linux.vnet.ibm.com>
Subject: [Intel-wired-lan] [PATCH net-next v5 13/14] libie: add per-queue Page Pool stats
Date: Fri, 24 Nov 2023 16:47:31 +0100
Message-ID: <20231124154732.1623518-14-aleksander.lobakin@intel.com>
In-Reply-To: <20231124154732.1623518-1-aleksander.lobakin@intel.com>

Expand the libie generic per-queue stats with the generic Page Pool
stats provided by the API itself when CONFIG_PAGE_POOL_STATS is
enabled. When it's not, there are no such fields in the stats
structure, so no space is wasted.

They are also a bit special in terms of how they are obtained. A
&page_pool accumulates statistics until it's destroyed, which happens
on ifdown. So, in order to not lose any statistics, get the stats and
store them in the queue container before destroying the pool. This
container survives ifups/downs, so it effectively stores the
statistics accumulated since the very first pool was allocated on this
queue. When the stats need to be exported, first take the numbers from
this container and then add the "live" numbers -- the ones that the
current active pool returns. The resulting values always represent
the actual device-lifetime stats.

There's a cast from &page_pool_stats to `u64 *` in a couple of
functions, but it is guarded with static asserts to make sure it's
safe to do. FWIW, it saves a lot of object code.
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
drivers/net/ethernet/intel/libie/internal.h | 20 ++++++
drivers/net/ethernet/intel/libie/rx.c | 9 +++
drivers/net/ethernet/intel/libie/stats.c | 68 +++++++++++++++++++++
include/linux/net/intel/libie/stats.h | 34 ++++++++++-
4 files changed, 130 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/intel/libie/internal.h
diff --git a/drivers/net/ethernet/intel/libie/internal.h b/drivers/net/ethernet/intel/libie/internal.h
new file mode 100644
index 000000000000..13bb0a89f59e
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/internal.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* libie internal declarations not to be used in the drivers.
+ *
+ * Copyright(c) 2023 Intel Corporation.
+ */
+
+#ifndef __LIBIE_INTERNAL_H
+#define __LIBIE_INTERNAL_H
+
+struct libie_rx_queue;
+
+#ifdef CONFIG_PAGE_POOL_STATS
+void libie_rq_stats_sync_pp(const struct libie_rx_queue *rq);
+#else
+static inline void libie_rq_stats_sync_pp(const struct libie_rx_queue *rq)
+{
+}
+#endif
+
+#endif /* __LIBIE_INTERNAL_H */
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
index 520a269f7d31..fcc5c3c44645 100644
--- a/drivers/net/ethernet/intel/libie/rx.c
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -3,6 +3,8 @@
#include <linux/net/intel/libie/rx.h>
+#include "internal.h"
+
/* Rx buffer management */
/**
@@ -64,9 +66,16 @@ EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
/**
* libie_rx_page_pool_destroy - destroy a &page_pool created by libie
* @rq: receive queue to process
+ *
+ * As the stats usually have the same lifetime as the device, while the PP is
+ * usually created/destroyed on ifup/ifdown, the PP stats need to be added to
+ * the driver stats container before the pool is destroyed in order to not
+ * lose the stats accumulated during the last ifup.
*/
void libie_rx_page_pool_destroy(struct libie_rx_queue *rq)
{
+ libie_rq_stats_sync_pp(rq);
+
page_pool_destroy(rq->pp);
rq->pp = NULL;
}
diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
index bdcbe4304c55..9c4ef237af08 100644
--- a/drivers/net/ethernet/intel/libie/stats.c
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -6,6 +6,8 @@
#include <linux/net/intel/libie/rx.h>
#include <linux/net/intel/libie/stats.h>
+#include "internal.h"
+
/* Rx per-queue stats */
static const char * const libie_rq_stats_str[] = {
@@ -16,6 +18,70 @@ static const char * const libie_rq_stats_str[] = {
#define LIBIE_RQ_STATS_NUM ARRAY_SIZE(libie_rq_stats_str)
+#ifdef CONFIG_PAGE_POOL_STATS
+/**
+ * libie_rq_stats_get_pp - get the current stats from a &page_pool
+ * @sarr: local array to add stats to
+ * @pool: pool to get the stats from
+ *
+ * Adds the current "live" stats from an online PP to the stats read from
+ * the RQ container, so that the actual totals will be returned.
+ */
+static void libie_rq_stats_get_pp(u64 *sarr, const struct page_pool *pool)
+{
+ struct page_pool_stats *pps;
+ /* Used only to calculate pos below */
+ struct libie_rq_stats tmp;
+ u32 pos;
+
+ /* Validate the libie PP stats array can be cast <-> PP struct */
+ static_assert(sizeof(tmp.pp) == sizeof(*pps));
+
+ if (!pool)
+ return;
+
+ /* Position of the first Page Pool stats field */
+ pos = (u64_stats_t *)&tmp.pp - tmp.raw;
+ pps = (typeof(pps))&sarr[pos];
+
+ page_pool_get_stats(pool, pps);
+}
+
+/**
+ * libie_rq_stats_sync_pp - add the current PP stats to the RQ stats container
+ * @rq: Rx queue to synchronize
+ *
+ * Called by libie_rx_page_pool_destroy() to save the stats before destroying
+ * the pool.
+ */
+void libie_rq_stats_sync_pp(const struct libie_rx_queue *rq)
+{
+ struct libie_rq_stats *stats = rq->stats;
+ struct page_pool_stats pps = { };
+ u64 *sarr = (u64 *)&pps;
+ u64_stats_t *qarr;
+
+ if (!stats)
+ return;
+
+ qarr = (u64_stats_t *)&stats->pp;
+ page_pool_get_stats(rq->pp, &pps);
+
+ u64_stats_update_begin(&stats->syncp);
+
+ for (u32 i = 0; i < sizeof(pps) / sizeof(*sarr); i++)
+ u64_stats_add(&qarr[i], sarr[i]);
+
+ u64_stats_update_end(&stats->syncp);
+}
+#else
+static void libie_rq_stats_get_pp(u64 *sarr, const struct page_pool *pool)
+{
+}
+
+/* static inline void libie_rq_stats_sync_pp() is declared in "internal.h" */
+#endif
+
/**
* libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
*
@@ -57,6 +123,8 @@ void libie_rq_stats_get_data(u64 **data, const struct libie_rx_queue *rq)
sarr[i] = u64_stats_read(&stats->raw[i]);
} while (u64_stats_fetch_retry(&stats->syncp, start));
+ libie_rq_stats_get_pp(sarr, rq->pp);
+
for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
(*data)[i] += sarr[i];
diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
index 4e6dfb8c715f..f913968d7516 100644
--- a/include/linux/net/intel/libie/stats.h
+++ b/include/linux/net/intel/libie/stats.h
@@ -49,6 +49,17 @@
* fragments: number of processed descriptors carrying only a fragment
* alloc_page_fail: number of Rx page allocation fails
* build_skb_fail: number of build_skb() fails
+ * pp_alloc_fast: pages taken from the cache or ring
+ * pp_alloc_slow: actual page allocations
+ * pp_alloc_slow_ho: non-order-0 page allocations
+ * pp_alloc_empty: number of times the pool was empty
+ * pp_alloc_refill: number of cache refills
+ * pp_alloc_waive: NUMA node mismatches during recycling
+ * pp_recycle_cached: direct recyclings into the cache
+ * pp_recycle_cache_full: number of times the cache was full
+ * pp_recycle_ring: recyclings into the ring
+ * pp_recycle_ring_full: number of times the ring was full
+ * pp_recycle_released_ref: pages released due to elevated refcnt
*/
#define DECLARE_LIBIE_RQ_NAPI_STATS(act) \
@@ -60,9 +71,27 @@
act(alloc_page_fail) \
act(build_skb_fail)
+#ifdef CONFIG_PAGE_POOL_STATS
+#define DECLARE_LIBIE_RQ_PP_STATS(act) \
+ act(pp_alloc_fast) \
+ act(pp_alloc_slow) \
+ act(pp_alloc_slow_ho) \
+ act(pp_alloc_empty) \
+ act(pp_alloc_refill) \
+ act(pp_alloc_waive) \
+ act(pp_recycle_cached) \
+ act(pp_recycle_cache_full) \
+ act(pp_recycle_ring) \
+ act(pp_recycle_ring_full) \
+ act(pp_recycle_released_ref)
+#else
+#define DECLARE_LIBIE_RQ_PP_STATS(act)
+#endif
+
#define DECLARE_LIBIE_RQ_STATS(act) \
DECLARE_LIBIE_RQ_NAPI_STATS(act) \
- DECLARE_LIBIE_RQ_FAIL_STATS(act)
+ DECLARE_LIBIE_RQ_FAIL_STATS(act) \
+ DECLARE_LIBIE_RQ_PP_STATS(act)
struct libie_rx_queue;
@@ -74,6 +103,9 @@ struct libie_rq_stats {
#define act(s) u64_stats_t s;
DECLARE_LIBIE_RQ_NAPI_STATS(act);
DECLARE_LIBIE_RQ_FAIL_STATS(act);
+ struct_group(pp,
+ DECLARE_LIBIE_RQ_PP_STATS(act);
+ );
#undef act
};
DECLARE_FLEX_ARRAY(u64_stats_t, raw);
--
2.42.0