From: Amit Cohen <amcohen@nvidia.com>
To: <kuba@kernel.org>
Cc: <davem@davemloft.net>, <edumazet@google.com>, <pabeni@redhat.com>,
<hawk@kernel.org>, <idosch@nvidia.com>, <petrm@nvidia.com>,
<mlxsw@nvidia.com>, <netdev@vger.kernel.org>,
Amit Cohen <amcohen@nvidia.com>
Subject: [PATCH RFC net-next 1/4] net: core: page_pool_user: Allow flexibility of 'ifindex' value
Date: Tue, 25 Jun 2024 15:08:04 +0300
Message-ID: <20240625120807.1165581-2-amcohen@nvidia.com>
In-Reply-To: <20240625120807.1165581-1-amcohen@nvidia.com>

The netlink message for a page pool query includes an 'ifindex'. Currently,
this value is always set to 'pool->slow.netdev->ifindex', so responses are
only meaningful for page pools which hold a pointer to a real netdevice.
When a driver does not have a 1:1 mapping between page pool and netdevice,
'pool->slow.netdev->ifindex' does not refer to a real netdevice, which means
that such drivers cannot query page pool info and statistics.

The functions page_pool_nl_stats_fill() and page_pool_nl_fill() take a page
pool structure and use the 'ifindex' stored in the pool to fill the netlink
message. Instead, let the callers decide which 'ifindex' should be used. For
now, all callers pass 'pool->slow.netdev->ifindex', so there is no behavior
change. The next patch will change the dump behavior.

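For illustration, the changed callback signature and a representative call
site (a simplified excerpt of the diff below; no new symbols are introduced):

	typedef int (*pp_nl_fill_cb)(struct sk_buff *rsp,
				     const struct page_pool *pool,
				     const struct genl_info *info, int ifindex);

	/* Callers keep the current behavior by passing the pool's ifindex. */
	err = fill(rsp, pool, info, pool->slow.netdev->ifindex);
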
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
---
net/core/page_pool_user.c | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 3a3277ba167b..44948f7b9d68 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -30,7 +30,7 @@ static DEFINE_MUTEX(page_pools_lock);
*/
typedef int (*pp_nl_fill_cb)(struct sk_buff *rsp, const struct page_pool *pool,
- const struct genl_info *info);
+ const struct genl_info *info, int ifindex);
static int
netdev_nl_page_pool_get_do(struct genl_info *info, u32 id, pp_nl_fill_cb fill)
@@ -53,7 +53,7 @@ netdev_nl_page_pool_get_do(struct genl_info *info, u32 id, pp_nl_fill_cb fill)
goto err_unlock;
}
- err = fill(rsp, pool, info);
+ err = fill(rsp, pool, info, pool->slow.netdev->ifindex);
if (err)
goto err_free_msg;
@@ -92,7 +92,7 @@ netdev_nl_page_pool_get_dump(struct sk_buff *skb, struct netlink_callback *cb,
continue;
state->pp_id = pool->user.id;
- err = fill(skb, pool, info);
+ err = fill(skb, pool, info, pool->slow.netdev->ifindex);
if (err)
goto out;
}
@@ -108,7 +108,7 @@ netdev_nl_page_pool_get_dump(struct sk_buff *skb, struct netlink_callback *cb,
static int
page_pool_nl_stats_fill(struct sk_buff *rsp, const struct page_pool *pool,
- const struct genl_info *info)
+ const struct genl_info *info, int ifindex)
{
#ifdef CONFIG_PAGE_POOL_STATS
struct page_pool_stats stats = {};
@@ -125,9 +125,8 @@ page_pool_nl_stats_fill(struct sk_buff *rsp, const struct page_pool *pool,
nest = nla_nest_start(rsp, NETDEV_A_PAGE_POOL_STATS_INFO);
if (nla_put_uint(rsp, NETDEV_A_PAGE_POOL_ID, pool->user.id) ||
- (pool->slow.netdev->ifindex != LOOPBACK_IFINDEX &&
- nla_put_u32(rsp, NETDEV_A_PAGE_POOL_IFINDEX,
- pool->slow.netdev->ifindex)))
+ (ifindex != LOOPBACK_IFINDEX &&
+ nla_put_u32(rsp, NETDEV_A_PAGE_POOL_IFINDEX, ifindex)))
goto err_cancel_nest;
nla_nest_end(rsp, nest);
@@ -210,7 +209,7 @@ int netdev_nl_page_pool_stats_get_dumpit(struct sk_buff *skb,
static int
page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
- const struct genl_info *info)
+ const struct genl_info *info, int ifindex)
{
size_t inflight, refsz;
void *hdr;
@@ -222,9 +221,8 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
if (nla_put_uint(rsp, NETDEV_A_PAGE_POOL_ID, pool->user.id))
goto err_cancel;
- if (pool->slow.netdev->ifindex != LOOPBACK_IFINDEX &&
- nla_put_u32(rsp, NETDEV_A_PAGE_POOL_IFINDEX,
- pool->slow.netdev->ifindex))
+ if (ifindex != LOOPBACK_IFINDEX &&
+ nla_put_u32(rsp, NETDEV_A_PAGE_POOL_IFINDEX, ifindex))
goto err_cancel;
if (pool->user.napi_id &&
nla_put_uint(rsp, NETDEV_A_PAGE_POOL_NAPI_ID, pool->user.napi_id))
@@ -271,7 +269,7 @@ static void netdev_nl_page_pool_event(const struct page_pool *pool, u32 cmd)
if (!ntf)
return;
- if (page_pool_nl_fill(ntf, pool, &info)) {
+ if (page_pool_nl_fill(ntf, pool, &info, pool->slow.netdev->ifindex)) {
nlmsg_free(ntf);
return;
}
--
2.45.1
Thread overview: 7+ messages
2024-06-25 12:08 [PATCH RFC net-next 0/4] Adjust page pool netlink filling to non common case Amit Cohen
2024-06-25 12:08 ` Amit Cohen [this message]
2024-06-25 12:08 ` [PATCH RFC net-next 2/4] net: core: page_pool_user: Change 'ifindex' for page pool dump Amit Cohen
2024-06-25 12:08 ` [PATCH RFC net-next 3/4] mlxsw: pci: Allow get page pool info/stats via netlink Amit Cohen
2024-06-25 12:08 ` [PATCH RFC net-next 4/4] mlxsw: Set page pools list for netdevices Amit Cohen
2024-06-25 14:35 ` [PATCH RFC net-next 0/4] Adjust page pool netlink filling to non common case Jakub Kicinski
2024-06-25 15:37 ` Amit Cohen