* [PATCH v2 iproute2-next 1/4] rdma: Update headers
2026-03-30 17:31 [PATCH v2 iproute2-next 0/4] Introduce FRMR pools Chiara Meiohas
@ 2026-03-30 17:31 ` Chiara Meiohas
2026-03-30 17:31 ` [PATCH v2 iproute2-next 2/4] rdma: Add resource FRMR pools show command Chiara Meiohas
` (4 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Chiara Meiohas @ 2026-03-30 17:31 UTC (permalink / raw)
To: leon, dsahern, stephen
Cc: michaelgur, jgg, linux-rdma, netdev, Patrisious Haddad,
Chiara Meiohas
From: Michael Guralnik <michaelgur@nvidia.com>
Update rdma_netlink.h file up to kernel commit dbd0472fd7a5
("RDMA/nldev: Expose kernel-internal FRMR pools in netlink")
Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
Reviewed-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Chiara Meiohas <cmeiohas@nvidia.com>
---
rdma/include/uapi/rdma/rdma_netlink.h | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/rdma/include/uapi/rdma/rdma_netlink.h b/rdma/include/uapi/rdma/rdma_netlink.h
index ec8c19ca..8709e558 100644
--- a/rdma/include/uapi/rdma/rdma_netlink.h
+++ b/rdma/include/uapi/rdma/rdma_netlink.h
@@ -308,6 +308,10 @@ enum rdma_nldev_command {
RDMA_NLDEV_CMD_MONITOR,
+ RDMA_NLDEV_CMD_RES_FRMR_POOLS_GET, /* can dump */
+
+ RDMA_NLDEV_CMD_RES_FRMR_POOLS_SET,
+
RDMA_NLDEV_NUM_OPS
};
@@ -582,6 +586,24 @@ enum rdma_nldev_attr {
RDMA_NLDEV_SYS_ATTR_MONITOR_MODE, /* u8 */
RDMA_NLDEV_ATTR_STAT_OPCOUNTER_ENABLED, /* u8 */
+
+ /*
+ * FRMR Pools attributes
+ */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOLS, /* nested table */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_ENTRY, /* nested table */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY, /* nested table */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ATS, /* u8 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ACCESS_FLAGS, /* u32 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_VENDOR_KEY, /* u64 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_NUM_DMA_BLOCKS, /* u64 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_QUEUE_HANDLES, /* u32 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_MAX_IN_USE, /* u64 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_IN_USE, /* u64 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_AGING_PERIOD, /* u32 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_PINNED, /* u32 */
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_KERNEL_VENDOR_KEY, /* u64 */
+
/*
* Always the end
*/
--
2.38.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v2 iproute2-next 2/4] rdma: Add resource FRMR pools show command
2026-03-30 17:31 [PATCH v2 iproute2-next 0/4] Introduce FRMR pools Chiara Meiohas
2026-03-30 17:31 ` [PATCH v2 iproute2-next 1/4] rdma: Update headers Chiara Meiohas
@ 2026-03-30 17:31 ` Chiara Meiohas
2026-03-30 17:31 ` [PATCH v2 iproute2-next 3/4] rdma: Add FRMR pools set aging command Chiara Meiohas
` (3 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Chiara Meiohas @ 2026-03-30 17:31 UTC (permalink / raw)
To: leon, dsahern, stephen
Cc: michaelgur, jgg, linux-rdma, netdev, Patrisious Haddad,
Chiara Meiohas
From: Michael Guralnik <michaelgur@nvidia.com>
Allow users to see the FRMR pools that were created on the devices,
their properties and their usage statistics.
The set of properties of each pool is encoded as a colon-separated
list of hexadecimal fields (vendor_key:num_dma_blocks:access_flags:ats)
to simplify referencing a specific pool in 'set' commands.
Sample output:
$ rdma resource show frmr_pools
dev rocep8s0f0 key 0:1000:0:0 queue 0 in_use 0 max_in_use 200
dev rocep8s0f0 key 0:800:0:0 queue 0 in_use 0 max_in_use 200
dev rocep8s0f0 key 0:400:0:0 queue 0 in_use 0 max_in_use 200
$ rdma resource show frmr_pools -d
dev rocep8s0f0 key 0:1000:0:0 ats 0 access_flags 0 vendor_key 0 num_dma_blocks 4096 queue 0 in_use 0 max_in_use 200
dev rocep8s0f0 key 0:800:0:0 ats 0 access_flags 0 vendor_key 0 num_dma_blocks 2048 queue 0 in_use 0 max_in_use 200
dev rocep8s0f0 key 0:400:0:0 ats 0 access_flags 0 vendor_key 0 num_dma_blocks 1024 queue 0 in_use 0 max_in_use 200
$ rdma resource show frmr_pools num_dma_blocks 2048
dev rocep8s0f0 key 0:800:0:0 queue 0 in_use 0 max_in_use 200
Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
Reviewed-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Chiara Meiohas <cmeiohas@nvidia.com>
---
man/man8/rdma-resource.8 | 8 +-
rdma/Makefile | 2 +-
rdma/res-frmr-pools.c | 174 +++++++++++++++++++++++++++++++++++++++
rdma/res.c | 5 +-
rdma/res.h | 18 ++++
5 files changed, 204 insertions(+), 3 deletions(-)
create mode 100644 rdma/res-frmr-pools.c
diff --git a/man/man8/rdma-resource.8 b/man/man8/rdma-resource.8
index 61bec471..4e2ba39a 100644
--- a/man/man8/rdma-resource.8
+++ b/man/man8/rdma-resource.8
@@ -13,7 +13,8 @@ rdma-resource \- rdma resource configuration
.ti -8
.IR RESOURCE " := { "
-.BR cm_id " | " cq " | " mr " | " pd " | " qp " | " ctx " | " srq " }"
+.BR cm_id " | " cq " | " mr " | " pd " | " qp " | " ctx " | " srq " | "
+.BR frmr_pools " }"
.sp
.ti -8
@@ -113,6 +114,11 @@ rdma resource show srq lqpn 5-7
Show SRQs that the QPs with lqpn 5-7 are associated with.
.RE
.PP
+rdma resource show frmr_pools ats 1
+.RS
+Show FRMR pools that have the ats attribute set.
+.RE
+.PP
.SH SEE ALSO
.BR rdma (8),
diff --git a/rdma/Makefile b/rdma/Makefile
index ed3c1c1c..66fe53f9 100644
--- a/rdma/Makefile
+++ b/rdma/Makefile
@@ -5,7 +5,7 @@ CFLAGS += -I./include/uapi/
RDMA_OBJ = rdma.o utils.o dev.o link.o res.o res-pd.o res-mr.o res-cq.o \
res-cmid.o res-qp.o sys.o stat.o stat-mr.o res-ctx.o res-srq.o \
- monitor.o
+ monitor.o res-frmr-pools.o
TARGETS += rdma
diff --git a/rdma/res-frmr-pools.c b/rdma/res-frmr-pools.c
new file mode 100644
index 00000000..7d99a728
--- /dev/null
+++ b/rdma/res-frmr-pools.c
@@ -0,0 +1,174 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * res-frmr-pools.c RDMA tool
+ * Authors: Michael Guralnik <michaelgur@nvidia.com>
+ */
+
+#include "res.h"
+#include <inttypes.h>
+
+struct frmr_pool_key {
+ uint64_t vendor_key;
+ uint64_t num_dma_blocks;
+ uint32_t access_flags;
+ uint8_t ats;
+};
+
+/* vendor_key(16) + ':' + num_dma_blocks(16) + ':' + access_flags(8) + ':' + ats(1) + '\0' */
+#define FRMR_POOL_KEY_MAX_LEN 45
+
+static int res_frmr_pools_line(struct rd *rd, const char *name, int idx,
+ struct nlattr **nla_line)
+{
+ uint64_t in_use = 0, max_in_use = 0, kernel_vendor_key = 0;
+ struct nlattr *key_tb[RDMA_NLDEV_ATTR_MAX] = {};
+ char key_str[FRMR_POOL_KEY_MAX_LEN];
+ struct frmr_pool_key key = { 0 };
+ uint32_t queue_handles = 0;
+
+ if (nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY]) {
+ if (mnl_attr_parse_nested(
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY],
+ rd_attr_cb, key_tb) != MNL_CB_OK)
+ return MNL_CB_ERROR;
+
+ if (key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ATS])
+ key.ats = mnl_attr_get_u8(
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ATS]);
+ if (key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ACCESS_FLAGS])
+ key.access_flags = mnl_attr_get_u32(
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ACCESS_FLAGS]);
+ if (key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_VENDOR_KEY])
+ key.vendor_key = mnl_attr_get_u64(
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_VENDOR_KEY]);
+ if (key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_NUM_DMA_BLOCKS])
+ key.num_dma_blocks = mnl_attr_get_u64(
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_NUM_DMA_BLOCKS]);
+ if (key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_KERNEL_VENDOR_KEY])
+ kernel_vendor_key = mnl_attr_get_u64(
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_KERNEL_VENDOR_KEY]);
+
+ if (rd_is_filtered_attr(
+ rd, "ats", key.ats,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ATS]))
+ goto out;
+
+ if (rd_is_filtered_attr(
+ rd, "access_flags", key.access_flags,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ACCESS_FLAGS]))
+ goto out;
+
+ if (rd_is_filtered_attr(
+ rd, "vendor_key", key.vendor_key,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_VENDOR_KEY]))
+ goto out;
+
+ if (rd_is_filtered_attr(
+ rd, "num_dma_blocks", key.num_dma_blocks,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_NUM_DMA_BLOCKS]))
+ goto out;
+ }
+
+ if (nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_QUEUE_HANDLES])
+ queue_handles = mnl_attr_get_u32(
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_QUEUE_HANDLES]);
+ if (rd_is_filtered_attr(
+ rd, "queue", queue_handles,
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_QUEUE_HANDLES]))
+ goto out;
+
+ if (nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_IN_USE])
+ in_use = mnl_attr_get_u64(
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_IN_USE]);
+ if (rd_is_filtered_attr(rd, "in_use", in_use,
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_IN_USE]))
+ goto out;
+
+ if (nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_MAX_IN_USE])
+ max_in_use = mnl_attr_get_u64(
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_MAX_IN_USE]);
+ if (rd_is_filtered_attr(
+ rd, "max_in_use", max_in_use,
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_MAX_IN_USE]))
+ goto out;
+
+ open_json_object(NULL);
+ print_dev(idx, name);
+
+ if (nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY]) {
+ snprintf(key_str, sizeof(key_str),
+ "%" PRIx64 ":%" PRIx64 ":%x:%s",
+ key.vendor_key, key.num_dma_blocks,
+ key.access_flags, key.ats ? "1" : "0");
+ print_string(PRINT_ANY, "key", "key %s ", key_str);
+
+ if (rd->show_details) {
+ res_print_u32(
+ "ats", key.ats,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ATS]);
+ res_print_u32(
+ "access_flags", key.access_flags,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ACCESS_FLAGS]);
+ res_print_u64(
+ "vendor_key", key.vendor_key,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_VENDOR_KEY]);
+ res_print_u64(
+ "num_dma_blocks", key.num_dma_blocks,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_NUM_DMA_BLOCKS]);
+ res_print_u64(
+ "kernel_vendor_key", kernel_vendor_key,
+ key_tb[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_KERNEL_VENDOR_KEY]);
+ }
+ }
+
+ res_print_u32("queue", queue_handles,
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_QUEUE_HANDLES]);
+ res_print_u64("in_use", in_use,
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_IN_USE]);
+ res_print_u64("max_in_use", max_in_use,
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_MAX_IN_USE]);
+
+ print_driver_table(rd, nla_line[RDMA_NLDEV_ATTR_DRIVER]);
+ close_json_object();
+ newline();
+
+out:
+ return MNL_CB_OK;
+}
+
+int res_frmr_pools_idx_parse_cb(const struct nlmsghdr *nlh, void *data)
+{
+ return MNL_CB_OK;
+}
+
+int res_frmr_pools_parse_cb(const struct nlmsghdr *nlh, void *data)
+{
+ struct nlattr *tb[RDMA_NLDEV_ATTR_MAX] = {};
+ struct nlattr *nla_table, *nla_entry;
+ struct rd *rd = data;
+ int ret = MNL_CB_OK;
+ const char *name;
+ uint32_t idx;
+
+ mnl_attr_parse(nlh, 0, rd_attr_cb, tb);
+ if (!tb[RDMA_NLDEV_ATTR_DEV_INDEX] || !tb[RDMA_NLDEV_ATTR_DEV_NAME] ||
+ !tb[RDMA_NLDEV_ATTR_RES_FRMR_POOLS])
+ return MNL_CB_ERROR;
+
+ name = mnl_attr_get_str(tb[RDMA_NLDEV_ATTR_DEV_NAME]);
+ idx = mnl_attr_get_u32(tb[RDMA_NLDEV_ATTR_DEV_INDEX]);
+ nla_table = tb[RDMA_NLDEV_ATTR_RES_FRMR_POOLS];
+
+ mnl_attr_for_each_nested(nla_entry, nla_table) {
+ struct nlattr *nla_line[RDMA_NLDEV_ATTR_MAX] = {};
+
+ ret = mnl_attr_parse_nested(nla_entry, rd_attr_cb, nla_line);
+ if (ret != MNL_CB_OK)
+ break;
+
+ ret = res_frmr_pools_line(rd, name, idx, nla_line);
+ if (ret != MNL_CB_OK)
+ break;
+ }
+ return ret;
+}
diff --git a/rdma/res.c b/rdma/res.c
index 7e7de042..f1f13d74 100644
--- a/rdma/res.c
+++ b/rdma/res.c
@@ -11,7 +11,7 @@ static int res_help(struct rd *rd)
{
pr_out("Usage: %s resource\n", rd->filename);
pr_out(" resource show [DEV]\n");
- pr_out(" resource show [qp|cm_id|pd|mr|cq|ctx|srq]\n");
+ pr_out(" resource show [qp|cm_id|pd|mr|cq|ctx|srq|frmr_pools]\n");
pr_out(" resource show qp link [DEV/PORT]\n");
pr_out(" resource show qp link [DEV/PORT] [FILTER-NAME FILTER-VALUE]\n");
pr_out(" resource show cm_id link [DEV/PORT]\n");
@@ -26,6 +26,8 @@ static int res_help(struct rd *rd)
pr_out(" resource show ctx dev [DEV] [FILTER-NAME FILTER-VALUE]\n");
pr_out(" resource show srq dev [DEV]\n");
pr_out(" resource show srq dev [DEV] [FILTER-NAME FILTER-VALUE]\n");
+ pr_out(" resource show frmr_pools dev [DEV]\n");
+ pr_out(" resource show frmr_pools dev [DEV] [FILTER-NAME FILTER-VALUE]\n");
return 0;
}
@@ -237,6 +239,7 @@ static int res_show(struct rd *rd)
{ "pd", res_pd },
{ "ctx", res_ctx },
{ "srq", res_srq },
+ { "frmr_pools", res_frmr_pools },
{ 0 }
};
diff --git a/rdma/res.h b/rdma/res.h
index fd09ce7d..30edb8f8 100644
--- a/rdma/res.h
+++ b/rdma/res.h
@@ -26,6 +26,8 @@ int res_ctx_parse_cb(const struct nlmsghdr *nlh, void *data);
int res_ctx_idx_parse_cb(const struct nlmsghdr *nlh, void *data);
int res_srq_parse_cb(const struct nlmsghdr *nlh, void *data);
int res_srq_idx_parse_cb(const struct nlmsghdr *nlh, void *data);
+int res_frmr_pools_parse_cb(const struct nlmsghdr *nlh, void *data);
+int res_frmr_pools_idx_parse_cb(const struct nlmsghdr *nlh, void *data);
static inline uint32_t res_get_command(uint32_t command, struct rd *rd)
{
@@ -185,6 +187,22 @@ struct filters srq_valid_filters[MAX_NUMBER_OF_FILTERS] = {
RES_FUNC(res_srq, RDMA_NLDEV_CMD_RES_SRQ_GET, srq_valid_filters, true,
RDMA_NLDEV_ATTR_RES_SRQN);
+
+static const
+struct filters frmr_pools_valid_filters[MAX_NUMBER_OF_FILTERS] = {
+ { .name = "dev", .is_number = false },
+ { .name = "ats", .is_number = true },
+ { .name = "access_flags", .is_number = true },
+ { .name = "vendor_key", .is_number = true },
+ { .name = "num_dma_blocks", .is_number = true },
+ { .name = "queue", .is_number = true },
+ { .name = "in_use", .is_number = true },
+ { .name = "max_in_use", .is_number = true },
+};
+
+RES_FUNC(res_frmr_pools, RDMA_NLDEV_CMD_RES_FRMR_POOLS_GET,
+ frmr_pools_valid_filters, true, 0);
+
void print_dev(uint32_t idx, const char *name);
void print_link(uint32_t idx, const char *name, uint32_t port, struct nlattr **nla_line);
void print_key(const char *name, uint64_t val, struct nlattr *nlattr);
--
2.38.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v2 iproute2-next 3/4] rdma: Add FRMR pools set aging command
2026-03-30 17:31 [PATCH v2 iproute2-next 0/4] Introduce FRMR pools Chiara Meiohas
2026-03-30 17:31 ` [PATCH v2 iproute2-next 1/4] rdma: Update headers Chiara Meiohas
2026-03-30 17:31 ` [PATCH v2 iproute2-next 2/4] rdma: Add resource FRMR pools show command Chiara Meiohas
@ 2026-03-30 17:31 ` Chiara Meiohas
2026-03-30 17:31 ` [PATCH v2 iproute2-next 4/4] rdma: Add FRMR pools set pinned command Chiara Meiohas
` (2 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Chiara Meiohas @ 2026-03-30 17:31 UTC (permalink / raw)
To: leon, dsahern, stephen
Cc: michaelgur, jgg, linux-rdma, netdev, Patrisious Haddad,
Chiara Meiohas
From: Michael Guralnik <michaelgur@nvidia.com>
Add support for configuring the aging period of FRMR pools.
The aging mechanism frees FRMR handles that have not been
in use for the specified period.
Usage:
rdma resource set frmr_pools dev DEV aging AGING_PERIOD
Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
Reviewed-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Chiara Meiohas <cmeiohas@nvidia.com>
---
man/man8/rdma-resource.8 | 22 +++++++++++++++
rdma/res-frmr-pools.c | 59 ++++++++++++++++++++++++++++++++++++++++
rdma/res.c | 13 +++++++++
rdma/res.h | 1 +
4 files changed, 95 insertions(+)
diff --git a/man/man8/rdma-resource.8 b/man/man8/rdma-resource.8
index 4e2ba39a..a6dc33f3 100644
--- a/man/man8/rdma-resource.8
+++ b/man/man8/rdma-resource.8
@@ -26,6 +26,13 @@ rdma-resource \- rdma resource configuration
.B rdma resource show
.RI "[ " DEV/PORT_INDEX " ]"
+.ti -8
+.B rdma resource set frmr_pools
+.BR dev
+.IR DEV
+.BR aging
+.IR AGING_PERIOD
+
.ti -8
.B rdma resource help
@@ -37,6 +44,16 @@ rdma-resource \- rdma resource configuration
- specifies the RDMA link to show.
If this argument is omitted all links are listed.
+.SS rdma resource set - configure resource related parameters
+
+.PP
+.I "DEV"
+- specifies the RDMA device to configure.
+
+.PP
+.I "AGING_PERIOD"
+- specifies the aging period in seconds for unused FRMR handles. Handles unused for this period will be freed.
+
.SH "EXAMPLES"
.PP
rdma resource show
@@ -119,6 +136,11 @@ rdma resource show frmr_pools ats 1
Show FRMR pools that have the ats attribute set.
.RE
.PP
+rdma resource set frmr_pools dev rocep8s0f0 aging 120
+.RS 4
+Set the aging period for FRMR pools on device rocep8s0f0 to 120 seconds.
+.RE
+.PP
.SH SEE ALSO
.BR rdma (8),
diff --git a/rdma/res-frmr-pools.c b/rdma/res-frmr-pools.c
index 7d99a728..c9d80c4b 100644
--- a/rdma/res-frmr-pools.c
+++ b/rdma/res-frmr-pools.c
@@ -172,3 +172,62 @@ int res_frmr_pools_parse_cb(const struct nlmsghdr *nlh, void *data)
}
return ret;
}
+
+static int res_frmr_pools_one_set_aging(struct rd *rd)
+{
+ uint32_t aging_period;
+ uint32_t seq;
+
+ if (rd_no_arg(rd)) {
+ pr_err("Please provide aging period value.\n");
+ return -EINVAL;
+ }
+
+ if (get_u32(&aging_period, rd_argv(rd), 10)) {
+ pr_err("Invalid aging period value: %s\n", rd_argv(rd));
+ return -EINVAL;
+ }
+
+ if (aging_period == 0) {
+ pr_err("Setting the aging period to zero is not supported.\n");
+ return -EINVAL;
+ }
+
+ rd_prepare_msg(rd, RDMA_NLDEV_CMD_RES_FRMR_POOLS_SET, &seq,
+ (NLM_F_REQUEST | NLM_F_ACK));
+ mnl_attr_put_u32(rd->nlh, RDMA_NLDEV_ATTR_DEV_INDEX, rd->dev_idx);
+ mnl_attr_put_u32(rd->nlh, RDMA_NLDEV_ATTR_RES_FRMR_POOL_AGING_PERIOD,
+ aging_period);
+
+ return rd_sendrecv_msg(rd, seq);
+}
+
+static int res_frmr_pools_one_set_help(struct rd *rd)
+{
+ pr_out("Usage: %s set frmr_pools dev DEV aging AGING_PERIOD\n",
+ rd->filename);
+ return 0;
+}
+
+static int res_frmr_pools_one_set(struct rd *rd)
+{
+ const struct rd_cmd cmds[] = {
+ { NULL, res_frmr_pools_one_set_help },
+ { "help", res_frmr_pools_one_set_help },
+ { "aging", res_frmr_pools_one_set_aging },
+ { 0 }
+ };
+
+ return rd_exec_cmd(rd, cmds, "resource set frmr_pools command");
+}
+
+int res_frmr_pools_set(struct rd *rd)
+{
+ int ret;
+
+ ret = rd_set_arg_to_devname(rd);
+ if (ret)
+ return ret;
+
+ return rd_exec_require_dev(rd, res_frmr_pools_one_set);
+}
diff --git a/rdma/res.c b/rdma/res.c
index f1f13d74..63d8386a 100644
--- a/rdma/res.c
+++ b/rdma/res.c
@@ -28,6 +28,7 @@ static int res_help(struct rd *rd)
pr_out(" resource show srq dev [DEV] [FILTER-NAME FILTER-VALUE]\n");
pr_out(" resource show frmr_pools dev [DEV]\n");
pr_out(" resource show frmr_pools dev [DEV] [FILTER-NAME FILTER-VALUE]\n");
+ pr_out(" resource set frmr_pools dev DEV aging AGING_PERIOD\n");
return 0;
}
@@ -252,11 +253,23 @@ static int res_show(struct rd *rd)
return rd_exec_cmd(rd, cmds, "parameter");
}
+static int res_set(struct rd *rd)
+{
+ const struct rd_cmd cmds[] = {
+ { NULL, res_help },
+ { "frmr_pools", res_frmr_pools_set },
+ { 0 }
+ };
+
+ return rd_exec_cmd(rd, cmds, "resource set command");
+}
+
int cmd_res(struct rd *rd)
{
const struct rd_cmd cmds[] = {
{ NULL, res_show },
{ "show", res_show },
+ { "set", res_set },
{ "list", res_show },
{ "help", res_help },
{ 0 }
diff --git a/rdma/res.h b/rdma/res.h
index 30edb8f8..dffbdb52 100644
--- a/rdma/res.h
+++ b/rdma/res.h
@@ -203,6 +203,7 @@ struct filters frmr_pools_valid_filters[MAX_NUMBER_OF_FILTERS] = {
RES_FUNC(res_frmr_pools, RDMA_NLDEV_CMD_RES_FRMR_POOLS_GET,
frmr_pools_valid_filters, true, 0);
+int res_frmr_pools_set(struct rd *rd);
void print_dev(uint32_t idx, const char *name);
void print_link(uint32_t idx, const char *name, uint32_t port, struct nlattr **nla_line);
void print_key(const char *name, uint64_t val, struct nlattr *nlattr);
--
2.38.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v2 iproute2-next 4/4] rdma: Add FRMR pools set pinned command
2026-03-30 17:31 [PATCH v2 iproute2-next 0/4] Introduce FRMR pools Chiara Meiohas
` (2 preceding siblings ...)
2026-03-30 17:31 ` [PATCH v2 iproute2-next 3/4] rdma: Add FRMR pools set aging command Chiara Meiohas
@ 2026-03-30 17:31 ` Chiara Meiohas
2026-04-05 17:09 ` [PATCH v2 iproute2-next 0/4] Introduce FRMR pools David Ahern
2026-04-05 17:10 ` patchwork-bot+netdevbpf
5 siblings, 0 replies; 8+ messages in thread
From: Chiara Meiohas @ 2026-03-30 17:31 UTC (permalink / raw)
To: leon, dsahern, stephen
Cc: michaelgur, jgg, linux-rdma, netdev, Patrisious Haddad,
Chiara Meiohas
From: Michael Guralnik <michaelgur@nvidia.com>
Add an option to set the number of pinned handles for an FRMR pool.
Pinned handles are not affected by aging and stay available for reuse in
the FRMR pool.
The pool is identified by a colon-separated key of hexadecimal fields
(vendor_key:num_dma_blocks:access_flags:ats) as shown in the 'show'
command output.
Usage:
Set 250 pinned handles for the FRMR pool with key 0:800:0:0 on
device rocep8s0f0
$ rdma resource set frmr_pools dev rocep8s0f0 pinned 0:800:0:0 250
Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
Reviewed-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Chiara Meiohas <cmeiohas@nvidia.com>
---
man/man8/rdma-resource.8 | 21 +++++++
rdma/res-frmr-pools.c | 121 ++++++++++++++++++++++++++++++++++++++-
rdma/res.c | 1 +
rdma/res.h | 1 +
4 files changed, 143 insertions(+), 1 deletion(-)
diff --git a/man/man8/rdma-resource.8 b/man/man8/rdma-resource.8
index a6dc33f3..1138cd23 100644
--- a/man/man8/rdma-resource.8
+++ b/man/man8/rdma-resource.8
@@ -33,6 +33,14 @@ rdma-resource \- rdma resource configuration
.BR aging
.IR AGING_PERIOD
+.ti -8
+.B rdma resource set frmr_pools
+.BR dev
+.IR DEV
+.BR pinned
+.IR POOL_KEY
+.IR PINNED_VALUE
+
.ti -8
.B rdma resource help
@@ -54,6 +62,14 @@ If this argument is omitted all links are listed.
.I "AGING_PERIOD"
- specifies the aging period in seconds for unused FRMR handles. Handles unused for this period will be freed.
+.PP
+.I "POOL_KEY"
+- specifies the pool key that identifies a specific FRMR pool. The key is a colon-separated list of hexadecimal fields in the format vendor_key:num_dma_blocks:access_flags:ats.
+
+.PP
+.I "PINNED_VALUE"
+- specifies the pinned value for the FRMR pool. A non-zero value pins handles to the pool, preventing them from being freed by the aging mechanism.
+
.SH "EXAMPLES"
.PP
rdma resource show
@@ -141,6 +157,11 @@ rdma resource set frmr_pools dev rocep8s0f0 aging 120
Set the aging period for FRMR pools on device rocep8s0f0 to 120 seconds.
.RE
.PP
+rdma resource set frmr_pools dev rocep8s0f0 pinned 0:1000:0:0 25000
+.RS 4
+Pin 25000 handles to the FRMR pool identified by key 0:1000:0:0 on device rocep8s0f0 to prevent them from being freed.
+.RE
+.PP
.SH SEE ALSO
.BR rdma (8),
diff --git a/rdma/res-frmr-pools.c b/rdma/res-frmr-pools.c
index c9d80c4b..abcd2188 100644
--- a/rdma/res-frmr-pools.c
+++ b/rdma/res-frmr-pools.c
@@ -17,14 +17,68 @@ struct frmr_pool_key {
/* vendor_key(16) + ':' + num_dma_blocks(16) + ':' + access_flags(8) + ':' + ats(1) + '\0' */
#define FRMR_POOL_KEY_MAX_LEN 45
+static int decode_pool_key(const char *str, struct frmr_pool_key *key)
+{
+ const char *p = str;
+ char *end;
+ int i = 0;
+
+ while (*p) {
+ uint64_t val;
+
+ errno = 0;
+ val = strtoull(p, &end, 16);
+ if (errno == ERANGE || end == p || (*end != ':' && *end != '\0')) {
+ pr_err("Invalid pool key: %s\n", str);
+ return -EINVAL;
+ }
+
+ switch (i) {
+ case 0:
+ key->vendor_key = val;
+ break;
+ case 1:
+ key->num_dma_blocks = val;
+ break;
+ case 2:
+ if (val > UINT32_MAX)
+ goto out_of_range;
+ key->access_flags = val;
+ break;
+ case 3:
+ if (val != 0 && val != 1)
+ goto out_of_range;
+ key->ats = val;
+ break;
+ default:
+ if (val) {
+ pr_err("Unsupported pool attributes passed in pool key\n");
+ return -EINVAL;
+ }
+ }
+ i++;
+ p = *end ? end + 1 : end;
+ }
+
+ if (i < 4) {
+ pr_err("Invalid pool key: %s, expected 4 fields\n", str);
+ return -EINVAL;
+ }
+ return 0;
+
+out_of_range:
+ pr_err("Pool key field at index %d value out of range\n", i);
+ return -EINVAL;
+}
+
static int res_frmr_pools_line(struct rd *rd, const char *name, int idx,
struct nlattr **nla_line)
{
uint64_t in_use = 0, max_in_use = 0, kernel_vendor_key = 0;
struct nlattr *key_tb[RDMA_NLDEV_ATTR_MAX] = {};
+ uint32_t queue_handles = 0, pinned_handles = 0;
char key_str[FRMR_POOL_KEY_MAX_LEN];
struct frmr_pool_key key = { 0 };
- uint32_t queue_handles = 0;
if (nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY]) {
if (mnl_attr_parse_nested(
@@ -92,6 +146,13 @@ static int res_frmr_pools_line(struct rd *rd, const char *name, int idx,
nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_MAX_IN_USE]))
goto out;
+ if (nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_PINNED])
+ pinned_handles = mnl_attr_get_u32(
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_PINNED]);
+ if (rd_is_filtered_attr(rd, "pinned", pinned_handles,
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_PINNED]))
+ goto out;
+
open_json_object(NULL);
print_dev(idx, name);
@@ -127,6 +188,8 @@ static int res_frmr_pools_line(struct rd *rd, const char *name, int idx,
nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_IN_USE]);
res_print_u64("max_in_use", max_in_use,
nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_MAX_IN_USE]);
+ res_print_u32("pinned", pinned_handles,
+ nla_line[RDMA_NLDEV_ATTR_RES_FRMR_POOL_PINNED]);
print_driver_table(rd, nla_line[RDMA_NLDEV_ATTR_DRIVER]);
close_json_object();
@@ -202,10 +265,65 @@ static int res_frmr_pools_one_set_aging(struct rd *rd)
return rd_sendrecv_msg(rd, seq);
}
+static int res_frmr_pools_one_set_pinned(struct rd *rd)
+{
+ struct frmr_pool_key pool_key = { 0 };
+ struct nlattr *key_attr;
+ uint32_t pinned_value;
+ const char *key_str;
+ uint32_t seq;
+
+ if (rd_no_arg(rd)) {
+ pr_err("Please provide pool key and pinned value.\n");
+ return -EINVAL;
+ }
+
+ key_str = rd_argv(rd);
+ rd_arg_inc(rd);
+
+ if (decode_pool_key(key_str, &pool_key))
+ return -EINVAL;
+
+ if (rd_no_arg(rd)) {
+ pr_err("Please provide pinned value.\n");
+ return -EINVAL;
+ }
+
+ if (get_u32(&pinned_value, rd_argv(rd), 10)) {
+ pr_err("Invalid pinned value: %s\n", rd_argv(rd));
+ return -EINVAL;
+ }
+
+ rd_prepare_msg(rd, RDMA_NLDEV_CMD_RES_FRMR_POOLS_SET, &seq,
+ (NLM_F_REQUEST | NLM_F_ACK));
+ mnl_attr_put_u32(rd->nlh, RDMA_NLDEV_ATTR_DEV_INDEX, rd->dev_idx);
+
+ mnl_attr_put_u32(rd->nlh, RDMA_NLDEV_ATTR_RES_FRMR_POOL_PINNED,
+ pinned_value);
+
+ key_attr =
+ mnl_attr_nest_start(rd->nlh, RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY);
+ mnl_attr_put_u8(rd->nlh, RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ATS,
+ pool_key.ats);
+ mnl_attr_put_u32(rd->nlh,
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_ACCESS_FLAGS,
+ pool_key.access_flags);
+ mnl_attr_put_u64(rd->nlh, RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_VENDOR_KEY,
+ pool_key.vendor_key);
+ mnl_attr_put_u64(rd->nlh,
+ RDMA_NLDEV_ATTR_RES_FRMR_POOL_KEY_NUM_DMA_BLOCKS,
+ pool_key.num_dma_blocks);
+ mnl_attr_nest_end(rd->nlh, key_attr);
+
+ return rd_sendrecv_msg(rd, seq);
+}
+
static int res_frmr_pools_one_set_help(struct rd *rd)
{
pr_out("Usage: %s set frmr_pools dev DEV aging AGING_PERIOD\n",
rd->filename);
+ pr_out("Usage: %s set frmr_pools dev DEV pinned POOL_KEY PINNED_VALUE\n",
+ rd->filename);
return 0;
}
@@ -215,6 +333,7 @@ static int res_frmr_pools_one_set(struct rd *rd)
{ NULL, res_frmr_pools_one_set_help },
{ "help", res_frmr_pools_one_set_help },
{ "aging", res_frmr_pools_one_set_aging },
+ { "pinned", res_frmr_pools_one_set_pinned },
{ 0 }
};
diff --git a/rdma/res.c b/rdma/res.c
index 63d8386a..062f0007 100644
--- a/rdma/res.c
+++ b/rdma/res.c
@@ -29,6 +29,7 @@ static int res_help(struct rd *rd)
pr_out(" resource show frmr_pools dev [DEV]\n");
pr_out(" resource show frmr_pools dev [DEV] [FILTER-NAME FILTER-VALUE]\n");
pr_out(" resource set frmr_pools dev DEV aging AGING_PERIOD\n");
+ pr_out(" resource set frmr_pools dev DEV pinned POOL_KEY PINNED_VALUE\n");
return 0;
}
diff --git a/rdma/res.h b/rdma/res.h
index dffbdb52..4758f2ea 100644
--- a/rdma/res.h
+++ b/rdma/res.h
@@ -198,6 +198,7 @@ struct filters frmr_pools_valid_filters[MAX_NUMBER_OF_FILTERS] = {
{ .name = "queue", .is_number = true },
{ .name = "in_use", .is_number = true },
{ .name = "max_in_use", .is_number = true },
+ { .name = "pinned", .is_number = true },
};
RES_FUNC(res_frmr_pools, RDMA_NLDEV_CMD_RES_FRMR_POOLS_GET,
--
2.38.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH v2 iproute2-next 0/4] Introduce FRMR pools
2026-03-30 17:31 [PATCH v2 iproute2-next 0/4] Introduce FRMR pools Chiara Meiohas
` (3 preceding siblings ...)
2026-03-30 17:31 ` [PATCH v2 iproute2-next 4/4] rdma: Add FRMR pools set pinned command Chiara Meiohas
@ 2026-04-05 17:09 ` David Ahern
2026-04-05 17:44 ` Chiara Meiohas
2026-04-05 17:10 ` patchwork-bot+netdevbpf
5 siblings, 1 reply; 8+ messages in thread
From: David Ahern @ 2026-04-05 17:09 UTC (permalink / raw)
To: Chiara Meiohas, leon, stephen; +Cc: michaelgur, jgg, linux-rdma, netdev
On 3/30/26 11:31 AM, Chiara Meiohas wrote:
> From Michael:
>
> This series adds support for managing Fast Registration Memory Region
> (FRMR) pools in rdma tool, enabling users to monitor and configure FRMR
> pool behavior.
>
> FRMR pools are used to cache and reuse Fast Registration Memory Region
> handles to improve performance by avoiding the overhead of repeated
> memory region creation and destruction. This series introduces commands
> to view FRMR pool statistics and configure pool parameters such as
> aging time and pinned handle count.
>
> The 'show' command allows users to display FRMR pools created on
> devices, their properties, and usage statistics. Each pool is identified
> by a unique key (hex-encoded properties) for easy reference in
> subsequent operations.
>
> The aging 'set' command allows users to modify the aging time parameter,
> which controls how long unused FRMR handles remain in the pool before
> being released.
>
> The pinned 'set' command allows users to configure the number of pinned
> handles in a pool. Pinned handles are exempt from aging and remain
> permanently available for reuse, which is useful for workloads with
> predictable memory region usage patterns.
>
> Command usage and examples are included in the commits and man pages.
>
> These patches are complementary to the kernel patches:
> https://lore.kernel.org/linux-rdma/20260226-frmr_pools-v4-0-95360b54f15e@nvidia.com/
>
applied after fixing up a few nits.
Please clone the ai review prompts from:
https://github.com/masoncl/review-prompts.git
Run the setup scripts and have AI review patches before sending. This
should really be part of both the kernel and iproute2 development
workflows now.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v2 iproute2-next 0/4] Introduce FRMR pools
2026-03-30 17:31 [PATCH v2 iproute2-next 0/4] Introduce FRMR pools Chiara Meiohas
` (4 preceding siblings ...)
2026-04-05 17:09 ` [PATCH v2 iproute2-next 0/4] Introduce FRMR pools David Ahern
@ 2026-04-05 17:10 ` patchwork-bot+netdevbpf
5 siblings, 0 replies; 8+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-04-05 17:10 UTC (permalink / raw)
To: Chiara Meiohas
Cc: leon, dsahern, stephen, michaelgur, jgg, linux-rdma, netdev
Hello:
This series was applied to iproute2/iproute2-next.git (main)
by David Ahern <dsahern@kernel.org>:
On Mon, 30 Mar 2026 20:31:14 +0300 you wrote:
> From Michael:
>
> This series adds support for managing Fast Registration Memory Region
> (FRMR) pools in rdma tool, enabling users to monitor and configure FRMR
> pool behavior.
>
> FRMR pools are used to cache and reuse Fast Registration Memory Region
> handles to improve performance by avoiding the overhead of repeated
> memory region creation and destruction. This series introduces commands
> to view FRMR pool statistics and configure pool parameters such as
> aging time and pinned handle count.
>
> [...]
Here is the summary with links:
- [v2,iproute2-next,1/4] rdma: Update headers
https://git.kernel.org/pub/scm/network/iproute2/iproute2-next.git/commit/?id=93368ee34528
- [v2,iproute2-next,2/4] rdma: Add resource FRMR pools show command
(no matching commit)
- [v2,iproute2-next,3/4] rdma: Add FRMR pools set aging command
https://git.kernel.org/pub/scm/network/iproute2/iproute2-next.git/commit/?id=26c8bc1e6563
- [v2,iproute2-next,4/4] rdma: Add FRMR pools set pinned command
(no matching commit)
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
^ permalink raw reply [flat|nested] 8+ messages in thread