public inbox for netdev@vger.kernel.org
From: Jason Gunthorpe <jgg@ziepe.ca>
To: Edward Srouji <edwards@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	netdev@vger.kernel.org, Michael Guralnik <michaelgur@nvidia.com>,
	Yishai Hadas <yishaih@nvidia.com>
Subject: Re: [PATCH rdma-next v2 02/11] IB/core: Introduce FRMR pools
Date: Tue, 20 Jan 2026 12:44:38 -0400
Message-ID: <20260120164438.GR961572@ziepe.ca>
In-Reply-To: <20251222-frmr_pools-v2-2-f06a99caa538@nvidia.com>

On Mon, Dec 22, 2025 at 02:40:37PM +0200, Edward Srouji wrote:
> +static int compare_keys(struct ib_frmr_key *key1, struct ib_frmr_key *key2)
> +{
> +	int res;
> +
> +	res = key1->ats - key2->ats;
> +	if (res)
> +		return res;
> +
> +	res = key1->access_flags - key2->access_flags;
> +	if (res)
> +		return res;
> +
> +	res = key1->vendor_key - key2->vendor_key;
> +	if (res)
> +		return res;
> +
> +	res = key1->kernel_vendor_key - key2->kernel_vendor_key;
> +	if (res)
> +		return res;

This stuff should be using cmp_int().

> +static struct ib_frmr_pool *ib_frmr_pool_find(struct ib_frmr_pools *pools,
> +					      struct ib_frmr_key *key)
> +{
> +	struct rb_node *node = pools->rb_root.rb_node;
> +	struct ib_frmr_pool *pool;
> +	int cmp;
> +
> +	/* find operation is done under read lock for performance reasons.
> +	 * The case of threads failing to find the same pool and creating it
> +	 * is handled by the create_frmr_pool function.
> +	 */
> +	read_lock(&pools->rb_lock);
> +	while (node) {
> +		pool = rb_entry(node, struct ib_frmr_pool, node);
> +		cmp = compare_keys(&pool->key, key);
> +		if (cmp < 0) {
> +			node = node->rb_right;
> +		} else if (cmp > 0) {
> +			node = node->rb_left;
> +		} else {
> +			read_unlock(&pools->rb_lock);
> +			return pool;
> +		}

Use the rb_find() helper.
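The whole open-coded walk then collapses to something like this (untested sketch; callback signature from include/linux/rbtree.h, note the compare_keys() argument order is flipped relative to the patch so that negative means go-left, matching rb_find()'s convention):

```c
static int frmr_key_cmp(const void *key, const struct rb_node *node)
{
	const struct ib_frmr_pool *pool =
		rb_entry(node, struct ib_frmr_pool, node);

	return compare_keys((struct ib_frmr_key *)key,
			    (struct ib_frmr_key *)&pool->key);
}

static struct ib_frmr_pool *ib_frmr_pool_find(struct ib_frmr_pools *pools,
					      struct ib_frmr_key *key)
{
	struct ib_frmr_pool *pool = NULL;
	struct rb_node *node;

	read_lock(&pools->rb_lock);
	node = rb_find(key, &pools->rb_root, frmr_key_cmp);
	if (node)
		pool = rb_entry(node, struct ib_frmr_pool, node);
	read_unlock(&pools->rb_lock);

	return pool;
}
```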

> +static struct ib_frmr_pool *create_frmr_pool(struct ib_device *device,
> +					     struct ib_frmr_key *key)
> +{
> +	struct rb_node **new = &device->frmr_pools->rb_root.rb_node,
> +		       *parent = NULL;
> +	struct ib_frmr_pools *pools = device->frmr_pools;
> +	struct ib_frmr_pool *pool;
> +	int cmp;
> +
> +	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
> +	if (!pool)
> +		return ERR_PTR(-ENOMEM);
> +
> +	memcpy(&pool->key, key, sizeof(*key));
> +	INIT_LIST_HEAD(&pool->queue.pages_list);
> +	spin_lock_init(&pool->lock);
> +
> +	write_lock(&pools->rb_lock);
> +	while (*new) {
> +		parent = *new;
> +		cmp = compare_keys(
> +			&rb_entry(parent, struct ib_frmr_pool, node)->key, key);
> +		if (cmp < 0)
> +			new = &((*new)->rb_left);
> +		else
> +			new = &((*new)->rb_right);
> +		/* If a different thread has already created the pool, return
> +		 * it. The insert operation is done under the write lock so we
> +		 * are sure that the pool is not inserted twice.
> +		 */
> +		if (cmp == 0) {
> +			write_unlock(&pools->rb_lock);
> +			kfree(pool);
> +			return rb_entry(parent, struct ib_frmr_pool, node);
> +		}
> +	}
> +
> +	rb_link_node(&pool->node, parent, new);

I think this is rb_find_add()?
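It does the walk, the duplicate check and the rb_link_node()/rb_insert_color() in one call, returning the existing node (or NULL after inserting). Something like (untested sketch; also note that, unless I'm misreading, the open-coded loop here goes left on cmp < 0 while ib_frmr_pool_find() goes right, so sharing one cmp callback between find and insert would fix that inconsistency too):

```c
static int frmr_pool_cmp(struct rb_node *a, const struct rb_node *b)
{
	struct ib_frmr_pool *pool_a = rb_entry(a, struct ib_frmr_pool, node);
	struct ib_frmr_pool *pool_b = rb_entry((struct rb_node *)b,
					       struct ib_frmr_pool, node);

	return compare_keys(&pool_a->key, &pool_b->key);
}

	...
	struct rb_node *exist;

	write_lock(&pools->rb_lock);
	exist = rb_find_add(&pool->node, &pools->rb_root, frmr_pool_cmp);
	write_unlock(&pools->rb_lock);
	if (exist) {
		/* Another thread created the pool first; use its node */
		kfree(pool);
		return rb_entry(exist, struct ib_frmr_pool, node);
	}
	return pool;
```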

Jason

Thread overview: 14+ messages
2025-12-22 12:40 [PATCH rdma-next v2 00/11] RDMA/core: Introduce FRMR pools infrastructure Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 01/11] RDMA/mlx5: Move device async_ctx initialization Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 02/11] IB/core: Introduce FRMR pools Edward Srouji
2026-01-20 16:44   ` Jason Gunthorpe [this message]
2026-01-26 22:55     ` Michael Gur
2025-12-22 12:40 ` [PATCH rdma-next v2 03/11] RDMA/core: Add aging to " Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 04/11] RDMA/core: Add FRMR pools statistics Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 05/11] RDMA/core: Add pinned handles to FRMR pools Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 06/11] RDMA/mlx5: Switch from MR cache " Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 07/11] net/mlx5: Drop MR cache related code Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 08/11] RDMA/nldev: Add command to get FRMR pools Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 09/11] RDMA/core: Add netlink command to modify FRMR aging Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 10/11] RDMA/nldev: Add command to set pinned FRMR handles Edward Srouji
2025-12-22 12:40 ` [PATCH rdma-next v2 11/11] RDMA/nldev: Expose kernel-internal FRMR pools in netlink Edward Srouji
