From: Edward Srouji <edwards@nvidia.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, Leon Romanovsky <leon@kernel.org>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	netdev@vger.kernel.org, Michael Guralnik <michaelgur@nvidia.com>,
	Yishai Hadas <yishaih@nvidia.com>
Subject: Re: [PATCH rdma-next v3 02/11] IB/core: Introduce FRMR pools
Date: Sun, 22 Feb 2026 19:39:39 +0200	[thread overview]
Message-ID: <72191e37-8f9d-43f8-bd56-493d30719494@nvidia.com> (raw)
In-Reply-To: <20260202-frmr_pools-v3-2-d4543e82744a@nvidia.com>


On 2/2/2026 5:31 PM, Edward Srouji wrote:
> From: Michael Guralnik <michaelgur@nvidia.com>
>
> Add a generic Fast Registration Memory Region (FRMR) pools mechanism
> to allow drivers to optimize memory registration performance.
> Drivers that can reuse MRs or their underlying HW objects may take
> advantage of the mechanism to keep a 'handle' for those objects and
> reuse them upon user request.
> We assume that to achieve this goal, a driver and its HW should
> implement a modify operation for the MRs that can at least clear and
> set the MRs, and in more advanced implementations also change a
> subset of the MRs' properties.
>
> The mechanism is built around an RB-tree of pools; each pool
> represents a set of MR properties that are shared by all of the MRs
> residing in the pool and are unmodifiable by the vendor driver or HW.
>
> The API exposed from ib_core to the driver has 4 operations:
> Init and cleanup - handle the data structures and locks for the pools.
> Push and pop - store and retrieve a 'handle' for a memory registration
> or deregistration request.
>
> The FRMR pools mechanism implements the logic to search the RB-tree
> for a pool with matching properties, creating a new pool when needed.
> The driver is required to implement creation of a 'handle' when one is
> requested from an empty pool, and destruction of 'handles' when a pool
> is destroyed.
>
> A later patch will introduce a Netlink API to interact with the FRMR
> pools mechanism, allowing users to both configure it and track its
> usage.
> A vendor wishing to configure an FRMR pool without exposing it, or
> without exposing internal MR properties to users, should use the
> kernel_vendor_key field in the pool's key. This can be useful in a few
> cases, e.g., when the FRMR handle has a vendor-specific unmodifiable
> property that the user registering the memory might not be aware of.
>
> Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
> Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
> Signed-off-by: Edward Srouji <edwards@nvidia.com>
> ---
>   drivers/infiniband/core/Makefile     |   2 +-
>   drivers/infiniband/core/frmr_pools.c | 323 +++++++++++++++++++++++++++++++++++
>   drivers/infiniband/core/frmr_pools.h |  48 ++++++
>   include/rdma/frmr_pools.h            |  37 ++++
>   include/rdma/ib_verbs.h              |   8 +
>   5 files changed, 417 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile
> index f483e0c12444..7089a982b876 100644
> --- a/drivers/infiniband/core/Makefile
> +++ b/drivers/infiniband/core/Makefile
> @@ -12,7 +12,7 @@ ib_core-y :=			packer.o ud_header.o verbs.o cq.o rw.o sysfs.o \
>   				roce_gid_mgmt.o mr_pool.o addr.o sa_query.o \
>   				multicast.o mad.o smi.o agent.o mad_rmpp.o \
>   				nldev.o restrack.o counters.o ib_core_uverbs.o \
> -				trace.o lag.o
> +				trace.o lag.o frmr_pools.o
>   
>   ib_core-$(CONFIG_SECURITY_INFINIBAND) += security.o
>   ib_core-$(CONFIG_CGROUP_RDMA) += cgroup.o
> diff --git a/drivers/infiniband/core/frmr_pools.c b/drivers/infiniband/core/frmr_pools.c
> new file mode 100644
> index 000000000000..eae15894a3b2
> --- /dev/null
> +++ b/drivers/infiniband/core/frmr_pools.c
> @@ -0,0 +1,323 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
> + */
> +
> +#include <linux/slab.h>
> +#include <linux/rbtree.h>
> +#include <linux/sort.h>
> +#include <linux/spinlock.h>
> +#include <rdma/ib_verbs.h>
> +
> +#include "frmr_pools.h"
> +
> +static int push_handle_to_queue_locked(struct frmr_queue *queue, u32 handle)
> +{
> +	u32 tmp = queue->ci % NUM_HANDLES_PER_PAGE;
> +	struct frmr_handles_page *page;
> +
> +	if (queue->ci >= queue->num_pages * NUM_HANDLES_PER_PAGE) {
> +		page = kzalloc(sizeof(*page), GFP_ATOMIC);
> +		if (!page)
> +			return -ENOMEM;
> +		queue->num_pages++;
> +		list_add_tail(&page->list, &queue->pages_list);
> +	} else {
> +		page = list_last_entry(&queue->pages_list,
> +				       struct frmr_handles_page, list);
> +	}
> +
> +	page->handles[tmp] = handle;
> +	queue->ci++;
> +	return 0;
> +}
> +
> +static u32 pop_handle_from_queue_locked(struct frmr_queue *queue)
> +{
> +	u32 tmp = (queue->ci - 1) % NUM_HANDLES_PER_PAGE;
> +	struct frmr_handles_page *page;
> +	u32 handle;
> +
> +	page = list_last_entry(&queue->pages_list, struct frmr_handles_page,
> +			       list);
> +	handle = page->handles[tmp];
> +	queue->ci--;
> +
> +	if (!tmp) {
> +		list_del(&page->list);
> +		queue->num_pages--;
> +		kfree(page);
> +	}
> +
> +	return handle;
> +}
> +
> +static bool pop_frmr_handles_page(struct ib_frmr_pool *pool,
> +				  struct frmr_queue *queue,
> +				  struct frmr_handles_page **page, u32 *count)
> +{
> +	spin_lock(&pool->lock);
> +	if (list_empty(&queue->pages_list)) {
> +		spin_unlock(&pool->lock);
> +		return false;
> +	}
> +
> +	*page = list_first_entry(&queue->pages_list, struct frmr_handles_page,
> +				 list);
> +	list_del(&(*page)->list);
> +	queue->num_pages--;
> +
> +	/* If this is the last page, count may be less than
> +	 * NUM_HANDLES_PER_PAGE.
> +	 */
> +	if (queue->ci >= NUM_HANDLES_PER_PAGE)
> +		*count = NUM_HANDLES_PER_PAGE;
> +	else
> +		*count = queue->ci;
> +
> +	queue->ci -= *count;
> +	spin_unlock(&pool->lock);
> +	return true;
> +}
> +
> +static void destroy_frmr_pool(struct ib_device *device,
> +			      struct ib_frmr_pool *pool)
> +{
> +	struct ib_frmr_pools *pools = device->frmr_pools;
> +	struct frmr_handles_page *page;
> +	u32 count;
> +
> +	while (pop_frmr_handles_page(pool, &pool->queue, &page, &count)) {
> +		pools->pool_ops->destroy_frmrs(device, page->handles, count);
> +		kfree(page);
> +	}
> +
> +	rb_erase(&pool->node, &pools->rb_root);
> +	kfree(pool);
> +}
> +
> +/*
> + * Initialize the FRMR pools for a device.
> + *
> + * @device: The device to initialize the FRMR pools for.
> + * @pool_ops: The pool operations to use.
> + *
> + * Returns 0 on success, negative error code on failure.
> + */
> +int ib_frmr_pools_init(struct ib_device *device,
> +		       const struct ib_frmr_pool_ops *pool_ops)
> +{
> +	struct ib_frmr_pools *pools;
> +
> +	pools = kzalloc(sizeof(*pools), GFP_KERNEL);
> +	if (!pools)
> +		return -ENOMEM;
> +
> +	pools->rb_root = RB_ROOT;
> +	rwlock_init(&pools->rb_lock);
> +	pools->pool_ops = pool_ops;
> +
> +	device->frmr_pools = pools;
> +	return 0;
> +}
> +EXPORT_SYMBOL(ib_frmr_pools_init);
> +
> +/*
> + * Clean up the FRMR pools for a device.
> + *
> + * @device: The device to clean up the FRMR pools for.
> + *
> + * Call cleanup only after all FRMR handles have been pushed back to the pool
> + * and no other FRMR operations are allowed to run in parallel.
> + * Ensuring this allows us to save synchronization overhead in pop and push
> + * operations.
> + */
> +void ib_frmr_pools_cleanup(struct ib_device *device)
> +{
> +	struct ib_frmr_pools *pools = device->frmr_pools;
> +	struct rb_node *node = rb_first(&pools->rb_root);
> +	struct ib_frmr_pool *pool;
> +
> +	while (node) {
> +		struct rb_node *next = rb_next(node);
> +
> +		pool = rb_entry(node, struct ib_frmr_pool, node);
> +		destroy_frmr_pool(device, pool);
> +		node = next;
> +	}
> +
> +	kfree(pools);
> +	device->frmr_pools = NULL;
> +}
> +EXPORT_SYMBOL(ib_frmr_pools_cleanup);
> +
> +static inline int compare_keys(struct ib_frmr_key *key1,
> +			       struct ib_frmr_key *key2)
> +{
> +	int res;
> +
> +	res = cmp_int(key1->ats, key2->ats);
> +	if (res)
> +		return res;
> +
> +	res = cmp_int(key1->access_flags, key2->access_flags);
> +	if (res)
> +		return res;
> +
> +	res = cmp_int(key1->vendor_key, key2->vendor_key);
> +	if (res)
> +		return res;
> +
> +	res = cmp_int(key1->kernel_vendor_key, key2->kernel_vendor_key);
> +	if (res)
> +		return res;
> +
> +	/*
> +	 * allow using handles that support more DMA blocks, up to twice the
> +	 * requested number
> +	 */
> +	res = cmp_int(key1->num_dma_blocks, key2->num_dma_blocks);
> +	if (res > 0) {
> +		if (key1->num_dma_blocks - key2->num_dma_blocks <
> +		    key2->num_dma_blocks)
> +			return 0;
> +	}
> +
> +	return res;
> +}
> +
> +static int frmr_pool_cmp_find(const void *key, const struct rb_node *node)
> +{
> +	struct ib_frmr_pool *pool = rb_entry(node, struct ib_frmr_pool, node);
> +
> +	return compare_keys(&pool->key, (struct ib_frmr_key *)key);
> +}
> +
> +static int frmr_pool_cmp_add(struct rb_node *new, const struct rb_node *node)
> +{
> +	struct ib_frmr_pool *new_pool =
> +		rb_entry(new, struct ib_frmr_pool, node);
> +	struct ib_frmr_pool *pool = rb_entry(node, struct ib_frmr_pool, node);
> +
> +	return compare_keys(&pool->key, &new_pool->key);
> +}
> +
> +static struct ib_frmr_pool *ib_frmr_pool_find(struct ib_frmr_pools *pools,
> +					      struct ib_frmr_key *key)
> +{
> +	struct ib_frmr_pool *pool;
> +	struct rb_node *node;
> +
> +	/* find operation is done under read lock for performance reasons.
> +	 * The case of threads failing to find the same pool and creating it
> +	 * is handled by the create_frmr_pool function.
> +	 */
> +	read_lock(&pools->rb_lock);
> +	node = rb_find(key, &pools->rb_root, frmr_pool_cmp_find);
> +	pool = rb_entry_safe(node, struct ib_frmr_pool, node);
> +	read_unlock(&pools->rb_lock);
> +
> +	return pool;
> +}
> +
> +static struct ib_frmr_pool *create_frmr_pool(struct ib_device *device,
> +					     struct ib_frmr_key *key)
> +{
> +	struct ib_frmr_pools *pools = device->frmr_pools;
> +	struct ib_frmr_pool *pool;
> +	struct rb_node *existing;
> +
> +	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
> +	if (!pool)
> +		return ERR_PTR(-ENOMEM);
> +
> +	memcpy(&pool->key, key, sizeof(*key));
> +	INIT_LIST_HEAD(&pool->queue.pages_list);
> +	spin_lock_init(&pool->lock);
> +
> +	write_lock(&pools->rb_lock);
> +	existing = rb_find_add(&pool->node, &pools->rb_root, frmr_pool_cmp_add);
> +	write_unlock(&pools->rb_lock);
> +
> +	/* If a different thread has already created the pool, return it.
> +	 * The insert operation is done under the write lock so we are sure
> +	 * that the pool is not inserted twice.
> +	 */
> +	if (existing) {
> +		kfree(pool);
> +		return rb_entry(existing, struct ib_frmr_pool, node);
> +	}
> +
> +	return pool;
> +}
> +
> +static int get_frmr_from_pool(struct ib_device *device,
> +			      struct ib_frmr_pool *pool, struct ib_mr *mr)
> +{
> +	struct ib_frmr_pools *pools = device->frmr_pools;
> +	u32 handle;
> +	int err;
> +
> +	spin_lock(&pool->lock);
> +	if (pool->queue.ci == 0) {
> +		spin_unlock(&pool->lock);
> +		err = pools->pool_ops->create_frmrs(device, &pool->key, &handle,
> +						    1);
> +		if (err)
> +			return err;
> +	} else {
> +		handle = pop_handle_from_queue_locked(&pool->queue);
> +		spin_unlock(&pool->lock);
> +	}
> +
> +	mr->frmr.pool = pool;
> +	mr->frmr.handle = handle;
> +
> +	return 0;
> +}
> +
> +/*
> + * Pop an FRMR handle from the pool.
> + *
> + * @device: The device to pop the FRMR handle from.
> + * @mr: The MR to pop the FRMR handle from.
> + *
> + * Returns 0 on success, negative error code on failure.
> + */
> +int ib_frmr_pool_pop(struct ib_device *device, struct ib_mr *mr)
> +{
> +	struct ib_frmr_pools *pools = device->frmr_pools;
> +	struct ib_frmr_pool *pool;
> +
> +	WARN_ON_ONCE(!device->frmr_pools);
> +	pool = ib_frmr_pool_find(pools, &mr->frmr.key);
> +	if (!pool) {
> +		pool = create_frmr_pool(device, &mr->frmr.key);
> +		if (IS_ERR(pool))
> +			return PTR_ERR(pool);
> +	}
> +
> +	return get_frmr_from_pool(device, pool, mr);
> +}
> +EXPORT_SYMBOL(ib_frmr_pool_pop);
> +
> +/*
> + * Push an FRMR handle back to the pool.
> + *
> + * @device: The device to push the FRMR handle to.
> + * @mr: The MR containing the FRMR handle to push back to the pool.
> + *
> + * Returns 0 on success, negative error code on failure.
> + */
> +int ib_frmr_pool_push(struct ib_device *device, struct ib_mr *mr)
> +{
> +	struct ib_frmr_pool *pool = mr->frmr.pool;
> +	int ret;
> +
> +	spin_lock(&pool->lock);
> +	ret = push_handle_to_queue_locked(&pool->queue, mr->frmr.handle);
> +	spin_unlock(&pool->lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(ib_frmr_pool_push);
> diff --git a/drivers/infiniband/core/frmr_pools.h b/drivers/infiniband/core/frmr_pools.h
> new file mode 100644
> index 000000000000..5a4d03b3d86f
> --- /dev/null
> +++ b/drivers/infiniband/core/frmr_pools.h
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0-only
> + *
> + * Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
> + */
> +
> +#ifndef RDMA_CORE_FRMR_POOLS_H
> +#define RDMA_CORE_FRMR_POOLS_H
> +
> +#include <rdma/frmr_pools.h>
> +#include <linux/rbtree_types.h>
> +#include <linux/spinlock_types.h>
> +#include <linux/types.h>
> +#include <asm/page.h>
> +
> +#define NUM_HANDLES_PER_PAGE \
> +	((PAGE_SIZE - sizeof(struct list_head)) / sizeof(u32))
> +
> +struct frmr_handles_page {
> +	struct list_head list;
> +	u32 handles[NUM_HANDLES_PER_PAGE];
> +};
> +
> +/* FRMR queue holds a list of frmr_handles_page.
> + * num_pages: number of pages in the queue.
> + * ci: current index in the handles array across all pages.
> + */
> +struct frmr_queue {
> +	struct list_head pages_list;
> +	u32 num_pages;
> +	unsigned long ci;
> +};
> +
> +struct ib_frmr_pool {
> +	struct rb_node node;
> +	struct ib_frmr_key key; /* Pool key */
> +
> +	/* Protect access to the queue */
> +	spinlock_t lock;
> +	struct frmr_queue queue;
> +};
> +
> +struct ib_frmr_pools {
> +	struct rb_root rb_root;
> +	rwlock_t rb_lock;
> +	const struct ib_frmr_pool_ops *pool_ops;
> +};
> +
> +#endif /* RDMA_CORE_FRMR_POOLS_H */
> diff --git a/include/rdma/frmr_pools.h b/include/rdma/frmr_pools.h
> new file mode 100644
> index 000000000000..da92ef4d7310
> --- /dev/null
> +++ b/include/rdma/frmr_pools.h
> @@ -0,0 +1,37 @@
> +/* SPDX-License-Identifier: GPL-2.0-only
> + *
> + * Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
> + */
> +
> +#ifndef FRMR_POOLS_H
> +#define FRMR_POOLS_H
> +
> +#include <linux/types.h>
> +#include <asm/page.h>
> +
> +struct ib_device;
> +struct ib_mr;
> +
> +struct ib_frmr_key {
> +	u64 vendor_key;
> +	/* A pool with non-zero kernel_vendor_key is a kernel-only pool. */
> +	u64 kernel_vendor_key;
> +	size_t num_dma_blocks;
> +	int access_flags;
> +	u8 ats:1;
> +};
> +
> +struct ib_frmr_pool_ops {
> +	int (*create_frmrs)(struct ib_device *device, struct ib_frmr_key *key,
> +			    u32 *handles, u32 count);
> +	void (*destroy_frmrs)(struct ib_device *device, u32 *handles,
> +			      u32 count);
> +};
> +
> +int ib_frmr_pools_init(struct ib_device *device,
> +		       const struct ib_frmr_pool_ops *pool_ops);
> +void ib_frmr_pools_cleanup(struct ib_device *device);
> +int ib_frmr_pool_pop(struct ib_device *device, struct ib_mr *mr);
> +int ib_frmr_pool_push(struct ib_device *device, struct ib_mr *mr);
> +
> +#endif /* FRMR_POOLS_H */
> diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
> index 0a85af610b6b..6cc557424e23 100644
> --- a/include/rdma/ib_verbs.h
> +++ b/include/rdma/ib_verbs.h
> @@ -43,6 +43,7 @@
>   #include <uapi/rdma/rdma_user_ioctl.h>
>   #include <uapi/rdma/ib_user_ioctl_verbs.h>
>   #include <linux/pci-tph.h>
> +#include <rdma/frmr_pools.h>
>   
>   #define IB_FW_VERSION_NAME_MAX	ETHTOOL_FWVERS_LEN
>   
> @@ -1886,6 +1887,11 @@ struct ib_mr {
>   	struct ib_dm      *dm;
>   	struct ib_sig_attrs *sig_attrs; /* only for IB_MR_TYPE_INTEGRITY MRs */
>   	struct ib_dmah *dmah;
> +	struct {
> +		struct ib_frmr_pool *pool;
> +		struct ib_frmr_key key;
> +		u32 handle;
> +	} frmr;
>   	/*
>   	 * Implementation details of the RDMA core, don't use in drivers:
>   	 */
> @@ -2879,6 +2885,8 @@ struct ib_device {
>   	struct list_head subdev_list;
>   
>   	enum rdma_nl_name_assign_type name_assign_type;
> +
> +	struct ib_frmr_pools *frmr_pools;
>   };
>   
>   static inline void *rdma_zalloc_obj(struct ib_device *dev, size_t size,

A potential NULL pointer dereference was identified in
ib_frmr_pools_cleanup() when device->frmr_pools is not initialized.
Instead of sending a full v4 of this long series (no review comments
have been received on v3 so far), I'm posting the fix here as a diff.
The change simplifies the error unwinding and allows calling FRMR
pools cleanup without prior pools initialization.

diff --git a/drivers/infiniband/core/frmr_pools.c b/drivers/infiniband/core/frmr_pools.c
index eae15894a3b2..8cecafabda3c 100644
--- a/drivers/infiniband/core/frmr_pools.c
+++ b/drivers/infiniband/core/frmr_pools.c
@@ -92,7 +92,6 @@ static void destroy_frmr_pool(struct ib_device *device,
 		kfree(page);
 	}
 
-	rb_erase(&pool->node, &pools->rb_root);
 	kfree(pool);
 }
 
@@ -135,16 +134,13 @@ EXPORT_SYMBOL(ib_frmr_pools_init);
 void ib_frmr_pools_cleanup(struct ib_device *device)
 {
 	struct ib_frmr_pools *pools = device->frmr_pools;
-	struct rb_node *node = rb_first(&pools->rb_root);
-	struct ib_frmr_pool *pool;
+	struct ib_frmr_pool *pool, *next;
 
-	while (node) {
-		struct rb_node *next = rb_next(node);
+	if (!pools)
+		return;
 
-		pool = rb_entry(node, struct ib_frmr_pool, node);
+	rbtree_postorder_for_each_entry_safe(pool, next, &pools->rb_root, node)
 		destroy_frmr_pool(device, pool);
-		node = next;
-	}
 
 	kfree(pools);
 	device->frmr_pools = NULL;
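
For anyone prototyping against this API, below is a minimal,
illustrative sketch of the driver side of the contract. The
ib_frmr_pool_ops callbacks and ib_frmr_pools_init()/
ib_frmr_pools_cleanup() are the API from this patch; the mydrv_* names
(including mydrv_alloc_mkey()/mydrv_free_mkey()) are hypothetical
stand-ins for whatever HW object a real driver would cache behind a
handle:

#include <rdma/frmr_pools.h>
#include <rdma/ib_verbs.h>

static int mydrv_create_frmrs(struct ib_device *device,
			      struct ib_frmr_key *key, u32 *handles,
			      u32 count)
{
	u32 i;
	int err;

	for (i = 0; i < count; i++) {
		/* Allocate a HW object matching the immutable key
		 * properties (access_flags, num_dma_blocks, ats, ...).
		 */
		err = mydrv_alloc_mkey(device, key, &handles[i]);
		if (err)
			goto err_free;
	}
	return 0;

err_free:
	while (i--)
		mydrv_free_mkey(device, handles[i]);
	return err;
}

static void mydrv_destroy_frmrs(struct ib_device *device, u32 *handles,
				u32 count)
{
	u32 i;

	for (i = 0; i < count; i++)
		mydrv_free_mkey(device, handles[i]);
}

static const struct ib_frmr_pool_ops mydrv_frmr_pool_ops = {
	.create_frmrs = mydrv_create_frmrs,
	.destroy_frmrs = mydrv_destroy_frmrs,
};

The driver would then call ib_frmr_pools_init(ibdev,
&mydrv_frmr_pool_ops) during device init and
ib_frmr_pools_cleanup(ibdev) on teardown (which, with the diff above,
is now safe even if init never ran).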

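And the per-MR flow, again with hypothetical mydrv_* helpers: the
driver fills mr->frmr.key before calling ib_frmr_pool_pop(), which
either recycles a cached handle or invokes ->create_frmrs() for a
single new one, and on deregistration hands the handle back via
ib_frmr_pool_push():

static int mydrv_reg_mr(struct ib_device *ibdev, struct ib_mr *mr,
			size_t num_dma_blocks, int access_flags)
{
	int err;

	/* The key holds the properties the pools are bucketed by.
	 * vendor_key/kernel_vendor_key stay 0 unless the driver needs
	 * to segregate handles by internal, user-invisible properties.
	 */
	mr->frmr.key.num_dma_blocks = num_dma_blocks;
	mr->frmr.key.access_flags = access_flags;
	mr->frmr.key.ats = 0;

	err = ib_frmr_pool_pop(ibdev, mr);
	if (err)
		return err;

	/* Hypothetical modify step: program the HW object behind
	 * mr->frmr.handle with the new translation.
	 */
	return mydrv_modify_mkey(ibdev, mr);
}

static int mydrv_dereg_mr(struct ib_device *ibdev, struct ib_mr *mr)
{
	/* Return the handle to its pool for reuse. */
	return ib_frmr_pool_push(ibdev, mr);
}

Note that compare_keys() also matches a pool whose num_dma_blocks is
larger than requested, up to twice the requested number, so a pop may
return a handle backed by a somewhat bigger HW object.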

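As a last sanity check on the cache granularity: with 4 KiB pages and
a 16-byte struct list_head (the usual 64-bit case),
NUM_HANDLES_PER_PAGE works out to (4096 - 16) / 4 = 1020, i.e., each
pool's queue grows and shrinks its handle cache in chunks of 1020 u32
handles per page.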

Thread overview: 3+ messages
     [not found] <20260202-frmr_pools-v3-0-d4543e82744a@nvidia.com>
     [not found] ` <20260202-frmr_pools-v3-2-d4543e82744a@nvidia.com>
2026-02-22 17:39   ` Edward Srouji [this message]
2026-02-22 17:56     ` [PATCH rdma-next v3 02/11] IB/core: Introduce FRMR pools Edward Srouji
2026-02-02 15:59 [PATCH rdma-next v3 00/11] RDMA/core: Introduce FRMR pools infrastructure Edward Srouji
2026-02-02 15:59 ` [PATCH rdma-next v3 02/11] IB/core: Introduce FRMR pools Edward Srouji
