From: Matthieu Baerts <matttbe@kernel.org>
To: Alexandra Winter <wintera@linux.ibm.com>,
Sidraya Jayagond <sidraya@linux.ibm.com>
Cc: Julian Ruess <julianr@linux.ibm.com>,
Aswin Karuvally <aswin@linux.ibm.com>,
Halil Pasic <pasic@linux.ibm.com>,
Mahanta Jambigi <mjambigi@linux.ibm.com>,
Tony Lu <tonylu@linux.alibaba.com>,
Wen Gu <guwen@linux.alibaba.com>,
linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
linux-s390@vger.kernel.org, Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
Simon Horman <horms@kernel.org>,
Eric Biggers <ebiggers@kernel.org>,
Ard Biesheuvel <ardb@kernel.org>,
Herbert Xu <herbert@gondor.apana.org.au>,
Harald Freudenberger <freude@linux.ibm.com>,
Konstantin Shkolnyy <kshk@linux.ibm.com>,
Dan Williams <dan.j.williams@intel.com>,
Dave Jiang <dave.jiang@intel.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Shannon Nelson <sln@onemain.com>,
Geert Uytterhoeven <geert@linux-m68k.org>,
Jason Gunthorpe <jgg@ziepe.ca>,
"D. Wythe" <alibuda@linux.alibaba.com>,
Dust Li <dust.li@linux.alibaba.com>,
Wenjia Zhang <wenjia@linux.ibm.com>,
David Miller <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Eric Dumazet <edumazet@google.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
Stephen Rothwell <sfr@canb.auug.org.au>
Subject: Re: [PATCH net-next v3 13/14] dibs: Move data path to dibs layer: manual merge
Date: Wed, 24 Sep 2025 10:07:35 +0100 [thread overview]
Message-ID: <74368a5c-48ac-4f8e-a198-40ec1ed3cf5f@kernel.org> (raw)
In-Reply-To: <20250918110500.1731261-14-wintera@linux.ibm.com>
[-- Attachment #1: Type: text/plain, Size: 2586 bytes --]
Hi Alexandra, Sidraya,
On 18/09/2025 12:04, Alexandra Winter wrote:
> Use struct dibs_dmb instead of struct smc_dmb and move the corresponding
> client tables to dibs_dev. Leave driver specific implementation details
> like sba in the device drivers.
>
> Register and unregister dmbs via dibs_dev_ops. A dmb is dedicated to a
> single client, but a dibs device can have dmbs for more than one client.
>
> Trigger dibs clients via dibs_client_ops->handle_irq(), when data is
> received into a dmb. For dibs_loopback replace scheduling an smcd receive
> tasklet with calling dibs_client_ops->handle_irq().
>
> For loopback devices attach_dmb(), detach_dmb() and move_data() need to
> access the dmb tables, so move those to dibs_dev_ops in this patch as well.
>
> Remove remaining definitions of smc_loopback as they are no longer
> required, now that everything is in dibs_loopback.
>
> Note that struct ism_client and struct ism_dev are still required in smc
> until a follow-on patch moves event handling to dibs. (Loopback does not
> use events).
FYI, we got a conflict when merging 'net' into 'net-next' in the MPTCP
tree, due to this patch applied in 'net':
a35c04de2565 ("net/smc: fix warning in smc_rx_splice() when calling
get_page()")
and this one from 'net-next':
cc21191b584c ("dibs: Move data path to dibs layer")
----- Generic Message -----
It is best to avoid conflicts between the 'net' and 'net-next' trees,
but if they cannot be avoided when preparing patches, a note about how
to resolve them is much appreciated.
The conflict has been resolved on our side[1] and the resolution we
suggest is attached to this email. Please report any issues linked to
this conflict resolution as it might be used by others. If you worked on
the mentioned patches, don't hesitate to ACK this conflict resolution.
---------------------------
Regarding this conflict, I hope the resolution is correct. The patch
from 'net' modified 'net/smc/smc_loopback.c' in smc_lo_register_dmb()
and __smc_lo_unregister_dmb(). I applied the same modifications in
'drivers/dibs/dibs_loopback.c', in dibs_lo_register_dmb() and
__dibs_lo_unregister_dmb(). Note that in 'net-next', kfree(cpu_addr) was
used instead of kvfree(cpu_addr), but this was done on purpose. Also, I
had to include mm.h to be able to build this driver.
I also attached a simple diff of the modifications I did.
Does that look OK to both of you?
Note: no rerere cache is available for this kind of conflict.
Cheers,
Matt
[1] https://github.com/multipath-tcp/mptcp_net-next/commit/af2dbdbb0a91
--
Sponsored by the NGI0 Core fund.
[-- Attachment #2: af2dbdbb0a91d92af6248888b566b8b154ebce6d.patch --]
[-- Type: text/x-patch, Size: 9619 bytes --]
diff --cc drivers/dibs/dibs_loopback.c
index 000000000000,b3fd0f8100d4..aa029e29c6b2
mode 000000,100644..100644
--- a/drivers/dibs/dibs_loopback.c
+++ b/drivers/dibs/dibs_loopback.c
@@@ -1,0 -1,356 +1,361 @@@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+ * Functions for dibs loopback/loopback-ism device.
+ *
+ * Copyright (c) 2024, Alibaba Inc.
+ *
+ * Author: Wen Gu <guwen@linux.alibaba.com>
+ * Tony Lu <tonylu@linux.alibaba.com>
+ *
+ */
+
+ #include <linux/bitops.h>
+ #include <linux/device.h>
+ #include <linux/dibs.h>
++#include <linux/mm.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+
+ #include "dibs_loopback.h"
+
+ #define DIBS_LO_SUPPORT_NOCOPY 0x1
+ #define DIBS_DMA_ADDR_INVALID (~(dma_addr_t)0)
+
+ static const char dibs_lo_dev_name[] = "lo";
+ /* global loopback device */
+ static struct dibs_lo_dev *lo_dev;
+
+ static u16 dibs_lo_get_fabric_id(struct dibs_dev *dibs)
+ {
+ return DIBS_LOOPBACK_FABRIC;
+ }
+
+ static int dibs_lo_query_rgid(struct dibs_dev *dibs, const uuid_t *rgid,
+ u32 vid_valid, u32 vid)
+ {
+ /* rgid should be the same as lgid */
+ if (!uuid_equal(rgid, &dibs->gid))
+ return -ENETUNREACH;
+ return 0;
+ }
+
+ static int dibs_lo_max_dmbs(void)
+ {
+ return DIBS_LO_MAX_DMBS;
+ }
+
+ static int dibs_lo_register_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb,
+ struct dibs_client *client)
+ {
+ struct dibs_lo_dmb_node *dmb_node, *tmp_node;
+ struct dibs_lo_dev *ldev;
++ struct folio *folio;
+ unsigned long flags;
+ int sba_idx, rc;
+
+ ldev = dibs->drv_priv;
+ sba_idx = dmb->idx;
+ /* check space for new dmb */
+ for_each_clear_bit(sba_idx, ldev->sba_idx_mask, DIBS_LO_MAX_DMBS) {
+ if (!test_and_set_bit(sba_idx, ldev->sba_idx_mask))
+ break;
+ }
+ if (sba_idx == DIBS_LO_MAX_DMBS)
+ return -ENOSPC;
+
+ dmb_node = kzalloc(sizeof(*dmb_node), GFP_KERNEL);
+ if (!dmb_node) {
+ rc = -ENOMEM;
+ goto err_bit;
+ }
+
+ dmb_node->sba_idx = sba_idx;
+ dmb_node->len = dmb->dmb_len;
- dmb_node->cpu_addr = kzalloc(dmb_node->len, GFP_KERNEL |
- __GFP_NOWARN | __GFP_NORETRY |
- __GFP_NOMEMALLOC);
- if (!dmb_node->cpu_addr) {
++
++ /* not critical; fail under memory pressure and fallback to TCP */
++ folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN | __GFP_NOMEMALLOC |
++ __GFP_NORETRY | __GFP_ZERO,
++ get_order(dmb_node->len));
++ if (!folio) {
+ rc = -ENOMEM;
+ goto err_node;
+ }
++ dmb_node->cpu_addr = folio_address(folio);
+ dmb_node->dma_addr = DIBS_DMA_ADDR_INVALID;
+ refcount_set(&dmb_node->refcnt, 1);
+
+ again:
+ /* add new dmb into hash table */
+ get_random_bytes(&dmb_node->token, sizeof(dmb_node->token));
+ write_lock_bh(&ldev->dmb_ht_lock);
+ hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb_node->token) {
+ if (tmp_node->token == dmb_node->token) {
+ write_unlock_bh(&ldev->dmb_ht_lock);
+ goto again;
+ }
+ }
+ hash_add(ldev->dmb_ht, &dmb_node->list, dmb_node->token);
+ write_unlock_bh(&ldev->dmb_ht_lock);
+ atomic_inc(&ldev->dmb_cnt);
+
+ dmb->idx = dmb_node->sba_idx;
+ dmb->dmb_tok = dmb_node->token;
+ dmb->cpu_addr = dmb_node->cpu_addr;
+ dmb->dma_addr = dmb_node->dma_addr;
+ dmb->dmb_len = dmb_node->len;
+
+ spin_lock_irqsave(&dibs->lock, flags);
+ dibs->dmb_clientid_arr[sba_idx] = client->id;
+ spin_unlock_irqrestore(&dibs->lock, flags);
+
+ return 0;
+
+ err_node:
+ kfree(dmb_node);
+ err_bit:
+ clear_bit(sba_idx, ldev->sba_idx_mask);
+ return rc;
+ }
+
+ static void __dibs_lo_unregister_dmb(struct dibs_lo_dev *ldev,
+ struct dibs_lo_dmb_node *dmb_node)
+ {
+ /* remove dmb from hash table */
+ write_lock_bh(&ldev->dmb_ht_lock);
+ hash_del(&dmb_node->list);
+ write_unlock_bh(&ldev->dmb_ht_lock);
+
+ clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask);
- kfree(dmb_node->cpu_addr);
++ folio_put(virt_to_folio(dmb_node->cpu_addr));
+ kfree(dmb_node);
+
+ if (atomic_dec_and_test(&ldev->dmb_cnt))
+ wake_up(&ldev->ldev_release);
+ }
+
+ static int dibs_lo_unregister_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb)
+ {
+ struct dibs_lo_dmb_node *dmb_node = NULL, *tmp_node;
+ struct dibs_lo_dev *ldev;
+ unsigned long flags;
+
+ ldev = dibs->drv_priv;
+
+ /* find dmb from hash table */
+ read_lock_bh(&ldev->dmb_ht_lock);
+ hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb->dmb_tok) {
+ if (tmp_node->token == dmb->dmb_tok) {
+ dmb_node = tmp_node;
+ break;
+ }
+ }
+ read_unlock_bh(&ldev->dmb_ht_lock);
+ if (!dmb_node)
+ return -EINVAL;
+
+ if (refcount_dec_and_test(&dmb_node->refcnt)) {
+ spin_lock_irqsave(&dibs->lock, flags);
+ dibs->dmb_clientid_arr[dmb_node->sba_idx] = NO_DIBS_CLIENT;
+ spin_unlock_irqrestore(&dibs->lock, flags);
+
+ __dibs_lo_unregister_dmb(ldev, dmb_node);
+ }
+ return 0;
+ }
+
+ static int dibs_lo_support_dmb_nocopy(struct dibs_dev *dibs)
+ {
+ return DIBS_LO_SUPPORT_NOCOPY;
+ }
+
+ static int dibs_lo_attach_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb)
+ {
+ struct dibs_lo_dmb_node *dmb_node = NULL, *tmp_node;
+ struct dibs_lo_dev *ldev;
+
+ ldev = dibs->drv_priv;
+
+ /* find dmb_node according to dmb->dmb_tok */
+ read_lock_bh(&ldev->dmb_ht_lock);
+ hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb->dmb_tok) {
+ if (tmp_node->token == dmb->dmb_tok) {
+ dmb_node = tmp_node;
+ break;
+ }
+ }
+ if (!dmb_node) {
+ read_unlock_bh(&ldev->dmb_ht_lock);
+ return -EINVAL;
+ }
+ read_unlock_bh(&ldev->dmb_ht_lock);
+
+ if (!refcount_inc_not_zero(&dmb_node->refcnt))
+ /* the dmb is being unregistered, but has
+ * not been removed from the hash table.
+ */
+ return -EINVAL;
+
+ /* provide dmb information */
+ dmb->idx = dmb_node->sba_idx;
+ dmb->dmb_tok = dmb_node->token;
+ dmb->cpu_addr = dmb_node->cpu_addr;
+ dmb->dma_addr = dmb_node->dma_addr;
+ dmb->dmb_len = dmb_node->len;
+ return 0;
+ }
+
+ static int dibs_lo_detach_dmb(struct dibs_dev *dibs, u64 token)
+ {
+ struct dibs_lo_dmb_node *dmb_node = NULL, *tmp_node;
+ struct dibs_lo_dev *ldev;
+
+ ldev = dibs->drv_priv;
+
+ /* find dmb_node according to dmb->dmb_tok */
+ read_lock_bh(&ldev->dmb_ht_lock);
+ hash_for_each_possible(ldev->dmb_ht, tmp_node, list, token) {
+ if (tmp_node->token == token) {
+ dmb_node = tmp_node;
+ break;
+ }
+ }
+ if (!dmb_node) {
+ read_unlock_bh(&ldev->dmb_ht_lock);
+ return -EINVAL;
+ }
+ read_unlock_bh(&ldev->dmb_ht_lock);
+
+ if (refcount_dec_and_test(&dmb_node->refcnt))
+ __dibs_lo_unregister_dmb(ldev, dmb_node);
+ return 0;
+ }
+
+ static int dibs_lo_move_data(struct dibs_dev *dibs, u64 dmb_tok,
+ unsigned int idx, bool sf, unsigned int offset,
+ void *data, unsigned int size)
+ {
+ struct dibs_lo_dmb_node *rmb_node = NULL, *tmp_node;
+ struct dibs_lo_dev *ldev;
+ u16 s_mask;
+ u8 client_id;
+ u32 sba_idx;
+
+ ldev = dibs->drv_priv;
+
+ read_lock_bh(&ldev->dmb_ht_lock);
+ hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb_tok) {
+ if (tmp_node->token == dmb_tok) {
+ rmb_node = tmp_node;
+ break;
+ }
+ }
+ if (!rmb_node) {
+ read_unlock_bh(&ldev->dmb_ht_lock);
+ return -EINVAL;
+ }
+ memcpy((char *)rmb_node->cpu_addr + offset, data, size);
+ sba_idx = rmb_node->sba_idx;
+ read_unlock_bh(&ldev->dmb_ht_lock);
+
+ if (!sf)
+ return 0;
+
+ spin_lock(&dibs->lock);
+ client_id = dibs->dmb_clientid_arr[sba_idx];
+ s_mask = ror16(0x1000, idx);
+ if (likely(client_id != NO_DIBS_CLIENT && dibs->subs[client_id]))
+ dibs->subs[client_id]->ops->handle_irq(dibs, sba_idx, s_mask);
+ spin_unlock(&dibs->lock);
+
+ return 0;
+ }
+
+ static const struct dibs_dev_ops dibs_lo_ops = {
+ .get_fabric_id = dibs_lo_get_fabric_id,
+ .query_remote_gid = dibs_lo_query_rgid,
+ .max_dmbs = dibs_lo_max_dmbs,
+ .register_dmb = dibs_lo_register_dmb,
+ .unregister_dmb = dibs_lo_unregister_dmb,
+ .move_data = dibs_lo_move_data,
+ .support_mmapped_rdmb = dibs_lo_support_dmb_nocopy,
+ .attach_dmb = dibs_lo_attach_dmb,
+ .detach_dmb = dibs_lo_detach_dmb,
+ };
+
+ static void dibs_lo_dev_init(struct dibs_lo_dev *ldev)
+ {
+ rwlock_init(&ldev->dmb_ht_lock);
+ hash_init(ldev->dmb_ht);
+ atomic_set(&ldev->dmb_cnt, 0);
+ init_waitqueue_head(&ldev->ldev_release);
+ }
+
+ static void dibs_lo_dev_exit(struct dibs_lo_dev *ldev)
+ {
+ if (atomic_read(&ldev->dmb_cnt))
+ wait_event(ldev->ldev_release, !atomic_read(&ldev->dmb_cnt));
+ }
+
+ static int dibs_lo_dev_probe(void)
+ {
+ struct dibs_lo_dev *ldev;
+ struct dibs_dev *dibs;
+ int ret;
+
+ ldev = kzalloc(sizeof(*ldev), GFP_KERNEL);
+ if (!ldev)
+ return -ENOMEM;
+
+ dibs = dibs_dev_alloc();
+ if (!dibs) {
+ kfree(ldev);
+ return -ENOMEM;
+ }
+
+ ldev->dibs = dibs;
+ dibs->drv_priv = ldev;
+ dibs_lo_dev_init(ldev);
+ uuid_gen(&dibs->gid);
+ dibs->ops = &dibs_lo_ops;
+
+ dibs->dev.parent = NULL;
+ dev_set_name(&dibs->dev, "%s", dibs_lo_dev_name);
+
+ ret = dibs_dev_add(dibs);
+ if (ret)
+ goto err_reg;
+ lo_dev = ldev;
+ return 0;
+
+ err_reg:
+ kfree(dibs->dmb_clientid_arr);
+ /* pairs with dibs_dev_alloc() */
+ put_device(&dibs->dev);
+ kfree(ldev);
+
+ return ret;
+ }
+
+ static void dibs_lo_dev_remove(void)
+ {
+ if (!lo_dev)
+ return;
+
+ dibs_dev_del(lo_dev->dibs);
+ dibs_lo_dev_exit(lo_dev);
+ /* pairs with dibs_dev_alloc() */
+ put_device(&lo_dev->dibs->dev);
+ kfree(lo_dev);
+ lo_dev = NULL;
+ }
+
+ int dibs_loopback_init(void)
+ {
+ return dibs_lo_dev_probe();
+ }
+
+ void dibs_loopback_exit(void)
+ {
+ dibs_lo_dev_remove();
+ }
[-- Attachment #3: conflict_dibs.patch --]
[-- Type: text/x-patch, Size: 1658 bytes --]
diff --git a/drivers/dibs/dibs_loopback.c b/drivers/dibs/dibs_loopback.c
index b3fd0f8100d4..aa029e29c6b2 100644
--- a/drivers/dibs/dibs_loopback.c
+++ b/drivers/dibs/dibs_loopback.c
@@ -12,6 +12,7 @@
#include <linux/bitops.h>
#include <linux/device.h>
#include <linux/dibs.h>
+#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
@@ -49,6 +50,7 @@ static int dibs_lo_register_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb,
{
struct dibs_lo_dmb_node *dmb_node, *tmp_node;
struct dibs_lo_dev *ldev;
+ struct folio *folio;
unsigned long flags;
int sba_idx, rc;
@@ -70,13 +72,16 @@ static int dibs_lo_register_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb,
dmb_node->sba_idx = sba_idx;
dmb_node->len = dmb->dmb_len;
- dmb_node->cpu_addr = kzalloc(dmb_node->len, GFP_KERNEL |
- __GFP_NOWARN | __GFP_NORETRY |
- __GFP_NOMEMALLOC);
- if (!dmb_node->cpu_addr) {
+
+ /* not critical; fail under memory pressure and fallback to TCP */
+ folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN | __GFP_NOMEMALLOC |
+ __GFP_NORETRY | __GFP_ZERO,
+ get_order(dmb_node->len));
+ if (!folio) {
rc = -ENOMEM;
goto err_node;
}
+ dmb_node->cpu_addr = folio_address(folio);
dmb_node->dma_addr = DIBS_DMA_ADDR_INVALID;
refcount_set(&dmb_node->refcnt, 1);
@@ -122,7 +127,7 @@ static void __dibs_lo_unregister_dmb(struct dibs_lo_dev *ldev,
write_unlock_bh(&ldev->dmb_ht_lock);
clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask);
- kfree(dmb_node->cpu_addr);
+ folio_put(virt_to_folio(dmb_node->cpu_addr));
kfree(dmb_node);
if (atomic_dec_and_test(&ldev->dmb_cnt))