linux-rdma.vger.kernel.org archive mirror
* Re: [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
       [not found] ` <1510852885-25519-1-git-send-email-maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-11-16 18:08   ` Leon Romanovsky
       [not found]     ` <20171116180853.GN18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Leon Romanovsky @ 2017-11-16 18:08 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: hch-jcswGhMUV9g, sagi-NQWnxTmZq1alnMjI0IkVqw,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, idanb-VPRAkNaXOzVWk0Htik3J/w,
	RDMA mailing list

On Thu, Nov 16, 2017 at 07:21:22PM +0200, Max Gurtovoy wrote:
> Since there is an active discussion regarding the CQ pool architecture, I decided to push
> this feature (maybe it can be pushed before CQ pool).

Max,

Thanks for CCing me, can you please repost the series and CC linux-rdma too?

>
> This is a new feature for NVMEoF RDMA target, that is intended to save resource allocation
> (by sharing them) and utilize the locality of completions to get the best performance with
> Shared Receive Queues (SRQs). We'll create a SRQ per completion vector (and not per device)
> using a new API (SRQ pool, added to this patchset too) and associate each created QP/CQ with
> an appropriate SRQ. This will also reduce the lock contention on the single SRQ per device
> (today's solution).
>
> My testing environment included 4 initiators (CX5, CX5, CX4, CX3) that were connected to 4
> subsystems (1 ns per sub) through 2 ports (each initiator connected to a unique subsystem
> backed by a different null_blk device) using a switch to the NVMEoF target (CX5).
> I used the RoCE link layer.
>
> Configuration:
>  - irqbalance stopped on each server
>  - set_irq_affinity.sh run on each interface
>  - 2 initiators run traffic through port 1
>  - 2 initiators run traffic through port 2
>  - On the initiators, set register_always=N
>  - fio with 12 jobs, iodepth 128
>
> Memory consumption calculation for recv buffers (target):
>  - Multiple SRQ: SRQ_size * comp_num * ib_devs_num * inline_buffer_size
>  - Single SRQ: SRQ_size * 1 * ib_devs_num * inline_buffer_size
>  - MQ: RQ_size * CPU_num * ctrl_num * inline_buffer_size
>
> Cases:
>  1. Multiple SRQ with 1024 entries:
>     - Mem = 1024 * 24 * 2 * 4k = 192MiB (constant; does not depend on the number of initiators)
>  2. Multiple SRQ with 256 entries:
>     - Mem = 256 * 24 * 2 * 4k = 48MiB (constant; does not depend on the number of initiators)
>  3. MQ:
>     - Mem = 256 * 24 * 8 * 4k = 192MiB (grows with every newly created ctrl)
>  4. Single SRQ (current SRQ implementation):
>     - Mem = 4096 * 1 * 2 * 4k = 32MiB (constant; does not depend on the number of initiators)
>
> results:
>
> BS   1.read (target CPU)   2.read (target CPU)   3.read (target CPU)   4.read (target CPU)
> ---  --------------------  --------------------  --------------------  --------------------
> 1k   5.88M (80%)           5.45M (72%)           6.77M (91%)           2.2M  (72%)
> 2k   3.56M (65%)           3.45M (59%)           3.72M (64%)           2.12M (59%)
> 4k   1.8M  (33%)           1.87M (32%)           1.88M (32%)           1.59M (34%)
>
> BS   1.write (target CPU)  2.write (target CPU)  3.write (target CPU)  4.write (target CPU)
> ---  --------------------  --------------------  --------------------  --------------------
> 1k   5.42M (63%)           5.14M (55%)           7.75M (82%)           2.14M (74%)
> 2k   4.15M (56%)           4.14M (51%)           4.16M (52%)           2.08M (73%)
> 4k   2.17M (28%)           2.17M (27%)           2.16M (28%)           1.62M (24%)
>
>
> We can see the perf improvement between Case 2 and Case 4 (same order of resources).
> We can see the benefit in resource consumption (memory and CPU), with a small perf loss,
> between Cases 2 and 3.
> There is still an open question about the perf difference at 1k between Case 1 and
> Case 3, but I guess we can investigate and improve it incrementally.
>
> Thanks to Idan Burstein and Oren Duer for suggesting this nice feature.
>
> Changes from V1:
>  - Added SRQ pool per protection domain for IB/core
>  - Fixed few comments from Christoph and Sagi
>
> Max Gurtovoy (3):
>   IB/core: add a simple SRQ pool per PD
>   nvmet-rdma: use srq pointer in rdma_cmd
>   nvmet-rdma: use SRQ per completion vector
>
>  drivers/infiniband/core/Makefile   |   2 +-
>  drivers/infiniband/core/srq_pool.c | 106 +++++++++++++++++++++
>  drivers/infiniband/core/verbs.c    |   4 +
>  drivers/nvme/target/rdma.c         | 190 +++++++++++++++++++++++++++----------
>  include/rdma/ib_verbs.h            |   5 +
>  include/rdma/srq_pool.h            |  46 +++++++++
>  6 files changed, 301 insertions(+), 52 deletions(-)
>  create mode 100644 drivers/infiniband/core/srq_pool.c
>  create mode 100644 include/rdma/srq_pool.h
>
> --
> 1.8.3.1
>
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] IB/core: add a simple SRQ pool per PD
       [not found]   ` <1510852885-25519-2-git-send-email-maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-11-16 18:10     ` Leon Romanovsky
       [not found]       ` <20171116181049.GO18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Leon Romanovsky @ 2017-11-16 18:10 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: hch-jcswGhMUV9g, sagi-NQWnxTmZq1alnMjI0IkVqw,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, RDMA mailing list

On Thu, Nov 16, 2017 at 07:21:23PM +0200, Max Gurtovoy wrote:
> Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> ---
>  drivers/infiniband/core/Makefile   |   2 +-
>  drivers/infiniband/core/srq_pool.c | 106 +++++++++++++++++++++++++++++++++++++
>  drivers/infiniband/core/verbs.c    |   4 ++
>  include/rdma/ib_verbs.h            |   5 ++
>  include/rdma/srq_pool.h            |  46 ++++++++++++++++
>  5 files changed, 162 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/infiniband/core/srq_pool.c
>  create mode 100644 include/rdma/srq_pool.h
>
> diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile
> index b4df164..365da0c 100644
> --- a/drivers/infiniband/core/Makefile
> +++ b/drivers/infiniband/core/Makefile
> @@ -11,7 +11,7 @@ ib_core-y :=			packer.o ud_header.o verbs.o cq.o rw.o sysfs.o \
>  				device.o fmr_pool.o cache.o netlink.o \
>  				roce_gid_mgmt.o mr_pool.o addr.o sa_query.o \
>  				multicast.o mad.o smi.o agent.o mad_rmpp.o \
> -				security.o nldev.o
> +				security.o nldev.o srq_pool.o
>
>  ib_core-$(CONFIG_INFINIBAND_USER_MEM) += umem.o
>  ib_core-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += umem_odp.o umem_rbtree.o
> diff --git a/drivers/infiniband/core/srq_pool.c b/drivers/infiniband/core/srq_pool.c
> new file mode 100644
> index 0000000..4f4a089
> --- /dev/null
> +++ b/drivers/infiniband/core/srq_pool.c
> @@ -0,0 +1,106 @@
> +/*
> + * Copyright (c) 2017 Mellanox Technologies. All rights reserved.
> + *
> + * This software is available to you under a choice of one of two
> + * licenses.  You may choose to be licensed under the terms of the GNU
> + * General Public License (GPL) Version 2, available from the file
> + * COPYING in the main directory of this source tree, or the
> + * OpenIB.org BSD license below:
> + *
> + *     Redistribution and use in source and binary forms, with or
> + *     without modification, are permitted provided that the following
> + *     conditions are met:
> + *
> + *      - Redistributions of source code must retain the above
> + *        copyright notice, this list of conditions and the following
> + *        disclaimer.
> + *
> + *      - Redistributions in binary form must reproduce the above
> + *        copyright notice, this list of conditions and the following
> + *        disclaimer in the documentation and/or other materials
> + *        provided with the distribution.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> + * SOFTWARE.
> + *
> + */
> +
> +#include <rdma/srq_pool.h>
> +
> +struct ib_srq *ib_srq_pool_get(struct ib_pd *pd)
> +{
> +	struct ib_srq *srq;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&pd->srq_lock, flags);
> +	srq = list_first_entry_or_null(&pd->srqs, struct ib_srq, pd_entry);
> +	if (srq) {
> +		list_del(&srq->pd_entry);
> +		pd->srqs_used++;
> +	}
> +	spin_unlock_irqrestore(&pd->srq_lock, flags);
> +
> +	return srq;
> +}
> +EXPORT_SYMBOL(ib_srq_pool_get);
> +
> +void ib_srq_pool_put(struct ib_pd *pd, struct ib_srq *srq)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&pd->srq_lock, flags);
> +	list_add(&srq->pd_entry, &pd->srqs);
> +	pd->srqs_used--;
> +	spin_unlock_irqrestore(&pd->srq_lock, flags);
> +}
> +EXPORT_SYMBOL(ib_srq_pool_put);
> +
> +int ib_srq_pool_init(struct ib_pd *pd, int nr,
> +		struct ib_srq_init_attr *srq_attr)
> +{
> +	struct ib_srq *srq;
> +	unsigned long flags;
> +	int ret, i;
> +
> +	for (i = 0; i < nr; i++) {
> +		srq = ib_create_srq(pd, srq_attr);
> +		if (IS_ERR(srq)) {
> +			ret = PTR_ERR(srq);
> +			goto out;
> +		}
> +
> +		spin_lock_irqsave(&pd->srq_lock, flags);
> +		list_add_tail(&srq->pd_entry, &pd->srqs);
> +		spin_unlock_irqrestore(&pd->srq_lock, flags);
> +	}
> +
> +	return 0;
> +out:
> +	ib_srq_pool_destroy(pd);
> +	return ret;
> +}
> +EXPORT_SYMBOL(ib_srq_pool_init);
> +
> +void ib_srq_pool_destroy(struct ib_pd *pd)
> +{
> +	struct ib_srq *srq;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&pd->srq_lock, flags);
> +	while (!list_empty(&pd->srqs)) {
> +		srq = list_first_entry(&pd->srqs, struct ib_srq, pd_entry);
> +		list_del(&srq->pd_entry);
> +
> +		/* drop the lock while destroying; ib_destroy_srq() may sleep */
> +		spin_unlock_irqrestore(&pd->srq_lock, flags);
> +		ib_destroy_srq(srq);
> +		spin_lock_irqsave(&pd->srq_lock, flags);
> +	}
> +	spin_unlock_irqrestore(&pd->srq_lock, flags);
> +}
> +EXPORT_SYMBOL(ib_srq_pool_destroy);
> diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> index de57d6c..74db405 100644
> --- a/drivers/infiniband/core/verbs.c
> +++ b/drivers/infiniband/core/verbs.c
> @@ -233,6 +233,9 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags,
>  	pd->__internal_mr = NULL;
>  	atomic_set(&pd->usecnt, 0);
>  	pd->flags = flags;
> +	pd->srqs_used = 0;
> +	spin_lock_init(&pd->srq_lock);
> +	INIT_LIST_HEAD(&pd->srqs);
>
>  	if (device->attrs.device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY)
>  		pd->local_dma_lkey = device->local_dma_lkey;
> @@ -289,6 +292,7 @@ void ib_dealloc_pd(struct ib_pd *pd)
>  		pd->__internal_mr = NULL;
>  	}
>
> +	WARN_ON_ONCE(pd->srqs_used > 0);
>  	/* uverbs manipulates usecnt with proper locking, while the kabi
>  	   requires the caller to guarantee we can't race here. */
>  	WARN_ON(atomic_read(&pd->usecnt));
> diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
> index bdb1279..fdc721f 100644
> --- a/include/rdma/ib_verbs.h
> +++ b/include/rdma/ib_verbs.h
> @@ -1512,6 +1512,10 @@ struct ib_pd {
>
>  	u32			unsafe_global_rkey;
>
> +	spinlock_t		srq_lock;
> +	int			srqs_used;
> +	struct list_head	srqs;
> +
>  	/*
>  	 * Implementation details of the RDMA core, don't use in drivers:
>  	 */
> @@ -1566,6 +1570,7 @@ struct ib_srq {
>  	void		       *srq_context;
>  	enum ib_srq_type	srq_type;
>  	atomic_t		usecnt;
> +	struct list_head	pd_entry; /* srq pool entry */
>
>  	struct {
>  		struct ib_cq   *cq;
> diff --git a/include/rdma/srq_pool.h b/include/rdma/srq_pool.h
> new file mode 100644
> index 0000000..04aa059
> --- /dev/null
> +++ b/include/rdma/srq_pool.h
> @@ -0,0 +1,46 @@
> +/*
> + * Copyright (c) 2017 Mellanox Technologies. All rights reserved.
> + *
> + * This software is available to you under a choice of one of two
> + * licenses.  You may choose to be licensed under the terms of the GNU
> + * General Public License (GPL) Version 2, available from the file
> + * COPYING in the main directory of this source tree, or the
> + * OpenIB.org BSD license below:
> + *
> + *     Redistribution and use in source and binary forms, with or
> + *     without modification, are permitted provided that the following
> + *     conditions are met:
> + *
> + *      - Redistributions of source code must retain the above
> + *        copyright notice, this list of conditions and the following
> + *        disclaimer.
> + *
> + *      - Redistributions in binary form must reproduce the above
> + *        copyright notice, this list of conditions and the following
> + *        disclaimer in the documentation and/or other materials
> + *        provided with the distribution.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> + * SOFTWARE.
> + *
> + */
> +
> +#ifndef _RDMA_SRQ_POOL_H
> +#define _RDMA_SRQ_POOL_H 1
> +
> +#include <rdma/ib_verbs.h>
> +
> +struct ib_srq *ib_srq_pool_get(struct ib_pd *pd);
> +void ib_srq_pool_put(struct ib_pd *pd, struct ib_srq *srq);
> +
> +int ib_srq_pool_init(struct ib_pd *pd, int nr,
> +		struct ib_srq_init_attr *srq_attr);
> +void ib_srq_pool_destroy(struct ib_pd *pd);

Can you please use rdma_ prefix instead of ib_ prefix?

Thanks

> +
> +#endif /* _RDMA_SRQ_POOL_H */
> --
> 1.8.3.1
>

* Re: [PATCH 1/3] IB/core: add a simple SRQ pool per PD
       [not found]       ` <20171116181049.GO18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2017-11-17 19:12         ` Max Gurtovoy
       [not found]           ` <c94d2e03-dd32-ff0a-789f-aae4d65f3955-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Max Gurtovoy @ 2017-11-17 19:12 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: hch-jcswGhMUV9g, sagi-NQWnxTmZq1alnMjI0IkVqw,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, RDMA mailing list

>> +#ifndef _RDMA_SRQ_POOL_H
>> +#define _RDMA_SRQ_POOL_H 1
>> +
>> +#include <rdma/ib_verbs.h>
>> +
>> +struct ib_srq *ib_srq_pool_get(struct ib_pd *pd);
>> +void ib_srq_pool_put(struct ib_pd *pd, struct ib_srq *srq);
>> +
>> +int ib_srq_pool_init(struct ib_pd *pd, int nr,
>> +		struct ib_srq_init_attr *srq_attr);
>> +void ib_srq_pool_destroy(struct ib_pd *pd);
> 
> Can you please use rdma_ prefix instead of ib_ prefix?
> 
> Thanks
> 

Do you mean rdma_srq_pool_get instead of ib_srq_pool_get? Is this a new 
convention? This is a pool that contains ib_srq objects, not rdma_srq 
objects..

I can do it if needed.

>> +
>> +#endif /* _RDMA_SRQ_POOL_H */
>> --
>> 1.8.3.1
>>

* Re: [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
       [not found]     ` <20171116180853.GN18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2017-11-17 19:18       ` Max Gurtovoy
  0 siblings, 0 replies; 11+ messages in thread
From: Max Gurtovoy @ 2017-11-17 19:18 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: hch-jcswGhMUV9g, sagi-NQWnxTmZq1alnMjI0IkVqw,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, idanb-VPRAkNaXOzVWk0Htik3J/w,
	RDMA mailing list



On 11/16/2017 8:08 PM, Leon Romanovsky wrote:
> On Thu, Nov 16, 2017 at 07:21:22PM +0200, Max Gurtovoy wrote:
>> Since there is an active discussion regarding the CQ pool architecture, I decided to push
>> this feature (maybe it can be pushed before CQ pool).
> 
> Max,
> 
> Thanks for CCing me, can you please repost the series and CC linux-rdma too?

Sure, I'll send V3 and CC linux-rdma too.


* Re: [PATCH 1/3] IB/core: add a simple SRQ pool per PD
       [not found]           ` <c94d2e03-dd32-ff0a-789f-aae4d65f3955-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-11-18  7:36             ` Leon Romanovsky
  0 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2017-11-18  7:36 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: hch-jcswGhMUV9g, sagi-NQWnxTmZq1alnMjI0IkVqw,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, RDMA mailing list


On Fri, Nov 17, 2017 at 09:12:46PM +0200, Max Gurtovoy wrote:
> > > +#ifndef _RDMA_SRQ_POOL_H
> > > +#define _RDMA_SRQ_POOL_H 1
> > > +
> > > +#include <rdma/ib_verbs.h>
> > > +
> > > +struct ib_srq *ib_srq_pool_get(struct ib_pd *pd);
> > > +void ib_srq_pool_put(struct ib_pd *pd, struct ib_srq *srq);
> > > +
> > > +int ib_srq_pool_init(struct ib_pd *pd, int nr,
> > > +		struct ib_srq_init_attr *srq_attr);
> > > +void ib_srq_pool_destroy(struct ib_pd *pd);
> >
> > Can you please use rdma_ prefix instead of ib_ prefix?
> >
> > Thanks
> >
>
> Do you mean rdma_srq_pool_get instead of ib_srq_pool_get? Is this a new
> convention? This is a pool that contains ib_srq objects, not rdma_srq
> objects..

I want to follow a similar pattern to the one chosen for various popular
pool implementations, like dma_pool, mempool, etc.

IMHO rdma_srqpool_* looks cleaner than the current naming.

Thanks

>
> I can do it if needed.

>
> > > +
> > > +#endif /* _RDMA_SRQ_POOL_H */
> > > --
> > > 1.8.3.1
> > >

* Re: [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
       [not found]     ` <a8e2be0b-deb7-8cc1-92ce-fc5dea3e241a-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-11-18 12:52       ` Leon Romanovsky
       [not found]         ` <20171118125229.GT18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Leon Romanovsky @ 2017-11-18 12:52 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: Sagi Grimberg, hch-jcswGhMUV9g,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, idanb-VPRAkNaXOzVWk0Htik3J/w,
	RDMA mailing list

On Fri, Nov 17, 2017 at 09:32:42PM +0200, Max Gurtovoy wrote:
>
>
> On 11/16/2017 8:36 PM, Sagi Grimberg wrote:
> >
> > > Since there is an active discussion regarding the CQ pool
> > > architecture, I decided to push
> > > this feature (maybe it can be pushed before CQ pool).
> > >
> > > This is a new feature for NVMEoF RDMA target,
> >
> > Any chance having this for the rest? isert, srpt, svcrdma?
> >
>
> We can implement it for isert, but I think it's better to see how the CQ
> pool will be defined first.
> It can bring a big benefit and improvement for ib_srpt (similar to NVMEoF
> target) but I'm not sure if I can commit for that one soon..

Too bad, but I don't see inclusion of generic SRQ pool code in RDMA
subsystem without actual conversion of existing ULP clients.

Thanks

* Re: [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
       [not found]         ` <20171118125229.GT18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2017-11-18 13:57           ` Max Gurtovoy
       [not found]             ` <935af437-d69e-e258-c00a-8bf9a04f9988-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Max Gurtovoy @ 2017-11-18 13:57 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Sagi Grimberg, hch-jcswGhMUV9g,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, idanb-VPRAkNaXOzVWk0Htik3J/w,
	RDMA mailing list



On 11/18/2017 2:52 PM, Leon Romanovsky wrote:
> On Fri, Nov 17, 2017 at 09:32:42PM +0200, Max Gurtovoy wrote:
>>
>>
>> On 11/16/2017 8:36 PM, Sagi Grimberg wrote:
>>>
>>>> Since there is an active discussion regarding the CQ pool
>>>> architecture, I decided to push
>>>> this feature (maybe it can be pushed before CQ pool).
>>>>
>>>> This is a new feature for NVMEoF RDMA target,
>>>
>>> Any chance having this for the rest? isert, srpt, svcrdma?
>>>
>>
>> We can implement it for isert, but I think it's better to see how the CQ
>> pool will be defined first.
>> It can bring a big benefit and improvement for ib_srpt (similar to NVMEoF
>> target) but I'm not sure if I can commit for that one soon..
> 
> Too bad, but I don't see inclusion of generic SRQ pool code in RDMA
> subsystem without actual conversion of existing ULP clients.
> 
> Thanks
> 

This patchset adds this feature to NVMEoF target so actually there are 
ULPs that use it. Same issue we have with mr_pool that only RDMA rw.c 
use it (Now we're adding it to NVMEoF initiators too - in review).
I can add srq_pool to iSER target code but I don't want to re-write it 
again in few weeks, when the CQ pool will be added.
Regarding other ULPs, we don't have a testing environment for them so I 
prefer not to commit on their implementation in the near future.

I don't know why we can't add this feature "as is".
Other ULPs maintainers might use it once it will be pushed.

* Re: [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
       [not found]             ` <935af437-d69e-e258-c00a-8bf9a04f9988-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-11-18 14:40               ` Leon Romanovsky
       [not found]                 ` <20171118144042.GU18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Leon Romanovsky @ 2017-11-18 14:40 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: Sagi Grimberg, hch-jcswGhMUV9g,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, idanb-VPRAkNaXOzVWk0Htik3J/w,
	RDMA mailing list


On Sat, Nov 18, 2017 at 03:57:15PM +0200, Max Gurtovoy wrote:
>
>
> On 11/18/2017 2:52 PM, Leon Romanovsky wrote:
> > On Fri, Nov 17, 2017 at 09:32:42PM +0200, Max Gurtovoy wrote:
> > >
> > >
> > > On 11/16/2017 8:36 PM, Sagi Grimberg wrote:
> > > >
> > > > > Since there is an active discussion regarding the CQ pool
> > > > > architecture, I decided to push
> > > > > this feature (maybe it can be pushed before CQ pool).
> > > > >
> > > > > This is a new feature for NVMEoF RDMA target,
> > > >
> > > > Any chance having this for the rest? isert, srpt, svcrdma?
> > > >
> > >
> > > We can implement it for isert, but I think it's better to see how the CQ
> > > pool will be defined first.
> > > It can bring a big benefit and improvement for ib_srpt (similar to NVMEoF
> > > target) but I'm not sure if I can commit for that one soon..
> >
> > Too bad, but I don't see inclusion of generic SRQ pool code in RDMA
> > subsystem without actual conversion of existing ULP clients.
> >
> > Thanks
> >
>
> This patchset adds this feature to NVMEoF target so actually there are ULPs
> that use it. Same issue we have with mr_pool that only RDMA rw.c use it (Now
> we're adding it to NVMEoF initiators too - in review).

The difference between your code and mr_pool is that mr_pool is part of
RDMA/core and in use by RDMA/core (rw.c), which in use by all ULPs.

However if you insist, we can remove EXPORT_SYMBOL from mr_pool
implementation, because of being part of RDMA/core and it blows
symbols map without need. Should I?

In your case, you are proposing generic interface, which supposed to be
good fit for all ULPs but without those ULPs.

> I can add srq_pool to iSER target code but I don't want to re-write it again
> in few weeks, when the CQ pool will be added.

So, please finalize interface in RFC stage and once you are ready, proceed to
the actual patches.

> Regarding other ULPs, we don't have a testing environment for them so I
> prefer not to commit on their implementation in the near future.

You are not expected to have all testing environment, it is their (ULPs
maintainers) responsibility to test your conversion, because you are
doing conversion to generic interface.

>
> I don't know why we can't add this feature "as is".
> Other ULPs maintainers might use it once it will be pushed.

Sorry, but it is not how kernel development process works.
"You propose -> you do" and not "You propose -> they do".

Thanks


* Re: [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
       [not found]                 ` <20171118144042.GU18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2017-11-18 21:29                   ` Max Gurtovoy
       [not found]                     ` <d6579bd6-053e-1214-ea95-ff72e6191cb0-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Max Gurtovoy @ 2017-11-18 21:29 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Sagi Grimberg, hch-jcswGhMUV9g,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, idanb-VPRAkNaXOzVWk0Htik3J/w,
	RDMA mailing list



On 11/18/2017 4:40 PM, Leon Romanovsky wrote:
> On Sat, Nov 18, 2017 at 03:57:15PM +0200, Max Gurtovoy wrote:
>>
>>
>> On 11/18/2017 2:52 PM, Leon Romanovsky wrote:
>>> On Fri, Nov 17, 2017 at 09:32:42PM +0200, Max Gurtovoy wrote:
>>>>
>>>>
>>>> On 11/16/2017 8:36 PM, Sagi Grimberg wrote:
>>>>>
>>>>>> Since there is an active discussion regarding the CQ pool
>>>>>> architecture, I decided to push
>>>>>> this feature (maybe it can be pushed before CQ pool).
>>>>>>
>>>>>> This is a new feature for NVMEoF RDMA target,
>>>>>
>>>>> Any chance having this for the rest? isert, srpt, svcrdma?
>>>>>
>>>>
>>>> We can implement it for isert, but I think it's better to see how the CQ
>>>> pool will be defined first.
>>>> It can bring a big benefit and improvement for ib_srpt (similar to NVMEoF
>>>> target) but I'm not sure if I can commit for that one soon..
>>>
>>> Too bad, but I don't see inclusion of generic SRQ pool code in RDMA
>>> subsystem without actual conversion of existing ULP clients.
>>>
>>> Thanks
>>>
>>
>> This patchset adds this feature to NVMEoF target so actually there are ULPs
>> that use it. Same issue we have with mr_pool that only RDMA rw.c use it (Now
>> we're adding it to NVMEoF initiators too - in review).
> 
> The difference between your code and mr_pool is that mr_pool is part of
> RDMA/core and in use by RDMA/core (rw.c), which in use by all ULPs.
> 
> However if you insist, we can remove EXPORT_SYMBOL from mr_pool
> implementation, because of being part of RDMA/core and it blows
> symbols map without need. Should I?

No, we'll use it in NVMEoF host as I mentioned earlier.
> 
> In your case, you are proposing generic interface, which supposed to be
> good fit for all ULPs but without those ULPs.
> 
>> I can add srq_pool to iSER target code but I don't want to re-write it again
>> in few weeks, when the CQ pool will be added.
> 
> So, please finalize interface in RFC stage and once you are ready, proceed to
> the actual patches.
> 
>> Regarding other ULPs, we don't have a testing environment for them so I
>> prefer not to commit on their implementation in the near future.
> 
> You are not expected to have all testing environment, it is their (ULPs
> maintainers) responsibility to test your conversion, because you are
> doing conversion to generic interface.
> 
>>
>> I don't know why we can't add this feature "as is".
>> Other ULPs maintainers might use it once it will be pushed.
> 
> Sorry, but it is not how kernel development process works.
> "You propose -> you do" and not "You propose -> they do".

I'm not changing an interface here, so all the other ULPs that use SRQ 
(ipoib and srpt) will continue using it as they do today.
I don't know why this patchset brought up the idea of adding SRQ pools to 
isert/svcrdma/etc., but knowing that there are patches (under 
discussion) that will have a big influence on these drivers (at least 
isert), it doesn't make sense to implement a *new* feature (SRQ usage) and 
change it a week afterwards.

I will send V3 in a few days with some fixes that I got, so it would be 
nice to have more comments on the code (I don't see a problem with the 
kernel development process in this patchset).


> 
> Thanks
> 
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
       [not found]                     ` <d6579bd6-053e-1214-ea95-ff72e6191cb0-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-11-20 11:00                       ` Sagi Grimberg
       [not found]                         ` <14682e66-de0d-042b-434b-0eb40fb79f0c-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Sagi Grimberg @ 2017-11-20 11:00 UTC (permalink / raw)
  To: Max Gurtovoy, Leon Romanovsky
  Cc: hch-jcswGhMUV9g, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, idanb-VPRAkNaXOzVWk0Htik3J/w,
	RDMA mailing list


>>>>> We can implement it for isert, but I think it's better to see how 
>>>>> the CQ
>>>>> pool will be defined first.
>>>>> It can bring a big benefit and improvement for ib_srpt (similar to 
>>>>> NVMEoF
>>>>> target) but I'm not sure if I can commit for that one soon..
>>>>
>>>> Too bad, but I don't see inclusion of generic SRQ pool code in RDMA
>>>> subsystem without actual conversion of existing ULP clients.
>>>>
>>>> Thanks
>>>>
>>>
>>> This patchset adds this feature to the NVMEoF target, so there
>>> actually are ULPs that use it. It's the same issue we have with
>>> mr_pool, which only RDMA rw.c uses (now we're adding it to the
>>> NVMEoF initiators too - in review).
>>
>> The difference between your code and mr_pool is that mr_pool is part of
>> RDMA/core and in use by RDMA/core (rw.c), which is in use by all ULPs.
>>
>> However, if you insist, we can remove EXPORT_SYMBOL from the mr_pool
>> implementation, since it is part of RDMA/core and this bloats the
>> symbols map without need. Should I?
> 
> No, we'll use it in NVMEoF host as I mentioned earlier.
>>
>> In your case, you are proposing a generic interface which is supposed
>> to be a good fit for all ULPs, but without those ULPs.
>>
>>> I can add srq_pool to iSER target code but I don't want to re-write 
>>> it again
>>> in few weeks, when the CQ pool will be added.
>>
>> So, please finalize interface in RFC stage and once you are ready, 
>> proceed to
>> the actual patches.
>>
>>> Regarding other ULPs, we don't have a testing environment for them so I
>>> prefer not to commit to their implementation in the near future.
>>
>> You are not expected to have all the testing environments; it is their
>> (the ULP maintainers') responsibility to test your conversion, because
>> you are doing a conversion to a generic interface.
>>
>>>
>>> I don't know why we can't add this feature "as is".
>>> Other ULP maintainers might use it once it is pushed.
>>
>> Sorry, but that is not how the kernel development process works.
>> "You propose -> you do" and not "You propose -> they do".
> 
> I'm not changing an interface here. So all the other ULPs that
> currently use SRQ (ipoib and srpt) will continue using it.
> I don't know why this patchset brought up the idea to add SRQ pools to
> isert/svcrdma/etc., but knowing that there are patches (under
> discussion) that will have a big influence on these drivers (at least
> isert), it doesn't make sense to implement a *new* feature (SRQ usage)
> and change it a week afterwards.

I'm almost sorry I asked :)

Max,

Leon's request, while adding more work for you, is valid as I see it.
Leon and Doug (like other kernel maintainers and others in our
community) are interested in improving the RDMA core subsystem in the
sense of offering useful interfaces and having the consumers implement
as little as possible. Making useful features/interfaces (like in your
case) available for (and adopted by) most of the common consumers helps
the community and the subsystem as a whole, rather than just the
specific module we happen to be focused on at that specific time. The
long-term goal is to let the consumers do as much as possible while
implementing as little as possible themselves.

Having said that, if it was up to me, I wouldn't say it's a hard
requirement, but it is definitely encouraged (I try to do it for the
core interfaces I happen to offer, and I know others do too).
I think that Doug and others should really decide on the direction here.

What do others think?

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
       [not found]                         ` <14682e66-de0d-042b-434b-0eb40fb79f0c-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
@ 2017-11-20 11:34                           ` Leon Romanovsky
  0 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2017-11-20 11:34 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Max Gurtovoy, hch-jcswGhMUV9g,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	vladimirk-VPRAkNaXOzVWk0Htik3J/w, idanb-VPRAkNaXOzVWk0Htik3J/w,
	RDMA mailing list


On Mon, Nov 20, 2017 at 01:00:28PM +0200, Sagi Grimberg wrote:
>
> > > > > > We can implement it for isert, but I think it's better
> > > > > > to see how the CQ
> > > > > > pool will be defined first.
> > > > > > It can bring a big benefit and improvement for ib_srpt
> > > > > > (similar to NVMEoF
> > > > > > target) but I'm not sure if I can commit for that one soon..
> > > > >
> > > > > Too bad, but I don't see inclusion of generic SRQ pool code in RDMA
> > > > > subsystem without actual conversion of existing ULP clients.
> > > > >
> > > > > Thanks
> > > > >
> > > >
> > > > This patchset adds this feature to the NVMEoF target, so there
> > > > actually are ULPs that use it. It's the same issue we have with
> > > > mr_pool, which only RDMA rw.c uses (now we're adding it to the
> > > > NVMEoF initiators too - in review).
> > >
> > > The difference between your code and mr_pool is that mr_pool is part of
> > > RDMA/core and in use by RDMA/core (rw.c), which is in use by all ULPs.
> > >
> > > However, if you insist, we can remove EXPORT_SYMBOL from the mr_pool
> > > implementation, since it is part of RDMA/core and this bloats the
> > > symbols map without need. Should I?
> >
> > No, we'll use it in NVMEoF host as I mentioned earlier.
> > >
> > > In your case, you are proposing a generic interface which is supposed
> > > to be a good fit for all ULPs, but without those ULPs.
> > >
> > > > I can add srq_pool to iSER target code but I don't want to
> > > > re-write it again
> > > > in few weeks, when the CQ pool will be added.
> > >
> > > So, please finalize interface in RFC stage and once you are ready,
> > > proceed to
> > > the actual patches.
> > >
> > > > Regarding other ULPs, we don't have a testing environment for them so I
> > > > prefer not to commit to their implementation in the near future.
> > >
> > > You are not expected to have all the testing environments; it is their
> > > (the ULP maintainers') responsibility to test your conversion, because
> > > you are doing a conversion to a generic interface.
> > >
> > > >
> > > > I don't know why we can't add this feature "as is".
> > > > Other ULP maintainers might use it once it is pushed.
> > >
> > > Sorry, but that is not how the kernel development process works.
> > > "You propose -> you do" and not "You propose -> they do".
> >
> > I'm not changing an interface here. So all the other ULPs that
> > currently use SRQ (ipoib and srpt) will continue using it.
> > I don't know why this patchset brought up the idea to add SRQ pools to
> > isert/svcrdma/etc., but knowing that there are patches (under
> > discussion) that will have a big influence on these drivers (at least
> > isert), it doesn't make sense to implement a *new* feature (SRQ usage)
> > and change it a week afterwards.
>
> I'm almost sorry I asked :)
>

There is nothing to be sorry about. It was just a matter of time until
we saw it.

The reason I'm so direct in this case is that resource pools are really
beneficial when they are used for create/destroy of resources in
(massive) dynamic operations.

In the proposed code (nvmet-rdma), that is not the case: the resources
(SRQs) are statically allocated, hence it makes no sense to me to
propose a new interface without seeing any benefit from it.

This is why I'm asking Max to either implement real users of this
interface or not provide that interface at all.
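[To illustrate the distinction drawn above: a pool pays off when
resources are taken and returned at high frequency, because get/put
recycles pre-allocated objects instead of hitting the allocator on
every cycle. A minimal userspace sketch of the pattern in C follows;
the names (`struct resource`, `pool_init`, `pool_get`, `pool_put`) are
hypothetical and are not the kernel's ib_srq or mr_pool API.]

```c
#include <stdlib.h>

/* Hypothetical pool of generic resources. "struct resource" stands in
 * for a real handle such as an ib_srq; none of these names are the
 * actual RDMA/core API. */
struct resource {
	struct resource *next;
	int id;
};

struct pool {
	struct resource *free_list;
	int allocated;		/* resources created up-front */
};

/* Pre-allocate nr resources once, at pool creation time. */
int pool_init(struct pool *p, int nr)
{
	p->free_list = NULL;
	p->allocated = 0;
	for (int i = 0; i < nr; i++) {
		struct resource *r = malloc(sizeof(*r));
		if (!r)
			return -1;
		r->id = p->allocated++;
		r->next = p->free_list;
		p->free_list = r;
	}
	return 0;
}

/* Hot path: recycle a cached resource, no allocation. */
struct resource *pool_get(struct pool *p)
{
	struct resource *r = p->free_list;
	if (r)
		p->free_list = r->next;
	return r;
}

/* Return a resource to the cache for reuse. */
void pool_put(struct pool *p, struct resource *r)
{
	r->next = p->free_list;
	p->free_list = r;
}
```

[With statically allocated SRQs there is effectively one pool_init and
no get/put churn, so the recycling that justifies the pool interface is
never exercised - which is the objection being made here.]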

> Max,
>
> Leon's request, while adding more work for you, is valid as I see it.
> Leon and Doug (like other kernel maintainers and others in our
> community) are interested in improving the RDMA core subsystem in the
> sense of offering useful interfaces and having the consumers implement
> as little as possible. Making useful features/interfaces (like in your
> case) available for (and adopted by) most of the common consumers helps
> the community and the subsystem as a whole, rather than just the
> specific module we happen to be focused on at that specific time. The
> long-term goal is to let the consumers do as much as possible while
> implementing as little as possible themselves.
>
> Having said that, if it was up to me, I wouldn't say it's a hard
> requirement, but it is definitely encouraged (I try to do it for the
> core interfaces I happen to offer, and I know others do too).
> I think that Doug and others should really decide on the direction here.
>
> What do others think?

Right, I'm not alone here.

I already said my position on this mailing list, but I will repeat it:
core work is a must to move forward and to make this community better.

Thanks



^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2017-11-20 11:34 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
     [not found] <1510852885-25519-1-git-send-email-maxg@mellanox.com>
     [not found] ` <1510852885-25519-1-git-send-email-maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-11-16 18:08   ` [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector Leon Romanovsky
     [not found]     ` <20171116180853.GN18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2017-11-17 19:18       ` Max Gurtovoy
     [not found] ` <1510852885-25519-2-git-send-email-maxg@mellanox.com>
     [not found]   ` <1510852885-25519-2-git-send-email-maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-11-16 18:10     ` [PATCH 1/3] IB/core: add a simple SRQ pool per PD Leon Romanovsky
     [not found]       ` <20171116181049.GO18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2017-11-17 19:12         ` Max Gurtovoy
     [not found]           ` <c94d2e03-dd32-ff0a-789f-aae4d65f3955-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-11-18  7:36             ` Leon Romanovsky
     [not found] ` <263c6c9d-0dd2-da4f-12a9-efefd361e592@grimberg.me>
     [not found]   ` <a8e2be0b-deb7-8cc1-92ce-fc5dea3e241a@mellanox.com>
     [not found]     ` <a8e2be0b-deb7-8cc1-92ce-fc5dea3e241a-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-11-18 12:52       ` [PATCH v2 0/3] nvmet-rdma: SRQ per completion vector Leon Romanovsky
     [not found]         ` <20171118125229.GT18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2017-11-18 13:57           ` Max Gurtovoy
     [not found]             ` <935af437-d69e-e258-c00a-8bf9a04f9988-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-11-18 14:40               ` Leon Romanovsky
     [not found]                 ` <20171118144042.GU18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2017-11-18 21:29                   ` Max Gurtovoy
     [not found]                     ` <d6579bd6-053e-1214-ea95-ff72e6191cb0-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2017-11-20 11:00                       ` Sagi Grimberg
     [not found]                         ` <14682e66-de0d-042b-434b-0eb40fb79f0c-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2017-11-20 11:34                           ` Leon Romanovsky

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).