From: Leon Romanovsky <leonro@mellanox.com>
To: Max Gurtovoy <maxg@mellanox.com>
Cc: loberman@redhat.com, bvanassche@acm.org, vladimirk@mellanox.com,
shlomin@mellanox.com, linux-rdma@vger.kernel.org,
linux-nvme@lists.infradead.org, idanb@mellanox.com,
dledford@redhat.com, jgg@mellanox.com, oren@mellanox.com,
kbusch@kernel.org, hch@lst.de, sagi@grimberg.me
Subject: Re: [PATCH 1/5] IB/core: add a simple SRQ set per PD
Date: Wed, 18 Mar 2020 12:46:18 +0200
Message-ID: <20200318104618.GU3351@unreal>
In-Reply-To: <32f23851-6f89-18ef-236f-1416c49b079c@mellanox.com>
On Wed, Mar 18, 2020 at 12:39:44PM +0200, Max Gurtovoy wrote:
>
> On 3/18/2020 12:29 PM, Leon Romanovsky wrote:
> > On Wed, Mar 18, 2020 at 11:46:19AM +0200, Max Gurtovoy wrote:
> > > On 3/18/2020 8:47 AM, Leon Romanovsky wrote:
> > > > On Tue, Mar 17, 2020 at 06:37:57PM +0200, Max Gurtovoy wrote:
> > > > > On 3/17/2020 3:55 PM, Leon Romanovsky wrote:
> > > > > > On Tue, Mar 17, 2020 at 03:40:26PM +0200, Max Gurtovoy wrote:
> > > > > > > ULPs can use this API to create/destroy SRQs with the same
> > > > > > > characteristics in order to implement logic aimed at saving
> > > > > > > resources without a significant performance penalty (e.g. create
> > > > > > > an SRQ per completion vector and use shared receive buffers for
> > > > > > > multiple controllers of the ULP).
> > > > > > >
> > > > > > > Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> > > > > > > ---
> > > > > > >  drivers/infiniband/core/Makefile  |  2 +-
> > > > > > >  drivers/infiniband/core/srq_set.c | 78 +++++++++++++++++++++++++++++++++++++++
> > > > > > >  drivers/infiniband/core/verbs.c   |  4 ++
> > > > > > >  include/rdma/ib_verbs.h           |  5 +++
> > > > > > >  include/rdma/srq_set.h            | 18 +++++++++
> > > > > > >  5 files changed, 106 insertions(+), 1 deletion(-)
> > > > > > > create mode 100644 drivers/infiniband/core/srq_set.c
> > > > > > > create mode 100644 include/rdma/srq_set.h
> > > > > > >
> > > > > > > diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile
> > > > > > > index d1b14887..1d3eaec 100644
> > > > > > > --- a/drivers/infiniband/core/Makefile
> > > > > > > +++ b/drivers/infiniband/core/Makefile
> > > > > > > @@ -12,7 +12,7 @@ ib_core-y :=	packer.o ud_header.o verbs.o cq.o rw.o sysfs.o \
> > > > > > >  			roce_gid_mgmt.o mr_pool.o addr.o sa_query.o \
> > > > > > >  			multicast.o mad.o smi.o agent.o mad_rmpp.o \
> > > > > > >  			nldev.o restrack.o counters.o ib_core_uverbs.o \
> > > > > > > -			trace.o
> > > > > > > +			trace.o srq_set.o
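
(For readers without the full patch: a minimal sketch of what an SRQ-set
API along these lines could look like. The names below are illustrative
guesses based on the diffstat and cover text above, not necessarily what
the actual include/rdma/srq_set.h declares.)

#include <rdma/ib_verbs.h>

/*
 * Hypothetical sketch, not the actual patch: a PD-scoped set of SRQs
 * that all share one ib_srq_init_attr, typically one SRQ per completion
 * vector, so that multiple controllers of a ULP can share receive
 * buffers instead of posting a full receive queue per QP.
 */
int rdma_srq_set_init(struct ib_pd *pd, int nr_srqs,
		      struct ib_srq_init_attr *srq_attr);
void rdma_srq_set_destroy(struct ib_pd *pd);

/* Take the least-used SRQ from the PD's set; return it when done. */
struct ib_srq *rdma_srq_get(struct ib_pd *pd);
void rdma_srq_put(struct ib_pd *pd, struct ib_srq *srq);

A ULP would then do something like:

	ret = rdma_srq_set_init(pd, ibdev->num_comp_vectors, &srq_attr);
	...
	queue->srq = rdma_srq_get(pd);	/* when setting up a new QP */
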
> > > > > > Why did you call it "srq_set.c" and not "srq.c"?
> > > > > because it's not an SRQ API but an SRQ-set API.
> > > > I would say that it is an SRQ-pool API, not an SRQ-set API.
> > > If you have some other idea for the API name, please share it with us.
> > >
> > > I've created this API in the core layer to make life easier for the
> > > ULPs, and we can see that it's very easy to add this feature to a ULP
> > > and get a big benefit from it.
> > No one here spoke against the feature; we were trying to understand the
> > rationale behind the *_set name: why you decided on the term "set"
> > rather than "pool", as was done for the MR pool.
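
(For reference, the MR-pool convention being pointed at; to the best of
my knowledge these are the signatures in include/rdma/mr_pool.h around
the time of this thread:)

struct ib_mr *ib_mr_pool_get(struct ib_qp *qp, struct list_head *list);
void ib_mr_pool_put(struct ib_qp *qp, struct list_head *list,
		    struct ib_mr *mr);
int ib_mr_pool_init(struct ib_qp *qp, struct list_head *list, int nr,
		    enum ib_mr_type type, u32 max_num_sg,
		    u32 max_integrity_sg);
void ib_mr_pool_destroy(struct ib_qp *qp, struct list_head *list);
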
>
> Ok. But srq_pool was the name I used 2 years ago, and you didn't like it
> back then.
I don't like it today either, but I don't have a better name to suggest.
Thanks