From: "Steve Wise" <swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org>
To: 'Sagi Grimberg' <sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>,
'Christoph Hellwig' <hch-jcswGhMUV9g@public.gmane.org>,
linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: RE: RFC: CQ pools and implicit CQ resource allocation
Date: Mon, 12 Sep 2016 09:42:41 -0500 [thread overview]
Message-ID: <00e001d20d03$ea891100$bf9b3300$@opengridcomputing.com> (raw)
In-Reply-To: <d0c645eb-3674-2841-bdb7-8b9e6fd46473-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
>
> > This series adds support to the RDMA core to implicitly allocate the
> > required CQEs when creating a QP. The primary driver for that was to
> > implement a common scheme for CQ pooling, which helps with better
> > resource usage for server / target style drivers that have many
> > outstanding connections. In fact the first version of this code from
> > Sagi did just that: add a CQ pool API, and convert drivers that were
> > using some form of pooling (iSER initiator & target, NVMe target) to
> > that API. But looking at the API I felt that there was still way too
> > much logic in the individual ULPs, and looked into a way to make that
> > boilerplate code go away. It turns out that we can simply create CQs
> > underneath if we know the poll context that the ULP requires, so this
> > series shows an approach that makes CQs mostly invisible to ULPs.
>
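The "create CQs underneath, keyed by the poll context" scheme described above might be sketched roughly as below. All the names here (get_pool_cq, poll_ctx, the per-context array, the fixed 4096-entry depth) are illustrative assumptions for discussion, not the API actually proposed in the series:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch of a per-device CQ pool keyed by poll context.
 * Illustrative only -- not the actual ib_core interface. */

enum poll_ctx { POLL_SOFTIRQ, POLL_WORKQUEUE, NR_POLL_CTX };

struct cq {
	enum poll_ctx ctx;
	int used_cqe;	/* CQEs already handed out to QPs */
	int max_cqe;	/* total depth of this CQ */
};

struct device {
	struct cq *pool[NR_POLL_CTX][16];	/* a few CQs per context */
	int nr[NR_POLL_CTX];
};

/* Find a pool CQ with enough free CQEs for the new QP, or create one.
 * Because the pool lives in the core and is keyed only by (device,
 * poll context), different ULPs naturally end up sharing CQs. */
static struct cq *get_pool_cq(struct device *dev, enum poll_ctx ctx,
			      int nr_cqe)
{
	int i;

	for (i = 0; i < dev->nr[ctx]; i++) {
		struct cq *cq = dev->pool[ctx][i];

		if (cq->max_cqe - cq->used_cqe >= nr_cqe) {
			cq->used_cqe += nr_cqe;
			return cq;
		}
	}

	/* No room in any existing CQ: allocate a fresh one. */
	struct cq *cq = calloc(1, sizeof(*cq));
	cq->ctx = ctx;
	cq->max_cqe = 4096;	/* assumed fixed pool-CQ depth */
	cq->used_cqe = nr_cqe;
	dev->pool[ctx][dev->nr[ctx]++] = cq;
	return cq;
}
```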
> One other note that I wanted to raise for the folks interested in this
> is that with the RDMA core owning the completion queue pools, different
> ULPs can easily share the same completion queue (given that it uses
> the same poll context). For example, nvme-rdma host, iser and srp
> initiators can end up using the same completion queues (if running
> simultaneously on the same machine).
>
> Up until now, I couldn't think of anything that can introduce a problem
> with that but maybe someone else will...
It would be useful to provide details on how many CQs get created, and of
what size, for a combined iSER/NVMe-oF/SRP initiator/host and target setup.
One concern I have is that cxgb4 CQs require contiguous memory, so a scheme
like CQ pooling might cause resource problems on large-core systems.
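To make the contiguous-memory concern concrete, here is a rough back-of-the-envelope calculation. The 64-byte CQE size and 4096-entry pool-CQ depth are assumed values for illustration, not cxgb4's actual parameters:

```c
/* Rough CQ memory sizing under assumed parameters:
 * a 64-byte CQE and one 4096-entry pool CQ per core. */
#define CQE_SIZE	64UL	/* bytes per CQE -- assumption */
#define CQ_DEPTH	4096UL	/* entries per pool CQ -- assumption */
#define NR_CORES	64UL

/* Bytes of (physically contiguous) memory one CQ needs. */
static unsigned long cq_bytes(unsigned long depth, unsigned long cqe_size)
{
	return depth * cqe_size;
}
```

Under those assumptions each pool CQ needs 256 KiB of physically
contiguous memory, or 16 MiB per device on a 64-core system with one
CQ per core per poll context.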
Thread overview: 17+ messages
2016-09-09 12:36 RFC: CQ pools and implicit CQ resource allocation Christoph Hellwig
[not found] ` <1473424587-13818-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
2016-09-09 12:36 ` [PATCH 1/6] IB/core: add implicit CQ allocation Christoph Hellwig
2016-09-09 12:36 ` [PATCH 2/6] nvmet-rdma: use " Christoph Hellwig
2016-09-09 12:36 ` [PATCH 3/6] IB/isert: " Christoph Hellwig
2016-09-09 12:36 ` [PATCH 4/6] IB/srpt: " Christoph Hellwig
2016-09-09 12:36 ` [PATCH 5/6] IB/iser: " Christoph Hellwig
2016-09-09 12:36 ` [PATCH 6/6] nvme-rdma: " Christoph Hellwig
2016-09-11 6:39 ` RFC: CQ pools and implicit CQ resource allocation Sagi Grimberg
2016-09-11 6:44 ` Sagi Grimberg
[not found] ` <d0c645eb-3674-2841-bdb7-8b9e6fd46473-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2016-09-12 14:42 ` Steve Wise [this message]
2016-09-12 20:07 ` Sagi Grimberg
[not found] ` <6dd3ecbd-7109-95ff-9c86-dfea9e515538-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2016-09-12 20:22 ` Steve Wise
2016-09-12 21:12 ` Sagi Grimberg
2016-09-14 3:29 ` Bart Van Assche
2016-09-12 15:03 ` Chuck Lever
[not found] ` <BCF7E723-FC50-4497-9E0B-1157CF2D4185-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
2016-09-12 20:25 ` Sagi Grimberg
2016-09-16 6:02 ` Bart Van Assche