public inbox for linux-rdma@vger.kernel.org
From: "Steve Wise" <swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org>
To: 'Sagi Grimberg' <sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>,
	'Christoph Hellwig' <hch-jcswGhMUV9g@public.gmane.org>,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: RE: RFC: CQ pools and implicit CQ resource allocation
Date: Mon, 12 Sep 2016 15:22:23 -0500	[thread overview]
Message-ID: <022901d20d33$5eebf1a0$1cc3d4e0$@opengridcomputing.com> (raw)
In-Reply-To: <6dd3ecbd-7109-95ff-9c86-dfea9e515538-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>



> -----Original Message-----
> From: Sagi Grimberg [mailto:sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org]
> Sent: Monday, September 12, 2016 3:08 PM
> To: Steve Wise; 'Christoph Hellwig'; linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Subject: Re: RFC: CQ pools and implicit CQ resource allocation
> 
> 
> >> One other note that I wanted to raise for the folks interested in this
> >> is that with the RDMA core owning the completion queue pools, different
> >> ULPs can easily share the same completion queue (given that it uses
> >> the same poll context). For example, nvme-rdma host, iser and srp
> >> initiators can end up using the same completion queues (if running
> >> simultaneously on the same machine).
> >>
> >> Up until now, I couldn't think of anything that can introduce a problem
> >> with that but maybe someone else will...
> >
> > It would be useful to provide details on how many CQs get created and of
> > what size for an uber iSER/NVMF/SRP initiator/host and target.
> 
> Are you talking about some debugfs layout?
>

No, just a matrix showing how the CQs scale out when shared among these three
ULPs on a machine with X cores, for example. Just to visualize whether the
number of CQs and their sizes is reduced or increased by this new series...
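To make the kind of matrix I mean concrete, here is a rough back-of-the-envelope sketch. All the constants (core count, per-ULP CQ depth, one CQ per core per ULP) are illustrative assumptions of mine, not values taken from the proposed series:

```python
# Hypothetical comparison: private per-ULP CQs vs. a shared per-core
# CQ pool for three ULPs (nvme-rdma host, iser, srp) on a 16-core box.
# All sizing constants are assumptions for illustration only.

CORES = 16
ULPS = 3                 # nvme-rdma host, iser, srp initiators
CQES_PER_ULP = 128       # assumed per-ULP CQ depth per core

# Private model: each ULP allocates its own CQ on every core.
private_cqs = ULPS * CORES
private_cqes = private_cqs * CQES_PER_ULP

# Pooled model: one shared CQ per core, sized to hold all ULPs' entries.
pooled_cqs = CORES
pooled_cqes = pooled_cqs * ULPS * CQES_PER_ULP

print(f"private: {private_cqs} CQs, {private_cqes} CQEs total")
print(f"pooled:  {pooled_cqs} CQs, {pooled_cqes} CQEs total")
```

Under these assumptions the pool cuts the CQ *count* from 48 to 16 while the total CQE count stays the same; the interesting question is whether the real series also keeps the per-CQ sizes within device limits.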
 
> > One concern I have is that cxgb4 CQs require contiguous memory, so a scheme
> > like CQ pooling might cause resource problems on large core systems.
> 
> Note that the CQ allocation will never exceed the device max_cqe cap.

If this series requires, say, 2X the memory for CQs compared with the existing
private CQ approach, that limits how many CQs can be allocated, because cxgb4
allocates its queue memory with dma_alloc_coherent(), which is subject to
system-wide limits on the amount of coherent memory available.

So I'm just voicing the concern that this design could reduce the overall
number of CQs available on a given system. It's probably not a big deal,
but I don't have a good picture of how much more memory the proposed
series would incur...
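To illustrate the contiguous-allocation side of this concern: a shared CQ needs one larger contiguous buffer instead of several smaller ones. The 64-byte CQE size below is an assumed entry size; cxgb4's actual CQE layout and dma_alloc_coherent() limits may differ:

```python
# Illustrative only: per-CQ contiguous buffer growth when CQs are
# shared.  64 bytes/CQE is an assumed size, not cxgb4's real layout.

CQE_SIZE = 64            # bytes per CQE, assumed
per_ulp_depth = 128      # assumed CQ depth each ULP would use alone
ulps_sharing = 3         # nvme-rdma host, iser, srp

private_buf = per_ulp_depth * CQE_SIZE                 # one private CQ
shared_buf = per_ulp_depth * ulps_sharing * CQE_SIZE   # one shared CQ

print(f"private CQ buffer: {private_buf} bytes")
print(f"shared CQ buffer:  {shared_buf} bytes ({shared_buf // private_buf}x)")
```

Total memory is unchanged, but each dma_alloc_coherent() call must now satisfy a 3x larger contiguous request, which is exactly where large-core-count systems could hit trouble first.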

Stevo



Thread overview: 17+ messages
2016-09-09 12:36 RFC: CQ pools and implicit CQ resource allocation Christoph Hellwig
     [not found] ` <1473424587-13818-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
2016-09-09 12:36   ` [PATCH 1/6] IB/core: add implicit CQ allocation Christoph Hellwig
2016-09-09 12:36   ` [PATCH 2/6] nvmet-rdma: use " Christoph Hellwig
2016-09-09 12:36   ` [PATCH 3/6] IB/isert: " Christoph Hellwig
2016-09-09 12:36   ` [PATCH 4/6] IB/srpt: " Christoph Hellwig
2016-09-09 12:36   ` [PATCH 5/6] IB/iser: " Christoph Hellwig
2016-09-09 12:36   ` [PATCH 6/6] nvme-rdma: " Christoph Hellwig
2016-09-11  6:39   ` RFC: CQ pools and implicit CQ resource allocation Sagi Grimberg
2016-09-11  6:44   ` Sagi Grimberg
     [not found]     ` <d0c645eb-3674-2841-bdb7-8b9e6fd46473-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2016-09-12 14:42       ` Steve Wise
2016-09-12 20:07         ` Sagi Grimberg
     [not found]           ` <6dd3ecbd-7109-95ff-9c86-dfea9e515538-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2016-09-12 20:22             ` Steve Wise [this message]
2016-09-12 21:12               ` Sagi Grimberg
2016-09-14  3:29         ` Bart Van Assche
2016-09-12 15:03       ` Chuck Lever
     [not found]         ` <BCF7E723-FC50-4497-9E0B-1157CF2D4185-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
2016-09-12 20:25           ` Sagi Grimberg
2016-09-16  6:02   ` Bart Van Assche
