From: Krishnamraju Eraparaju <krishna2@chelsio.com>
To: Max Gurtovoy <mgurtovoy@nvidia.com>
Cc: Sagi Grimberg <sagi@grimberg.me>,
	linux-rdma@vger.kernel.org,
	Potnuri Bharat Teja <bharat@chelsio.com>,
	Max Gurtovoy <maxg@mellanox.com>
Subject: Re: reduce iSERT Max IO size
Date: Wed, 7 Oct 2020 09:06:21 +0530	[thread overview]
Message-ID: <20201007033619.GA11425@chelsio.com> (raw)
In-Reply-To: <4391e240-5d6d-fb59-e6fb-e7818d1d0bd2@nvidia.com>

On Sunday, October 10/04/20, 2020 at 00:45:26 +0300, Max Gurtovoy wrote:
> 
> On 10/3/2020 6:36 AM, Krishnamraju Eraparaju wrote:
> >On Friday, October 10/02/20, 2020 at 13:29:30 -0700, Sagi Grimberg wrote:
> >>>Hi Sagi & Max,
> >>>
> >>>Any update on this?
> >>>Please change the max IO size to 1MiB (256 pages).
> >>I think the reason this was changed to handle the worst case is that
> >>the initiator and the target may have different capabilities with
> >>respect to the number of pages per MR. There is no handshake that
> >>aligns expectations.
> >But the max pages per MR supported by most adapters is only around 256.
> >And I think only those iSER initiators whose max pages per MR is 4096
> >could send 16MiB-sized IOs; am I correct?
> 
> If the initiator can send 16MiB, we must make sure the target is
> capable of receiving it.
I think the max IO size at the iSER initiator depends on
"max_fast_reg_page_list_len".
Currently, these are the "max_fast_reg_page_list_len" values supported by
the various iWARP drivers:

iw_cxgb4: 128 pages
Softiwarp: 256 pages
i40iw: 512 pages
qedr: couldn't find.

For the iWARP case, if 512 pages is the most that any iWARP driver
supports, wouldn't provisioning a gigantic MR pool at the target (to
accommodate 16MiB IOs that will never be issued) be overkill?
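
To make the suggestion concrete, something along these lines is what I
have in mind on the target side. This is only a sketch (the helper name
and the 256-page default are mine, not existing isert code); the idea is
to bound the per-command page budget by both a sane default and what the
local device can actually register in a single fast-reg MR:

/*
 * Sketch only, not existing isert code: cap the per-command page
 * budget at 1MiB (256 x 4KiB pages) and at what the local device
 * can register with one fast-registration MR.
 */
static u32 isert_max_sg_tablesize(struct ib_device *ib_dev)
{
        u32 hw_limit = ib_dev->attrs.max_fast_reg_page_list_len;

        return min_t(u32, 256 /* 1MiB in 4KiB pages */, hw_limit);
}

With something like this, a 128-page device such as iw_cxgb4 would never
have its MR pool provisioned for 4096 pages per IO.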
> 
> >
> >>If we revert that, it would restore the issue that you reported in the
> >>first place:
> >>
> >>--
> >>IB/isert: allocate RW ctxs according to max IO size
> >I don't see the reported issue after reducing the IO size to 256
> >pages (keeping all other changes of this patch intact).
> >That is, "attr.cap.max_rdma_ctxs" now gets filled properly, thanks to
> >the "rdma_rw_mr_factor()" related changes, I think.
> >
> >Before this change, "attr.cap.max_rdma_ctxs" was hardcoded to
> >128 (ISCSI_DEF_XMIT_CMDS_MAX), which is very low for the single-target,
> >multi-LUN case.
> >
> >So reverting only the ISCSI_ISER_MAX_SG_TABLESIZE macro to 256 doesn't
> >cause the reported issue.
> >
> >Thanks,
> >Krishnam Raju.
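
For reference, my understanding of that max_rdma_ctxs sizing is roughly
the following paraphrase (a sketch only, not the exact isert code; the
wrapper function name is made up for illustration):

/*
 * Rough paraphrase for illustration, not the exact isert code:
 * size max_rdma_ctxs from the MR-per-IO factor instead of a
 * hardcoded command count, so shrinking the max SG tablesize
 * to 256 pages shrinks the MR pool budget accordingly.
 */
static void isert_size_rdma_ctxs(struct ib_qp_init_attr *attr,
                                 struct ib_device *ib_dev,
                                 u8 port_num, u32 max_sg_tablesize)
{
        u32 factor = rdma_rw_mr_factor(ib_dev, port_num,
                                       max_sg_tablesize);

        attr->cap.max_rdma_ctxs = ISCSI_DEF_XMIT_CMDS_MAX * factor;
}

With max_sg_tablesize at 256 instead of 4096, the factor (and hence the
per-connection MR pool) should drop by roughly the same ratio on these
adapters.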
> >>Current iSER target code allocates the MR pool budget based on queue size.
> >>Since there is no handshake between the iSER initiator and target on max IO
> >>size, we'll set the iSER target to support up to 16MiB IO operations and
> >>allocate the correct number of RDMA ctxs according to the factor of MRs
> >>per IO operation. This would guarantee a sufficient MR pool size for
> >>the required IO queue depth and IO size.
> >>
> >>Reported-by: Krishnamraju Eraparaju <krishna2@chelsio.com>
> >>Tested-by: Krishnamraju Eraparaju <krishna2@chelsio.com>
> >>Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> >>--
> >>
> >>>
> >>>Thanks,
> >>>Krishnam Raju.
> >>>On Wednesday, September 09/23/20, 2020 at 01:57:47 -0700, Sagi Grimberg wrote:
> >>>>>Hi,
> >>>>>
> >>>>>Please reduce the max IO size at the iSER target to 1MiB (256 pages).
> >>>>>The PBL memory consumption has increased significantly after the max
> >>>>>IO size was raised to 16MiB (with commit 317000b926b07c).
> >>>>>Due to the large MR pool, the max number of iSER connections (on one
> >>>>>variant of Chelsio cards) came down from 250 to 9.
> >>>>>The NVMe-RDMA target also uses a 1MiB max IO size.
> >>>>Max, remind me, what was the point of supporting 16M? Did this resolve
> >>>>an issue?


Thread overview: 17+ messages
2020-09-22 10:44 reduce iSERT Max IO size Krishnamraju Eraparaju
2020-09-23  8:57 ` Sagi Grimberg
2020-10-02 17:10   ` Krishnamraju Eraparaju
2020-10-02 20:29     ` Sagi Grimberg
2020-10-03  3:36       ` Krishnamraju Eraparaju
2020-10-03 13:12         ` Max Gurtovoy
2020-10-03 21:45         ` Max Gurtovoy
2020-10-07  3:36           ` Krishnamraju Eraparaju [this message]
2020-10-07 12:56             ` Max Gurtovoy
2020-10-07 23:50               ` Sagi Grimberg
2020-10-08  5:30                 ` Leon Romanovsky
2020-10-08 16:20                   ` Sagi Grimberg
2020-10-09 13:07                     ` Leon Romanovsky
2020-10-08 13:12             ` Bernard Metzler
2020-10-08 18:59               ` Krishnamraju Eraparaju
2020-10-08 22:47                 ` Max Gurtovoy
2020-10-09  3:06                   ` Krishnamraju Eraparaju
