linux-nvme.lists.infradead.org archive mirror
From: swise@opengridcomputing.com (Steve Wise)
Subject: [PATCH RFC v2 0/2] NVMF/RDMA 8K Inline Support
Date: Wed, 16 May 2018 17:01:20 -0500	[thread overview]
Message-ID: <4a797fc2-2e02-5e72-c1ce-f7ed3ea0ef53@opengridcomputing.com> (raw)
In-Reply-To: <cover.1526505524.git.swise@opengridcomputing.com>

Oops!  The subject should be "16K Inline Support".

Steve.


On 5/16/2018 4:18 PM, Steve Wise wrote:
> For small nvmf write IO over the rdma transport, it is advantageous to
> make use of inline mode to avoid the latency of the target issuing an
> rdma read to fetch the data.  Currently inline is used for <= 4K writes,
> while 8K writes require the rdma read.  For iWARP transports, additional
> latency is incurred because the target MR of the read must be registered
> with remote write access.  By allowing 2 pages worth of inline payload,
> I see a reduction in 8K nvmf write latency of anywhere from 2-7 usecs,
> depending on the RDMA transport.
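As a purely illustrative sketch (the struct, the helper name, and the
4-segment cap below are assumptions for discussion, not the driver's
actual code): a write can be carried in-capsule only if it fits the
target's advertised inline size and a small, capped number of send
SGEs; anything larger falls back to a registered MR and an RDMA read
by the target.

    /* Sketch only -- names and limits are illustrative. */
    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_INLINE_SEGMENTS 4      /* cap proposed by this series */

    struct inline_caps {
            size_t inline_data_size;   /* advertised by the target, e.g. 16K */
            int    max_send_sge;       /* device SGE budget left for data */
    };

    /* Decide whether a write payload can be carried in-capsule. */
    static bool can_send_inline(const struct inline_caps *caps,
                                size_t payload_len, int nr_segments)
    {
            int max_segs = caps->max_send_sge < MAX_INLINE_SEGMENTS ?
                           caps->max_send_sge : MAX_INLINE_SEGMENTS;

            return payload_len <= caps->inline_data_size &&
                   nr_segments <= max_segs;
    }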
>
> This series is a respin of a series floated last year by Parav and Max [1].
> I'm continuing it now and trying to address the comments from their
> submission.
>
> A few of the comments have been addressed:
>
> - nvme-rdma: Support up to 4 segments of inline data.
>
> - nvme-rdma: Cap the number of inline segments to not exceed device limitations.
>
> - nvmet-rdma: Make the inline data size configurable in nvmet-rdma via configfs.
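To make the configfs point concrete, here is a rough sketch (constant
and field names are assumptions, not the patch): the inline size an
admin configures feeds directly into how large a receive buffer the
target must post per command capsule, since in-capsule data follows
the 64-byte command.

    /* Sketch only -- field and constant names are illustrative. */
    #include <stddef.h>

    #define NVME_CMD_CAPSULE_HDR 64    /* fixed 64-byte SQE at the front */

    struct rdma_port_config {
            size_t inline_data_size;   /* set via configfs by the admin */
    };

    /* Each posted recv must cover the command plus any in-capsule data. */
    static size_t recv_buffer_size(const struct rdma_port_config *cfg)
    {
            return NVME_CMD_CAPSULE_HDR + cfg->inline_data_size;
    }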
>
> Other issues I haven't addressed:
>
> - nvme-rdma: make the sge array for inline segments dynamic based on the
> target's advertised inline_data_size.  Since we're limiting the max count
> to 4, I'm not sure this is worth the complexity of allocating the sge array
> vs just embedding the max.
>
> - nvmet-rdma: concern about high-order page allocations.  Is 4 pages
> too high?  One possibility is that, if the device max_sge allows, use
> a few more sges, i.e. 16K could be 2 8K sges or 4 4K sges.  This
> probably makes passing the inline data to a bio more complex.  I
> haven't looked into this yet.
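For what it's worth, one possible shape of the page-per-SGE
alternative mentioned above (pure illustration, not something this
series implements): build the inline area from order-0 chunks and give
each its own SGE, so no single allocation exceeds a page.

    /* Sketch only -- illustrates the page-per-SGE idea, not the patch. */
    #include <stdlib.h>

    #define PAGE_SZ        4096
    #define MAX_INLINE_SGE 4

    struct inline_seg { void *addr; size_t length; };

    /* Build the inline area from page-sized chunks; each chunk becomes
     * its own SGE instead of one contiguous order-2 allocation. */
    static int alloc_inline_segs(struct inline_seg *segs, size_t inline_size)
    {
            int n = 0;

            while (inline_size && n < MAX_INLINE_SGE) {
                    size_t len = inline_size < PAGE_SZ ? inline_size : PAGE_SZ;

                    segs[n].addr = malloc(len);
                    if (!segs[n].addr)
                            return -1;   /* caller unwinds what was built */
                    segs[n].length = len;
                    inline_size -= len;
                    n++;
            }
            return n;                    /* number of SGEs built */
    }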
>
> - nvmet-rdma: reduce the qp depth if the inline size greatly increases
> the memory footprint.  I'm not sure how to do this in a reasonable manner.
> Since the inline data size is now configurable, do we still need this?
>
> - nvmet-rdma: make the qp depth configurable so the admin can reduce it
> manually to lower the memory footprint.
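For a rough sense of scale (illustrative numbers, not measurements
from this series): at a receive queue depth of 128, growing each
buffer's inline area from 4K to 16K raises the per-queue inline
footprint from 128 * 4K = 512 KB to 128 * 16K = 2 MB; across, say, 16
queues per controller that is 8 MB versus 32 MB, which is why both the
inline size and the queue depth come up as knobs.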
>
> Please comment!
>
> Thanks,
>
> Steve.
>
> [1] Original submissions:
> http://lists.infradead.org/pipermail/linux-nvme/2017-February/008057.html
> http://lists.infradead.org/pipermail/linux-nvme/2017-February/008059.html
>
>
> Steve Wise (2):
>   nvme-rdma: support up to 4 segments of inline data
>   nvmet-rdma: support 16K inline data
>
>  drivers/nvme/host/rdma.c        | 34 +++++++++++++++++++++++-----------
>  drivers/nvme/target/admin-cmd.c |  4 ++--
>  drivers/nvme/target/configfs.c  | 34 ++++++++++++++++++++++++++++++++++
>  drivers/nvme/target/discovery.c |  2 +-
>  drivers/nvme/target/nvmet.h     |  4 +++-
>  drivers/nvme/target/rdma.c      | 41 +++++++++++++++++++++++++++++------------
>  6 files changed, 92 insertions(+), 27 deletions(-)
>


Thread overview: 9+ messages
2018-05-16 21:18 [PATCH RFC v2 0/2] NVMF/RDMA 8K Inline Support Steve Wise
2018-05-16 19:57 ` [PATCH RFC v2 1/2] nvme-rdma: support up to 4 segments of inline data Steve Wise
2018-05-17 11:43   ` Christoph Hellwig
2018-05-16 19:58 ` [PATCH RFC v2 2/2] nvmet-rdma: support 16K " Steve Wise
2018-05-17 11:52   ` Christoph Hellwig
2018-05-17 14:24     ` Steve Wise
2018-05-18  9:08       ` Christoph Hellwig
2018-05-18 16:36         ` Steve Wise
2018-05-16 22:01 ` Steve Wise [this message]
