linux-nvme.lists.infradead.org archive mirror
 help / color / mirror / Atom feed
From: swise@opengridcomputing.com (Steve Wise)
Subject: [PATCH RFC 1/2] nvme-rdma: Support 8K inline
Date: Wed, 9 May 2018 14:28:03 -0500	[thread overview]
Message-ID: <3c5ec87b-0bb0-1347-de8d-9362585967b4@opengridcomputing.com> (raw)
In-Reply-To: <VI1PR0502MB3008DEBA0C8DBC99E829D1B6D1990@VI1PR0502MB3008.eurprd05.prod.outlook.com>



On 5/9/2018 11:55 AM, Parav Pandit wrote:
> Hi Steve,
>
>> -----Original Message-----
>> From: linux-rdma-owner at vger.kernel.org [mailto:linux-rdma-
>> owner at vger.kernel.org] On Behalf Of Steve Wise
>> Sent: Wednesday, May 09, 2018 9:31 AM
>> To: axboe at fb.com; hch at lst.de; keith.busch at intel.com; sagi at grimberg.me;
>> linux-nvme at lists.infradead.org
>> Cc: linux-rdma at vger.kernel.org
>> Subject: [PATCH RFC 1/2] nvme-rdma: Support 8K inline
>>
>> Allow up to 2 pages of inline for NVMF WRITE operations.  This reduces latency
>> for 8K WRITEs by removing the need to issue a READ WR for IB, or a
>> REG_MR+READ WR chain for iWarp.
>>
>> Signed-off-by: Steve Wise <swise at opengridcomputing.com>
>> ---
>>  drivers/nvme/host/rdma.c | 21 +++++++++++++++------
>>  1 file changed, 15 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index
>> 1eb4438..9b8af98 100644
>> --- a/drivers/nvme/host/rdma.c
>> +++ b/drivers/nvme/host/rdma.c
>> @@ -40,7 +40,7 @@
>>
>>  #define NVME_RDMA_MAX_SEGMENTS		256
>>
>> -#define NVME_RDMA_MAX_INLINE_SEGMENTS	1
>> +#define NVME_RDMA_MAX_INLINE_SEGMENTS	2
>>
> I wrote this patch back in Feb 2017 but didn't spend time on V2 to address comments from Sagi, Christoph.
> http://lists.infradead.org/pipermail/linux-nvme/2017-February/008057.html
> Thanks for taking this forward.
> This is helpful for certain db workload who have 8K typical data size too.

Hey Parav,

I thought I'd remembered something similar previously. :)? Let me see if
I can address the previous comments, going forward.? They are:

- why just 1 or 2 pages vs more?
- dynamic allocation of nvme-rdma resources based on the target's
advertised ioccsz.

And I see you posted the nvmet-rdma patch here, which allows up to 16KB
inline:

http://lists.infradead.org/pipermail/linux-nvme/2017-February/008064.html

And I think the comments needing resolution are:

- make it a configfs option
- adjust the qp depth some if the inline depth is bigger to try and keep
the overall memory footprint reasonable
- avoid high-order allocations - maybe per-core SRQ could be helpful

So the question is, do we have agreement on the way forward?? Sagi and
Christoph, I appreciate any feedback on this.? I'd like to get this
featured merged.??

Thanks,

Steve.

  reply	other threads:[~2018-05-09 19:28 UTC|newest]

Thread overview: 11+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-05-09 15:38 [PATCH RFC 0/2] 8K Inline Support Steve Wise
2018-05-09 14:31 ` [PATCH RFC 1/2] nvme-rdma: Support 8K inline Steve Wise
2018-05-09 16:55   ` Parav Pandit
2018-05-09 19:28     ` Steve Wise [this message]
2018-05-11  6:48   ` Christoph Hellwig
2018-05-14 18:33     ` Steve Wise
2018-05-09 14:34 ` [PATCH RFC 2/2] nvmet-rdma: " Steve Wise
2018-05-14 10:16   ` Max Gurtovoy
2018-05-14 14:58     ` Steve Wise
2018-05-09 18:46 ` [PATCH RFC 0/2] 8K Inline Support Steve Wise
2018-05-11  6:19 ` Christoph Hellwig

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=3c5ec87b-0bb0-1347-de8d-9362585967b4@opengridcomputing.com \
    --to=swise@opengridcomputing.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).