public inbox for linux-nvme@lists.infradead.org
From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
	marcan@marcan.st, sven@svenpeter.dev,
	Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	James Smart <james.smart@broadcom.com>
Cc: alyssa@rosenzweig.io, asahi@lists.linux.dev,
	Chaitanya Kulkarni <kch@nvidia.com>
Subject: Re: [PATCH] nvme: don't set a virt_boundary unless needed
Date: Mon, 25 Dec 2023 12:36:20 +0200	[thread overview]
Message-ID: <d5e688be-3419-4cf6-9bd9-bf2c1a6fb092@nvidia.com> (raw)
In-Reply-To: <0f126715-9b51-4e14-8cef-c999f8760e4e@grimberg.me>



On 25/12/2023 12:08, Sagi Grimberg wrote:
> 
> 
> On 12/22/23 03:16, Max Gurtovoy wrote:
>>
>>
>> On 21/12/2023 11:30, Sagi Grimberg wrote:
>>>
>>>> NVMe PRPs are a pain and force the expensive virt_boundary checking
>>>> on the block layer, prevent secure passthrough and require
>>>> scatter/gather I/O to be split into multiple commands, which is
>>>> problematic for the upcoming atomic write support.
>>>
>>> But is the threshold still correct? Meaning, for small enough I/Os will
>>> the device have lower performance? I'm not advocating that we keep it,
>>> but we should at least mention the tradeoff in the change log.
>>>
>>>> Fix the NVMe core to require an opt-in from the drivers for it.
>>>>
>>>> For nvme-apple it is always required as the driver only supports PRPs.
>>>>
>>>> For nvme-pci when SGLs are supported we'll always use them for data I/O
>>>> that would require a virt_boundary.
>>>>
>>>> For nvme-rdma the virt boundary is always required, as RDMA MRs are
>>>> just as dumb as NVMe PRPs.
>>>
>>> That is actually device dependent. The driver can ask for a pool of
>>> MRs with type IB_MR_TYPE_SG_GAPS if the device supports IBK_SG_GAPS_REG.
>>>
>>> See from ib_srp.c:
>>> --
>>>         if (device->attrs.kernel_cap_flags & IBK_SG_GAPS_REG)
>>>                 mr_type = IB_MR_TYPE_SG_GAPS;
>>>         else
>>>                 mr_type = IB_MR_TYPE_MEM_REG;
>>
>> For now, I prefer not using the IB_MR_TYPE_SG_GAPS MR in NVMe/RDMA
>> since in the case of virtually contiguous data buffers it is better to
>> use IB_MR_TYPE_MEM_REG, which gives much better performance. This is the
>> reason I didn't add IB_MR_TYPE_SG_GAPS MR support for NVMe/RDMA.
> 
> I see. I guess it is not *that* trivial then.
> 
>> I actually had a plan to re-write the IB_MR_TYPE_SG_GAPS MR logic (or
>> create a new MR type) that would internally open two MRs, so that if the
>> IO is contiguous it uses the MTT/MEM_REG and if it isn't it uses the
>> KLM/SG_GAPS.
>> This is how we implemented the SIG_MR, but we still haven't done it for
>> the IB_MR_TYPE_SG_GAPS MR.
> 
> Sounds like a reasonable option. But doesn't this mean that the
> driver will need to scan the page scatterlist to determine what internal
> mr to use? Even a fallback mechanism can be affected by a given
> workload. Plus there is the cost of doubling the number of preallocated
> mrs.
> 

Scanning the scatterlist is done anyway for mapping purposes, so I don't
think it will affect the performance.
The cost of doubling the number of MRs is what we need to pay to get
optimal performance for contig and discontig IOs, I guess.
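
To make the idea concrete, here is a rough sketch of that per-IO MR
selection. This is illustrative only, not actual nvme-rdma or mlx5 code:
the demo_* names and the pool struct are made up, and the gap check
assumes a PAGE_SIZE virt boundary.

--
#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

/* Illustrative only: one MR of each type is preallocated per request,
 * which is the doubling cost mentioned above. */
struct demo_mr_pair {
	struct ib_mr *mem_reg_mr;	/* IB_MR_TYPE_MEM_REG (MTT) */
	struct ib_mr *sg_gaps_mr;	/* IB_MR_TYPE_SG_GAPS (KLM) */
};

/* Walk the scatterlist once (we walk it anyway for mapping) and check
 * whether the buffer is virtually contiguous: every element but the
 * first must start on the boundary and every element but the last must
 * end on it. */
static bool demo_sgl_has_gaps(struct scatterlist *sgl, int nents,
			      unsigned long boundary_mask)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		if (i > 0 && (sg->offset & boundary_mask))
			return true;
		if (i < nents - 1 &&
		    ((sg->offset + sg->length) & boundary_mask))
			return true;
	}
	return false;
}

/* Contiguous IO keeps the faster MEM_REG path, gapped IO falls back to
 * SG_GAPS instead of being split or bounced. */
static struct ib_mr *demo_pick_mr(struct demo_mr_pair *p,
				  struct scatterlist *sgl, int nents)
{
	return demo_sgl_has_gaps(sgl, nents, PAGE_SIZE - 1) ?
		p->sg_gaps_mr : p->mem_reg_mr;
}
--

Both MRs would be allocated up front with ib_alloc_mr(), once with
IB_MR_TYPE_MEM_REG and once with IB_MR_TYPE_SG_GAPS, which is where the
doubling cost above comes from.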

>> Actually, I think we should have the same logic in the NVMe PCI driver:
>> if the IO can be delivered as PRPs then the driver will prepare the SQE
>> with PRPs. Otherwise, the driver will prepare an SGL.
>> I think that doing the check in the driver for each IO is not so bad,
>> and devices will benefit from it. Usually HW devices like to work
>> with contiguous buffers. If the buffers can't be mapped with PRPs,
>> then the HW will work a bit harder and use SGLs (it is better than
>> doing a bounce buffer in the block layer).
>>
>> I actually did a POC internally for NVMe/RDMA: I created an sg_gaps
>> ib_mr and a mem_reg ib_mr, checked the buffer mapping for each IO, and
>> got a big benefit when the buffers were discontig (using the sg_gaps
>> mr). Also, the contig buffer performance didn't degrade because of
>> the buffer mapping check.
>>
>> I created a fio flag that sends discontig IOs on purpose for my testing.
>>
>> WDYT ?
> 
> Sounds possible. However, for rdma we probably want this transparent to
> the ulp such that all consumers can have this benefit. Also perhaps add
> this logic in the rdma core so other drivers can use it as well
> (although I don't know if any other rdma driver supports sg gaps
> anyway).
> 
> If this proves to be a good approach, pci can do something similar.

For RDMA, I plan to do it in the device driver (mlx5) layer and not the 
ib_core layer. It is unique to our implementation.

For the NVMe PCI case, I suggested doing it independently of the NVMe/RDMA
solution. nvme-pci is effectively the device driver of the PCI device,
and the scanning of the scatterlist should happen in the device driver.
I suggest trying this solution since we are always debating about
thresholds and when to use SGLs.
Now that Christoph has opened the gate for the driver to work with
discontig IOs, I believe that for *any* discontig IO we should use SGLs
and for *any* contig IO we should use PRPs.
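
As a rough sketch of what that per-IO decision could look like (illustrative
only, not the actual nvme-pci code: the demo_* name is made up and the
controller page size is passed in just to keep the snippet self-contained):

--
#include <linux/scatterlist.h>

/* Illustrative only: a DMA-mapped request can be described with PRPs
 * when every segment except the first starts on a controller-page
 * boundary and every segment except the last ends on one.  Otherwise
 * prepare an SGL instead of splitting or bouncing the request. */
static bool demo_req_fits_prps(struct scatterlist *sgl, int nents,
			       unsigned int ctrl_page_size)
{
	unsigned int mask = ctrl_page_size - 1;
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		dma_addr_t addr = sg_dma_address(sg);
		unsigned int len = sg_dma_len(sg);

		if (i > 0 && (addr & mask))
			return false;
		if (i < nents - 1 && ((addr + len) & mask))
			return false;
	}
	return true;
}
--

With a check like this in the mapping path, contig IOs keep the PRP path
and only gapped IOs pay for SGLs, and there is no threshold left to debate.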

NVMe SSD vendors will be able to test this approach and report their 
numbers.



Thread overview: 13+ messages
2023-12-21  8:48 [PATCH] nvme: don't set a virt_boundary unless needed Christoph Hellwig
2023-12-21  9:30 ` Sagi Grimberg
2023-12-21 12:17   ` Christoph Hellwig
2023-12-21 12:32     ` Sagi Grimberg
2023-12-21 12:40       ` Christoph Hellwig
2023-12-25  9:13         ` Sagi Grimberg
2023-12-21 17:03     ` Keith Busch
2023-12-25  9:20       ` Sagi Grimberg
2023-12-22  1:16   ` Max Gurtovoy
2023-12-25 10:08     ` Sagi Grimberg
2023-12-25 10:36       ` Max Gurtovoy [this message]
2023-12-25 10:44         ` Sagi Grimberg
2023-12-25 12:31           ` Max Gurtovoy
