linux-nvme.lists.infradead.org archive mirror
From: swise@opengridcomputing.com (Steve Wise)
Subject: [PATCH RFC 1/2] nvme-rdma: Support 8K inline
Date: Mon, 14 May 2018 13:33:06 -0500
Message-ID: <33f015bb-f370-0fec-3ff3-aee07de1dbcf@opengridcomputing.com>
In-Reply-To: <20180511064828.GC8368@lst.de>



On 5/11/2018 1:48 AM, Christoph Hellwig wrote:
>> -#define NVME_RDMA_MAX_INLINE_SEGMENTS	1
>> +#define NVME_RDMA_MAX_INLINE_SEGMENTS	2
> Given how little space we use for just the sge array maybe we
> want to bump this to 4 once we need to deal with multiple entries?

Agreed.

>>  static int nvme_rdma_map_sg_inline(struct nvme_rdma_queue *queue,
>> -		struct nvme_rdma_request *req, struct nvme_command *c)
>> +		struct nvme_rdma_request *req, int count,
>> +		struct nvme_command *c)
> Just gut feeling, but I'd pass the count argument last.

Yea ok.

>>  	struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
>> +	u32 len;
>>  
>>  	req->sge[1].addr = sg_dma_address(req->sg_table.sgl);
>>  	req->sge[1].length = sg_dma_len(req->sg_table.sgl);
>>  	req->sge[1].lkey = queue->device->pd->local_dma_lkey;
>> +	len = req->sge[1].length;
>> +	if (count == 2) {
>> +		req->sge[2].addr = sg_dma_address(req->sg_table.sgl+1);
>> +		req->sge[2].length = sg_dma_len(req->sg_table.sgl+1);
>> +		req->sge[2].lkey = queue->device->pd->local_dma_lkey;
>> +		len += req->sge[2].length;
>> +	}
> I think this should be turned into a for loop, e.g.

Yes. And I think in a previous incarnation of this patch series, Sagi
suggested using an iterator pointer rather than indexing sge[] directly.

> 	u32 len = 0, i;
>
> 	for (i = 0; i < count; i++) {
> 		req->sge[1 + i].addr = sg_dma_address(&req->sg_table.sgl[i]);
> 		req->sge[1 + i].length = sg_dma_len(&req->sg_table.sgl[i]);
> 		req->sge[1 + i].lkey = queue->device->pd->local_dma_lkey;
> 		req->num_sge++;
> 		len += req->sge[1 + i].length;
> 	}
>> -	if (count == 1) {
>> +	if (count <= 2) {
> This should be NVME_RDMA_MAX_INLINE_SEGMENTS.

Yup.

Thanks for reviewing!

Steve.

Thread overview: 11+ messages
2018-05-09 15:38 [PATCH RFC 0/2] 8K Inline Support Steve Wise
2018-05-09 14:31 ` [PATCH RFC 1/2] nvme-rdma: Support 8K inline Steve Wise
2018-05-09 16:55   ` Parav Pandit
2018-05-09 19:28     ` Steve Wise
2018-05-11  6:48   ` Christoph Hellwig
2018-05-14 18:33     ` Steve Wise [this message]
2018-05-09 14:34 ` [PATCH RFC 2/2] nvmet-rdma: " Steve Wise
2018-05-14 10:16   ` Max Gurtovoy
2018-05-14 14:58     ` Steve Wise
2018-05-09 18:46 ` [PATCH RFC 0/2] 8K Inline Support Steve Wise
2018-05-11  6:19 ` Christoph Hellwig
