From: Zhu Yanjun <yanjun.zhu@linux.dev>
To: shaozhengchao <shaozhengchao@huawei.com>,
linux-rdma@vger.kernel.org, tangchengchang@huawei.com,
huangjunxian6@hisilicon.com
Cc: jgg@ziepe.ca, leon@kernel.org, chenglang@huawei.com,
wangxi11@huawei.com, liweihang@huawei.com,
weiyongjun1@huawei.com, yuehaibing@huawei.com
Subject: Re: [PATCH v2] RDMA/hns: fix return value in hns_roce_map_mr_sg
Date: Thu, 11 Apr 2024 09:05:52 +0200 [thread overview]
Message-ID: <d1a7e1db-e425-4c25-8d10-3614d62eab3a@linux.dev> (raw)
In-Reply-To: <7e31102c-74d0-b398-e776-a79adfbac579@huawei.com>
On 2024/4/11 8:26, shaozhengchao wrote:
>
>
> On 2024/4/11 14:08, Zhu Yanjun wrote:
>> On 2024/4/11 5:38, Zhengchao Shao wrote:
>>> As described in the ib_map_mr_sg function comment, it returns the
>>> number of sg elements that were mapped to the memory region. However,
>>> hns_roce_map_mr_sg returns the number of pages required for mapping
>>> the DMA area. Fix it.
>>>
>>> Fixes: 9b2cf76c9f05 ("RDMA/hns: Optimize PBL buffer allocation process")
>>> Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
>>> ---
>>> v2: fix the return value and coding format issues
>>> ---
>>> drivers/infiniband/hw/hns/hns_roce_mr.c | 15 +++++++--------
>>> 1 file changed, 7 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
>>> index 9e05b57a2d67..80c050d7d0ea 100644
>>> --- a/drivers/infiniband/hw/hns/hns_roce_mr.c
>>> +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
>>> @@ -441,18 +441,18 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
>>>      struct ib_device *ibdev = &hr_dev->ib_dev;
>>>      struct hns_roce_mr *mr = to_hr_mr(ibmr);
>>>      struct hns_roce_mtr *mtr = &mr->pbl_mtr;
>>> -    int ret = 0;
>>> +    int ret, sg_num = 0;
>>>
>>>      mr->npages = 0;
>>>      mr->page_list = kvcalloc(mr->pbl_mtr.hem_cfg.buf_pg_count,
>>>                               sizeof(dma_addr_t), GFP_KERNEL);
>>>      if (!mr->page_list)
>>> -        return ret;
>>> +        return sg_num;
>>>
>>> -    ret = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
>>> -    if (ret < 1) {
>>> +    sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
>>> +    if (sg_num < 1) {
>>>          ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n",
>>> -                  mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, ret);
>>> +                  mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, sg_num);
>>>          goto err_page_list;
>>>      }
>>> @@ -463,17 +463,16 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
>>>      ret = hns_roce_mtr_map(hr_dev, mtr, mr->page_list, mr->npages);
>>>      if (ret) {
>>>          ibdev_err(ibdev, "failed to map sg mtr, ret = %d.\n", ret);
>>> -        ret = 0;
>>> +        sg_num = 0;
>>>      } else {
>>>          mr->pbl_mtr.hem_cfg.buf_pg_shift = (u32)ilog2(ibmr->page_size);
>>> -        ret = mr->npages;
>>>      }
>>
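For context, ULP callers of ib_map_mr_sg() typically compare its return
value against sg_nents. A minimal sketch of that pattern (illustrative
only, not taken from any specific in-tree caller):

    int n = ib_map_mr_sg(mr, sg, sg_nents, NULL, PAGE_SIZE);
    if (n != sg_nents)      /* a page count here can spuriously mismatch */
            return n < 0 ? n : -EINVAL;

Since one sg element may span several pages, returning the page count
can make a successful mapping look like a failure to such a caller,
which is why the commit message treats the old return value as a bug.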
> Hi Yanjun:
> Thank you for your review. The return value of hns_roce_mtr_map only
> indicates whether the pages were mapped successfully. If sg_num is
> reused for it, there may be ambiguity. Or is there something I missed?
Sure. From my perspective, I just wanted to remove a local variable.
Your consideration also makes sense.
Zhu Yanjun
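
For reference, the two return conventions under discussion, as they
appear in the code quoted above (a sketch that assumes hns_roce_mtr_map()
follows the usual 0-or-negative-errno kernel convention):

    /* count convention: number of mapped sg elements on success, < 1 on failure */
    sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);

    /* errno convention: 0 on success, negative errno on failure */
    ret = hns_roce_mtr_map(hr_dev, mtr, mr->page_list, mr->npages);

Reusing sg_num for the second call would store a negative errno in the
variable that the function otherwise returns as an element count, which
is the ambiguity being pointed out.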
>
> Zhengchao Shao
>> In the above, can we replace the local variable ret with sg_num, so
>> that the local variable ret can be removed? A trivial issue.
>>
>> @@ -433,7 +433,7 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
>>      struct ib_device *ibdev = &hr_dev->ib_dev;
>>      struct hns_roce_mr *mr = to_hr_mr(ibmr);
>>      struct hns_roce_mtr *mtr = &mr->pbl_mtr;
>> -    int ret, sg_num = 0;
>> +    int sg_num = 0;
>>
>>      mr->npages = 0;
>>      mr->page_list = kvcalloc(mr->pbl_mtr.hem_cfg.buf_pg_count,
>> @@ -452,9 +452,9 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
>>      mtr->hem_cfg.region[0].count = mr->npages;
>>      mtr->hem_cfg.region[0].hopnum = mr->pbl_hop_num;
>>      mtr->hem_cfg.region_count = 1;
>> -    ret = hns_roce_mtr_map(hr_dev, mtr, mr->page_list, mr->npages);
>> -    if (ret) {
>> -        ibdev_err(ibdev, "failed to map sg mtr, ret = %d.\n", ret);
>> +    sg_num = hns_roce_mtr_map(hr_dev, mtr, mr->page_list, mr->npages);
>> +    if (sg_num) {
>> +        ibdev_err(ibdev, "failed to map sg mtr, ret = %d.\n", sg_num);
>>          sg_num = 0;
>>      } else {
>>          mr->pbl_mtr.hem_cfg.buf_pg_shift = (u32)ilog2(ibmr->page_size);
>>
>> Zhu Yanjun
>>
>>> err_page_list:
>>>      kvfree(mr->page_list);
>>>      mr->page_list = NULL;
>>> -    return ret;
>>> +    return sg_num;
>>>  }
>>> static void hns_roce_mw_free(struct hns_roce_dev *hr_dev,
>>
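
For readers skimming the archive, here is roughly how the function reads
with the v2 patch applied, reconstructed from the two hunks quoted above.
Treat it as a sketch: the signature and the hr_dev declaration are filled
in from the surrounding quotes rather than from the tree, and a line or
two of unchanged context may be missing.

    int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
                           int sg_nents, unsigned int *sg_offset)
    {
            struct hns_roce_dev *hr_dev = to_hr_dev(ibmr->device);
            struct ib_device *ibdev = &hr_dev->ib_dev;
            struct hns_roce_mr *mr = to_hr_mr(ibmr);
            struct hns_roce_mtr *mtr = &mr->pbl_mtr;
            int ret, sg_num = 0;

            mr->npages = 0;
            mr->page_list = kvcalloc(mr->pbl_mtr.hem_cfg.buf_pg_count,
                                     sizeof(dma_addr_t), GFP_KERNEL);
            if (!mr->page_list)
                    return sg_num;

            /* count convention: sg_num is what the function must return */
            sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset,
                                    hns_roce_set_page);
            if (sg_num < 1) {
                    ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n",
                              mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count,
                              sg_num);
                    goto err_page_list;
            }

            mtr->hem_cfg.region[0].count = mr->npages;
            mtr->hem_cfg.region[0].hopnum = mr->pbl_hop_num;
            mtr->hem_cfg.region_count = 1;

            /* errno convention: kept in a separate variable, per the v2 patch */
            ret = hns_roce_mtr_map(hr_dev, mtr, mr->page_list, mr->npages);
            if (ret) {
                    ibdev_err(ibdev, "failed to map sg mtr, ret = %d.\n", ret);
                    sg_num = 0;
            } else {
                    mr->pbl_mtr.hem_cfg.buf_pg_shift = (u32)ilog2(ibmr->page_size);
            }

    err_page_list:
            kvfree(mr->page_list);
            mr->page_list = NULL;

            return sg_num;
    }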
Thread overview: 6+ messages
2024-04-11 3:38 [PATCH v2] RDMA/hns: fix return value in hns_roce_map_mr_sg Zhengchao Shao
2024-04-11 5:59 ` Junxian Huang
2024-04-11 6:08 ` Zhu Yanjun
2024-04-11 6:26 ` shaozhengchao
2024-04-11 7:05 ` Zhu Yanjun [this message]
2024-04-16 11:59 ` Leon Romanovsky