From: Weihang Li <liweihang@huawei.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: <dledford@redhat.com>, <jgg@ziepe.ca>,
<linux-rdma@vger.kernel.org>, <linuxarm@huawei.com>
Subject: Re: [PATCH for-next 4/7] RDMA/hns: Optimize qp buffer allocation flow
Date: Sun, 26 Jan 2020 16:35:56 +0800
Message-ID: <61c73aa5-6f64-0dd4-1565-09cbe7264f9f@huawei.com>
In-Reply-To: <20200123143136.GO7018@unreal>
On 2020/1/23 22:31, Leon Romanovsky wrote:
> On Mon, Jan 20, 2020 at 04:19:34PM +0800, Weihang Li wrote:
>> From: Xi Wang <wangxi11@huawei.com>
>>
>> Encapsulate qp buffer allocation related code into 3 functions:
>> alloc_qp_buf(), map_qp_buf() and free_qp_buf().
>>
>> Signed-off-by: Xi Wang <wangxi11@huawei.com>
>> Signed-off-by: Weihang Li <liweihang@huawei.com>
>> ---
>> drivers/infiniband/hw/hns/hns_roce_device.h | 1 -
>> drivers/infiniband/hw/hns/hns_roce_qp.c | 268 +++++++++++++++-------------
>> 2 files changed, 147 insertions(+), 122 deletions(-)
>>
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
>> index 1f361e6..9ddeb2b 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_device.h
>> +++ b/drivers/infiniband/hw/hns/hns_roce_device.h
>> @@ -660,7 +660,6 @@ struct hns_roce_qp {
>> /* this define must less than HNS_ROCE_MAX_BT_REGION */
>> #define HNS_ROCE_WQE_REGION_MAX 3
>> struct hns_roce_buf_region regions[HNS_ROCE_WQE_REGION_MAX];
>> - int region_cnt;
>> int wqe_bt_pg_shift;
>>
>> u32 buff_size;
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
>> index 3bd5809..5184cb4 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_qp.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
>> @@ -716,23 +716,150 @@ static void free_rq_inline_buf(struct hns_roce_qp *hr_qp)
>> kfree(hr_qp->rq_inl_buf.wqe_list);
>> }
>>
>> +static int map_qp_buf(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
>> + u32 page_shift, bool is_user)
>> +{
>> + dma_addr_t *buf_list[ARRAY_SIZE(hr_qp->regions)] = { NULL };
>> + struct ib_device *ibdev = &hr_dev->ib_dev;
>> + struct hns_roce_buf_region *r;
>> + int region_count;
>> + int buf_count;
>> + int ret;
>> + int i;
>> +
>> + region_count = split_wqe_buf_region(hr_dev, hr_qp, hr_qp->regions,
>> + ARRAY_SIZE(hr_qp->regions), page_shift);
>> +
>> + /* alloc a tmp list for storing wqe buf address */
>> + ret = hns_roce_alloc_buf_list(hr_qp->regions, buf_list, region_count);
>> + if (ret) {
>> + ibdev_err(ibdev, "alloc buf_list error for create qp\n");
>> + return ret;
>> + }
>> +
>> + for (i = 0; i < region_count; i++) {
>> + r = &hr_qp->regions[i];
>> + if (is_user)
>> + buf_count = hns_roce_get_umem_bufs(hr_dev, buf_list[i],
>> + r->count, r->offset, hr_qp->umem,
>> + page_shift);
>> + else
>> + buf_count = hns_roce_get_kmem_bufs(hr_dev, buf_list[i],
>> + r->count, r->offset, &hr_qp->hr_buf);
>> +
>> + if (buf_count != r->count) {
>> + ibdev_err(ibdev, "get %s qp buf err,expect %d,ret %d.\n",
>> + is_user ? "user" : "kernel",
>> + r->count, buf_count);
>> + ret = -ENOBUFS;
>> + goto done;
>> + }
>> + }
>> +
>> + hr_qp->wqe_bt_pg_shift = calc_wqe_bt_page_shift(hr_dev, hr_qp->regions,
>> + region_count);
>> + hns_roce_mtr_init(&hr_qp->mtr, PAGE_SHIFT + hr_qp->wqe_bt_pg_shift,
>> + page_shift);
>> + ret = hns_roce_mtr_attach(hr_dev, &hr_qp->mtr, buf_list, hr_qp->regions,
>> + region_count);
>> + if (ret)
>> + ibdev_err(ibdev, "mtr attach error for create qp\n");
>> +
>> + goto done;
>> +
>> + hns_roce_mtr_cleanup(hr_dev, &hr_qp->mtr);
>> +done:
>> + hns_roce_free_buf_list(buf_list, region_count);
>> +
>> + return ret;
>> +}
>> +
>> +static int alloc_qp_buf(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
>> + struct ib_qp_init_attr *init_attr,
>> + struct ib_udata *udata, unsigned long addr)
>> +{
>> + u32 page_shift = PAGE_SHIFT + hr_dev->caps.mtt_buf_pg_sz;
>> + struct ib_device *ibdev = &hr_dev->ib_dev;
>> + bool is_rq_buf_inline;
>> + int ret;
>> +
>> + is_rq_buf_inline = (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) &&
>> + hns_roce_qp_has_rq(init_attr);
>> + if (is_rq_buf_inline) {
>> + ret = alloc_rq_inline_buf(hr_qp, init_attr);
>> + if (ret) {
>> + ibdev_err(ibdev, "alloc recv inline buffer error\n");
>> + return ret;
>> + }
>> + }
>> +
>> + if (udata) {
>> + hr_qp->umem = ib_umem_get(udata, addr, hr_qp->buff_size, 0);
>> + if (IS_ERR(hr_qp->umem)) {
>> + ibdev_err(ibdev, "get umem error for qp buf\n");
>> + ret = PTR_ERR(hr_qp->umem);
>> + goto err_inline;
>> + }
>> + } else {
>> + ret = hns_roce_buf_alloc(hr_dev, hr_qp->buff_size,
>> + (1 << page_shift) * 2,
>> + &hr_qp->hr_buf, page_shift);
>> + if (ret) {
>> + ibdev_err(ibdev, "alloc roce buf error\n");
>> + goto err_inline;
>> + }
>> + }
>> +
>> + ret = map_qp_buf(hr_dev, hr_qp, page_shift, udata);
>
>
> I don't remember what was the resolution if it is ok to rely on "udata"
> as an indicator of user/kernel flow.
Hi Leon,

Sorry, I don't know much about the previous discussions; I will check on
that. Thanks for the reminder.
>
>> + if (ret) {
>> + ibdev_err(ibdev, "map roce buf error\n");
>
> You put ibdev_err() on almost every line in map_qp_buf(), please leave
> only one place.
>
> Thanks
>
OK, thank you.
Weihang
Thread overview: 11+ messages
2020-01-20 8:19 [PATCH for-next 0/7] RDMA/hns: Refactor qp related code Weihang Li
2020-01-20 8:19 ` [PATCH for-next 1/7] RDMA/hns: Optimize qp destroy flow Weihang Li
2020-01-20 8:19 ` [PATCH for-next 2/7] RDMA/hns: Optimize qp context create and " Weihang Li
2020-01-20 8:19 ` [PATCH for-next 3/7] RDMA/hns: Optimize qp number assign flow Weihang Li
2020-01-20 8:19 ` [PATCH for-next 4/7] RDMA/hns: Optimize qp buffer allocation flow Weihang Li
2020-01-23 14:31 ` Leon Romanovsky
2020-01-23 22:55 ` Jason Gunthorpe
2020-01-26 8:35 ` Weihang Li [this message]
2020-01-20 8:19 ` [PATCH for-next 5/7] RDMA/hns: Optimize qp param setup flow Weihang Li
2020-01-20 8:19 ` [PATCH for-next 6/7] RDMA/hns: Optimize kernel qp wrid allocation flow Weihang Li
2020-01-20 8:19 ` [PATCH for-next 7/7] RDMA/hns: Optimize qp doorbell " Weihang Li