linux-rdma.vger.kernel.org archive mirror
From: yanjun.zhu@linux.dev
To: "Leon Romanovsky" <leon@kernel.org>, "Zhu Yanjun" <yanjun.zhu@intel.com>
Cc: mustafa.ismail@intel.com, shiraz.saleem@intel.com, jgg@ziepe.ca,
	linux-rdma@vger.kernel.org
Subject: Re: [PATCHv2 for-next 4/4] RDMA/irdma: Split CQ handler into irdma_reg_user_mr_type_cq
Date: Mon, 16 Jan 2023 03:03:55 +0000	[thread overview]
Message-ID: <a3d9e71659d80d619923b092e229b968@linux.dev> (raw)
In-Reply-To: <Y8PjTUxb/9HJ7XWH@unreal>

January 15, 2023 7:28 PM, "Leon Romanovsky" <leon@kernel.org> wrote:

> On Wed, Jan 11, 2023 at 07:06:17PM -0500, Zhu Yanjun wrote:
> 
>> From: Zhu Yanjun <yanjun.zhu@linux.dev>
>> 
>> Split the source code related to CQ handling into a new function.
>> 
>> Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
>> ---
>> drivers/infiniband/hw/irdma/verbs.c | 63 +++++++++++++++++------------
>> 1 file changed, 37 insertions(+), 26 deletions(-)
>> 
>> diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
>> index 74dd1972c325..3902c74d59f2 100644
>> --- a/drivers/infiniband/hw/irdma/verbs.c
>> +++ b/drivers/infiniband/hw/irdma/verbs.c
>> @@ -2867,6 +2867,40 @@ static int irdma_reg_user_mr_type_qp(struct irdma_mem_reg_req req,
>> return err;
>> }
>> 
>> +static int irdma_reg_user_mr_type_cq(struct irdma_mem_reg_req req,
>> + struct ib_udata *udata,
>> + struct irdma_mr *iwmr)
>> +{
>> + int err;
>> + u8 shadow_pgcnt = 1;
>> + bool use_pbles;
>> + struct irdma_ucontext *ucontext;
>> + unsigned long flags;
>> + u32 total;
>> + struct irdma_pbl *iwpbl = &iwmr->iwpbl;
>> + struct irdma_device *iwdev = to_iwdev(iwmr->ibmr.device);
> 
> It will be nice to see more structured variable initialization.
> 
> I'm not going to insist on it, but IMHO netdev reverse Christmas
> tree rule looks more appealing than this random list.

Got it. The variable declarations are now structured, and the netdev
reverse Christmas tree ordering is applied in the updated commits.
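
For reference, the reverse Christmas tree convention orders local
declarations from the longest line to the shortest. For the new helper,
the declaration list above would become something like this (body
elided):

```c
static int irdma_reg_user_mr_type_cq(struct irdma_mem_reg_req req,
				     struct ib_udata *udata,
				     struct irdma_mr *iwmr)
{
	struct irdma_device *iwdev = to_iwdev(iwmr->ibmr.device);
	struct irdma_pbl *iwpbl = &iwmr->iwpbl;
	struct irdma_ucontext *ucontext;
	unsigned long flags;
	u8 shadow_pgcnt = 1;
	bool use_pbles;
	u32 total;
	int err;
	/* ... body unchanged ... */
}
```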

> 
>> +
>> + if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_CQ_RESIZE)
>> + shadow_pgcnt = 0;
>> + total = req.cq_pages + shadow_pgcnt;
>> + if (total > iwmr->page_cnt)
>> + return -EINVAL;
>> +
>> + use_pbles = (req.cq_pages > 1);
>> + err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles);
>> + if (err)
>> + return err;
>> +
>> + ucontext = rdma_udata_to_drv_context(udata, struct irdma_ucontext,
>> + ibucontext);
>> + spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags);
>> + list_add_tail(&iwpbl->list, &ucontext->cq_reg_mem_list);
>> + iwpbl->on_list = true;
>> + spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags);
>> +
>> + return err;
> 
> return 0;

I will send out the latest commits very soon.

Zhu Yanjun

> 
>> +}
>> +
>> /**
>> * irdma_reg_user_mr - Register a user memory region
>> * @pd: ptr of pd
>> @@ -2882,16 +2916,10 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
>> {
>> #define IRDMA_MEM_REG_MIN_REQ_LEN offsetofend(struct irdma_mem_reg_req, sq_pages)
>> struct irdma_device *iwdev = to_iwdev(pd->device);
>> - struct irdma_ucontext *ucontext;
>> - struct irdma_pbl *iwpbl;
>> struct irdma_mr *iwmr;
>> struct ib_umem *region;
>> struct irdma_mem_reg_req req;
>> - u32 total;
>> - u8 shadow_pgcnt = 1;
>> - bool use_pbles = false;
>> - unsigned long flags;
>> - int err = -EINVAL;
>> + int err;
>> 
>> if (len > iwdev->rf->sc_dev.hw_attrs.max_mr_size)
>> return ERR_PTR(-EINVAL);
>> @@ -2918,8 +2946,6 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
>> return (struct ib_mr *)iwmr;
>> }
>> 
>> - iwpbl = &iwmr->iwpbl;
>> -
>> switch (req.reg_type) {
>> case IRDMA_MEMREG_TYPE_QP:
>> err = irdma_reg_user_mr_type_qp(req, udata, iwmr);
>> @@ -2928,25 +2954,9 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
>> 
>> break;
>> case IRDMA_MEMREG_TYPE_CQ:
>> - if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_CQ_RESIZE)
>> - shadow_pgcnt = 0;
>> - total = req.cq_pages + shadow_pgcnt;
>> - if (total > iwmr->page_cnt) {
>> - err = -EINVAL;
>> - goto error;
>> - }
>> -
>> - use_pbles = (req.cq_pages > 1);
>> - err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles);
>> + err = irdma_reg_user_mr_type_cq(req, udata, iwmr);
>> if (err)
>> goto error;
>> -
>> - ucontext = rdma_udata_to_drv_context(udata, struct irdma_ucontext,
>> - ibucontext);
>> - spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags);
>> - list_add_tail(&iwpbl->list, &ucontext->cq_reg_mem_list);
>> - iwpbl->on_list = true;
>> - spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags);
>> break;
>> case IRDMA_MEMREG_TYPE_MEM:
>> err = irdma_reg_user_mr_type_mem(iwmr, access);
>> @@ -2955,6 +2965,7 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
>> 
>> break;
>> default:
>> + err = -EINVAL;
>> goto error;
>> }
>> 
>> --
>> 2.31.1


Thread overview: 12+ messages
2023-01-12  0:06 [PATCHv2 for-next 0/4] RDMA/irdma: Refactor irdma_reg_user_mr function Zhu Yanjun
2023-01-12  0:06 ` [PATCHv2 for-next 1/4] RDMA/irdma: Split MEM handler into irdma_reg_user_mr_type_mem Zhu Yanjun
2023-01-15 11:20   ` Leon Romanovsky
2023-01-16  2:56     ` yanjun.zhu
2023-01-12  0:06 ` [PATCHv2 for-next 2/4] RDMA/irdma: Split mr alloc and free into new functions Zhu Yanjun
2023-01-12  0:06 ` [PATCHv2 for-next 3/4] RDMA/irdma: Split QP handler into irdma_reg_user_mr_type_qp Zhu Yanjun
2023-01-15 11:23   ` Leon Romanovsky
2023-01-16  2:58     ` yanjun.zhu
2023-01-12  0:06 ` [PATCHv2 for-next 4/4] RDMA/irdma: Split CQ handler into irdma_reg_user_mr_type_cq Zhu Yanjun
2023-01-15 11:28   ` Leon Romanovsky
2023-01-16  3:03     ` yanjun.zhu [this message]
2023-01-12 16:42 ` [PATCHv2 for-next 0/4] RDMA/irdma: Refactor irdma_reg_user_mr function Saleem, Shiraz
