From: liweihang <liweihang@huawei.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
	Doug Ledford <dledford@redhat.com>,
	"Huwei (Xavier)" <xavier.huwei@huawei.com>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	oulijun <oulijun@huawei.com>
Subject: Re: [PATCH v2 12/17] RDMA/hns: Use ib_umem_num_dma_blocks() instead of opencoding
Date: Mon, 7 Sep 2020 08:11:54 +0000	[thread overview]
Message-ID: <d0aea0dfb6154838bfded3eeacb22221@huawei.com>
In-Reply-To: <12-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com>

On 2020/9/5 6:42, Jason Gunthorpe wrote:
> mtr_umem_page_count() does the same thing, so replace it with the core code.
> 
> Also, ib_umem_find_best_pgsz() should always be called to check that the
> umem meets the page_size requirement. If there is a limited set of
> page_sizes that work, the pgsz_bitmap should be set to that set. A return
> of 0 is a failure and the umem cannot be used.
> 
> Lightly tidy the control flow to implement this flow properly.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  drivers/infiniband/hw/hns/hns_roce_mr.c | 49 ++++++++++---------------
>  1 file changed, 19 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
> index e5df3884b41dda..16699f6bb03a51 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_mr.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
> @@ -707,19 +707,6 @@ static inline size_t mtr_bufs_size(struct hns_roce_buf_attr *attr)
>  	return size;
>  }
>  
> -static inline int mtr_umem_page_count(struct ib_umem *umem,
> -				      unsigned int page_shift)
> -{
> -	int count = ib_umem_page_count(umem);
> -
> -	if (page_shift >= PAGE_SHIFT)
> -		count >>= page_shift - PAGE_SHIFT;
> -	else
> -		count <<= PAGE_SHIFT - page_shift;
> -
> -	return count;
> -}
> -
>  static inline size_t mtr_kmem_direct_size(bool is_direct, size_t alloc_size,
>  					  unsigned int page_shift)
>  {
> @@ -767,12 +754,10 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
>  			  struct ib_udata *udata, unsigned long user_addr)
>  {
>  	struct ib_device *ibdev = &hr_dev->ib_dev;
> -	unsigned int max_pg_shift = buf_attr->page_shift;
> -	unsigned int best_pg_shift = 0;
> +	unsigned int best_pg_shift;
>  	int all_pg_count = 0;
>  	size_t direct_size;
>  	size_t total_size;
> -	unsigned long tmp;
>  	int ret = 0;
>  
>  	total_size = mtr_bufs_size(buf_attr);
> @@ -782,6 +767,9 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
>  	}
>  
>  	if (udata) {
> +		unsigned long pgsz_bitmap;
> +		unsigned long page_size;
> +
>  		mtr->kmem = NULL;
>  		mtr->umem = ib_umem_get(ibdev, user_addr, total_size,
>  					buf_attr->user_access);
> @@ -790,15 +778,17 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
>  				  PTR_ERR(mtr->umem));
>  			return -ENOMEM;
>  		}
> -		if (buf_attr->fixed_page) {
> -			best_pg_shift = max_pg_shift;
> -		} else {
> -			tmp = GENMASK(max_pg_shift, 0);
> -			ret = ib_umem_find_best_pgsz(mtr->umem, tmp, user_addr);
> -			best_pg_shift = (ret <= PAGE_SIZE) ?
> -					PAGE_SHIFT : ilog2(ret);
> -		}
> -		all_pg_count = mtr_umem_page_count(mtr->umem, best_pg_shift);
> +		if (buf_attr->fixed_page)
> +			pgsz_bitmap = 1 << buf_attr->page_shift;
> +		else
> +			pgsz_bitmap = GENMASK(buf_attr->page_shift, PAGE_SHIFT);
> +
> +		page_size = ib_umem_find_best_pgsz(mtr->umem, pgsz_bitmap,
> +						   user_addr);
> +		if (!page_size)
> +			return -EINVAL;
> +		best_pg_shift = order_base_2(page_size);
> +		all_pg_count = ib_umem_num_dma_blocks(mtr->umem, page_size);
>  		ret = 0;
>  	} else {
>  		mtr->umem = NULL;
> @@ -808,16 +798,15 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
>  			return -ENOMEM;
>  		}
>  		direct_size = mtr_kmem_direct_size(is_direct, total_size,
> -						   max_pg_shift);
> +						   buf_attr->page_shift);
>  		ret = hns_roce_buf_alloc(hr_dev, total_size, direct_size,
> -					 mtr->kmem, max_pg_shift);
> +					 mtr->kmem, buf_attr->page_shift);
>  		if (ret) {
>  			ibdev_err(ibdev, "Failed to alloc kmem, ret %d\n", ret);
>  			goto err_alloc_mem;
> -		} else {
> -			best_pg_shift = max_pg_shift;
> -			all_pg_count = mtr->kmem->npages;
>  		}
> +		best_pg_shift = buf_attr->page_shift;
> +		all_pg_count = mtr->kmem->npages;
>  	}
>  
>  	/* must bigger than minimum hardware page shift */
> 

Thanks

Acked-by: Weihang Li <liweihang@huawei.com>
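
For readers following the conversion above, here is a minimal userspace
sketch of the aligned-block counting that ib_umem_num_dma_blocks() provides
in place of the removed mtr_umem_page_count(). It is illustration only,
assuming the count is "number of page_size-aligned blocks covering
[iova, iova + length)"; num_dma_blocks() below is a hypothetical stand-in,
not the kernel implementation.

/*
 * Illustration only: userspace approximation of the block counting
 * discussed in the patch.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t align_down(uint64_t x, uint64_t a) { return x & ~(a - 1); }
static uint64_t align_up(uint64_t x, uint64_t a)   { return (x + a - 1) & ~(a - 1); }

/* Blocks of size page_size needed to cover [iova, iova + length). */
static uint64_t num_dma_blocks(uint64_t iova, uint64_t length, uint64_t page_size)
{
	return (align_up(iova + length, page_size) -
		align_down(iova, page_size)) / page_size;
}

int main(void)
{
	/* 20 KiB starting 1 KiB into a 4 KiB page needs 6 blocks of 4 KiB... */
	printf("4K blocks:  %llu\n",
	       (unsigned long long)num_dma_blocks(0x1400, 20 * 1024, 4096));
	/* ...but fits in a single 64 KiB block. */
	printf("64K blocks: %llu\n",
	       (unsigned long long)num_dma_blocks(0x1400, 20 * 1024, 65536));
	return 0;
}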

Thread overview: 28+ messages
2020-09-04 22:41 [PATCH v2 00/17] RDMA: Improve use of umem in DMA drivers Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 01/17] RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 02/17] RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz() Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 03/17] RDMA/umem: Use simpler logic for ib_umem_find_best_pgsz() Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 04/17] RDMA/umem: Add rdma_umem_for_each_dma_block() Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 05/17] RDMA/umem: Replace for_each_sg_dma_page with rdma_umem_for_each_dma_block Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 06/17] RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks() Jason Gunthorpe
2020-09-07 12:16   ` Gal Pressman
2020-09-11 13:21   ` Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 07/17] RDMA/efa: Use ib_umem_num_dma_pages() Jason Gunthorpe
2020-09-07 12:19   ` Gal Pressman
2020-09-08 13:48     ` Jason Gunthorpe
2020-09-09  8:18       ` Gal Pressman
2020-09-09 11:14         ` Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 08/17] RDMA/i40iw: " Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 09/17] RDMA/qedr: Use rdma_umem_for_each_dma_block() instead of open-coding Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 10/17] RDMA/qedr: Use ib_umem_num_dma_blocks() instead of ib_umem_page_count() Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 11/17] RDMA/bnxt: Do not use ib_umem_page_count() or ib_umem_num_pages() Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 12/17] RDMA/hns: Use ib_umem_num_dma_blocks() instead of opencoding Jason Gunthorpe
2020-09-07  8:11   ` liweihang [this message]
2020-09-04 22:41 ` [PATCH v2 13/17] RDMA/ocrdma: Use ib_umem_num_dma_blocks() instead of ib_umem_page_count() Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 14/17] RDMA/pvrdma: " Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 15/17] RDMA/mlx4: Use ib_umem_num_dma_blocks() Jason Gunthorpe
2020-09-04 22:41 ` [PATCH v2 16/17] RDMA/qedr: Remove fbo and zbva from the MR Jason Gunthorpe
2020-09-06  8:01   ` [EXT] " Michal Kalderon
2020-09-04 22:41 ` [PATCH v2 17/17] RDMA/ocrdma: Remove fbo from MR Jason Gunthorpe
2020-09-06  7:21   ` Leon Romanovsky
2020-09-09 18:38 ` [PATCH v2 00/17] RDMA: Improve use of umem in DMA drivers Jason Gunthorpe
