public inbox for linux-rdma@vger.kernel.org
From: Leon Romanovsky <leon@kernel.org>
To: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
Cc: linux-rdma@vger.kernel.org, jgg@ziepe.ca, zyjzyj2000@gmail.com,
	lizhijian@fujitsu.com
Subject: Re: [PATCH for-next v1 1/2] RDMA/rxe: Enable ODP in RDMA FLUSH operation
Date: Mon, 17 Mar 2025 20:22:47 +0200	[thread overview]
Message-ID: <20250317182247.GY1322339@unreal> (raw)
In-Reply-To: <20250314081056.3496708-2-matsuda-daisuke@fujitsu.com>

On Fri, Mar 14, 2025 at 05:10:55PM +0900, Daisuke Matsuda wrote:
> For persistent memories, add rxe_odp_flush_pmem_iova() so that ODP-specific
> steps are executed. Otherwise, no additional consideration is required.
> 
> Signed-off-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
> ---
>  drivers/infiniband/sw/rxe/rxe.c      |  1 +
>  drivers/infiniband/sw/rxe/rxe_loc.h  |  7 +++
>  drivers/infiniband/sw/rxe/rxe_odp.c  | 73 ++++++++++++++++++++++++++--
>  drivers/infiniband/sw/rxe/rxe_resp.c | 13 ++---
>  include/rdma/ib_verbs.h              |  1 +
>  5 files changed, 85 insertions(+), 10 deletions(-)

<...>

>  
> +static unsigned long rxe_odp_iova_to_index(struct ib_umem_odp *umem_odp, u64 iova)
> +{
> +	return (iova - ib_umem_start(umem_odp)) >> umem_odp->page_shift;
> +}
> +
> +static unsigned long rxe_odp_iova_to_page_offset(struct ib_umem_odp *umem_odp, u64 iova)
> +{
> +	return iova & (BIT(umem_odp->page_shift) - 1);
> +}
> +
>  static int rxe_odp_map_range_and_lock(struct rxe_mr *mr, u64 iova, int length, u32 flags)
>  {
>  	struct ib_umem_odp *umem_odp = to_ib_umem_odp(mr->umem);
> @@ -190,8 +201,8 @@ static int __rxe_odp_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
>  	size_t offset;
>  	u8 *user_va;
>  
> -	idx = (iova - ib_umem_start(umem_odp)) >> umem_odp->page_shift;
> -	offset = iova & (BIT(umem_odp->page_shift) - 1);
> +	idx = rxe_odp_iova_to_index(umem_odp, iova);
> +	offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
>  
>  	while (length > 0) {
>  		u8 *src, *dest;
> @@ -277,8 +288,8 @@ static int rxe_odp_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
>  		return RESPST_ERR_RKEY_VIOLATION;
>  	}
>  
> -	idx = (iova - ib_umem_start(umem_odp)) >> umem_odp->page_shift;
> -	page_offset = iova & (BIT(umem_odp->page_shift) - 1);
> +	idx = rxe_odp_iova_to_index(umem_odp, iova);
> +	page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
>  	page = hmm_pfn_to_page(umem_odp->pfn_list[idx]);
>  	if (!page)
>  		return RESPST_ERR_RKEY_VIOLATION;
> @@ -324,3 +335,57 @@ int rxe_odp_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
>  
>  	return err;
>  }
> +
> +int rxe_odp_flush_pmem_iova(struct rxe_mr *mr, u64 iova,
> +			    unsigned int length)
> +{

This function looks very similar to the existing rxe_flush_pmem_iova().
Can't you reuse that function instead of duplicating it?

Thanks

> +	struct ib_umem_odp *umem_odp = to_ib_umem_odp(mr->umem);
> +	unsigned int page_offset;
> +	unsigned long index;
> +	struct page *page;
> +	unsigned int bytes;
> +	int err;
> +	u8 *va;
> +
> +	/* mr must be valid even if length is zero */
> +	if (WARN_ON(!mr))
> +		return -EINVAL;
> +
> +	if (length == 0)
> +		return 0;
> +
> +	err = mr_check_range(mr, iova, length);
> +	if (err)
> +		return err;
> +
> +	err = rxe_odp_map_range_and_lock(mr, iova, length,
> +					 RXE_PAGEFAULT_DEFAULT);
> +	if (err)
> +		return err;
> +
> +	while (length > 0) {
> +		index = rxe_odp_iova_to_index(umem_odp, iova);
> +		page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
> +
> +		page = hmm_pfn_to_page(umem_odp->pfn_list[index]);
> +		if (!page) {
> +			mutex_unlock(&umem_odp->umem_mutex);
> +			return -EFAULT;
> +		}
> +
> +		bytes = min_t(unsigned int, length,
> +			      mr_page_size(mr) - page_offset);
> +
> +		va = kmap_local_page(page);
> +		arch_wb_cache_pmem(va + page_offset, bytes);
> +		kunmap_local(va);
> +
> +		length -= bytes;
> +		iova += bytes;
> +		page_offset = 0;
> +	}
> +
> +	mutex_unlock(&umem_odp->umem_mutex);
> +
> +	return 0;
> +}


Thread overview: 9+ messages
2025-03-14  8:10 [PATCH for-next v1 0/2] RDMA/rxe: RDMA FLUSH and ATOMIC WRITE with ODP Daisuke Matsuda
2025-03-14  8:10 ` [PATCH for-next v1 1/2] RDMA/rxe: Enable ODP in RDMA FLUSH operation Daisuke Matsuda
2025-03-15 19:23   ` Zhu Yanjun
2025-03-17 18:22   ` Leon Romanovsky [this message]
2025-03-18  9:47     ` Daisuke Matsuda (Fujitsu)
2025-03-14  8:10 ` [PATCH for-next v1 2/2] RDMA/rxe: Enable ODP in ATOMIC WRITE operation Daisuke Matsuda
2025-03-15 19:23   ` Zhu Yanjun
2025-03-15 19:21 ` [PATCH for-next v1 0/2] RDMA/rxe: RDMA FLUSH and ATOMIC WRITE with ODP Zhu Yanjun
2025-03-17  5:22   ` Daisuke Matsuda (Fujitsu)
