From: Bob Pearson <rpearsonhpe@gmail.com>
To: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>,
linux-rdma@vger.kernel.org, leonro@nvidia.com, jgg@nvidia.com,
zyjzyj2000@gmail.com
Cc: linux-kernel@vger.kernel.org, yangx.jy@fujitsu.com,
lizhijian@fujitsu.com, y-goto@fujitsu.com
Subject: Re: [PATCH for-next v5 7/7] RDMA/rxe: Add support for the traditional Atomic operations with ODP
Date: Mon, 22 May 2023 13:49:31 -0500
Message-ID: <edad669f-e84e-a9ba-9554-87ae1d571931@gmail.com>
In-Reply-To: <2841b1a86987564f14f15ec5b59f6a8bead86b30.1684397037.git.matsuda-daisuke@fujitsu.com>
On 5/18/23 03:21, Daisuke Matsuda wrote:
> Enable 'fetch and add' and 'compare and swap' operations to manipulate
> data in an ODP-enabled MR. This consists of the following steps:
> 1. Check the driver page table (umem_odp->dma_list) to see if the target
>    page is both readable and writable.
> 2. If not, trigger a page fault to map the page.
> 3. Update the entry in the MR xarray.
> 4. Execute the operation.
>
> umem_mutex is used to ensure that dma_list (the array of page addresses
> backing the MR) is not changed while it is being checked, and that the
> target page is not invalidated before the data access completes.
>
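
For readers new to ODP, step 1 above boils down to testing permission
bits that ib_umem_odp keeps alongside each dma_list entry. A minimal
sketch of that check -- the bit names are real (they come from
include/rdma/ib_umem_odp.h), but the helper itself is hypothetical and
not part of this patch:

/* Hypothetical helper, for illustration only: test whether the page
 * backing 'iova' is mapped with both read and write permission in the
 * driver page table. Caller must hold umem_odp->umem_mutex.
 */
static bool rxe_odp_page_rw(struct ib_umem_odp *umem_odp, u64 iova)
{
	unsigned long idx =
		(iova - ib_umem_start(umem_odp)) >> umem_odp->page_shift;
	dma_addr_t dma = umem_odp->dma_list[idx];

	return (dma & ODP_READ_ALLOWED_BIT) && (dma & ODP_WRITE_ALLOWED_BIT);
}
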
> Signed-off-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
> ---
> drivers/infiniband/sw/rxe/rxe.c | 1 +
> drivers/infiniband/sw/rxe/rxe_loc.h | 9 +++++++++
> drivers/infiniband/sw/rxe/rxe_odp.c | 26 ++++++++++++++++++++++++++
> drivers/infiniband/sw/rxe/rxe_resp.c | 5 ++++-
> 4 files changed, 40 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
> index 207a022156f0..abd3267c2873 100644
> --- a/drivers/infiniband/sw/rxe/rxe.c
> +++ b/drivers/infiniband/sw/rxe/rxe.c
> @@ -88,6 +88,7 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
> rxe->attr.odp_caps.per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_RECV;
> rxe->attr.odp_caps.per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_WRITE;
> rxe->attr.odp_caps.per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_READ;
> + rxe->attr.odp_caps.per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_ATOMIC;
> rxe->attr.odp_caps.per_transport_caps.rc_odp_caps |= IB_ODP_SUPPORT_SRQ_RECV;
> }
> }
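
This hunk is what makes the new capability visible to applications. A
hedged sketch of the userspace side, using only standard libibverbs
calls (ctx is an already-opened ibv_context; the helper name is mine):

#include <stdbool.h>
#include <infiniband/verbs.h>

/* Returns true if the device advertises ODP atomics on RC QPs,
 * i.e. the bit the hunk above sets for rxe.
 */
static bool rc_odp_atomic_supported(struct ibv_context *ctx)
{
	struct ibv_device_attr_ex attr;

	if (ibv_query_device_ex(ctx, NULL, &attr))
		return false;

	return attr.odp_caps.per_transport_caps.rc_odp_caps &
	       IBV_ODP_SUPPORT_ATOMIC;
}
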
> diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
> index 4b95c8c46bdc..b9d2985774ee 100644
> --- a/drivers/infiniband/sw/rxe/rxe_loc.h
> +++ b/drivers/infiniband/sw/rxe/rxe_loc.h
> @@ -208,6 +208,9 @@ int rxe_odp_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length,
> u64 iova, int access_flags, struct rxe_mr *mr);
> int rxe_odp_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
> enum rxe_mr_copy_dir dir);
> +int rxe_odp_mr_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
> + u64 compare, u64 swap_add, u64 *orig_val);
> +
> #else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
> static inline int
> rxe_odp_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
> @@ -221,6 +224,12 @@ rxe_odp_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
> {
> return -EOPNOTSUPP;
> }
> +static inline int
> +rxe_odp_mr_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
> + u64 compare, u64 swap_add, u64 *orig_val)
> +{
> + return RESPST_ERR_UNSUPPORTED_OPCODE;
> +}
>
> #endif /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
>
> diff --git a/drivers/infiniband/sw/rxe/rxe_odp.c b/drivers/infiniband/sw/rxe/rxe_odp.c
> index cbe5d0c3fcc4..194b1fab98b7 100644
> --- a/drivers/infiniband/sw/rxe/rxe_odp.c
> +++ b/drivers/infiniband/sw/rxe/rxe_odp.c
> @@ -283,3 +283,29 @@ int rxe_odp_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
>
> return err;
> }
> +
> +int rxe_odp_mr_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
> + u64 compare, u64 swap_add, u64 *orig_val)
> +{
> + int err;
> + struct ib_umem_odp *umem_odp = to_ib_umem_odp(mr->umem);
> +
> + /* If pagefault is not required, umem mutex will be held until the
> + * atomic operation completes. Otherwise, it is released and locked
> + * again in rxe_odp_map_range() to let invalidation handler do its
> + * work meanwhile.
> + */
> + mutex_lock(&umem_odp->umem_mutex);
> +
> + /* Atomic operations manipulate a single char. */
> + err = rxe_odp_map_range(mr, iova, sizeof(char), 0);
> + if (err)
> + return err;
> +
> + err = rxe_mr_do_atomic_op(mr, iova, opcode, compare,
> + swap_add, orig_val);
> +
> + mutex_unlock(&umem_odp->umem_mutex);
> +
> + return err;
> +}
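
The locking comment above is easier to see with the other side of the
lock in view: the invalidation callback added in patch 4/7 takes the
same umem_mutex before tearing down mappings, so holding the mutex
across rxe_mr_do_atomic_op() keeps the page mapped for the duration of
the atomic. Roughly, as reconstructed from the series (a sketch, not
necessarily the exact committed code):

static bool rxe_ib_invalidate_range(struct mmu_interval_notifier *mni,
				    const struct mmu_notifier_range *range,
				    unsigned long cur_seq)
{
	struct ib_umem_odp *umem_odp =
		container_of(mni, struct ib_umem_odp, notifier);
	unsigned long start, end;

	if (!mmu_notifier_range_blockable(range))
		return false;

	/* blocks here while an atomic op holds umem_mutex */
	mutex_lock(&umem_odp->umem_mutex);
	mmu_interval_set_seq(mni, cur_seq);

	start = max_t(u64, ib_umem_start(umem_odp), range->start);
	end = min_t(u64, ib_umem_end(umem_odp), range->end);

	/* clears the corresponding dma_list entries */
	ib_umem_odp_unmap_dma_pages(umem_odp, start, end);

	mutex_unlock(&umem_odp->umem_mutex);
	return true;
}
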
> diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
> index 90c31c4f2944..0a918145dc07 100644
> --- a/drivers/infiniband/sw/rxe/rxe_resp.c
> +++ b/drivers/infiniband/sw/rxe/rxe_resp.c
> @@ -684,7 +684,10 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
> u64 iova = qp->resp.va + qp->resp.offset;
>
> if (mr->odp_enabled)
> - err = RESPST_ERR_UNSUPPORTED_OPCODE;
> + err = rxe_odp_mr_atomic_op(mr, iova, pkt->opcode,
> + atmeth_comp(pkt),
> + atmeth_swap_add(pkt),
> + &res->atomic.orig_val);
> else
> err = rxe_mr_do_atomic_op(mr, iova, pkt->opcode,
> atmeth_comp(pkt),
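
With the whole series applied, the end-to-end behavior looks like this
from userspace. A sketch under assumed setup: the responder registered
its buffer with IBV_ACCESS_ON_DEMAND | IBV_ACCESS_REMOTE_ATOMIC, and
qp, remote_addr and rkey come from the usual connection establishment,
which is elided here:

#include <stdint.h>
#include <infiniband/verbs.h>

/* Post a fetch-and-add against a remote ODP-registered MR.
 * Illustrative only; error handling and connection setup elided.
 */
static int post_odp_fetch_add(struct ibv_qp *qp, struct ibv_mr *mr,
			      uint64_t *local_buf, uint64_t remote_addr,
			      uint32_t rkey)
{
	struct ibv_sge sge = {
		.addr	= (uintptr_t)local_buf,	/* original value lands here */
		.length	= sizeof(uint64_t),
		.lkey	= mr->lkey,
	};
	struct ibv_send_wr wr = {
		.opcode		= IBV_WR_ATOMIC_FETCH_AND_ADD,
		.sg_list	= &sge,
		.num_sge	= 1,
		.send_flags	= IBV_SEND_SIGNALED,
		.wr.atomic	= {
			.remote_addr	= remote_addr,
			.rkey		= rkey,
			.compare_add	= 1,	/* value to add */
		},
	}, *bad_wr;

	return ibv_post_send(qp, &wr, &bad_wr);
}
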
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>