* [PATCH rdma-next v1 0/3] ODP support for mlx5 DC QPs
@ 2019-08-04 10:00 Leon Romanovsky
  2019-08-04 10:00 ` [PATCH mlx5-next v1 1/3] IB/mlx5: Query ODP capabilities for DC Leon Romanovsky
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Leon Romanovsky @ 2019-08-04 10:00 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Michael Guralnik, Moni Shoua,
	Saeed Mahameed, linux-netdev

From: Leon Romanovsky <leonro@mellanox.com>

Changelog
 v1:
 * Fixed alignment to u64 in mlx5-abi.h (Gal P.)
 v0:
 * https://lore.kernel.org/linux-rdma/20190801122139.25224-1-leon@kernel.org

---------------------------------------------------------------------------------
From Michael,

This series adds on-demand paging (ODP) support for the DC transport.
It adds handling of DC WQE parsing upon page faults and exposes the
new capabilities.

As DC is an mlx5-only transport, the capabilities are exposed to
userspace through the direct-verbs mechanism, namely via
mlx5dv_query_device().
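
For illustration, a consumer would query the new capability roughly as
follows. This is only a sketch: the dc_odp_caps field and the
MLX5DV_CONTEXT_MASK_DC_ODP_CAPS comp_mask bit are the names assumed for
the matching rdma-core change and may differ in the final library API.

	#include <infiniband/mlx5dv.h>

	/* ctx is an open ibv_context on an mlx5 device */
	struct mlx5dv_context attrs = {};

	/* assumed comp_mask bit requesting the DC ODP caps field */
	attrs.comp_mask = MLX5DV_CONTEXT_MASK_DC_ODP_CAPS;

	if (!mlx5dv_query_device(ctx, &attrs) &&
	    (attrs.comp_mask & MLX5DV_CONTEXT_MASK_DC_ODP_CAPS)) {
		if (attrs.dc_odp_caps & IBV_ODP_SUPPORT_SEND)
			; /* DC initiator sends may fault on ODP MRs */
	}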

Thanks

Michael Guralnik (3):
  IB/mlx5: Query ODP capabilities for DC
  IB/mlx5: Expose ODP for DC capabilities to user
  IB/mlx5: Add page fault handler for DC initiator WQE

 drivers/infiniband/hw/mlx5/main.c             |  6 +++++
 drivers/infiniband/hw/mlx5/mlx5_ib.h          |  1 +
 drivers/infiniband/hw/mlx5/odp.c              | 27 ++++++++++++++++++-
 .../net/ethernet/mellanox/mlx5/core/main.c    |  6 +++++
 include/linux/mlx5/mlx5_ifc.h                 |  4 ++-
 include/uapi/rdma/mlx5-abi.h                  |  3 +++
 6 files changed, 45 insertions(+), 2 deletions(-)

--
2.20.1



* [PATCH mlx5-next v1 1/3] IB/mlx5: Query ODP capabilities for DC
  2019-08-04 10:00 [PATCH rdma-next v1 0/3] ODP support for mlx5 DC QPs Leon Romanovsky
@ 2019-08-04 10:00 ` Leon Romanovsky
  2019-08-05 18:23   ` Saeed Mahameed
  2019-08-04 10:00 ` [PATCH rdma-next v1 2/3] IB/mlx5: Expose ODP for DC capabilities to user Leon Romanovsky
  2019-08-04 10:00 ` [PATCH rdma-next v1 3/3] IB/mlx5: Add page fault handler for DC initiator WQE Leon Romanovsky
  2 siblings, 1 reply; 6+ messages in thread
From: Leon Romanovsky @ 2019-08-04 10:00 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Michael Guralnik, Moni Shoua,
	Saeed Mahameed, linux-netdev

From: Michael Guralnik <michaelgur@mellanox.com>

Set current capabilities of ODP for DC to max capabilities and cache
them in mlx5_ib.
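
For context, handle_hca_cap_odp() in the mlx5 core promotes each listed
per-transport ODP field from the max HCA capabilities into the current
capabilities at driver load; this patch only extends that list with the
dc_odp_caps fields. A rough sketch of what the existing
ODP_CAP_SET_MAX() macro does (not part of this patch, shown only for
illustration):

	#define ODP_CAP_SET_MAX(dev, field)                               \
		do {                                                      \
			u32 _res = MLX5_CAP_ODP_MAX(dev, field);          \
			if (_res) {                                       \
				do_set = true;                            \
				MLX5_SET(odp_cap, set_hca_cap, field,     \
					 _res);                           \
			}                                                 \
		} while (0)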

Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h           |  1 +
 drivers/infiniband/hw/mlx5/odp.c               | 18 ++++++++++++++++++
 drivers/net/ethernet/mellanox/mlx5/core/main.c |  6 ++++++
 include/linux/mlx5/mlx5_ifc.h                  |  4 +++-
 4 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index cb41a7e6255a..f99c71b3c876 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -967,6 +967,7 @@ struct mlx5_ib_dev {
 	struct mutex			slow_path_mutex;
 	int				fill_delay;
 	struct ib_odp_caps	odp_caps;
+	uint32_t		dc_odp_caps;
 	u64			odp_max_size;
 	struct mlx5_ib_pf_eq	odp_pf_eq;
 
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index b0c5de39d186..5e87a5e25574 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -353,6 +353,24 @@ void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
 	if (MLX5_CAP_ODP(dev->mdev, xrc_odp_caps.srq_receive))
 		caps->per_transport_caps.xrc_odp_caps |= IB_ODP_SUPPORT_SRQ_RECV;
 
+	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.send))
+		dev->dc_odp_caps |= IB_ODP_SUPPORT_SEND;
+
+	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.receive))
+		dev->dc_odp_caps |= IB_ODP_SUPPORT_RECV;
+
+	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.write))
+		dev->dc_odp_caps |= IB_ODP_SUPPORT_WRITE;
+
+	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.read))
+		dev->dc_odp_caps |= IB_ODP_SUPPORT_READ;
+
+	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.atomic))
+		dev->dc_odp_caps |= IB_ODP_SUPPORT_ATOMIC;
+
+	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.srq_receive))
+		dev->dc_odp_caps |= IB_ODP_SUPPORT_SRQ_RECV;
+
 	if (MLX5_CAP_GEN(dev->mdev, fixed_buffer_size) &&
 	    MLX5_CAP_GEN(dev->mdev, null_mkey) &&
 	    MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset))
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index b15b27a497fc..3995fc6d4d34 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -495,6 +495,12 @@ static int handle_hca_cap_odp(struct mlx5_core_dev *dev)
 	ODP_CAP_SET_MAX(dev, xrc_odp_caps.write);
 	ODP_CAP_SET_MAX(dev, xrc_odp_caps.read);
 	ODP_CAP_SET_MAX(dev, xrc_odp_caps.atomic);
+	ODP_CAP_SET_MAX(dev, dc_odp_caps.srq_receive);
+	ODP_CAP_SET_MAX(dev, dc_odp_caps.send);
+	ODP_CAP_SET_MAX(dev, dc_odp_caps.receive);
+	ODP_CAP_SET_MAX(dev, dc_odp_caps.write);
+	ODP_CAP_SET_MAX(dev, dc_odp_caps.read);
+	ODP_CAP_SET_MAX(dev, dc_odp_caps.atomic);
 
 	if (do_set)
 		err = set_caps(dev, set_ctx, set_sz,
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index ec571fd7fcf8..5eae8d734435 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -944,7 +944,9 @@ struct mlx5_ifc_odp_cap_bits {
 
 	struct mlx5_ifc_odp_per_transport_service_cap_bits xrc_odp_caps;
 
-	u8         reserved_at_100[0x700];
+	struct mlx5_ifc_odp_per_transport_service_cap_bits dc_odp_caps;
+
+	u8         reserved_at_100[0x6E0];
 };
 
 struct mlx5_ifc_calc_op {
-- 
2.20.1



* [PATCH rdma-next v1 2/3] IB/mlx5: Expose ODP for DC capabilities to user
  2019-08-04 10:00 [PATCH rdma-next v1 0/3] ODP support for mlx5 DC QPs Leon Romanovsky
  2019-08-04 10:00 ` [PATCH mlx5-next v1 1/3] IB/mlx5: Query ODP capabilities for DC Leon Romanovsky
@ 2019-08-04 10:00 ` Leon Romanovsky
  2019-08-04 10:00 ` [PATCH rdma-next v1 3/3] IB/mlx5: Add page fault handler for DC initiator WQE Leon Romanovsky
  2 siblings, 0 replies; 6+ messages in thread
From: Leon Romanovsky @ 2019-08-04 10:00 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Michael Guralnik, Moni Shoua,
	Saeed Mahameed, linux-netdev

From: Michael Guralnik <michaelgur@mellanox.com>

Return ODP capabilities for DC to user in alloc_context.
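
On the library side, the mlx5 provider would then pick the new field
out of the extended response; a minimal sketch, assuming rdma-core
mirrors the kernel naming and keeps a dc_odp_caps member in its own
context (mctx below is a hypothetical provider context):

	/* resp is the extended struct mlx5_ib_alloc_ucontext_resp
	 * copied back to userspace by ib_copy_to_udata() */
	if (resp.comp_mask & MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_DC_ODP_CAPS)
		mctx->dc_odp_caps = resp.dc_odp_caps;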

Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/main.c | 6 ++++++
 include/uapi/rdma/mlx5-abi.h      | 3 +++
 2 files changed, 9 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 4a3d700cd783..a53e0dc7c17f 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1954,6 +1954,12 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 		resp.response_length += sizeof(resp.dump_fill_mkey);
 	}
 
+	if (field_avail(typeof(resp), dc_odp_caps, udata->outlen)) {
+		resp.dc_odp_caps = dev->dc_odp_caps;
+		resp.comp_mask |= MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_DC_ODP_CAPS;
+		resp.response_length += sizeof(resp.dc_odp_caps);
+	}
+
 	err = ib_copy_to_udata(udata, &resp, resp.response_length);
 	if (err)
 		goto out_mdev;
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index 624f5b53eb1f..7cab806d7fa7 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -98,6 +98,7 @@ struct mlx5_ib_alloc_ucontext_req_v2 {
 enum mlx5_ib_alloc_ucontext_resp_mask {
 	MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_CORE_CLOCK_OFFSET = 1UL << 0,
 	MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_DUMP_FILL_MKEY    = 1UL << 1,
+	MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_DC_ODP_CAPS	   = 1UL << 2,
 };
 
 enum mlx5_user_cmds_supp_uhw {
@@ -147,6 +148,8 @@ struct mlx5_ib_alloc_ucontext_resp {
 	__u32	num_uars_per_page;
 	__u32	num_dyn_bfregs;
 	__u32	dump_fill_mkey;
+	__u32	dc_odp_caps;
+	__u32   reserved;
 };
 
 struct mlx5_ib_alloc_pd_resp {
-- 
2.20.1



* [PATCH rdma-next v1 3/3] IB/mlx5: Add page fault handler for DC initiator WQE
  2019-08-04 10:00 [PATCH rdma-next v1 0/3] ODP support for mlx5 DC QPs Leon Romanovsky
  2019-08-04 10:00 ` [PATCH mlx5-next v1 1/3] IB/mlx5: Query ODP capabilities for DC Leon Romanovsky
  2019-08-04 10:00 ` [PATCH rdma-next v1 2/3] IB/mlx5: Expose ODP for DC capabilities to user Leon Romanovsky
@ 2019-08-04 10:00 ` Leon Romanovsky
  2 siblings, 0 replies; 6+ messages in thread
From: Leon Romanovsky @ 2019-08-04 10:00 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Michael Guralnik, Moni Shoua,
	Saeed Mahameed, linux-netdev

From: Michael Guralnik <michaelgur@mellanox.com>

Parsing DC initiator WQEs upon page fault requires skipping an address
vector segment, as in UD WQEs.
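
That is, a DCI initiator WQE places an address vector between the
control segment and the data segments, exactly as UD does, so the fault
handler must advance past it before resolving the data pointers. A
sketch that mirrors the existing UD handling (extended vs. base AV
sizes assumed from the UD path):

	/* skip the address vector in a UD or DCI initiator WQE */
	struct mlx5_av *av = *wqe;

	if (av->dqp_dct & cpu_to_be32(MLX5_EXTENDED_UD_AV))
		*wqe += sizeof(struct mlx5_av);		/* extended AV */
	else
		*wqe += sizeof(struct mlx5_base_av);	/* base AV */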

Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/odp.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 5e87a5e25574..6f1de5edbe8e 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -1065,6 +1065,12 @@ static int mlx5_ib_mr_initiator_pfault_handler(
 	case IB_QPT_UD:
 		transport_caps = dev->odp_caps.per_transport_caps.ud_odp_caps;
 		break;
+	case IB_QPT_DRIVER:
+		if (qp->qp_sub_type == MLX5_IB_QPT_DCI) {
+			transport_caps = dev->dc_odp_caps;
+			break;
+		}
+		/* fall through */
 	default:
 		mlx5_ib_err(dev, "ODP fault on QP of an unsupported transport 0x%x\n",
 			    qp->ibqp.qp_type);
@@ -1078,7 +1084,8 @@ static int mlx5_ib_mr_initiator_pfault_handler(
 		return -EFAULT;
 	}
 
-	if (qp->ibqp.qp_type == IB_QPT_UD) {
+	if (qp->ibqp.qp_type == IB_QPT_UD ||
+	    qp->qp_sub_type == MLX5_IB_QPT_DCI) {
 		av = *wqe;
 		if (av->dqp_dct & cpu_to_be32(MLX5_EXTENDED_UD_AV))
 			*wqe += sizeof(struct mlx5_av);
-- 
2.20.1



* Re: [PATCH mlx5-next v1 1/3] IB/mlx5: Query ODP capabilities for DC
  2019-08-04 10:00 ` [PATCH mlx5-next v1 1/3] IB/mlx5: Query ODP capabilities for DC Leon Romanovsky
@ 2019-08-05 18:23   ` Saeed Mahameed
  2019-08-06  7:02     ` Leon Romanovsky
  0 siblings, 1 reply; 6+ messages in thread
From: Saeed Mahameed @ 2019-08-05 18:23 UTC (permalink / raw)
  To: Jason Gunthorpe, leon@kernel.org, dledford@redhat.com
  Cc: Michael Guralnik, Moni Shoua, netdev@vger.kernel.org,
	Leon Romanovsky, linux-rdma@vger.kernel.org

On Sun, 2019-08-04 at 13:00 +0300, Leon Romanovsky wrote:
> From: Michael Guralnik <michaelgur@mellanox.com>
> 
> Set current capabilities of ODP for DC to max capabilities and cache
> them in mlx5_ib.
> 
> Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
> Reviewed-by: Moni Shoua <monis@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
>  drivers/infiniband/hw/mlx5/mlx5_ib.h           |  1 +
>  drivers/infiniband/hw/mlx5/odp.c               | 18 
> ++++++++++++++++++
>  drivers/net/ethernet/mellanox/mlx5/core/main.c |  6 ++++++
>  include/linux/mlx5/mlx5_ifc.h                  |  4 +++-

Please avoid cross-tree changes when you can.
Here you can avoid them, so please split this into two staged patches:
mlx5_ifc and core first, then mlx5_ib.


>  4 files changed, 28 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h
> b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> index cb41a7e6255a..f99c71b3c876 100644
> --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
> +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> @@ -967,6 +967,7 @@ struct mlx5_ib_dev {
>  	struct mutex			slow_path_mutex;
>  	int				fill_delay;
>  	struct ib_odp_caps	odp_caps;
> +	uint32_t		dc_odp_caps;
>  	u64			odp_max_size;
>  	struct mlx5_ib_pf_eq	odp_pf_eq;
>  
> diff --git a/drivers/infiniband/hw/mlx5/odp.c
> b/drivers/infiniband/hw/mlx5/odp.c
> index b0c5de39d186..5e87a5e25574 100644
> --- a/drivers/infiniband/hw/mlx5/odp.c
> +++ b/drivers/infiniband/hw/mlx5/odp.c
> @@ -353,6 +353,24 @@ void mlx5_ib_internal_fill_odp_caps(struct
> mlx5_ib_dev *dev)
>  	if (MLX5_CAP_ODP(dev->mdev, xrc_odp_caps.srq_receive))
>  		caps->per_transport_caps.xrc_odp_caps |=
> IB_ODP_SUPPORT_SRQ_RECV;
>  
> +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.send))
> +		dev->dc_odp_caps |= IB_ODP_SUPPORT_SEND;
> +
> +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.receive))
> +		dev->dc_odp_caps |= IB_ODP_SUPPORT_RECV;
> +
> +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.write))
> +		dev->dc_odp_caps |= IB_ODP_SUPPORT_WRITE;
> +
> +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.read))
> +		dev->dc_odp_caps |= IB_ODP_SUPPORT_READ;
> +
> +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.atomic))
> +		dev->dc_odp_caps |= IB_ODP_SUPPORT_ATOMIC;
> +
> +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.srq_receive))
> +		dev->dc_odp_caps |= IB_ODP_SUPPORT_SRQ_RECV;
> +
>  	if (MLX5_CAP_GEN(dev->mdev, fixed_buffer_size) &&
>  	    MLX5_CAP_GEN(dev->mdev, null_mkey) &&
>  	    MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset))
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c
> b/drivers/net/ethernet/mellanox/mlx5/core/main.c
> index b15b27a497fc..3995fc6d4d34 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
> @@ -495,6 +495,12 @@ static int handle_hca_cap_odp(struct
> mlx5_core_dev *dev)
>  	ODP_CAP_SET_MAX(dev, xrc_odp_caps.write);
>  	ODP_CAP_SET_MAX(dev, xrc_odp_caps.read);
>  	ODP_CAP_SET_MAX(dev, xrc_odp_caps.atomic);
> +	ODP_CAP_SET_MAX(dev, dc_odp_caps.srq_receive);
> +	ODP_CAP_SET_MAX(dev, dc_odp_caps.send);
> +	ODP_CAP_SET_MAX(dev, dc_odp_caps.receive);
> +	ODP_CAP_SET_MAX(dev, dc_odp_caps.write);
> +	ODP_CAP_SET_MAX(dev, dc_odp_caps.read);
> +	ODP_CAP_SET_MAX(dev, dc_odp_caps.atomic);
>  
>  	if (do_set)
>  		err = set_caps(dev, set_ctx, set_sz,
> diff --git a/include/linux/mlx5/mlx5_ifc.h
> b/include/linux/mlx5/mlx5_ifc.h
> index ec571fd7fcf8..5eae8d734435 100644
> --- a/include/linux/mlx5/mlx5_ifc.h
> +++ b/include/linux/mlx5/mlx5_ifc.h
> @@ -944,7 +944,9 @@ struct mlx5_ifc_odp_cap_bits {
>  
>  	struct mlx5_ifc_odp_per_transport_service_cap_bits
> xrc_odp_caps;
>  
> -	u8         reserved_at_100[0x700];
> +	struct mlx5_ifc_odp_per_transport_service_cap_bits dc_odp_caps;
> +
> +	u8         reserved_at_100[0x6E0];

reserved_at_100 should move 0x20 bits forward, i.e. become reserved_at_120.
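
That is, the tail of struct mlx5_ifc_odp_cap_bits would presumably end
up as:

	struct mlx5_ifc_odp_per_transport_service_cap_bits dc_odp_caps;

	u8         reserved_at_120[0x6e0];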




* Re: [PATCH mlx5-next v1 1/3] IB/mlx5: Query ODP capabilities for DC
  2019-08-05 18:23   ` Saeed Mahameed
@ 2019-08-06  7:02     ` Leon Romanovsky
  0 siblings, 0 replies; 6+ messages in thread
From: Leon Romanovsky @ 2019-08-06  7:02 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Jason Gunthorpe, dledford@redhat.com, Michael Guralnik,
	Moni Shoua, netdev@vger.kernel.org, linux-rdma@vger.kernel.org

On Mon, Aug 05, 2019 at 06:23:04PM +0000, Saeed Mahameed wrote:
> On Sun, 2019-08-04 at 13:00 +0300, Leon Romanovsky wrote:
> > From: Michael Guralnik <michaelgur@mellanox.com>
> >
> > Set current capabilities of ODP for DC to max capabilities and cache
> > them in mlx5_ib.
> >
> > Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
> > Reviewed-by: Moni Shoua <monis@mellanox.com>
> > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> > ---
> >  drivers/infiniband/hw/mlx5/mlx5_ib.h           |  1 +
> >  drivers/infiniband/hw/mlx5/odp.c               | 18
> > ++++++++++++++++++
> >  drivers/net/ethernet/mellanox/mlx5/core/main.c |  6 ++++++
> >  include/linux/mlx5/mlx5_ifc.h                  |  4 +++-
>
> Please avoid cross-tree changes when you can.
> Here you can avoid them, so please split this into two staged patches:
> mlx5_ifc and core first, then mlx5_ib.
>
>
> >  4 files changed, 28 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h
> > b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> > index cb41a7e6255a..f99c71b3c876 100644
> > --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
> > +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> > @@ -967,6 +967,7 @@ struct mlx5_ib_dev {
> >  	struct mutex			slow_path_mutex;
> >  	int				fill_delay;
> >  	struct ib_odp_caps	odp_caps;
> > +	uint32_t		dc_odp_caps;
> >  	u64			odp_max_size;
> >  	struct mlx5_ib_pf_eq	odp_pf_eq;
> >
> > diff --git a/drivers/infiniband/hw/mlx5/odp.c
> > b/drivers/infiniband/hw/mlx5/odp.c
> > index b0c5de39d186..5e87a5e25574 100644
> > --- a/drivers/infiniband/hw/mlx5/odp.c
> > +++ b/drivers/infiniband/hw/mlx5/odp.c
> > @@ -353,6 +353,24 @@ void mlx5_ib_internal_fill_odp_caps(struct
> > mlx5_ib_dev *dev)
> >  	if (MLX5_CAP_ODP(dev->mdev, xrc_odp_caps.srq_receive))
> >  		caps->per_transport_caps.xrc_odp_caps |=
> > IB_ODP_SUPPORT_SRQ_RECV;
> >
> > +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.send))
> > +		dev->dc_odp_caps |= IB_ODP_SUPPORT_SEND;
> > +
> > +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.receive))
> > +		dev->dc_odp_caps |= IB_ODP_SUPPORT_RECV;
> > +
> > +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.write))
> > +		dev->dc_odp_caps |= IB_ODP_SUPPORT_WRITE;
> > +
> > +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.read))
> > +		dev->dc_odp_caps |= IB_ODP_SUPPORT_READ;
> > +
> > +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.atomic))
> > +		dev->dc_odp_caps |= IB_ODP_SUPPORT_ATOMIC;
> > +
> > +	if (MLX5_CAP_ODP(dev->mdev, dc_odp_caps.srq_receive))
> > +		dev->dc_odp_caps |= IB_ODP_SUPPORT_SRQ_RECV;
> > +
> >  	if (MLX5_CAP_GEN(dev->mdev, fixed_buffer_size) &&
> >  	    MLX5_CAP_GEN(dev->mdev, null_mkey) &&
> >  	    MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset))
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c
> > b/drivers/net/ethernet/mellanox/mlx5/core/main.c
> > index b15b27a497fc..3995fc6d4d34 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
> > @@ -495,6 +495,12 @@ static int handle_hca_cap_odp(struct
> > mlx5_core_dev *dev)
> >  	ODP_CAP_SET_MAX(dev, xrc_odp_caps.write);
> >  	ODP_CAP_SET_MAX(dev, xrc_odp_caps.read);
> >  	ODP_CAP_SET_MAX(dev, xrc_odp_caps.atomic);
> > +	ODP_CAP_SET_MAX(dev, dc_odp_caps.srq_receive);
> > +	ODP_CAP_SET_MAX(dev, dc_odp_caps.send);
> > +	ODP_CAP_SET_MAX(dev, dc_odp_caps.receive);
> > +	ODP_CAP_SET_MAX(dev, dc_odp_caps.write);
> > +	ODP_CAP_SET_MAX(dev, dc_odp_caps.read);
> > +	ODP_CAP_SET_MAX(dev, dc_odp_caps.atomic);
> >
> >  	if (do_set)
> >  		err = set_caps(dev, set_ctx, set_sz,
> > diff --git a/include/linux/mlx5/mlx5_ifc.h
> > b/include/linux/mlx5/mlx5_ifc.h
> > index ec571fd7fcf8..5eae8d734435 100644
> > --- a/include/linux/mlx5/mlx5_ifc.h
> > +++ b/include/linux/mlx5/mlx5_ifc.h
> > @@ -944,7 +944,9 @@ struct mlx5_ifc_odp_cap_bits {
> >
> >  	struct mlx5_ifc_odp_per_transport_service_cap_bits
> > xrc_odp_caps;
> >
> > -	u8         reserved_at_100[0x700];
> > +	struct mlx5_ifc_odp_per_transport_service_cap_bits dc_odp_caps;
> > +
> > +	u8         reserved_at_100[0x6E0];
>
> reserved_at_100 should move 0x20 bits forward, i.e. become reserved_at_120.

Thanks for pointing it out, I'm sending a new version now.

>
>

