* [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib
@ 2021-03-04 12:45 Leon Romanovsky
2021-03-04 12:45 ` [PATCH rdma-next 1/3] RDMA/mlx5: Fix query RoCE port Leon Romanovsky
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Leon Romanovsky @ 2021-03-04 12:45 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb, Mark Zhang,
Shay Drory
From: Leon Romanovsky <leonro@nvidia.com>
Completely independent fixes and improvements to mlx5_ib driver.
Maor Gottlieb (1):
RDMA/mlx5: Fix query RoCE port
Mark Zhang (1):
RDMA/mlx5: Fix mlx5 rates to IB rates map
Shay Drory (1):
RDMA/mlx5: Create ODP EQ only when ODP MR is created
drivers/infiniband/hw/mlx5/main.c | 6 +++---
drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 ++++++
drivers/infiniband/hw/mlx5/mr.c | 3 +++
drivers/infiniband/hw/mlx5/odp.c | 32 +++++++++++++++++++---------
drivers/infiniband/hw/mlx5/qp.c | 15 ++++++++++++-
include/linux/mlx5/driver.h | 2 +-
6 files changed, 50 insertions(+), 15 deletions(-)
--
2.29.2
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH rdma-next 1/3] RDMA/mlx5: Fix query RoCE port
2021-03-04 12:45 [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib Leon Romanovsky
@ 2021-03-04 12:45 ` Leon Romanovsky
2021-03-04 12:45 ` [PATCH rdma-next 2/3] RDMA/mlx5: Create ODP EQ only when ODP MR is created Leon Romanovsky
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2021-03-04 12:45 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: Maor Gottlieb, linux-rdma
From: Maor Gottlieb <maorg@nvidia.com>
mlx5_is_roce_enabled() returns the devlink RoCE init value, therefore
it should be used only while the driver is loading.
Instead, we just need to read the roce_en field.
In addition, rename mlx5_is_roce_enabled() to mlx5_is_roce_init_enabled().
Fixes: 7a58779edd75 ("IB/mlx5: Improve query port for representor port")
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/main.c | 6 +++---
include/linux/mlx5/driver.h | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index b42b5d39a88e..aa3e90594a88 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -499,7 +499,7 @@ static int mlx5_query_port_roce(struct ib_device *device, u32 port_num,
translate_eth_proto_oper(eth_prot_oper, &props->active_speed,
&props->active_width, ext);
- if (!dev->is_rep && mlx5_is_roce_enabled(mdev)) {
+ if (!dev->is_rep && dev->mdev->roce.roce_en) {
u16 qkey_viol_cntr;
props->port_cap_flags |= IB_PORT_CM_SUP;
@@ -4175,7 +4175,7 @@ static int mlx5_ib_roce_init(struct mlx5_ib_dev *dev)
/* Register only for native ports */
err = mlx5_add_netdev_notifier(dev, port_num);
- if (err || dev->is_rep || !mlx5_is_roce_enabled(mdev))
+ if (err || dev->is_rep || !mlx5_is_roce_init_enabled(mdev))
/*
* We don't enable ETH interface for
* 1. IB representors
@@ -4712,7 +4712,7 @@ static int mlx5r_probe(struct auxiliary_device *adev,
dev->mdev = mdev;
dev->num_ports = num_ports;
- if (ll == IB_LINK_LAYER_ETHERNET && !mlx5_is_roce_enabled(mdev))
+ if (ll == IB_LINK_LAYER_ETHERNET && !mlx5_is_roce_init_enabled(mdev))
profile = &raw_eth_profile;
else
profile = &pf_profile;
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 53b89631a1d9..ab07f09f2bad 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -1226,7 +1226,7 @@ enum {
MLX5_TRIGGERED_CMD_COMP = (u64)1 << 32,
};
-static inline bool mlx5_is_roce_enabled(struct mlx5_core_dev *dev)
+static inline bool mlx5_is_roce_init_enabled(struct mlx5_core_dev *dev)
{
struct devlink *devlink = priv_to_devlink(dev);
union devlink_param_value val;
--
2.29.2
* [PATCH rdma-next 2/3] RDMA/mlx5: Create ODP EQ only when ODP MR is created
2021-03-04 12:45 [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib Leon Romanovsky
2021-03-04 12:45 ` [PATCH rdma-next 1/3] RDMA/mlx5: Fix query RoCE port Leon Romanovsky
@ 2021-03-04 12:45 ` Leon Romanovsky
2021-03-12 0:07 ` Jason Gunthorpe
2021-03-04 12:45 ` [PATCH rdma-next 3/3] RDMA/mlx5: Fix mlx5 rates to IB rates map Leon Romanovsky
2021-03-12 0:30 ` [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib Jason Gunthorpe
3 siblings, 1 reply; 9+ messages in thread
From: Leon Romanovsky @ 2021-03-04 12:45 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: Shay Drory, linux-rdma, Maor Gottlieb
From: Shay Drory <shayd@nvidia.com>
There is no need to create the ODP EQ if the user doesn't use ODP MRs.
Hence, create it only when the first ODP MR is created. This EQ will be
destroyed only when the device is unloaded.
This will decrease the number of EQs created per device. For example, if
we create 1K devices (SF/VF/etc.), then we will decrease the number of
EQs by 1K.
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 ++++++
drivers/infiniband/hw/mlx5/mr.c | 3 +++
drivers/infiniband/hw/mlx5/odp.c | 32 +++++++++++++++++++---------
3 files changed, 32 insertions(+), 10 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 544a41fec9cd..a31097538dc7 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1080,6 +1080,7 @@ struct mlx5_ib_dev {
struct mutex slow_path_mutex;
struct ib_odp_caps odp_caps;
u64 odp_max_size;
+ struct mutex odp_eq_mutex;
struct mlx5_ib_pf_eq odp_pf_eq;
struct xarray odp_mkeys;
@@ -1358,6 +1359,7 @@ struct ib_mr *mlx5_ib_reg_dm_mr(struct ib_pd *pd, struct ib_dm *dm,
#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev);
int mlx5_ib_odp_init_one(struct mlx5_ib_dev *ibdev);
+int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq);
void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev);
int __init mlx5_ib_odp_init(void);
void mlx5_ib_odp_cleanup(void);
@@ -1377,6 +1379,11 @@ static inline void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
}
static inline int mlx5_ib_odp_init_one(struct mlx5_ib_dev *ibdev) { return 0; }
+static inline int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev,
+ struct mlx5_ib_pf_eq *eq)
+{
+ return 0;
+}
static inline void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev) {}
static inline int mlx5_ib_odp_init(void) { return 0; }
static inline void mlx5_ib_odp_cleanup(void) {}
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 86ffc7e5ef96..6700286cc05a 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1500,6 +1500,9 @@ static struct ib_mr *create_user_odp_mr(struct ib_pd *pd, u64 start, u64 length,
if (!IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING))
return ERR_PTR(-EOPNOTSUPP);
+ err = mlx5r_odp_create_eq(dev, &dev->odp_pf_eq);
+ if (err)
+ return ERR_PTR(err);
if (!start && length == U64_MAX) {
if (iova != 0)
return ERR_PTR(-EINVAL);
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 3008d1539ad4..a73e22dd4b2a 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -1531,20 +1531,27 @@ enum {
MLX5_IB_NUM_PF_DRAIN = 64,
};
-static int
-mlx5_ib_create_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
+int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
{
struct mlx5_eq_param param = {};
- int err;
+ int err = 0;
+ if (eq->core)
+ return 0;
+
+ mutex_lock(&dev->odp_eq_mutex);
+ if (eq->core)
+ goto unlock;
INIT_WORK(&eq->work, mlx5_ib_eq_pf_action);
spin_lock_init(&eq->lock);
eq->dev = dev;
eq->pool = mempool_create_kmalloc_pool(MLX5_IB_NUM_PF_DRAIN,
sizeof(struct mlx5_pagefault));
- if (!eq->pool)
- return -ENOMEM;
+ if (!eq->pool) {
+ err = -ENOMEM;
+ goto unlock;
+ }
eq->wq = alloc_workqueue("mlx5_ib_page_fault",
WQ_HIGHPRI | WQ_UNBOUND | WQ_MEM_RECLAIM,
@@ -1555,7 +1562,7 @@ mlx5_ib_create_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
}
eq->irq_nb.notifier_call = mlx5_ib_eq_pf_int;
- param = (struct mlx5_eq_param) {
+ param = (struct mlx5_eq_param){
.irq_index = 0,
.nent = MLX5_IB_NUM_PF_EQE,
};
@@ -1571,21 +1578,27 @@ mlx5_ib_create_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
goto err_eq;
}
+ mutex_unlock(&dev->odp_eq_mutex);
return 0;
err_eq:
mlx5_eq_destroy_generic(dev->mdev, eq->core);
err_wq:
+ eq->core = NULL;
destroy_workqueue(eq->wq);
err_mempool:
mempool_destroy(eq->pool);
+unlock:
+ mutex_unlock(&dev->odp_eq_mutex);
return err;
}
static int
-mlx5_ib_destroy_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
+mlx5_ib_odp_destroy_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
{
int err;
+ if (!eq->core)
+ return 0;
mlx5_eq_disable(dev->mdev, eq->core, &eq->irq_nb);
err = mlx5_eq_destroy_generic(dev->mdev, eq->core);
cancel_work_sync(&eq->work);
@@ -1642,8 +1655,7 @@ int mlx5_ib_odp_init_one(struct mlx5_ib_dev *dev)
}
}
- ret = mlx5_ib_create_pf_eq(dev, &dev->odp_pf_eq);
-
+ mutex_init(&dev->odp_eq_mutex);
return ret;
}
@@ -1652,7 +1664,7 @@ void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *dev)
if (!(dev->odp_caps.general_caps & IB_ODP_SUPPORT))
return;
- mlx5_ib_destroy_pf_eq(dev, &dev->odp_pf_eq);
+ mlx5_ib_odp_destroy_eq(dev, &dev->odp_pf_eq);
}
int mlx5_ib_odp_init(void)
--
2.29.2
* [PATCH rdma-next 3/3] RDMA/mlx5: Fix mlx5 rates to IB rates map
2021-03-04 12:45 [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib Leon Romanovsky
2021-03-04 12:45 ` [PATCH rdma-next 1/3] RDMA/mlx5: Fix query RoCE port Leon Romanovsky
2021-03-04 12:45 ` [PATCH rdma-next 2/3] RDMA/mlx5: Create ODP EQ only when ODP MR is created Leon Romanovsky
@ 2021-03-04 12:45 ` Leon Romanovsky
2021-03-12 0:30 ` [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib Jason Gunthorpe
3 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2021-03-04 12:45 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: Mark Zhang, linux-rdma, Maor Gottlieb
From: Mark Zhang <markzhang@nvidia.com>
Correct the map between mlx5 rates and the corresponding IB rates, as
they do not always have a fixed offset between them.
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/qp.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 669716425e83..f56f144dbfd2 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3142,6 +3142,19 @@ enum {
MLX5_PATH_FLAG_COUNTER = 1 << 2,
};
+static int mlx5_to_ib_rate_map(u8 rate)
+{
+ static const int rates[] = { IB_RATE_PORT_CURRENT, IB_RATE_56_GBPS,
+ IB_RATE_25_GBPS, IB_RATE_100_GBPS,
+ IB_RATE_200_GBPS, IB_RATE_50_GBPS,
+ IB_RATE_400_GBPS };
+
+ if (rate < ARRAY_SIZE(rates))
+ return rates[rate];
+
+ return rate - MLX5_STAT_RATE_OFFSET;
+}
+
static int ib_to_mlx5_rate_map(u8 rate)
{
switch (rate) {
@@ -4481,7 +4494,7 @@ static void to_rdma_ah_attr(struct mlx5_ib_dev *ibdev,
rdma_ah_set_path_bits(ah_attr, MLX5_GET(ads, path, mlid));
static_rate = MLX5_GET(ads, path, stat_rate);
- rdma_ah_set_static_rate(ah_attr, static_rate ? static_rate - 5 : 0);
+ rdma_ah_set_static_rate(ah_attr, mlx5_to_ib_rate_map(static_rate));
if (MLX5_GET(ads, path, grh) ||
ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
rdma_ah_set_grh(ah_attr, NULL, MLX5_GET(ads, path, flow_label),
--
2.29.2
* Re: [PATCH rdma-next 2/3] RDMA/mlx5: Create ODP EQ only when ODP MR is created
2021-03-04 12:45 ` [PATCH rdma-next 2/3] RDMA/mlx5: Create ODP EQ only when ODP MR is created Leon Romanovsky
@ 2021-03-12 0:07 ` Jason Gunthorpe
2021-03-12 6:36 ` Leon Romanovsky
0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2021-03-12 0:07 UTC (permalink / raw)
To: Leon Romanovsky; +Cc: Doug Ledford, Shay Drory, linux-rdma, Maor Gottlieb
On Thu, Mar 04, 2021 at 02:45:16PM +0200, Leon Romanovsky wrote:
> -static int
> -mlx5_ib_create_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
> +int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
> {
> struct mlx5_eq_param param = {};
> - int err;
> + int err = 0;
>
> + if (eq->core)
> + return 0;
> +
> + mutex_lock(&dev->odp_eq_mutex);
The above if is locked wrong.
Jason
* Re: [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib
2021-03-04 12:45 [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib Leon Romanovsky
` (2 preceding siblings ...)
2021-03-04 12:45 ` [PATCH rdma-next 3/3] RDMA/mlx5: Fix mlx5 rates to IB rates map Leon Romanovsky
@ 2021-03-12 0:30 ` Jason Gunthorpe
2021-03-12 6:37 ` Leon Romanovsky
3 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2021-03-12 0:30 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Doug Ledford, Leon Romanovsky, linux-rdma, Maor Gottlieb,
Mark Zhang, Shay Drory
On Thu, Mar 04, 2021 at 02:45:14PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Completely independent fixes and improvements to mlx5_ib driver.
>
> Maor Gottlieb (1):
> RDMA/mlx5: Fix query RoCE port
>
> Mark Zhang (1):
> RDMA/mlx5: Fix mlx5 rates to IB rates map
Applied to for-next
> Shay Drory (1):
> RDMA/mlx5: Create ODP EQ only when ODP MR is created
This one needs a fix
Thanks,
Jason
* Re: [PATCH rdma-next 2/3] RDMA/mlx5: Create ODP EQ only when ODP MR is created
2021-03-12 0:07 ` Jason Gunthorpe
@ 2021-03-12 6:36 ` Leon Romanovsky
2021-03-12 13:03 ` Jason Gunthorpe
0 siblings, 1 reply; 9+ messages in thread
From: Leon Romanovsky @ 2021-03-12 6:36 UTC (permalink / raw)
To: Jason Gunthorpe; +Cc: Doug Ledford, Shay Drory, linux-rdma, Maor Gottlieb
On Thu, Mar 11, 2021 at 08:07:39PM -0400, Jason Gunthorpe wrote:
> On Thu, Mar 04, 2021 at 02:45:16PM +0200, Leon Romanovsky wrote:
> > -static int
> > -mlx5_ib_create_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
> > +int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
> > {
> > struct mlx5_eq_param param = {};
> > - int err;
> > + int err = 0;
> >
> > + if (eq->core)
> > + return 0;
> > +
> > + mutex_lock(&dev->odp_eq_mutex);
>
> The above if is locked wrong.
It is not wrong; it is an optimization for the case that will almost
always hold. We create one ODP EQ for the whole life of the device: once
it is created it will always exist and won't be destroyed until the
device is destroyed, so we don't need the lock for that check.
We need the lock only for the first ODP EQ creation.
Thanks
>
> Jason
* Re: [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib
2021-03-12 0:30 ` [PATCH rdma-next 0/3] Batch independent fixes to mlx5_ib Jason Gunthorpe
@ 2021-03-12 6:37 ` Leon Romanovsky
0 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2021-03-12 6:37 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Doug Ledford, linux-rdma, Maor Gottlieb, Mark Zhang, Shay Drory
On Thu, Mar 11, 2021 at 08:30:16PM -0400, Jason Gunthorpe wrote:
> On Thu, Mar 04, 2021 at 02:45:14PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> >
> > Completely independent fixes and improvements to mlx5_ib driver.
> >
> > Maor Gottlieb (1):
> > RDMA/mlx5: Fix query RoCE port
> >
> > Mark Zhang (1):
> > RDMA/mlx5: Fix mlx5 rates to IB rates map
>
> Applied to for-next
>
> > Shay Drory (1):
> > RDMA/mlx5: Create ODP EQ only when ODP MR is created
>
> This one needs a fix
I don't think so. Please reconsider.
>
> Thanks,
> Jason
* Re: [PATCH rdma-next 2/3] RDMA/mlx5: Create ODP EQ only when ODP MR is created
2021-03-12 6:36 ` Leon Romanovsky
@ 2021-03-12 13:03 ` Jason Gunthorpe
0 siblings, 0 replies; 9+ messages in thread
From: Jason Gunthorpe @ 2021-03-12 13:03 UTC (permalink / raw)
To: Leon Romanovsky; +Cc: Doug Ledford, Shay Drory, linux-rdma, Maor Gottlieb
On Fri, Mar 12, 2021 at 08:36:39AM +0200, Leon Romanovsky wrote:
> On Thu, Mar 11, 2021 at 08:07:39PM -0400, Jason Gunthorpe wrote:
> > On Thu, Mar 04, 2021 at 02:45:16PM +0200, Leon Romanovsky wrote:
> > > -static int
> > > -mlx5_ib_create_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
> > > +int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
> > > {
> > > struct mlx5_eq_param param = {};
> > > - int err;
> > > + int err = 0;
> > >
> > > + if (eq->core)
> > > + return 0;
> > > +
> > > + mutex_lock(&dev->odp_eq_mutex);
> >
> > The above if is locked wrong.
>
> It is not wrong, but this is optimization for the case that will be
> always.
It is wrong because there is no release/acquire on eq->core
Jason