[rdma-next 1/8] IB/mlx5: Expose software parsing for Raw Ethernet QP
From: Leon Romanovsky @ 2017-08-17 12:52 UTC
To: Doug Ledford
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky,
Noa Osherovich
From: Noa Osherovich <noaos-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Software parsing (SWP) is a feature that instructs the device to
bypass its internal parser and instead parse packets on the transmit
path according to header offsets supplied for each packet.

Through this feature, the device lets the hardware handle checksum
and LSO according to the location of the IP and TCP/UDP headers.

Enable SW parsing on Raw Ethernet send queues by default when the
firmware supports it, and report these capabilities to user space.
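For illustration only, a user-space consumer could test the new bits
roughly as below. This is a sketch rather than part of the patch: the
enum and struct are re-declared locally (mirroring the mlx5-abi.h
additions further down) so it compiles on its own, and
IB_QPT_RAW_PACKET is assumed to have the value 8 from enum ib_qp_type.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum mlx5_ib_sw_parsing_offloads {
	MLX5_IB_SW_PARSING      = 1 << 0,
	MLX5_IB_SW_PARSING_CSUM = 1 << 1,
	MLX5_IB_SW_PARSING_LSO  = 1 << 2,
};

struct mlx5_ib_sw_parsing_caps {
	uint32_t sw_parsing_offloads;	/* enum mlx5_ib_sw_parsing_offloads */
	uint32_t supported_qpts;	/* bit per enum ib_qp_type value */
};

#define IB_QPT_RAW_PACKET 8	/* assumed to match enum ib_qp_type */

static bool swp_csum_on_raw_qp(const struct mlx5_ib_sw_parsing_caps *caps)
{
	if (!(caps->supported_qpts & (1u << IB_QPT_RAW_PACKET)))
		return false;
	return caps->sw_parsing_offloads & MLX5_IB_SW_PARSING_CSUM;
}

int main(void)
{
	/* Pretend the kernel reported SWP with checksum offload on raw QPs. */
	struct mlx5_ib_sw_parsing_caps caps = {
		.sw_parsing_offloads = MLX5_IB_SW_PARSING | MLX5_IB_SW_PARSING_CSUM,
		.supported_qpts = 1u << IB_QPT_RAW_PACKET,
	};

	printf("SWP checksum on raw QPs: %s\n",
	       swp_csum_on_raw_qp(&caps) ? "yes" : "no");
	return 0;
}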
Signed-off-by: Noa Osherovich <noaos-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Maor Gottlieb <maorg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
drivers/infiniband/hw/mlx5/main.c | 21 +++++++++++++++++++++
drivers/infiniband/hw/mlx5/qp.c | 3 +++
include/uapi/rdma/mlx5-abi.h | 17 +++++++++++++++++
3 files changed, 41 insertions(+)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 894aec4a7c9d..544b0ae5d8dc 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -821,6 +821,27 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
if (field_avail(typeof(resp), reserved, uhw->outlen))
resp.response_length += sizeof(resp.reserved);
+ if (field_avail(typeof(resp), sw_parsing_caps,
+ uhw->outlen)) {
+ resp.response_length += sizeof(resp.sw_parsing_caps);
+ if (MLX5_CAP_ETH(mdev, swp)) {
+ resp.sw_parsing_caps.sw_parsing_offloads |=
+ MLX5_IB_SW_PARSING;
+
+ if (MLX5_CAP_ETH(mdev, swp_csum))
+ resp.sw_parsing_caps.sw_parsing_offloads |=
+ MLX5_IB_SW_PARSING_CSUM;
+
+ if (MLX5_CAP_ETH(mdev, swp_lso))
+ resp.sw_parsing_caps.sw_parsing_offloads |=
+ MLX5_IB_SW_PARSING_LSO;
+
+ if (resp.sw_parsing_caps.sw_parsing_offloads)
+ resp.sw_parsing_caps.supported_qpts =
+ BIT(IB_QPT_RAW_PACKET);
+ }
+ }
+
if (uhw->outlen) {
err = ib_copy_to_udata(uhw, &resp, resp.response_length);
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index ebbfd721661d..e58cc42a77a2 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1083,6 +1083,9 @@ static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
MLX5_SET(sqc, sqc, cqn, MLX5_GET(qpc, qpc, cqn_snd));
MLX5_SET(sqc, sqc, tis_lst_sz, 1);
MLX5_SET(sqc, sqc, tis_num_0, sq->tisn);
+ if (MLX5_CAP_GEN(dev->mdev, eth_net_offloads) &&
+ MLX5_CAP_ETH(dev->mdev, swp))
+ MLX5_SET(sqc, sqc, allow_swp, 1);
wq = MLX5_ADDR_OF(sqc, sqc, wq);
MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_CYCLIC);
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index 0b3d30837a9f..64d398e662cd 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -168,6 +168,22 @@ struct mlx5_packet_pacing_caps {
__u32 reserved;
};
+enum mlx5_ib_sw_parsing_offloads {
+ MLX5_IB_SW_PARSING = 1 << 0,
+ MLX5_IB_SW_PARSING_CSUM = 1 << 1,
+ MLX5_IB_SW_PARSING_LSO = 1 << 2,
+};
+
+struct mlx5_ib_sw_parsing_caps {
+ __u32 sw_parsing_offloads; /* enum mlx5_ib_sw_parsing_offloads */
+
+ /* Corresponding bit will be set if qp type from
+ * 'enum ib_qp_type' is supported, e.g.
+ * supported_qpts |= 1 << IB_QPT_RAW_PACKET
+ */
+ __u32 supported_qpts;
+};
+
struct mlx5_ib_query_device_resp {
__u32 comp_mask;
__u32 response_length;
@@ -177,6 +193,7 @@ struct mlx5_ib_query_device_resp {
struct mlx5_packet_pacing_caps packet_pacing_caps;
__u32 mlx5_ib_support_multi_pkt_send_wqes;
__u32 reserved;
+ struct mlx5_ib_sw_parsing_caps sw_parsing_caps;
};
struct mlx5_ib_create_cq {
--
2.14.1
[rdma-next 2/8] IB/mlx5: Enable UMR for MRs created with reg_create
From: Leon Romanovsky @ 2017-08-17 12:52 UTC
To: Doug Ledford
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Ilya Lesokhin
From: Ilya Lesokhin <ilyal-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
This patch is the first step in decoupling UMR usage and allocation
from the MR cache. The only functional change in this patch is
enabling UMR for MRs created with reg_create.

This change fixes a bug where ODP memory regions that were not
allocated from the MR cache did not have UMR enabled.
Signed-off-by: Ilya Lesokhin <ilyal-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
drivers/infiniband/hw/mlx5/mlx5_ib.h | 2 +-
drivers/infiniband/hw/mlx5/mr.c | 33 ++++++++++++++-------------------
include/linux/mlx5/driver.h | 2 +-
3 files changed, 16 insertions(+), 21 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 76a731f39d2b..189e80cd6b2f 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -508,7 +508,7 @@ struct mlx5_ib_mr {
struct mlx5_shared_mr_info *smr_info;
struct list_head list;
int order;
- int umred;
+ bool allocated_from_cache;
int npages;
struct mlx5_ib_dev *dev;
u32 out[MLX5_ST_SZ_DW(create_mkey_out)];
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index a0eb2f96179a..bc87016021e3 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -48,8 +48,7 @@ enum {
#define MLX5_UMR_ALIGN 2048
static int clean_mr(struct mlx5_ib_mr *mr);
-static int max_umr_order(struct mlx5_ib_dev *dev);
-static int use_umr(struct mlx5_ib_dev *dev, int order);
+static int mr_cache_max_order(struct mlx5_ib_dev *dev);
static int unreg_umr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
static int destroy_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
@@ -184,7 +183,7 @@ static int add_keys(struct mlx5_ib_dev *dev, int c, int num)
break;
}
mr->order = ent->order;
- mr->umred = 1;
+ mr->allocated_from_cache = 1;
mr->dev = dev;
MLX5_SET(mkc, mkc, free, 1);
@@ -497,7 +496,7 @@ static struct mlx5_ib_mr *alloc_cached_mr(struct mlx5_ib_dev *dev, int order)
int i;
c = order2idx(dev, order);
- last_umr_cache_entry = order2idx(dev, max_umr_order(dev));
+ last_umr_cache_entry = order2idx(dev, mr_cache_max_order(dev));
if (c < 0 || c > last_umr_cache_entry) {
mlx5_ib_warn(dev, "order %d, cache index %d\n", order, c);
return NULL;
@@ -677,12 +676,12 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
queue_work(cache->wq, &ent->work);
- if (i > MAX_UMR_CACHE_ENTRY) {
+ if (i > MR_CACHE_LAST_STD_ENTRY) {
mlx5_odp_init_mr_cache_entry(ent);
continue;
}
- if (!use_umr(dev, ent->order))
+ if (ent->order > mr_cache_max_order(dev))
continue;
ent->page = PAGE_SHIFT;
@@ -819,18 +818,13 @@ static int get_octo_len(u64 addr, u64 len, int page_size)
return (npages + 1) / 2;
}
-static int max_umr_order(struct mlx5_ib_dev *dev)
+static int mr_cache_max_order(struct mlx5_ib_dev *dev)
{
if (MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset))
- return MAX_UMR_CACHE_ENTRY + 2;
+ return MR_CACHE_LAST_STD_ENTRY + 2;
return MLX5_MAX_UMR_SHIFT;
}
-static int use_umr(struct mlx5_ib_dev *dev, int order)
-{
- return order <= max_umr_order(dev);
-}
-
static int mr_umem_get(struct ib_pd *pd, u64 start, u64 length,
int access_flags, struct ib_umem **umem,
int *npages, int *page_shift, int *ncont,
@@ -1149,6 +1143,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
MLX5_SET(mkc, mkc, rr, !!(access_flags & IB_ACCESS_REMOTE_READ));
MLX5_SET(mkc, mkc, lw, !!(access_flags & IB_ACCESS_LOCAL_WRITE));
MLX5_SET(mkc, mkc, lr, 1);
+ MLX5_SET(mkc, mkc, umr_en, 1);
MLX5_SET64(mkc, mkc, start_addr, virt_addr);
MLX5_SET64(mkc, mkc, len, length);
@@ -1231,7 +1226,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
if (err < 0)
return ERR_PTR(err);
- if (use_umr(dev, order)) {
+ if (order <= mr_cache_max_order(dev)) {
mr = reg_umr(pd, umem, virt_addr, length, ncont, page_shift,
order, access_flags);
if (PTR_ERR(mr) == -EAGAIN) {
@@ -1355,7 +1350,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
/*
* UMR can't be used - MKey needs to be replaced.
*/
- if (mr->umred) {
+ if (mr->allocated_from_cache) {
err = unreg_umr(dev, mr);
if (err)
mlx5_ib_warn(dev, "Failed to unregister MR\n");
@@ -1373,7 +1368,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
if (IS_ERR(mr))
return PTR_ERR(mr);
- mr->umred = 0;
+ mr->allocated_from_cache = 0;
} else {
/*
* Send a UMR WQE
@@ -1461,7 +1456,7 @@ mlx5_free_priv_descs(struct mlx5_ib_mr *mr)
static int clean_mr(struct mlx5_ib_mr *mr)
{
struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
- int umred = mr->umred;
+ int allocated_from_cache = mr->allocated_from_cache;
int err;
if (mr->sig) {
@@ -1479,7 +1474,7 @@ static int clean_mr(struct mlx5_ib_mr *mr)
mlx5_free_priv_descs(mr);
- if (!umred) {
+ if (!allocated_from_cache) {
err = destroy_mkey(dev, mr);
if (err) {
mlx5_ib_warn(dev, "failed to destroy mkey 0x%x (%d)\n",
@@ -1490,7 +1485,7 @@ static int clean_mr(struct mlx5_ib_mr *mr)
mlx5_mr_cache_free(dev, mr);
}
- if (!umred)
+ if (!allocated_from_cache)
kfree(mr);
return 0;
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index f17e51a4ef94..1640de3200b8 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -1099,7 +1099,7 @@ enum {
};
enum {
- MAX_UMR_CACHE_ENTRY = 20,
+ MR_CACHE_LAST_STD_ENTRY = 20,
MLX5_IMR_MTT_CACHE_ENTRY,
MLX5_IMR_KSM_CACHE_ENTRY,
MAX_MR_CACHE_ENTRIES
--
2.14.1
[rdma-next 3/8] IB/mlx5: Decouple MR allocation and population flows
From: Leon Romanovsky @ 2017-08-17 12:52 UTC
To: Doug Ledford
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Ilya Lesokhin
From: Ilya Lesokhin <ilyal-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
mlx5-compatible devices have two ways of populating the MTT table of
an MKEY: a FW command or a UMR WQE. A UMR is much faster, so it
should be used whenever possible. Unfortunately, today the code uses
UMR only if the MKEY was allocated from the MR cache.

Fix the code to use UMR even for MKEYs that were allocated with a FW
command.
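A rough, self-contained model of the resulting path selection
(function and type names below are ours, not the driver's, and the
cache-empty EAGAIN fallback is left out):

#include <stdbool.h>
#include <stdio.h>

enum reg_path {
	REG_FROM_CACHE_THEN_UMR,	/* cached mkey, MTTs written by UMR WQE */
	REG_FW_CREATE_THEN_UMR,		/* FW-created mkey, MTTs written by UMR WQE */
	REG_FW_CREATE_AND_POPULATE,	/* FW-created mkey, MTTs in the FW command */
};

static enum reg_path choose_reg_path(int order, int cache_max_order,
				     bool umr_ext_translation)
{
	if (order <= cache_max_order)
		return REG_FROM_CACHE_THEN_UMR;
	if (!umr_ext_translation)
		return REG_FW_CREATE_AND_POPULATE;
	return REG_FW_CREATE_THEN_UMR;
}

int main(void)
{
	printf("%d %d %d\n",
	       choose_reg_path(10, 20, true),	/* small MR: cache + UMR */
	       choose_reg_path(25, 20, true),	/* large MR: FW create + UMR */
	       choose_reg_path(25, 20, false));	/* no extended UMR: FW populate */
	return 0;
}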
Signed-off-by: Ilya Lesokhin <ilyal-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
drivers/infiniband/hw/mlx5/mr.c | 75 ++++++++++++++++++++++++-----------------
1 file changed, 45 insertions(+), 30 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index bc87016021e3..aa6f71570b77 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -898,7 +898,8 @@ static int mlx5_ib_post_send_wait(struct mlx5_ib_dev *dev,
return err;
}
-static struct mlx5_ib_mr *reg_umr(struct ib_pd *pd, struct ib_umem *umem,
+static struct mlx5_ib_mr *alloc_mr_from_cache(
+ struct ib_pd *pd, struct ib_umem *umem,
u64 virt_addr, u64 len, int npages,
int page_shift, int order, int access_flags)
{
@@ -930,16 +931,6 @@ static struct mlx5_ib_mr *reg_umr(struct ib_pd *pd, struct ib_umem *umem,
mr->mmkey.size = len;
mr->mmkey.pd = to_mpd(pd)->pdn;
- err = mlx5_ib_update_xlt(mr, 0, npages, page_shift,
- MLX5_IB_UPD_XLT_ENABLE);
-
- if (err) {
- mlx5_mr_cache_free(dev, mr);
- return ERR_PTR(err);
- }
-
- mr->live = 1;
-
return mr;
}
@@ -1105,7 +1096,8 @@ int mlx5_ib_update_xlt(struct mlx5_ib_mr *mr, u64 idx, int npages,
static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
u64 virt_addr, u64 length,
struct ib_umem *umem, int npages,
- int page_shift, int access_flags)
+ int page_shift, int access_flags,
+ bool populate)
{
struct mlx5_ib_dev *dev = to_mdev(pd->device);
struct mlx5_ib_mr *mr;
@@ -1120,15 +1112,19 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
if (!mr)
return ERR_PTR(-ENOMEM);
- inlen = MLX5_ST_SZ_BYTES(create_mkey_in) +
- sizeof(*pas) * ((npages + 1) / 2) * 2;
+ mr->ibmr.pd = pd;
+ mr->access_flags = access_flags;
+
+ inlen = MLX5_ST_SZ_BYTES(create_mkey_in);
+ if (populate)
+ inlen += sizeof(*pas) * roundup(npages, 2);
in = kvzalloc(inlen, GFP_KERNEL);
if (!in) {
err = -ENOMEM;
goto err_1;
}
pas = (__be64 *)MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt);
- if (!(access_flags & IB_ACCESS_ON_DEMAND))
+ if (populate && !(access_flags & IB_ACCESS_ON_DEMAND))
mlx5_ib_populate_pas(dev, umem, page_shift, pas,
pg_cap ? MLX5_IB_MTT_PRESENT : 0);
@@ -1137,6 +1133,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
MLX5_SET(create_mkey_in, in, pg_access, !!(pg_cap));
mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
+ MLX5_SET(mkc, mkc, free, !populate);
MLX5_SET(mkc, mkc, access_mode, MLX5_MKC_ACCESS_MODE_MTT);
MLX5_SET(mkc, mkc, a, !!(access_flags & IB_ACCESS_REMOTE_ATOMIC));
MLX5_SET(mkc, mkc, rw, !!(access_flags & IB_ACCESS_REMOTE_WRITE));
@@ -1153,8 +1150,10 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
get_octo_len(virt_addr, length, 1 << page_shift));
MLX5_SET(mkc, mkc, log_page_size, page_shift);
MLX5_SET(mkc, mkc, qpn, 0xffffff);
- MLX5_SET(create_mkey_in, in, translations_octword_actual_size,
- get_octo_len(virt_addr, length, 1 << page_shift));
+ if (populate) {
+ MLX5_SET(create_mkey_in, in, translations_octword_actual_size,
+ get_octo_len(virt_addr, length, 1 << page_shift));
+ }
err = mlx5_core_create_mkey(dev->mdev, &mr->mmkey, in, inlen);
if (err) {
@@ -1163,9 +1162,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
}
mr->mmkey.type = MLX5_MKEY_MR;
mr->desc_size = sizeof(struct mlx5_mtt);
- mr->umem = umem;
mr->dev = dev;
- mr->live = 1;
kvfree(in);
mlx5_ib_dbg(dev, "mkey = 0x%x\n", mr->mmkey.key);
@@ -1205,6 +1202,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
int ncont;
int order;
int err;
+ bool use_umr = true;
mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n",
start, virt_addr, length, access_flags);
@@ -1223,27 +1221,29 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
err = mr_umem_get(pd, start, length, access_flags, &umem, &npages,
&page_shift, &ncont, &order);
- if (err < 0)
+ if (err < 0)
return ERR_PTR(err);
if (order <= mr_cache_max_order(dev)) {
- mr = reg_umr(pd, umem, virt_addr, length, ncont, page_shift,
- order, access_flags);
+ mr = alloc_mr_from_cache(pd, umem, virt_addr, length, ncont,
+ page_shift, order, access_flags);
if (PTR_ERR(mr) == -EAGAIN) {
mlx5_ib_dbg(dev, "cache empty for order %d", order);
mr = NULL;
}
- } else if (access_flags & IB_ACCESS_ON_DEMAND &&
- !MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset)) {
- err = -EINVAL;
- pr_err("Got MR registration for ODP MR > 512MB, not supported for Connect-IB");
- goto error;
+ } else if (!MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset)) {
+ if (access_flags & IB_ACCESS_ON_DEMAND) {
+ err = -EINVAL;
+ pr_err("Got MR registration for ODP MR > 512MB, not supported for Connect-IB");
+ goto error;
+ }
+ use_umr = false;
}
if (!mr) {
mutex_lock(&dev->slow_path_mutex);
mr = reg_create(NULL, pd, virt_addr, length, umem, ncont,
- page_shift, access_flags);
+ page_shift, access_flags, !use_umr);
mutex_unlock(&dev->slow_path_mutex);
}
@@ -1261,8 +1261,22 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
update_odp_mr(mr);
#endif
- return &mr->ibmr;
+ if (use_umr) {
+ int update_xlt_flags = MLX5_IB_UPD_XLT_ENABLE;
+
+ if (access_flags & IB_ACCESS_ON_DEMAND)
+ update_xlt_flags |= MLX5_IB_UPD_XLT_ZAP;
+ err = mlx5_ib_update_xlt(mr, 0, ncont, page_shift,
+ update_xlt_flags);
+ if (err) {
+ mlx5_ib_dereg_mr(&mr->ibmr);
+ return ERR_PTR(err);
+ }
+ }
+
+ mr->live = 1;
+ return &mr->ibmr;
error:
ib_umem_release(umem);
return ERR_PTR(err);
@@ -1363,12 +1377,13 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
return err;
mr = reg_create(ib_mr, pd, addr, len, mr->umem, ncont,
- page_shift, access_flags);
+ page_shift, access_flags, true);
if (IS_ERR(mr))
return PTR_ERR(mr);
mr->allocated_from_cache = 0;
+ mr->live = 1;
} else {
/*
* Send a UMR WQE
--
2.14.1
[rdma-next 4/8] IB/mlx5: Fix memory leak in clean_mr error path
From: Leon Romanovsky @ 2017-08-17 12:52 UTC
To: Doug Ledford
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Kamal Heib
From: Kamal Heib <kamalh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
In the clean_mr() error path, 'mr' should also be freed.
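The fix follows a familiar pattern: copy whatever is still needed for
error reporting out of the object, free the object unconditionally,
and only then report. A generic, standalone illustration (not the
driver code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	uint32_t key;
};

static int destroy_obj(struct obj *o)
{
	(void)o;
	return -1;		/* pretend destruction failed */
}

static int clean_obj(struct obj *o)
{
	uint32_t key = o->key;	/* saved before the object goes away */
	int err = destroy_obj(o);

	free(o);		/* freed on both the success and the error path */
	if (err) {
		fprintf(stderr, "failed to destroy key 0x%x (%d)\n", key, err);
		return err;
	}
	return 0;
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	if (!o)
		return 1;
	o->key = 0x1234;
	return clean_obj(o) ? 1 : 0;
}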
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Kamal Heib <kamalh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
drivers/infiniband/hw/mlx5/mr.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index aa6f71570b77..8cf53eb13148 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1490,19 +1490,19 @@ static int clean_mr(struct mlx5_ib_mr *mr)
mlx5_free_priv_descs(mr);
if (!allocated_from_cache) {
+ u32 key = mr->mmkey.key;
+
err = destroy_mkey(dev, mr);
+ kfree(mr);
if (err) {
mlx5_ib_warn(dev, "failed to destroy mkey 0x%x (%d)\n",
- mr->mmkey.key, err);
+ key, err);
return err;
}
} else {
mlx5_mr_cache_free(dev, mr);
}
- if (!allocated_from_cache)
- kfree(mr);
-
return 0;
}
--
2.14.1
[rdma-next 5/8] IB/mlx5: Fix integer overflow when page_shift == 31
From: Leon Romanovsky @ 2017-08-17 12:52 UTC
To: Doug Ledford
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Ilya Lesokhin
From: Ilya Lesokhin <ilyal-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Fix a bug where MR registration fails when mlx5_ib_cont_pages
indicates that the MR can be mapped using 2GB pages (page_shift == 31).
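The overflow is easy to see in isolation: a 32-bit int cannot hold a
2GB page size, so anything derived from '1 << page_shift' breaks
exactly at page_shift == 31, while shifting in 64 bits (or passing
the shift itself, as the patch does) stays correct. A standalone
sketch, with get_octo_len() re-implemented here only for illustration:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the fixed helper: take the shift, compute the size in 64 bits. */
static int get_octo_len(uint64_t addr, uint64_t len, int page_shift)
{
	uint64_t page_size = 1ULL << page_shift;
	uint64_t offset = addr & (page_size - 1);
	uint64_t npages = (len + offset + page_size - 1) >> page_shift;

	return (int)((npages + 1) / 2);
}

int main(void)
{
	int page_shift = 31;
	int bad_page_size = (int)(1U << page_shift);	/* no longer fits in int */
	int64_t good_page_size = 1LL << page_shift;

	printf("page size as int: %d, in 64 bits: %" PRId64 "\n",
	       bad_page_size, good_page_size);
	printf("octo len for an 8GB region at shift 31: %d\n",
	       get_octo_len(0, 8ULL << 30, page_shift));
	return 0;
}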
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Ilya Lesokhin <ilyal-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
drivers/infiniband/hw/mlx5/mr.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 8cf53eb13148..0e2789d9bb4d 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -808,13 +808,14 @@ struct ib_mr *mlx5_ib_get_dma_mr(struct ib_pd *pd, int acc)
return ERR_PTR(err);
}
-static int get_octo_len(u64 addr, u64 len, int page_size)
+static int get_octo_len(u64 addr, u64 len, int page_shift)
{
+ u64 page_size = 1ULL << page_shift;
u64 offset;
int npages;
offset = addr & (page_size - 1);
- npages = ALIGN(len + offset, page_size) >> ilog2(page_size);
+ npages = ALIGN(len + offset, page_size) >> page_shift;
return (npages + 1) / 2;
}
@@ -1147,12 +1148,12 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
MLX5_SET(mkc, mkc, pd, to_mpd(pd)->pdn);
MLX5_SET(mkc, mkc, bsf_octword_size, 0);
MLX5_SET(mkc, mkc, translations_octword_size,
- get_octo_len(virt_addr, length, 1 << page_shift));
+ get_octo_len(virt_addr, length, page_shift));
MLX5_SET(mkc, mkc, log_page_size, page_shift);
MLX5_SET(mkc, mkc, qpn, 0xffffff);
if (populate) {
MLX5_SET(create_mkey_in, in, translations_octword_actual_size,
- get_octo_len(virt_addr, length, 1 << page_shift));
+ get_octo_len(virt_addr, length, page_shift));
}
err = mlx5_core_create_mkey(dev->mdev, &mr->mmkey, in, inlen);
--
2.14.1
[rdma-next 6/8] IB/mlx5: Add support for multi underlay QP
From: Leon Romanovsky @ 2017-08-17 12:52 UTC
To: Doug Ledford
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Yishai Hadas
From: Yishai Hadas <yishaih-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Set the underlay QPN as part of the flow rule when applicable.

There is one root flow table in the NIC RX namespace, and all the
underlay QPs steer traffic to this flow table. To prevent a QP from
receiving traffic that is not targeted at its underlay QP, the
underlay QP number must be part of the steering match.

Note:
When multicast traffic is sent, the QPN filtering is done by the
firmware at an earlier stage. Adding the QPN match to the flow table
entry would be wrong, since by that time the destination QPN field
holds the multicast address (e.g. all 0xFF) and would never match.
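Boiled down to a self-contained model (the names below are ours; the
real code fills mlx5 match-parameter buffers instead of returning a
flag):

#include <stdbool.h>
#include <stdio.h>

/* The bth_dst_qp match is added only for non-multicast rules on an
 * underlay QP, and only when the device can match on that field. */
static bool add_underlay_qpn_match(unsigned int underlay_qpn,
				   bool dev_can_match_bth_dst_qp,
				   bool rule_is_multicast_only)
{
	if (rule_is_multicast_only)
		return false;
	return underlay_qpn && dev_can_match_bth_dst_qp;
}

int main(void)
{
	printf("%d %d %d\n",
	       add_underlay_qpn_match(0x123, true, false),	/* 1: match added */
	       add_underlay_qpn_match(0x123, true, true),	/* 0: multicast */
	       add_underlay_qpn_match(0, true, false));		/* 0: no underlay QP */
	return 0;
}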
Signed-off-by: Yishai Hadas <yishaih-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
drivers/infiniband/hw/mlx5/main.c | 49 +++++++++++++++++++++++++++++++++------
include/linux/mlx5/mlx5_ifc.h | 8 +++++--
2 files changed, 48 insertions(+), 9 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 544b0ae5d8dc..438663356982 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2134,7 +2134,7 @@ static int parse_flow_attr(struct mlx5_core_dev *mdev, u32 *match_c,
* it won't fall into the multicast flow steering table and this rule
* could steal other multicast packets.
*/
-static bool flow_is_multicast_only(struct ib_flow_attr *ib_attr)
+static bool flow_is_multicast_only(const struct ib_flow_attr *ib_attr)
{
union ib_flow_spec *flow_spec;
@@ -2346,10 +2346,31 @@ static struct mlx5_ib_flow_prio *get_flow_table(struct mlx5_ib_dev *dev,
return err ? ERR_PTR(err) : prio;
}
-static struct mlx5_ib_flow_handler *create_flow_rule(struct mlx5_ib_dev *dev,
- struct mlx5_ib_flow_prio *ft_prio,
- const struct ib_flow_attr *flow_attr,
- struct mlx5_flow_destination *dst)
+static void set_underlay_qp(struct mlx5_ib_dev *dev,
+ struct mlx5_flow_spec *spec,
+ u32 underlay_qpn)
+{
+ void *misc_params_c = MLX5_ADDR_OF(fte_match_param,
+ spec->match_criteria,
+ misc_parameters);
+ void *misc_params_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ misc_parameters);
+
+ if (underlay_qpn &&
+ MLX5_CAP_FLOWTABLE_NIC_RX(dev->mdev,
+ ft_field_support.bth_dst_qp)) {
+ MLX5_SET(fte_match_set_misc,
+ misc_params_v, bth_dst_qp, underlay_qpn);
+ MLX5_SET(fte_match_set_misc,
+ misc_params_c, bth_dst_qp, 0xffffff);
+ }
+}
+
+static struct mlx5_ib_flow_handler *_create_flow_rule(struct mlx5_ib_dev *dev,
+ struct mlx5_ib_flow_prio *ft_prio,
+ const struct ib_flow_attr *flow_attr,
+ struct mlx5_flow_destination *dst,
+ u32 underlay_qpn)
{
struct mlx5_flow_table *ft = ft_prio->flow_table;
struct mlx5_ib_flow_handler *handler;
@@ -2385,6 +2406,9 @@ static struct mlx5_ib_flow_handler *create_flow_rule(struct mlx5_ib_dev *dev,
ib_flow += ((union ib_flow_spec *)ib_flow)->size;
}
+ if (!flow_is_multicast_only(flow_attr))
+ set_underlay_qp(dev, spec, underlay_qpn);
+
spec->match_criteria_enable = get_match_criteria_enable(spec->match_criteria);
if (is_drop) {
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
@@ -2424,6 +2448,14 @@ static struct mlx5_ib_flow_handler *create_flow_rule(struct mlx5_ib_dev *dev,
return err ? ERR_PTR(err) : handler;
}
+static struct mlx5_ib_flow_handler *create_flow_rule(struct mlx5_ib_dev *dev,
+ struct mlx5_ib_flow_prio *ft_prio,
+ const struct ib_flow_attr *flow_attr,
+ struct mlx5_flow_destination *dst)
+{
+ return _create_flow_rule(dev, ft_prio, flow_attr, dst, 0);
+}
+
static struct mlx5_ib_flow_handler *create_dont_trap_rule(struct mlx5_ib_dev *dev,
struct mlx5_ib_flow_prio *ft_prio,
struct ib_flow_attr *flow_attr,
@@ -2560,6 +2592,7 @@ static struct ib_flow *mlx5_ib_create_flow(struct ib_qp *qp,
struct mlx5_ib_flow_prio *ft_prio_tx = NULL;
struct mlx5_ib_flow_prio *ft_prio;
int err;
+ int underlay_qpn;
if (flow_attr->priority > MLX5_IB_FLOW_LAST_PRIO)
return ERR_PTR(-ENOMEM);
@@ -2600,8 +2633,10 @@ static struct ib_flow *mlx5_ib_create_flow(struct ib_qp *qp,
handler = create_dont_trap_rule(dev, ft_prio,
flow_attr, dst);
} else {
- handler = create_flow_rule(dev, ft_prio, flow_attr,
- dst);
+ underlay_qpn = (mqp->flags & MLX5_IB_QP_UNDERLAY) ?
+ mqp->underlay_qpn : 0;
+ handler = _create_flow_rule(dev, ft_prio, flow_attr,
+ dst, underlay_qpn);
}
} else if (flow_attr->type == IB_FLOW_ATTR_ALL_DEFAULT ||
flow_attr->type == IB_FLOW_ATTR_MC_DEFAULT) {
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 8d62c108dd80..640313da25c7 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -295,8 +295,10 @@ struct mlx5_ifc_flow_table_fields_supported_bits {
u8 inner_tcp_dport[0x1];
u8 inner_tcp_flags[0x1];
u8 reserved_at_37[0x9];
+ u8 reserved_at_40[0x1a];
+ u8 bth_dst_qp[0x1];
- u8 reserved_at_40[0x40];
+ u8 reserved_at_5b[0x25];
};
struct mlx5_ifc_flow_table_prop_layout_bits {
@@ -432,7 +434,9 @@ struct mlx5_ifc_fte_match_set_misc_bits {
u8 reserved_at_100[0xc];
u8 inner_ipv6_flow_label[0x14];
- u8 reserved_at_120[0xe0];
+ u8 reserved_at_120[0x28];
+ u8 bth_dst_qp[0x18];
+ u8 reserved_at_160[0xa0];
};
struct mlx5_ifc_cmd_pas_bits {
--
2.14.1
[rdma-next 7/8] IB/mlx5: Allow posting multi packet send WQEs if hardware supports
From: Leon Romanovsky @ 2017-08-17 12:52 UTC
To: Doug Ledford
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Bodong Wang
From: Bodong Wang <bodong-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Set the SQ context field that allows posting multi packet send WQEs
when the hardware supports this feature. This does not by itself make
send WQEs multi packet; a WQE is treated as such only if it was
prepared in the multi packet send WQE format.

User space shall use the MLX5_IB_ALLOW_MPW flag to check whether the
hardware supports MPW and allows MPW in the SQ context.
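A user-space consumer is expected to test the bit rather than treat
the field as a boolean. A self-contained sketch (the enum is
re-declared locally, mirroring the mlx5-abi.h change below):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum mlx5_ib_mpw_caps {
	MPW_RESERVED      = 1 << 0,
	MLX5_IB_ALLOW_MPW = 1 << 1,
};

static bool mpw_allowed(uint32_t mlx5_ib_support_multi_pkt_send_wqes)
{
	return mlx5_ib_support_multi_pkt_send_wqes & MLX5_IB_ALLOW_MPW;
}

int main(void)
{
	printf("MPW allowed: %s\n",
	       mpw_allowed(MLX5_IB_ALLOW_MPW) ? "yes" : "no");
	return 0;
}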
Signed-off-by: Bodong Wang <bodong-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Daniel Jurgens <danielj-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
drivers/infiniband/hw/mlx5/main.c | 5 +++--
drivers/infiniband/hw/mlx5/qp.c | 2 ++
include/linux/mlx5/mlx5_ifc.h | 2 +-
include/uapi/rdma/mlx5-abi.h | 5 +++++
4 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 438663356982..aef1d8a0f4e4 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -812,8 +812,9 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
if (field_avail(typeof(resp), mlx5_ib_support_multi_pkt_send_wqes,
uhw->outlen)) {
- resp.mlx5_ib_support_multi_pkt_send_wqes =
- MLX5_CAP_ETH(mdev, multi_pkt_send_wqe);
+ if (MLX5_CAP_ETH(mdev, multi_pkt_send_wqe))
+ resp.mlx5_ib_support_multi_pkt_send_wqes =
+ MLX5_IB_ALLOW_MPW;
resp.response_length +=
sizeof(resp.mlx5_ib_support_multi_pkt_send_wqes);
}
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index e58cc42a77a2..ecfa27920554 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1078,6 +1078,8 @@ static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
sqc = MLX5_ADDR_OF(create_sq_in, in, ctx);
MLX5_SET(sqc, sqc, flush_in_error_en, 1);
+ if (MLX5_CAP_ETH(dev->mdev, multi_pkt_send_wqe))
+ MLX5_SET(sqc, sqc, allow_multi_pkt_send_wqe, 1);
MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RST);
MLX5_SET(sqc, sqc, user_index, MLX5_GET(qpc, qpc, user_index));
MLX5_SET(sqc, sqc, cqn, MLX5_GET(qpc, qpc, cqn_snd));
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 640313da25c7..4334ce9fc803 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -2448,7 +2448,7 @@ struct mlx5_ifc_sqc_bits {
u8 cd_master[0x1];
u8 fre[0x1];
u8 flush_in_error_en[0x1];
- u8 reserved_at_4[0x1];
+ u8 allow_multi_pkt_send_wqe[0x1];
u8 min_wqe_inline_mode[0x3];
u8 state[0x4];
u8 reg_umr[0x1];
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index 64d398e662cd..a61a512e5747 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -168,6 +168,11 @@ struct mlx5_packet_pacing_caps {
__u32 reserved;
};
+enum mlx5_ib_mpw_caps {
+ MPW_RESERVED = 1 << 0,
+ MLX5_IB_ALLOW_MPW = 1 << 1,
+};
+
enum mlx5_ib_sw_parsing_offloads {
MLX5_IB_SW_PARSING = 1 << 0,
MLX5_IB_SW_PARSING_CSUM = 1 << 1,
--
2.14.1
[rdma-next 8/8] IB/mlx5: Report mlx5 enhanced multi packet WQE capability
From: Leon Romanovsky @ 2017-08-17 12:52 UTC
To: Doug Ledford
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Bodong Wang
From: Bodong Wang <bodong-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Expose the enhanced multi packet WQE (EMPW) capability to user space
through the uhw portion of query_device.
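With this addition, a user-space consumer can distinguish the two
bits, for example (local re-declaration of the uapi enum, for
illustration only):

#include <stdint.h>
#include <stdio.h>

enum mlx5_ib_mpw_caps {
	MPW_RESERVED         = 1 << 0,
	MLX5_IB_ALLOW_MPW    = 1 << 1,
	MLX5_IB_SUPPORT_EMPW = 1 << 2,
};

int main(void)
{
	uint32_t caps = MLX5_IB_ALLOW_MPW | MLX5_IB_SUPPORT_EMPW;

	printf("MPW: %d, enhanced MPW: %d\n",
	       !!(caps & MLX5_IB_ALLOW_MPW), !!(caps & MLX5_IB_SUPPORT_EMPW));
	return 0;
}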
Signed-off-by: Bodong Wang <bodong-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Daniel Jurgens <danielj-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
drivers/infiniband/hw/mlx5/main.c | 5 +++++
include/linux/mlx5/mlx5_ifc.h | 2 +-
include/uapi/rdma/mlx5-abi.h | 1 +
3 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index aef1d8a0f4e4..06cdc279364d 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -815,6 +815,11 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
if (MLX5_CAP_ETH(mdev, multi_pkt_send_wqe))
resp.mlx5_ib_support_multi_pkt_send_wqes =
MLX5_IB_ALLOW_MPW;
+
+ if (MLX5_CAP_ETH(mdev, enhanced_multi_pkt_send_wqe))
+ resp.mlx5_ib_support_multi_pkt_send_wqes |=
+ MLX5_IB_SUPPORT_EMPW;
+
resp.response_length +=
sizeof(resp.mlx5_ib_support_multi_pkt_send_wqes);
}
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 4334ce9fc803..6329348ff1d2 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -604,7 +604,7 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
u8 rss_ind_tbl_cap[0x4];
u8 reg_umr_sq[0x1];
u8 scatter_fcs[0x1];
- u8 reserved_at_1a[0x1];
+ u8 enhanced_multi_pkt_send_wqe[0x1];
u8 tunnel_lso_const_out_ip_id[0x1];
u8 reserved_at_1c[0x2];
u8 tunnel_statless_gre[0x1];
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index a61a512e5747..1791bf123ba9 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -171,6 +171,7 @@ struct mlx5_packet_pacing_caps {
enum mlx5_ib_mpw_caps {
MPW_RESERVED = 1 << 0,
MLX5_IB_ALLOW_MPW = 1 << 1,
+ MLX5_IB_SUPPORT_EMPW = 1 << 2,
};
enum mlx5_ib_sw_parsing_offloads {
--
2.14.1
^ permalink raw reply related [flat|nested] 10+ messages in thread* Re: [pull request][rdma-next 0/8] mlx5 feature for 4.14
From: Doug Ledford @ 2017-08-24 21:50 UTC
To: Leon Romanovsky; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA
On Thu, 2017-08-17 at 15:52 +0300, Leon Romanovsky wrote:
> ----------------------------------------------------------------
> The mlx5 features for 4.14
>
> 1) Software parsing from Noa
> * Software parsing (SWP) is a feature that can be used to instruct
> the device to stop using its internal parser and to parse packets
> on the transmit path according to offsets set for each packet.
>
> Through this feature, the device allows the handling of checksum
> and LSO by the hardware according to the location of the IP and
> TCP/UDP headers.
>
> 2) Decouple UMR usage from the MR cache from Ilya and fix from Kamal
> * Previously, UMR (HW population of translation entries) was used
> only for MRs allocated from the MR cache.
> This resulted in slow allocations of large MRs,
> and also limited ODP (which relies on UMR) only to cached MRs.
>
> This series uses UMR for populating translations of any MR,
> regardless of the cache. Consequently, allocation speed of
> new/large
> MRs is improved considerably.
>
> 3) Add support for multi underlay QP from Yishai
> * Set underlay QPN as part of flow rule when it's applicable.
>
> 4) Enhanced multi packet WQE from Bodong
> * This feature enables sending multiple packets of various sizes
> using a single WQE, where each data segment in the posted WQE
> points to a separate packet.
>
> ----------------------------------------------------------------
Hi Leon,
I didn't see any problems in here. It plays with mlx5 internal ABIs,
but it was all extensions, and no breakages, so that's fine. I've
applied this on the mellanox branch on github.
--
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD