* [for-next 0/3] Add congestion control capability
@ 2017-08-22 11:49 Devesh Sharma
[not found] ` <1503402587-24669-1-git-send-email-devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 9+ messages in thread
From: Devesh Sharma @ 2017-08-22 11:49 UTC (permalink / raw)
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch series adds support for the congestion control capability
of Broadcom NetXtreme-E 10/25/40/50 RDMA Ethernet Controllers.
The implementation exposes the congestion control related parameters
to the administrator through configfs.
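The resulting configfs layout is sketched below (the device name
"bnxt_re0" is only an example; the device directory is created by the
administrator with mkdir, and the per-port "cc" group carries the
tunables added by this series):

  # mkdir /sys/kernel/config/bnxt_re/bnxt_re0
  # ls /sys/kernel/config/bnxt_re/bnxt_re0/ports/1/cc
  apply  alt_tos_dscp  alt_vlan_pcp  cc_mode  enable  g  inact_th
  init_cr  init_tr  nph_per_state  rtt  tcp_cp  tos_dscp  tos_ecn
  vlan_tx_disable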
Patch 0001 implements the configfs interface that publishes
all the required CC parameters.
Patch 0002 updates the HSI declarations in Broadcom's ethernet
driver. This will cause a merge conflict in Stephen Rothwell's
tree and should most likely be dropped when merging into his tree.
The sole purpose of this patch in the series is to prevent a
build break in Doug's tree.
Patch 0003 implements DSCP-to-priority mapping.
Devesh Sharma (3):
RDMA/bnxt_re: expose cc parameters through configfs
RDMA/bnxt: update bnxt_hsi to hold dscp2pri declaration
RDMA/bnxt_re: setup dscp to priority map
drivers/infiniband/hw/bnxt_re/Makefile | 3 +-
drivers/infiniband/hw/bnxt_re/bnxt_re.h | 14 +
drivers/infiniband/hw/bnxt_re/configfs.c | 761 ++++++++++++++++++++++++++
drivers/infiniband/hw/bnxt_re/configfs.h | 93 ++++
drivers/infiniband/hw/bnxt_re/main.c | 229 +++++++-
drivers/infiniband/hw/bnxt_re/qplib_sp.c | 88 +++
drivers/infiniband/hw/bnxt_re/qplib_sp.h | 23 +
drivers/infiniband/hw/bnxt_re/roce_hsi.h | 152 +++--
drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h | 92 ++++
9 files changed, 1405 insertions(+), 50 deletions(-)
create mode 100644 drivers/infiniband/hw/bnxt_re/configfs.c
create mode 100644 drivers/infiniband/hw/bnxt_re/configfs.h
--
1.8.3.1
* [for-next 1/3] RDMA/bnxt_re: expose cc parameters through configfs
[not found] ` <1503402587-24669-1-git-send-email-devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
@ 2017-08-22 11:49 ` Devesh Sharma
2017-08-22 11:49 ` [for-next 2/3] RDMA/bnxt: update bnxt_hsi to hold dscp2pri declaration Devesh Sharma
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Devesh Sharma @ 2017-08-22 11:49 UTC (permalink / raw)
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch adds a per-port entry in configfs that allows
configuring the congestion control parameters.
The newly configured values are not applied to the hardware
until "echo -n 1 > apply" is issued.
Signed-off-by: Devesh Sharma <devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
---
drivers/infiniband/hw/bnxt_re/Makefile | 3 +-
drivers/infiniband/hw/bnxt_re/bnxt_re.h | 1 +
drivers/infiniband/hw/bnxt_re/configfs.c | 707 +++++++++++++++++++++++++++++++
drivers/infiniband/hw/bnxt_re/configfs.h | 93 ++++
drivers/infiniband/hw/bnxt_re/main.c | 8 +-
drivers/infiniband/hw/bnxt_re/qplib_sp.c | 32 ++
drivers/infiniband/hw/bnxt_re/qplib_sp.h | 19 +
drivers/infiniband/hw/bnxt_re/roce_hsi.h | 152 +++++--
8 files changed, 979 insertions(+), 36 deletions(-)
create mode 100644 drivers/infiniband/hw/bnxt_re/configfs.c
create mode 100644 drivers/infiniband/hw/bnxt_re/configfs.h
diff --git a/drivers/infiniband/hw/bnxt_re/Makefile b/drivers/infiniband/hw/bnxt_re/Makefile
index afbaa0e..2a406a7 100644
--- a/drivers/infiniband/hw/bnxt_re/Makefile
+++ b/drivers/infiniband/hw/bnxt_re/Makefile
@@ -3,4 +3,5 @@ ccflags-y := -Idrivers/net/ethernet/broadcom/bnxt
obj-$(CONFIG_INFINIBAND_BNXT_RE) += bnxt_re.o
bnxt_re-y := main.o ib_verbs.o \
qplib_res.o qplib_rcfw.o \
- qplib_sp.o qplib_fp.o hw_counters.o
+ qplib_sp.o qplib_fp.o hw_counters.o \
+ configfs.o
diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
index b3ad37f..ade6698 100644
--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
@@ -123,6 +123,7 @@ struct bnxt_re_dev {
struct bnxt_qplib_ctx qplib_ctx;
struct bnxt_qplib_res qplib_res;
struct bnxt_qplib_dpi dpi_privileged;
+ struct bnxt_qplib_cc_param cc_param;
atomic_t qp_count;
struct mutex qp_lock; /* protect qp list */
diff --git a/drivers/infiniband/hw/bnxt_re/configfs.c b/drivers/infiniband/hw/bnxt_re/configfs.c
new file mode 100644
index 0000000..d05e239
--- /dev/null
+++ b/drivers/infiniband/hw/bnxt_re/configfs.c
@@ -0,0 +1,707 @@
+/*
+ * Copyright (c) 2015-2017, Broadcom. All rights reserved. The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * Description: Enables configfs interface
+ */
+
+#include "configfs.h"
+
+static struct bnxt_re_dev *__get_rdev_from_name(const char *name)
+{
+ struct bnxt_re_dev *rdev;
+ u8 found = false;
+
+ mutex_lock(&bnxt_re_dev_lock);
+ list_for_each_entry(rdev, &bnxt_re_dev_list, list) {
+ if (!strcmp(name, rdev->ibdev.name)) {
+ found = true;
+ break;
+ }
+ }
+ mutex_unlock(&bnxt_re_dev_lock);
+
+ return found ? rdev : ERR_PTR(-ENODEV);
+}
+
+static struct bnxt_re_cc_group * __get_cc_group(struct config_item *item)
+{
+ struct config_group *group = container_of(item, struct config_group,
+ cg_item);
+ struct bnxt_re_cc_group *ccgrp =
+ container_of(group, struct bnxt_re_cc_group, group);
+ return ccgrp;
+}
+
+static ssize_t apply_show(struct config_item *item, char *buf)
+{
+ return 0;
+}
+
+static ssize_t apply_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+ int rc = 0;
+
+ if (!ccgrp)
+ return -EINVAL;
+
+ rdev = ccgrp->rdev;
+ sscanf(buf, "%x\n", &val);
+ if (val == BNXT_RE_MODIFY_CC) {
+ rc = bnxt_qplib_modify_cc(&rdev->qplib_res,
+ &rdev->cc_param);
+ if (rc)
+ dev_err(rdev_to_dev(rdev),
+ "Failed to apply cc settings\n");
+ }
+
+ return rc ? -EINVAL : strnlen(buf, count);
+}
+CONFIGFS_ATTR(, apply);
+
+
+static ssize_t alt_tos_dscp_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+
+ rdev = ccgrp->rdev;
+ return sprintf(buf,"%#x\n", rdev->cc_param.alt_tos_dscp);
+}
+
+static ssize_t alt_tos_dscp_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.alt_tos_dscp = val & 0xFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_ALT_TOS_DSCP;
+
+ return strnlen(buf, count);
+}
+
+CONFIGFS_ATTR(, alt_tos_dscp);
+
+
+static ssize_t alt_vlan_pcp_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.alt_vlan_pcp);
+}
+
+static ssize_t alt_vlan_pcp_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+
+ rdev = ccgrp->rdev;
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.alt_vlan_pcp = val & 0xFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_ALT_VLAN_PCP;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, alt_vlan_pcp);
+
+static ssize_t cc_mode_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.cc_mode);
+}
+
+static ssize_t cc_mode_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.cc_mode = val & 0xFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_CC_MODE;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, cc_mode);
+
+
+static ssize_t enable_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.enable);
+}
+
+static ssize_t enable_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.enable = val & 0xFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_ENABLE_CC;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, enable);
+
+
+static ssize_t g_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.g);
+}
+
+static ssize_t g_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.g = val & 0xFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_G;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, g);
+
+
+static ssize_t init_cr_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.init_cr);
+}
+
+static ssize_t init_cr_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.init_cr = val & 0xFFFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INIT_CR;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, init_cr);
+
+static ssize_t inact_th_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.inact_th);
+}
+
+static ssize_t inact_th_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.inact_th = val & 0xFFFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INACTIVITY_CP;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, inact_th);
+
+static ssize_t init_tr_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.init_tr);
+}
+
+static ssize_t init_tr_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.init_tr = val & 0xFFFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INIT_TR;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, init_tr);
+
+static ssize_t nph_per_state_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.nph_per_state);
+}
+
+static ssize_t nph_per_state_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.nph_per_state = val & 0xFF;
+ rdev->cc_param.mask |=
+ CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_NUMPHASEPERSTATE;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, nph_per_state);
+
+static ssize_t rtt_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.rtt);
+}
+
+static ssize_t rtt_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.rtt = val & 0xFFFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_RTT;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, rtt);
+
+static ssize_t tcp_cp_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.tcp_cp);
+}
+
+static ssize_t tcp_cp_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.tcp_cp = val & 0xFFFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TCP_CP;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, tcp_cp);
+
+static ssize_t tos_dscp_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.tos_dscp);
+}
+
+static ssize_t tos_dscp_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.tos_dscp = val & 0xFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TOS_DSCP;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, tos_dscp);
+
+static ssize_t tos_ecn_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ return sprintf(buf,"%#x\n", rdev->cc_param.tos_ecn);
+}
+
+static ssize_t tos_ecn_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.tos_ecn = val & 0xFF;
+ rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TOS_ECN;
+
+ return strnlen(buf, count);
+}
+CONFIGFS_ATTR(, tos_ecn);
+
+static struct configfs_attribute *bnxt_re_cc_attrs[] = {
+ &attr_apply,
+ &attr_alt_tos_dscp,
+ &attr_alt_vlan_pcp,
+ &attr_cc_mode,
+ &attr_enable,
+ &attr_g,
+ &attr_init_cr,
+ &attr_inact_th,
+ &attr_init_tr,
+ &attr_nph_per_state,
+ &attr_rtt,
+ &attr_tcp_cp,
+ &attr_tos_dscp,
+ &attr_tos_ecn,
+ NULL,
+};
+
+static struct config_item_type bnxt_re_ccgrp_type = {
+ .ct_attrs = bnxt_re_cc_attrs,
+ .ct_owner = THIS_MODULE,
+};
+
+static int make_bnxt_re_cc(struct bnxt_re_port_group *portgrp,
+ struct bnxt_re_dev *rdev)
+{
+ struct bnxt_re_cc_group *ccgrp;
+ int rc;
+
+ ccgrp = kzalloc(sizeof(*ccgrp), GFP_KERNEL);
+ if (!ccgrp) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ ccgrp->rdev = rdev;
+ config_group_init_type_name(&ccgrp->group, "cc", &bnxt_re_ccgrp_type);
+ configfs_add_default_group(&ccgrp->group, &portgrp->nportgrp);
+ portgrp->ccgrp = ccgrp;
+
+ return 0;
+out:
+ kfree(ccgrp);
+ return rc;
+}
+
+
+static void bnxt_re_release_nport_group(struct config_item *item)
+{
+ struct config_group *group = container_of(item, struct config_group,
+ cg_item);
+ struct bnxt_re_port_group *portgrp =
+ container_of(group, struct bnxt_re_port_group,
+ nportgrp);
+ kfree(portgrp->ccgrp);
+}
+
+static struct configfs_item_operations bnxt_re_nport_item_ops = {
+ .release = bnxt_re_release_nport_group
+};
+
+static struct config_item_type bnxt_re_nportgrp_type = {
+ .ct_item_ops = &bnxt_re_nport_item_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static int make_bnxt_re_ports(struct bnxt_re_dev_group *devgrp,
+ struct bnxt_re_dev *rdev)
+{
+ struct bnxt_re_port_group *ports;
+ struct ib_device *ibdev;
+ int nports, rc, indx;
+
+ if (!rdev)
+ return -ENODEV;
+ ibdev = &rdev->ibdev;
+ nports = ibdev->phys_port_cnt;
+ ports = kcalloc(nports, sizeof(*ports), GFP_KERNEL);
+ if (!ports) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ for (indx = 0; indx < nports; indx++) {
+ char port_name[10];
+ ports[indx].port_num = indx + 1;
+ snprintf(port_name, sizeof(port_name), "%u", indx + 1);
+ ports[indx].devgrp = devgrp;
+ config_group_init_type_name(&ports[indx].nportgrp,
+ port_name, &bnxt_re_nportgrp_type);
+ rc = make_bnxt_re_cc(&ports[indx], rdev);
+ if (rc)
+ goto out;
+ configfs_add_default_group(&ports[indx].nportgrp,
+ &devgrp->port_group);
+ }
+ devgrp->ports = ports;
+
+ return 0;
+out:
+ kfree(ports);
+ return rc;
+}
+
+static void bnxt_re_release_device_group(struct config_item *item)
+{
+ struct config_group *group = container_of(item, struct config_group,
+ cg_item);
+ struct bnxt_re_dev_group *devgrp =
+ container_of(group, struct bnxt_re_dev_group,
+ dev_group);
+ kfree(devgrp);
+}
+
+static void bnxt_re_release_ports_group(struct config_item *item)
+{
+ struct config_group *group = container_of(item, struct config_group,
+ cg_item);
+ struct bnxt_re_dev_group *devgrp =
+ container_of(group, struct bnxt_re_dev_group,
+ port_group);
+ kfree(devgrp->ports);
+ devgrp->ports = NULL;
+}
+
+static struct configfs_item_operations bnxt_re_ports_item_ops = {
+ .release = bnxt_re_release_ports_group
+};
+
+static struct config_item_type bnxt_re_ports_group_type = {
+ .ct_item_ops = &bnxt_re_ports_item_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct configfs_item_operations bnxt_re_dev_item_ops = {
+ .release = bnxt_re_release_device_group
+};
+
+static struct config_item_type bnxt_re_dev_group_type = {
+ .ct_item_ops = &bnxt_re_dev_item_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group *make_bnxt_re_dev(struct config_group *group,
+ const char *name)
+{
+ struct bnxt_re_dev *rdev;
+ struct bnxt_re_dev_group *devgrp = NULL;
+ int rc = -ENODEV;
+
+ rdev = __get_rdev_from_name(name);
+ if (PTR_ERR(rdev) == -ENODEV)
+ goto out;
+
+ devgrp = kzalloc(sizeof(*devgrp), GFP_KERNEL);
+ if (!devgrp) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ strncpy(devgrp->name, name, sizeof(devgrp->name));
+ config_group_init_type_name(&devgrp->port_group, "ports",
+ &bnxt_re_ports_group_type);
+ rc = make_bnxt_re_ports(devgrp, rdev);
+ if (rc)
+ goto out;
+ config_group_init_type_name(&devgrp->dev_group, name,
+ &bnxt_re_dev_group_type);
+ configfs_add_default_group(&devgrp->port_group,
+ &devgrp->dev_group);
+ return &devgrp->dev_group;
+out:
+ kfree(devgrp);
+ return ERR_PTR(rc);
+}
+
+static struct configfs_group_operations bnxt_re_group_ops = {
+ .make_group = &make_bnxt_re_dev
+};
+
+static struct config_item_type bnxt_re_subsys_type = {
+ .ct_group_ops = &bnxt_re_group_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct configfs_subsystem bnxt_re_subsys = {
+ .su_group = {
+ .cg_item = {
+ .ci_namebuf = "bnxt_re",
+ .ci_type = &bnxt_re_subsys_type,
+ },
+ },
+};
+
+int bnxt_re_configfs_init(void)
+{
+ config_group_init(&bnxt_re_subsys.su_group);
+ mutex_init(&bnxt_re_subsys.su_mutex);
+ return configfs_register_subsystem(&bnxt_re_subsys);
+}
+
+void bnxt_re_configfs_exit(void)
+{
+ configfs_unregister_subsystem(&bnxt_re_subsys);
+}
diff --git a/drivers/infiniband/hw/bnxt_re/configfs.h b/drivers/infiniband/hw/bnxt_re/configfs.h
new file mode 100644
index 0000000..88f7edc
--- /dev/null
+++ b/drivers/infiniband/hw/bnxt_re/configfs.h
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) 2015-2017, Broadcom. All rights reserved. The term
+ * Broadcom refers to Broadcom Limited and/or its subsidiaries.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * Description: defines data-structure for configfs interface
+ */
+#ifndef __BNXT_RE_CONFIGFS_H__
+#define __BNXT_RE_CONFIGFS_H__
+
+#include <linux/module.h>
+#include <linux/configfs.h>
+
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_user_verbs.h>
+#include <rdma/ib_umem.h>
+#include <rdma/ib_addr.h>
+
+#include "bnxt_ulp.h"
+#include "roce_hsi.h"
+#include "qplib_res.h"
+#include "qplib_sp.h"
+#include "qplib_fp.h"
+#include "qplib_rcfw.h"
+#include "bnxt_re.h"
+
+extern struct list_head bnxt_re_dev_list;
+extern u32 adapter_count;
+extern struct mutex bnxt_re_dev_lock;
+
+enum bnxt_re_configfs_cmd {
+ BNXT_RE_MODIFY_CC = 0x01,
+};
+
+struct bnxt_re_cc_group;
+struct bnxt_re_port_group;
+struct bnxt_re_dev_group;
+
+struct bnxt_re_cc_group
+{
+ struct bnxt_re_dev *rdev;
+ struct bnxt_re_port_group *portgrp;
+ struct config_group group;
+};
+
+struct bnxt_re_port_group
+{
+ unsigned int port_num;
+ struct bnxt_re_dev_group *devgrp;
+ struct bnxt_re_cc_group *ccgrp;
+ struct config_group nportgrp;
+};
+
+struct bnxt_re_dev_group
+{
+ char name[IB_DEVICE_NAME_MAX];
+ struct config_group dev_group;
+ struct config_group port_group;
+ struct bnxt_re_port_group *ports;
+};
+
+int bnxt_re_configfs_init(void);
+void bnxt_re_configfs_exit(void);
+#endif
diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index 82d1cbc..6ea3ae8 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -65,6 +65,7 @@
#include <rdma/bnxt_re-abi.h>
#include "bnxt.h"
#include "hw_counters.h"
+#include "configfs.h"
static char version[] =
BNXT_RE_DESC " v" ROCE_DRV_MODULE_VERSION "\n";
@@ -74,9 +75,9 @@
MODULE_LICENSE("Dual BSD/GPL");
/* globals */
-static struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
+struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
/* Mutex to protect the list of bnxt_re devices added */
-static DEFINE_MUTEX(bnxt_re_dev_lock);
+DEFINE_MUTEX(bnxt_re_dev_lock);
static struct workqueue_struct *bnxt_re_wq;
/* for handling bnxt_en callbacks later */
@@ -1357,6 +1358,7 @@ static int __init bnxt_re_mod_init(void)
if (!bnxt_re_wq)
return -ENOMEM;
+ bnxt_re_configfs_init();
INIT_LIST_HEAD(&bnxt_re_dev_list);
rc = register_netdevice_notifier(&bnxt_re_netdev_notifier);
@@ -1368,6 +1370,7 @@ static int __init bnxt_re_mod_init(void)
return 0;
err_netdev:
+ bnxt_re_configfs_exit();
destroy_workqueue(bnxt_re_wq);
return rc;
@@ -1376,6 +1379,7 @@ static int __init bnxt_re_mod_init(void)
static void __exit bnxt_re_mod_exit(void)
{
unregister_netdevice_notifier(&bnxt_re_netdev_notifier);
+ bnxt_re_configfs_exit();
if (bnxt_re_wq)
destroy_workqueue(bnxt_re_wq);
}
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
index e277e54..2438477 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
@@ -730,3 +730,35 @@ int bnxt_qplib_map_tc2cos(struct bnxt_qplib_res *res, u16 *cids)
(void *)&resp, NULL, 0);
return 0;
}
+
+int bnxt_qplib_modify_cc(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_cc_param *cc_param)
+{
+ struct cmdq_modify_roce_cc req;
+ struct creq_modify_roce_cc_resp resp;
+ u16 cmd_flags = 0;
+ int rc;
+
+ RCFW_CMD_PREP(req, MODIFY_CC, cmd_flags);
+ req.modify_mask = cpu_to_le32(cc_param->mask);
+ cc_param->mask = 0x00; /* reset mask for next. */
+ req.enable_cc = cc_param->enable;
+ req.g = cc_param->g;
+ req.num_phases_per_state = cc_param->nph_per_state;
+ req.init_cr = cpu_to_le16(cc_param->init_cr);
+ req.init_tr = cpu_to_le16(cc_param->init_tr);
+ req.tos_dscp_tos_ecn = (cc_param->tos_dscp <<
+ CMDQ_MODIFY_ROCE_CC_TOS_DSCP_SFT) |
+ (cc_param->tos_ecn &
+ CMDQ_MODIFY_ROCE_CC_TOS_ECN_MASK);
+ req.alt_vlan_pcp = cc_param->alt_vlan_pcp;
+ req.alt_tos_dscp = cpu_to_le16(cc_param->alt_tos_dscp);
+ req.rtt = cpu_to_le16(cc_param->rtt);
+ req.tcp_cp = cpu_to_le16(cc_param->tcp_cp);
+ req.cc_mode = cc_param->cc_mode;
+ req.inactivity_th = cpu_to_le16(cc_param->inact_th);
+
+ rc = bnxt_qplib_rcfw_send_message(res->rcfw, (void *)&req,
+ (void *)&resp, NULL, 0);
+ return rc;
+}
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
index 1132258..87173701 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
@@ -119,6 +119,23 @@ struct bnxt_qplib_frpl {
struct bnxt_qplib_hwq hwq;
};
+struct bnxt_qplib_cc_param {
+ u8 alt_vlan_pcp;
+ u8 alt_tos_dscp;
+ u8 cc_mode;
+ u8 enable;
+ u16 inact_th;
+ u16 init_cr;
+ u16 init_tr;
+ u16 rtt;
+ u8 g;
+ u8 nph_per_state;
+ u8 tos_ecn;
+ u8 tos_dscp;
+ u16 tcp_cp;
+ u32 mask;
+};
+
#define BNXT_QPLIB_ACCESS_LOCAL_WRITE BIT(0)
#define BNXT_QPLIB_ACCESS_REMOTE_READ BIT(1)
#define BNXT_QPLIB_ACCESS_REMOTE_WRITE BIT(2)
@@ -164,4 +181,6 @@ int bnxt_qplib_alloc_fast_reg_page_list(struct bnxt_qplib_res *res,
int bnxt_qplib_free_fast_reg_page_list(struct bnxt_qplib_res *res,
struct bnxt_qplib_frpl *frpl);
int bnxt_qplib_map_tc2cos(struct bnxt_qplib_res *res, u16 *cids);
+int bnxt_qplib_modify_cc(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_cc_param *cc_param);
#endif /* __BNXT_QPLIB_SP_H__*/
diff --git a/drivers/infiniband/hw/bnxt_re/roce_hsi.h b/drivers/infiniband/hw/bnxt_re/roce_hsi.h
index eeb55b2..60b61cb 100644
--- a/drivers/infiniband/hw/bnxt_re/roce_hsi.h
+++ b/drivers/infiniband/hw/bnxt_re/roce_hsi.h
@@ -1853,6 +1853,95 @@ struct cmdq_query_version {
__le64 resp_addr;
};
+/* Modify congestion control command (56 bytes) */
+struct cmdq_modify_roce_cc {
+ u8 opcode;
+ #define CMDQ_MODIFY_ROCE_CC_OPCODE_MODIFY_ROCE_CC 0x8cUL
+ u8 cmd_size;
+ __le16 flags;
+ __le16 cookie;
+ u8 resp_size;
+ u8 reserved8;
+ __le64 resp_addr;
+ __le32 modify_mask;
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_ENABLE_CC 0x1UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_G 0x2UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_NUMPHASEPERSTATE 0x4UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INIT_CR 0x8UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INIT_TR 0x10UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TOS_ECN 0x20UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TOS_DSCP 0x40UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_ALT_VLAN_PCP 0x80UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_ALT_TOS_DSCP 0x100UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_RTT 0x200UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_CC_MODE 0x400UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TCP_CP 0x800UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TX_QUEUE 0x1000UL
+ #define CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INACTIVITY_CP 0x2000UL
+ u8 enable_cc;
+ #define CMDQ_MODIFY_ROCE_CC_ENABLE_CC 0x1UL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD1_MASK 0xfeUL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD1_SFT 1
+ u8 g;
+ #define CMDQ_MODIFY_ROCE_CC_G_MASK 0x7UL
+ #define CMDQ_MODIFY_ROCE_CC_G_SFT 0
+ #define CMDQ_MODIFY_ROCE_CC_RSVD2_MASK 0xf8UL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD2_SFT 3
+ u8 num_phases_per_state;
+ u8 rsvd9;
+ __le16 init_cr;
+ __le16 init_tr;
+ u8 tos_dscp_tos_ecn;
+ #define CMDQ_MODIFY_ROCE_CC_TOS_ECN_MASK 0x3UL
+ #define CMDQ_MODIFY_ROCE_CC_TOS_ECN_SFT 0
+ #define CMDQ_MODIFY_ROCE_CC_TOS_DSCP_MASK 0xfcUL
+ #define CMDQ_MODIFY_ROCE_CC_TOS_DSCP_SFT 2
+ u8 alt_vlan_pcp;
+ #define CMDQ_MODIFY_ROCE_CC_ALT_VLAN_PCP_MASK 0x7UL
+ #define CMDQ_MODIFY_ROCE_CC_ALT_VLAN_PCP_SFT 0
+ #define CMDQ_MODIFY_ROCE_CC_RSVD3_MASK 0xf8UL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD3_SFT 3
+ __le16 alt_tos_dscp;
+ #define CMDQ_MODIFY_ROCE_CC_ALT_TOS_DSCP_MASK 0x3fUL
+ #define CMDQ_MODIFY_ROCE_CC_ALT_TOS_DSCP_SFT 0
+ #define CMDQ_MODIFY_ROCE_CC_RSVD4_MASK 0xffc0UL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD4_SFT 6
+ __le16 rtt;
+ #define CMDQ_MODIFY_ROCE_CC_RTT_MASK 0x3fffUL
+ #define CMDQ_MODIFY_ROCE_CC_RTT_SFT 0
+ #define CMDQ_MODIFY_ROCE_CC_RSVD5_MASK 0xc000UL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD5_SFT 14
+ __le16 tcp_cp;
+ #define CMDQ_MODIFY_ROCE_CC_TCP_CP_MASK 0x3ffUL
+ #define CMDQ_MODIFY_ROCE_CC_TCP_CP_SFT 0
+ #define CMDQ_MODIFY_ROCE_CC_RSVD6_MASK 0xfc00UL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD6_SFT 10
+ u8 cc_mode;
+ #define CMDQ_MODIFY_ROCE_CC_CC_MODE 0x1UL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD7_MASK 0xfeUL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD7_SFT 1
+ u8 tx_queue;
+ #define CMDQ_MODIFY_ROCE_CC_TX_QUEUE_MASK 0x3UL
+ #define CMDQ_MODIFY_ROCE_CC_TX_QUEUE_SFT 0
+ #define CMDQ_MODIFY_ROCE_CC_RSVD8_MASK 0xfcUL
+ #define CMDQ_MODIFY_ROCE_CC_RSVD8_SFT 2
+ __le16 inactivity_th;
+ __le64 reserved64;
+ __le64 reserved64_1;
+};
+
+/* Query congestion control command (16 bytes) */
+struct cmdq_query_roce_cc {
+ u8 opcode;
+ #define CMDQ_QUERY_ROCE_CC_OPCODE_QUERY_ROCE_CC 0x8dUL
+ u8 cmd_size;
+ __le16 flags;
+ __le16 cookie;
+ u8 resp_size;
+ u8 reserved8;
+ __le64 resp_addr;
+};
+
/* Command-Response Event Queue (CREQ) Structures */
/* Base CREQ Record (16 bytes) */
struct creq_base {
@@ -2715,70 +2804,67 @@ struct creq_query_version_resp {
};
/* Modify congestion control command response (16 bytes) */
-struct creq_modify_cc_resp {
+struct creq_modify_roce_cc_resp {
u8 type;
- #define CREQ_MODIFY_CC_RESP_TYPE_MASK 0x3fUL
- #define CREQ_MODIFY_CC_RESP_TYPE_SFT 0
- #define CREQ_MODIFY_CC_RESP_TYPE_QP_EVENT 0x38UL
- #define CREQ_MODIFY_CC_RESP_RESERVED2_MASK 0xc0UL
- #define CREQ_MODIFY_CC_RESP_RESERVED2_SFT 6
+ #define CREQ_MODIFY_ROCE_CC_RESP_TYPE_MASK 0x3fUL
+ #define CREQ_MODIFY_ROCE_CC_RESP_TYPE_SFT 0
+ #define CREQ_MODIFY_ROCE_CC_RESP_TYPE_QP_EVENT 0x38UL
+ #define CREQ_MODIFY_ROCE_CC_RESP_RESERVED2_MASK 0xc0UL
+ #define CREQ_MODIFY_ROCE_CC_RESP_RESERVED2_SFT 6
u8 status;
__le16 cookie;
__le32 reserved32;
u8 v;
- #define CREQ_MODIFY_CC_RESP_V 0x1UL
- #define CREQ_MODIFY_CC_RESP_RESERVED7_MASK 0xfeUL
- #define CREQ_MODIFY_CC_RESP_RESERVED7_SFT 1
+ #define CREQ_MODIFY_ROCE_CC_RESP_V 0x1UL
+ #define CREQ_MODIFY_ROCE_CC_RESP_RESERVED7_MASK 0xfeUL
+ #define CREQ_MODIFY_ROCE_CC_RESP_RESERVED7_SFT 1
u8 event;
- #define CREQ_MODIFY_CC_RESP_EVENT_MODIFY_CC 0x8cUL
+ #define CREQ_MODIFY_ROCE_CC_RESP_EVENT_MODIFY_ROCE_CC 0x8cUL
__le16 reserved48[3];
};
/* Query congestion control command response (16 bytes) */
-struct creq_query_cc_resp {
+struct creq_query_roce_cc_resp {
u8 type;
- #define CREQ_QUERY_CC_RESP_TYPE_MASK 0x3fUL
- #define CREQ_QUERY_CC_RESP_TYPE_SFT 0
- #define CREQ_QUERY_CC_RESP_TYPE_QP_EVENT 0x38UL
- #define CREQ_QUERY_CC_RESP_RESERVED2_MASK 0xc0UL
- #define CREQ_QUERY_CC_RESP_RESERVED2_SFT 6
+ #define CREQ_QUERY_ROCE_CC_RESP_TYPE_MASK 0x3fUL
+ #define CREQ_QUERY_ROCE_CC_RESP_TYPE_SFT 0
+ #define CREQ_QUERY_ROCE_CC_RESP_TYPE_QP_EVENT 0x38UL
+ #define CREQ_QUERY_ROCE_CC_RESP_RESERVED2_MASK 0xc0UL
+ #define CREQ_QUERY_ROCE_CC_RESP_RESERVED2_SFT 6
u8 status;
__le16 cookie;
__le32 size;
u8 v;
- #define CREQ_QUERY_CC_RESP_V 0x1UL
- #define CREQ_QUERY_CC_RESP_RESERVED7_MASK 0xfeUL
- #define CREQ_QUERY_CC_RESP_RESERVED7_SFT 1
+ #define CREQ_QUERY_ROCE_CC_RESP_V 0x1UL
+ #define CREQ_QUERY_ROCE_CC_RESP_RESERVED7_MASK 0xfeUL
+ #define CREQ_QUERY_ROCE_CC_RESP_RESERVED7_SFT 1
u8 event;
- #define CREQ_QUERY_CC_RESP_EVENT_QUERY_CC 0x8dUL
+ #define CREQ_QUERY_ROCE_CC_RESP_EVENT_QUERY_ROCE_CC 0x8dUL
__le16 reserved48[3];
};
/* Query congestion control command response side buffer structure (32 bytes) */
-struct creq_query_cc_resp_sb {
+struct creq_query_roce_cc_resp_sb {
u8 opcode;
- #define CREQ_QUERY_CC_RESP_SB_OPCODE_QUERY_CC 0x8dUL
+ #define CREQ_QUERY_ROCE_CC_RESP_SB_OPCODE_QUERY_ROCE_CC 0x8dUL
u8 status;
__le16 cookie;
__le16 flags;
u8 resp_size;
u8 reserved8;
u8 enable_cc;
- #define CREQ_QUERY_CC_RESP_SB_ENABLE_CC 0x1UL
+ #define CREQ_QUERY_ROCE_CC_RESP_SB_ENABLE_CC 0x1UL
+ u8 tos_dscp_tos_ecn;
+ #define CREQ_QUERY_ROCE_CC_RESP_SB_TOS_ECN_MASK 0x3UL
+ #define CREQ_QUERY_ROCE_CC_RESP_SB_TOS_ECN_SFT 0
+ #define CREQ_QUERY_ROCE_CC_RESP_SB_TOS_DSCP_MASK 0xfcUL
+ #define CREQ_QUERY_ROCE_CC_RESP_SB_TOS_DSCP_SFT 2
u8 g;
- #define CREQ_QUERY_CC_RESP_SB_G_MASK 0x7UL
- #define CREQ_QUERY_CC_RESP_SB_G_SFT 0
+ #define CREQ_QUERY_ROCE_CC_RESP_SB_G_MASK 0x7UL
+ #define CREQ_QUERY_ROCE_CC_RESP_SB_G_SFT 0
u8 num_phases_per_state;
__le16 init_cr;
- u8 unused_2;
- __le16 unused_3;
- u8 unused_4;
__le16 init_tr;
- u8 tos_dscp_tos_ecn;
- #define CREQ_QUERY_CC_RESP_SB_TOS_ECN_MASK 0x3UL
- #define CREQ_QUERY_CC_RESP_SB_TOS_ECN_SFT 0
- #define CREQ_QUERY_CC_RESP_SB_TOS_DSCP_MASK 0xfcUL
- #define CREQ_QUERY_CC_RESP_SB_TOS_DSCP_SFT 2
__le64 reserved64;
__le64 reserved64_1;
};
--
1.8.3.1
* [for-next 2/3] RDMA/bnxt: update bnxt_hsi to hold dscp2pri declaration
[not found] ` <1503402587-24669-1-git-send-email-devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2017-08-22 11:49 ` [for-next 1/3] RDMA/bnxt_re: expose cc parameters through configfs Devesh Sharma
@ 2017-08-22 11:49 ` Devesh Sharma
2017-08-22 11:49 ` [for-next 3/3] RDMA/bnxt_re: setup dscp to priority map Devesh Sharma
2017-08-22 12:32 ` [for-next 0/3] Add congestion control capability Leon Romanovsky
3 siblings, 0 replies; 9+ messages in thread
From: Devesh Sharma @ 2017-08-22 11:49 UTC (permalink / raw)
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch adds new structure declarations to the bnxt_hsi.h
file. The new declarations are related to the DSCP-to-priority
mapping slow path commands.
Signed-off-by: Devesh Sharma <devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
---
drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h | 92 +++++++++++++++++++++++++++
1 file changed, 92 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
index 7dc71bb..d45a039 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
@@ -3152,6 +3152,95 @@ struct hwrm_queue_cos2bw_cfg_output {
u8 valid;
};
+/* hwrm_queue_dscp_qcaps */
+/* Input (24 bytes) */
+struct hwrm_queue_dscp_qcaps_input {
+ __le16 req_type;
+ __le16 cmpl_ring;
+ __le16 seq_id;
+ __le16 target_id;
+ __le64 resp_addr;
+ u8 port_id;
+ u8 unused_0[7];
+};
+
+/* Output (16 bytes) */
+struct hwrm_queue_dscp_qcaps_output {
+ __le16 error_code;
+ __le16 req_type;
+ __le16 seq_id;
+ __le16 resp_len;
+ u8 num_dscp_bits;
+ u8 unused_0;
+ __le16 max_entries;
+ u8 unused_1;
+ u8 unused_2;
+ u8 unused_3;
+ u8 valid;
+};
+
+/* hwrm_queue_dscp2pri_qcfg */
+/* Input (32 bytes) */
+struct hwrm_queue_dscp2pri_qcfg_input {
+ __le16 req_type;
+ __le16 cmpl_ring;
+ __le16 seq_id;
+ __le16 target_id;
+ __le64 resp_addr;
+ __le64 dest_data_addr;
+ u8 port_id;
+ u8 unused_0;
+ __le16 dest_data_buffer_size;
+ __le32 unused_1;
+};
+
+/* Output (16 bytes) */
+struct hwrm_queue_dscp2pri_qcfg_output {
+ __le16 error_code;
+ __le16 req_type;
+ __le16 seq_id;
+ __le16 resp_len;
+ __le16 entry_cnt;
+ u8 default_pri;
+ u8 unused_0;
+ u8 unused_1;
+ u8 unused_2;
+ u8 unused_3;
+ u8 valid;
+};
+
+/* hwrm_queue_dscp2pri_cfg */
+/* Input (40 bytes) */
+struct hwrm_queue_dscp2pri_cfg_input {
+ __le16 req_type;
+ __le16 cmpl_ring;
+ __le16 seq_id;
+ __le16 target_id;
+ __le64 resp_addr;
+ __le64 src_data_addr;
+ __le32 flags;
+ #define QUEUE_DSCP2PRI_CFG_REQ_FLAGS_USE_HW_DEFAULT_PRI 0x1UL
+ __le32 enables;
+ #define QUEUE_DSCP2PRI_CFG_REQ_ENABLES_DEFAULT_PRI 0x1UL
+ u8 port_id;
+ u8 default_pri;
+ __le16 entry_cnt;
+ __le32 unused_0;
+};
+
+/* Output (16 bytes) */
+struct hwrm_queue_dscp2pri_cfg_output {
+ __le16 error_code;
+ __le16 req_type;
+ __le16 seq_id;
+ __le16 resp_len;
+ __le32 unused_0;
+ u8 unused_1;
+ u8 unused_2;
+ u8 unused_3;
+ u8 valid;
+};
+
/* hwrm_vnic_alloc */
/* Input (24 bytes) */
struct hwrm_vnic_alloc_input {
@@ -5429,6 +5518,9 @@ struct cmd_nums {
#define HWRM_QUEUE_PRI2COS_CFG (0x38UL)
#define HWRM_QUEUE_COS2BW_QCFG (0x39UL)
#define HWRM_QUEUE_COS2BW_CFG (0x3aUL)
+ #define HWRM_QUEUE_DSCP_QCAPS (0x3bUL)
+ #define HWRM_QUEUE_DSCP2PRI_QCFG (0x3cUL)
+ #define HWRM_QUEUE_DSCP2PRI_CFG (0x3dUL)
#define HWRM_VNIC_ALLOC (0x40UL)
#define HWRM_VNIC_FREE (0x41UL)
#define HWRM_VNIC_CFG (0x42UL)
--
1.8.3.1
* [for-next 3/3] RDMA/bnxt_re: setup dscp to priority map
[not found] ` <1503402587-24669-1-git-send-email-devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2017-08-22 11:49 ` [for-next 1/3] RDMA/bnxt_re: expose cc parameters through configfs Devesh Sharma
2017-08-22 11:49 ` [for-next 2/3] RDMA/bnxt: update bnxt_hsi to hold dscp2pri declaration Devesh Sharma
@ 2017-08-22 11:49 ` Devesh Sharma
2017-08-22 12:32 ` [for-next 0/3] Add congestion control capability Leon Romanovsky
3 siblings, 0 replies; 9+ messages in thread
From: Devesh Sharma @ 2017-08-22 11:49 UTC (permalink / raw)
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch allows the bnxt_re driver to map a particular
DSCP value to a pre-configured RoCE priority. A deferred
work routine keeps checking for changes in the RoCE
priority; whenever a change is detected, the hardware is
updated with the new settings.
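As an illustration (the device name and values are only examples), the
new vlan_tx_disable knob can be combined with a DSCP change, and both
take effect on the next apply:

  # echo -n 1 > /sys/kernel/config/bnxt_re/bnxt_re0/ports/1/cc/vlan_tx_disable
  # echo -n 0x18 > /sys/kernel/config/bnxt_re/bnxt_re0/ports/1/cc/tos_dscp
  # echo -n 1 > /sys/kernel/config/bnxt_re/bnxt_re0/ports/1/cc/apply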
Signed-off-by: Eddie Wai <eddie.wai-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
Signed-off-by: Devesh Sharma <devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
---
drivers/infiniband/hw/bnxt_re/bnxt_re.h | 13 ++
drivers/infiniband/hw/bnxt_re/configfs.c | 54 ++++++++
drivers/infiniband/hw/bnxt_re/main.c | 221 +++++++++++++++++++++++++++++--
drivers/infiniband/hw/bnxt_re/qplib_sp.c | 56 ++++++++
drivers/infiniband/hw/bnxt_re/qplib_sp.h | 4 +
5 files changed, 334 insertions(+), 14 deletions(-)
diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
index ade6698..fbbbea3 100644
--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
@@ -84,6 +84,12 @@ struct bnxt_re_sqp_entries {
struct bnxt_re_qp *qp1_qp;
};
+struct bnxt_re_dscp2pri {
+ u8 dscp;
+ u8 mask;
+ u8 pri;
+};
+
#define BNXT_RE_MIN_MSIX 2
#define BNXT_RE_MAX_MSIX 9
#define BNXT_RE_AEQ_IDX 0
@@ -108,6 +114,7 @@ struct bnxt_re_dev {
struct delayed_work worker;
u8 cur_prio_map;
+ u8 dscp_prio;
/* FP Notification Queue (CQ & SRQ) */
struct tasklet_struct nq_task;
@@ -124,6 +131,7 @@ struct bnxt_re_dev {
struct bnxt_qplib_res qplib_res;
struct bnxt_qplib_dpi dpi_privileged;
struct bnxt_qplib_cc_param cc_param;
+ struct mutex cc_lock;
atomic_t qp_count;
struct mutex qp_lock; /* protect qp list */
@@ -158,4 +166,9 @@ static inline struct device *rdev_to_dev(struct bnxt_re_dev *rdev)
return NULL;
}
+int bnxt_re_set_hwrm_dscp2pri(struct bnxt_re_dev *rdev,
+ struct bnxt_re_dscp2pri *d2p, u16 count);
+int bnxt_re_query_hwrm_dscp2pri(struct bnxt_re_dev *rdev,
+ struct bnxt_re_dscp2pri *d2p, u16 count);
+int bnxt_re_vlan_tx_disable(struct bnxt_re_dev *rdev);
#endif
diff --git a/drivers/infiniband/hw/bnxt_re/configfs.c b/drivers/infiniband/hw/bnxt_re/configfs.c
index d05e239..e356481 100644
--- a/drivers/infiniband/hw/bnxt_re/configfs.c
+++ b/drivers/infiniband/hw/bnxt_re/configfs.c
@@ -73,6 +73,7 @@ static ssize_t apply_store(struct config_item *item, const char *buf,
{
struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
struct bnxt_re_dev *rdev;
+ struct bnxt_re_dscp2pri d2p;
unsigned int val;
int rc = 0;
@@ -82,11 +83,30 @@ static ssize_t apply_store(struct config_item *item, const char *buf,
rdev = ccgrp->rdev;
sscanf(buf, "%x\n", &val);
if (val == BNXT_RE_MODIFY_CC) {
+ /* For VLAN transmission disablement */
+ if (rdev->cc_param.mask &
+ BNXT_QPLIB_CC_PARAM_MASK_VLAN_TX_DISABLE) {
+ rdev->cc_param.mask &=
+ ~BNXT_QPLIB_CC_PARAM_MASK_VLAN_TX_DISABLE;
+ rc = bnxt_re_vlan_tx_disable(rdev);
+ if (rc)
+ dev_err(rdev_to_dev(rdev),
+ "Failed to disable VLAN tx\n");
+ }
rc = bnxt_qplib_modify_cc(&rdev->qplib_res,
&rdev->cc_param);
if (rc)
dev_err(rdev_to_dev(rdev),
"Failed to apply cc settings\n");
+ mutex_lock(&rdev->cc_lock);
+ d2p.dscp = rdev->cc_param.tos_dscp;
+ d2p.pri = rdev->dscp_prio;
+ mutex_unlock(&rdev->cc_lock);
+ d2p.mask = 0x3F;
+ rc = bnxt_re_set_hwrm_dscp2pri(rdev, &d2p, 1);
+ if (rc)
+ dev_err(rdev_to_dev(rdev),
+ "Failed to updated dscp\n");
}
return rc ? -EINVAL : strnlen(buf, count);
@@ -462,7 +482,9 @@ static ssize_t tos_dscp_store(struct config_item *item, const char *buf,
rdev = ccgrp->rdev;
sscanf(buf, "%x\n", &val);
+ mutex_lock(&rdev->cc_lock);
rdev->cc_param.tos_dscp = val & 0xFF;
+ mutex_unlock(&rdev->cc_lock);
rdev->cc_param.mask |= CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TOS_DSCP;
return strnlen(buf, count);
@@ -500,6 +522,37 @@ static ssize_t tos_ecn_store(struct config_item *item, const char *buf,
}
CONFIGFS_ATTR(, tos_ecn);
+static ssize_t vlan_tx_disable_show(struct config_item *item, char *buf)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+
+ if (!ccgrp)
+ return -EINVAL;
+
+ rdev = ccgrp->rdev;
+ return sprintf(buf,"%#x\n", rdev->cc_param.vlan_tx_disable);
+}
+
+static ssize_t vlan_tx_disable_store(struct config_item *item, const char *buf,
+ size_t count)
+{
+ struct bnxt_re_cc_group *ccgrp = __get_cc_group(item);
+ struct bnxt_re_dev *rdev;
+ unsigned int val;
+
+ if (!ccgrp)
+ return -EINVAL;
+ rdev = ccgrp->rdev;
+ sscanf(buf, "%x\n", &val);
+ rdev->cc_param.vlan_tx_disable = val & 0x1;
+ rdev->cc_param.mask |= BNXT_QPLIB_CC_PARAM_MASK_VLAN_TX_DISABLE;
+
+ return strnlen(buf, count);
+}
+
+CONFIGFS_ATTR(, vlan_tx_disable);
+
static struct configfs_attribute *bnxt_re_cc_attrs[] = {
&attr_apply,
&attr_alt_tos_dscp,
@@ -515,6 +568,7 @@ static ssize_t tos_ecn_store(struct config_item *item, const char *buf,
&attr_tcp_cp,
&attr_tos_dscp,
&attr_tos_ecn,
+ &attr_vlan_tx_disable,
NULL,
};
diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index 6ea3ae8..decb740 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -591,6 +591,7 @@ static struct bnxt_re_dev *bnxt_re_dev_add(struct net_device *netdev,
rdev->id = rdev->en_dev->pdev->devfn;
INIT_LIST_HEAD(&rdev->qp_list);
mutex_init(&rdev->qp_lock);
+ mutex_init(&rdev->cc_lock);
atomic_set(&rdev->qp_count, 0);
atomic_set(&rdev->cq_count, 0);
atomic_set(&rdev->srq_count, 0);
@@ -889,8 +890,11 @@ static int bnxt_re_update_gid(struct bnxt_re_dev *rdev)
continue;
/* need to modify the VLAN enable setting of non VLAN GID only
* as setting is done for VLAN GID while adding GID
+ *
+ * If vlan_tx_disable is enable, then we'll need to remove the
+ * vlan entry from the sgid_tbl.
*/
- if (sgid_tbl->vlan[index])
+ if (sgid_tbl->vlan[index] && !rdev->cc_param.vlan_tx_disable)
continue;
memcpy(&gid, &sgid_tbl->tbl[index], sizeof(gid));
@@ -902,7 +906,7 @@ static int bnxt_re_update_gid(struct bnxt_re_dev *rdev)
return rc;
}
-static u32 bnxt_re_get_priority_mask(struct bnxt_re_dev *rdev)
+static u32 bnxt_re_get_priority_mask(struct bnxt_re_dev *rdev, u8 selector)
{
u32 prio_map = 0, tmp_map = 0;
struct net_device *netdev;
@@ -911,15 +915,19 @@ static u32 bnxt_re_get_priority_mask(struct bnxt_re_dev *rdev)
netdev = rdev->netdev;
memset(&app, 0, sizeof(app));
- app.selector = IEEE_8021QAZ_APP_SEL_ETHERTYPE;
- app.protocol = ETH_P_IBOE;
- tmp_map = dcb_ieee_getapp_mask(netdev, &app);
- prio_map = tmp_map;
+ if (selector & IEEE_8021QAZ_APP_SEL_ETHERTYPE) {
+ app.selector = IEEE_8021QAZ_APP_SEL_ETHERTYPE;
+ app.protocol = ETH_P_IBOE;
+ tmp_map = dcb_ieee_getapp_mask(netdev, &app);
+ prio_map = tmp_map;
+ }
- app.selector = IEEE_8021QAZ_APP_SEL_DGRAM;
- app.protocol = ROCE_V2_UDP_DPORT;
- tmp_map = dcb_ieee_getapp_mask(netdev, &app);
- prio_map |= tmp_map;
+ if (selector & IEEE_8021QAZ_APP_SEL_DGRAM) {
+ app.selector = IEEE_8021QAZ_APP_SEL_DGRAM;
+ app.protocol = ROCE_V2_UDP_DPORT;
+ tmp_map = dcb_ieee_getapp_mask(netdev, &app);
+ prio_map |= tmp_map;
+ }
return prio_map;
}
@@ -946,8 +954,9 @@ static int bnxt_re_setup_qos(struct bnxt_re_dev *rdev)
int rc;
/* Get priority for roce */
- prio_map = bnxt_re_get_priority_mask(rdev);
-
+ prio_map = bnxt_re_get_priority_mask(rdev,
+ (IEEE_8021QAZ_APP_SEL_ETHERTYPE |
+ IEEE_8021QAZ_APP_SEL_DGRAM));
if (prio_map == rdev->cur_prio_map)
return 0;
rdev->cur_prio_map = prio_map;
@@ -973,9 +982,188 @@ static int bnxt_re_setup_qos(struct bnxt_re_dev *rdev)
*/
if ((prio_map == 0 && rdev->qplib_res.prio) ||
(prio_map != 0 && !rdev->qplib_res.prio)) {
- rdev->qplib_res.prio = prio_map ? true : false;
+ if (!rdev->cc_param.vlan_tx_disable) {
+ rdev->qplib_res.prio = prio_map ? true : false;
+ bnxt_re_update_gid(rdev);
+ }
+ }
+
+ return 0;
+}
+
+int bnxt_re_query_hwrm_dscp2pri(struct bnxt_re_dev *rdev,
+ struct bnxt_re_dscp2pri *d2p, u16 count)
+{
+ struct bnxt_en_dev *en_dev = rdev->en_dev;
+ struct bnxt *bp = netdev_priv(rdev->netdev);
+ struct hwrm_queue_dscp2pri_qcfg_input req = {0};
+ struct hwrm_queue_dscp2pri_qcfg_output resp;
+ struct bnxt_fw_msg fw_msg;
+ struct bnxt_re_dscp2pri *dscp2pri;
+ int i, rc = 0, data_len = 3 * 256; /*FIXME: Hard coding */
+ dma_addr_t dma_handle;
+ u16 entry_cnt = 0;
+ u8 *kmem;
+
+ bnxt_re_init_hwrm_hdr(rdev, (void *)&req,
+ HWRM_QUEUE_DSCP2PRI_QCFG, -1, -1);
+ req.port_id = bp->pf.port_id;
+ kmem = dma_alloc_coherent(&bp->pdev->dev, data_len, &dma_handle,
+ GFP_KERNEL);
+ if (!kmem) {
+ dev_err(rdev_to_dev(rdev),
+ "dma_alloc_coherent failure, length = %u\n",
+ (unsigned)data_len);
+ return -ENOMEM;
+ }
+ req.dest_data_addr = cpu_to_le64(dma_handle);
+ req.dest_data_buffer_size = cpu_to_le16(data_len);
+ memset(&fw_msg, 0, sizeof(fw_msg));
+ bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
+ sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
+ rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+ if (rc)
+ goto out;
+
+ /* Upload the DSCP-MASK-PRI tuple(s) */
+ dscp2pri = (struct bnxt_re_dscp2pri *)kmem;
+ entry_cnt = le16_to_cpu(resp.entry_cnt);
+ for (i = 0; i < entry_cnt && i < count; i++) {
+ d2p[i].dscp = dscp2pri->dscp;
+ d2p[i].mask = dscp2pri->mask;
+ d2p[i].pri = dscp2pri->pri;
+ dscp2pri++;
+ }
+out:
+ dma_free_coherent(&bp->pdev->dev, data_len, kmem, dma_handle);
+ return rc;
+}
+
+int bnxt_re_vlan_tx_disable(struct bnxt_re_dev *rdev)
+{
+ /* Remove the VLAN from the GID entry */
+ if (!rdev->cur_prio_map)
+ return 0;
+
+ rdev->qplib_res.prio = false;
+ return bnxt_re_update_gid(rdev);
+}
+
+int bnxt_re_set_hwrm_dscp2pri(struct bnxt_re_dev *rdev,
+ struct bnxt_re_dscp2pri *d2p, u16 count)
+{
+ struct bnxt_en_dev *en_dev = rdev->en_dev;
+ struct bnxt *bp = netdev_priv(rdev->netdev);
+ struct hwrm_queue_dscp2pri_cfg_input req = {0};
+ struct hwrm_queue_dscp2pri_cfg_output resp;
+ struct bnxt_fw_msg fw_msg;
+ struct bnxt_re_dscp2pri *dscp2pri;
+ int i, rc, data_len = 3 * 256;
+ dma_addr_t dma_handle;
+ u8 *kmem;
+
+ bnxt_re_init_hwrm_hdr(rdev, (void *)&req,
+ HWRM_QUEUE_DSCP2PRI_CFG, -1, -1);
+ req.port_id = bp->pf.port_id;
+ kmem = dma_alloc_coherent(&bp->pdev->dev, data_len, &dma_handle,
+ GFP_KERNEL);
+ if (!kmem) {
+ dev_err(rdev_to_dev(rdev),
+ "dma_alloc_coherent failure, length = %u\n",
+ (unsigned)data_len);
+ return -ENOMEM;
+ }
+ req.src_data_addr = cpu_to_le64(dma_handle);
+
+ /* Download the DSCP-MASK-PRI tuple(s) */
+ dscp2pri = (struct bnxt_re_dscp2pri *)kmem;
+ for (i = 0; i < count; i++) {
+ dscp2pri->dscp = d2p[i].dscp;
+ dscp2pri->mask = d2p[i].mask;
+ dscp2pri->pri = d2p[i].pri;
+ dscp2pri++;
+ }
+
+ req.entry_cnt = cpu_to_le16(count);
+ memset(&fw_msg, 0, sizeof(fw_msg));
+ bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
+ sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
+ rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+ dma_free_coherent(&bp->pdev->dev, data_len, kmem, dma_handle);
+ return rc;
+}
+
+static u8 bnxt_re_get_prio(u8 prio_map)
+{
+ u8 prio = 0;
+
+ for (prio = 0; prio < 8; prio++) {
+ if (prio_map & (1UL << prio))
+ break;
+ }
+
+ return prio;
+}
+
+static int bnxt_re_setup_dscp(struct bnxt_re_dev *rdev, u8 need_init)
+{
+ u8 prio_map = 0, pri;
+ struct bnxt_re_dscp2pri d2p;
+ int rc;
+
+ prio_map = bnxt_re_get_priority_mask(rdev, IEEE_8021QAZ_APP_SEL_DGRAM);
+ if (!prio_map) {
+ dev_dbg(rdev_to_dev(rdev), "no priority to map\n");
+ if (need_init) {
+ rdev->cc_param.mask = 0;
+ rc = bnxt_qplib_init_cc_param(&rdev->qplib_res,
+ &rdev->cc_param);
+ if (rc)
+ dev_warn(rdev_to_dev(rdev),
+ "init cc failed rc = 0x%x\n", rc);
+ }
+ return 0;
+ }
+
+ pri = bnxt_re_get_prio(prio_map);
+
+ rc = bnxt_re_query_hwrm_dscp2pri(rdev, &d2p, 1);
+ if (rc) {
+ dev_warn(rdev_to_dev(rdev), "query dscp config failed\n");
+ return rc;
+ }
- bnxt_re_update_gid(rdev);
+ if (need_init) {
+ rdev->cc_param.alt_vlan_pcp = pri;
+ rdev->cc_param.mask |=
+ CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_ALT_VLAN_PCP;
+ rdev->cc_param.alt_tos_dscp = d2p.dscp;
+ rdev->cc_param.mask |=
+ CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_ALT_TOS_DSCP;
+ rdev->cc_param.tos_dscp = d2p.dscp;
+ rdev->cc_param.mask |=
+ CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TOS_DSCP;
+ rc = bnxt_qplib_init_cc_param(&rdev->qplib_res,
+ &rdev->cc_param);
+ if (rc)
+ dev_warn(rdev_to_dev(rdev), "init cc failed\n");
+ }
+
+ mutex_lock(&rdev->cc_lock);
+ if ((pri == rdev->dscp_prio) && (rdev->cc_param.tos_dscp == d2p.dscp)) {
+ mutex_unlock(&rdev->cc_lock);
+ return 0;
+ }
+ d2p.dscp = rdev->cc_param.tos_dscp;
+ rdev->dscp_prio = pri;
+ d2p.pri = rdev->dscp_prio;
+ mutex_unlock(&rdev->cc_lock);
+ d2p.mask = 0x3F;
+
+ rc = bnxt_re_set_hwrm_dscp2pri(rdev, &d2p, 1);
+ if (rc) {
+ dev_warn(rdev_to_dev(rdev), "no dscp for prio %d\n", d2p.pri);
+ return rc;
}
return 0;
@@ -1044,6 +1232,7 @@ static void bnxt_re_worker(struct work_struct *work)
worker.work);
bnxt_re_setup_qos(rdev);
+ bnxt_re_setup_dscp(rdev, false);
schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000));
}
@@ -1136,6 +1325,10 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
if (rc)
pr_info("RoCE priority not yet configured\n");
+ rc = bnxt_re_setup_dscp(rdev, true);
+ if (rc)
+ pr_info("DSCP init failed, may not be functional.\n");
+
INIT_DELAYED_WORK(&rdev->worker, bnxt_re_worker);
set_bit(BNXT_RE_FLAG_QOS_WORK_REG, &rdev->flags);
schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000));
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
index 2438477..1e7889c 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
@@ -762,3 +762,59 @@ int bnxt_qplib_modify_cc(struct bnxt_qplib_res *res,
(void *)&resp, NULL, 0);
return rc;
}
+
+int bnxt_qplib_init_cc_param(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_cc_param *cc_param)
+{
+ struct bnxt_qplib_rcfw *rcfw = res->rcfw;
+ struct cmdq_query_roce_cc req;
+ struct creq_query_roce_cc_resp resp;
+ struct bnxt_qplib_rcfw_sbuf *sbuf;
+ struct creq_query_roce_cc_resp_sb *sb;
+ u16 cmd_flags = 0;
+ int rc;
+
+ /* Query the parameters from chip */
+ RCFW_CMD_PREP(req, QUERY_CC, cmd_flags);
+ sbuf = bnxt_qplib_rcfw_alloc_sbuf(rcfw, sizeof(*sb));
+ if (!sbuf) {
+ dev_warn(&res->pdev->dev, "init cc no buffer\n");
+ return -ENOMEM;
+ }
+
+ sb = sbuf->sb;
+ req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
+ rc = bnxt_qplib_rcfw_send_message(res->rcfw, (void *)&req,
+ (void *)&resp, (void *)sbuf, 0);
+ if (rc) {
+ dev_warn(&res->pdev->dev, "QUERY_ROCE_CC error\n");
+ goto out;
+ }
+ cc_param->enable = sb->enable_cc & CREQ_QUERY_ROCE_CC_RESP_SB_ENABLE_CC;
+ cc_param->tos_ecn = (sb->tos_dscp_tos_ecn &
+ CREQ_QUERY_ROCE_CC_RESP_SB_TOS_ECN_MASK) >>
+ CREQ_QUERY_ROCE_CC_RESP_SB_TOS_ECN_SFT;
+ cc_param->tos_dscp = (sb->tos_dscp_tos_ecn &
+ CREQ_QUERY_ROCE_CC_RESP_SB_TOS_DSCP_MASK) >>
+ CREQ_QUERY_ROCE_CC_RESP_SB_TOS_DSCP_SFT;
+ cc_param->g = (sb->g & CREQ_QUERY_ROCE_CC_RESP_SB_G_MASK) >>
+ CREQ_QUERY_ROCE_CC_RESP_SB_G_SFT;
+ cc_param->nph_per_state = sb->num_phases_per_state;
+ cc_param->init_cr = le16_to_cpu(sb->init_cr);
+ cc_param->init_tr = le16_to_cpu(sb->init_tr);
+
+ /* There's currently no way to extract these values so we are
+ * initializing them to driver defaults
+ */
+ cc_param->cc_mode = 0;
+ cc_param->inact_th = 0x1388;
+ cc_param->rtt = 0x64;
+ cc_param->tcp_cp = 0;
+ cc_param->mask |= (CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_CC_MODE |
+ CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INACTIVITY_CP |
+ CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_RTT |
+ CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TCP_CP);
+ rc = bnxt_qplib_modify_cc(res, cc_param);
+out:
+ return rc;
+}
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
index 87173701..0d6496f 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
@@ -133,7 +133,9 @@ struct bnxt_qplib_cc_param {
u8 tos_ecn;
u8 tos_dscp;
u16 tcp_cp;
+ u8 vlan_tx_disable;
u32 mask;
+#define BNXT_QPLIB_CC_PARAM_MASK_VLAN_TX_DISABLE 0x4000
};
#define BNXT_QPLIB_ACCESS_LOCAL_WRITE BIT(0)
@@ -183,4 +185,6 @@ int bnxt_qplib_free_fast_reg_page_list(struct bnxt_qplib_res *res,
int bnxt_qplib_map_tc2cos(struct bnxt_qplib_res *res, u16 *cids);
int bnxt_qplib_modify_cc(struct bnxt_qplib_res *res,
struct bnxt_qplib_cc_param *cc_param);
+int bnxt_qplib_init_cc_param(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_cc_param *cc_param);
#endif /* __BNXT_QPLIB_SP_H__*/
--
1.8.3.1
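
For illustration, here is a minimal sketch of how an administrator might
drive the per-port congestion-control entries this patch adds. The configfs
mount point is the standard one, but the bnxt_re directory layout and the
attribute names used below are assumptions made only for this example; the
real attribute names are defined in the new configfs.c and are not shown in
the hunks above.

    # Assumed layout: /sys/kernel/config/bnxt_re/<ibdev>/ports/<port>/cc
    mount -t configfs none /sys/kernel/config
    cd /sys/kernel/config/bnxt_re/bnxt_re0/ports/1/cc

    # Stage new values; attribute names and values are illustrative only
    echo -n 1    > ecn_enable
    echo -n 0x18 > tos_dscp

    # Nothing is sent to the hardware until the staged values are applied
    echo -n 1 > apply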
* Re: [for-next 0/3] Add congestion control capability
[not found] ` <1503402587-24669-1-git-send-email-devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
` (2 preceding siblings ...)
2017-08-22 11:49 ` [for-next 3/3] RDMA/bnxt_re: setup dscp to priority map Devesh Sharma
@ 2017-08-22 12:32 ` Leon Romanovsky
[not found] ` <20170822123246.GX1724-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
3 siblings, 1 reply; 9+ messages in thread
From: Leon Romanovsky @ 2017-08-22 12:32 UTC (permalink / raw)
To: Devesh Sharma; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA
On Tue, Aug 22, 2017 at 07:49:44AM -0400, Devesh Sharma wrote:
> This patch series is to support congestion control capability
> on Broadcom NetXtreme-E 10/25/40/50 RDMA Ethernet Controllers.
>
> The implementation exposes congestion control related parameters
> to administrator using configfs.
>
We have RDMAtool and netlink for configuration now, please use them.
Thanks,
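
For reference, the tool Leon refers to is the rdma utility that ships with
iproute2. The two commands below exist in current iproute2 releases and only
enumerate devices and ports; exposing vendor congestion-control parameters
through it would require new netlink attributes, which is an assumption about
future work rather than something this series or today's tool provides.

    # Inspect RDMA devices and ports with the iproute2 rdma tool
    rdma dev show      # lists RDMA devices, e.g. bnxt_re0
    rdma link show     # lists ports, their state and the attached netdev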
* Re: [for-next 0/3] Add congestion control capability
[not found] ` <20170822123246.GX1724-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2017-08-22 15:11 ` Jason Gunthorpe
[not found] ` <20170822151143.GB1201-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2017-08-22 15:11 UTC (permalink / raw)
To: Leon Romanovsky; +Cc: Devesh Sharma, linux-rdma-u79uwXL29TY76Z2rM5mHXA
On Tue, Aug 22, 2017 at 03:32:46PM +0300, Leon Romanovsky wrote:
> On Tue, Aug 22, 2017 at 07:49:44AM -0400, Devesh Sharma wrote:
> > This patch series is to support congestion control capability
> > on Broadcom NetXtreme-E 10/25/40/50 RDMA Ethernet Controllers.
> >
> > The implementation exposes congestion control related parameters
> > to administrator using configfs.
> >
>
> We have RDMAtool and netlink for configuration now, please use them.
Yes, +1
Jason
* Re: [for-next 0/3] Add congestion control capability
[not found] ` <20170822151143.GB1201-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2017-08-23 15:45 ` Devesh Sharma
[not found] ` <CANjDDBhGoMRZr5vptGRoKZetjNawomXeuHM5XorCu0XG+RzYAg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-08-24 21:25 ` Doug Ledford
1 sibling, 1 reply; 9+ messages in thread
From: Devesh Sharma @ 2017-08-23 15:45 UTC (permalink / raw)
To: Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma
Hi Leon and Jason,
Sure, we can definitely use this tool now or in the near future for
controlling most of the device tunable parameters. I am still
catching up on the older discussion related to this topic.

In the meantime, I wanted to ask about the availability of this tool
in the upcoming standard OS distributions such as SLES/RHEL. Do we
have any timeline for that availability, and into which version should
we push this tool?
-Regards
Devesh
On Tue, Aug 22, 2017 at 8:41 PM, Jason Gunthorpe
<jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org> wrote:
> On Tue, Aug 22, 2017 at 03:32:46PM +0300, Leon Romanovsky wrote:
>> On Tue, Aug 22, 2017 at 07:49:44AM -0400, Devesh Sharma wrote:
>> > This patch series is to support congestion control capability
>> > on Broadcom NetXtreme-E 10/25/40/50 RDMA Ethernet Controllers.
>> >
>> > The implementation exposes congestion control related parameters
>> > to administrator using configfs.
>> >
>>
>> We have RDMAtool and netlink for configuration now, please use them.
>
> Yes, +1
>
> Jason
* Re: [for-next 0/3] Add congestion control capability
[not found] ` <CANjDDBhGoMRZr5vptGRoKZetjNawomXeuHM5XorCu0XG+RzYAg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-08-23 16:26 ` Leon Romanovsky
0 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2017-08-23 16:26 UTC (permalink / raw)
To: Devesh Sharma; +Cc: Jason Gunthorpe, linux-rdma
On Wed, Aug 23, 2017 at 09:15:24PM +0530, Devesh Sharma wrote:
> Hi Leon and Jason,
>
> Sure, we can definitely use this tool now or in the near future for
> controlling most of the device tunable parameters. I am still
> catching up on the older discussion related to this topic.
>
> In the meantime, I wanted to ask about the availability of this tool
> in the upcoming standard OS distributions such as SLES/RHEL. Do we
> have any timeline for that availability, and into which version should
> we push this tool?
It is part of the iproute2 package and will be available in all OSes once
they upgrade it. I'm pretty confident that it will be available at
the same time as this kernel version, because these tools are aligned
with the kernel releases.

Just as your feature is not going to be backported by the distros, they
won't bring a new iproute2 package to an old kernel.
Thanks
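
As a quick sanity check, the commands below show whether an installed
iproute2 already carries the rdma utility; the package query shown is for
RPM-based distributions such as RHEL/SLES and is only an illustration.

    # Does the installed iproute2 already ship the rdma tool?
    command -v rdma && rdma -V        # prints the iproute2 version if present
    rpm -qf "$(command -v rdma)"      # which package provides it (RPM-based distros)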
* Re: [for-next 0/3] Add congestion control capability
[not found] ` <20170822151143.GB1201-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2017-08-23 15:45 ` Devesh Sharma
@ 2017-08-24 21:25 ` Doug Ledford
1 sibling, 0 replies; 9+ messages in thread
From: Doug Ledford @ 2017-08-24 21:25 UTC (permalink / raw)
To: Jason Gunthorpe, Leon Romanovsky
Cc: Devesh Sharma, linux-rdma-u79uwXL29TY76Z2rM5mHXA
On Tue, 2017-08-22 at 09:11 -0600, Jason Gunthorpe wrote:
> On Tue, Aug 22, 2017 at 03:32:46PM +0300, Leon Romanovsky wrote:
> > On Tue, Aug 22, 2017 at 07:49:44AM -0400, Devesh Sharma wrote:
> > > This patch series is to support congestion control capability
> > > on Broadcom NetXtreme-E 10/25/40/50 RDMA Ethernet Controllers.
> > >
> > > The implementation exposes congestion control related parameters
> > > to administrator using configfs.
> > >
> >
> > We have RDMAtool and netlink for configuration now, please use them.
>
> Yes, +1
Indeed. I'm putting this version out to pasture as "Changes
Requested".
--
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD
Thread overview: 9+ messages
2017-08-22 11:49 [for-next 0/3] Add congestion control capability Devesh Sharma
[not found] ` <1503402587-24669-1-git-send-email-devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2017-08-22 11:49 ` [for-next 1/3] RDMA/bnxt_re: expose cc parameters through configfs Devesh Sharma
2017-08-22 11:49 ` [for-next 2/3] RDMA/bnxt: update bnxt_hsi to hold dscp2pri declaration Devesh Sharma
2017-08-22 11:49 ` [for-next 3/3] RDMA/bnxt_re: setup dscp to priority map Devesh Sharma
2017-08-22 12:32 ` [for-next 0/3] Add congestion control capability Leon Romanovsky
[not found] ` <20170822123246.GX1724-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2017-08-22 15:11 ` Jason Gunthorpe
[not found] ` <20170822151143.GB1201-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2017-08-23 15:45 ` Devesh Sharma
[not found] ` <CANjDDBhGoMRZr5vptGRoKZetjNawomXeuHM5XorCu0XG+RzYAg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-08-23 16:26 ` Leon Romanovsky
2017-08-24 21:25 ` Doug Ledford