* Re: [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06
@ 2017-11-14 8:53 ` Liuyixian (Eason)
2017-11-14 9:11 ` Leon Romanovsky
1 sibling, 1 reply; 14+ messages in thread
From: Liuyixian (Eason) @ 2017-11-14 8:53 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
Sorry, the cover letter was lost due to an unknown problem.
I will resend the patch set.
On 2017/11/14 17:26, Yixian Liu wrote:
> To prepare for hip08's eq processing and the data structure
> changes it may require, this patch refactors the eq code of
> hip06.
>
> All hip06 eq processing code is moved from hns_roce_eq.c into
> hns_roce_hw_v1.c, and likewise from hns_roce_eq.h into
> hns_roce_hw_v1.h. With these changes, it will be convenient
> to add eq support for later hardware versions.
>
> Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Reviewed-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Reviewed-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> ---
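[Inline note: the key enabler for this refactor is the pair of function pointers added to struct hns_roce_hw further down (init_eq/cleanup_eq), which lets the common code stay hardware-agnostic. A minimal userspace sketch of that dispatch pattern follows; the struct and function names here are illustrative stand-ins, not the driver's real definitions.]

```c
#include <stddef.h>

/* Sketch of the ops-table dispatch this patch introduces: common code
 * calls through function pointers, and each hardware revision supplies
 * its own EQ implementation. Stub bodies are illustrative only. */
struct hns_roce_dev_sketch;

struct hns_roce_hw_sketch {
	int (*init_eq)(struct hns_roce_dev_sketch *dev);
	void (*cleanup_eq)(struct hns_roce_dev_sketch *dev);
};

struct hns_roce_dev_sketch {
	const struct hns_roce_hw_sketch *hw;
	int eq_ready; /* stand-in for real EQ state */
};

static int v1_init_eq(struct hns_roce_dev_sketch *dev)
{
	dev->eq_ready = 1; /* hw v1 would allocate buffers and program EQC */
	return 0;
}

static void v1_cleanup_eq(struct hns_roce_dev_sketch *dev)
{
	dev->eq_ready = 0; /* hw v1 would free IRQs and EQ buffers */
}

static const struct hns_roce_hw_sketch v1_hw = {
	.init_eq = v1_init_eq,
	.cleanup_eq = v1_cleanup_eq,
};

/* Common code stays hardware-agnostic: it only sees the ops table. */
int common_init(struct hns_roce_dev_sketch *dev)
{
	return dev->hw->init_eq(dev);
}
```

With this shape, supporting a later hardware version only means supplying a second ops table; the common path is untouched.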
> drivers/infiniband/hw/hns/Makefile | 2 +-
> drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
> drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
> drivers/infiniband/hw/hns/hns_roce_device.h | 57 ++-
> drivers/infiniband/hw/hns/hns_roce_eq.c | 759 ----------------------------
> drivers/infiniband/hw/hns/hns_roce_eq.h | 134 -----
> drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++++++++++++-
> drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
> drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
> drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
> 10 files changed, 843 insertions(+), 930 deletions(-)
> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
>
> diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
> index ff426a6..97bf2cd 100644
> --- a/drivers/infiniband/hw/hns/Makefile
> +++ b/drivers/infiniband/hw/hns/Makefile
> @@ -5,7 +5,7 @@
> ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
>
> obj-$(CONFIG_INFINIBAND_HNS) += hns-roce.o
> -hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_eq.o hns_roce_pd.o \
> +hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
> hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
> hns_roce_cq.o hns_roce_alloc.o
> obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o
> diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
> index 1085cb2..9ebe839 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
> @@ -103,6 +103,7 @@ void hns_roce_cmd_event(struct hns_roce_dev *hr_dev, u16 token, u8 status,
> context->out_param = out_param;
> complete(&context->done);
> }
> +EXPORT_SYMBOL_GPL(hns_roce_cmd_event);
>
> /* this should be called with "use_events" */
> static int __hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
> diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
> index 2111b57..bccc9b5 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_cq.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
> @@ -196,15 +196,14 @@ void hns_roce_free_cq(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
> if (ret)
> dev_err(dev, "HW2SW_CQ failed (%d) for CQN %06lx\n", ret,
> hr_cq->cqn);
> - if (hr_dev->eq_table.eq) {
> - /* Waiting interrupt process procedure carried out */
> - synchronize_irq(hr_dev->eq_table.eq[hr_cq->vector].irq);
> -
> - /* wait for all interrupt processed */
> - if (atomic_dec_and_test(&hr_cq->refcount))
> - complete(&hr_cq->free);
> - wait_for_completion(&hr_cq->free);
> - }
> +
> +	/* Wait for the interrupt handler to run to completion */
> +	synchronize_irq(hr_dev->eq_table.eq[hr_cq->vector].irq);
> +
> +	/* Wait until all references from event handling are dropped */
> + if (atomic_dec_and_test(&hr_cq->refcount))
> + complete(&hr_cq->free);
> + wait_for_completion(&hr_cq->free);
>
> spin_lock_irq(&cq_table->lock);
> radix_tree_delete(&cq_table->tree, hr_cq->cqn);
> @@ -460,6 +459,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn)
> ++cq->arm_sn;
> cq->comp(cq);
> }
> +EXPORT_SYMBOL_GPL(hns_roce_cq_completion);
>
> void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
> {
> @@ -482,6 +482,7 @@ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
> if (atomic_dec_and_test(&cq->refcount))
> complete(&cq->free);
> }
> +EXPORT_SYMBOL_GPL(hns_roce_cq_event);
>
> int hns_roce_init_cq_table(struct hns_roce_dev *hr_dev)
> {
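[Inline note: the hns_roce_free_cq() hunk above makes the teardown wait unconditional. The underlying pattern — a refcount plus a completion, so destroy blocks until the last event reference drops — can be sketched in userspace C. Here stdatomic stands in for the kernel's atomic_t and a plain flag stands in for struct completion; this is a single-threaded illustration, not the driver's code.]

```c
#include <stdatomic.h>

/* Userspace sketch of the refcount + completion teardown pattern:
 * the destroy path drops the creator's reference and must not proceed
 * until every outstanding event reference has also been released. */
struct cq_sketch {
	atomic_int refcount;
	int freed; /* stands in for complete(&hr_cq->free) firing */
};

void cq_get(struct cq_sketch *cq)
{
	atomic_fetch_add(&cq->refcount, 1);
}

void cq_put(struct cq_sketch *cq)
{
	/* fetch_sub returns the old value: old == 1 means last reference */
	if (atomic_fetch_sub(&cq->refcount, 1) == 1)
		cq->freed = 1; /* last reference gone: signal the waiter */
}

/* Destroy path: drop our own ref; in the kernel,
 * wait_for_completion(&hr_cq->free) would then block until freed. */
void cq_destroy(struct cq_sketch *cq)
{
	cq_put(cq);
}
```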
> diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
> index 01d3d69..9aa9e94 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_device.h
> +++ b/drivers/infiniband/hw/hns/hns_roce_device.h
> @@ -62,12 +62,16 @@
> #define HNS_ROCE_CQE_WCMD_EMPTY_BIT 0x2
> #define HNS_ROCE_MIN_CQE_CNT 16
>
> -#define HNS_ROCE_MAX_IRQ_NUM 34
> +#define HNS_ROCE_MAX_IRQ_NUM 128
>
> -#define HNS_ROCE_COMP_VEC_NUM 32
> +#define EQ_ENABLE 1
> +#define EQ_DISABLE 0
>
> -#define HNS_ROCE_AEQE_VEC_NUM 1
> -#define HNS_ROCE_AEQE_OF_VEC_NUM 1
> +#define HNS_ROCE_CEQ 0
> +#define HNS_ROCE_AEQ 1
> +
> +#define HNS_ROCE_CEQ_ENTRY_SIZE 0x4
> +#define HNS_ROCE_AEQ_ENTRY_SIZE 0x10
>
> /* 4G/4K = 1M */
> #define HNS_ROCE_SL_SHIFT 28
> @@ -485,6 +489,45 @@ struct hns_roce_ib_iboe {
> u8 phy_port[HNS_ROCE_MAX_PORTS];
> };
>
> +enum {
> + HNS_ROCE_EQ_STAT_INVALID = 0,
> + HNS_ROCE_EQ_STAT_VALID = 2,
> +};
> +
> +struct hns_roce_ceqe {
> + u32 comp;
> +};
> +
> +struct hns_roce_aeqe {
> + u32 asyn;
> + union {
> + struct {
> + u32 qp;
> + u32 rsv0;
> + u32 rsv1;
> + } qp_event;
> +
> + struct {
> + u32 cq;
> + u32 rsv0;
> + u32 rsv1;
> + } cq_event;
> +
> + struct {
> + u32 ceqe;
> + u32 rsv0;
> + u32 rsv1;
> + } ce_event;
> +
> + struct {
> + __le64 out_param;
> + __le16 token;
> + u8 status;
> + u8 rsv0;
> + } __packed cmd;
> + } event;
> +};
> +
> struct hns_roce_eq {
> struct hns_roce_dev *hr_dev;
> void __iomem *doorbell;
> @@ -502,7 +545,7 @@ struct hns_roce_eq {
>
> struct hns_roce_eq_table {
> struct hns_roce_eq *eq;
> - void __iomem **eqc_base;
> + void __iomem **eqc_base; /* only for hw v1 */
> };
>
> struct hns_roce_caps {
> @@ -550,7 +593,7 @@ struct hns_roce_caps {
> u32 pbl_buf_pg_sz;
> u32 pbl_hop_num;
> int aeqe_depth;
> - int ceqe_depth[HNS_ROCE_COMP_VEC_NUM];
> + int ceqe_depth;
> enum ib_mtu max_mtu;
> u32 qpc_bt_num;
> u32 srqc_bt_num;
> @@ -623,6 +666,8 @@ struct hns_roce_hw {
> int (*dereg_mr)(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr);
> int (*destroy_cq)(struct ib_cq *ibcq);
> int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period);
> + int (*init_eq)(struct hns_roce_dev *hr_dev);
> + void (*cleanup_eq)(struct hns_roce_dev *hr_dev);
> };
>
> struct hns_roce_dev {
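[Inline note: the AEQE words in the union above are decoded with the driver's roce_get_field()/roce_get_bit() mask-and-shift helpers from hns_roce_common.h. A self-contained approximation follows; the helper bodies and field macros are reconstructed from the patch for illustration only.]

```c
/* Approximation of the roce_get_field()/roce_get_bit() accessors used to
 * decode AEQE words: a field is described by a mask M and a shift S. */
unsigned int roce_field_get(unsigned int val, unsigned int mask,
			    unsigned int shift)
{
	return (val & mask) >> shift;
}

unsigned int roce_bit_get(unsigned int val, unsigned int bit)
{
	return (val >> bit) & 1u;
}

/* Field definitions mirroring the removed hns_roce_eq.h: the event type
 * is an 8-bit field at bit 16 of aeqe->asyn, the owner bit is bit 31. */
#define AEQE_EVENT_TYPE_S 16
#define AEQE_EVENT_TYPE_M (((1u << 8) - 1) << AEQE_EVENT_TYPE_S)
#define AEQE_OWNER_S      31
```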
> diff --git a/drivers/infiniband/hw/hns/hns_roce_eq.c b/drivers/infiniband/hw/hns/hns_roce_eq.c
> deleted file mode 100644
> index d184431..0000000
> --- a/drivers/infiniband/hw/hns/hns_roce_eq.c
> +++ /dev/null
> @@ -1,759 +0,0 @@
> -/*
> - * Copyright (c) 2016 Hisilicon Limited.
> - *
> - * This software is available to you under a choice of one of two
> - * licenses. You may choose to be licensed under the terms of the GNU
> - * General Public License (GPL) Version 2, available from the file
> - * COPYING in the main directory of this source tree, or the
> - * OpenIB.org BSD license below:
> - *
> - * Redistribution and use in source and binary forms, with or
> - * without modification, are permitted provided that the following
> - * conditions are met:
> - *
> - * - Redistributions of source code must retain the above
> - * copyright notice, this list of conditions and the following
> - * disclaimer.
> - *
> - * - Redistributions in binary form must reproduce the above
> - * copyright notice, this list of conditions and the following
> - * disclaimer in the documentation and/or other materials
> - * provided with the distribution.
> - *
> - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> - * SOFTWARE.
> - */
> -
> -#include <linux/platform_device.h>
> -#include <linux/interrupt.h>
> -#include "hns_roce_common.h"
> -#include "hns_roce_device.h"
> -#include "hns_roce_eq.h"
> -
> -static void eq_set_cons_index(struct hns_roce_eq *eq, int req_not)
> -{
> - roce_raw_write((eq->cons_index & CONS_INDEX_MASK) |
> - (req_not << eq->log_entries), eq->doorbell);
> - /* Memory barrier */
> - mb();
> -}
> -
> -static struct hns_roce_aeqe *get_aeqe(struct hns_roce_eq *eq, u32 entry)
> -{
> - unsigned long off = (entry & (eq->entries - 1)) *
> - HNS_ROCE_AEQ_ENTRY_SIZE;
> -
> - return (struct hns_roce_aeqe *)((u8 *)
> - (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
> - off % HNS_ROCE_BA_SIZE);
> -}
> -
> -static struct hns_roce_aeqe *next_aeqe_sw(struct hns_roce_eq *eq)
> -{
> - struct hns_roce_aeqe *aeqe = get_aeqe(eq, eq->cons_index);
> -
> - return (roce_get_bit(aeqe->asyn, HNS_ROCE_AEQE_U32_4_OWNER_S) ^
> - !!(eq->cons_index & eq->entries)) ? aeqe : NULL;
> -}
> -
> -static void hns_roce_wq_catas_err_handle(struct hns_roce_dev *hr_dev,
> - struct hns_roce_aeqe *aeqe, int qpn)
> -{
> - struct device *dev = &hr_dev->pdev->dev;
> -
> - dev_warn(dev, "Local Work Queue Catastrophic Error.\n");
> - switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
> - HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
> - case HNS_ROCE_LWQCE_QPC_ERROR:
> - dev_warn(dev, "QP %d, QPC error.\n", qpn);
> - break;
> - case HNS_ROCE_LWQCE_MTU_ERROR:
> - dev_warn(dev, "QP %d, MTU error.\n", qpn);
> - break;
> - case HNS_ROCE_LWQCE_WQE_BA_ADDR_ERROR:
> - dev_warn(dev, "QP %d, WQE BA addr error.\n", qpn);
> - break;
> - case HNS_ROCE_LWQCE_WQE_ADDR_ERROR:
> - dev_warn(dev, "QP %d, WQE addr error.\n", qpn);
> - break;
> - case HNS_ROCE_LWQCE_SQ_WQE_SHIFT_ERROR:
> - dev_warn(dev, "QP %d, WQE shift error\n", qpn);
> - break;
> - case HNS_ROCE_LWQCE_SL_ERROR:
> - dev_warn(dev, "QP %d, SL error.\n", qpn);
> - break;
> - case HNS_ROCE_LWQCE_PORT_ERROR:
> - dev_warn(dev, "QP %d, port error.\n", qpn);
> - break;
> - default:
> - break;
> - }
> -}
> -
> -static void hns_roce_local_wq_access_err_handle(struct hns_roce_dev *hr_dev,
> - struct hns_roce_aeqe *aeqe,
> - int qpn)
> -{
> - struct device *dev = &hr_dev->pdev->dev;
> -
> - dev_warn(dev, "Local Access Violation Work Queue Error.\n");
> - switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
> - HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
> - case HNS_ROCE_LAVWQE_R_KEY_VIOLATION:
> - dev_warn(dev, "QP %d, R_key violation.\n", qpn);
> - break;
> - case HNS_ROCE_LAVWQE_LENGTH_ERROR:
> - dev_warn(dev, "QP %d, length error.\n", qpn);
> - break;
> - case HNS_ROCE_LAVWQE_VA_ERROR:
> - dev_warn(dev, "QP %d, VA error.\n", qpn);
> - break;
> - case HNS_ROCE_LAVWQE_PD_ERROR:
> - dev_err(dev, "QP %d, PD error.\n", qpn);
> - break;
> - case HNS_ROCE_LAVWQE_RW_ACC_ERROR:
> - dev_warn(dev, "QP %d, rw acc error.\n", qpn);
> - break;
> - case HNS_ROCE_LAVWQE_KEY_STATE_ERROR:
> - dev_warn(dev, "QP %d, key state error.\n", qpn);
> - break;
> - case HNS_ROCE_LAVWQE_MR_OPERATION_ERROR:
> - dev_warn(dev, "QP %d, MR operation error.\n", qpn);
> - break;
> - default:
> - break;
> - }
> -}
> -
> -static void hns_roce_qp_err_handle(struct hns_roce_dev *hr_dev,
> - struct hns_roce_aeqe *aeqe,
> - int event_type)
> -{
> - struct device *dev = &hr_dev->pdev->dev;
> - int phy_port;
> - int qpn;
> -
> - qpn = roce_get_field(aeqe->event.qp_event.qp,
> - HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M,
> - HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S);
> - phy_port = roce_get_field(aeqe->event.qp_event.qp,
> - HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M,
> - HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S);
> - if (qpn <= 1)
> - qpn = HNS_ROCE_MAX_PORTS * qpn + phy_port;
> -
> - switch (event_type) {
> - case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
> - dev_warn(dev, "Invalid Req Local Work Queue Error.\n"
> - "QP %d, phy_port %d.\n", qpn, phy_port);
> - break;
> - case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
> - hns_roce_wq_catas_err_handle(hr_dev, aeqe, qpn);
> - break;
> - case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
> - hns_roce_local_wq_access_err_handle(hr_dev, aeqe, qpn);
> - break;
> - default:
> - break;
> - }
> -
> - hns_roce_qp_event(hr_dev, qpn, event_type);
> -}
> -
> -static void hns_roce_cq_err_handle(struct hns_roce_dev *hr_dev,
> - struct hns_roce_aeqe *aeqe,
> - int event_type)
> -{
> - struct device *dev = &hr_dev->pdev->dev;
> - u32 cqn;
> -
> - cqn = le32_to_cpu(roce_get_field(aeqe->event.cq_event.cq,
> - HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M,
> - HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S));
> -
> - switch (event_type) {
> - case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
> - dev_warn(dev, "CQ 0x%x access err.\n", cqn);
> - break;
> - case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
> - dev_warn(dev, "CQ 0x%x overflow\n", cqn);
> - break;
> - case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
> - dev_warn(dev, "CQ 0x%x ID invalid.\n", cqn);
> - break;
> - default:
> - break;
> - }
> -
> - hns_roce_cq_event(hr_dev, cqn, event_type);
> -}
> -
> -static void hns_roce_db_overflow_handle(struct hns_roce_dev *hr_dev,
> - struct hns_roce_aeqe *aeqe)
> -{
> - struct device *dev = &hr_dev->pdev->dev;
> -
> - switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
> - HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
> - case HNS_ROCE_DB_SUBTYPE_SDB_OVF:
> - dev_warn(dev, "SDB overflow.\n");
> - break;
> - case HNS_ROCE_DB_SUBTYPE_SDB_ALM_OVF:
> - dev_warn(dev, "SDB almost overflow.\n");
> - break;
> - case HNS_ROCE_DB_SUBTYPE_SDB_ALM_EMP:
> - dev_warn(dev, "SDB almost empty.\n");
> - break;
> - case HNS_ROCE_DB_SUBTYPE_ODB_OVF:
> - dev_warn(dev, "ODB overflow.\n");
> - break;
> - case HNS_ROCE_DB_SUBTYPE_ODB_ALM_OVF:
> - dev_warn(dev, "ODB almost overflow.\n");
> - break;
> - case HNS_ROCE_DB_SUBTYPE_ODB_ALM_EMP:
> - dev_warn(dev, "SDB almost empty.\n");
> - break;
> - default:
> - break;
> - }
> -}
> -
> -static int hns_roce_aeq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
> -{
> - struct device *dev = &hr_dev->pdev->dev;
> - struct hns_roce_aeqe *aeqe;
> - int aeqes_found = 0;
> - int event_type;
> -
> - while ((aeqe = next_aeqe_sw(eq))) {
> - dev_dbg(dev, "aeqe = %p, aeqe->asyn.event_type = 0x%lx\n", aeqe,
> - roce_get_field(aeqe->asyn,
> - HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
> - HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S));
> - /* Memory barrier */
> - rmb();
> -
> - event_type = roce_get_field(aeqe->asyn,
> - HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
> - HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S);
> - switch (event_type) {
> - case HNS_ROCE_EVENT_TYPE_PATH_MIG:
> - dev_warn(dev, "PATH MIG not supported\n");
> - break;
> - case HNS_ROCE_EVENT_TYPE_COMM_EST:
> - dev_warn(dev, "COMMUNICATION established\n");
> - break;
> - case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
> - dev_warn(dev, "SQ DRAINED not supported\n");
> - break;
> - case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
> - dev_warn(dev, "PATH MIG failed\n");
> - break;
> - case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
> - case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
> - case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
> - hns_roce_qp_err_handle(hr_dev, aeqe, event_type);
> - break;
> - case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
> - case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
> - case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
> - dev_warn(dev, "SRQ not support!\n");
> - break;
> - case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
> - case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
> - case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
> - hns_roce_cq_err_handle(hr_dev, aeqe, event_type);
> - break;
> - case HNS_ROCE_EVENT_TYPE_PORT_CHANGE:
> - dev_warn(dev, "port change.\n");
> - break;
> - case HNS_ROCE_EVENT_TYPE_MB:
> - hns_roce_cmd_event(hr_dev,
> - le16_to_cpu(aeqe->event.cmd.token),
> - aeqe->event.cmd.status,
> - le64_to_cpu(aeqe->event.cmd.out_param
> - ));
> - break;
> - case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
> - hns_roce_db_overflow_handle(hr_dev, aeqe);
> - break;
> - case HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW:
> - dev_warn(dev, "CEQ 0x%lx overflow.\n",
> - roce_get_field(aeqe->event.ce_event.ceqe,
> - HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M,
> - HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S));
> - break;
> - default:
> - dev_warn(dev, "Unhandled event %d on EQ %d at index %u\n",
> - event_type, eq->eqn, eq->cons_index);
> - break;
> - }
> -
> - eq->cons_index++;
> - aeqes_found = 1;
> -
> - if (eq->cons_index > 2 * hr_dev->caps.aeqe_depth - 1) {
> - dev_warn(dev, "cons_index overflow, set back to zero\n"
> - );
> - eq->cons_index = 0;
> - }
> - }
> -
> - eq_set_cons_index(eq, 0);
> -
> - return aeqes_found;
> -}
> -
> -static struct hns_roce_ceqe *get_ceqe(struct hns_roce_eq *eq, u32 entry)
> -{
> - unsigned long off = (entry & (eq->entries - 1)) *
> - HNS_ROCE_CEQ_ENTRY_SIZE;
> -
> - return (struct hns_roce_ceqe *)((u8 *)
> - (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
> - off % HNS_ROCE_BA_SIZE);
> -}
> -
> -static struct hns_roce_ceqe *next_ceqe_sw(struct hns_roce_eq *eq)
> -{
> - struct hns_roce_ceqe *ceqe = get_ceqe(eq, eq->cons_index);
> -
> - return (!!(roce_get_bit(ceqe->ceqe.comp,
> - HNS_ROCE_CEQE_CEQE_COMP_OWNER_S))) ^
> - (!!(eq->cons_index & eq->entries)) ? ceqe : NULL;
> -}
> -
> -static int hns_roce_ceq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
> -{
> - struct hns_roce_ceqe *ceqe;
> - int ceqes_found = 0;
> - u32 cqn;
> -
> - while ((ceqe = next_ceqe_sw(eq))) {
> - /* Memory barrier */
> - rmb();
> - cqn = roce_get_field(ceqe->ceqe.comp,
> - HNS_ROCE_CEQE_CEQE_COMP_CQN_M,
> - HNS_ROCE_CEQE_CEQE_COMP_CQN_S);
> - hns_roce_cq_completion(hr_dev, cqn);
> -
> - ++eq->cons_index;
> - ceqes_found = 1;
> -
> - if (eq->cons_index > 2 * hr_dev->caps.ceqe_depth[eq->eqn] - 1) {
> - dev_warn(&eq->hr_dev->pdev->dev,
> - "cons_index overflow, set back to zero\n");
> - eq->cons_index = 0;
> - }
> - }
> -
> - eq_set_cons_index(eq, 0);
> -
> - return ceqes_found;
> -}
> -
> -static int hns_roce_aeq_ovf_int(struct hns_roce_dev *hr_dev,
> - struct hns_roce_eq *eq)
> -{
> - struct device *dev = &eq->hr_dev->pdev->dev;
> - int eqovf_found = 0;
> - u32 caepaemask_val;
> - u32 cealmovf_val;
> - u32 caepaest_val;
> - u32 aeshift_val;
> - u32 ceshift_val;
> - u32 cemask_val;
> - int i = 0;
> -
> - /**
> - * AEQ overflow ECC mult bit err CEQ overflow alarm
> - * must clear interrupt, mask irq, clear irq, cancel mask operation
> - */
> - aeshift_val = roce_read(hr_dev, ROCEE_CAEP_AEQC_AEQE_SHIFT_REG);
> -
> - if (roce_get_bit(aeshift_val,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQ_ALM_OVF_INT_ST_S) == 1) {
> - dev_warn(dev, "AEQ overflow!\n");
> -
> - /* Set mask */
> - caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
> - roce_set_bit(caepaemask_val,
> - ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
> - HNS_ROCE_INT_MASK_ENABLE);
> - roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
> -
> - /* Clear int state(INT_WC : write 1 clear) */
> - caepaest_val = roce_read(hr_dev, ROCEE_CAEP_AE_ST_REG);
> - roce_set_bit(caepaest_val,
> - ROCEE_CAEP_AE_ST_CAEP_AEQ_ALM_OVF_S, 1);
> - roce_write(hr_dev, ROCEE_CAEP_AE_ST_REG, caepaest_val);
> -
> - /* Clear mask */
> - caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
> - roce_set_bit(caepaemask_val,
> - ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
> - HNS_ROCE_INT_MASK_DISABLE);
> - roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
> - }
> -
> - /* CEQ almost overflow */
> - for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
> - ceshift_val = roce_read(hr_dev, ROCEE_CAEP_CEQC_SHIFT_0_REG +
> - i * CEQ_REG_OFFSET);
> -
> - if (roce_get_bit(ceshift_val,
> - ROCEE_CAEP_CEQC_SHIFT_CAEP_CEQ_ALM_OVF_INT_ST_S) == 1) {
> - dev_warn(dev, "CEQ[%d] almost overflow!\n", i);
> - eqovf_found++;
> -
> - /* Set mask */
> - cemask_val = roce_read(hr_dev,
> - ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> - i * CEQ_REG_OFFSET);
> - roce_set_bit(cemask_val,
> - ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
> - HNS_ROCE_INT_MASK_ENABLE);
> - roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> - i * CEQ_REG_OFFSET, cemask_val);
> -
> - /* Clear int state(INT_WC : write 1 clear) */
> - cealmovf_val = roce_read(hr_dev,
> - ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
> - i * CEQ_REG_OFFSET);
> - roce_set_bit(cealmovf_val,
> - ROCEE_CAEP_CEQ_ALM_OVF_CAEP_CEQ_ALM_OVF_S,
> - 1);
> - roce_write(hr_dev, ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
> - i * CEQ_REG_OFFSET, cealmovf_val);
> -
> - /* Clear mask */
> - cemask_val = roce_read(hr_dev,
> - ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> - i * CEQ_REG_OFFSET);
> - roce_set_bit(cemask_val,
> - ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
> - HNS_ROCE_INT_MASK_DISABLE);
> - roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> - i * CEQ_REG_OFFSET, cemask_val);
> - }
> - }
> -
> - /* ECC multi-bit error alarm */
> - dev_warn(dev, "ECC UCERR ALARM: 0x%x, 0x%x, 0x%x\n",
> - roce_read(hr_dev, ROCEE_ECC_UCERR_ALM0_REG),
> - roce_read(hr_dev, ROCEE_ECC_UCERR_ALM1_REG),
> - roce_read(hr_dev, ROCEE_ECC_UCERR_ALM2_REG));
> -
> - dev_warn(dev, "ECC CERR ALARM: 0x%x, 0x%x, 0x%x\n",
> - roce_read(hr_dev, ROCEE_ECC_CERR_ALM0_REG),
> - roce_read(hr_dev, ROCEE_ECC_CERR_ALM1_REG),
> - roce_read(hr_dev, ROCEE_ECC_CERR_ALM2_REG));
> -
> - return eqovf_found;
> -}
> -
> -static int hns_roce_eq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
> -{
> - int eqes_found = 0;
> -
> - if (likely(eq->type_flag == HNS_ROCE_CEQ))
> - /* CEQ irq routine, CEQ is pulse irq, not clear */
> - eqes_found = hns_roce_ceq_int(hr_dev, eq);
> - else if (likely(eq->type_flag == HNS_ROCE_AEQ))
> - /* AEQ irq routine, AEQ is pulse irq, not clear */
> - eqes_found = hns_roce_aeq_int(hr_dev, eq);
> - else
> - /* AEQ queue overflow irq */
> - eqes_found = hns_roce_aeq_ovf_int(hr_dev, eq);
> -
> - return eqes_found;
> -}
> -
> -static irqreturn_t hns_roce_msi_x_interrupt(int irq, void *eq_ptr)
> -{
> - int int_work = 0;
> - struct hns_roce_eq *eq = eq_ptr;
> - struct hns_roce_dev *hr_dev = eq->hr_dev;
> -
> - int_work = hns_roce_eq_int(hr_dev, eq);
> -
> - return IRQ_RETVAL(int_work);
> -}
> -
> -static void hns_roce_enable_eq(struct hns_roce_dev *hr_dev, int eq_num,
> - int enable_flag)
> -{
> - void __iomem *eqc = hr_dev->eq_table.eqc_base[eq_num];
> - u32 val;
> -
> - val = readl(eqc);
> -
> - if (enable_flag)
> - roce_set_field(val,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
> - HNS_ROCE_EQ_STAT_VALID);
> - else
> - roce_set_field(val,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
> - HNS_ROCE_EQ_STAT_INVALID);
> - writel(val, eqc);
> -}
> -
> -static int hns_roce_create_eq(struct hns_roce_dev *hr_dev,
> - struct hns_roce_eq *eq)
> -{
> - void __iomem *eqc = hr_dev->eq_table.eqc_base[eq->eqn];
> - struct device *dev = &hr_dev->pdev->dev;
> - dma_addr_t tmp_dma_addr;
> - u32 eqconsindx_val = 0;
> - u32 eqcuridx_val = 0;
> - u32 eqshift_val = 0;
> - int num_bas = 0;
> - int ret;
> - int i;
> -
> - num_bas = (PAGE_ALIGN(eq->entries * eq->eqe_size) +
> - HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
> -
> - if ((eq->entries * eq->eqe_size) > HNS_ROCE_BA_SIZE) {
> - dev_err(dev, "[error]eq buf %d gt ba size(%d) need bas=%d\n",
> - (eq->entries * eq->eqe_size), HNS_ROCE_BA_SIZE,
> - num_bas);
> - return -EINVAL;
> - }
> -
> - eq->buf_list = kcalloc(num_bas, sizeof(*eq->buf_list), GFP_KERNEL);
> - if (!eq->buf_list)
> - return -ENOMEM;
> -
> - for (i = 0; i < num_bas; ++i) {
> - eq->buf_list[i].buf = dma_alloc_coherent(dev, HNS_ROCE_BA_SIZE,
> - &tmp_dma_addr,
> - GFP_KERNEL);
> - if (!eq->buf_list[i].buf) {
> - ret = -ENOMEM;
> - goto err_out_free_pages;
> - }
> -
> - eq->buf_list[i].map = tmp_dma_addr;
> - memset(eq->buf_list[i].buf, 0, HNS_ROCE_BA_SIZE);
> - }
> - eq->cons_index = 0;
> - roce_set_field(eqshift_val,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
> - HNS_ROCE_EQ_STAT_INVALID);
> - roce_set_field(eqshift_val,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_M,
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_S,
> - eq->log_entries);
> - writel(eqshift_val, eqc);
> -
> - /* Configure eq extended address 12~44bit */
> - writel((u32)(eq->buf_list[0].map >> 12), eqc + 4);
> -
> - /*
> - * Configure eq extended address 45~49 bit.
> - * 44 = 32 + 12, When evaluating addr to hardware, shift 12 because of
> - * using 4K page, and shift more 32 because of
> - * caculating the high 32 bit value evaluated to hardware.
> - */
> - roce_set_field(eqcuridx_val, ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_M,
> - ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_S,
> - eq->buf_list[0].map >> 44);
> - roce_set_field(eqcuridx_val,
> - ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_M,
> - ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_S, 0);
> - writel(eqcuridx_val, eqc + 8);
> -
> - /* Configure eq consumer index */
> - roce_set_field(eqconsindx_val,
> - ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_M,
> - ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_S, 0);
> - writel(eqconsindx_val, eqc + 0xc);
> -
> - return 0;
> -
> -err_out_free_pages:
> - for (i = i - 1; i >= 0; i--)
> - dma_free_coherent(dev, HNS_ROCE_BA_SIZE, eq->buf_list[i].buf,
> - eq->buf_list[i].map);
> -
> - kfree(eq->buf_list);
> - return ret;
> -}
> -
> -static void hns_roce_free_eq(struct hns_roce_dev *hr_dev,
> - struct hns_roce_eq *eq)
> -{
> - int i = 0;
> - int npages = (PAGE_ALIGN(eq->eqe_size * eq->entries) +
> - HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
> -
> - if (!eq->buf_list)
> - return;
> -
> - for (i = 0; i < npages; ++i)
> - dma_free_coherent(&hr_dev->pdev->dev, HNS_ROCE_BA_SIZE,
> - eq->buf_list[i].buf, eq->buf_list[i].map);
> -
> - kfree(eq->buf_list);
> -}
> -
> -static void hns_roce_int_mask_en(struct hns_roce_dev *hr_dev)
> -{
> - int i = 0;
> - u32 aemask_val;
> - int masken = 0;
> -
> - /* AEQ INT */
> - aemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
> - roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
> - masken);
> - roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AE_IRQ_MASK_S, masken);
> - roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, aemask_val);
> -
> - /* CEQ INT */
> - for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
> - /* IRQ mask */
> - roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> - i * CEQ_REG_OFFSET, masken);
> - }
> -}
> -
> -static void hns_roce_ce_int_default_cfg(struct hns_roce_dev *hr_dev)
> -{
> - /* Configure ce int interval */
> - roce_write(hr_dev, ROCEE_CAEP_CE_INTERVAL_CFG_REG,
> - HNS_ROCE_CEQ_DEFAULT_INTERVAL);
> -
> - /* Configure ce int burst num */
> - roce_write(hr_dev, ROCEE_CAEP_CE_BURST_NUM_CFG_REG,
> - HNS_ROCE_CEQ_DEFAULT_BURST_NUM);
> -}
> -
> -int hns_roce_init_eq_table(struct hns_roce_dev *hr_dev)
> -{
> - struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
> - struct device *dev = &hr_dev->pdev->dev;
> - struct hns_roce_eq *eq = NULL;
> - int eq_num = 0;
> - int ret = 0;
> - int i = 0;
> - int j = 0;
> -
> - eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
> - eq_table->eq = kcalloc(eq_num, sizeof(*eq_table->eq), GFP_KERNEL);
> - if (!eq_table->eq)
> - return -ENOMEM;
> -
> - eq_table->eqc_base = kcalloc(eq_num, sizeof(*eq_table->eqc_base),
> - GFP_KERNEL);
> - if (!eq_table->eqc_base) {
> - ret = -ENOMEM;
> - goto err_eqc_base_alloc_fail;
> - }
> -
> - for (i = 0; i < eq_num; i++) {
> - eq = &eq_table->eq[i];
> - eq->hr_dev = hr_dev;
> - eq->eqn = i;
> - eq->irq = hr_dev->irq[i];
> - eq->log_page_size = PAGE_SHIFT;
> -
> - if (i < hr_dev->caps.num_comp_vectors) {
> - /* CEQ */
> - eq_table->eqc_base[i] = hr_dev->reg_base +
> - ROCEE_CAEP_CEQC_SHIFT_0_REG +
> - HNS_ROCE_CEQC_REG_OFFSET * i;
> - eq->type_flag = HNS_ROCE_CEQ;
> - eq->doorbell = hr_dev->reg_base +
> - ROCEE_CAEP_CEQC_CONS_IDX_0_REG +
> - HNS_ROCE_CEQC_REG_OFFSET * i;
> - eq->entries = hr_dev->caps.ceqe_depth[i];
> - eq->log_entries = ilog2(eq->entries);
> - eq->eqe_size = sizeof(struct hns_roce_ceqe);
> - } else {
> - /* AEQ */
> - eq_table->eqc_base[i] = hr_dev->reg_base +
> - ROCEE_CAEP_AEQC_AEQE_SHIFT_REG;
> - eq->type_flag = HNS_ROCE_AEQ;
> - eq->doorbell = hr_dev->reg_base +
> - ROCEE_CAEP_AEQE_CONS_IDX_REG;
> - eq->entries = hr_dev->caps.aeqe_depth;
> - eq->log_entries = ilog2(eq->entries);
> - eq->eqe_size = sizeof(struct hns_roce_aeqe);
> - }
> - }
> -
> - /* Disable irq */
> - hns_roce_int_mask_en(hr_dev);
> -
> - /* Configure CE irq interval and burst num */
> - hns_roce_ce_int_default_cfg(hr_dev);
> -
> - for (i = 0; i < eq_num; i++) {
> - ret = hns_roce_create_eq(hr_dev, &eq_table->eq[i]);
> - if (ret) {
> - dev_err(dev, "eq create failed\n");
> - goto err_create_eq_fail;
> - }
> - }
> -
> - for (j = 0; j < eq_num; j++) {
> - ret = request_irq(eq_table->eq[j].irq, hns_roce_msi_x_interrupt,
> - 0, hr_dev->irq_names[j], eq_table->eq + j);
> - if (ret) {
> - dev_err(dev, "request irq error!\n");
> - goto err_request_irq_fail;
> - }
> - }
> -
> - for (i = 0; i < eq_num; i++)
> - hns_roce_enable_eq(hr_dev, i, EQ_ENABLE);
> -
> - return 0;
> -
> -err_request_irq_fail:
> - for (j = j - 1; j >= 0; j--)
> - free_irq(eq_table->eq[j].irq, eq_table->eq + j);
> -
> -err_create_eq_fail:
> - for (i = i - 1; i >= 0; i--)
> - hns_roce_free_eq(hr_dev, &eq_table->eq[i]);
> -
> - kfree(eq_table->eqc_base);
> -
> -err_eqc_base_alloc_fail:
> - kfree(eq_table->eq);
> -
> - return ret;
> -}
> -
> -void hns_roce_cleanup_eq_table(struct hns_roce_dev *hr_dev)
> -{
> - int i;
> - int eq_num;
> - struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
> -
> - eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
> - for (i = 0; i < eq_num; i++) {
> - /* Disable EQ */
> - hns_roce_enable_eq(hr_dev, i, EQ_DISABLE);
> -
> - free_irq(eq_table->eq[i].irq, eq_table->eq + i);
> -
> - hns_roce_free_eq(hr_dev, &eq_table->eq[i]);
> - }
> -
> - kfree(eq_table->eqc_base);
> - kfree(eq_table->eq);
> -}
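[Inline note: the removed next_aeqe_sw()/next_ceqe_sw() above rely on the owner-bit scheme: the consumer index counts over two passes of the queue before wrapping, (cons_index & entries) gives the pass parity, and an entry is fresh when its owner bit XORed with that parity is set. A standalone sketch of the validity test — it assumes entries is a power of two, as the ilog2() in the setup code implies.]

```c
/* Owner-bit validity check mirroring the removed next_aeqe_sw():
 * with a queue of `entries` slots (power of two), cons_index runs
 * 0..2*entries-1, so (cons_index & entries) flips once per pass.
 * Hardware toggles the owner bit it writes on each pass; an entry is
 * new when its owner bit differs from what software last consumed. */
int eqe_is_valid(unsigned int owner_bit, unsigned int cons_index,
		 unsigned int entries)
{
	return (owner_bit ^ !!(cons_index & entries)) ? 1 : 0;
}
```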
> diff --git a/drivers/infiniband/hw/hns/hns_roce_eq.h b/drivers/infiniband/hw/hns/hns_roce_eq.h
> deleted file mode 100644
> index c6d212d..0000000
> --- a/drivers/infiniband/hw/hns/hns_roce_eq.h
> +++ /dev/null
> @@ -1,134 +0,0 @@
> -/*
> - * Copyright (c) 2016 Hisilicon Limited.
> - *
> - * This software is available to you under a choice of one of two
> - * licenses. You may choose to be licensed under the terms of the GNU
> - * General Public License (GPL) Version 2, available from the file
> - * COPYING in the main directory of this source tree, or the
> - * OpenIB.org BSD license below:
> - *
> - * Redistribution and use in source and binary forms, with or
> - * without modification, are permitted provided that the following
> - * conditions are met:
> - *
> - * - Redistributions of source code must retain the above
> - * copyright notice, this list of conditions and the following
> - * disclaimer.
> - *
> - * - Redistributions in binary form must reproduce the above
> - * copyright notice, this list of conditions and the following
> - * disclaimer in the documentation and/or other materials
> - * provided with the distribution.
> - *
> - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> - * SOFTWARE.
> - */
> -
> -#ifndef _HNS_ROCE_EQ_H
> -#define _HNS_ROCE_EQ_H
> -
> -#define HNS_ROCE_CEQ 1
> -#define HNS_ROCE_AEQ 2
> -
> -#define HNS_ROCE_CEQ_ENTRY_SIZE 0x4
> -#define HNS_ROCE_AEQ_ENTRY_SIZE 0x10
> -#define HNS_ROCE_CEQC_REG_OFFSET 0x18
> -
> -#define HNS_ROCE_CEQ_DEFAULT_INTERVAL 0x10
> -#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x10
> -
> -#define HNS_ROCE_INT_MASK_DISABLE 0
> -#define HNS_ROCE_INT_MASK_ENABLE 1
> -
> -#define EQ_ENABLE 1
> -#define EQ_DISABLE 0
> -#define CONS_INDEX_MASK 0xffff
> -
> -#define CEQ_REG_OFFSET 0x18
> -
> -enum {
> - HNS_ROCE_EQ_STAT_INVALID = 0,
> - HNS_ROCE_EQ_STAT_VALID = 2,
> -};
> -
> -struct hns_roce_aeqe {
> - u32 asyn;
> - union {
> - struct {
> - u32 qp;
> - u32 rsv0;
> - u32 rsv1;
> - } qp_event;
> -
> - struct {
> - u32 cq;
> - u32 rsv0;
> - u32 rsv1;
> - } cq_event;
> -
> - struct {
> - u32 port;
> - u32 rsv0;
> - u32 rsv1;
> - } port_event;
> -
> - struct {
> - u32 ceqe;
> - u32 rsv0;
> - u32 rsv1;
> - } ce_event;
> -
> - struct {
> - __le64 out_param;
> - __le16 token;
> - u8 status;
> - u8 rsv0;
> - } __packed cmd;
> - } event;
> -};
> -
> -#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S 16
> -#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M \
> - (((1UL << 8) - 1) << HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S)
> -
> -#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S 24
> -#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M \
> - (((1UL << 7) - 1) << HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)
> -
> -#define HNS_ROCE_AEQE_U32_4_OWNER_S 31
> -
> -#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S 0
> -#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M \
> - (((1UL << 24) - 1) << HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S)
> -
> -#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S 25
> -#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M \
> - (((1UL << 3) - 1) << HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S)
> -
> -#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S 0
> -#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M \
> - (((1UL << 16) - 1) << HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S)
> -
> -#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S 0
> -#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M \
> - (((1UL << 5) - 1) << HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S)
> -
> -struct hns_roce_ceqe {
> - union {
> - int comp;
> - } ceqe;
> -};
> -
> -#define HNS_ROCE_CEQE_CEQE_COMP_OWNER_S 0
> -
> -#define HNS_ROCE_CEQE_CEQE_COMP_CQN_S 16
> -#define HNS_ROCE_CEQE_CEQE_COMP_CQN_M \
> - (((1UL << 16) - 1) << HNS_ROCE_CEQE_CEQE_COMP_CQN_S)
> -
> -#endif /* _HNS_ROCE_EQ_H */
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> index af27168..6100ace 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> @@ -33,6 +33,7 @@
> #include <linux/platform_device.h>
> #include <linux/acpi.h>
> #include <linux/etherdevice.h>
> +#include <linux/interrupt.h>
> #include <linux/of.h>
> #include <linux/of_platform.h>
> #include <rdma/ib_umem.h>
> @@ -1492,9 +1493,9 @@ static int hns_roce_v1_profile(struct hns_roce_dev *hr_dev)
> caps->max_sq_inline = HNS_ROCE_V1_INLINE_SIZE;
> caps->num_uars = HNS_ROCE_V1_UAR_NUM;
> caps->phy_num_uars = HNS_ROCE_V1_PHY_UAR_NUM;
> - caps->num_aeq_vectors = HNS_ROCE_AEQE_VEC_NUM;
> - caps->num_comp_vectors = HNS_ROCE_COMP_VEC_NUM;
> - caps->num_other_vectors = HNS_ROCE_AEQE_OF_VEC_NUM;
> + caps->num_aeq_vectors = HNS_ROCE_V1_AEQE_VEC_NUM;
> + caps->num_comp_vectors = HNS_ROCE_V1_COMP_VEC_NUM;
> + caps->num_other_vectors = HNS_ROCE_V1_ABNORMAL_VEC_NUM;
> caps->num_mtpts = HNS_ROCE_V1_MAX_MTPT_NUM;
> caps->num_mtt_segs = HNS_ROCE_V1_MAX_MTT_SEGS;
> caps->num_pds = HNS_ROCE_V1_MAX_PD_NUM;
> @@ -1529,10 +1530,8 @@ static int hns_roce_v1_profile(struct hns_roce_dev *hr_dev)
> caps->num_ports + 1;
> }
>
> - for (i = 0; i < caps->num_comp_vectors; i++)
> - caps->ceqe_depth[i] = HNS_ROCE_V1_NUM_COMP_EQE;
> -
> - caps->aeqe_depth = HNS_ROCE_V1_NUM_ASYNC_EQE;
> + caps->ceqe_depth = HNS_ROCE_V1_COMP_EQE_NUM;
> + caps->aeqe_depth = HNS_ROCE_V1_ASYNC_EQE_NUM;
> caps->local_ca_ack_delay = le32_to_cpu(roce_read(hr_dev,
> ROCEE_ACK_DELAY_REG));
> caps->max_mtu = IB_MTU_2048;
> @@ -3960,6 +3959,727 @@ static int hns_roce_v1_destroy_cq(struct ib_cq *ibcq)
> return ret;
> }
>
> +static void set_eq_cons_index_v1(struct hns_roce_eq *eq, int req_not)
> +{
> + roce_raw_write((eq->cons_index & HNS_ROCE_V1_CONS_IDX_M) |
> + (req_not << eq->log_entries), eq->doorbell);
> + /* Memory barrier */
> + mb();
> +}
> +
> +static void hns_roce_v1_wq_catas_err_handle(struct hns_roce_dev *hr_dev,
> + struct hns_roce_aeqe *aeqe, int qpn)
> +{
> + struct device *dev = &hr_dev->pdev->dev;
> +
> + dev_warn(dev, "Local Work Queue Catastrophic Error.\n");
> + switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
> + HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
> + case HNS_ROCE_LWQCE_QPC_ERROR:
> + dev_warn(dev, "QP %d, QPC error.\n", qpn);
> + break;
> + case HNS_ROCE_LWQCE_MTU_ERROR:
> + dev_warn(dev, "QP %d, MTU error.\n", qpn);
> + break;
> + case HNS_ROCE_LWQCE_WQE_BA_ADDR_ERROR:
> + dev_warn(dev, "QP %d, WQE BA addr error.\n", qpn);
> + break;
> + case HNS_ROCE_LWQCE_WQE_ADDR_ERROR:
> + dev_warn(dev, "QP %d, WQE addr error.\n", qpn);
> + break;
> + case HNS_ROCE_LWQCE_SQ_WQE_SHIFT_ERROR:
> + dev_warn(dev, "QP %d, WQE shift error\n", qpn);
> + break;
> + case HNS_ROCE_LWQCE_SL_ERROR:
> + dev_warn(dev, "QP %d, SL error.\n", qpn);
> + break;
> + case HNS_ROCE_LWQCE_PORT_ERROR:
> + dev_warn(dev, "QP %d, port error.\n", qpn);
> + break;
> + default:
> + break;
> + }
> +}
> +
> +static void hns_roce_v1_local_wq_access_err_handle(struct hns_roce_dev *hr_dev,
> + struct hns_roce_aeqe *aeqe,
> + int qpn)
> +{
> + struct device *dev = &hr_dev->pdev->dev;
> +
> + dev_warn(dev, "Local Access Violation Work Queue Error.\n");
> + switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
> + HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
> + case HNS_ROCE_LAVWQE_R_KEY_VIOLATION:
> + dev_warn(dev, "QP %d, R_key violation.\n", qpn);
> + break;
> + case HNS_ROCE_LAVWQE_LENGTH_ERROR:
> + dev_warn(dev, "QP %d, length error.\n", qpn);
> + break;
> + case HNS_ROCE_LAVWQE_VA_ERROR:
> + dev_warn(dev, "QP %d, VA error.\n", qpn);
> + break;
> + case HNS_ROCE_LAVWQE_PD_ERROR:
> + dev_err(dev, "QP %d, PD error.\n", qpn);
> + break;
> + case HNS_ROCE_LAVWQE_RW_ACC_ERROR:
> + dev_warn(dev, "QP %d, rw acc error.\n", qpn);
> + break;
> + case HNS_ROCE_LAVWQE_KEY_STATE_ERROR:
> + dev_warn(dev, "QP %d, key state error.\n", qpn);
> + break;
> + case HNS_ROCE_LAVWQE_MR_OPERATION_ERROR:
> + dev_warn(dev, "QP %d, MR operation error.\n", qpn);
> + break;
> + default:
> + break;
> + }
> +}
> +
> +static void hns_roce_v1_qp_err_handle(struct hns_roce_dev *hr_dev,
> + struct hns_roce_aeqe *aeqe,
> + int event_type)
> +{
> + struct device *dev = &hr_dev->pdev->dev;
> + int phy_port;
> + int qpn;
> +
> + qpn = roce_get_field(aeqe->event.qp_event.qp,
> + HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M,
> + HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S);
> + phy_port = roce_get_field(aeqe->event.qp_event.qp,
> + HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M,
> + HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S);
> + if (qpn <= 1)
> + qpn = HNS_ROCE_MAX_PORTS * qpn + phy_port;
> +
> + switch (event_type) {
> + case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
> + dev_warn(dev, "Invalid Req Local Work Queue Error.\n"
> + "QP %d, phy_port %d.\n", qpn, phy_port);
> + break;
> + case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
> + hns_roce_v1_wq_catas_err_handle(hr_dev, aeqe, qpn);
> + break;
> + case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
> + hns_roce_v1_local_wq_access_err_handle(hr_dev, aeqe, qpn);
> + break;
> + default:
> + break;
> + }
> +
> + hns_roce_qp_event(hr_dev, qpn, event_type);
> +}
> +
> +static void hns_roce_v1_cq_err_handle(struct hns_roce_dev *hr_dev,
> + struct hns_roce_aeqe *aeqe,
> + int event_type)
> +{
> + struct device *dev = &hr_dev->pdev->dev;
> + u32 cqn;
> +
> + cqn = le32_to_cpu(roce_get_field(aeqe->event.cq_event.cq,
> + HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M,
> + HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S));
> +
> + switch (event_type) {
> + case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
> + dev_warn(dev, "CQ 0x%x access err.\n", cqn);
> + break;
> + case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
> + dev_warn(dev, "CQ 0x%x overflow\n", cqn);
> + break;
> + case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
> + dev_warn(dev, "CQ 0x%x ID invalid.\n", cqn);
> + break;
> + default:
> + break;
> + }
> +
> + hns_roce_cq_event(hr_dev, cqn, event_type);
> +}
> +
> +static void hns_roce_v1_db_overflow_handle(struct hns_roce_dev *hr_dev,
> + struct hns_roce_aeqe *aeqe)
> +{
> + struct device *dev = &hr_dev->pdev->dev;
> +
> + switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
> + HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
> + case HNS_ROCE_DB_SUBTYPE_SDB_OVF:
> + dev_warn(dev, "SDB overflow.\n");
> + break;
> + case HNS_ROCE_DB_SUBTYPE_SDB_ALM_OVF:
> + dev_warn(dev, "SDB almost overflow.\n");
> + break;
> + case HNS_ROCE_DB_SUBTYPE_SDB_ALM_EMP:
> + dev_warn(dev, "SDB almost empty.\n");
> + break;
> + case HNS_ROCE_DB_SUBTYPE_ODB_OVF:
> + dev_warn(dev, "ODB overflow.\n");
> + break;
> + case HNS_ROCE_DB_SUBTYPE_ODB_ALM_OVF:
> + dev_warn(dev, "ODB almost overflow.\n");
> + break;
> + case HNS_ROCE_DB_SUBTYPE_ODB_ALM_EMP:
> + dev_warn(dev, "SDB almost empty.\n");
> + break;
> + default:
> + break;
> + }
> +}
> +
> +static struct hns_roce_aeqe *get_aeqe_v1(struct hns_roce_eq *eq, u32 entry)
> +{
> + unsigned long off = (entry & (eq->entries - 1)) *
> + HNS_ROCE_AEQ_ENTRY_SIZE;
> +
> + return (struct hns_roce_aeqe *)((u8 *)
> + (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
> + off % HNS_ROCE_BA_SIZE);
> +}
> +
> +static struct hns_roce_aeqe *next_aeqe_sw_v1(struct hns_roce_eq *eq)
> +{
> + struct hns_roce_aeqe *aeqe = get_aeqe_v1(eq, eq->cons_index);
> +
> + return (roce_get_bit(aeqe->asyn, HNS_ROCE_AEQE_U32_4_OWNER_S) ^
> + !!(eq->cons_index & eq->entries)) ? aeqe : NULL;
> +}
> +
> +static int hns_roce_v1_aeq_int(struct hns_roce_dev *hr_dev,
> + struct hns_roce_eq *eq)
> +{
> + struct device *dev = &hr_dev->pdev->dev;
> + struct hns_roce_aeqe *aeqe;
> + int aeqes_found = 0;
> + int event_type;
> +
> + while ((aeqe = next_aeqe_sw_v1(eq))) {
> + dev_dbg(dev, "aeqe = %p, aeqe->asyn.event_type = 0x%lx\n", aeqe,
> + roce_get_field(aeqe->asyn,
> + HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
> + HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S));
> + /* Memory barrier */
> + rmb();
> +
> + event_type = roce_get_field(aeqe->asyn,
> + HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
> + HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S);
> + switch (event_type) {
> + case HNS_ROCE_EVENT_TYPE_PATH_MIG:
> + dev_warn(dev, "PATH MIG not supported\n");
> + break;
> + case HNS_ROCE_EVENT_TYPE_COMM_EST:
> + dev_warn(dev, "COMMUNICATION established\n");
> + break;
> + case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
> + dev_warn(dev, "SQ DRAINED not supported\n");
> + break;
> + case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
> + dev_warn(dev, "PATH MIG failed\n");
> + break;
> + case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
> + case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
> + case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
> + hns_roce_v1_qp_err_handle(hr_dev, aeqe, event_type);
> + break;
> + case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
> + case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
> + case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
> + dev_warn(dev, "SRQ not support!\n");
> + break;
> + case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
> + case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
> + case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
> + hns_roce_v1_cq_err_handle(hr_dev, aeqe, event_type);
> + break;
> + case HNS_ROCE_EVENT_TYPE_PORT_CHANGE:
> + dev_warn(dev, "port change.\n");
> + break;
> + case HNS_ROCE_EVENT_TYPE_MB:
> + hns_roce_cmd_event(hr_dev,
> + le16_to_cpu(aeqe->event.cmd.token),
> + aeqe->event.cmd.status,
> + le64_to_cpu(aeqe->event.cmd.out_param
> + ));
> + break;
> + case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
> + hns_roce_v1_db_overflow_handle(hr_dev, aeqe);
> + break;
> + case HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW:
> + dev_warn(dev, "CEQ 0x%lx overflow.\n",
> + roce_get_field(aeqe->event.ce_event.ceqe,
> + HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M,
> + HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S));
> + break;
> + default:
> + dev_warn(dev, "Unhandled event %d on EQ %d at idx %u.\n",
> + event_type, eq->eqn, eq->cons_index);
> + break;
> + }
> +
> + eq->cons_index++;
> + aeqes_found = 1;
> +
> + if (eq->cons_index > 2 * hr_dev->caps.aeqe_depth - 1) {
> + dev_warn(dev, "cons_index overflow, set back to 0.\n");
> + eq->cons_index = 0;
> + }
> + }
> +
> + set_eq_cons_index_v1(eq, 0);
> +
> + return aeqes_found;
> +}
> +
> +static struct hns_roce_ceqe *get_ceqe_v1(struct hns_roce_eq *eq, u32 entry)
> +{
> + unsigned long off = (entry & (eq->entries - 1)) *
> + HNS_ROCE_CEQ_ENTRY_SIZE;
> +
> + return (struct hns_roce_ceqe *)((u8 *)
> + (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
> + off % HNS_ROCE_BA_SIZE);
> +}
> +
> +static struct hns_roce_ceqe *next_ceqe_sw_v1(struct hns_roce_eq *eq)
> +{
> + struct hns_roce_ceqe *ceqe = get_ceqe_v1(eq, eq->cons_index);
> +
> + return (!!(roce_get_bit(ceqe->comp,
> + HNS_ROCE_CEQE_CEQE_COMP_OWNER_S))) ^
> + (!!(eq->cons_index & eq->entries)) ? ceqe : NULL;
> +}
> +
> +static int hns_roce_v1_ceq_int(struct hns_roce_dev *hr_dev,
> + struct hns_roce_eq *eq)
> +{
> + struct hns_roce_ceqe *ceqe;
> + int ceqes_found = 0;
> + u32 cqn;
> +
> + while ((ceqe = next_ceqe_sw_v1(eq))) {
> + /* Memory barrier */
> + rmb();
> + cqn = roce_get_field(ceqe->comp,
> + HNS_ROCE_CEQE_CEQE_COMP_CQN_M,
> + HNS_ROCE_CEQE_CEQE_COMP_CQN_S);
> + hns_roce_cq_completion(hr_dev, cqn);
> +
> + ++eq->cons_index;
> + ceqes_found = 1;
> +
> + if (eq->cons_index > 2 * hr_dev->caps.ceqe_depth - 1) {
> + dev_warn(&eq->hr_dev->pdev->dev,
> + "cons_index overflow, set back to 0.\n");
> + eq->cons_index = 0;
> + }
> + }
> +
> + set_eq_cons_index_v1(eq, 0);
> +
> + return ceqes_found;
> +}
> +
> +static irqreturn_t hns_roce_v1_msix_interrupt_eq(int irq, void *eq_ptr)
> +{
> + struct hns_roce_eq *eq = eq_ptr;
> + struct hns_roce_dev *hr_dev = eq->hr_dev;
> + int int_work = 0;
> +
> + if (eq->type_flag == HNS_ROCE_CEQ)
> + /* CEQ irq routine, CEQ is pulse irq, not clear */
> + int_work = hns_roce_v1_ceq_int(hr_dev, eq);
> + else
> + /* AEQ irq routine, AEQ is pulse irq, not clear */
> + int_work = hns_roce_v1_aeq_int(hr_dev, eq);
> +
> + return IRQ_RETVAL(int_work);
> +}
> +
> +static irqreturn_t hns_roce_v1_msix_interrupt_abn(int irq, void *dev_id)
> +{
> + struct hns_roce_dev *hr_dev = dev_id;
> + struct device *dev = &hr_dev->pdev->dev;
> + int int_work = 0;
> + u32 caepaemask_val;
> + u32 cealmovf_val;
> + u32 caepaest_val;
> + u32 aeshift_val;
> + u32 ceshift_val;
> + u32 cemask_val;
> + int i;
> +
> + /*
> + * Abnormal interrupt:
> + * AEQ overflow, ECC multi-bit err, CEQ overflow must clear
> + * interrupt, mask irq, clear irq, cancel mask operation
> + */
> + aeshift_val = roce_read(hr_dev, ROCEE_CAEP_AEQC_AEQE_SHIFT_REG);
> +
> + /* AEQE overflow */
> + if (roce_get_bit(aeshift_val,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQ_ALM_OVF_INT_ST_S) == 1) {
> + dev_warn(dev, "AEQ overflow!\n");
> +
> + /* Set mask */
> + caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
> + roce_set_bit(caepaemask_val,
> + ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
> + HNS_ROCE_INT_MASK_ENABLE);
> + roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
> +
> + /* Clear int state(INT_WC : write 1 clear) */
> + caepaest_val = roce_read(hr_dev, ROCEE_CAEP_AE_ST_REG);
> + roce_set_bit(caepaest_val,
> + ROCEE_CAEP_AE_ST_CAEP_AEQ_ALM_OVF_S, 1);
> + roce_write(hr_dev, ROCEE_CAEP_AE_ST_REG, caepaest_val);
> +
> + /* Clear mask */
> + caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
> + roce_set_bit(caepaemask_val,
> + ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
> + HNS_ROCE_INT_MASK_DISABLE);
> + roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
> + }
> +
> + /* CEQ almost overflow */
> + for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
> + ceshift_val = roce_read(hr_dev, ROCEE_CAEP_CEQC_SHIFT_0_REG +
> + i * CEQ_REG_OFFSET);
> +
> + if (roce_get_bit(ceshift_val,
> + ROCEE_CAEP_CEQC_SHIFT_CAEP_CEQ_ALM_OVF_INT_ST_S) == 1) {
> + dev_warn(dev, "CEQ[%d] almost overflow!\n", i);
> + int_work++;
> +
> + /* Set mask */
> + cemask_val = roce_read(hr_dev,
> + ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> + i * CEQ_REG_OFFSET);
> + roce_set_bit(cemask_val,
> + ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
> + HNS_ROCE_INT_MASK_ENABLE);
> + roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> + i * CEQ_REG_OFFSET, cemask_val);
> +
> + /* Clear int state(INT_WC : write 1 clear) */
> + cealmovf_val = roce_read(hr_dev,
> + ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
> + i * CEQ_REG_OFFSET);
> + roce_set_bit(cealmovf_val,
> + ROCEE_CAEP_CEQ_ALM_OVF_CAEP_CEQ_ALM_OVF_S,
> + 1);
> + roce_write(hr_dev, ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
> + i * CEQ_REG_OFFSET, cealmovf_val);
> +
> + /* Clear mask */
> + cemask_val = roce_read(hr_dev,
> + ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> + i * CEQ_REG_OFFSET);
> + roce_set_bit(cemask_val,
> + ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
> + HNS_ROCE_INT_MASK_DISABLE);
> + roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> + i * CEQ_REG_OFFSET, cemask_val);
> + }
> + }
> +
> + /* ECC multi-bit error alarm */
> + dev_warn(dev, "ECC UCERR ALARM: 0x%x, 0x%x, 0x%x\n",
> + roce_read(hr_dev, ROCEE_ECC_UCERR_ALM0_REG),
> + roce_read(hr_dev, ROCEE_ECC_UCERR_ALM1_REG),
> + roce_read(hr_dev, ROCEE_ECC_UCERR_ALM2_REG));
> +
> + dev_warn(dev, "ECC CERR ALARM: 0x%x, 0x%x, 0x%x\n",
> + roce_read(hr_dev, ROCEE_ECC_CERR_ALM0_REG),
> + roce_read(hr_dev, ROCEE_ECC_CERR_ALM1_REG),
> + roce_read(hr_dev, ROCEE_ECC_CERR_ALM2_REG));
> +
> + return IRQ_RETVAL(int_work);
> +}
> +
> +static void hns_roce_v1_int_mask_enable(struct hns_roce_dev *hr_dev)
> +{
> + u32 aemask_val;
> + int masken = 0;
> + int i;
> +
> + /* AEQ INT */
> + aemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
> + roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
> + masken);
> + roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AE_IRQ_MASK_S, masken);
> + roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, aemask_val);
> +
> + /* CEQ INT */
> + for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
> + /* IRQ mask */
> + roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
> + i * CEQ_REG_OFFSET, masken);
> + }
> +}
> +
> +static void hns_roce_v1_free_eq(struct hns_roce_dev *hr_dev,
> + struct hns_roce_eq *eq)
> +{
> + int npages = (PAGE_ALIGN(eq->eqe_size * eq->entries) +
> + HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
> + int i;
> +
> + if (!eq->buf_list)
> + return;
> +
> + for (i = 0; i < npages; ++i)
> + dma_free_coherent(&hr_dev->pdev->dev, HNS_ROCE_BA_SIZE,
> + eq->buf_list[i].buf, eq->buf_list[i].map);
> +
> + kfree(eq->buf_list);
> +}
> +
> +static void hns_roce_v1_enable_eq(struct hns_roce_dev *hr_dev, int eq_num,
> + int enable_flag)
> +{
> + void __iomem *eqc = hr_dev->eq_table.eqc_base[eq_num];
> + u32 val;
> +
> + val = readl(eqc);
> +
> + if (enable_flag)
> + roce_set_field(val,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
> + HNS_ROCE_EQ_STAT_VALID);
> + else
> + roce_set_field(val,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
> + HNS_ROCE_EQ_STAT_INVALID);
> + writel(val, eqc);
> +}
> +
> +static int hns_roce_v1_create_eq(struct hns_roce_dev *hr_dev,
> + struct hns_roce_eq *eq)
> +{
> + void __iomem *eqc = hr_dev->eq_table.eqc_base[eq->eqn];
> + struct device *dev = &hr_dev->pdev->dev;
> + dma_addr_t tmp_dma_addr;
> + u32 eqconsindx_val = 0;
> + u32 eqcuridx_val = 0;
> + u32 eqshift_val = 0;
> + int num_bas;
> + int ret;
> + int i;
> +
> + num_bas = (PAGE_ALIGN(eq->entries * eq->eqe_size) +
> + HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
> +
> + if ((eq->entries * eq->eqe_size) > HNS_ROCE_BA_SIZE) {
> + dev_err(dev, "[error]eq buf %d gt ba size(%d) need bas=%d\n",
> + (eq->entries * eq->eqe_size), HNS_ROCE_BA_SIZE,
> + num_bas);
> + return -EINVAL;
> + }
> +
> + eq->buf_list = kcalloc(num_bas, sizeof(*eq->buf_list), GFP_KERNEL);
> + if (!eq->buf_list)
> + return -ENOMEM;
> +
> + for (i = 0; i < num_bas; ++i) {
> + eq->buf_list[i].buf = dma_alloc_coherent(dev, HNS_ROCE_BA_SIZE,
> + &tmp_dma_addr,
> + GFP_KERNEL);
> + if (!eq->buf_list[i].buf) {
> + ret = -ENOMEM;
> + goto err_out_free_pages;
> + }
> +
> + eq->buf_list[i].map = tmp_dma_addr;
> + memset(eq->buf_list[i].buf, 0, HNS_ROCE_BA_SIZE);
> + }
> + eq->cons_index = 0;
> + roce_set_field(eqshift_val,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
> + HNS_ROCE_EQ_STAT_INVALID);
> + roce_set_field(eqshift_val,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_M,
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_S,
> + eq->log_entries);
> + writel(eqshift_val, eqc);
> +
> + /* Configure eq extended address 12~44bit */
> + writel((u32)(eq->buf_list[0].map >> 12), eqc + 4);
> +
> + /*
> + * Configure eq extended address 45~49 bit.
> + * 44 = 32 + 12: the address is shifted right by 12 because the
> + * hardware uses 4K pages, and right by a further 32 to obtain
> + * the high 32-bit value written to the hardware.
> + */
> + roce_set_field(eqcuridx_val, ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_M,
> + ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_S,
> + eq->buf_list[0].map >> 44);
> + roce_set_field(eqcuridx_val,
> + ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_M,
> + ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_S, 0);
> + writel(eqcuridx_val, eqc + 8);
> +
> + /* Configure eq consumer index */
> + roce_set_field(eqconsindx_val,
> + ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_M,
> + ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_S, 0);
> + writel(eqconsindx_val, eqc + 0xc);
> +
> + return 0;
> +
> +err_out_free_pages:
> + for (i -= 1; i >= 0; i--)
> + dma_free_coherent(dev, HNS_ROCE_BA_SIZE, eq->buf_list[i].buf,
> + eq->buf_list[i].map);
> +
> + kfree(eq->buf_list);
> + return ret;
> +}
> +
> +static int hns_roce_v1_init_eq_table(struct hns_roce_dev *hr_dev)
> +{
> + struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
> + struct device *dev = &hr_dev->pdev->dev;
> + struct hns_roce_eq *eq;
> + int irq_num;
> + int eq_num;
> + int ret;
> + int i, j;
> +
> + eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
> + irq_num = eq_num + hr_dev->caps.num_other_vectors;
> +
> + eq_table->eq = kcalloc(eq_num, sizeof(*eq_table->eq), GFP_KERNEL);
> + if (!eq_table->eq)
> + return -ENOMEM;
> +
> + eq_table->eqc_base = kcalloc(eq_num, sizeof(*eq_table->eqc_base),
> + GFP_KERNEL);
> + if (!eq_table->eqc_base) {
> + ret = -ENOMEM;
> + goto err_eqc_base_alloc_fail;
> + }
> +
> + for (i = 0; i < eq_num; i++) {
> + eq = &eq_table->eq[i];
> + eq->hr_dev = hr_dev;
> + eq->eqn = i;
> + eq->irq = hr_dev->irq[i];
> + eq->log_page_size = PAGE_SHIFT;
> +
> + if (i < hr_dev->caps.num_comp_vectors) {
> + /* CEQ */
> + eq_table->eqc_base[i] = hr_dev->reg_base +
> + ROCEE_CAEP_CEQC_SHIFT_0_REG +
> + CEQ_REG_OFFSET * i;
> + eq->type_flag = HNS_ROCE_CEQ;
> + eq->doorbell = hr_dev->reg_base +
> + ROCEE_CAEP_CEQC_CONS_IDX_0_REG +
> + CEQ_REG_OFFSET * i;
> + eq->entries = hr_dev->caps.ceqe_depth;
> + eq->log_entries = ilog2(eq->entries);
> + eq->eqe_size = HNS_ROCE_CEQ_ENTRY_SIZE;
> + } else {
> + /* AEQ */
> + eq_table->eqc_base[i] = hr_dev->reg_base +
> + ROCEE_CAEP_AEQC_AEQE_SHIFT_REG;
> + eq->type_flag = HNS_ROCE_AEQ;
> + eq->doorbell = hr_dev->reg_base +
> + ROCEE_CAEP_AEQE_CONS_IDX_REG;
> + eq->entries = hr_dev->caps.aeqe_depth;
> + eq->log_entries = ilog2(eq->entries);
> + eq->eqe_size = HNS_ROCE_AEQ_ENTRY_SIZE;
> + }
> + }
> +
> + /* Disable irq */
> + hns_roce_v1_int_mask_enable(hr_dev);
> +
> + /* Configure ce int interval */
> + roce_write(hr_dev, ROCEE_CAEP_CE_INTERVAL_CFG_REG,
> + HNS_ROCE_CEQ_DEFAULT_INTERVAL);
> +
> + /* Configure ce int burst num */
> + roce_write(hr_dev, ROCEE_CAEP_CE_BURST_NUM_CFG_REG,
> + HNS_ROCE_CEQ_DEFAULT_BURST_NUM);
> +
> + for (i = 0; i < eq_num; i++) {
> + ret = hns_roce_v1_create_eq(hr_dev, &eq_table->eq[i]);
> + if (ret) {
> + dev_err(dev, "eq create failed\n");
> + goto err_create_eq_fail;
> + }
> + }
> +
> + for (j = 0; j < irq_num; j++) {
> + if (j < eq_num)
> + ret = request_irq(hr_dev->irq[j],
> + hns_roce_v1_msix_interrupt_eq, 0,
> + hr_dev->irq_names[j],
> + &eq_table->eq[j]);
> + else
> + ret = request_irq(hr_dev->irq[j],
> + hns_roce_v1_msix_interrupt_abn, 0,
> + hr_dev->irq_names[j], hr_dev);
> +
> + if (ret) {
> + dev_err(dev, "request irq error!\n");
> + goto err_request_irq_fail;
> + }
> + }
> +
> + for (i = 0; i < eq_num; i++)
> + hns_roce_v1_enable_eq(hr_dev, i, EQ_ENABLE);
> +
> + return 0;
> +
> +err_request_irq_fail:
> + for (j -= 1; j >= 0; j--)
> + free_irq(hr_dev->irq[j], &eq_table->eq[j]);
> +
> +err_create_eq_fail:
> + for (i -= 1; i >= 0; i--)
> + hns_roce_v1_free_eq(hr_dev, &eq_table->eq[i]);
> +
> + kfree(eq_table->eqc_base);
> +
> +err_eqc_base_alloc_fail:
> + kfree(eq_table->eq);
> +
> + return ret;
> +}
> +
> +static void hns_roce_v1_cleanup_eq_table(struct hns_roce_dev *hr_dev)
> +{
> + struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
> + int irq_num;
> + int eq_num;
> + int i;
> +
> + eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
> + irq_num = eq_num + hr_dev->caps.num_other_vectors;
> + for (i = 0; i < eq_num; i++) {
> + /* Disable EQ */
> + hns_roce_v1_enable_eq(hr_dev, i, EQ_DISABLE);
> +
> + free_irq(hr_dev->irq[i], &eq_table->eq[i]);
> +
> + hns_roce_v1_free_eq(hr_dev, &eq_table->eq[i]);
> + }
> + for (i = eq_num; i < irq_num; i++)
> + free_irq(hr_dev->irq[i], hr_dev);
> +
> + kfree(eq_table->eqc_base);
> + kfree(eq_table->eq);
> +}
> +
> static const struct hns_roce_hw hns_roce_hw_v1 = {
> .reset = hns_roce_v1_reset,
> .hw_profile = hns_roce_v1_profile,
> @@ -3983,6 +4703,8 @@ static int hns_roce_v1_destroy_cq(struct ib_cq *ibcq)
> .poll_cq = hns_roce_v1_poll_cq,
> .dereg_mr = hns_roce_v1_dereg_mr,
> .destroy_cq = hns_roce_v1_destroy_cq,
> + .init_eq = hns_roce_v1_init_eq_table,
> + .cleanup_eq = hns_roce_v1_cleanup_eq_table,
> };
>
> static const struct of_device_id hns_roce_of_match[] = {
> @@ -4132,14 +4854,14 @@ static int hns_roce_get_cfg(struct hns_roce_dev *hr_dev)
> /* read the interrupt names from the DT or ACPI */
> ret = device_property_read_string_array(dev, "interrupt-names",
> hr_dev->irq_names,
> - HNS_ROCE_MAX_IRQ_NUM);
> + HNS_ROCE_V1_MAX_IRQ_NUM);
> if (ret < 0) {
> dev_err(dev, "couldn't get interrupt names from DT or ACPI!\n");
> return ret;
> }
>
> /* fetch the interrupt numbers */
> - for (i = 0; i < HNS_ROCE_MAX_IRQ_NUM; i++) {
> + for (i = 0; i < HNS_ROCE_V1_MAX_IRQ_NUM; i++) {
> hr_dev->irq[i] = platform_get_irq(hr_dev->pdev, i);
> if (hr_dev->irq[i] <= 0) {
> dev_err(dev, "platform get of irq[=%d] failed!\n", i);
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> index 21a07ef..b44ddd2 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> @@ -60,8 +60,13 @@
> #define HNS_ROCE_V1_GID_NUM 16
> #define HNS_ROCE_V1_RESV_QP 8
>
> -#define HNS_ROCE_V1_NUM_COMP_EQE 0x8000
> -#define HNS_ROCE_V1_NUM_ASYNC_EQE 0x400
> +#define HNS_ROCE_V1_MAX_IRQ_NUM 34
> +#define HNS_ROCE_V1_COMP_VEC_NUM 32
> +#define HNS_ROCE_V1_AEQE_VEC_NUM 1
> +#define HNS_ROCE_V1_ABNORMAL_VEC_NUM 1
> +
> +#define HNS_ROCE_V1_COMP_EQE_NUM 0x8000
> +#define HNS_ROCE_V1_ASYNC_EQE_NUM 0x400
>
> #define HNS_ROCE_V1_QPC_ENTRY_SIZE 256
> #define HNS_ROCE_V1_IRRL_ENTRY_SIZE 8
> @@ -159,6 +164,41 @@
> #define SDB_INV_CNT_OFFSET 8
> #define SDB_ST_CMP_VAL 8
>
> +#define HNS_ROCE_CEQ_DEFAULT_INTERVAL 0x10
> +#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x10
> +
> +#define HNS_ROCE_INT_MASK_DISABLE 0
> +#define HNS_ROCE_INT_MASK_ENABLE 1
> +
> +#define CEQ_REG_OFFSET 0x18
> +
> +#define HNS_ROCE_CEQE_CEQE_COMP_OWNER_S 0
> +
> +#define HNS_ROCE_V1_CONS_IDX_M GENMASK(15, 0)
> +
> +#define HNS_ROCE_CEQE_CEQE_COMP_CQN_S 16
> +#define HNS_ROCE_CEQE_CEQE_COMP_CQN_M GENMASK(31, 16)
> +
> +#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S 16
> +#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M GENMASK(23, 16)
> +
> +#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S 24
> +#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M GENMASK(30, 24)
> +
> +#define HNS_ROCE_AEQE_U32_4_OWNER_S 31
> +
> +#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S 0
> +#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M GENMASK(23, 0)
> +
> +#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S 25
> +#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M GENMASK(27, 25)
> +
> +#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S 0
> +#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M GENMASK(15, 0)
> +
> +#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S 0
> +#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M GENMASK(4, 0)
> +
> struct hns_roce_cq_context {
> u32 cqc_byte_4;
> u32 cq_bt_l;
> diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
> index cf02ac2..aa0c242 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_main.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_main.c
> @@ -748,12 +748,10 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
> goto error_failed_cmd_init;
> }
>
> - if (hr_dev->cmd_mod) {
> - ret = hns_roce_init_eq_table(hr_dev);
> - if (ret) {
> - dev_err(dev, "eq init failed!\n");
> - goto error_failed_eq_table;
> - }
> + ret = hr_dev->hw->init_eq(hr_dev);
> + if (ret) {
> + dev_err(dev, "eq init failed!\n");
> + goto error_failed_eq_table;
> }
>
> if (hr_dev->cmd_mod) {
> @@ -805,8 +803,7 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
> hns_roce_cmd_use_polling(hr_dev);
>
> error_failed_use_event:
> - if (hr_dev->cmd_mod)
> - hns_roce_cleanup_eq_table(hr_dev);
> + hr_dev->hw->cleanup_eq(hr_dev);
>
> error_failed_eq_table:
> hns_roce_cmd_cleanup(hr_dev);
> @@ -837,8 +834,7 @@ void hns_roce_exit(struct hns_roce_dev *hr_dev)
> if (hr_dev->cmd_mod)
> hns_roce_cmd_use_polling(hr_dev);
>
> - if (hr_dev->cmd_mod)
> - hns_roce_cleanup_eq_table(hr_dev);
> + hr_dev->hw->cleanup_eq(hr_dev);
> hns_roce_cmd_cleanup(hr_dev);
> if (hr_dev->hw->cmq_exit)
> hr_dev->hw->cmq_exit(hr_dev);
> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
> index 49586ec..69e2584 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_qp.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
> @@ -65,6 +65,7 @@ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
> if (atomic_dec_and_test(&qp->refcount))
> complete(&qp->free);
> }
> +EXPORT_SYMBOL_GPL(hns_roce_qp_event);
>
> static void hns_roce_ib_qp_event(struct hns_roce_qp *hr_qp,
> enum hns_roce_event type)
>
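The owner-bit scheme behind next_aeqe_sw_v1()/next_ceqe_sw_v1() above can be sketched in isolation. The types and helper names below (demo_eqe, demo_eq, hw_post, poll_eq) are hypothetical stand-ins, not the driver's own structures; the sketch assumes a power-of-two ring where the producer flips the owner bit on every lap and the consumer index runs over twice the ring size, as in the v1 code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define EQ_ENTRIES 4                    /* ring size, power of two */

struct demo_eqe {                       /* hypothetical EQE layout */
	uint32_t owner;                 /* owner bit written by "hardware" */
	uint32_t payload;
};

struct demo_eq {
	struct demo_eqe ring[EQ_ENTRIES];
	unsigned int cons_index;        /* runs 0 .. 2*EQ_ENTRIES-1 */
};

/*
 * Producer side: write an entry, flipping the owner bit every lap.
 * Lap 0 writes owner = 1, lap 1 writes owner = 0, and so on.
 */
void hw_post(struct demo_eq *eq, unsigned int prod_index, uint32_t val)
{
	struct demo_eqe *e = &eq->ring[prod_index & (EQ_ENTRIES - 1)];

	e->payload = val;
	e->owner = !((prod_index / EQ_ENTRIES) & 1);
}

/*
 * Consumer-side validity test, mirroring next_aeqe_sw_v1(): an entry
 * is new when its owner bit differs from the phase bit implied by the
 * consumer index (bit log2(EQ_ENTRIES) of cons_index).
 */
struct demo_eqe *next_eqe_sw(struct demo_eq *eq)
{
	struct demo_eqe *e = &eq->ring[eq->cons_index & (EQ_ENTRIES - 1)];
	unsigned int phase = !!(eq->cons_index & EQ_ENTRIES);

	return (e->owner ^ phase) ? e : NULL;
}

/* Drain up to max valid entries, wrapping cons_index at 2 * depth. */
int poll_eq(struct demo_eq *eq, uint32_t *out, int max)
{
	struct demo_eqe *e;
	int n = 0;

	while (n < max && (e = next_eqe_sw(eq))) {
		out[n++] = e->payload;
		if (++eq->cons_index > 2 * EQ_ENTRIES - 1)
			eq->cons_index = 0;
	}
	return n;
}
```

With this convention a stale entry from the previous lap always fails the owner-vs-phase comparison, which is why the aeq/ceq interrupt handlers can poll without a separate valid flag and only reset cons_index once it exceeds 2 * depth - 1.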
* Re: [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06
[not found] ` <ad1fff67-3511-8252-5b6f-aa1ab0c90078-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2017-11-14 9:09 ` Liuyixian (Eason)
0 siblings, 0 replies; 14+ messages in thread
From: Liuyixian (Eason) @ 2017-11-14 9:09 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
The cover-letter is there. There is no need to resend.
Sorry again.
Eason
On 2017/11/14 16:53, Liuyixian (Eason) wrote:
> Sorry, cover-letter has been lost for some unknown problem.
> I will resend the patch set.
>
> On 2017/11/14 17:26, Yixian Liu wrote:
>> Considering that hip08's eq handling must also be supported, and
>> that the related data structures may change, this patch refactors
>> the hip06 eq code.
>>
>> We move all of the hip06 eq handling code from hns_roce_eq.c into
>> hns_roce_hw_v1.c, and likewise for hns_roce_eq.h. With these
>> changes, it will be convenient to add eq support for later
>> hardware versions.
>>
>> Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> Reviewed-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> Reviewed-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> ---
>> drivers/infiniband/hw/hns/Makefile | 2 +-
>> drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
>> drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
>> drivers/infiniband/hw/hns/hns_roce_device.h | 57 ++-
>> drivers/infiniband/hw/hns/hns_roce_eq.c | 759 ----------------------------
>> drivers/infiniband/hw/hns/hns_roce_eq.h | 134 -----
>> drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++++++++++++-
>> drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
>> drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
>> drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
>> 10 files changed, 843 insertions(+), 930 deletions(-)
>> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
>> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
>>
>> diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
>> index ff426a6..97bf2cd 100644
>> --- a/drivers/infiniband/hw/hns/Makefile
>> +++ b/drivers/infiniband/hw/hns/Makefile
>> @@ -5,7 +5,7 @@
>> ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
>>
>> obj-$(CONFIG_INFINIBAND_HNS) += hns-roce.o
>> -hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_eq.o hns_roce_pd.o \
>> +hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
>> hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
>> hns_roce_cq.o hns_roce_alloc.o
>> obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
>> index 1085cb2..9ebe839 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
>> @@ -103,6 +103,7 @@ void hns_roce_cmd_event(struct hns_roce_dev *hr_dev, u16 token, u8 status,
>> context->out_param = out_param;
>> complete(&context->done);
>> }
>> +EXPORT_SYMBOL_GPL(hns_roce_cmd_event);
>>
>> /* this should be called with "use_events" */
>> static int __hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
>> index 2111b57..bccc9b5 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_cq.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
>> @@ -196,15 +196,14 @@ void hns_roce_free_cq(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
>> if (ret)
>> dev_err(dev, "HW2SW_CQ failed (%d) for CQN %06lx\n", ret,
>> hr_cq->cqn);
>> - if (hr_dev->eq_table.eq) {
>> - /* Waiting interrupt process procedure carried out */
>> - synchronize_irq(hr_dev->eq_table.eq[hr_cq->vector].irq);
>> -
>> - /* wait for all interrupt processed */
>> - if (atomic_dec_and_test(&hr_cq->refcount))
>> - complete(&hr_cq->free);
>> - wait_for_completion(&hr_cq->free);
>> - }
>> +
>> + /* Waiting interrupt process procedure carried out */
>> + synchronize_irq(hr_dev->eq_table.eq[hr_cq->vector].irq);
>> +
>> + /* wait for all interrupt processed */
>> + if (atomic_dec_and_test(&hr_cq->refcount))
>> + complete(&hr_cq->free);
>> + wait_for_completion(&hr_cq->free);
>>
>> spin_lock_irq(&cq_table->lock);
>> radix_tree_delete(&cq_table->tree, hr_cq->cqn);
>> @@ -460,6 +459,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn)
>> ++cq->arm_sn;
>> cq->comp(cq);
>> }
>> +EXPORT_SYMBOL_GPL(hns_roce_cq_completion);
>>
>> void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
>> {
>> @@ -482,6 +482,7 @@ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
>> if (atomic_dec_and_test(&cq->refcount))
>> complete(&cq->free);
>> }
>> +EXPORT_SYMBOL_GPL(hns_roce_cq_event);
>>
>> int hns_roce_init_cq_table(struct hns_roce_dev *hr_dev)
>> {
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
>> index 01d3d69..9aa9e94 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_device.h
>> +++ b/drivers/infiniband/hw/hns/hns_roce_device.h
>> @@ -62,12 +62,16 @@
>> #define HNS_ROCE_CQE_WCMD_EMPTY_BIT 0x2
>> #define HNS_ROCE_MIN_CQE_CNT 16
>>
>> -#define HNS_ROCE_MAX_IRQ_NUM 34
>> +#define HNS_ROCE_MAX_IRQ_NUM 128
>>
>> -#define HNS_ROCE_COMP_VEC_NUM 32
>> +#define EQ_ENABLE 1
>> +#define EQ_DISABLE 0
>>
>> -#define HNS_ROCE_AEQE_VEC_NUM 1
>> -#define HNS_ROCE_AEQE_OF_VEC_NUM 1
>> +#define HNS_ROCE_CEQ 0
>> +#define HNS_ROCE_AEQ 1
>> +
>> +#define HNS_ROCE_CEQ_ENTRY_SIZE 0x4
>> +#define HNS_ROCE_AEQ_ENTRY_SIZE 0x10
>>
>> /* 4G/4K = 1M */
>> #define HNS_ROCE_SL_SHIFT 28
>> @@ -485,6 +489,45 @@ struct hns_roce_ib_iboe {
>> u8 phy_port[HNS_ROCE_MAX_PORTS];
>> };
>>
>> +enum {
>> + HNS_ROCE_EQ_STAT_INVALID = 0,
>> + HNS_ROCE_EQ_STAT_VALID = 2,
>> +};
>> +
>> +struct hns_roce_ceqe {
>> + u32 comp;
>> +};
>> +
>> +struct hns_roce_aeqe {
>> + u32 asyn;
>> + union {
>> + struct {
>> + u32 qp;
>> + u32 rsv0;
>> + u32 rsv1;
>> + } qp_event;
>> +
>> + struct {
>> + u32 cq;
>> + u32 rsv0;
>> + u32 rsv1;
>> + } cq_event;
>> +
>> + struct {
>> + u32 ceqe;
>> + u32 rsv0;
>> + u32 rsv1;
>> + } ce_event;
>> +
>> + struct {
>> + __le64 out_param;
>> + __le16 token;
>> + u8 status;
>> + u8 rsv0;
>> + } __packed cmd;
>> + } event;
>> +};
>> +
>> struct hns_roce_eq {
>> struct hns_roce_dev *hr_dev;
>> void __iomem *doorbell;
>> @@ -502,7 +545,7 @@ struct hns_roce_eq {
>>
>> struct hns_roce_eq_table {
>> struct hns_roce_eq *eq;
>> - void __iomem **eqc_base;
>> + void __iomem **eqc_base; /* only for hw v1 */
>> };
>>
>> struct hns_roce_caps {
>> @@ -550,7 +593,7 @@ struct hns_roce_caps {
>> u32 pbl_buf_pg_sz;
>> u32 pbl_hop_num;
>> int aeqe_depth;
>> - int ceqe_depth[HNS_ROCE_COMP_VEC_NUM];
>> + int ceqe_depth;
>> enum ib_mtu max_mtu;
>> u32 qpc_bt_num;
>> u32 srqc_bt_num;
>> @@ -623,6 +666,8 @@ struct hns_roce_hw {
>> int (*dereg_mr)(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr);
>> int (*destroy_cq)(struct ib_cq *ibcq);
>> int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period);
>> + int (*init_eq)(struct hns_roce_dev *hr_dev);
>> + void (*cleanup_eq)(struct hns_roce_dev *hr_dev);
>> };
>>
>> struct hns_roce_dev {
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_eq.c b/drivers/infiniband/hw/hns/hns_roce_eq.c
>> deleted file mode 100644
>> index d184431..0000000
>> --- a/drivers/infiniband/hw/hns/hns_roce_eq.c
>> +++ /dev/null
>> @@ -1,759 +0,0 @@
>> -/*
>> - * Copyright (c) 2016 Hisilicon Limited.
>> - *
>> - * This software is available to you under a choice of one of two
>> - * licenses. You may choose to be licensed under the terms of the GNU
>> - * General Public License (GPL) Version 2, available from the file
>> - * COPYING in the main directory of this source tree, or the
>> - * OpenIB.org BSD license below:
>> - *
>> - * Redistribution and use in source and binary forms, with or
>> - * without modification, are permitted provided that the following
>> - * conditions are met:
>> - *
>> - * - Redistributions of source code must retain the above
>> - * copyright notice, this list of conditions and the following
>> - * disclaimer.
>> - *
>> - * - Redistributions in binary form must reproduce the above
>> - * copyright notice, this list of conditions and the following
>> - * disclaimer in the documentation and/or other materials
>> - * provided with the distribution.
>> - *
>> - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
>> - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>> - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
>> - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
>> - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
>> - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
>> - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
>> - * SOFTWARE.
>> - */
>> -
>> -#include <linux/platform_device.h>
>> -#include <linux/interrupt.h>
>> -#include "hns_roce_common.h"
>> -#include "hns_roce_device.h"
>> -#include "hns_roce_eq.h"
>> -
>> -static void eq_set_cons_index(struct hns_roce_eq *eq, int req_not)
>> -{
>> - roce_raw_write((eq->cons_index & CONS_INDEX_MASK) |
>> - (req_not << eq->log_entries), eq->doorbell);
>> - /* Memory barrier */
>> - mb();
>> -}
>> -
>> -static struct hns_roce_aeqe *get_aeqe(struct hns_roce_eq *eq, u32 entry)
>> -{
>> - unsigned long off = (entry & (eq->entries - 1)) *
>> - HNS_ROCE_AEQ_ENTRY_SIZE;
>> -
>> - return (struct hns_roce_aeqe *)((u8 *)
>> - (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
>> - off % HNS_ROCE_BA_SIZE);
>> -}
>> -
>> -static struct hns_roce_aeqe *next_aeqe_sw(struct hns_roce_eq *eq)
>> -{
>> - struct hns_roce_aeqe *aeqe = get_aeqe(eq, eq->cons_index);
>> -
>> - return (roce_get_bit(aeqe->asyn, HNS_ROCE_AEQE_U32_4_OWNER_S) ^
>> - !!(eq->cons_index & eq->entries)) ? aeqe : NULL;
>> -}
>> -
>> -static void hns_roce_wq_catas_err_handle(struct hns_roce_dev *hr_dev,
>> - struct hns_roce_aeqe *aeqe, int qpn)
>> -{
>> - struct device *dev = &hr_dev->pdev->dev;
>> -
>> - dev_warn(dev, "Local Work Queue Catastrophic Error.\n");
>> - switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
>> - HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
>> - case HNS_ROCE_LWQCE_QPC_ERROR:
>> - dev_warn(dev, "QP %d, QPC error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LWQCE_MTU_ERROR:
>> - dev_warn(dev, "QP %d, MTU error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LWQCE_WQE_BA_ADDR_ERROR:
>> - dev_warn(dev, "QP %d, WQE BA addr error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LWQCE_WQE_ADDR_ERROR:
>> - dev_warn(dev, "QP %d, WQE addr error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LWQCE_SQ_WQE_SHIFT_ERROR:
>> - dev_warn(dev, "QP %d, WQE shift error\n", qpn);
>> - break;
>> - case HNS_ROCE_LWQCE_SL_ERROR:
>> - dev_warn(dev, "QP %d, SL error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LWQCE_PORT_ERROR:
>> - dev_warn(dev, "QP %d, port error.\n", qpn);
>> - break;
>> - default:
>> - break;
>> - }
>> -}
>> -
>> -static void hns_roce_local_wq_access_err_handle(struct hns_roce_dev *hr_dev,
>> - struct hns_roce_aeqe *aeqe,
>> - int qpn)
>> -{
>> - struct device *dev = &hr_dev->pdev->dev;
>> -
>> - dev_warn(dev, "Local Access Violation Work Queue Error.\n");
>> - switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
>> - HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
>> - case HNS_ROCE_LAVWQE_R_KEY_VIOLATION:
>> - dev_warn(dev, "QP %d, R_key violation.\n", qpn);
>> - break;
>> - case HNS_ROCE_LAVWQE_LENGTH_ERROR:
>> - dev_warn(dev, "QP %d, length error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LAVWQE_VA_ERROR:
>> - dev_warn(dev, "QP %d, VA error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LAVWQE_PD_ERROR:
>> - dev_err(dev, "QP %d, PD error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LAVWQE_RW_ACC_ERROR:
>> - dev_warn(dev, "QP %d, rw acc error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LAVWQE_KEY_STATE_ERROR:
>> - dev_warn(dev, "QP %d, key state error.\n", qpn);
>> - break;
>> - case HNS_ROCE_LAVWQE_MR_OPERATION_ERROR:
>> - dev_warn(dev, "QP %d, MR operation error.\n", qpn);
>> - break;
>> - default:
>> - break;
>> - }
>> -}
>> -
>> -static void hns_roce_qp_err_handle(struct hns_roce_dev *hr_dev,
>> - struct hns_roce_aeqe *aeqe,
>> - int event_type)
>> -{
>> - struct device *dev = &hr_dev->pdev->dev;
>> - int phy_port;
>> - int qpn;
>> -
>> - qpn = roce_get_field(aeqe->event.qp_event.qp,
>> - HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M,
>> - HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S);
>> - phy_port = roce_get_field(aeqe->event.qp_event.qp,
>> - HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M,
>> - HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S);
>> - if (qpn <= 1)
>> - qpn = HNS_ROCE_MAX_PORTS * qpn + phy_port;
>> -
>> - switch (event_type) {
>> - case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
>> - dev_warn(dev, "Invalid Req Local Work Queue Error.\n"
>> - "QP %d, phy_port %d.\n", qpn, phy_port);
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
>> - hns_roce_wq_catas_err_handle(hr_dev, aeqe, qpn);
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
>> - hns_roce_local_wq_access_err_handle(hr_dev, aeqe, qpn);
>> - break;
>> - default:
>> - break;
>> - }
>> -
>> - hns_roce_qp_event(hr_dev, qpn, event_type);
>> -}
>> -
>> -static void hns_roce_cq_err_handle(struct hns_roce_dev *hr_dev,
>> - struct hns_roce_aeqe *aeqe,
>> - int event_type)
>> -{
>> - struct device *dev = &hr_dev->pdev->dev;
>> - u32 cqn;
>> -
>> - cqn = le32_to_cpu(roce_get_field(aeqe->event.cq_event.cq,
>> - HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M,
>> - HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S));
>> -
>> - switch (event_type) {
>> - case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
>> - dev_warn(dev, "CQ 0x%x access err.\n", cqn);
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
>> - dev_warn(dev, "CQ 0x%x overflow\n", cqn);
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
>> - dev_warn(dev, "CQ 0x%x ID invalid.\n", cqn);
>> - break;
>> - default:
>> - break;
>> - }
>> -
>> - hns_roce_cq_event(hr_dev, cqn, event_type);
>> -}
>> -
>> -static void hns_roce_db_overflow_handle(struct hns_roce_dev *hr_dev,
>> - struct hns_roce_aeqe *aeqe)
>> -{
>> - struct device *dev = &hr_dev->pdev->dev;
>> -
>> - switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
>> - HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
>> - case HNS_ROCE_DB_SUBTYPE_SDB_OVF:
>> - dev_warn(dev, "SDB overflow.\n");
>> - break;
>> - case HNS_ROCE_DB_SUBTYPE_SDB_ALM_OVF:
>> - dev_warn(dev, "SDB almost overflow.\n");
>> - break;
>> - case HNS_ROCE_DB_SUBTYPE_SDB_ALM_EMP:
>> - dev_warn(dev, "SDB almost empty.\n");
>> - break;
>> - case HNS_ROCE_DB_SUBTYPE_ODB_OVF:
>> - dev_warn(dev, "ODB overflow.\n");
>> - break;
>> - case HNS_ROCE_DB_SUBTYPE_ODB_ALM_OVF:
>> - dev_warn(dev, "ODB almost overflow.\n");
>> - break;
>> - case HNS_ROCE_DB_SUBTYPE_ODB_ALM_EMP:
>> - dev_warn(dev, "SDB almost empty.\n");
>> - break;
>> - default:
>> - break;
>> - }
>> -}
>> -
>> -static int hns_roce_aeq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
>> -{
>> - struct device *dev = &hr_dev->pdev->dev;
>> - struct hns_roce_aeqe *aeqe;
>> - int aeqes_found = 0;
>> - int event_type;
>> -
>> - while ((aeqe = next_aeqe_sw(eq))) {
>> - dev_dbg(dev, "aeqe = %p, aeqe->asyn.event_type = 0x%lx\n", aeqe,
>> - roce_get_field(aeqe->asyn,
>> - HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
>> - HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S));
>> - /* Memory barrier */
>> - rmb();
>> -
>> - event_type = roce_get_field(aeqe->asyn,
>> - HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
>> - HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S);
>> - switch (event_type) {
>> - case HNS_ROCE_EVENT_TYPE_PATH_MIG:
>> - dev_warn(dev, "PATH MIG not supported\n");
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_COMM_EST:
>> - dev_warn(dev, "COMMUNICATION established\n");
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
>> - dev_warn(dev, "SQ DRAINED not supported\n");
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
>> - dev_warn(dev, "PATH MIG failed\n");
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
>> - case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
>> - case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
>> - hns_roce_qp_err_handle(hr_dev, aeqe, event_type);
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
>> - case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
>> - case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
>> - dev_warn(dev, "SRQ not support!\n");
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
>> - case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
>> - case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
>> - hns_roce_cq_err_handle(hr_dev, aeqe, event_type);
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_PORT_CHANGE:
>> - dev_warn(dev, "port change.\n");
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_MB:
>> - hns_roce_cmd_event(hr_dev,
>> - le16_to_cpu(aeqe->event.cmd.token),
>> - aeqe->event.cmd.status,
>> - le64_to_cpu(aeqe->event.cmd.out_param
>> - ));
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
>> - hns_roce_db_overflow_handle(hr_dev, aeqe);
>> - break;
>> - case HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW:
>> - dev_warn(dev, "CEQ 0x%lx overflow.\n",
>> - roce_get_field(aeqe->event.ce_event.ceqe,
>> - HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M,
>> - HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S));
>> - break;
>> - default:
>> - dev_warn(dev, "Unhandled event %d on EQ %d at index %u\n",
>> - event_type, eq->eqn, eq->cons_index);
>> - break;
>> - }
>> -
>> - eq->cons_index++;
>> - aeqes_found = 1;
>> -
>> - if (eq->cons_index > 2 * hr_dev->caps.aeqe_depth - 1) {
>> - dev_warn(dev, "cons_index overflow, set back to zero\n"
>> - );
>> - eq->cons_index = 0;
>> - }
>> - }
>> -
>> - eq_set_cons_index(eq, 0);
>> -
>> - return aeqes_found;
>> -}
>> -
>> -static struct hns_roce_ceqe *get_ceqe(struct hns_roce_eq *eq, u32 entry)
>> -{
>> - unsigned long off = (entry & (eq->entries - 1)) *
>> - HNS_ROCE_CEQ_ENTRY_SIZE;
>> -
>> - return (struct hns_roce_ceqe *)((u8 *)
>> - (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
>> - off % HNS_ROCE_BA_SIZE);
>> -}
>> -
>> -static struct hns_roce_ceqe *next_ceqe_sw(struct hns_roce_eq *eq)
>> -{
>> - struct hns_roce_ceqe *ceqe = get_ceqe(eq, eq->cons_index);
>> -
>> - return (!!(roce_get_bit(ceqe->ceqe.comp,
>> - HNS_ROCE_CEQE_CEQE_COMP_OWNER_S))) ^
>> - (!!(eq->cons_index & eq->entries)) ? ceqe : NULL;
>> -}
>> -
>> -static int hns_roce_ceq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
>> -{
>> - struct hns_roce_ceqe *ceqe;
>> - int ceqes_found = 0;
>> - u32 cqn;
>> -
>> - while ((ceqe = next_ceqe_sw(eq))) {
>> - /* Memory barrier */
>> - rmb();
>> - cqn = roce_get_field(ceqe->ceqe.comp,
>> - HNS_ROCE_CEQE_CEQE_COMP_CQN_M,
>> - HNS_ROCE_CEQE_CEQE_COMP_CQN_S);
>> - hns_roce_cq_completion(hr_dev, cqn);
>> -
>> - ++eq->cons_index;
>> - ceqes_found = 1;
>> -
>> - if (eq->cons_index > 2 * hr_dev->caps.ceqe_depth[eq->eqn] - 1) {
>> - dev_warn(&eq->hr_dev->pdev->dev,
>> - "cons_index overflow, set back to zero\n");
>> - eq->cons_index = 0;
>> - }
>> - }
>> -
>> - eq_set_cons_index(eq, 0);
>> -
>> - return ceqes_found;
>> -}
>> -
>> -static int hns_roce_aeq_ovf_int(struct hns_roce_dev *hr_dev,
>> - struct hns_roce_eq *eq)
>> -{
>> - struct device *dev = &eq->hr_dev->pdev->dev;
>> - int eqovf_found = 0;
>> - u32 caepaemask_val;
>> - u32 cealmovf_val;
>> - u32 caepaest_val;
>> - u32 aeshift_val;
>> - u32 ceshift_val;
>> - u32 cemask_val;
>> - int i = 0;
>> -
>> - /**
>> - * AEQ overflow ECC mult bit err CEQ overflow alarm
>> - * must clear interrupt, mask irq, clear irq, cancel mask operation
>> - */
>> - aeshift_val = roce_read(hr_dev, ROCEE_CAEP_AEQC_AEQE_SHIFT_REG);
>> -
>> - if (roce_get_bit(aeshift_val,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQ_ALM_OVF_INT_ST_S) == 1) {
>> - dev_warn(dev, "AEQ overflow!\n");
>> -
>> - /* Set mask */
>> - caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
>> - roce_set_bit(caepaemask_val,
>> - ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
>> - HNS_ROCE_INT_MASK_ENABLE);
>> - roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
>> -
>> - /* Clear int state(INT_WC : write 1 clear) */
>> - caepaest_val = roce_read(hr_dev, ROCEE_CAEP_AE_ST_REG);
>> - roce_set_bit(caepaest_val,
>> - ROCEE_CAEP_AE_ST_CAEP_AEQ_ALM_OVF_S, 1);
>> - roce_write(hr_dev, ROCEE_CAEP_AE_ST_REG, caepaest_val);
>> -
>> - /* Clear mask */
>> - caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
>> - roce_set_bit(caepaemask_val,
>> - ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
>> - HNS_ROCE_INT_MASK_DISABLE);
>> - roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
>> - }
>> -
>> - /* CEQ almost overflow */
>> - for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
>> - ceshift_val = roce_read(hr_dev, ROCEE_CAEP_CEQC_SHIFT_0_REG +
>> - i * CEQ_REG_OFFSET);
>> -
>> - if (roce_get_bit(ceshift_val,
>> - ROCEE_CAEP_CEQC_SHIFT_CAEP_CEQ_ALM_OVF_INT_ST_S) == 1) {
>> - dev_warn(dev, "CEQ[%d] almost overflow!\n", i);
>> - eqovf_found++;
>> -
>> - /* Set mask */
>> - cemask_val = roce_read(hr_dev,
>> - ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> - i * CEQ_REG_OFFSET);
>> - roce_set_bit(cemask_val,
>> - ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
>> - HNS_ROCE_INT_MASK_ENABLE);
>> - roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> - i * CEQ_REG_OFFSET, cemask_val);
>> -
>> - /* Clear int state(INT_WC : write 1 clear) */
>> - cealmovf_val = roce_read(hr_dev,
>> - ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
>> - i * CEQ_REG_OFFSET);
>> - roce_set_bit(cealmovf_val,
>> - ROCEE_CAEP_CEQ_ALM_OVF_CAEP_CEQ_ALM_OVF_S,
>> - 1);
>> - roce_write(hr_dev, ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
>> - i * CEQ_REG_OFFSET, cealmovf_val);
>> -
>> - /* Clear mask */
>> - cemask_val = roce_read(hr_dev,
>> - ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> - i * CEQ_REG_OFFSET);
>> - roce_set_bit(cemask_val,
>> - ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
>> - HNS_ROCE_INT_MASK_DISABLE);
>> - roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> - i * CEQ_REG_OFFSET, cemask_val);
>> - }
>> - }
>> -
>> - /* ECC multi-bit error alarm */
>> - dev_warn(dev, "ECC UCERR ALARM: 0x%x, 0x%x, 0x%x\n",
>> - roce_read(hr_dev, ROCEE_ECC_UCERR_ALM0_REG),
>> - roce_read(hr_dev, ROCEE_ECC_UCERR_ALM1_REG),
>> - roce_read(hr_dev, ROCEE_ECC_UCERR_ALM2_REG));
>> -
>> - dev_warn(dev, "ECC CERR ALARM: 0x%x, 0x%x, 0x%x\n",
>> - roce_read(hr_dev, ROCEE_ECC_CERR_ALM0_REG),
>> - roce_read(hr_dev, ROCEE_ECC_CERR_ALM1_REG),
>> - roce_read(hr_dev, ROCEE_ECC_CERR_ALM2_REG));
>> -
>> - return eqovf_found;
>> -}
>> -
>> -static int hns_roce_eq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
>> -{
>> - int eqes_found = 0;
>> -
>> - if (likely(eq->type_flag == HNS_ROCE_CEQ))
>> - /* CEQ irq routine, CEQ is pulse irq, not clear */
>> - eqes_found = hns_roce_ceq_int(hr_dev, eq);
>> - else if (likely(eq->type_flag == HNS_ROCE_AEQ))
>> - /* AEQ irq routine, AEQ is pulse irq, not clear */
>> - eqes_found = hns_roce_aeq_int(hr_dev, eq);
>> - else
>> - /* AEQ queue overflow irq */
>> - eqes_found = hns_roce_aeq_ovf_int(hr_dev, eq);
>> -
>> - return eqes_found;
>> -}
>> -
>> -static irqreturn_t hns_roce_msi_x_interrupt(int irq, void *eq_ptr)
>> -{
>> - int int_work = 0;
>> - struct hns_roce_eq *eq = eq_ptr;
>> - struct hns_roce_dev *hr_dev = eq->hr_dev;
>> -
>> - int_work = hns_roce_eq_int(hr_dev, eq);
>> -
>> - return IRQ_RETVAL(int_work);
>> -}
>> -
>> -static void hns_roce_enable_eq(struct hns_roce_dev *hr_dev, int eq_num,
>> - int enable_flag)
>> -{
>> - void __iomem *eqc = hr_dev->eq_table.eqc_base[eq_num];
>> - u32 val;
>> -
>> - val = readl(eqc);
>> -
>> - if (enable_flag)
>> - roce_set_field(val,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
>> - HNS_ROCE_EQ_STAT_VALID);
>> - else
>> - roce_set_field(val,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
>> - HNS_ROCE_EQ_STAT_INVALID);
>> - writel(val, eqc);
>> -}
>> -
>> -static int hns_roce_create_eq(struct hns_roce_dev *hr_dev,
>> - struct hns_roce_eq *eq)
>> -{
>> - void __iomem *eqc = hr_dev->eq_table.eqc_base[eq->eqn];
>> - struct device *dev = &hr_dev->pdev->dev;
>> - dma_addr_t tmp_dma_addr;
>> - u32 eqconsindx_val = 0;
>> - u32 eqcuridx_val = 0;
>> - u32 eqshift_val = 0;
>> - int num_bas = 0;
>> - int ret;
>> - int i;
>> -
>> - num_bas = (PAGE_ALIGN(eq->entries * eq->eqe_size) +
>> - HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
>> -
>> - if ((eq->entries * eq->eqe_size) > HNS_ROCE_BA_SIZE) {
>> - dev_err(dev, "[error]eq buf %d gt ba size(%d) need bas=%d\n",
>> - (eq->entries * eq->eqe_size), HNS_ROCE_BA_SIZE,
>> - num_bas);
>> - return -EINVAL;
>> - }
>> -
>> - eq->buf_list = kcalloc(num_bas, sizeof(*eq->buf_list), GFP_KERNEL);
>> - if (!eq->buf_list)
>> - return -ENOMEM;
>> -
>> - for (i = 0; i < num_bas; ++i) {
>> - eq->buf_list[i].buf = dma_alloc_coherent(dev, HNS_ROCE_BA_SIZE,
>> - &tmp_dma_addr,
>> - GFP_KERNEL);
>> - if (!eq->buf_list[i].buf) {
>> - ret = -ENOMEM;
>> - goto err_out_free_pages;
>> - }
>> -
>> - eq->buf_list[i].map = tmp_dma_addr;
>> - memset(eq->buf_list[i].buf, 0, HNS_ROCE_BA_SIZE);
>> - }
>> - eq->cons_index = 0;
>> - roce_set_field(eqshift_val,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
>> - HNS_ROCE_EQ_STAT_INVALID);
>> - roce_set_field(eqshift_val,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_M,
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_S,
>> - eq->log_entries);
>> - writel(eqshift_val, eqc);
>> -
>> - /* Configure eq extended address 12~44bit */
>> - writel((u32)(eq->buf_list[0].map >> 12), eqc + 4);
>> -
>> - /*
>> - * Configure eq extended address 45~49 bit.
>> - * 44 = 32 + 12, When evaluating addr to hardware, shift 12 because of
>> - * using 4K page, and shift more 32 because of
>> - * caculating the high 32 bit value evaluated to hardware.
>> - */
>> - roce_set_field(eqcuridx_val, ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_M,
>> - ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_S,
>> - eq->buf_list[0].map >> 44);
>> - roce_set_field(eqcuridx_val,
>> - ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_M,
>> - ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_S, 0);
>> - writel(eqcuridx_val, eqc + 8);
>> -
>> - /* Configure eq consumer index */
>> - roce_set_field(eqconsindx_val,
>> - ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_M,
>> - ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_S, 0);
>> - writel(eqconsindx_val, eqc + 0xc);
>> -
>> - return 0;
>> -
>> -err_out_free_pages:
>> - for (i = i - 1; i >= 0; i--)
>> - dma_free_coherent(dev, HNS_ROCE_BA_SIZE, eq->buf_list[i].buf,
>> - eq->buf_list[i].map);
>> -
>> - kfree(eq->buf_list);
>> - return ret;
>> -}
>> -
>> -static void hns_roce_free_eq(struct hns_roce_dev *hr_dev,
>> - struct hns_roce_eq *eq)
>> -{
>> - int i = 0;
>> - int npages = (PAGE_ALIGN(eq->eqe_size * eq->entries) +
>> - HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
>> -
>> - if (!eq->buf_list)
>> - return;
>> -
>> - for (i = 0; i < npages; ++i)
>> - dma_free_coherent(&hr_dev->pdev->dev, HNS_ROCE_BA_SIZE,
>> - eq->buf_list[i].buf, eq->buf_list[i].map);
>> -
>> - kfree(eq->buf_list);
>> -}
>> -
>> -static void hns_roce_int_mask_en(struct hns_roce_dev *hr_dev)
>> -{
>> - int i = 0;
>> - u32 aemask_val;
>> - int masken = 0;
>> -
>> - /* AEQ INT */
>> - aemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
>> - roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
>> - masken);
>> - roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AE_IRQ_MASK_S, masken);
>> - roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, aemask_val);
>> -
>> - /* CEQ INT */
>> - for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
>> - /* IRQ mask */
>> - roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> - i * CEQ_REG_OFFSET, masken);
>> - }
>> -}
>> -
>> -static void hns_roce_ce_int_default_cfg(struct hns_roce_dev *hr_dev)
>> -{
>> - /* Configure ce int interval */
>> - roce_write(hr_dev, ROCEE_CAEP_CE_INTERVAL_CFG_REG,
>> - HNS_ROCE_CEQ_DEFAULT_INTERVAL);
>> -
>> - /* Configure ce int burst num */
>> - roce_write(hr_dev, ROCEE_CAEP_CE_BURST_NUM_CFG_REG,
>> - HNS_ROCE_CEQ_DEFAULT_BURST_NUM);
>> -}
>> -
>> -int hns_roce_init_eq_table(struct hns_roce_dev *hr_dev)
>> -{
>> - struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
>> - struct device *dev = &hr_dev->pdev->dev;
>> - struct hns_roce_eq *eq = NULL;
>> - int eq_num = 0;
>> - int ret = 0;
>> - int i = 0;
>> - int j = 0;
>> -
>> - eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
>> - eq_table->eq = kcalloc(eq_num, sizeof(*eq_table->eq), GFP_KERNEL);
>> - if (!eq_table->eq)
>> - return -ENOMEM;
>> -
>> - eq_table->eqc_base = kcalloc(eq_num, sizeof(*eq_table->eqc_base),
>> - GFP_KERNEL);
>> - if (!eq_table->eqc_base) {
>> - ret = -ENOMEM;
>> - goto err_eqc_base_alloc_fail;
>> - }
>> -
>> - for (i = 0; i < eq_num; i++) {
>> - eq = &eq_table->eq[i];
>> - eq->hr_dev = hr_dev;
>> - eq->eqn = i;
>> - eq->irq = hr_dev->irq[i];
>> - eq->log_page_size = PAGE_SHIFT;
>> -
>> - if (i < hr_dev->caps.num_comp_vectors) {
>> - /* CEQ */
>> - eq_table->eqc_base[i] = hr_dev->reg_base +
>> - ROCEE_CAEP_CEQC_SHIFT_0_REG +
>> - HNS_ROCE_CEQC_REG_OFFSET * i;
>> - eq->type_flag = HNS_ROCE_CEQ;
>> - eq->doorbell = hr_dev->reg_base +
>> - ROCEE_CAEP_CEQC_CONS_IDX_0_REG +
>> - HNS_ROCE_CEQC_REG_OFFSET * i;
>> - eq->entries = hr_dev->caps.ceqe_depth[i];
>> - eq->log_entries = ilog2(eq->entries);
>> - eq->eqe_size = sizeof(struct hns_roce_ceqe);
>> - } else {
>> - /* AEQ */
>> - eq_table->eqc_base[i] = hr_dev->reg_base +
>> - ROCEE_CAEP_AEQC_AEQE_SHIFT_REG;
>> - eq->type_flag = HNS_ROCE_AEQ;
>> - eq->doorbell = hr_dev->reg_base +
>> - ROCEE_CAEP_AEQE_CONS_IDX_REG;
>> - eq->entries = hr_dev->caps.aeqe_depth;
>> - eq->log_entries = ilog2(eq->entries);
>> - eq->eqe_size = sizeof(struct hns_roce_aeqe);
>> - }
>> - }
>> -
>> - /* Disable irq */
>> - hns_roce_int_mask_en(hr_dev);
>> -
>> - /* Configure CE irq interval and burst num */
>> - hns_roce_ce_int_default_cfg(hr_dev);
>> -
>> - for (i = 0; i < eq_num; i++) {
>> - ret = hns_roce_create_eq(hr_dev, &eq_table->eq[i]);
>> - if (ret) {
>> - dev_err(dev, "eq create failed\n");
>> - goto err_create_eq_fail;
>> - }
>> - }
>> -
>> - for (j = 0; j < eq_num; j++) {
>> - ret = request_irq(eq_table->eq[j].irq, hns_roce_msi_x_interrupt,
>> - 0, hr_dev->irq_names[j], eq_table->eq + j);
>> - if (ret) {
>> - dev_err(dev, "request irq error!\n");
>> - goto err_request_irq_fail;
>> - }
>> - }
>> -
>> - for (i = 0; i < eq_num; i++)
>> - hns_roce_enable_eq(hr_dev, i, EQ_ENABLE);
>> -
>> - return 0;
>> -
>> -err_request_irq_fail:
>> - for (j = j - 1; j >= 0; j--)
>> - free_irq(eq_table->eq[j].irq, eq_table->eq + j);
>> -
>> -err_create_eq_fail:
>> - for (i = i - 1; i >= 0; i--)
>> - hns_roce_free_eq(hr_dev, &eq_table->eq[i]);
>> -
>> - kfree(eq_table->eqc_base);
>> -
>> -err_eqc_base_alloc_fail:
>> - kfree(eq_table->eq);
>> -
>> - return ret;
>> -}
>> -
>> -void hns_roce_cleanup_eq_table(struct hns_roce_dev *hr_dev)
>> -{
>> - int i;
>> - int eq_num;
>> - struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
>> -
>> - eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
>> - for (i = 0; i < eq_num; i++) {
>> - /* Disable EQ */
>> - hns_roce_enable_eq(hr_dev, i, EQ_DISABLE);
>> -
>> - free_irq(eq_table->eq[i].irq, eq_table->eq + i);
>> -
>> - hns_roce_free_eq(hr_dev, &eq_table->eq[i]);
>> - }
>> -
>> - kfree(eq_table->eqc_base);
>> - kfree(eq_table->eq);
>> -}
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_eq.h b/drivers/infiniband/hw/hns/hns_roce_eq.h
>> deleted file mode 100644
>> index c6d212d..0000000
>> --- a/drivers/infiniband/hw/hns/hns_roce_eq.h
>> +++ /dev/null
>> @@ -1,134 +0,0 @@
>> -/*
>> - * Copyright (c) 2016 Hisilicon Limited.
>> - *
>> - * This software is available to you under a choice of one of two
>> - * licenses. You may choose to be licensed under the terms of the GNU
>> - * General Public License (GPL) Version 2, available from the file
>> - * COPYING in the main directory of this source tree, or the
>> - * OpenIB.org BSD license below:
>> - *
>> - * Redistribution and use in source and binary forms, with or
>> - * without modification, are permitted provided that the following
>> - * conditions are met:
>> - *
>> - * - Redistributions of source code must retain the above
>> - * copyright notice, this list of conditions and the following
>> - * disclaimer.
>> - *
>> - * - Redistributions in binary form must reproduce the above
>> - * copyright notice, this list of conditions and the following
>> - * disclaimer in the documentation and/or other materials
>> - * provided with the distribution.
>> - *
>> - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
>> - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>> - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
>> - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
>> - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
>> - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
>> - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
>> - * SOFTWARE.
>> - */
>> -
>> -#ifndef _HNS_ROCE_EQ_H
>> -#define _HNS_ROCE_EQ_H
>> -
>> -#define HNS_ROCE_CEQ 1
>> -#define HNS_ROCE_AEQ 2
>> -
>> -#define HNS_ROCE_CEQ_ENTRY_SIZE 0x4
>> -#define HNS_ROCE_AEQ_ENTRY_SIZE 0x10
>> -#define HNS_ROCE_CEQC_REG_OFFSET 0x18
>> -
>> -#define HNS_ROCE_CEQ_DEFAULT_INTERVAL 0x10
>> -#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x10
>> -
>> -#define HNS_ROCE_INT_MASK_DISABLE 0
>> -#define HNS_ROCE_INT_MASK_ENABLE 1
>> -
>> -#define EQ_ENABLE 1
>> -#define EQ_DISABLE 0
>> -#define CONS_INDEX_MASK 0xffff
>> -
>> -#define CEQ_REG_OFFSET 0x18
>> -
>> -enum {
>> - HNS_ROCE_EQ_STAT_INVALID = 0,
>> - HNS_ROCE_EQ_STAT_VALID = 2,
>> -};
>> -
>> -struct hns_roce_aeqe {
>> - u32 asyn;
>> - union {
>> - struct {
>> - u32 qp;
>> - u32 rsv0;
>> - u32 rsv1;
>> - } qp_event;
>> -
>> - struct {
>> - u32 cq;
>> - u32 rsv0;
>> - u32 rsv1;
>> - } cq_event;
>> -
>> - struct {
>> - u32 port;
>> - u32 rsv0;
>> - u32 rsv1;
>> - } port_event;
>> -
>> - struct {
>> - u32 ceqe;
>> - u32 rsv0;
>> - u32 rsv1;
>> - } ce_event;
>> -
>> - struct {
>> - __le64 out_param;
>> - __le16 token;
>> - u8 status;
>> - u8 rsv0;
>> - } __packed cmd;
>> - } event;
>> -};
>> -
>> -#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S 16
>> -#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M \
>> - (((1UL << 8) - 1) << HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S)
>> -
>> -#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S 24
>> -#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M \
>> - (((1UL << 7) - 1) << HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)
>> -
>> -#define HNS_ROCE_AEQE_U32_4_OWNER_S 31
>> -
>> -#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S 0
>> -#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M \
>> - (((1UL << 24) - 1) << HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S)
>> -
>> -#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S 25
>> -#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M \
>> - (((1UL << 3) - 1) << HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S)
>> -
>> -#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S 0
>> -#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M \
>> - (((1UL << 16) - 1) << HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S)
>> -
>> -#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S 0
>> -#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M \
>> - (((1UL << 5) - 1) << HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S)
>> -
>> -struct hns_roce_ceqe {
>> - union {
>> - int comp;
>> - } ceqe;
>> -};
>> -
>> -#define HNS_ROCE_CEQE_CEQE_COMP_OWNER_S 0
>> -
>> -#define HNS_ROCE_CEQE_CEQE_COMP_CQN_S 16
>> -#define HNS_ROCE_CEQE_CEQE_COMP_CQN_M \
>> - (((1UL << 16) - 1) << HNS_ROCE_CEQE_CEQE_COMP_CQN_S)
>> -
>> -#endif /* _HNS_ROCE_EQ_H */
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
>> index af27168..6100ace 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
>> @@ -33,6 +33,7 @@
>> #include <linux/platform_device.h>
>> #include <linux/acpi.h>
>> #include <linux/etherdevice.h>
>> +#include <linux/interrupt.h>
>> #include <linux/of.h>
>> #include <linux/of_platform.h>
>> #include <rdma/ib_umem.h>
>> @@ -1492,9 +1493,9 @@ static int hns_roce_v1_profile(struct hns_roce_dev *hr_dev)
>> caps->max_sq_inline = HNS_ROCE_V1_INLINE_SIZE;
>> caps->num_uars = HNS_ROCE_V1_UAR_NUM;
>> caps->phy_num_uars = HNS_ROCE_V1_PHY_UAR_NUM;
>> - caps->num_aeq_vectors = HNS_ROCE_AEQE_VEC_NUM;
>> - caps->num_comp_vectors = HNS_ROCE_COMP_VEC_NUM;
>> - caps->num_other_vectors = HNS_ROCE_AEQE_OF_VEC_NUM;
>> + caps->num_aeq_vectors = HNS_ROCE_V1_AEQE_VEC_NUM;
>> + caps->num_comp_vectors = HNS_ROCE_V1_COMP_VEC_NUM;
>> + caps->num_other_vectors = HNS_ROCE_V1_ABNORMAL_VEC_NUM;
>> caps->num_mtpts = HNS_ROCE_V1_MAX_MTPT_NUM;
>> caps->num_mtt_segs = HNS_ROCE_V1_MAX_MTT_SEGS;
>> caps->num_pds = HNS_ROCE_V1_MAX_PD_NUM;
>> @@ -1529,10 +1530,8 @@ static int hns_roce_v1_profile(struct hns_roce_dev *hr_dev)
>> caps->num_ports + 1;
>> }
>>
>> - for (i = 0; i < caps->num_comp_vectors; i++)
>> - caps->ceqe_depth[i] = HNS_ROCE_V1_NUM_COMP_EQE;
>> -
>> - caps->aeqe_depth = HNS_ROCE_V1_NUM_ASYNC_EQE;
>> + caps->ceqe_depth = HNS_ROCE_V1_COMP_EQE_NUM;
>> + caps->aeqe_depth = HNS_ROCE_V1_ASYNC_EQE_NUM;
>> caps->local_ca_ack_delay = le32_to_cpu(roce_read(hr_dev,
>> ROCEE_ACK_DELAY_REG));
>> caps->max_mtu = IB_MTU_2048;
>> @@ -3960,6 +3959,727 @@ static int hns_roce_v1_destroy_cq(struct ib_cq *ibcq)
>> return ret;
>> }
>>
>> +static void set_eq_cons_index_v1(struct hns_roce_eq *eq, int req_not)
>> +{
>> + roce_raw_write((eq->cons_index & HNS_ROCE_V1_CONS_IDX_M) |
>> + (req_not << eq->log_entries), eq->doorbell);
>> + /* Memory barrier */
>> + mb();
>> +}
>> +
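For reference, the doorbell write in set_eq_cons_index_v1() above packs the 16-bit consumer index (HNS_ROCE_V1_CONS_IDX_M) together with a rearm bit at bit position log_entries. That value computation can be sketched standalone as below; eq_db_value is an invented name, not a driver symbol:

```c
#include <stdint.h>

/* Sketch of the doorbell value computed by set_eq_cons_index_v1():
 * the low 16 bits carry the consumer index (HNS_ROCE_V1_CONS_IDX_M,
 * i.e. GENMASK(15, 0)); when req_not is set, bit 'log_entries'
 * requests the next event notification. */
static uint32_t eq_db_value(uint32_t cons_index, int req_not,
			    unsigned int log_entries)
{
	return (cons_index & 0xffff) | ((uint32_t)req_not << log_entries);
}
```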
>> +static void hns_roce_v1_wq_catas_err_handle(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_aeqe *aeqe, int qpn)
>> +{
>> + struct device *dev = &hr_dev->pdev->dev;
>> +
>> + dev_warn(dev, "Local Work Queue Catastrophic Error.\n");
>> + switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
>> + HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
>> + case HNS_ROCE_LWQCE_QPC_ERROR:
>> + dev_warn(dev, "QP %d, QPC error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LWQCE_MTU_ERROR:
>> + dev_warn(dev, "QP %d, MTU error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LWQCE_WQE_BA_ADDR_ERROR:
>> + dev_warn(dev, "QP %d, WQE BA addr error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LWQCE_WQE_ADDR_ERROR:
>> + dev_warn(dev, "QP %d, WQE addr error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LWQCE_SQ_WQE_SHIFT_ERROR:
>> + dev_warn(dev, "QP %d, WQE shift error\n", qpn);
>> + break;
>> + case HNS_ROCE_LWQCE_SL_ERROR:
>> + dev_warn(dev, "QP %d, SL error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LWQCE_PORT_ERROR:
>> + dev_warn(dev, "QP %d, port error.\n", qpn);
>> + break;
>> + default:
>> + break;
>> + }
>> +}
>> +
>> +static void hns_roce_v1_local_wq_access_err_handle(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_aeqe *aeqe,
>> + int qpn)
>> +{
>> + struct device *dev = &hr_dev->pdev->dev;
>> +
>> + dev_warn(dev, "Local Access Violation Work Queue Error.\n");
>> + switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
>> + HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
>> + case HNS_ROCE_LAVWQE_R_KEY_VIOLATION:
>> + dev_warn(dev, "QP %d, R_key violation.\n", qpn);
>> + break;
>> + case HNS_ROCE_LAVWQE_LENGTH_ERROR:
>> + dev_warn(dev, "QP %d, length error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LAVWQE_VA_ERROR:
>> + dev_warn(dev, "QP %d, VA error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LAVWQE_PD_ERROR:
>> + dev_err(dev, "QP %d, PD error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LAVWQE_RW_ACC_ERROR:
>> + dev_warn(dev, "QP %d, rw acc error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LAVWQE_KEY_STATE_ERROR:
>> + dev_warn(dev, "QP %d, key state error.\n", qpn);
>> + break;
>> + case HNS_ROCE_LAVWQE_MR_OPERATION_ERROR:
>> + dev_warn(dev, "QP %d, MR operation error.\n", qpn);
>> + break;
>> + default:
>> + break;
>> + }
>> +}
>> +
>> +static void hns_roce_v1_qp_err_handle(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_aeqe *aeqe,
>> + int event_type)
>> +{
>> + struct device *dev = &hr_dev->pdev->dev;
>> + int phy_port;
>> + int qpn;
>> +
>> + qpn = roce_get_field(aeqe->event.qp_event.qp,
>> + HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M,
>> + HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S);
>> + phy_port = roce_get_field(aeqe->event.qp_event.qp,
>> + HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M,
>> + HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S);
>> + if (qpn <= 1)
>> + qpn = HNS_ROCE_MAX_PORTS * qpn + phy_port;
>> +
>> + switch (event_type) {
>> + case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
>> + dev_warn(dev, "Invalid Req Local Work Queue Error.\n"
>> + "QP %d, phy_port %d.\n", qpn, phy_port);
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
>> + hns_roce_v1_wq_catas_err_handle(hr_dev, aeqe, qpn);
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
>> + hns_roce_v1_local_wq_access_err_handle(hr_dev, aeqe, qpn);
>> + break;
>> + default:
>> + break;
>> + }
>> +
>> + hns_roce_qp_event(hr_dev, qpn, event_type);
>> +}
>> +
>> +static void hns_roce_v1_cq_err_handle(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_aeqe *aeqe,
>> + int event_type)
>> +{
>> + struct device *dev = &hr_dev->pdev->dev;
>> + u32 cqn;
>> +
>> + cqn = le32_to_cpu(roce_get_field(aeqe->event.cq_event.cq,
>> + HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M,
>> + HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S));
>> +
>> + switch (event_type) {
>> + case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
>> + dev_warn(dev, "CQ 0x%x access err.\n", cqn);
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
>> + dev_warn(dev, "CQ 0x%x overflow\n", cqn);
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
>> + dev_warn(dev, "CQ 0x%x ID invalid.\n", cqn);
>> + break;
>> + default:
>> + break;
>> + }
>> +
>> + hns_roce_cq_event(hr_dev, cqn, event_type);
>> +}
>> +
>> +static void hns_roce_v1_db_overflow_handle(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_aeqe *aeqe)
>> +{
>> + struct device *dev = &hr_dev->pdev->dev;
>> +
>> + switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
>> + HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
>> + case HNS_ROCE_DB_SUBTYPE_SDB_OVF:
>> + dev_warn(dev, "SDB overflow.\n");
>> + break;
>> + case HNS_ROCE_DB_SUBTYPE_SDB_ALM_OVF:
>> + dev_warn(dev, "SDB almost overflow.\n");
>> + break;
>> + case HNS_ROCE_DB_SUBTYPE_SDB_ALM_EMP:
>> + dev_warn(dev, "SDB almost empty.\n");
>> + break;
>> + case HNS_ROCE_DB_SUBTYPE_ODB_OVF:
>> + dev_warn(dev, "ODB overflow.\n");
>> + break;
>> + case HNS_ROCE_DB_SUBTYPE_ODB_ALM_OVF:
>> + dev_warn(dev, "ODB almost overflow.\n");
>> + break;
>> + case HNS_ROCE_DB_SUBTYPE_ODB_ALM_EMP:
>> + dev_warn(dev, "ODB almost empty.\n");
>> + break;
>> + default:
>> + break;
>> + }
>> +}
>> +
>> +static struct hns_roce_aeqe *get_aeqe_v1(struct hns_roce_eq *eq, u32 entry)
>> +{
>> + unsigned long off = (entry & (eq->entries - 1)) *
>> + HNS_ROCE_AEQ_ENTRY_SIZE;
>> +
>> + return (struct hns_roce_aeqe *)((u8 *)
>> + (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
>> + off % HNS_ROCE_BA_SIZE);
>> +}
>> +
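The EQ ring here is backed by an array of fixed-size DMA buffers, so get_aeqe_v1() has to translate a ring entry into a (buffer, offset) pair. The index math can be modelled in isolation as follows (locate_eqe and struct eqe_location are invented names; the real buffer size is HNS_ROCE_BA_SIZE, and the 4096-byte value in the usage below is only illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Where a ring entry lives: which backing buffer it falls in, and at
 * what byte offset inside that buffer (mirrors the math in
 * get_aeqe_v1()/get_ceqe_v1()). */
struct eqe_location {
	size_t buf;	/* index into the buf_list[] array */
	size_t off;	/* byte offset inside that buffer */
};

/* 'entries' must be a power of two, so '& (entries - 1)' wraps the
 * entry into the ring before scaling by the EQE size. */
static struct eqe_location locate_eqe(uint32_t entry, uint32_t entries,
				      size_t eqe_size, size_t ba_size)
{
	size_t byte_off = (size_t)(entry & (entries - 1)) * eqe_size;
	struct eqe_location loc = { byte_off / ba_size, byte_off % ba_size };

	return loc;
}
```

For example, with 1024 entries of 16 bytes over 4096-byte buffers, entry 256 starts exactly at the beginning of the second buffer.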
>> +static struct hns_roce_aeqe *next_aeqe_sw_v1(struct hns_roce_eq *eq)
>> +{
>> + struct hns_roce_aeqe *aeqe = get_aeqe_v1(eq, eq->cons_index);
>> +
>> + return (roce_get_bit(aeqe->asyn, HNS_ROCE_AEQE_U32_4_OWNER_S) ^
>> + !!(eq->cons_index & eq->entries)) ? aeqe : NULL;
>> +}
>> +
>> +static int hns_roce_v1_aeq_int(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_eq *eq)
>> +{
>> + struct device *dev = &hr_dev->pdev->dev;
>> + struct hns_roce_aeqe *aeqe;
>> + int aeqes_found = 0;
>> + int event_type;
>> +
>> + while ((aeqe = next_aeqe_sw_v1(eq))) {
>> + dev_dbg(dev, "aeqe = %p, aeqe->asyn.event_type = 0x%lx\n", aeqe,
>> + roce_get_field(aeqe->asyn,
>> + HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
>> + HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S));
>> + /* Memory barrier */
>> + rmb();
>> +
>> + event_type = roce_get_field(aeqe->asyn,
>> + HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
>> + HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S);
>> + switch (event_type) {
>> + case HNS_ROCE_EVENT_TYPE_PATH_MIG:
>> + dev_warn(dev, "PATH MIG not supported\n");
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_COMM_EST:
>> + dev_warn(dev, "COMMUNICATION established\n");
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
>> + dev_warn(dev, "SQ DRAINED not supported\n");
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
>> + dev_warn(dev, "PATH MIG failed\n");
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
>> + case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
>> + case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
>> + hns_roce_v1_qp_err_handle(hr_dev, aeqe, event_type);
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
>> + case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
>> + case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
>> + dev_warn(dev, "SRQ not supported.\n");
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
>> + case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
>> + case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
>> + hns_roce_v1_cq_err_handle(hr_dev, aeqe, event_type);
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_PORT_CHANGE:
>> + dev_warn(dev, "port change.\n");
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_MB:
>> + hns_roce_cmd_event(hr_dev,
>> + le16_to_cpu(aeqe->event.cmd.token),
>> + aeqe->event.cmd.status,
>> + le64_to_cpu(aeqe->event.cmd.out_param
>> + ));
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
>> + hns_roce_v1_db_overflow_handle(hr_dev, aeqe);
>> + break;
>> + case HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW:
>> + dev_warn(dev, "CEQ 0x%lx overflow.\n",
>> + roce_get_field(aeqe->event.ce_event.ceqe,
>> + HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M,
>> + HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S));
>> + break;
>> + default:
>> + dev_warn(dev, "Unhandled event %d on EQ %d at idx %u.\n",
>> + event_type, eq->eqn, eq->cons_index);
>> + break;
>> + }
>> +
>> + eq->cons_index++;
>> + aeqes_found = 1;
>> +
>> + if (eq->cons_index > 2 * hr_dev->caps.aeqe_depth - 1) {
>> + dev_warn(dev, "cons_index overflow, set back to 0.\n");
>> + eq->cons_index = 0;
>> + }
>> + }
>> +
>> + set_eq_cons_index_v1(eq, 0);
>> +
>> + return aeqes_found;
>> +}
>> +
>> +static struct hns_roce_ceqe *get_ceqe_v1(struct hns_roce_eq *eq, u32 entry)
>> +{
>> + unsigned long off = (entry & (eq->entries - 1)) *
>> + HNS_ROCE_CEQ_ENTRY_SIZE;
>> +
>> + return (struct hns_roce_ceqe *)((u8 *)
>> + (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
>> + off % HNS_ROCE_BA_SIZE);
>> +}
>> +
>> +static struct hns_roce_ceqe *next_ceqe_sw_v1(struct hns_roce_eq *eq)
>> +{
>> + struct hns_roce_ceqe *ceqe = get_ceqe_v1(eq, eq->cons_index);
>> +
>> + return (!!(roce_get_bit(ceqe->comp,
>> + HNS_ROCE_CEQE_CEQE_COMP_OWNER_S))) ^
>> + (!!(eq->cons_index & eq->entries)) ? ceqe : NULL;
>> +}
>> +
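Both next_aeqe_sw_v1() and next_ceqe_sw_v1() rely on the usual owner-bit scheme: since 'entries' is a power of two, (cons_index & entries) flips each time software wraps the ring, and an EQE is new exactly when its owner bit differs from that wrap phase. A minimal model of the check (eqe_is_new is an invented name):

```c
#include <stdint.h>

/* Owner-bit test as used by next_aeqe_sw_v1()/next_ceqe_sw_v1():
 * hardware toggles the owner bit on every lap of the ring, so an
 * entry belongs to software when the bit disagrees with software's
 * own wrap phase, (cons_index & entries). */
static int eqe_is_new(unsigned int owner_bit, uint32_t cons_index,
		      uint32_t entries)
{
	return (owner_bit ^ !!(cons_index & entries)) != 0;
}
```

On the first lap (phase 0) hardware writes owner = 1, so only owner = 1 entries are consumed; after one wrap the polarity inverts.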
>> +static int hns_roce_v1_ceq_int(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_eq *eq)
>> +{
>> + struct hns_roce_ceqe *ceqe;
>> + int ceqes_found = 0;
>> + u32 cqn;
>> +
>> + while ((ceqe = next_ceqe_sw_v1(eq))) {
>> + /* Memory barrier */
>> + rmb();
>> + cqn = roce_get_field(ceqe->comp,
>> + HNS_ROCE_CEQE_CEQE_COMP_CQN_M,
>> + HNS_ROCE_CEQE_CEQE_COMP_CQN_S);
>> + hns_roce_cq_completion(hr_dev, cqn);
>> +
>> + ++eq->cons_index;
>> + ceqes_found = 1;
>> +
>> + if (eq->cons_index > 2 * hr_dev->caps.ceqe_depth - 1) {
>> + dev_warn(&eq->hr_dev->pdev->dev,
>> + "cons_index overflow, set back to 0.\n");
>> + eq->cons_index = 0;
>> + }
>> + }
>> +
>> + set_eq_cons_index_v1(eq, 0);
>> +
>> + return ceqes_found;
>> +}
>> +
>> +static irqreturn_t hns_roce_v1_msix_interrupt_eq(int irq, void *eq_ptr)
>> +{
>> + struct hns_roce_eq *eq = eq_ptr;
>> + struct hns_roce_dev *hr_dev = eq->hr_dev;
>> + int int_work = 0;
>> +
>> + if (eq->type_flag == HNS_ROCE_CEQ)
>> + /* CEQ irq routine; CEQ is a pulse irq, no need to clear it */
>> + int_work = hns_roce_v1_ceq_int(hr_dev, eq);
>> + else
>> + /* AEQ irq routine; AEQ is a pulse irq, no need to clear it */
>> + int_work = hns_roce_v1_aeq_int(hr_dev, eq);
>> +
>> + return IRQ_RETVAL(int_work);
>> +}
>> +
>> +static irqreturn_t hns_roce_v1_msix_interrupt_abn(int irq, void *dev_id)
>> +{
>> + struct hns_roce_dev *hr_dev = dev_id;
>> + struct device *dev = &hr_dev->pdev->dev;
>> + int int_work = 0;
>> + u32 caepaemask_val;
>> + u32 cealmovf_val;
>> + u32 caepaest_val;
>> + u32 aeshift_val;
>> + u32 ceshift_val;
>> + u32 cemask_val;
>> + int i;
>> +
>> + /*
>> + * Abnormal interrupt:
>> + * AEQ overflow, ECC multi-bit error and CEQ overflow must clear
>> + * the interrupt state: mask the irq, clear it, then unmask again.
>> + */
>> + aeshift_val = roce_read(hr_dev, ROCEE_CAEP_AEQC_AEQE_SHIFT_REG);
>> +
>> + /* AEQE overflow */
>> + if (roce_get_bit(aeshift_val,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQ_ALM_OVF_INT_ST_S) == 1) {
>> + dev_warn(dev, "AEQ overflow!\n");
>> +
>> + /* Set mask */
>> + caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
>> + roce_set_bit(caepaemask_val,
>> + ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
>> + HNS_ROCE_INT_MASK_ENABLE);
>> + roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
>> +
>> + /* Clear int state(INT_WC : write 1 clear) */
>> + caepaest_val = roce_read(hr_dev, ROCEE_CAEP_AE_ST_REG);
>> + roce_set_bit(caepaest_val,
>> + ROCEE_CAEP_AE_ST_CAEP_AEQ_ALM_OVF_S, 1);
>> + roce_write(hr_dev, ROCEE_CAEP_AE_ST_REG, caepaest_val);
>> +
>> + /* Clear mask */
>> + caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
>> + roce_set_bit(caepaemask_val,
>> + ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
>> + HNS_ROCE_INT_MASK_DISABLE);
>> + roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
>> + }
>> +
>> + /* CEQ almost overflow */
>> + for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
>> + ceshift_val = roce_read(hr_dev, ROCEE_CAEP_CEQC_SHIFT_0_REG +
>> + i * CEQ_REG_OFFSET);
>> +
>> + if (roce_get_bit(ceshift_val,
>> + ROCEE_CAEP_CEQC_SHIFT_CAEP_CEQ_ALM_OVF_INT_ST_S) == 1) {
>> + dev_warn(dev, "CEQ[%d] almost overflow!\n", i);
>> + int_work++;
>> +
>> + /* Set mask */
>> + cemask_val = roce_read(hr_dev,
>> + ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> + i * CEQ_REG_OFFSET);
>> + roce_set_bit(cemask_val,
>> + ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
>> + HNS_ROCE_INT_MASK_ENABLE);
>> + roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> + i * CEQ_REG_OFFSET, cemask_val);
>> +
>> + /* Clear int state(INT_WC : write 1 clear) */
>> + cealmovf_val = roce_read(hr_dev,
>> + ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
>> + i * CEQ_REG_OFFSET);
>> + roce_set_bit(cealmovf_val,
>> + ROCEE_CAEP_CEQ_ALM_OVF_CAEP_CEQ_ALM_OVF_S,
>> + 1);
>> + roce_write(hr_dev, ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
>> + i * CEQ_REG_OFFSET, cealmovf_val);
>> +
>> + /* Clear mask */
>> + cemask_val = roce_read(hr_dev,
>> + ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> + i * CEQ_REG_OFFSET);
>> + roce_set_bit(cemask_val,
>> + ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
>> + HNS_ROCE_INT_MASK_DISABLE);
>> + roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> + i * CEQ_REG_OFFSET, cemask_val);
>> + }
>> + }
>> +
>> + /* ECC multi-bit error alarm */
>> + dev_warn(dev, "ECC UCERR ALARM: 0x%x, 0x%x, 0x%x\n",
>> + roce_read(hr_dev, ROCEE_ECC_UCERR_ALM0_REG),
>> + roce_read(hr_dev, ROCEE_ECC_UCERR_ALM1_REG),
>> + roce_read(hr_dev, ROCEE_ECC_UCERR_ALM2_REG));
>> +
>> + dev_warn(dev, "ECC CERR ALARM: 0x%x, 0x%x, 0x%x\n",
>> + roce_read(hr_dev, ROCEE_ECC_CERR_ALM0_REG),
>> + roce_read(hr_dev, ROCEE_ECC_CERR_ALM1_REG),
>> + roce_read(hr_dev, ROCEE_ECC_CERR_ALM2_REG));
>> +
>> + return IRQ_RETVAL(int_work);
>> +}
>> +
>> +static void hns_roce_v1_int_mask_enable(struct hns_roce_dev *hr_dev)
>> +{
>> + u32 aemask_val;
>> + int masken = 0;
>> + int i;
>> +
>> + /* AEQ INT */
>> + aemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
>> + roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
>> + masken);
>> + roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AE_IRQ_MASK_S, masken);
>> + roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, aemask_val);
>> +
>> + /* CEQ INT */
>> + for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
>> + /* IRQ mask */
>> + roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
>> + i * CEQ_REG_OFFSET, masken);
>> + }
>> +}
>> +
>> +static void hns_roce_v1_free_eq(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_eq *eq)
>> +{
>> + int npages = (PAGE_ALIGN(eq->eqe_size * eq->entries) +
>> + HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
>> + int i;
>> +
>> + if (!eq->buf_list)
>> + return;
>> +
>> + for (i = 0; i < npages; ++i)
>> + dma_free_coherent(&hr_dev->pdev->dev, HNS_ROCE_BA_SIZE,
>> + eq->buf_list[i].buf, eq->buf_list[i].map);
>> +
>> + kfree(eq->buf_list);
>> +}
>> +
>> +static void hns_roce_v1_enable_eq(struct hns_roce_dev *hr_dev, int eq_num,
>> + int enable_flag)
>> +{
>> + void __iomem *eqc = hr_dev->eq_table.eqc_base[eq_num];
>> + u32 val;
>> +
>> + val = readl(eqc);
>> +
>> + if (enable_flag)
>> + roce_set_field(val,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
>> + HNS_ROCE_EQ_STAT_VALID);
>> + else
>> + roce_set_field(val,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
>> + HNS_ROCE_EQ_STAT_INVALID);
>> + writel(val, eqc);
>> +}
>> +
>> +static int hns_roce_v1_create_eq(struct hns_roce_dev *hr_dev,
>> + struct hns_roce_eq *eq)
>> +{
>> + void __iomem *eqc = hr_dev->eq_table.eqc_base[eq->eqn];
>> + struct device *dev = &hr_dev->pdev->dev;
>> + dma_addr_t tmp_dma_addr;
>> + u32 eqconsindx_val = 0;
>> + u32 eqcuridx_val = 0;
>> + u32 eqshift_val = 0;
>> + int num_bas;
>> + int ret;
>> + int i;
>> +
>> + num_bas = (PAGE_ALIGN(eq->entries * eq->eqe_size) +
>> + HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
>> +
>> + if ((eq->entries * eq->eqe_size) > HNS_ROCE_BA_SIZE) {
>> + dev_err(dev, "eq buf size %d exceeds ba size %d, need %d bas\n",
>> + (eq->entries * eq->eqe_size), HNS_ROCE_BA_SIZE,
>> + num_bas);
>> + return -EINVAL;
>> + }
>> +
>> + eq->buf_list = kcalloc(num_bas, sizeof(*eq->buf_list), GFP_KERNEL);
>> + if (!eq->buf_list)
>> + return -ENOMEM;
>> +
>> + for (i = 0; i < num_bas; ++i) {
>> + eq->buf_list[i].buf = dma_alloc_coherent(dev, HNS_ROCE_BA_SIZE,
>> + &tmp_dma_addr,
>> + GFP_KERNEL);
>> + if (!eq->buf_list[i].buf) {
>> + ret = -ENOMEM;
>> + goto err_out_free_pages;
>> + }
>> +
>> + eq->buf_list[i].map = tmp_dma_addr;
>> + memset(eq->buf_list[i].buf, 0, HNS_ROCE_BA_SIZE);
>> + }
>> + eq->cons_index = 0;
>> + roce_set_field(eqshift_val,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
>> + HNS_ROCE_EQ_STAT_INVALID);
>> + roce_set_field(eqshift_val,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_M,
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_S,
>> + eq->log_entries);
>> + writel(eqshift_val, eqc);
>> +
>> + /* Configure eq extended address 12~44bit */
>> + writel((u32)(eq->buf_list[0].map >> 12), eqc + 4);
>> +
>> + /*
>> + * Configure eq extended address 45~49 bit.
>> + * 44 = 32 + 12: the address is first shifted right by 12 because
>> + * the hardware uses 4K pages, then by another 32 when calculating
>> + * the high 32-bit value written to the hardware.
>> + */
>> + roce_set_field(eqcuridx_val, ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_M,
>> + ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_S,
>> + eq->buf_list[0].map >> 44);
>> + roce_set_field(eqcuridx_val,
>> + ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_M,
>> + ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_S, 0);
>> + writel(eqcuridx_val, eqc + 8);
>> +
>> + /* Configure eq consumer index */
>> + roce_set_field(eqconsindx_val,
>> + ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_M,
>> + ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_S, 0);
>> + writel(eqconsindx_val, eqc + 0xc);
>> +
>> + return 0;
>> +
>> +err_out_free_pages:
>> + for (i -= 1; i >= 0; i--)
>> + dma_free_coherent(dev, HNS_ROCE_BA_SIZE, eq->buf_list[i].buf,
>> + eq->buf_list[i].map);
>> +
>> + kfree(eq->buf_list);
>> + return ret;
>> +}
>> +
>> +static int hns_roce_v1_init_eq_table(struct hns_roce_dev *hr_dev)
>> +{
>> + struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
>> + struct device *dev = &hr_dev->pdev->dev;
>> + struct hns_roce_eq *eq;
>> + int irq_num;
>> + int eq_num;
>> + int ret;
>> + int i, j;
>> +
>> + eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
>> + irq_num = eq_num + hr_dev->caps.num_other_vectors;
>> +
>> + eq_table->eq = kcalloc(eq_num, sizeof(*eq_table->eq), GFP_KERNEL);
>> + if (!eq_table->eq)
>> + return -ENOMEM;
>> +
>> + eq_table->eqc_base = kcalloc(eq_num, sizeof(*eq_table->eqc_base),
>> + GFP_KERNEL);
>> + if (!eq_table->eqc_base) {
>> + ret = -ENOMEM;
>> + goto err_eqc_base_alloc_fail;
>> + }
>> +
>> + for (i = 0; i < eq_num; i++) {
>> + eq = &eq_table->eq[i];
>> + eq->hr_dev = hr_dev;
>> + eq->eqn = i;
>> + eq->irq = hr_dev->irq[i];
>> + eq->log_page_size = PAGE_SHIFT;
>> +
>> + if (i < hr_dev->caps.num_comp_vectors) {
>> + /* CEQ */
>> + eq_table->eqc_base[i] = hr_dev->reg_base +
>> + ROCEE_CAEP_CEQC_SHIFT_0_REG +
>> + CEQ_REG_OFFSET * i;
>> + eq->type_flag = HNS_ROCE_CEQ;
>> + eq->doorbell = hr_dev->reg_base +
>> + ROCEE_CAEP_CEQC_CONS_IDX_0_REG +
>> + CEQ_REG_OFFSET * i;
>> + eq->entries = hr_dev->caps.ceqe_depth;
>> + eq->log_entries = ilog2(eq->entries);
>> + eq->eqe_size = HNS_ROCE_CEQ_ENTRY_SIZE;
>> + } else {
>> + /* AEQ */
>> + eq_table->eqc_base[i] = hr_dev->reg_base +
>> + ROCEE_CAEP_AEQC_AEQE_SHIFT_REG;
>> + eq->type_flag = HNS_ROCE_AEQ;
>> + eq->doorbell = hr_dev->reg_base +
>> + ROCEE_CAEP_AEQE_CONS_IDX_REG;
>> + eq->entries = hr_dev->caps.aeqe_depth;
>> + eq->log_entries = ilog2(eq->entries);
>> + eq->eqe_size = HNS_ROCE_AEQ_ENTRY_SIZE;
>> + }
>> + }
>> +
>> + /* Disable irq */
>> + hns_roce_v1_int_mask_enable(hr_dev);
>> +
>> + /* Configure ce int interval */
>> + roce_write(hr_dev, ROCEE_CAEP_CE_INTERVAL_CFG_REG,
>> + HNS_ROCE_CEQ_DEFAULT_INTERVAL);
>> +
>> + /* Configure ce int burst num */
>> + roce_write(hr_dev, ROCEE_CAEP_CE_BURST_NUM_CFG_REG,
>> + HNS_ROCE_CEQ_DEFAULT_BURST_NUM);
>> +
>> + for (i = 0; i < eq_num; i++) {
>> + ret = hns_roce_v1_create_eq(hr_dev, &eq_table->eq[i]);
>> + if (ret) {
>> + dev_err(dev, "eq create failed\n");
>> + goto err_create_eq_fail;
>> + }
>> + }
>> +
>> + for (j = 0; j < irq_num; j++) {
>> + if (j < eq_num)
>> + ret = request_irq(hr_dev->irq[j],
>> + hns_roce_v1_msix_interrupt_eq, 0,
>> + hr_dev->irq_names[j],
>> + &eq_table->eq[j]);
>> + else
>> + ret = request_irq(hr_dev->irq[j],
>> + hns_roce_v1_msix_interrupt_abn, 0,
>> + hr_dev->irq_names[j], hr_dev);
>> +
>> + if (ret) {
>> + dev_err(dev, "request irq error!\n");
>> + goto err_request_irq_fail;
>> + }
>> + }
>> +
>> + for (i = 0; i < eq_num; i++)
>> + hns_roce_v1_enable_eq(hr_dev, i, EQ_ENABLE);
>> +
>> + return 0;
>> +
>> +err_request_irq_fail:
>> + for (j -= 1; j >= 0; j--)
>> + free_irq(hr_dev->irq[j], &eq_table->eq[j]);
>> +
>> +err_create_eq_fail:
>> + for (i -= 1; i >= 0; i--)
>> + hns_roce_v1_free_eq(hr_dev, &eq_table->eq[i]);
>> +
>> + kfree(eq_table->eqc_base);
>> +
>> +err_eqc_base_alloc_fail:
>> + kfree(eq_table->eq);
>> +
>> + return ret;
>> +}
>> +
>> +static void hns_roce_v1_cleanup_eq_table(struct hns_roce_dev *hr_dev)
>> +{
>> + struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
>> + int irq_num;
>> + int eq_num;
>> + int i;
>> +
>> + eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
>> + irq_num = eq_num + hr_dev->caps.num_other_vectors;
>> + for (i = 0; i < eq_num; i++) {
>> + /* Disable EQ */
>> + hns_roce_v1_enable_eq(hr_dev, i, EQ_DISABLE);
>> +
>> + free_irq(hr_dev->irq[i], &eq_table->eq[i]);
>> +
>> + hns_roce_v1_free_eq(hr_dev, &eq_table->eq[i]);
>> + }
>> + for (i = eq_num; i < irq_num; i++)
>> + free_irq(hr_dev->irq[i], hr_dev);
>> +
>> + kfree(eq_table->eqc_base);
>> + kfree(eq_table->eq);
>> +}
>> +
>> static const struct hns_roce_hw hns_roce_hw_v1 = {
>> .reset = hns_roce_v1_reset,
>> .hw_profile = hns_roce_v1_profile,
>> @@ -3983,6 +4703,8 @@ static int hns_roce_v1_destroy_cq(struct ib_cq *ibcq)
>> .poll_cq = hns_roce_v1_poll_cq,
>> .dereg_mr = hns_roce_v1_dereg_mr,
>> .destroy_cq = hns_roce_v1_destroy_cq,
>> + .init_eq = hns_roce_v1_init_eq_table,
>> + .cleanup_eq = hns_roce_v1_cleanup_eq_table,
>> };
>>
>> static const struct of_device_id hns_roce_of_match[] = {
>> @@ -4132,14 +4854,14 @@ static int hns_roce_get_cfg(struct hns_roce_dev *hr_dev)
>> /* read the interrupt names from the DT or ACPI */
>> ret = device_property_read_string_array(dev, "interrupt-names",
>> hr_dev->irq_names,
>> - HNS_ROCE_MAX_IRQ_NUM);
>> + HNS_ROCE_V1_MAX_IRQ_NUM);
>> if (ret < 0) {
>> dev_err(dev, "couldn't get interrupt names from DT or ACPI!\n");
>> return ret;
>> }
>>
>> /* fetch the interrupt numbers */
>> - for (i = 0; i < HNS_ROCE_MAX_IRQ_NUM; i++) {
>> + for (i = 0; i < HNS_ROCE_V1_MAX_IRQ_NUM; i++) {
>> hr_dev->irq[i] = platform_get_irq(hr_dev->pdev, i);
>> if (hr_dev->irq[i] <= 0) {
>> dev_err(dev, "platform get of irq[=%d] failed!\n", i);
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
>> index 21a07ef..b44ddd2 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
>> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
>> @@ -60,8 +60,13 @@
>> #define HNS_ROCE_V1_GID_NUM 16
>> #define HNS_ROCE_V1_RESV_QP 8
>>
>> -#define HNS_ROCE_V1_NUM_COMP_EQE 0x8000
>> -#define HNS_ROCE_V1_NUM_ASYNC_EQE 0x400
>> +#define HNS_ROCE_V1_MAX_IRQ_NUM 34
>> +#define HNS_ROCE_V1_COMP_VEC_NUM 32
>> +#define HNS_ROCE_V1_AEQE_VEC_NUM 1
>> +#define HNS_ROCE_V1_ABNORMAL_VEC_NUM 1
>> +
>> +#define HNS_ROCE_V1_COMP_EQE_NUM 0x8000
>> +#define HNS_ROCE_V1_ASYNC_EQE_NUM 0x400
>>
>> #define HNS_ROCE_V1_QPC_ENTRY_SIZE 256
>> #define HNS_ROCE_V1_IRRL_ENTRY_SIZE 8
>> @@ -159,6 +164,41 @@
>> #define SDB_INV_CNT_OFFSET 8
>> #define SDB_ST_CMP_VAL 8
>>
>> +#define HNS_ROCE_CEQ_DEFAULT_INTERVAL 0x10
>> +#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x10
>> +
>> +#define HNS_ROCE_INT_MASK_DISABLE 0
>> +#define HNS_ROCE_INT_MASK_ENABLE 1
>> +
>> +#define CEQ_REG_OFFSET 0x18
>> +
>> +#define HNS_ROCE_CEQE_CEQE_COMP_OWNER_S 0
>> +
>> +#define HNS_ROCE_V1_CONS_IDX_M GENMASK(15, 0)
>> +
>> +#define HNS_ROCE_CEQE_CEQE_COMP_CQN_S 16
>> +#define HNS_ROCE_CEQE_CEQE_COMP_CQN_M GENMASK(31, 16)
>> +
>> +#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S 16
>> +#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M GENMASK(23, 16)
>> +
>> +#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S 24
>> +#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M GENMASK(30, 24)
>> +
>> +#define HNS_ROCE_AEQE_U32_4_OWNER_S 31
>> +
>> +#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S 0
>> +#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M GENMASK(23, 0)
>> +
>> +#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S 25
>> +#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M GENMASK(27, 25)
>> +
>> +#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S 0
>> +#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M GENMASK(15, 0)
>> +
>> +#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S 0
>> +#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M GENMASK(4, 0)
>> +
>> struct hns_roce_cq_context {
>> u32 cqc_byte_4;
>> u32 cq_bt_l;
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
>> index cf02ac2..aa0c242 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_main.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_main.c
>> @@ -748,12 +748,10 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
>> goto error_failed_cmd_init;
>> }
>>
>> - if (hr_dev->cmd_mod) {
>> - ret = hns_roce_init_eq_table(hr_dev);
>> - if (ret) {
>> - dev_err(dev, "eq init failed!\n");
>> - goto error_failed_eq_table;
>> - }
>> + ret = hr_dev->hw->init_eq(hr_dev);
>> + if (ret) {
>> + dev_err(dev, "eq init failed!\n");
>> + goto error_failed_eq_table;
>> }
>>
>> if (hr_dev->cmd_mod) {
>> @@ -805,8 +803,7 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
>> hns_roce_cmd_use_polling(hr_dev);
>>
>> error_failed_use_event:
>> - if (hr_dev->cmd_mod)
>> - hns_roce_cleanup_eq_table(hr_dev);
>> + hr_dev->hw->cleanup_eq(hr_dev);
>>
>> error_failed_eq_table:
>> hns_roce_cmd_cleanup(hr_dev);
>> @@ -837,8 +834,7 @@ void hns_roce_exit(struct hns_roce_dev *hr_dev)
>> if (hr_dev->cmd_mod)
>> hns_roce_cmd_use_polling(hr_dev);
>>
>> - if (hr_dev->cmd_mod)
>> - hns_roce_cleanup_eq_table(hr_dev);
>> + hr_dev->hw->cleanup_eq(hr_dev);
>> hns_roce_cmd_cleanup(hr_dev);
>> if (hr_dev->hw->cmq_exit)
>> hr_dev->hw->cmq_exit(hr_dev);
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
>> index 49586ec..69e2584 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_qp.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
>> @@ -65,6 +65,7 @@ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
>> if (atomic_dec_and_test(&qp->refcount))
>> complete(&qp->free);
>> }
>> +EXPORT_SYMBOL_GPL(hns_roce_qp_event);
>>
>> static void hns_roce_ib_qp_event(struct hns_roce_qp *hr_qp,
>> enum hns_roce_event type)
>>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06
[not found] ` <1510651577-20794-2-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-11-14 8:53 ` Liuyixian (Eason)
@ 2017-11-14 9:11 ` Leon Romanovsky
[not found] ` <20171114091146.GT18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
1 sibling, 1 reply; 14+ messages in thread
From: Leon Romanovsky @ 2017-11-14 9:11 UTC (permalink / raw)
To: Yixian Liu
Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
On Tue, Nov 14, 2017 at 05:26:16PM +0800, Yixian Liu wrote:
> To accommodate hip08's eq process and possible data
> structure changes, this patch refactors the eq code
> structure of hip06.
>
> We move all of the hip06 eq process code from hns_roce_eq.c
> into hns_roce_hw_v1.c, and do the same for hns_roce_eq.h.
> With these changes, it will be convenient to add eq support
> for later hardware versions.
>
> Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Reviewed-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Reviewed-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> ---
> drivers/infiniband/hw/hns/Makefile | 2 +-
> drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
> drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
> drivers/infiniband/hw/hns/hns_roce_device.h | 57 ++-
> drivers/infiniband/hw/hns/hns_roce_eq.c | 759 ----------------------------
> drivers/infiniband/hw/hns/hns_roce_eq.h | 134 -----
> drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++++++++++++-
> drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
> drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
> drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
> 10 files changed, 843 insertions(+), 930 deletions(-)
> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
>
> diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
> index ff426a6..97bf2cd 100644
> --- a/drivers/infiniband/hw/hns/Makefile
> +++ b/drivers/infiniband/hw/hns/Makefile
> @@ -5,7 +5,7 @@
> ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
>
> obj-$(CONFIG_INFINIBAND_HNS) += hns-roce.o
> -hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_eq.o hns_roce_pd.o \
> +hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
> hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
> hns_roce_cq.o hns_roce_alloc.o
> obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o
> diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
> index 1085cb2..9ebe839 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
> @@ -103,6 +103,7 @@ void hns_roce_cmd_event(struct hns_roce_dev *hr_dev, u16 token, u8 status,
> context->out_param = out_param;
> complete(&context->done);
> }
> +EXPORT_SYMBOL_GPL(hns_roce_cmd_event);
Are you sure that you need these symbols to be exported (used in other modules)?
Thanks
* [PATCH for-next 0/2] Revise eq support for hip06 & hip08
@ 2017-11-14 9:26 Yixian Liu
[not found] ` <1510651577-20794-1-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 14+ messages in thread
From: Yixian Liu @ 2017-11-14 9:26 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch set refactors the eq code for hip06 and adds
eq support for hip08.
Yixian Liu (2):
RDMA/hns: Refactor eq code for hip06
RDMA/hns: Add eq support of hip08
drivers/infiniband/hw/hns/Makefile | 2 +-
drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
drivers/infiniband/hw/hns/hns_roce_cmd.h | 10 +
drivers/infiniband/hw/hns/hns_roce_common.h | 11 +
drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
drivers/infiniband/hw/hns/hns_roce_device.h | 83 +-
drivers/infiniband/hw/hns/hns_roce_eq.c | 759 -----------------
drivers/infiniband/hw/hns/hns_roce_eq.h | 134 ---
drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++-
drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 1177 ++++++++++++++++++++++++++-
drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 192 ++++-
drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
14 files changed, 2251 insertions(+), 938 deletions(-)
delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
--
1.9.1
* [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06
[not found] ` <1510651577-20794-1-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2017-11-14 9:26 ` Yixian Liu
[not found] ` <1510651577-20794-2-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-11-14 9:26 ` [PATCH for-next 2/2] RDMA/hns: Add eq support of hip08 Yixian Liu
` (2 subsequent siblings)
3 siblings, 1 reply; 14+ messages in thread
From: Yixian Liu @ 2017-11-14 9:26 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
To accommodate hip08's eq process and possible data
structure changes, this patch refactors the eq code
structure of hip06.
We move all of the hip06 eq process code from hns_roce_eq.c
into hns_roce_hw_v1.c, and do the same for hns_roce_eq.h.
With these changes, it will be convenient to add eq support
for later hardware versions.
Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Reviewed-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
drivers/infiniband/hw/hns/Makefile | 2 +-
drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
drivers/infiniband/hw/hns/hns_roce_device.h | 57 ++-
drivers/infiniband/hw/hns/hns_roce_eq.c | 759 ----------------------------
drivers/infiniband/hw/hns/hns_roce_eq.h | 134 -----
drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++++++++++++-
drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
10 files changed, 843 insertions(+), 930 deletions(-)
delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
index ff426a6..97bf2cd 100644
--- a/drivers/infiniband/hw/hns/Makefile
+++ b/drivers/infiniband/hw/hns/Makefile
@@ -5,7 +5,7 @@
ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
obj-$(CONFIG_INFINIBAND_HNS) += hns-roce.o
-hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_eq.o hns_roce_pd.o \
+hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
hns_roce_cq.o hns_roce_alloc.o
obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o
diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
index 1085cb2..9ebe839 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
@@ -103,6 +103,7 @@ void hns_roce_cmd_event(struct hns_roce_dev *hr_dev, u16 token, u8 status,
context->out_param = out_param;
complete(&context->done);
}
+EXPORT_SYMBOL_GPL(hns_roce_cmd_event);
/* this should be called with "use_events" */
static int __hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index 2111b57..bccc9b5 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -196,15 +196,14 @@ void hns_roce_free_cq(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
if (ret)
dev_err(dev, "HW2SW_CQ failed (%d) for CQN %06lx\n", ret,
hr_cq->cqn);
- if (hr_dev->eq_table.eq) {
- /* Waiting interrupt process procedure carried out */
- synchronize_irq(hr_dev->eq_table.eq[hr_cq->vector].irq);
-
- /* wait for all interrupt processed */
- if (atomic_dec_and_test(&hr_cq->refcount))
- complete(&hr_cq->free);
- wait_for_completion(&hr_cq->free);
- }
+
+ /* Waiting interrupt process procedure carried out */
+ synchronize_irq(hr_dev->eq_table.eq[hr_cq->vector].irq);
+
+ /* wait for all interrupt processed */
+ if (atomic_dec_and_test(&hr_cq->refcount))
+ complete(&hr_cq->free);
+ wait_for_completion(&hr_cq->free);
spin_lock_irq(&cq_table->lock);
radix_tree_delete(&cq_table->tree, hr_cq->cqn);
@@ -460,6 +459,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn)
++cq->arm_sn;
cq->comp(cq);
}
+EXPORT_SYMBOL_GPL(hns_roce_cq_completion);
void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
{
@@ -482,6 +482,7 @@ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
if (atomic_dec_and_test(&cq->refcount))
complete(&cq->free);
}
+EXPORT_SYMBOL_GPL(hns_roce_cq_event);
int hns_roce_init_cq_table(struct hns_roce_dev *hr_dev)
{
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 01d3d69..9aa9e94 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -62,12 +62,16 @@
#define HNS_ROCE_CQE_WCMD_EMPTY_BIT 0x2
#define HNS_ROCE_MIN_CQE_CNT 16
-#define HNS_ROCE_MAX_IRQ_NUM 34
+#define HNS_ROCE_MAX_IRQ_NUM 128
-#define HNS_ROCE_COMP_VEC_NUM 32
+#define EQ_ENABLE 1
+#define EQ_DISABLE 0
-#define HNS_ROCE_AEQE_VEC_NUM 1
-#define HNS_ROCE_AEQE_OF_VEC_NUM 1
+#define HNS_ROCE_CEQ 0
+#define HNS_ROCE_AEQ 1
+
+#define HNS_ROCE_CEQ_ENTRY_SIZE 0x4
+#define HNS_ROCE_AEQ_ENTRY_SIZE 0x10
/* 4G/4K = 1M */
#define HNS_ROCE_SL_SHIFT 28
@@ -485,6 +489,45 @@ struct hns_roce_ib_iboe {
u8 phy_port[HNS_ROCE_MAX_PORTS];
};
+enum {
+ HNS_ROCE_EQ_STAT_INVALID = 0,
+ HNS_ROCE_EQ_STAT_VALID = 2,
+};
+
+struct hns_roce_ceqe {
+ u32 comp;
+};
+
+struct hns_roce_aeqe {
+ u32 asyn;
+ union {
+ struct {
+ u32 qp;
+ u32 rsv0;
+ u32 rsv1;
+ } qp_event;
+
+ struct {
+ u32 cq;
+ u32 rsv0;
+ u32 rsv1;
+ } cq_event;
+
+ struct {
+ u32 ceqe;
+ u32 rsv0;
+ u32 rsv1;
+ } ce_event;
+
+ struct {
+ __le64 out_param;
+ __le16 token;
+ u8 status;
+ u8 rsv0;
+ } __packed cmd;
+ } event;
+};
+
struct hns_roce_eq {
struct hns_roce_dev *hr_dev;
void __iomem *doorbell;
@@ -502,7 +545,7 @@ struct hns_roce_eq {
struct hns_roce_eq_table {
struct hns_roce_eq *eq;
- void __iomem **eqc_base;
+ void __iomem **eqc_base; /* only for hw v1 */
};
struct hns_roce_caps {
@@ -550,7 +593,7 @@ struct hns_roce_caps {
u32 pbl_buf_pg_sz;
u32 pbl_hop_num;
int aeqe_depth;
- int ceqe_depth[HNS_ROCE_COMP_VEC_NUM];
+ int ceqe_depth;
enum ib_mtu max_mtu;
u32 qpc_bt_num;
u32 srqc_bt_num;
@@ -623,6 +666,8 @@ struct hns_roce_hw {
int (*dereg_mr)(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr);
int (*destroy_cq)(struct ib_cq *ibcq);
int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period);
+ int (*init_eq)(struct hns_roce_dev *hr_dev);
+ void (*cleanup_eq)(struct hns_roce_dev *hr_dev);
};
struct hns_roce_dev {
diff --git a/drivers/infiniband/hw/hns/hns_roce_eq.c b/drivers/infiniband/hw/hns/hns_roce_eq.c
deleted file mode 100644
index d184431..0000000
--- a/drivers/infiniband/hw/hns/hns_roce_eq.c
+++ /dev/null
@@ -1,759 +0,0 @@
-/*
- * Copyright (c) 2016 Hisilicon Limited.
- *
- * This software is available to you under a choice of one of two
- * licenses. You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- * Redistribution and use in source and binary forms, with or
- * without modification, are permitted provided that the following
- * conditions are met:
- *
- * - Redistributions of source code must retain the above
- * copyright notice, this list of conditions and the following
- * disclaimer.
- *
- * - Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following
- * disclaimer in the documentation and/or other materials
- * provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
-
-#include <linux/platform_device.h>
-#include <linux/interrupt.h>
-#include "hns_roce_common.h"
-#include "hns_roce_device.h"
-#include "hns_roce_eq.h"
-
-static void eq_set_cons_index(struct hns_roce_eq *eq, int req_not)
-{
- roce_raw_write((eq->cons_index & CONS_INDEX_MASK) |
- (req_not << eq->log_entries), eq->doorbell);
- /* Memory barrier */
- mb();
-}
-
-static struct hns_roce_aeqe *get_aeqe(struct hns_roce_eq *eq, u32 entry)
-{
- unsigned long off = (entry & (eq->entries - 1)) *
- HNS_ROCE_AEQ_ENTRY_SIZE;
-
- return (struct hns_roce_aeqe *)((u8 *)
- (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
- off % HNS_ROCE_BA_SIZE);
-}
-
-static struct hns_roce_aeqe *next_aeqe_sw(struct hns_roce_eq *eq)
-{
- struct hns_roce_aeqe *aeqe = get_aeqe(eq, eq->cons_index);
-
- return (roce_get_bit(aeqe->asyn, HNS_ROCE_AEQE_U32_4_OWNER_S) ^
- !!(eq->cons_index & eq->entries)) ? aeqe : NULL;
-}
-
-static void hns_roce_wq_catas_err_handle(struct hns_roce_dev *hr_dev,
- struct hns_roce_aeqe *aeqe, int qpn)
-{
- struct device *dev = &hr_dev->pdev->dev;
-
- dev_warn(dev, "Local Work Queue Catastrophic Error.\n");
- switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
- HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
- case HNS_ROCE_LWQCE_QPC_ERROR:
- dev_warn(dev, "QP %d, QPC error.\n", qpn);
- break;
- case HNS_ROCE_LWQCE_MTU_ERROR:
- dev_warn(dev, "QP %d, MTU error.\n", qpn);
- break;
- case HNS_ROCE_LWQCE_WQE_BA_ADDR_ERROR:
- dev_warn(dev, "QP %d, WQE BA addr error.\n", qpn);
- break;
- case HNS_ROCE_LWQCE_WQE_ADDR_ERROR:
- dev_warn(dev, "QP %d, WQE addr error.\n", qpn);
- break;
- case HNS_ROCE_LWQCE_SQ_WQE_SHIFT_ERROR:
- dev_warn(dev, "QP %d, WQE shift error\n", qpn);
- break;
- case HNS_ROCE_LWQCE_SL_ERROR:
- dev_warn(dev, "QP %d, SL error.\n", qpn);
- break;
- case HNS_ROCE_LWQCE_PORT_ERROR:
- dev_warn(dev, "QP %d, port error.\n", qpn);
- break;
- default:
- break;
- }
-}
-
-static void hns_roce_local_wq_access_err_handle(struct hns_roce_dev *hr_dev,
- struct hns_roce_aeqe *aeqe,
- int qpn)
-{
- struct device *dev = &hr_dev->pdev->dev;
-
- dev_warn(dev, "Local Access Violation Work Queue Error.\n");
- switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
- HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
- case HNS_ROCE_LAVWQE_R_KEY_VIOLATION:
- dev_warn(dev, "QP %d, R_key violation.\n", qpn);
- break;
- case HNS_ROCE_LAVWQE_LENGTH_ERROR:
- dev_warn(dev, "QP %d, length error.\n", qpn);
- break;
- case HNS_ROCE_LAVWQE_VA_ERROR:
- dev_warn(dev, "QP %d, VA error.\n", qpn);
- break;
- case HNS_ROCE_LAVWQE_PD_ERROR:
- dev_err(dev, "QP %d, PD error.\n", qpn);
- break;
- case HNS_ROCE_LAVWQE_RW_ACC_ERROR:
- dev_warn(dev, "QP %d, rw acc error.\n", qpn);
- break;
- case HNS_ROCE_LAVWQE_KEY_STATE_ERROR:
- dev_warn(dev, "QP %d, key state error.\n", qpn);
- break;
- case HNS_ROCE_LAVWQE_MR_OPERATION_ERROR:
- dev_warn(dev, "QP %d, MR operation error.\n", qpn);
- break;
- default:
- break;
- }
-}
-
-static void hns_roce_qp_err_handle(struct hns_roce_dev *hr_dev,
- struct hns_roce_aeqe *aeqe,
- int event_type)
-{
- struct device *dev = &hr_dev->pdev->dev;
- int phy_port;
- int qpn;
-
- qpn = roce_get_field(aeqe->event.qp_event.qp,
- HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M,
- HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S);
- phy_port = roce_get_field(aeqe->event.qp_event.qp,
- HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M,
- HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S);
- if (qpn <= 1)
- qpn = HNS_ROCE_MAX_PORTS * qpn + phy_port;
-
- switch (event_type) {
- case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
- dev_warn(dev, "Invalid Req Local Work Queue Error.\n"
- "QP %d, phy_port %d.\n", qpn, phy_port);
- break;
- case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
- hns_roce_wq_catas_err_handle(hr_dev, aeqe, qpn);
- break;
- case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
- hns_roce_local_wq_access_err_handle(hr_dev, aeqe, qpn);
- break;
- default:
- break;
- }
-
- hns_roce_qp_event(hr_dev, qpn, event_type);
-}
-
-static void hns_roce_cq_err_handle(struct hns_roce_dev *hr_dev,
- struct hns_roce_aeqe *aeqe,
- int event_type)
-{
- struct device *dev = &hr_dev->pdev->dev;
- u32 cqn;
-
- cqn = le32_to_cpu(roce_get_field(aeqe->event.cq_event.cq,
- HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M,
- HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S));
-
- switch (event_type) {
- case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
- dev_warn(dev, "CQ 0x%x access err.\n", cqn);
- break;
- case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
- dev_warn(dev, "CQ 0x%x overflow\n", cqn);
- break;
- case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
- dev_warn(dev, "CQ 0x%x ID invalid.\n", cqn);
- break;
- default:
- break;
- }
-
- hns_roce_cq_event(hr_dev, cqn, event_type);
-}
-
-static void hns_roce_db_overflow_handle(struct hns_roce_dev *hr_dev,
- struct hns_roce_aeqe *aeqe)
-{
- struct device *dev = &hr_dev->pdev->dev;
-
- switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
- HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
- case HNS_ROCE_DB_SUBTYPE_SDB_OVF:
- dev_warn(dev, "SDB overflow.\n");
- break;
- case HNS_ROCE_DB_SUBTYPE_SDB_ALM_OVF:
- dev_warn(dev, "SDB almost overflow.\n");
- break;
- case HNS_ROCE_DB_SUBTYPE_SDB_ALM_EMP:
- dev_warn(dev, "SDB almost empty.\n");
- break;
- case HNS_ROCE_DB_SUBTYPE_ODB_OVF:
- dev_warn(dev, "ODB overflow.\n");
- break;
- case HNS_ROCE_DB_SUBTYPE_ODB_ALM_OVF:
- dev_warn(dev, "ODB almost overflow.\n");
- break;
- case HNS_ROCE_DB_SUBTYPE_ODB_ALM_EMP:
- dev_warn(dev, "SDB almost empty.\n");
- break;
- default:
- break;
- }
-}
-
-static int hns_roce_aeq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
-{
- struct device *dev = &hr_dev->pdev->dev;
- struct hns_roce_aeqe *aeqe;
- int aeqes_found = 0;
- int event_type;
-
- while ((aeqe = next_aeqe_sw(eq))) {
- dev_dbg(dev, "aeqe = %p, aeqe->asyn.event_type = 0x%lx\n", aeqe,
- roce_get_field(aeqe->asyn,
- HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
- HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S));
- /* Memory barrier */
- rmb();
-
- event_type = roce_get_field(aeqe->asyn,
- HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
- HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S);
- switch (event_type) {
- case HNS_ROCE_EVENT_TYPE_PATH_MIG:
- dev_warn(dev, "PATH MIG not supported\n");
- break;
- case HNS_ROCE_EVENT_TYPE_COMM_EST:
- dev_warn(dev, "COMMUNICATION established\n");
- break;
- case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
- dev_warn(dev, "SQ DRAINED not supported\n");
- break;
- case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
- dev_warn(dev, "PATH MIG failed\n");
- break;
- case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
- case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
- case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
- hns_roce_qp_err_handle(hr_dev, aeqe, event_type);
- break;
- case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
- case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
- case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
- dev_warn(dev, "SRQ not support!\n");
- break;
- case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
- case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
- case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
- hns_roce_cq_err_handle(hr_dev, aeqe, event_type);
- break;
- case HNS_ROCE_EVENT_TYPE_PORT_CHANGE:
- dev_warn(dev, "port change.\n");
- break;
- case HNS_ROCE_EVENT_TYPE_MB:
- hns_roce_cmd_event(hr_dev,
- le16_to_cpu(aeqe->event.cmd.token),
- aeqe->event.cmd.status,
- le64_to_cpu(aeqe->event.cmd.out_param
- ));
- break;
- case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
- hns_roce_db_overflow_handle(hr_dev, aeqe);
- break;
- case HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW:
- dev_warn(dev, "CEQ 0x%lx overflow.\n",
- roce_get_field(aeqe->event.ce_event.ceqe,
- HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M,
- HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S));
- break;
- default:
- dev_warn(dev, "Unhandled event %d on EQ %d at index %u\n",
- event_type, eq->eqn, eq->cons_index);
- break;
- }
-
- eq->cons_index++;
- aeqes_found = 1;
-
- if (eq->cons_index > 2 * hr_dev->caps.aeqe_depth - 1) {
- dev_warn(dev, "cons_index overflow, set back to zero\n"
- );
- eq->cons_index = 0;
- }
- }
-
- eq_set_cons_index(eq, 0);
-
- return aeqes_found;
-}
-
-static struct hns_roce_ceqe *get_ceqe(struct hns_roce_eq *eq, u32 entry)
-{
- unsigned long off = (entry & (eq->entries - 1)) *
- HNS_ROCE_CEQ_ENTRY_SIZE;
-
- return (struct hns_roce_ceqe *)((u8 *)
- (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
- off % HNS_ROCE_BA_SIZE);
-}
-
-static struct hns_roce_ceqe *next_ceqe_sw(struct hns_roce_eq *eq)
-{
- struct hns_roce_ceqe *ceqe = get_ceqe(eq, eq->cons_index);
-
- return (!!(roce_get_bit(ceqe->ceqe.comp,
- HNS_ROCE_CEQE_CEQE_COMP_OWNER_S))) ^
- (!!(eq->cons_index & eq->entries)) ? ceqe : NULL;
-}
-
-static int hns_roce_ceq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
-{
- struct hns_roce_ceqe *ceqe;
- int ceqes_found = 0;
- u32 cqn;
-
- while ((ceqe = next_ceqe_sw(eq))) {
- /* Memory barrier */
- rmb();
- cqn = roce_get_field(ceqe->ceqe.comp,
- HNS_ROCE_CEQE_CEQE_COMP_CQN_M,
- HNS_ROCE_CEQE_CEQE_COMP_CQN_S);
- hns_roce_cq_completion(hr_dev, cqn);
-
- ++eq->cons_index;
- ceqes_found = 1;
-
- if (eq->cons_index > 2 * hr_dev->caps.ceqe_depth[eq->eqn] - 1) {
- dev_warn(&eq->hr_dev->pdev->dev,
- "cons_index overflow, set back to zero\n");
- eq->cons_index = 0;
- }
- }
-
- eq_set_cons_index(eq, 0);
-
- return ceqes_found;
-}
-
-static int hns_roce_aeq_ovf_int(struct hns_roce_dev *hr_dev,
- struct hns_roce_eq *eq)
-{
- struct device *dev = &eq->hr_dev->pdev->dev;
- int eqovf_found = 0;
- u32 caepaemask_val;
- u32 cealmovf_val;
- u32 caepaest_val;
- u32 aeshift_val;
- u32 ceshift_val;
- u32 cemask_val;
- int i = 0;
-
- /**
- * AEQ overflow ECC mult bit err CEQ overflow alarm
- * must clear interrupt, mask irq, clear irq, cancel mask operation
- */
- aeshift_val = roce_read(hr_dev, ROCEE_CAEP_AEQC_AEQE_SHIFT_REG);
-
- if (roce_get_bit(aeshift_val,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQ_ALM_OVF_INT_ST_S) == 1) {
- dev_warn(dev, "AEQ overflow!\n");
-
- /* Set mask */
- caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
- roce_set_bit(caepaemask_val,
- ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
- HNS_ROCE_INT_MASK_ENABLE);
- roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
-
- /* Clear int state(INT_WC : write 1 clear) */
- caepaest_val = roce_read(hr_dev, ROCEE_CAEP_AE_ST_REG);
- roce_set_bit(caepaest_val,
- ROCEE_CAEP_AE_ST_CAEP_AEQ_ALM_OVF_S, 1);
- roce_write(hr_dev, ROCEE_CAEP_AE_ST_REG, caepaest_val);
-
- /* Clear mask */
- caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
- roce_set_bit(caepaemask_val,
- ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
- HNS_ROCE_INT_MASK_DISABLE);
- roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
- }
-
- /* CEQ almost overflow */
- for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
- ceshift_val = roce_read(hr_dev, ROCEE_CAEP_CEQC_SHIFT_0_REG +
- i * CEQ_REG_OFFSET);
-
- if (roce_get_bit(ceshift_val,
- ROCEE_CAEP_CEQC_SHIFT_CAEP_CEQ_ALM_OVF_INT_ST_S) == 1) {
- dev_warn(dev, "CEQ[%d] almost overflow!\n", i);
- eqovf_found++;
-
- /* Set mask */
- cemask_val = roce_read(hr_dev,
- ROCEE_CAEP_CE_IRQ_MASK_0_REG +
- i * CEQ_REG_OFFSET);
- roce_set_bit(cemask_val,
- ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
- HNS_ROCE_INT_MASK_ENABLE);
- roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
- i * CEQ_REG_OFFSET, cemask_val);
-
- /* Clear int state(INT_WC : write 1 clear) */
- cealmovf_val = roce_read(hr_dev,
- ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
- i * CEQ_REG_OFFSET);
- roce_set_bit(cealmovf_val,
- ROCEE_CAEP_CEQ_ALM_OVF_CAEP_CEQ_ALM_OVF_S,
- 1);
- roce_write(hr_dev, ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
- i * CEQ_REG_OFFSET, cealmovf_val);
-
- /* Clear mask */
- cemask_val = roce_read(hr_dev,
- ROCEE_CAEP_CE_IRQ_MASK_0_REG +
- i * CEQ_REG_OFFSET);
- roce_set_bit(cemask_val,
- ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
- HNS_ROCE_INT_MASK_DISABLE);
- roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
- i * CEQ_REG_OFFSET, cemask_val);
- }
- }
-
- /* ECC multi-bit error alarm */
- dev_warn(dev, "ECC UCERR ALARM: 0x%x, 0x%x, 0x%x\n",
- roce_read(hr_dev, ROCEE_ECC_UCERR_ALM0_REG),
- roce_read(hr_dev, ROCEE_ECC_UCERR_ALM1_REG),
- roce_read(hr_dev, ROCEE_ECC_UCERR_ALM2_REG));
-
- dev_warn(dev, "ECC CERR ALARM: 0x%x, 0x%x, 0x%x\n",
- roce_read(hr_dev, ROCEE_ECC_CERR_ALM0_REG),
- roce_read(hr_dev, ROCEE_ECC_CERR_ALM1_REG),
- roce_read(hr_dev, ROCEE_ECC_CERR_ALM2_REG));
-
- return eqovf_found;
-}
-
-static int hns_roce_eq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq)
-{
- int eqes_found = 0;
-
- if (likely(eq->type_flag == HNS_ROCE_CEQ))
- /* CEQ irq routine, CEQ is pulse irq, not clear */
- eqes_found = hns_roce_ceq_int(hr_dev, eq);
- else if (likely(eq->type_flag == HNS_ROCE_AEQ))
- /* AEQ irq routine, AEQ is pulse irq, not clear */
- eqes_found = hns_roce_aeq_int(hr_dev, eq);
- else
- /* AEQ queue overflow irq */
- eqes_found = hns_roce_aeq_ovf_int(hr_dev, eq);
-
- return eqes_found;
-}
-
-static irqreturn_t hns_roce_msi_x_interrupt(int irq, void *eq_ptr)
-{
- int int_work = 0;
- struct hns_roce_eq *eq = eq_ptr;
- struct hns_roce_dev *hr_dev = eq->hr_dev;
-
- int_work = hns_roce_eq_int(hr_dev, eq);
-
- return IRQ_RETVAL(int_work);
-}
-
-static void hns_roce_enable_eq(struct hns_roce_dev *hr_dev, int eq_num,
- int enable_flag)
-{
- void __iomem *eqc = hr_dev->eq_table.eqc_base[eq_num];
- u32 val;
-
- val = readl(eqc);
-
- if (enable_flag)
- roce_set_field(val,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
- HNS_ROCE_EQ_STAT_VALID);
- else
- roce_set_field(val,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
- HNS_ROCE_EQ_STAT_INVALID);
- writel(val, eqc);
-}
-
-static int hns_roce_create_eq(struct hns_roce_dev *hr_dev,
- struct hns_roce_eq *eq)
-{
- void __iomem *eqc = hr_dev->eq_table.eqc_base[eq->eqn];
- struct device *dev = &hr_dev->pdev->dev;
- dma_addr_t tmp_dma_addr;
- u32 eqconsindx_val = 0;
- u32 eqcuridx_val = 0;
- u32 eqshift_val = 0;
- int num_bas = 0;
- int ret;
- int i;
-
- num_bas = (PAGE_ALIGN(eq->entries * eq->eqe_size) +
- HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
-
- if ((eq->entries * eq->eqe_size) > HNS_ROCE_BA_SIZE) {
- dev_err(dev, "[error]eq buf %d gt ba size(%d) need bas=%d\n",
- (eq->entries * eq->eqe_size), HNS_ROCE_BA_SIZE,
- num_bas);
- return -EINVAL;
- }
-
- eq->buf_list = kcalloc(num_bas, sizeof(*eq->buf_list), GFP_KERNEL);
- if (!eq->buf_list)
- return -ENOMEM;
-
- for (i = 0; i < num_bas; ++i) {
- eq->buf_list[i].buf = dma_alloc_coherent(dev, HNS_ROCE_BA_SIZE,
- &tmp_dma_addr,
- GFP_KERNEL);
- if (!eq->buf_list[i].buf) {
- ret = -ENOMEM;
- goto err_out_free_pages;
- }
-
- eq->buf_list[i].map = tmp_dma_addr;
- memset(eq->buf_list[i].buf, 0, HNS_ROCE_BA_SIZE);
- }
- eq->cons_index = 0;
- roce_set_field(eqshift_val,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
- HNS_ROCE_EQ_STAT_INVALID);
- roce_set_field(eqshift_val,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_M,
- ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_S,
- eq->log_entries);
- writel(eqshift_val, eqc);
-
- /* Configure eq extended address 12~44bit */
- writel((u32)(eq->buf_list[0].map >> 12), eqc + 4);
-
- /*
- * Configure eq extended address 45~49 bit.
- * 44 = 32 + 12, When evaluating addr to hardware, shift 12 because of
- * using 4K page, and shift more 32 because of
- * caculating the high 32 bit value evaluated to hardware.
- */
- roce_set_field(eqcuridx_val, ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_M,
- ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_S,
- eq->buf_list[0].map >> 44);
- roce_set_field(eqcuridx_val,
- ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_M,
- ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_S, 0);
- writel(eqcuridx_val, eqc + 8);
-
- /* Configure eq consumer index */
- roce_set_field(eqconsindx_val,
- ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_M,
- ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_S, 0);
- writel(eqconsindx_val, eqc + 0xc);
-
- return 0;
-
-err_out_free_pages:
- for (i = i - 1; i >= 0; i--)
- dma_free_coherent(dev, HNS_ROCE_BA_SIZE, eq->buf_list[i].buf,
- eq->buf_list[i].map);
-
- kfree(eq->buf_list);
- return ret;
-}
-
-static void hns_roce_free_eq(struct hns_roce_dev *hr_dev,
- struct hns_roce_eq *eq)
-{
- int i = 0;
- int npages = (PAGE_ALIGN(eq->eqe_size * eq->entries) +
- HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
-
- if (!eq->buf_list)
- return;
-
- for (i = 0; i < npages; ++i)
- dma_free_coherent(&hr_dev->pdev->dev, HNS_ROCE_BA_SIZE,
- eq->buf_list[i].buf, eq->buf_list[i].map);
-
- kfree(eq->buf_list);
-}
-
-static void hns_roce_int_mask_en(struct hns_roce_dev *hr_dev)
-{
- int i = 0;
- u32 aemask_val;
- int masken = 0;
-
- /* AEQ INT */
- aemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
- roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
- masken);
- roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AE_IRQ_MASK_S, masken);
- roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, aemask_val);
-
- /* CEQ INT */
- for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
- /* IRQ mask */
- roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
- i * CEQ_REG_OFFSET, masken);
- }
-}
-
-static void hns_roce_ce_int_default_cfg(struct hns_roce_dev *hr_dev)
-{
- /* Configure ce int interval */
- roce_write(hr_dev, ROCEE_CAEP_CE_INTERVAL_CFG_REG,
- HNS_ROCE_CEQ_DEFAULT_INTERVAL);
-
- /* Configure ce int burst num */
- roce_write(hr_dev, ROCEE_CAEP_CE_BURST_NUM_CFG_REG,
- HNS_ROCE_CEQ_DEFAULT_BURST_NUM);
-}
-
-int hns_roce_init_eq_table(struct hns_roce_dev *hr_dev)
-{
- struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
- struct device *dev = &hr_dev->pdev->dev;
- struct hns_roce_eq *eq = NULL;
- int eq_num = 0;
- int ret = 0;
- int i = 0;
- int j = 0;
-
- eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
- eq_table->eq = kcalloc(eq_num, sizeof(*eq_table->eq), GFP_KERNEL);
- if (!eq_table->eq)
- return -ENOMEM;
-
- eq_table->eqc_base = kcalloc(eq_num, sizeof(*eq_table->eqc_base),
- GFP_KERNEL);
- if (!eq_table->eqc_base) {
- ret = -ENOMEM;
- goto err_eqc_base_alloc_fail;
- }
-
- for (i = 0; i < eq_num; i++) {
- eq = &eq_table->eq[i];
- eq->hr_dev = hr_dev;
- eq->eqn = i;
- eq->irq = hr_dev->irq[i];
- eq->log_page_size = PAGE_SHIFT;
-
- if (i < hr_dev->caps.num_comp_vectors) {
- /* CEQ */
- eq_table->eqc_base[i] = hr_dev->reg_base +
- ROCEE_CAEP_CEQC_SHIFT_0_REG +
- HNS_ROCE_CEQC_REG_OFFSET * i;
- eq->type_flag = HNS_ROCE_CEQ;
- eq->doorbell = hr_dev->reg_base +
- ROCEE_CAEP_CEQC_CONS_IDX_0_REG +
- HNS_ROCE_CEQC_REG_OFFSET * i;
- eq->entries = hr_dev->caps.ceqe_depth[i];
- eq->log_entries = ilog2(eq->entries);
- eq->eqe_size = sizeof(struct hns_roce_ceqe);
- } else {
- /* AEQ */
- eq_table->eqc_base[i] = hr_dev->reg_base +
- ROCEE_CAEP_AEQC_AEQE_SHIFT_REG;
- eq->type_flag = HNS_ROCE_AEQ;
- eq->doorbell = hr_dev->reg_base +
- ROCEE_CAEP_AEQE_CONS_IDX_REG;
- eq->entries = hr_dev->caps.aeqe_depth;
- eq->log_entries = ilog2(eq->entries);
- eq->eqe_size = sizeof(struct hns_roce_aeqe);
- }
- }
-
- /* Disable irq */
- hns_roce_int_mask_en(hr_dev);
-
- /* Configure CE irq interval and burst num */
- hns_roce_ce_int_default_cfg(hr_dev);
-
- for (i = 0; i < eq_num; i++) {
- ret = hns_roce_create_eq(hr_dev, &eq_table->eq[i]);
- if (ret) {
- dev_err(dev, "eq create failed\n");
- goto err_create_eq_fail;
- }
- }
-
- for (j = 0; j < eq_num; j++) {
- ret = request_irq(eq_table->eq[j].irq, hns_roce_msi_x_interrupt,
- 0, hr_dev->irq_names[j], eq_table->eq + j);
- if (ret) {
- dev_err(dev, "request irq error!\n");
- goto err_request_irq_fail;
- }
- }
-
- for (i = 0; i < eq_num; i++)
- hns_roce_enable_eq(hr_dev, i, EQ_ENABLE);
-
- return 0;
-
-err_request_irq_fail:
- for (j = j - 1; j >= 0; j--)
- free_irq(eq_table->eq[j].irq, eq_table->eq + j);
-
-err_create_eq_fail:
- for (i = i - 1; i >= 0; i--)
- hns_roce_free_eq(hr_dev, &eq_table->eq[i]);
-
- kfree(eq_table->eqc_base);
-
-err_eqc_base_alloc_fail:
- kfree(eq_table->eq);
-
- return ret;
-}
-
-void hns_roce_cleanup_eq_table(struct hns_roce_dev *hr_dev)
-{
- int i;
- int eq_num;
- struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
-
- eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
- for (i = 0; i < eq_num; i++) {
- /* Disable EQ */
- hns_roce_enable_eq(hr_dev, i, EQ_DISABLE);
-
- free_irq(eq_table->eq[i].irq, eq_table->eq + i);
-
- hns_roce_free_eq(hr_dev, &eq_table->eq[i]);
- }
-
- kfree(eq_table->eqc_base);
- kfree(eq_table->eq);
-}
diff --git a/drivers/infiniband/hw/hns/hns_roce_eq.h b/drivers/infiniband/hw/hns/hns_roce_eq.h
deleted file mode 100644
index c6d212d..0000000
--- a/drivers/infiniband/hw/hns/hns_roce_eq.h
+++ /dev/null
@@ -1,134 +0,0 @@
-/*
- * Copyright (c) 2016 Hisilicon Limited.
- *
- * This software is available to you under a choice of one of two
- * licenses. You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- * Redistribution and use in source and binary forms, with or
- * without modification, are permitted provided that the following
- * conditions are met:
- *
- * - Redistributions of source code must retain the above
- * copyright notice, this list of conditions and the following
- * disclaimer.
- *
- * - Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following
- * disclaimer in the documentation and/or other materials
- * provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
-
-#ifndef _HNS_ROCE_EQ_H
-#define _HNS_ROCE_EQ_H
-
-#define HNS_ROCE_CEQ 1
-#define HNS_ROCE_AEQ 2
-
-#define HNS_ROCE_CEQ_ENTRY_SIZE 0x4
-#define HNS_ROCE_AEQ_ENTRY_SIZE 0x10
-#define HNS_ROCE_CEQC_REG_OFFSET 0x18
-
-#define HNS_ROCE_CEQ_DEFAULT_INTERVAL 0x10
-#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x10
-
-#define HNS_ROCE_INT_MASK_DISABLE 0
-#define HNS_ROCE_INT_MASK_ENABLE 1
-
-#define EQ_ENABLE 1
-#define EQ_DISABLE 0
-#define CONS_INDEX_MASK 0xffff
-
-#define CEQ_REG_OFFSET 0x18
-
-enum {
- HNS_ROCE_EQ_STAT_INVALID = 0,
- HNS_ROCE_EQ_STAT_VALID = 2,
-};
-
-struct hns_roce_aeqe {
- u32 asyn;
- union {
- struct {
- u32 qp;
- u32 rsv0;
- u32 rsv1;
- } qp_event;
-
- struct {
- u32 cq;
- u32 rsv0;
- u32 rsv1;
- } cq_event;
-
- struct {
- u32 port;
- u32 rsv0;
- u32 rsv1;
- } port_event;
-
- struct {
- u32 ceqe;
- u32 rsv0;
- u32 rsv1;
- } ce_event;
-
- struct {
- __le64 out_param;
- __le16 token;
- u8 status;
- u8 rsv0;
- } __packed cmd;
- } event;
-};
-
-#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S 16
-#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M \
- (((1UL << 8) - 1) << HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S)
-
-#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S 24
-#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M \
- (((1UL << 7) - 1) << HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)
-
-#define HNS_ROCE_AEQE_U32_4_OWNER_S 31
-
-#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S 0
-#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M \
- (((1UL << 24) - 1) << HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S)
-
-#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S 25
-#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M \
- (((1UL << 3) - 1) << HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S)
-
-#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S 0
-#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M \
- (((1UL << 16) - 1) << HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S)
-
-#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S 0
-#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M \
- (((1UL << 5) - 1) << HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S)
-
-struct hns_roce_ceqe {
- union {
- int comp;
- } ceqe;
-};
-
-#define HNS_ROCE_CEQE_CEQE_COMP_OWNER_S 0
-
-#define HNS_ROCE_CEQE_CEQE_COMP_CQN_S 16
-#define HNS_ROCE_CEQE_CEQE_COMP_CQN_M \
- (((1UL << 16) - 1) << HNS_ROCE_CEQE_CEQE_COMP_CQN_S)
-
-#endif /* _HNS_ROCE_EQ_H */
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index af27168..6100ace 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -33,6 +33,7 @@
#include <linux/platform_device.h>
#include <linux/acpi.h>
#include <linux/etherdevice.h>
+#include <linux/interrupt.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <rdma/ib_umem.h>
@@ -1492,9 +1493,9 @@ static int hns_roce_v1_profile(struct hns_roce_dev *hr_dev)
caps->max_sq_inline = HNS_ROCE_V1_INLINE_SIZE;
caps->num_uars = HNS_ROCE_V1_UAR_NUM;
caps->phy_num_uars = HNS_ROCE_V1_PHY_UAR_NUM;
- caps->num_aeq_vectors = HNS_ROCE_AEQE_VEC_NUM;
- caps->num_comp_vectors = HNS_ROCE_COMP_VEC_NUM;
- caps->num_other_vectors = HNS_ROCE_AEQE_OF_VEC_NUM;
+ caps->num_aeq_vectors = HNS_ROCE_V1_AEQE_VEC_NUM;
+ caps->num_comp_vectors = HNS_ROCE_V1_COMP_VEC_NUM;
+ caps->num_other_vectors = HNS_ROCE_V1_ABNORMAL_VEC_NUM;
caps->num_mtpts = HNS_ROCE_V1_MAX_MTPT_NUM;
caps->num_mtt_segs = HNS_ROCE_V1_MAX_MTT_SEGS;
caps->num_pds = HNS_ROCE_V1_MAX_PD_NUM;
@@ -1529,10 +1530,8 @@ static int hns_roce_v1_profile(struct hns_roce_dev *hr_dev)
caps->num_ports + 1;
}
- for (i = 0; i < caps->num_comp_vectors; i++)
- caps->ceqe_depth[i] = HNS_ROCE_V1_NUM_COMP_EQE;
-
- caps->aeqe_depth = HNS_ROCE_V1_NUM_ASYNC_EQE;
+ caps->ceqe_depth = HNS_ROCE_V1_COMP_EQE_NUM;
+ caps->aeqe_depth = HNS_ROCE_V1_ASYNC_EQE_NUM;
caps->local_ca_ack_delay = le32_to_cpu(roce_read(hr_dev,
ROCEE_ACK_DELAY_REG));
caps->max_mtu = IB_MTU_2048;
@@ -3960,6 +3959,727 @@ static int hns_roce_v1_destroy_cq(struct ib_cq *ibcq)
return ret;
}
+static void set_eq_cons_index_v1(struct hns_roce_eq *eq, int req_not)
+{
+ roce_raw_write((eq->cons_index & HNS_ROCE_V1_CONS_IDX_M) |
+ (req_not << eq->log_entries), eq->doorbell);
+ /* Memory barrier */
+ mb();
+}
+
+static void hns_roce_v1_wq_catas_err_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe, int qpn)
+{
+ struct device *dev = &hr_dev->pdev->dev;
+
+ dev_warn(dev, "Local Work Queue Catastrophic Error.\n");
+ switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
+ HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
+ case HNS_ROCE_LWQCE_QPC_ERROR:
+ dev_warn(dev, "QP %d, QPC error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_MTU_ERROR:
+ dev_warn(dev, "QP %d, MTU error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_WQE_BA_ADDR_ERROR:
+ dev_warn(dev, "QP %d, WQE BA addr error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_WQE_ADDR_ERROR:
+ dev_warn(dev, "QP %d, WQE addr error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_SQ_WQE_SHIFT_ERROR:
+ dev_warn(dev, "QP %d, WQE shift error\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_SL_ERROR:
+ dev_warn(dev, "QP %d, SL error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_PORT_ERROR:
+ dev_warn(dev, "QP %d, port error.\n", qpn);
+ break;
+ default:
+ break;
+ }
+}
+
+static void hns_roce_v1_local_wq_access_err_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe,
+ int qpn)
+{
+ struct device *dev = &hr_dev->pdev->dev;
+
+ dev_warn(dev, "Local Access Violation Work Queue Error.\n");
+ switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
+ HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
+ case HNS_ROCE_LAVWQE_R_KEY_VIOLATION:
+ dev_warn(dev, "QP %d, R_key violation.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_LENGTH_ERROR:
+ dev_warn(dev, "QP %d, length error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_VA_ERROR:
+ dev_warn(dev, "QP %d, VA error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_PD_ERROR:
+ dev_err(dev, "QP %d, PD error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_RW_ACC_ERROR:
+ dev_warn(dev, "QP %d, rw acc error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_KEY_STATE_ERROR:
+ dev_warn(dev, "QP %d, key state error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_MR_OPERATION_ERROR:
+ dev_warn(dev, "QP %d, MR operation error.\n", qpn);
+ break;
+ default:
+ break;
+ }
+}
+
+static void hns_roce_v1_qp_err_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe,
+ int event_type)
+{
+ struct device *dev = &hr_dev->pdev->dev;
+ int phy_port;
+ int qpn;
+
+ qpn = roce_get_field(aeqe->event.qp_event.qp,
+ HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M,
+ HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S);
+ phy_port = roce_get_field(aeqe->event.qp_event.qp,
+ HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M,
+ HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S);
+ if (qpn <= 1)
+ qpn = HNS_ROCE_MAX_PORTS * qpn + phy_port;
+
+ switch (event_type) {
+ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
+ dev_warn(dev, "Invalid Req Local Work Queue Error.\n"
+ "QP %d, phy_port %d.\n", qpn, phy_port);
+ break;
+ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
+ hns_roce_v1_wq_catas_err_handle(hr_dev, aeqe, qpn);
+ break;
+ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
+ hns_roce_v1_local_wq_access_err_handle(hr_dev, aeqe, qpn);
+ break;
+ default:
+ break;
+ }
+
+ hns_roce_qp_event(hr_dev, qpn, event_type);
+}
+
+static void hns_roce_v1_cq_err_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe,
+ int event_type)
+{
+ struct device *dev = &hr_dev->pdev->dev;
+ u32 cqn;
+
+ cqn = le32_to_cpu(roce_get_field(aeqe->event.cq_event.cq,
+ HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M,
+ HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S));
+
+ switch (event_type) {
+ case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
+ dev_warn(dev, "CQ 0x%x access err.\n", cqn);
+ break;
+ case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+ dev_warn(dev, "CQ 0x%x overflow\n", cqn);
+ break;
+ case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
+ dev_warn(dev, "CQ 0x%x ID invalid.\n", cqn);
+ break;
+ default:
+ break;
+ }
+
+ hns_roce_cq_event(hr_dev, cqn, event_type);
+}
+
+static void hns_roce_v1_db_overflow_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe)
+{
+ struct device *dev = &hr_dev->pdev->dev;
+
+ switch (roce_get_field(aeqe->asyn, HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M,
+ HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S)) {
+ case HNS_ROCE_DB_SUBTYPE_SDB_OVF:
+ dev_warn(dev, "SDB overflow.\n");
+ break;
+ case HNS_ROCE_DB_SUBTYPE_SDB_ALM_OVF:
+ dev_warn(dev, "SDB almost overflow.\n");
+ break;
+ case HNS_ROCE_DB_SUBTYPE_SDB_ALM_EMP:
+ dev_warn(dev, "SDB almost empty.\n");
+ break;
+ case HNS_ROCE_DB_SUBTYPE_ODB_OVF:
+ dev_warn(dev, "ODB overflow.\n");
+ break;
+ case HNS_ROCE_DB_SUBTYPE_ODB_ALM_OVF:
+ dev_warn(dev, "ODB almost overflow.\n");
+ break;
+ case HNS_ROCE_DB_SUBTYPE_ODB_ALM_EMP:
+ dev_warn(dev, "SDB almost empty.\n");
+ break;
+ default:
+ break;
+ }
+}
+
+static struct hns_roce_aeqe *get_aeqe_v1(struct hns_roce_eq *eq, u32 entry)
+{
+ unsigned long off = (entry & (eq->entries - 1)) *
+ HNS_ROCE_AEQ_ENTRY_SIZE;
+
+ return (struct hns_roce_aeqe *)((u8 *)
+ (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
+ off % HNS_ROCE_BA_SIZE);
+}
+
+static struct hns_roce_aeqe *next_aeqe_sw_v1(struct hns_roce_eq *eq)
+{
+ struct hns_roce_aeqe *aeqe = get_aeqe_v1(eq, eq->cons_index);
+
+ return (roce_get_bit(aeqe->asyn, HNS_ROCE_AEQE_U32_4_OWNER_S) ^
+ !!(eq->cons_index & eq->entries)) ? aeqe : NULL;
+}
+
+static int hns_roce_v1_aeq_int(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ struct device *dev = &hr_dev->pdev->dev;
+ struct hns_roce_aeqe *aeqe;
+ int aeqes_found = 0;
+ int event_type;
+
+ while ((aeqe = next_aeqe_sw_v1(eq))) {
+ dev_dbg(dev, "aeqe = %p, aeqe->asyn.event_type = 0x%lx\n", aeqe,
+ roce_get_field(aeqe->asyn,
+ HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
+ HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S));
+ /* Memory barrier */
+ rmb();
+
+ event_type = roce_get_field(aeqe->asyn,
+ HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M,
+ HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S);
+ switch (event_type) {
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+ dev_warn(dev, "PATH MIG not supported\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_COMM_EST:
+ dev_warn(dev, "COMMUNICATION established\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
+ dev_warn(dev, "SQ DRAINED not supported\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
+ dev_warn(dev, "PATH MIG failed\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
+ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
+ hns_roce_v1_qp_err_handle(hr_dev, aeqe, event_type);
+ break;
+ case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
+ case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
+ dev_warn(dev, "SRQ not support!\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+ case HNS_ROCE_EVENT_TYPE_CQ_ID_INVALID:
+ hns_roce_v1_cq_err_handle(hr_dev, aeqe, event_type);
+ break;
+ case HNS_ROCE_EVENT_TYPE_PORT_CHANGE:
+ dev_warn(dev, "port change.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_MB:
+ hns_roce_cmd_event(hr_dev,
+ le16_to_cpu(aeqe->event.cmd.token),
+ aeqe->event.cmd.status,
+ le64_to_cpu(aeqe->event.cmd.out_param
+ ));
+ break;
+ case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+ hns_roce_v1_db_overflow_handle(hr_dev, aeqe);
+ break;
+ case HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW:
+ dev_warn(dev, "CEQ 0x%lx overflow.\n",
+ roce_get_field(aeqe->event.ce_event.ceqe,
+ HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M,
+ HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S));
+ break;
+ default:
+ dev_warn(dev, "Unhandled event %d on EQ %d at idx %u.\n",
+ event_type, eq->eqn, eq->cons_index);
+ break;
+ }
+
+ eq->cons_index++;
+ aeqes_found = 1;
+
+ if (eq->cons_index > 2 * hr_dev->caps.aeqe_depth - 1) {
+ dev_warn(dev, "cons_index overflow, set back to 0.\n");
+ eq->cons_index = 0;
+ }
+ }
+
+ set_eq_cons_index_v1(eq, 0);
+
+ return aeqes_found;
+}
+
+static struct hns_roce_ceqe *get_ceqe_v1(struct hns_roce_eq *eq, u32 entry)
+{
+ unsigned long off = (entry & (eq->entries - 1)) *
+ HNS_ROCE_CEQ_ENTRY_SIZE;
+
+ return (struct hns_roce_ceqe *)((u8 *)
+ (eq->buf_list[off / HNS_ROCE_BA_SIZE].buf) +
+ off % HNS_ROCE_BA_SIZE);
+}
+
+static struct hns_roce_ceqe *next_ceqe_sw_v1(struct hns_roce_eq *eq)
+{
+ struct hns_roce_ceqe *ceqe = get_ceqe_v1(eq, eq->cons_index);
+
+ return (!!(roce_get_bit(ceqe->comp,
+ HNS_ROCE_CEQE_CEQE_COMP_OWNER_S))) ^
+ (!!(eq->cons_index & eq->entries)) ? ceqe : NULL;
+}
+
+static int hns_roce_v1_ceq_int(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ struct hns_roce_ceqe *ceqe;
+ int ceqes_found = 0;
+ u32 cqn;
+
+ while ((ceqe = next_ceqe_sw_v1(eq))) {
+ /* Memory barrier */
+ rmb();
+ cqn = roce_get_field(ceqe->comp,
+ HNS_ROCE_CEQE_CEQE_COMP_CQN_M,
+ HNS_ROCE_CEQE_CEQE_COMP_CQN_S);
+ hns_roce_cq_completion(hr_dev, cqn);
+
+ ++eq->cons_index;
+ ceqes_found = 1;
+
+ if (eq->cons_index > 2 * hr_dev->caps.ceqe_depth - 1) {
+ dev_warn(&eq->hr_dev->pdev->dev,
+ "cons_index overflow, set back to 0.\n");
+ eq->cons_index = 0;
+ }
+ }
+
+ set_eq_cons_index_v1(eq, 0);
+
+ return ceqes_found;
+}
+
+static irqreturn_t hns_roce_v1_msix_interrupt_eq(int irq, void *eq_ptr)
+{
+ struct hns_roce_eq *eq = eq_ptr;
+ struct hns_roce_dev *hr_dev = eq->hr_dev;
+ int int_work = 0;
+
+ if (eq->type_flag == HNS_ROCE_CEQ)
+ /* CEQ irq routine; CEQ is a pulse irq, no need to clear it */
+ int_work = hns_roce_v1_ceq_int(hr_dev, eq);
+ else
+ /* AEQ irq routine; AEQ is a pulse irq, no need to clear it */
+ int_work = hns_roce_v1_aeq_int(hr_dev, eq);
+
+ return IRQ_RETVAL(int_work);
+}
+
+static irqreturn_t hns_roce_v1_msix_interrupt_abn(int irq, void *dev_id)
+{
+ struct hns_roce_dev *hr_dev = dev_id;
+ struct device *dev = &hr_dev->pdev->dev;
+ int int_work = 0;
+ u32 caepaemask_val;
+ u32 cealmovf_val;
+ u32 caepaest_val;
+ u32 aeshift_val;
+ u32 ceshift_val;
+ u32 cemask_val;
+ int i;
+
+ /*
+ * Abnormal interrupt:
+ * AEQ overflow, ECC multi-bit err, CEQ overflow must clear
+ * interrupt, mask irq, clear irq, cancel mask operation
+ */
+ aeshift_val = roce_read(hr_dev, ROCEE_CAEP_AEQC_AEQE_SHIFT_REG);
+
+ /* AEQE overflow */
+ if (roce_get_bit(aeshift_val,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQ_ALM_OVF_INT_ST_S) == 1) {
+ dev_warn(dev, "AEQ overflow!\n");
+
+ /* Set mask */
+ caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
+ roce_set_bit(caepaemask_val,
+ ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
+ HNS_ROCE_INT_MASK_ENABLE);
+ roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
+
+ /* Clear int state(INT_WC : write 1 clear) */
+ caepaest_val = roce_read(hr_dev, ROCEE_CAEP_AE_ST_REG);
+ roce_set_bit(caepaest_val,
+ ROCEE_CAEP_AE_ST_CAEP_AEQ_ALM_OVF_S, 1);
+ roce_write(hr_dev, ROCEE_CAEP_AE_ST_REG, caepaest_val);
+
+ /* Clear mask */
+ caepaemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
+ roce_set_bit(caepaemask_val,
+ ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
+ HNS_ROCE_INT_MASK_DISABLE);
+ roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, caepaemask_val);
+ }
+
+ /* CEQ almost overflow */
+ for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
+ ceshift_val = roce_read(hr_dev, ROCEE_CAEP_CEQC_SHIFT_0_REG +
+ i * CEQ_REG_OFFSET);
+
+ if (roce_get_bit(ceshift_val,
+ ROCEE_CAEP_CEQC_SHIFT_CAEP_CEQ_ALM_OVF_INT_ST_S) == 1) {
+ dev_warn(dev, "CEQ[%d] almost overflow!\n", i);
+ int_work++;
+
+ /* Set mask */
+ cemask_val = roce_read(hr_dev,
+ ROCEE_CAEP_CE_IRQ_MASK_0_REG +
+ i * CEQ_REG_OFFSET);
+ roce_set_bit(cemask_val,
+ ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
+ HNS_ROCE_INT_MASK_ENABLE);
+ roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
+ i * CEQ_REG_OFFSET, cemask_val);
+
+ /* Clear int state(INT_WC : write 1 clear) */
+ cealmovf_val = roce_read(hr_dev,
+ ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
+ i * CEQ_REG_OFFSET);
+ roce_set_bit(cealmovf_val,
+ ROCEE_CAEP_CEQ_ALM_OVF_CAEP_CEQ_ALM_OVF_S,
+ 1);
+ roce_write(hr_dev, ROCEE_CAEP_CEQ_ALM_OVF_0_REG +
+ i * CEQ_REG_OFFSET, cealmovf_val);
+
+ /* Clear mask */
+ cemask_val = roce_read(hr_dev,
+ ROCEE_CAEP_CE_IRQ_MASK_0_REG +
+ i * CEQ_REG_OFFSET);
+ roce_set_bit(cemask_val,
+ ROCEE_CAEP_CE_IRQ_MASK_CAEP_CEQ_ALM_OVF_MASK_S,
+ HNS_ROCE_INT_MASK_DISABLE);
+ roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
+ i * CEQ_REG_OFFSET, cemask_val);
+ }
+ }
+
+ /* ECC multi-bit error alarm */
+ dev_warn(dev, "ECC UCERR ALARM: 0x%x, 0x%x, 0x%x\n",
+ roce_read(hr_dev, ROCEE_ECC_UCERR_ALM0_REG),
+ roce_read(hr_dev, ROCEE_ECC_UCERR_ALM1_REG),
+ roce_read(hr_dev, ROCEE_ECC_UCERR_ALM2_REG));
+
+ dev_warn(dev, "ECC CERR ALARM: 0x%x, 0x%x, 0x%x\n",
+ roce_read(hr_dev, ROCEE_ECC_CERR_ALM0_REG),
+ roce_read(hr_dev, ROCEE_ECC_CERR_ALM1_REG),
+ roce_read(hr_dev, ROCEE_ECC_CERR_ALM2_REG));
+
+ return IRQ_RETVAL(int_work);
+}
+
+static void hns_roce_v1_int_mask_enable(struct hns_roce_dev *hr_dev)
+{
+ u32 aemask_val;
+ int masken = 0;
+ int i;
+
+ /* AEQ INT */
+ aemask_val = roce_read(hr_dev, ROCEE_CAEP_AE_MASK_REG);
+ roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AEQ_ALM_OVF_MASK_S,
+ masken);
+ roce_set_bit(aemask_val, ROCEE_CAEP_AE_MASK_CAEP_AE_IRQ_MASK_S, masken);
+ roce_write(hr_dev, ROCEE_CAEP_AE_MASK_REG, aemask_val);
+
+ /* CEQ INT */
+ for (i = 0; i < hr_dev->caps.num_comp_vectors; i++) {
+ /* IRQ mask */
+ roce_write(hr_dev, ROCEE_CAEP_CE_IRQ_MASK_0_REG +
+ i * CEQ_REG_OFFSET, masken);
+ }
+}
+
+static void hns_roce_v1_free_eq(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ int npages = (PAGE_ALIGN(eq->eqe_size * eq->entries) +
+ HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
+ int i;
+
+ if (!eq->buf_list)
+ return;
+
+ for (i = 0; i < npages; ++i)
+ dma_free_coherent(&hr_dev->pdev->dev, HNS_ROCE_BA_SIZE,
+ eq->buf_list[i].buf, eq->buf_list[i].map);
+
+ kfree(eq->buf_list);
+}
+
+static void hns_roce_v1_enable_eq(struct hns_roce_dev *hr_dev, int eq_num,
+ int enable_flag)
+{
+ void __iomem *eqc = hr_dev->eq_table.eqc_base[eq_num];
+ u32 val;
+
+ val = readl(eqc);
+
+ if (enable_flag)
+ roce_set_field(val,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
+ HNS_ROCE_EQ_STAT_VALID);
+ else
+ roce_set_field(val,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
+ HNS_ROCE_EQ_STAT_INVALID);
+ writel(val, eqc);
+}
+
+static int hns_roce_v1_create_eq(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ void __iomem *eqc = hr_dev->eq_table.eqc_base[eq->eqn];
+ struct device *dev = &hr_dev->pdev->dev;
+ dma_addr_t tmp_dma_addr;
+ u32 eqconsindx_val = 0;
+ u32 eqcuridx_val = 0;
+ u32 eqshift_val = 0;
+ int num_bas;
+ int ret;
+ int i;
+
+ num_bas = (PAGE_ALIGN(eq->entries * eq->eqe_size) +
+ HNS_ROCE_BA_SIZE - 1) / HNS_ROCE_BA_SIZE;
+
+ if ((eq->entries * eq->eqe_size) > HNS_ROCE_BA_SIZE) {
+ dev_err(dev, "[error]eq buf %d gt ba size(%d) need bas=%d\n",
+ (eq->entries * eq->eqe_size), HNS_ROCE_BA_SIZE,
+ num_bas);
+ return -EINVAL;
+ }
+
+ eq->buf_list = kcalloc(num_bas, sizeof(*eq->buf_list), GFP_KERNEL);
+ if (!eq->buf_list)
+ return -ENOMEM;
+
+ for (i = 0; i < num_bas; ++i) {
+ eq->buf_list[i].buf = dma_alloc_coherent(dev, HNS_ROCE_BA_SIZE,
+ &tmp_dma_addr,
+ GFP_KERNEL);
+ if (!eq->buf_list[i].buf) {
+ ret = -ENOMEM;
+ goto err_out_free_pages;
+ }
+
+ eq->buf_list[i].map = tmp_dma_addr;
+ memset(eq->buf_list[i].buf, 0, HNS_ROCE_BA_SIZE);
+ }
+ eq->cons_index = 0;
+ roce_set_field(eqshift_val,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_M,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_STATE_S,
+ HNS_ROCE_EQ_STAT_INVALID);
+ roce_set_field(eqshift_val,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_M,
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_CAEP_AEQC_AEQE_SHIFT_S,
+ eq->log_entries);
+ writel(eqshift_val, eqc);
+
+ /* Configure eq extended address 12~44bit */
+ writel((u32)(eq->buf_list[0].map >> 12), eqc + 4);
+
+ /*
+ * Configure eq extended address 45~49 bit.
+ * 44 = 32 + 12. When writing the address to hardware, shift right by
+ * 12 because a 4K page is used, and by another 32 to obtain the
+ * high 32-bit value written to hardware.
+ */
+ roce_set_field(eqcuridx_val, ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_M,
+ ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQ_BT_H_S,
+ eq->buf_list[0].map >> 44);
+ roce_set_field(eqcuridx_val,
+ ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_M,
+ ROCEE_CAEP_AEQE_CUR_IDX_CAEP_AEQE_CUR_IDX_S, 0);
+ writel(eqcuridx_val, eqc + 8);
+
+ /* Configure eq consumer index */
+ roce_set_field(eqconsindx_val,
+ ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_M,
+ ROCEE_CAEP_AEQE_CONS_IDX_CAEP_AEQE_CONS_IDX_S, 0);
+ writel(eqconsindx_val, eqc + 0xc);
+
+ return 0;
+
+err_out_free_pages:
+ for (i -= 1; i >= 0; i--)
+ dma_free_coherent(dev, HNS_ROCE_BA_SIZE, eq->buf_list[i].buf,
+ eq->buf_list[i].map);
+
+ kfree(eq->buf_list);
+ return ret;
+}
+
+static int hns_roce_v1_init_eq_table(struct hns_roce_dev *hr_dev)
+{
+ struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
+ struct device *dev = &hr_dev->pdev->dev;
+ struct hns_roce_eq *eq;
+ int irq_num;
+ int eq_num;
+ int ret;
+ int i, j;
+
+ eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
+ irq_num = eq_num + hr_dev->caps.num_other_vectors;
+
+ eq_table->eq = kcalloc(eq_num, sizeof(*eq_table->eq), GFP_KERNEL);
+ if (!eq_table->eq)
+ return -ENOMEM;
+
+ eq_table->eqc_base = kcalloc(eq_num, sizeof(*eq_table->eqc_base),
+ GFP_KERNEL);
+ if (!eq_table->eqc_base) {
+ ret = -ENOMEM;
+ goto err_eqc_base_alloc_fail;
+ }
+
+ for (i = 0; i < eq_num; i++) {
+ eq = &eq_table->eq[i];
+ eq->hr_dev = hr_dev;
+ eq->eqn = i;
+ eq->irq = hr_dev->irq[i];
+ eq->log_page_size = PAGE_SHIFT;
+
+ if (i < hr_dev->caps.num_comp_vectors) {
+ /* CEQ */
+ eq_table->eqc_base[i] = hr_dev->reg_base +
+ ROCEE_CAEP_CEQC_SHIFT_0_REG +
+ CEQ_REG_OFFSET * i;
+ eq->type_flag = HNS_ROCE_CEQ;
+ eq->doorbell = hr_dev->reg_base +
+ ROCEE_CAEP_CEQC_CONS_IDX_0_REG +
+ CEQ_REG_OFFSET * i;
+ eq->entries = hr_dev->caps.ceqe_depth;
+ eq->log_entries = ilog2(eq->entries);
+ eq->eqe_size = HNS_ROCE_CEQ_ENTRY_SIZE;
+ } else {
+ /* AEQ */
+ eq_table->eqc_base[i] = hr_dev->reg_base +
+ ROCEE_CAEP_AEQC_AEQE_SHIFT_REG;
+ eq->type_flag = HNS_ROCE_AEQ;
+ eq->doorbell = hr_dev->reg_base +
+ ROCEE_CAEP_AEQE_CONS_IDX_REG;
+ eq->entries = hr_dev->caps.aeqe_depth;
+ eq->log_entries = ilog2(eq->entries);
+ eq->eqe_size = HNS_ROCE_AEQ_ENTRY_SIZE;
+ }
+ }
+
+ /* Disable irq */
+ hns_roce_v1_int_mask_enable(hr_dev);
+
+ /* Configure ce int interval */
+ roce_write(hr_dev, ROCEE_CAEP_CE_INTERVAL_CFG_REG,
+ HNS_ROCE_CEQ_DEFAULT_INTERVAL);
+
+ /* Configure ce int burst num */
+ roce_write(hr_dev, ROCEE_CAEP_CE_BURST_NUM_CFG_REG,
+ HNS_ROCE_CEQ_DEFAULT_BURST_NUM);
+
+ for (i = 0; i < eq_num; i++) {
+ ret = hns_roce_v1_create_eq(hr_dev, &eq_table->eq[i]);
+ if (ret) {
+ dev_err(dev, "eq create failed\n");
+ goto err_create_eq_fail;
+ }
+ }
+
+ for (j = 0; j < irq_num; j++) {
+ if (j < eq_num)
+ ret = request_irq(hr_dev->irq[j],
+ hns_roce_v1_msix_interrupt_eq, 0,
+ hr_dev->irq_names[j],
+ &eq_table->eq[j]);
+ else
+ ret = request_irq(hr_dev->irq[j],
+ hns_roce_v1_msix_interrupt_abn, 0,
+ hr_dev->irq_names[j], hr_dev);
+
+ if (ret) {
+ dev_err(dev, "request irq error!\n");
+ goto err_request_irq_fail;
+ }
+ }
+
+ for (i = 0; i < eq_num; i++)
+ hns_roce_v1_enable_eq(hr_dev, i, EQ_ENABLE);
+
+ return 0;
+
+err_request_irq_fail:
+ for (j -= 1; j >= 0; j--)
+ free_irq(hr_dev->irq[j], &eq_table->eq[j]);
+
+err_create_eq_fail:
+ for (i -= 1; i >= 0; i--)
+ hns_roce_v1_free_eq(hr_dev, &eq_table->eq[i]);
+
+ kfree(eq_table->eqc_base);
+
+err_eqc_base_alloc_fail:
+ kfree(eq_table->eq);
+
+ return ret;
+}
+
+static void hns_roce_v1_cleanup_eq_table(struct hns_roce_dev *hr_dev)
+{
+ struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
+ int irq_num;
+ int eq_num;
+ int i;
+
+ eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
+ irq_num = eq_num + hr_dev->caps.num_other_vectors;
+ for (i = 0; i < eq_num; i++) {
+ /* Disable EQ */
+ hns_roce_v1_enable_eq(hr_dev, i, EQ_DISABLE);
+
+ free_irq(hr_dev->irq[i], &eq_table->eq[i]);
+
+ hns_roce_v1_free_eq(hr_dev, &eq_table->eq[i]);
+ }
+ for (i = eq_num; i < irq_num; i++)
+ free_irq(hr_dev->irq[i], hr_dev);
+
+ kfree(eq_table->eqc_base);
+ kfree(eq_table->eq);
+}
+
static const struct hns_roce_hw hns_roce_hw_v1 = {
.reset = hns_roce_v1_reset,
.hw_profile = hns_roce_v1_profile,
@@ -3983,6 +4703,8 @@ static int hns_roce_v1_destroy_cq(struct ib_cq *ibcq)
.poll_cq = hns_roce_v1_poll_cq,
.dereg_mr = hns_roce_v1_dereg_mr,
.destroy_cq = hns_roce_v1_destroy_cq,
+ .init_eq = hns_roce_v1_init_eq_table,
+ .cleanup_eq = hns_roce_v1_cleanup_eq_table,
};
static const struct of_device_id hns_roce_of_match[] = {
@@ -4132,14 +4854,14 @@ static int hns_roce_get_cfg(struct hns_roce_dev *hr_dev)
/* read the interrupt names from the DT or ACPI */
ret = device_property_read_string_array(dev, "interrupt-names",
hr_dev->irq_names,
- HNS_ROCE_MAX_IRQ_NUM);
+ HNS_ROCE_V1_MAX_IRQ_NUM);
if (ret < 0) {
dev_err(dev, "couldn't get interrupt names from DT or ACPI!\n");
return ret;
}
/* fetch the interrupt numbers */
- for (i = 0; i < HNS_ROCE_MAX_IRQ_NUM; i++) {
+ for (i = 0; i < HNS_ROCE_V1_MAX_IRQ_NUM; i++) {
hr_dev->irq[i] = platform_get_irq(hr_dev->pdev, i);
if (hr_dev->irq[i] <= 0) {
dev_err(dev, "platform get of irq[=%d] failed!\n", i);
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
index 21a07ef..b44ddd2 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
@@ -60,8 +60,13 @@
#define HNS_ROCE_V1_GID_NUM 16
#define HNS_ROCE_V1_RESV_QP 8
-#define HNS_ROCE_V1_NUM_COMP_EQE 0x8000
-#define HNS_ROCE_V1_NUM_ASYNC_EQE 0x400
+#define HNS_ROCE_V1_MAX_IRQ_NUM 34
+#define HNS_ROCE_V1_COMP_VEC_NUM 32
+#define HNS_ROCE_V1_AEQE_VEC_NUM 1
+#define HNS_ROCE_V1_ABNORMAL_VEC_NUM 1
+
+#define HNS_ROCE_V1_COMP_EQE_NUM 0x8000
+#define HNS_ROCE_V1_ASYNC_EQE_NUM 0x400
#define HNS_ROCE_V1_QPC_ENTRY_SIZE 256
#define HNS_ROCE_V1_IRRL_ENTRY_SIZE 8
@@ -159,6 +164,41 @@
#define SDB_INV_CNT_OFFSET 8
#define SDB_ST_CMP_VAL 8
+#define HNS_ROCE_CEQ_DEFAULT_INTERVAL 0x10
+#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x10
+
+#define HNS_ROCE_INT_MASK_DISABLE 0
+#define HNS_ROCE_INT_MASK_ENABLE 1
+
+#define CEQ_REG_OFFSET 0x18
+
+#define HNS_ROCE_CEQE_CEQE_COMP_OWNER_S 0
+
+#define HNS_ROCE_V1_CONS_IDX_M GENMASK(15, 0)
+
+#define HNS_ROCE_CEQE_CEQE_COMP_CQN_S 16
+#define HNS_ROCE_CEQE_CEQE_COMP_CQN_M GENMASK(31, 16)
+
+#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_S 16
+#define HNS_ROCE_AEQE_U32_4_EVENT_TYPE_M GENMASK(23, 16)
+
+#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_S 24
+#define HNS_ROCE_AEQE_U32_4_EVENT_SUB_TYPE_M GENMASK(30, 24)
+
+#define HNS_ROCE_AEQE_U32_4_OWNER_S 31
+
+#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_S 0
+#define HNS_ROCE_AEQE_EVENT_QP_EVENT_QP_QPN_M GENMASK(23, 0)
+
+#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_S 25
+#define HNS_ROCE_AEQE_EVENT_QP_EVENT_PORT_NUM_M GENMASK(27, 25)
+
+#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_S 0
+#define HNS_ROCE_AEQE_EVENT_CQ_EVENT_CQ_CQN_M GENMASK(15, 0)
+
+#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_S 0
+#define HNS_ROCE_AEQE_EVENT_CE_EVENT_CEQE_CEQN_M GENMASK(4, 0)
+
struct hns_roce_cq_context {
u32 cqc_byte_4;
u32 cq_bt_l;
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index cf02ac2..aa0c242 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -748,12 +748,10 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
goto error_failed_cmd_init;
}
- if (hr_dev->cmd_mod) {
- ret = hns_roce_init_eq_table(hr_dev);
- if (ret) {
- dev_err(dev, "eq init failed!\n");
- goto error_failed_eq_table;
- }
+ ret = hr_dev->hw->init_eq(hr_dev);
+ if (ret) {
+ dev_err(dev, "eq init failed!\n");
+ goto error_failed_eq_table;
}
if (hr_dev->cmd_mod) {
@@ -805,8 +803,7 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
hns_roce_cmd_use_polling(hr_dev);
error_failed_use_event:
- if (hr_dev->cmd_mod)
- hns_roce_cleanup_eq_table(hr_dev);
+ hr_dev->hw->cleanup_eq(hr_dev);
error_failed_eq_table:
hns_roce_cmd_cleanup(hr_dev);
@@ -837,8 +834,7 @@ void hns_roce_exit(struct hns_roce_dev *hr_dev)
if (hr_dev->cmd_mod)
hns_roce_cmd_use_polling(hr_dev);
- if (hr_dev->cmd_mod)
- hns_roce_cleanup_eq_table(hr_dev);
+ hr_dev->hw->cleanup_eq(hr_dev);
hns_roce_cmd_cleanup(hr_dev);
if (hr_dev->hw->cmq_exit)
hr_dev->hw->cmq_exit(hr_dev);
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 49586ec..69e2584 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -65,6 +65,7 @@ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
if (atomic_dec_and_test(&qp->refcount))
complete(&qp->free);
}
+EXPORT_SYMBOL_GPL(hns_roce_qp_event);
static void hns_roce_ib_qp_event(struct hns_roce_qp *hr_qp,
enum hns_roce_event type)
--
1.9.1
* [PATCH for-next 2/2] RDMA/hns: Add eq support of hip08
[not found] ` <1510651577-20794-1-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-11-14 9:26 ` [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06 Yixian Liu
@ 2017-11-14 9:26 ` Yixian Liu
2017-12-04 9:33 ` [PATCH for-next 0/2] Revise eq support for hip06 & hip08 Liuyixian (Eason)
2017-12-22 16:37 ` Jason Gunthorpe
3 siblings, 0 replies; 14+ messages in thread
From: Yixian Liu @ 2017-11-14 9:26 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, linux-rdma-u79uwXL29TY76Z2rM5mHXA
This patch adds event queue (eq) support for hip08. The eq
table can be multi-hop addressed.
Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Reviewed-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
drivers/infiniband/hw/hns/hns_roce_cmd.h | 10 +
drivers/infiniband/hw/hns/hns_roce_common.h | 11 +
drivers/infiniband/hw/hns/hns_roce_device.h | 26 +-
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 1177 ++++++++++++++++++++++++++-
drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 192 ++++-
5 files changed, 1408 insertions(+), 8 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.h b/drivers/infiniband/hw/hns/hns_roce_cmd.h
index b1c9422..9549ae5 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cmd.h
+++ b/drivers/infiniband/hw/hns/hns_roce_cmd.h
@@ -88,6 +88,16 @@ enum {
HNS_ROCE_CMD_DESTROY_SRQC_BT0 = 0x38,
HNS_ROCE_CMD_DESTROY_SRQC_BT1 = 0x39,
HNS_ROCE_CMD_DESTROY_SRQC_BT2 = 0x3a,
+
+ /* EQC commands */
+ HNS_ROCE_CMD_CREATE_AEQC = 0x80,
+ HNS_ROCE_CMD_MODIFY_AEQC = 0x81,
+ HNS_ROCE_CMD_QUERY_AEQC = 0x82,
+ HNS_ROCE_CMD_DESTROY_AEQC = 0x83,
+ HNS_ROCE_CMD_CREATE_CEQC = 0x90,
+ HNS_ROCE_CMD_MODIFY_CEQC = 0x91,
+ HNS_ROCE_CMD_QUERY_CEQC = 0x92,
+ HNS_ROCE_CMD_DESTROY_CEQC = 0x93,
};
enum {
diff --git a/drivers/infiniband/hw/hns/hns_roce_common.h b/drivers/infiniband/hw/hns/hns_roce_common.h
index 7ecb7a4..dd67faf 100644
--- a/drivers/infiniband/hw/hns/hns_roce_common.h
+++ b/drivers/infiniband/hw/hns/hns_roce_common.h
@@ -376,6 +376,12 @@
#define ROCEE_RX_CMQ_TAIL_REG 0x07024
#define ROCEE_RX_CMQ_HEAD_REG 0x07028
+#define ROCEE_VF_MB_CFG0_REG 0x40
+#define ROCEE_VF_MB_STATUS_REG 0x58
+
+#define ROCEE_VF_EQ_DB_CFG0_REG 0x238
+#define ROCEE_VF_EQ_DB_CFG1_REG 0x23C
+
#define ROCEE_VF_SMAC_CFG0_REG 0x12000
#define ROCEE_VF_SMAC_CFG1_REG 0x12004
@@ -385,4 +391,9 @@
#define ROCEE_VF_SGID_CFG3_REG 0x1000c
#define ROCEE_VF_SGID_CFG4_REG 0x10010
+#define ROCEE_VF_ABN_INT_CFG_REG 0x13000
+#define ROCEE_VF_ABN_INT_ST_REG 0x13004
+#define ROCEE_VF_ABN_INT_EN_REG 0x13008
+#define ROCEE_VF_EVENT_INT_EN_REG 0x1300c
+
#endif /* _HNS_ROCE_COMMON_H */
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 9aa9e94..dde5178 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -134,6 +134,7 @@ enum hns_roce_event {
HNS_ROCE_EVENT_TYPE_DB_OVERFLOW = 0x12,
HNS_ROCE_EVENT_TYPE_MB = 0x13,
HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW = 0x14,
+ HNS_ROCE_EVENT_TYPE_FLR = 0x15,
};
/* Local Work Queue Catastrophic Error,SUBTYPE 0x5 */
@@ -541,6 +542,26 @@ struct hns_roce_eq {
int log_page_size;
int cons_index;
struct hns_roce_buf_list *buf_list;
+ int over_ignore;
+ int coalesce;
+ int arm_st;
+ u64 eqe_ba;
+ int eqe_ba_pg_sz;
+ int eqe_buf_pg_sz;
+ int hop_num;
+ u64 *bt_l0; /* Base address table for L0 */
+ u64 **bt_l1; /* Base address table for L1 */
+ u64 **buf;
+ dma_addr_t l0_dma;
+ dma_addr_t *l1_dma;
+ dma_addr_t *buf_dma;
+ u32 l0_last_num; /* L0 last chunk num */
+ u32 l1_last_num; /* L1 last chunk num */
+ int eq_max_cnt;
+ int eq_period;
+ int shift;
+ dma_addr_t cur_eqe_ba;
+ dma_addr_t nxt_eqe_ba;
};
struct hns_roce_eq_table {
@@ -571,7 +592,7 @@ struct hns_roce_caps {
u32 min_wqes;
int reserved_cqs;
int num_aeq_vectors; /* 1 */
- int num_comp_vectors; /* 32 ceq */
+ int num_comp_vectors;
int num_other_vectors;
int num_mtpts;
u32 num_mtt_segs;
@@ -617,6 +638,9 @@ struct hns_roce_caps {
u32 cqe_ba_pg_sz;
u32 cqe_buf_pg_sz;
u32 cqe_hop_num;
+ u32 eqe_ba_pg_sz;
+ u32 eqe_buf_pg_sz;
+ u32 eqe_hop_num;
u32 chunk_sz; /* chunk size in non-multihop mode */
u64 flags;
};
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 8f719c0..04281d0 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -908,9 +908,9 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev)
caps->max_sq_inline = HNS_ROCE_V2_MAX_SQ_INLINE;
caps->num_uars = HNS_ROCE_V2_UAR_NUM;
caps->phy_num_uars = HNS_ROCE_V2_PHY_UAR_NUM;
- caps->num_aeq_vectors = 1;
- caps->num_comp_vectors = 63;
- caps->num_other_vectors = 0;
+ caps->num_aeq_vectors = HNS_ROCE_V2_AEQE_VEC_NUM;
+ caps->num_comp_vectors = HNS_ROCE_V2_COMP_VEC_NUM;
+ caps->num_other_vectors = HNS_ROCE_V2_ABNORMAL_VEC_NUM;
caps->num_mtpts = HNS_ROCE_V2_MAX_MTPT_NUM;
caps->num_mtt_segs = HNS_ROCE_V2_MAX_MTT_SEGS;
caps->num_cqe_segs = HNS_ROCE_V2_MAX_CQE_SEGS;
@@ -955,12 +955,17 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev)
caps->cqe_ba_pg_sz = 0;
caps->cqe_buf_pg_sz = 0;
caps->cqe_hop_num = HNS_ROCE_CQE_HOP_NUM;
+ caps->eqe_ba_pg_sz = 0;
+ caps->eqe_buf_pg_sz = 0;
+ caps->eqe_hop_num = HNS_ROCE_EQE_HOP_NUM;
caps->chunk_sz = HNS_ROCE_V2_TABLE_CHUNK_SIZE;
caps->flags = HNS_ROCE_CAP_FLAG_REREG_MR |
HNS_ROCE_CAP_FLAG_ROCE_V1_V2;
caps->pkey_table_len[0] = 1;
caps->gid_table_len[0] = HNS_ROCE_V2_GID_INDEX_NUM;
+ caps->ceqe_depth = HNS_ROCE_V2_COMP_EQE_NUM;
+ caps->aeqe_depth = HNS_ROCE_V2_ASYNC_EQE_NUM;
caps->local_ca_ack_delay = 0;
caps->max_mtu = IB_MTU_4096;
@@ -1374,6 +1379,8 @@ static void hns_roce_v2_write_cqc(struct hns_roce_dev *hr_dev,
roce_set_field(cq_context->byte_4_pg_ceqn, V2_CQC_BYTE_4_CQ_ST_M,
V2_CQC_BYTE_4_CQ_ST_S, V2_CQ_STATE_VALID);
+ roce_set_field(cq_context->byte_4_pg_ceqn, V2_CQC_BYTE_4_ARM_ST_M,
+ V2_CQC_BYTE_4_ARM_ST_S, REG_NXT_CEQE);
roce_set_field(cq_context->byte_4_pg_ceqn, V2_CQC_BYTE_4_SHIFT_M,
V2_CQC_BYTE_4_SHIFT_S, ilog2((unsigned int)nent));
roce_set_field(cq_context->byte_4_pg_ceqn, V2_CQC_BYTE_4_CEQN_M,
@@ -1414,6 +1421,15 @@ static void hns_roce_v2_write_cqc(struct hns_roce_dev *hr_dev,
roce_set_field(cq_context->byte_40_cqe_ba, V2_CQC_BYTE_40_CQE_BA_M,
V2_CQC_BYTE_40_CQE_BA_S, (dma_handle >> (32 + 3)));
+
+ roce_set_field(cq_context->byte_56_cqe_period_maxcnt,
+ V2_CQC_BYTE_56_CQ_MAX_CNT_M,
+ V2_CQC_BYTE_56_CQ_MAX_CNT_S,
+ HNS_ROCE_V2_CQ_DEFAULT_BURST_NUM);
+ roce_set_field(cq_context->byte_56_cqe_period_maxcnt,
+ V2_CQC_BYTE_56_CQ_PERIOD_M,
+ V2_CQC_BYTE_56_CQ_PERIOD_S,
+ HNS_ROCE_V2_CQ_DEFAULT_INTERVAL);
}
static int hns_roce_v2_req_notify_cq(struct ib_cq *ibcq,
@@ -3154,6 +3170,1152 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
return ret;
}
+static void set_eq_cons_index_v2(struct hns_roce_eq *eq)
+{
+ u32 doorbell[2];
+
+ doorbell[0] = 0;
+ doorbell[1] = 0;
+
+ if (eq->type_flag == HNS_ROCE_AEQ) {
+ roce_set_field(doorbell[0], HNS_ROCE_V2_EQ_DB_CMD_M,
+ HNS_ROCE_V2_EQ_DB_CMD_S,
+ eq->arm_st == HNS_ROCE_V2_EQ_ALWAYS_ARMED ?
+ HNS_ROCE_EQ_DB_CMD_AEQ :
+ HNS_ROCE_EQ_DB_CMD_AEQ_ARMED);
+ } else {
+ roce_set_field(doorbell[0], HNS_ROCE_V2_EQ_DB_TAG_M,
+ HNS_ROCE_V2_EQ_DB_TAG_S, eq->eqn);
+
+ roce_set_field(doorbell[0], HNS_ROCE_V2_EQ_DB_CMD_M,
+ HNS_ROCE_V2_EQ_DB_CMD_S,
+ eq->arm_st == HNS_ROCE_V2_EQ_ALWAYS_ARMED ?
+ HNS_ROCE_EQ_DB_CMD_CEQ :
+ HNS_ROCE_EQ_DB_CMD_CEQ_ARMED);
+ }
+
+ roce_set_field(doorbell[1], HNS_ROCE_V2_EQ_DB_PARA_M,
+ HNS_ROCE_V2_EQ_DB_PARA_S,
+ (eq->cons_index & HNS_ROCE_V2_CONS_IDX_M));
+
+ hns_roce_write64_k(doorbell, eq->doorbell);
+
+ /* Memory barrier */
+ mb();
+
+}
+
+static void hns_roce_v2_wq_catas_err_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe,
+ u32 qpn)
+{
+ struct device *dev = hr_dev->dev;
+ int sub_type;
+
+ dev_warn(dev, "Local work queue catastrophic error.\n");
+ sub_type = roce_get_field(aeqe->asyn, HNS_ROCE_V2_AEQE_SUB_TYPE_M,
+ HNS_ROCE_V2_AEQE_SUB_TYPE_S);
+ switch (sub_type) {
+ case HNS_ROCE_LWQCE_QPC_ERROR:
+ dev_warn(dev, "QP %d, QPC error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_MTU_ERROR:
+ dev_warn(dev, "QP %d, MTU error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_WQE_BA_ADDR_ERROR:
+ dev_warn(dev, "QP %d, WQE BA addr error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_WQE_ADDR_ERROR:
+ dev_warn(dev, "QP %d, WQE addr error.\n", qpn);
+ break;
+ case HNS_ROCE_LWQCE_SQ_WQE_SHIFT_ERROR:
+ dev_warn(dev, "QP %d, WQE shift error.\n", qpn);
+ break;
+ default:
+ dev_err(dev, "Unhandled sub_event type %d.\n", sub_type);
+ break;
+ }
+}
+
+static void hns_roce_v2_local_wq_access_err_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe, u32 qpn)
+{
+ struct device *dev = hr_dev->dev;
+ int sub_type;
+
+ dev_warn(dev, "Local access violation work queue error.\n");
+ sub_type = roce_get_field(aeqe->asyn, HNS_ROCE_V2_AEQE_SUB_TYPE_M,
+ HNS_ROCE_V2_AEQE_SUB_TYPE_S);
+ switch (sub_type) {
+ case HNS_ROCE_LAVWQE_R_KEY_VIOLATION:
+ dev_warn(dev, "QP %d, R_key violation.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_LENGTH_ERROR:
+ dev_warn(dev, "QP %d, length error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_VA_ERROR:
+ dev_warn(dev, "QP %d, VA error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_PD_ERROR:
+ dev_err(dev, "QP %d, PD error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_RW_ACC_ERROR:
+ dev_warn(dev, "QP %d, rw acc error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_KEY_STATE_ERROR:
+ dev_warn(dev, "QP %d, key state error.\n", qpn);
+ break;
+ case HNS_ROCE_LAVWQE_MR_OPERATION_ERROR:
+ dev_warn(dev, "QP %d, MR operation error.\n", qpn);
+ break;
+ default:
+ dev_err(dev, "Unhandled sub_event type %d.\n", sub_type);
+ break;
+ }
+}
+
+static void hns_roce_v2_qp_err_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe,
+ int event_type)
+{
+ struct device *dev = hr_dev->dev;
+ u32 qpn;
+
+ qpn = roce_get_field(aeqe->event.qp_event.qp,
+ HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_M,
+ HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_S);
+
+ switch (event_type) {
+ case HNS_ROCE_EVENT_TYPE_COMM_EST:
+ dev_warn(dev, "Communication established.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
+ dev_warn(dev, "Send queue drained.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
+ hns_roce_v2_wq_catas_err_handle(hr_dev, aeqe, qpn);
+ break;
+ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
+ dev_warn(dev, "Invalid request local work queue error.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
+ hns_roce_v2_local_wq_access_err_handle(hr_dev, aeqe, qpn);
+ break;
+ default:
+ break;
+ }
+
+ hns_roce_qp_event(hr_dev, qpn, event_type);
+}
+
+static void hns_roce_v2_cq_err_handle(struct hns_roce_dev *hr_dev,
+ struct hns_roce_aeqe *aeqe,
+ int event_type)
+{
+ struct device *dev = hr_dev->dev;
+ u32 cqn;
+
+ cqn = roce_get_field(aeqe->event.cq_event.cq,
+ HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_M,
+ HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_S);
+
+ switch (event_type) {
+ case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
+ dev_warn(dev, "CQ 0x%x access err.\n", cqn);
+ break;
+ case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+ dev_warn(dev, "CQ 0x%x overflow\n", cqn);
+ break;
+ default:
+ break;
+ }
+
+ hns_roce_cq_event(hr_dev, cqn, event_type);
+}
+
+static struct hns_roce_aeqe *get_aeqe_v2(struct hns_roce_eq *eq, u32 entry)
+{
+ u32 buf_chk_sz;
+ unsigned long off;
+
+ buf_chk_sz = 1 << (eq->eqe_buf_pg_sz + PAGE_SHIFT);
+ off = (entry & (eq->entries - 1)) * HNS_ROCE_AEQ_ENTRY_SIZE;
+
+ return (struct hns_roce_aeqe *)((char *)(eq->buf_list->buf) +
+ off % buf_chk_sz);
+}
+
+static struct hns_roce_aeqe *mhop_get_aeqe(struct hns_roce_eq *eq, u32 entry)
+{
+ u32 buf_chk_sz;
+ unsigned long off;
+
+ buf_chk_sz = 1 << (eq->eqe_buf_pg_sz + PAGE_SHIFT);
+
+ off = (entry & (eq->entries - 1)) * HNS_ROCE_AEQ_ENTRY_SIZE;
+
+ if (eq->hop_num == HNS_ROCE_HOP_NUM_0)
+ return (struct hns_roce_aeqe *)((u8 *)(eq->bt_l0) +
+ off % buf_chk_sz);
+ else
+ return (struct hns_roce_aeqe *)((u8 *)
+ (eq->buf[off / buf_chk_sz]) + off % buf_chk_sz);
+}
+
+static struct hns_roce_aeqe *next_aeqe_sw_v2(struct hns_roce_eq *eq)
+{
+ struct hns_roce_aeqe *aeqe;
+
+ if (!eq->hop_num)
+ aeqe = get_aeqe_v2(eq, eq->cons_index);
+ else
+ aeqe = mhop_get_aeqe(eq, eq->cons_index);
+
+ return (roce_get_bit(aeqe->asyn, HNS_ROCE_V2_AEQ_AEQE_OWNER_S) ^
+ !!(eq->cons_index & eq->entries)) ? aeqe : NULL;
+}
+
+static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ struct device *dev = hr_dev->dev;
+ struct hns_roce_aeqe *aeqe;
+ int aeqe_found = 0;
+ int event_type;
+
+ while ((aeqe = next_aeqe_sw_v2(eq))) {
+ /* Memory barrier */
+ rmb();
+
+ event_type = roce_get_field(aeqe->asyn,
+ HNS_ROCE_V2_AEQE_EVENT_TYPE_M,
+ HNS_ROCE_V2_AEQE_EVENT_TYPE_S);
+
+ switch (event_type) {
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+ dev_warn(dev, "Path migration succeeded.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
+ dev_warn(dev, "Path migration failed.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_COMM_EST:
+ case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
+ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
+ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
+ hns_roce_v2_qp_err_handle(hr_dev, aeqe, event_type);
+ break;
+ case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
+ case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
+ case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
+ dev_warn(dev, "SRQ not supported.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+ hns_roce_v2_cq_err_handle(hr_dev, aeqe, event_type);
+ break;
+ case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+ dev_warn(dev, "DB overflow.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_MB:
+ hns_roce_cmd_event(hr_dev,
+ le16_to_cpu(aeqe->event.cmd.token),
+ aeqe->event.cmd.status,
+ le64_to_cpu(aeqe->event.cmd.out_param));
+ break;
+ case HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW:
+ dev_warn(dev, "CEQ overflow.\n");
+ break;
+ case HNS_ROCE_EVENT_TYPE_FLR:
+ dev_warn(dev, "Function level reset.\n");
+ break;
+ default:
+ dev_err(dev, "Unhandled event %d on EQ %d at idx %u.\n",
+ event_type, eq->eqn, eq->cons_index);
+ break;
+ }
+
+ ++eq->cons_index;
+ aeqe_found = 1;
+
+ if (eq->cons_index > (2 * eq->entries - 1)) {
+ dev_warn(dev, "cons_index overflow, set back to 0.\n");
+ eq->cons_index = 0;
+ }
+ }
+
+ set_eq_cons_index_v2(eq);
+ return aeqe_found;
+}
+
+static struct hns_roce_ceqe *get_ceqe_v2(struct hns_roce_eq *eq, u32 entry)
+{
+ u32 buf_chk_sz;
+ unsigned long off;
+
+ buf_chk_sz = 1 << (eq->eqe_buf_pg_sz + PAGE_SHIFT);
+ off = (entry & (eq->entries - 1)) * HNS_ROCE_CEQ_ENTRY_SIZE;
+
+ return (struct hns_roce_ceqe *)((char *)(eq->buf_list->buf) +
+ off % buf_chk_sz);
+}
+
+static struct hns_roce_ceqe *mhop_get_ceqe(struct hns_roce_eq *eq, u32 entry)
+{
+ u32 buf_chk_sz;
+ unsigned long off;
+
+ buf_chk_sz = 1 << (eq->eqe_buf_pg_sz + PAGE_SHIFT);
+
+ off = (entry & (eq->entries - 1)) * HNS_ROCE_CEQ_ENTRY_SIZE;
+
+ if (eq->hop_num == HNS_ROCE_HOP_NUM_0)
+ return (struct hns_roce_ceqe *)((u8 *)(eq->bt_l0) +
+ off % buf_chk_sz);
+ else
+ return (struct hns_roce_ceqe *)((u8 *)(eq->buf[off /
+ buf_chk_sz]) + off % buf_chk_sz);
+}
+
+static struct hns_roce_ceqe *next_ceqe_sw_v2(struct hns_roce_eq *eq)
+{
+ struct hns_roce_ceqe *ceqe;
+
+ if (!eq->hop_num)
+ ceqe = get_ceqe_v2(eq, eq->cons_index);
+ else
+ ceqe = mhop_get_ceqe(eq, eq->cons_index);
+
+ return (!!(roce_get_bit(ceqe->comp, HNS_ROCE_V2_CEQ_CEQE_OWNER_S))) ^
+ (!!(eq->cons_index & eq->entries)) ? ceqe : NULL;
+}
+
+static int hns_roce_v2_ceq_int(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ struct device *dev = hr_dev->dev;
+ struct hns_roce_ceqe *ceqe;
+ int ceqe_found = 0;
+ u32 cqn;
+
+ while ((ceqe = next_ceqe_sw_v2(eq))) {
+
+ /* Memory barrier */
+ rmb();
+ cqn = roce_get_field(ceqe->comp,
+ HNS_ROCE_V2_CEQE_COMP_CQN_M,
+ HNS_ROCE_V2_CEQE_COMP_CQN_S);
+
+ hns_roce_cq_completion(hr_dev, cqn);
+
+ ++eq->cons_index;
+ ceqe_found = 1;
+
+ if (eq->cons_index > (2 * eq->entries - 1)) {
+ dev_warn(dev, "cons_index overflow, set back to 0.\n");
+ eq->cons_index = 0;
+ }
+ }
+
+ set_eq_cons_index_v2(eq);
+
+ return ceqe_found;
+}
+
+static irqreturn_t hns_roce_v2_msix_interrupt_eq(int irq, void *eq_ptr)
+{
+ struct hns_roce_eq *eq = eq_ptr;
+ struct hns_roce_dev *hr_dev = eq->hr_dev;
+ int int_work = 0;
+
+ if (eq->type_flag == HNS_ROCE_CEQ)
+ /* Completion event interrupt */
+ int_work = hns_roce_v2_ceq_int(hr_dev, eq);
+ else
+ /* Asynchronous event interrupt */
+ int_work = hns_roce_v2_aeq_int(hr_dev, eq);
+
+ return IRQ_RETVAL(int_work);
+}
+
+static irqreturn_t hns_roce_v2_msix_interrupt_abn(int irq, void *dev_id)
+{
+ struct hns_roce_dev *hr_dev = dev_id;
+ struct device *dev = hr_dev->dev;
+ int int_work = 0;
+ u32 int_st;
+ u32 int_en;
+
+ /* Abnormal interrupt */
+ int_st = roce_read(hr_dev, ROCEE_VF_ABN_INT_ST_REG);
+ int_en = roce_read(hr_dev, ROCEE_VF_ABN_INT_EN_REG);
+
+ if (roce_get_bit(int_st, HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S)) {
+ dev_err(dev, "AEQ overflow!\n");
+
+ roce_set_bit(int_st, HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S, 1);
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG, int_st);
+
+ /* Memory barrier */
+ mb();
+
+ roce_set_bit(int_en, HNS_ROCE_V2_VF_ABN_INT_EN_S, 1);
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_EN_REG, int_en);
+
+ int_work = 1;
+ } else if (roce_get_bit(int_st, HNS_ROCE_V2_VF_INT_ST_BUS_ERR_S)) {
+ dev_err(dev, "BUS ERR!\n");
+
+ roce_set_bit(int_st, HNS_ROCE_V2_VF_INT_ST_BUS_ERR_S, 1);
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG, int_st);
+
+ /* Memory barrier */
+ mb();
+
+ roce_set_bit(int_en, HNS_ROCE_V2_VF_ABN_INT_EN_S, 1);
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_EN_REG, int_en);
+
+ int_work = 1;
+ } else if (roce_get_bit(int_st, HNS_ROCE_V2_VF_INT_ST_OTHER_ERR_S)) {
+ dev_err(dev, "OTHER ERR!\n");
+
+ roce_set_bit(int_st, HNS_ROCE_V2_VF_INT_ST_OTHER_ERR_S, 1);
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_ST_REG, int_st);
+
+ /* Memory barrier */
+ mb();
+ roce_set_bit(int_en, HNS_ROCE_V2_VF_ABN_INT_EN_S, 1);
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_EN_REG, int_en);
+
+ int_work = 1;
+ } else {
+ dev_err(dev, "There is no abnormal irq found!\n");
+ }
+
+ return IRQ_RETVAL(int_work);
+}
+
+static void hns_roce_v2_int_mask_enable(struct hns_roce_dev *hr_dev,
+ int eq_num, int enable_flag)
+{
+ int i;
+
+ if (enable_flag == EQ_ENABLE) {
+ for (i = 0; i < eq_num; i++)
+ roce_write(hr_dev, ROCEE_VF_EVENT_INT_EN_REG +
+ i * EQ_REG_OFFSET,
+ HNS_ROCE_V2_VF_EVENT_INT_EN_M);
+
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_EN_REG,
+ HNS_ROCE_V2_VF_ABN_INT_EN_M);
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_CFG_REG,
+ HNS_ROCE_V2_VF_ABN_INT_CFG_M);
+ } else {
+ for (i = 0; i < eq_num; i++)
+ roce_write(hr_dev, ROCEE_VF_EVENT_INT_EN_REG +
+ i * EQ_REG_OFFSET,
+ HNS_ROCE_V2_VF_EVENT_INT_EN_M & 0x0);
+
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_EN_REG,
+ HNS_ROCE_V2_VF_ABN_INT_EN_M & 0x0);
+ roce_write(hr_dev, ROCEE_VF_ABN_INT_CFG_REG,
+ HNS_ROCE_V2_VF_ABN_INT_CFG_M & 0x0);
+ }
+}
+
+static void hns_roce_v2_destroy_eqc(struct hns_roce_dev *hr_dev, int eqn)
+{
+ struct device *dev = hr_dev->dev;
+ int ret;
+
+ if (eqn < hr_dev->caps.num_comp_vectors)
+ ret = hns_roce_cmd_mbox(hr_dev, 0, 0, eqn & HNS_ROCE_V2_EQN_M,
+ 0, HNS_ROCE_CMD_DESTROY_CEQC,
+ HNS_ROCE_CMD_TIMEOUT_MSECS);
+ else
+ ret = hns_roce_cmd_mbox(hr_dev, 0, 0, eqn & HNS_ROCE_V2_EQN_M,
+ 0, HNS_ROCE_CMD_DESTROY_AEQC,
+ HNS_ROCE_CMD_TIMEOUT_MSECS);
+ if (ret)
+ dev_err(dev, "[mailbox cmd] destroy eqc(%d) failed.\n", eqn);
+}
+
+static void hns_roce_mhop_free_eq(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ struct device *dev = hr_dev->dev;
+ u64 idx;
+ u64 size;
+ u32 buf_chk_sz;
+ u32 bt_chk_sz;
+ u32 mhop_num;
+ int eqe_alloc;
+ int ba_num;
+ int i = 0;
+ int j = 0;
+
+ mhop_num = hr_dev->caps.eqe_hop_num;
+ buf_chk_sz = 1 << (hr_dev->caps.eqe_buf_pg_sz + PAGE_SHIFT);
+ bt_chk_sz = 1 << (hr_dev->caps.eqe_ba_pg_sz + PAGE_SHIFT);
+ ba_num = (PAGE_ALIGN(eq->entries * eq->eqe_size) + buf_chk_sz - 1) /
+ buf_chk_sz;
+
+ /* hop_num = 0 */
+ if (mhop_num == HNS_ROCE_HOP_NUM_0) {
+ dma_free_coherent(dev, (unsigned int)(eq->entries *
+ eq->eqe_size), eq->bt_l0, eq->l0_dma);
+ return;
+ }
+
+ /* hop_num = 1 or hop_num = 2 */
+ dma_free_coherent(dev, bt_chk_sz, eq->bt_l0, eq->l0_dma);
+ if (mhop_num == 1) {
+ for (i = 0; i < eq->l0_last_num; i++) {
+ if (i == eq->l0_last_num - 1) {
+ eqe_alloc = i * (buf_chk_sz / eq->eqe_size);
+ size = (eq->entries - eqe_alloc) * eq->eqe_size;
+ dma_free_coherent(dev, size, eq->buf[i],
+ eq->buf_dma[i]);
+ break;
+ }
+ dma_free_coherent(dev, buf_chk_sz, eq->buf[i],
+ eq->buf_dma[i]);
+ }
+ } else if (mhop_num == 2) {
+ for (i = 0; i < eq->l0_last_num; i++) {
+ dma_free_coherent(dev, bt_chk_sz, eq->bt_l1[i],
+ eq->l1_dma[i]);
+
+ for (j = 0; j < bt_chk_sz / 8; j++) {
+ idx = i * (bt_chk_sz / 8) + j;
+ if ((i == eq->l0_last_num - 1)
+ && j == eq->l1_last_num - 1) {
+ eqe_alloc = (buf_chk_sz / eq->eqe_size)
+ * idx;
+ size = (eq->entries - eqe_alloc)
+ * eq->eqe_size;
+ dma_free_coherent(dev, size,
+ eq->buf[idx],
+ eq->buf_dma[idx]);
+ break;
+ }
+ dma_free_coherent(dev, buf_chk_sz, eq->buf[idx],
+ eq->buf_dma[idx]);
+ }
+ }
+ }
+ kfree(eq->buf_dma);
+ kfree(eq->buf);
+ kfree(eq->l1_dma);
+ kfree(eq->bt_l1);
+ eq->buf_dma = NULL;
+ eq->buf = NULL;
+ eq->l1_dma = NULL;
+ eq->bt_l1 = NULL;
+}
+
+static void hns_roce_v2_free_eq(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ u32 buf_chk_sz;
+
+ buf_chk_sz = 1 << (eq->eqe_buf_pg_sz + PAGE_SHIFT);
+
+ if (hr_dev->caps.eqe_hop_num) {
+ hns_roce_mhop_free_eq(hr_dev, eq);
+ return;
+ }
+
+ if (eq->buf_list)
+ dma_free_coherent(hr_dev->dev, buf_chk_sz,
+ eq->buf_list->buf, eq->buf_list->map);
+}
+
+static void hns_roce_config_eqc(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq,
+ void *mb_buf)
+{
+ struct hns_roce_eq_context *eqc;
+
+ eqc = mb_buf;
+ memset(eqc, 0, sizeof(struct hns_roce_eq_context));
+
+ /* init eqc */
+ eq->doorbell = hr_dev->reg_base + ROCEE_VF_EQ_DB_CFG0_REG;
+ eq->hop_num = hr_dev->caps.eqe_hop_num;
+ eq->cons_index = 0;
+ eq->over_ignore = HNS_ROCE_V2_EQ_OVER_IGNORE_0;
+ eq->coalesce = HNS_ROCE_V2_EQ_COALESCE_0;
+ eq->arm_st = HNS_ROCE_V2_EQ_ALWAYS_ARMED;
+ eq->eqe_ba_pg_sz = hr_dev->caps.eqe_ba_pg_sz;
+ eq->eqe_buf_pg_sz = hr_dev->caps.eqe_buf_pg_sz;
+ eq->shift = ilog2((unsigned int)eq->entries);
+
+ if (!eq->hop_num)
+ eq->eqe_ba = eq->buf_list->map;
+ else
+ eq->eqe_ba = eq->l0_dma;
+
+ /* set eqc state */
+ roce_set_field(eqc->byte_4,
+ HNS_ROCE_EQC_EQ_ST_M,
+ HNS_ROCE_EQC_EQ_ST_S,
+ HNS_ROCE_V2_EQ_STATE_VALID);
+
+ /* set eqe hop num */
+ roce_set_field(eqc->byte_4,
+ HNS_ROCE_EQC_HOP_NUM_M,
+ HNS_ROCE_EQC_HOP_NUM_S, eq->hop_num);
+
+ /* set eqc over_ignore */
+ roce_set_field(eqc->byte_4,
+ HNS_ROCE_EQC_OVER_IGNORE_M,
+ HNS_ROCE_EQC_OVER_IGNORE_S, eq->over_ignore);
+
+ /* set eqc coalesce */
+ roce_set_field(eqc->byte_4,
+ HNS_ROCE_EQC_COALESCE_M,
+ HNS_ROCE_EQC_COALESCE_S, eq->coalesce);
+
+ /* set eqc arm_state */
+ roce_set_field(eqc->byte_4,
+ HNS_ROCE_EQC_ARM_ST_M,
+ HNS_ROCE_EQC_ARM_ST_S, eq->arm_st);
+
+ /* set eqn */
+ roce_set_field(eqc->byte_4,
+ HNS_ROCE_EQC_EQN_M,
+ HNS_ROCE_EQC_EQN_S, eq->eqn);
+
+ /* set eqe_cnt */
+ roce_set_field(eqc->byte_4,
+ HNS_ROCE_EQC_EQE_CNT_M,
+ HNS_ROCE_EQC_EQE_CNT_S,
+ HNS_ROCE_EQ_INIT_EQE_CNT);
+
+ /* set eqe_ba_pg_sz */
+ roce_set_field(eqc->byte_8,
+ HNS_ROCE_EQC_BA_PG_SZ_M,
+ HNS_ROCE_EQC_BA_PG_SZ_S, eq->eqe_ba_pg_sz);
+
+ /* set eqe_buf_pg_sz */
+ roce_set_field(eqc->byte_8,
+ HNS_ROCE_EQC_BUF_PG_SZ_M,
+ HNS_ROCE_EQC_BUF_PG_SZ_S, eq->eqe_buf_pg_sz);
+
+ /* set eq_producer_idx */
+ roce_set_field(eqc->byte_8,
+ HNS_ROCE_EQC_PROD_INDX_M,
+ HNS_ROCE_EQC_PROD_INDX_S,
+ HNS_ROCE_EQ_INIT_PROD_IDX);
+
+ /* set eq_max_cnt */
+ roce_set_field(eqc->byte_12,
+ HNS_ROCE_EQC_MAX_CNT_M,
+ HNS_ROCE_EQC_MAX_CNT_S, eq->eq_max_cnt);
+
+ /* set eq_period */
+ roce_set_field(eqc->byte_12,
+ HNS_ROCE_EQC_PERIOD_M,
+ HNS_ROCE_EQC_PERIOD_S, eq->eq_period);
+
+ /* set eqe_report_timer */
+ roce_set_field(eqc->eqe_report_timer,
+ HNS_ROCE_EQC_REPORT_TIMER_M,
+ HNS_ROCE_EQC_REPORT_TIMER_S,
+ HNS_ROCE_EQ_INIT_REPORT_TIMER);
+
+ /* set eqe_ba [34:3] */
+ roce_set_field(eqc->eqe_ba0,
+ HNS_ROCE_EQC_EQE_BA_L_M,
+ HNS_ROCE_EQC_EQE_BA_L_S, eq->eqe_ba >> 3);
+
+ /* set eqe_ba [63:35] */
+ roce_set_field(eqc->eqe_ba1,
+ HNS_ROCE_EQC_EQE_BA_H_M,
+ HNS_ROCE_EQC_EQE_BA_H_S, eq->eqe_ba >> 35);
+
+ /* set eq shift */
+ roce_set_field(eqc->byte_28,
+ HNS_ROCE_EQC_SHIFT_M,
+ HNS_ROCE_EQC_SHIFT_S, eq->shift);
+
+ /* set eq MSI_IDX */
+ roce_set_field(eqc->byte_28,
+ HNS_ROCE_EQC_MSI_INDX_M,
+ HNS_ROCE_EQC_MSI_INDX_S,
+ HNS_ROCE_EQ_INIT_MSI_IDX);
+
+ /* set cur_eqe_ba [27:12] */
+ roce_set_field(eqc->byte_28,
+ HNS_ROCE_EQC_CUR_EQE_BA_L_M,
+ HNS_ROCE_EQC_CUR_EQE_BA_L_S, eq->cur_eqe_ba >> 12);
+
+ /* set cur_eqe_ba [59:28] */
+ roce_set_field(eqc->byte_32,
+ HNS_ROCE_EQC_CUR_EQE_BA_M_M,
+ HNS_ROCE_EQC_CUR_EQE_BA_M_S, eq->cur_eqe_ba >> 28);
+
+ /* set cur_eqe_ba [63:60] */
+ roce_set_field(eqc->byte_36,
+ HNS_ROCE_EQC_CUR_EQE_BA_H_M,
+ HNS_ROCE_EQC_CUR_EQE_BA_H_S, eq->cur_eqe_ba >> 60);
+
+ /* set eq consumer idx */
+ roce_set_field(eqc->byte_36,
+ HNS_ROCE_EQC_CONS_INDX_M,
+ HNS_ROCE_EQC_CONS_INDX_S,
+ HNS_ROCE_EQ_INIT_CONS_IDX);
+
+ /* set nxt_eqe_ba[43:12] */
+ roce_set_field(eqc->nxt_eqe_ba0,
+ HNS_ROCE_EQC_NXT_EQE_BA_L_M,
+ HNS_ROCE_EQC_NXT_EQE_BA_L_S, eq->nxt_eqe_ba >> 12);
+
+ /* set nxt_eqe_ba[63:44] */
+ roce_set_field(eqc->nxt_eqe_ba1,
+ HNS_ROCE_EQC_NXT_EQE_BA_H_M,
+ HNS_ROCE_EQC_NXT_EQE_BA_H_S, eq->nxt_eqe_ba >> 44);
+}
+
+static int hns_roce_mhop_alloc_eq(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+{
+ struct device *dev = hr_dev->dev;
+ int eq_alloc_done = 0;
+ int eq_buf_cnt = 0;
+ int eqe_alloc;
+ u32 buf_chk_sz;
+ u32 bt_chk_sz;
+ u32 mhop_num;
+ u64 size;
+ u64 idx;
+ int ba_num;
+ int bt_num;
+ int record_i;
+ int record_j;
+ int i = 0;
+ int j = 0;
+
+ mhop_num = hr_dev->caps.eqe_hop_num;
+ buf_chk_sz = 1 << (hr_dev->caps.eqe_buf_pg_sz + PAGE_SHIFT);
+ bt_chk_sz = 1 << (hr_dev->caps.eqe_ba_pg_sz + PAGE_SHIFT);
+
+ ba_num = (PAGE_ALIGN(eq->entries * eq->eqe_size) + buf_chk_sz - 1)
+ / buf_chk_sz;
+ bt_num = (ba_num + bt_chk_sz / 8 - 1) / (bt_chk_sz / 8);
+
+ /* hop_num = 0 */
+ if (mhop_num == HNS_ROCE_HOP_NUM_0) {
+ if (eq->entries > buf_chk_sz / eq->eqe_size) {
+ dev_err(dev, "eq entries %d is larger than buf_pg_sz!\n",
+ eq->entries);
+ return -EINVAL;
+ }
+ eq->bt_l0 = dma_alloc_coherent(dev, eq->entries * eq->eqe_size,
+ &(eq->l0_dma), GFP_KERNEL);
+ if (!eq->bt_l0)
+ return -ENOMEM;
+
+ eq->cur_eqe_ba = eq->l0_dma;
+ eq->nxt_eqe_ba = 0;
+
+ memset(eq->bt_l0, 0, eq->entries * eq->eqe_size);
+
+ return 0;
+ }
+
+ eq->buf_dma = kcalloc(ba_num, sizeof(*eq->buf_dma), GFP_KERNEL);
+ if (!eq->buf_dma)
+ return -ENOMEM;
+ eq->buf = kcalloc(ba_num, sizeof(*eq->buf), GFP_KERNEL);
+ if (!eq->buf)
+ goto err_kcalloc_buf;
+
+ if (mhop_num == 2) {
+ eq->l1_dma = kcalloc(bt_num, sizeof(*eq->l1_dma), GFP_KERNEL);
+ if (!eq->l1_dma)
+ goto err_kcalloc_l1_dma;
+
+ eq->bt_l1 = kcalloc(bt_num, sizeof(*eq->bt_l1), GFP_KERNEL);
+ if (!eq->bt_l1)
+ goto err_kcalloc_bt_l1;
+ }
+
+ /* alloc L0 BT */
+ eq->bt_l0 = dma_alloc_coherent(dev, bt_chk_sz, &eq->l0_dma, GFP_KERNEL);
+ if (!eq->bt_l0)
+ goto err_dma_alloc_l0;
+
+ if (mhop_num == 1) {
+ if (ba_num > (bt_chk_sz / 8))
+ dev_err(dev, "ba_num %d is too large for 1 hop\n",
+ ba_num);
+
+ /* alloc buf */
+ for (i = 0; i < bt_chk_sz / 8; i++) {
+ if (eq_buf_cnt + 1 < ba_num) {
+ size = buf_chk_sz;
+ } else {
+ eqe_alloc = i * (buf_chk_sz / eq->eqe_size);
+ size = (eq->entries - eqe_alloc) * eq->eqe_size;
+ }
+ eq->buf[i] = dma_alloc_coherent(dev, size,
+ &(eq->buf_dma[i]),
+ GFP_KERNEL);
+ if (!eq->buf[i])
+ goto err_dma_alloc_buf;
+
+ memset(eq->buf[i], 0, size);
+ *(eq->bt_l0 + i) = eq->buf_dma[i];
+
+ eq_buf_cnt++;
+ if (eq_buf_cnt >= ba_num)
+ break;
+ }
+ eq->cur_eqe_ba = eq->buf_dma[0];
+ eq->nxt_eqe_ba = eq->buf_dma[1];
+
+ } else if (mhop_num == 2) {
+ /* alloc L1 BT and buf */
+ for (i = 0; i < bt_chk_sz / 8; i++) {
+ eq->bt_l1[i] = dma_alloc_coherent(dev, bt_chk_sz,
+ &(eq->l1_dma[i]),
+ GFP_KERNEL);
+ if (!eq->bt_l1[i])
+ goto err_dma_alloc_l1;
+ *(eq->bt_l0 + i) = eq->l1_dma[i];
+
+ for (j = 0; j < bt_chk_sz / 8; j++) {
+ idx = i * bt_chk_sz / 8 + j;
+ if (eq_buf_cnt + 1 < ba_num) {
+ size = buf_chk_sz;
+ } else {
+ eqe_alloc = (buf_chk_sz / eq->eqe_size)
+ * idx;
+ size = (eq->entries - eqe_alloc)
+ * eq->eqe_size;
+ }
+ eq->buf[idx] = dma_alloc_coherent(dev, size,
+ &(eq->buf_dma[idx]),
+ GFP_KERNEL);
+ if (!eq->buf[idx])
+ goto err_dma_alloc_buf;
+
+ memset(eq->buf[idx], 0, size);
+ *(eq->bt_l1[i] + j) = eq->buf_dma[idx];
+
+ eq_buf_cnt++;
+ if (eq_buf_cnt >= ba_num) {
+ eq_alloc_done = 1;
+ break;
+ }
+ }
+
+ if (eq_alloc_done)
+ break;
+ }
+ eq->cur_eqe_ba = eq->buf_dma[0];
+ eq->nxt_eqe_ba = eq->buf_dma[1];
+ }
+
+ eq->l0_last_num = i + 1;
+ if (mhop_num == 2)
+ eq->l1_last_num = j + 1;
+
+ return 0;
+
+err_dma_alloc_l1:
+ dma_free_coherent(dev, bt_chk_sz, eq->bt_l0, eq->l0_dma);
+ eq->bt_l0 = NULL;
+ eq->l0_dma = 0;
+ for (i -= 1; i >= 0; i--) {
+ dma_free_coherent(dev, bt_chk_sz, eq->bt_l1[i],
+ eq->l1_dma[i]);
+
+ for (j = 0; j < bt_chk_sz / 8; j++) {
+ idx = i * bt_chk_sz / 8 + j;
+ dma_free_coherent(dev, buf_chk_sz, eq->buf[idx],
+ eq->buf_dma[idx]);
+ }
+ }
+ goto err_dma_alloc_l0;
+
+err_dma_alloc_buf:
+ dma_free_coherent(dev, bt_chk_sz, eq->bt_l0, eq->l0_dma);
+ eq->bt_l0 = NULL;
+ eq->l0_dma = 0;
+
+ if (mhop_num == 1)
+ for (i -= 1; i >= 0; i--)
+ dma_free_coherent(dev, buf_chk_sz, eq->buf[i],
+ eq->buf_dma[i]);
+ else if (mhop_num == 2) {
+ record_i = i;
+ record_j = j;
+ for (; i >= 0; i--) {
+ dma_free_coherent(dev, bt_chk_sz, eq->bt_l1[i],
+ eq->l1_dma[i]);
+
+ for (j = 0; j < bt_chk_sz / 8; j++) {
+ if (i == record_i && j >= record_j)
+ break;
+
+ idx = i * bt_chk_sz / 8 + j;
+ dma_free_coherent(dev, buf_chk_sz,
+ eq->buf[idx],
+ eq->buf_dma[idx]);
+ }
+ }
+ }
+
+err_dma_alloc_l0:
+ kfree(eq->bt_l1);
+ eq->bt_l1 = NULL;
+
+err_kcalloc_bt_l1:
+ kfree(eq->l1_dma);
+ eq->l1_dma = NULL;
+
+err_kcalloc_l1_dma:
+ kfree(eq->buf);
+ eq->buf = NULL;
+
+err_kcalloc_buf:
+ kfree(eq->buf_dma);
+ eq->buf_dma = NULL;
+
+ return -ENOMEM;
+}
+
+static int hns_roce_v2_create_eq(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq,
+ unsigned int eq_cmd)
+{
+ struct device *dev = hr_dev->dev;
+ struct hns_roce_cmd_mailbox *mailbox;
+ u32 buf_chk_sz = 0;
+ int ret;
+
+ /* Allocate mailbox memory */
+ mailbox = hns_roce_alloc_cmd_mailbox(hr_dev);
+ if (IS_ERR(mailbox))
+ return PTR_ERR(mailbox);
+
+ if (!hr_dev->caps.eqe_hop_num) {
+ buf_chk_sz = 1 << (hr_dev->caps.eqe_buf_pg_sz + PAGE_SHIFT);
+
+ eq->buf_list = kzalloc(sizeof(struct hns_roce_buf_list),
+ GFP_KERNEL);
+ if (!eq->buf_list) {
+ ret = -ENOMEM;
+ goto free_cmd_mbox;
+ }
+
+ eq->buf_list->buf = dma_alloc_coherent(dev, buf_chk_sz,
+ &(eq->buf_list->map),
+ GFP_KERNEL);
+ if (!eq->buf_list->buf) {
+ ret = -ENOMEM;
+ goto err_alloc_buf;
+ }
+
+ memset(eq->buf_list->buf, 0, buf_chk_sz);
+ } else {
+ ret = hns_roce_mhop_alloc_eq(hr_dev, eq);
+ if (ret)
+ goto free_cmd_mbox;
+ }
+
+ hns_roce_config_eqc(hr_dev, eq, mailbox->buf);
+
+ ret = hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, eq->eqn, 0,
+ eq_cmd, HNS_ROCE_CMD_TIMEOUT_MSECS);
+ if (ret) {
+ dev_err(dev, "[mailbox cmd] create eqc failed.\n");
+ goto err_cmd_mbox;
+ }
+
+ hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+
+ return 0;
+
+err_cmd_mbox:
+ if (!hr_dev->caps.eqe_hop_num)
+ dma_free_coherent(dev, buf_chk_sz, eq->buf_list->buf,
+ eq->buf_list->map);
+ else {
+ hns_roce_mhop_free_eq(hr_dev, eq);
+ goto free_cmd_mbox;
+ }
+
+err_alloc_buf:
+ kfree(eq->buf_list);
+
+free_cmd_mbox:
+ hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+
+ return ret;
+}
+
+static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
+{
+ struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
+ struct device *dev = hr_dev->dev;
+ struct hns_roce_eq *eq;
+ unsigned int eq_cmd;
+ int irq_num;
+ int eq_num;
+ int other_num;
+ int comp_num;
+ int aeq_num;
+ int i, j, k;
+ int ret;
+
+ other_num = hr_dev->caps.num_other_vectors;
+ comp_num = hr_dev->caps.num_comp_vectors;
+ aeq_num = hr_dev->caps.num_aeq_vectors;
+
+ eq_num = comp_num + aeq_num;
+ irq_num = eq_num + other_num;
+
+ eq_table->eq = kcalloc(eq_num, sizeof(*eq_table->eq), GFP_KERNEL);
+ if (!eq_table->eq)
+ return -ENOMEM;
+
+ for (i = 0; i < irq_num; i++) {
+ hr_dev->irq_names[i] = kzalloc(HNS_ROCE_INT_NAME_LEN,
+ GFP_KERNEL);
+ if (!hr_dev->irq_names[i]) {
+ ret = -ENOMEM;
+ goto err_failed_kzalloc;
+ }
+ }
+
+ /* create eq */
+ for (j = 0; j < eq_num; j++) {
+ eq = &eq_table->eq[j];
+ eq->hr_dev = hr_dev;
+ eq->eqn = j;
+ if (j < comp_num) {
+ /* CEQ */
+ eq_cmd = HNS_ROCE_CMD_CREATE_CEQC;
+ eq->type_flag = HNS_ROCE_CEQ;
+ eq->entries = hr_dev->caps.ceqe_depth;
+ eq->eqe_size = HNS_ROCE_CEQ_ENTRY_SIZE;
+ eq->irq = hr_dev->irq[j + other_num + aeq_num];
+ eq->eq_max_cnt = HNS_ROCE_CEQ_DEFAULT_BURST_NUM;
+ eq->eq_period = HNS_ROCE_CEQ_DEFAULT_INTERVAL;
+ } else {
+ /* AEQ */
+ eq_cmd = HNS_ROCE_CMD_CREATE_AEQC;
+ eq->type_flag = HNS_ROCE_AEQ;
+ eq->entries = hr_dev->caps.aeqe_depth;
+ eq->eqe_size = HNS_ROCE_AEQ_ENTRY_SIZE;
+ eq->irq = hr_dev->irq[j - comp_num + other_num];
+ eq->eq_max_cnt = HNS_ROCE_AEQ_DEFAULT_BURST_NUM;
+ eq->eq_period = HNS_ROCE_AEQ_DEFAULT_INTERVAL;
+ }
+
+ ret = hns_roce_v2_create_eq(hr_dev, eq, eq_cmd);
+ if (ret) {
+ dev_err(dev, "eq create failed.\n");
+ goto err_create_eq_fail;
+ }
+ }
+
+ /* enable irq */
+ hns_roce_v2_int_mask_enable(hr_dev, eq_num, EQ_ENABLE);
+
+ /* irq contains: abnormal + AEQ + CEQ */
+ for (k = 0; k < irq_num; k++)
+ if (k < other_num)
+ snprintf((char *)hr_dev->irq_names[k],
+ HNS_ROCE_INT_NAME_LEN, "hns-abn-%d", k);
+ else if (k < (other_num + aeq_num))
+ snprintf((char *)hr_dev->irq_names[k],
+ HNS_ROCE_INT_NAME_LEN, "hns-aeq-%d",
+ k - other_num);
+ else
+ snprintf((char *)hr_dev->irq_names[k],
+ HNS_ROCE_INT_NAME_LEN, "hns-ceq-%d",
+ k - other_num - aeq_num);
+
+ for (k = 0; k < irq_num; k++) {
+ if (k < other_num)
+ ret = request_irq(hr_dev->irq[k],
+ hns_roce_v2_msix_interrupt_abn,
+ 0, hr_dev->irq_names[k], hr_dev);
+
+ else if (k < (other_num + comp_num))
+ ret = request_irq(eq_table->eq[k - other_num].irq,
+ hns_roce_v2_msix_interrupt_eq,
+ 0, hr_dev->irq_names[k + aeq_num],
+ &eq_table->eq[k - other_num]);
+ else
+ ret = request_irq(eq_table->eq[k - other_num].irq,
+ hns_roce_v2_msix_interrupt_eq,
+ 0, hr_dev->irq_names[k - comp_num],
+ &eq_table->eq[k - other_num]);
+ if (ret) {
+ dev_err(dev, "Request irq error!\n");
+ goto err_request_irq_fail;
+ }
+ }
+
+ return 0;
+
+err_request_irq_fail:
+ for (k -= 1; k >= 0; k--)
+ if (k < other_num)
+ free_irq(hr_dev->irq[k], hr_dev);
+ else
+ free_irq(eq_table->eq[k - other_num].irq,
+ &eq_table->eq[k - other_num]);
+
+err_create_eq_fail:
+ for (j -= 1; j >= 0; j--)
+ hns_roce_v2_free_eq(hr_dev, &eq_table->eq[j]);
+
+err_failed_kzalloc:
+ for (i -= 1; i >= 0; i--)
+ kfree(hr_dev->irq_names[i]);
+ kfree(eq_table->eq);
+
+ return ret;
+}
+
+static void hns_roce_v2_cleanup_eq_table(struct hns_roce_dev *hr_dev)
+{
+ struct hns_roce_eq_table *eq_table = &hr_dev->eq_table;
+ int irq_num;
+ int eq_num;
+ int i;
+
+ eq_num = hr_dev->caps.num_comp_vectors + hr_dev->caps.num_aeq_vectors;
+ irq_num = eq_num + hr_dev->caps.num_other_vectors;
+
+ /* Disable irq */
+ hns_roce_v2_int_mask_enable(hr_dev, eq_num, EQ_DISABLE);
+
+ for (i = 0; i < hr_dev->caps.num_other_vectors; i++)
+ free_irq(hr_dev->irq[i], hr_dev);
+
+ for (i = 0; i < eq_num; i++) {
+ hns_roce_v2_destroy_eqc(hr_dev, i);
+
+ free_irq(eq_table->eq[i].irq, &eq_table->eq[i]);
+
+ hns_roce_v2_free_eq(hr_dev, &eq_table->eq[i]);
+ }
+
+ for (i = 0; i < irq_num; i++)
+ kfree(hr_dev->irq_names[i]);
+
+ kfree(eq_table->eq);
+}
+
static const struct hns_roce_hw hns_roce_hw_v2 = {
.cmq_init = hns_roce_v2_cmq_init,
.cmq_exit = hns_roce_v2_cmq_exit,
@@ -3175,6 +4337,8 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
.post_recv = hns_roce_v2_post_recv,
.req_notify_cq = hns_roce_v2_req_notify_cq,
.poll_cq = hns_roce_v2_poll_cq,
+ .init_eq = hns_roce_v2_init_eq_table,
+ .cleanup_eq = hns_roce_v2_cleanup_eq_table,
};
static const struct pci_device_id hns_roce_hw_v2_pci_tbl[] = {
@@ -3189,6 +4353,7 @@ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
struct hnae3_handle *handle)
{
const struct pci_device_id *id;
+ int i;
id = pci_match_id(hns_roce_hw_v2_pci_tbl, hr_dev->pci_dev);
if (!id) {
@@ -3206,8 +4371,12 @@ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
hr_dev->iboe.netdevs[0] = handle->rinfo.netdev;
hr_dev->iboe.phy_port[0] = 0;
+ for (i = 0; i < HNS_ROCE_V2_MAX_IRQ_NUM; i++)
+ hr_dev->irq[i] = pci_irq_vector(handle->pdev,
+ i + handle->rinfo.base_vector);
+
/* cmd issue mode: 0 is poll, 1 is event */
- hr_dev->cmd_mod = 0;
+ hr_dev->cmd_mod = 1;
hr_dev->loop_idc = 0;
return 0;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index 04b7a51..463edab 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -53,6 +53,10 @@
#define HNS_ROCE_V2_MAX_SQ_INLINE 0x20
#define HNS_ROCE_V2_UAR_NUM 256
#define HNS_ROCE_V2_PHY_UAR_NUM 1
+#define HNS_ROCE_V2_MAX_IRQ_NUM 65
+#define HNS_ROCE_V2_COMP_VEC_NUM 63
+#define HNS_ROCE_V2_AEQE_VEC_NUM 1
+#define HNS_ROCE_V2_ABNORMAL_VEC_NUM 1
#define HNS_ROCE_V2_MAX_MTPT_NUM 0x8000
#define HNS_ROCE_V2_MAX_MTT_SEGS 0x1000000
#define HNS_ROCE_V2_MAX_CQE_SEGS 0x1000000
@@ -78,6 +82,8 @@
#define HNS_ROCE_MTT_HOP_NUM 1
#define HNS_ROCE_CQE_HOP_NUM 1
#define HNS_ROCE_PBL_HOP_NUM 2
+#define HNS_ROCE_EQE_HOP_NUM 2
+
#define HNS_ROCE_V2_GID_INDEX_NUM 256
#define HNS_ROCE_V2_TABLE_CHUNK_SIZE (1 << 18)
@@ -105,6 +111,12 @@
(step_idx == 1 && hop_num == 1) || \
(step_idx == 2 && hop_num == 2))
+enum {
+ NO_ARMED = 0x0,
+ REG_NXT_CEQE = 0x2,
+ REG_NXT_SE_CEQE = 0x3
+};
+
#define V2_CQ_DB_REQ_NOT_SOL 0
#define V2_CQ_DB_REQ_NOT 1
@@ -229,6 +241,9 @@ struct hns_roce_v2_cq_context {
u32 cqe_report_timer;
u32 byte_64_se_cqe_idx;
};
+#define HNS_ROCE_V2_CQ_DEFAULT_BURST_NUM 0x0
+#define HNS_ROCE_V2_CQ_DEFAULT_INTERVAL 0x0
+
#define V2_CQC_BYTE_4_CQ_ST_S 0
#define V2_CQC_BYTE_4_CQ_ST_M GENMASK(1, 0)
@@ -1129,9 +1144,6 @@ struct hns_roce_cmq_desc {
u32 data[6];
};
-#define ROCEE_VF_MB_CFG0_REG 0x40
-#define ROCEE_VF_MB_STATUS_REG 0x58
-
#define HNS_ROCE_V2_GO_BIT_TIMEOUT_MSECS 10000
#define HNS_ROCE_HW_RUN_BIT_SHIFT 31
@@ -1174,4 +1186,178 @@ struct hns_roce_v2_priv {
struct hns_roce_v2_cmq cmq;
};
+struct hns_roce_eq_context {
+ u32 byte_4;
+ u32 byte_8;
+ u32 byte_12;
+ u32 eqe_report_timer;
+ u32 eqe_ba0;
+ u32 eqe_ba1;
+ u32 byte_28;
+ u32 byte_32;
+ u32 byte_36;
+ u32 nxt_eqe_ba0;
+ u32 nxt_eqe_ba1;
+ u32 rsv[5];
+};
+
+#define HNS_ROCE_AEQ_DEFAULT_BURST_NUM 0x0
+#define HNS_ROCE_AEQ_DEFAULT_INTERVAL 0x0
+#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x0
+#define HNS_ROCE_CEQ_DEFAULT_INTERVAL 0x0
+
+#define HNS_ROCE_V2_EQ_STATE_INVALID 0
+#define HNS_ROCE_V2_EQ_STATE_VALID 1
+#define HNS_ROCE_V2_EQ_STATE_OVERFLOW 2
+#define HNS_ROCE_V2_EQ_STATE_FAILURE 3
+
+#define HNS_ROCE_V2_EQ_OVER_IGNORE_0 0
+#define HNS_ROCE_V2_EQ_OVER_IGNORE_1 1
+
+#define HNS_ROCE_V2_EQ_COALESCE_0 0
+#define HNS_ROCE_V2_EQ_COALESCE_1 1
+
+#define HNS_ROCE_V2_EQ_FIRED 0
+#define HNS_ROCE_V2_EQ_ARMED 1
+#define HNS_ROCE_V2_EQ_ALWAYS_ARMED 3
+
+#define HNS_ROCE_EQ_INIT_EQE_CNT 0
+#define HNS_ROCE_EQ_INIT_PROD_IDX 0
+#define HNS_ROCE_EQ_INIT_REPORT_TIMER 0
+#define HNS_ROCE_EQ_INIT_MSI_IDX 0
+#define HNS_ROCE_EQ_INIT_CONS_IDX 0
+#define HNS_ROCE_EQ_INIT_NXT_EQE_BA 0
+
+#define HNS_ROCE_V2_CEQ_CEQE_OWNER_S 31
+#define HNS_ROCE_V2_AEQ_AEQE_OWNER_S 31
+
+#define HNS_ROCE_V2_COMP_EQE_NUM 0x1000
+#define HNS_ROCE_V2_ASYNC_EQE_NUM 0x1000
+
+#define HNS_ROCE_V2_VF_INT_ST_AEQ_OVERFLOW_S 0
+#define HNS_ROCE_V2_VF_INT_ST_BUS_ERR_S 1
+#define HNS_ROCE_V2_VF_INT_ST_OTHER_ERR_S 2
+
+#define HNS_ROCE_EQ_DB_CMD_AEQ 0x0
+#define HNS_ROCE_EQ_DB_CMD_AEQ_ARMED 0x1
+#define HNS_ROCE_EQ_DB_CMD_CEQ 0x2
+#define HNS_ROCE_EQ_DB_CMD_CEQ_ARMED 0x3
+
+#define EQ_ENABLE 1
+#define EQ_DISABLE 0
+
+#define EQ_REG_OFFSET 0x4
+
+#define HNS_ROCE_INT_NAME_LEN 32
+#define HNS_ROCE_V2_EQN_M GENMASK(23, 0)
+
+#define HNS_ROCE_V2_CONS_IDX_M GENMASK(23, 0)
+
+#define HNS_ROCE_V2_VF_ABN_INT_EN_S 0
+#define HNS_ROCE_V2_VF_ABN_INT_EN_M GENMASK(0, 0)
+#define HNS_ROCE_V2_VF_ABN_INT_ST_M GENMASK(2, 0)
+#define HNS_ROCE_V2_VF_ABN_INT_CFG_M GENMASK(2, 0)
+#define HNS_ROCE_V2_VF_EVENT_INT_EN_M GENMASK(0, 0)
+
+/* WORD0 */
+#define HNS_ROCE_EQC_EQ_ST_S 0
+#define HNS_ROCE_EQC_EQ_ST_M GENMASK(1, 0)
+
+#define HNS_ROCE_EQC_HOP_NUM_S 2
+#define HNS_ROCE_EQC_HOP_NUM_M GENMASK(3, 2)
+
+#define HNS_ROCE_EQC_OVER_IGNORE_S 4
+#define HNS_ROCE_EQC_OVER_IGNORE_M GENMASK(4, 4)
+
+#define HNS_ROCE_EQC_COALESCE_S 5
+#define HNS_ROCE_EQC_COALESCE_M GENMASK(5, 5)
+
+#define HNS_ROCE_EQC_ARM_ST_S 6
+#define HNS_ROCE_EQC_ARM_ST_M GENMASK(7, 6)
+
+#define HNS_ROCE_EQC_EQN_S 8
+#define HNS_ROCE_EQC_EQN_M GENMASK(15, 8)
+
+#define HNS_ROCE_EQC_EQE_CNT_S 16
+#define HNS_ROCE_EQC_EQE_CNT_M GENMASK(31, 16)
+
+/* WORD1 */
+#define HNS_ROCE_EQC_BA_PG_SZ_S 0
+#define HNS_ROCE_EQC_BA_PG_SZ_M GENMASK(3, 0)
+
+#define HNS_ROCE_EQC_BUF_PG_SZ_S 4
+#define HNS_ROCE_EQC_BUF_PG_SZ_M GENMASK(7, 4)
+
+#define HNS_ROCE_EQC_PROD_INDX_S 8
+#define HNS_ROCE_EQC_PROD_INDX_M GENMASK(31, 8)
+
+/* WORD2 */
+#define HNS_ROCE_EQC_MAX_CNT_S 0
+#define HNS_ROCE_EQC_MAX_CNT_M GENMASK(15, 0)
+
+#define HNS_ROCE_EQC_PERIOD_S 16
+#define HNS_ROCE_EQC_PERIOD_M GENMASK(31, 16)
+
+/* WORD3 */
+#define HNS_ROCE_EQC_REPORT_TIMER_S 0
+#define HNS_ROCE_EQC_REPORT_TIMER_M GENMASK(31, 0)
+
+/* WORD4 */
+#define HNS_ROCE_EQC_EQE_BA_L_S 0
+#define HNS_ROCE_EQC_EQE_BA_L_M GENMASK(31, 0)
+
+/* WORD5 */
+#define HNS_ROCE_EQC_EQE_BA_H_S 0
+#define HNS_ROCE_EQC_EQE_BA_H_M GENMASK(28, 0)
+
+/* WORD6 */
+#define HNS_ROCE_EQC_SHIFT_S 0
+#define HNS_ROCE_EQC_SHIFT_M GENMASK(7, 0)
+
+#define HNS_ROCE_EQC_MSI_INDX_S 8
+#define HNS_ROCE_EQC_MSI_INDX_M GENMASK(15, 8)
+
+#define HNS_ROCE_EQC_CUR_EQE_BA_L_S 16
+#define HNS_ROCE_EQC_CUR_EQE_BA_L_M GENMASK(31, 16)
+
+/* WORD7 */
+#define HNS_ROCE_EQC_CUR_EQE_BA_M_S 0
+#define HNS_ROCE_EQC_CUR_EQE_BA_M_M GENMASK(31, 0)
+
+/* WORD8 */
+#define HNS_ROCE_EQC_CUR_EQE_BA_H_S 0
+#define HNS_ROCE_EQC_CUR_EQE_BA_H_M GENMASK(3, 0)
+
+#define HNS_ROCE_EQC_CONS_INDX_S 8
+#define HNS_ROCE_EQC_CONS_INDX_M GENMASK(31, 8)
+
+/* WORD9 */
+#define HNS_ROCE_EQC_NXT_EQE_BA_L_S 0
+#define HNS_ROCE_EQC_NXT_EQE_BA_L_M GENMASK(31, 0)
+
+/* WORD10 */
+#define HNS_ROCE_EQC_NXT_EQE_BA_H_S 0
+#define HNS_ROCE_EQC_NXT_EQE_BA_H_M GENMASK(19, 0)
+
+#define HNS_ROCE_V2_CEQE_COMP_CQN_S 0
+#define HNS_ROCE_V2_CEQE_COMP_CQN_M GENMASK(23, 0)
+
+#define HNS_ROCE_V2_AEQE_EVENT_TYPE_S 0
+#define HNS_ROCE_V2_AEQE_EVENT_TYPE_M GENMASK(7, 0)
+
+#define HNS_ROCE_V2_AEQE_SUB_TYPE_S 8
+#define HNS_ROCE_V2_AEQE_SUB_TYPE_M GENMASK(15, 8)
+
+#define HNS_ROCE_V2_EQ_DB_CMD_S 16
+#define HNS_ROCE_V2_EQ_DB_CMD_M GENMASK(17, 16)
+
+#define HNS_ROCE_V2_EQ_DB_TAG_S 0
+#define HNS_ROCE_V2_EQ_DB_TAG_M GENMASK(7, 0)
+
+#define HNS_ROCE_V2_EQ_DB_PARA_S 0
+#define HNS_ROCE_V2_EQ_DB_PARA_M GENMASK(23, 0)
+
+#define HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_S 0
+#define HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_M GENMASK(23, 0)
+
#endif
--
1.9.1
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06
[not found] ` <20171114091146.GT18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2017-11-14 9:40 ` Liuyixian (Eason)
[not found] ` <7a93b747-4745-86ab-4366-7b883dc2f133-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 14+ messages in thread
From: Liuyixian (Eason) @ 2017-11-14 9:40 UTC (permalink / raw)
To: Leon Romanovsky
Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
On 2017/11/14 17:11, Leon Romanovsky wrote:
> On Tue, Nov 14, 2017 at 05:26:16PM +0800, Yixian Liu wrote:
>> Considering the compatibility of supporting hip08's eq
>> process and possible changes of data structure, this patch
>> refactors the eq code structure of hip06.
>>
>> We move all the eq process code for hip06 from hns_roce_eq.c
>> into hns_roce_hw_v1.c, and also for hns_roce_eq.h. With
>> these changes, it will be convenient to add the eq support
>> for later hardware version.
>>
>> Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> Reviewed-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> Reviewed-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> ---
>> drivers/infiniband/hw/hns/Makefile | 2 +-
>> drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
>> drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
>> drivers/infiniband/hw/hns/hns_roce_device.h | 57 ++-
>> drivers/infiniband/hw/hns/hns_roce_eq.c | 759 ----------------------------
>> drivers/infiniband/hw/hns/hns_roce_eq.h | 134 -----
>> drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++++++++++++-
>> drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
>> drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
>> drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
>> 10 files changed, 843 insertions(+), 930 deletions(-)
>> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
>> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
>>
>> diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
>> index ff426a6..97bf2cd 100644
>> --- a/drivers/infiniband/hw/hns/Makefile
>> +++ b/drivers/infiniband/hw/hns/Makefile
>> @@ -5,7 +5,7 @@
>> ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
>>
>> obj-$(CONFIG_INFINIBAND_HNS) += hns-roce.o
>> -hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_eq.o hns_roce_pd.o \
>> +hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
>> hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
>> hns_roce_cq.o hns_roce_alloc.o
>> obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
>> index 1085cb2..9ebe839 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
>> @@ -103,6 +103,7 @@ void hns_roce_cmd_event(struct hns_roce_dev *hr_dev, u16 token, u8 status,
>> context->out_param = out_param;
>> complete(&context->done);
>> }
>> +EXPORT_SYMBOL_GPL(hns_roce_cmd_event);
>
> Are you sure that you need these symbols to be exported (used in other modules)?
>
> Thanks
>
Yes, this symbol is used in the asynchronous event handler to query
the result of mailbox cmd execution, and that handler is in a different module.
* Re: [PATCH for-next 0/2] Revise eq support for hip06 & hip08
[not found] ` <1510651577-20794-1-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-11-14 9:26 ` [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06 Yixian Liu
2017-11-14 9:26 ` [PATCH for-next 2/2] RDMA/hns: Add eq support of hip08 Yixian Liu
@ 2017-12-04 9:33 ` Liuyixian (Eason)
[not found] ` <4636858b-4ae8-e37e-d2d3-44441560f266-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-12-22 16:37 ` Jason Gunthorpe
3 siblings, 1 reply; 14+ messages in thread
From: Liuyixian (Eason) @ 2017-12-04 9:33 UTC (permalink / raw)
To: dledford-H+wXaHxf7aLQT0dZR+AlfA
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, jgg-uk2M96/98Pc,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
Hi Doug,
Should I make any changes to this patch-set for further review?
Regards
On 2017/11/14 17:26, Yixian Liu wrote:
> This patch-set refactor eq code for hip06 and add eq
> support for hip08.
>
> Yixian Liu (2):
> RDMA/hns: Refactor eq code for hip06
> RDMA/hns: Add eq support of hip08
>
> drivers/infiniband/hw/hns/Makefile | 2 +-
> drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
> drivers/infiniband/hw/hns/hns_roce_cmd.h | 10 +
> drivers/infiniband/hw/hns/hns_roce_common.h | 11 +
> drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
> drivers/infiniband/hw/hns/hns_roce_device.h | 83 +-
> drivers/infiniband/hw/hns/hns_roce_eq.c | 759 -----------------
> drivers/infiniband/hw/hns/hns_roce_eq.h | 134 ---
> drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++-
> drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
> drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 1177 ++++++++++++++++++++++++++-
> drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 192 ++++-
> drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
> drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
> 14 files changed, 2251 insertions(+), 938 deletions(-)
> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
>
* Re: [PATCH for-next 0/2] Revise eq support for hip06 & hip08
[not found] ` <4636858b-4ae8-e37e-d2d3-44441560f266-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2017-12-11 8:55 ` Liuyixian (Eason)
[not found] ` <6d0a2aee-54e0-90dd-5293-7f8a5dac7d36-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 14+ messages in thread
From: Liuyixian (Eason) @ 2017-12-11 8:55 UTC (permalink / raw)
To: jgg-uk2M96/98Pc
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, dledford-H+wXaHxf7aLQT0dZR+AlfA,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
Hi Jason,
This patch-set was based on Doug's RDMA tree:
git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
Should I resend this patch-set based on the new shared RDMA git tree?
git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git
Thanks
On 2017/12/4 17:33, Liuyixian (Eason) wrote:
> Hi Doug,
>
> Should I make some changes for this patch-set for the further review?
>
> Regards
>
>
> On 2017/11/14 17:26, Yixian Liu wrote:
>> This patch-set refactor eq code for hip06 and add eq
>> support for hip08.
>>
>> Yixian Liu (2):
>> RDMA/hns: Refactor eq code for hip06
>> RDMA/hns: Add eq support of hip08
>>
>> drivers/infiniband/hw/hns/Makefile | 2 +-
>> drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
>> drivers/infiniband/hw/hns/hns_roce_cmd.h | 10 +
>> drivers/infiniband/hw/hns/hns_roce_common.h | 11 +
>> drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
>> drivers/infiniband/hw/hns/hns_roce_device.h | 83 +-
>> drivers/infiniband/hw/hns/hns_roce_eq.c | 759 -----------------
>> drivers/infiniband/hw/hns/hns_roce_eq.h | 134 ---
>> drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++-
>> drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
>> drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 1177 ++++++++++++++++++++++++++-
>> drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 192 ++++-
>> drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
>> drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
>> 14 files changed, 2251 insertions(+), 938 deletions(-)
>> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
>> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
>>
>
* Re: [PATCH for-next 0/2] Revise eq support for hip06 & hip08
[not found] ` <6d0a2aee-54e0-90dd-5293-7f8a5dac7d36-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2017-12-11 15:49 ` Jason Gunthorpe
[not found] ` <20171211154902.GA27709-uk2M96/98Pc@public.gmane.org>
0 siblings, 1 reply; 14+ messages in thread
From: Jason Gunthorpe @ 2017-12-11 15:49 UTC (permalink / raw)
To: Liuyixian (Eason)
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, dledford-H+wXaHxf7aLQT0dZR+AlfA,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
On Mon, Dec 11, 2017 at 04:55:23PM +0800, Liuyixian (Eason) wrote:
> Hi Jason,
>
> This patch-set was based on Doug's RDMA tree:
> git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
>
> Should I resend this patch-set based on the new shared RDMA git tree?
> git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git
At this point it needs to apply cleanly on rc2 which is what is on the
rdma tree right now.
If it doesn't then you need to retest/resend..
Jason
* Re: [PATCH for-next 0/2] Revise eq support for hip06 & hip08
[not found] ` <20171211154902.GA27709-uk2M96/98Pc@public.gmane.org>
@ 2017-12-12 10:42 ` Liuyixian (Eason)
0 siblings, 0 replies; 14+ messages in thread
From: Liuyixian (Eason) @ 2017-12-12 10:42 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: leon-DgEjT+Ai2ygdnm+yROfE0A, dledford-H+wXaHxf7aLQT0dZR+AlfA,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
Hi Jason,
Thanks for your message.
This patch-set has been tested successfully on both for-next and for-rc.
I assume the patch-set should go to the for-next branch, as it adds an irq feature.
What's your opinion?
On 2017/12/11 23:49, Jason Gunthorpe wrote:
> On Mon, Dec 11, 2017 at 04:55:23PM +0800, Liuyixian (Eason) wrote:
>> Hi Jason,
>>
>> This patch-set was based on Doug's RDMA tree:
>> git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
>>
>> Should I resend this patch-set based on the new shared RDMA git tree?
>> git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git
>
> At this point it needs to apply cleanly on rc2 which is what is on the
> rdma tree right now.
>
> If it doesn't then you need to retest/resend..
>
> Jason
>
>
* Re: [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06
[not found] ` <7a93b747-4745-86ab-4366-7b883dc2f133-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2017-12-19 12:38 ` Liuyixian (Eason)
0 siblings, 0 replies; 14+ messages in thread
From: Liuyixian (Eason) @ 2017-12-19 12:38 UTC (permalink / raw)
To: Leon Romanovsky
Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
On 2017/11/14 17:40, Liuyixian (Eason) wrote:
>
>
> On 2017/11/14 17:11, Leon Romanovsky wrote:
>> On Tue, Nov 14, 2017 at 05:26:16PM +0800, Yixian Liu wrote:
>>> Considering the compatibility of supporting hip08's eq
>>> process and possible changes of data structure, this patch
>>> refactors the eq code structure of hip06.
>>>
>>> We move all the eq process code for hip06 from hns_roce_eq.c
>>> into hns_roce_hw_v1.c, and also for hns_roce_eq.h. With
>>> these changes, it will be convenient to add the eq support
>>> for later hardware version.
>>>
>>> Signed-off-by: Yixian Liu <liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>>> Reviewed-by: Lijun Ou <oulijun-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>>> Reviewed-by: Wei Hu (Xavier) <xavier.huwei-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>>> ---
>>> drivers/infiniband/hw/hns/Makefile | 2 +-
>>> drivers/infiniband/hw/hns/hns_roce_cmd.c | 1 +
>>> drivers/infiniband/hw/hns/hns_roce_cq.c | 19 +-
>>> drivers/infiniband/hw/hns/hns_roce_device.h | 57 ++-
>>> drivers/infiniband/hw/hns/hns_roce_eq.c | 759 ----------------------------
>>> drivers/infiniband/hw/hns/hns_roce_eq.h | 134 -----
>>> drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 740 ++++++++++++++++++++++++++-
>>> drivers/infiniband/hw/hns/hns_roce_hw_v1.h | 44 +-
>>> drivers/infiniband/hw/hns/hns_roce_main.c | 16 +-
>>> drivers/infiniband/hw/hns/hns_roce_qp.c | 1 +
>>> 10 files changed, 843 insertions(+), 930 deletions(-)
>>> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.c
>>> delete mode 100644 drivers/infiniband/hw/hns/hns_roce_eq.h
>>>
>>> diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
>>> index ff426a6..97bf2cd 100644
>>> --- a/drivers/infiniband/hw/hns/Makefile
>>> +++ b/drivers/infiniband/hw/hns/Makefile
>>> @@ -5,7 +5,7 @@
>>> ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
>>>
>>> obj-$(CONFIG_INFINIBAND_HNS) += hns-roce.o
>>> -hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_eq.o hns_roce_pd.o \
>>> +hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
>>> hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
>>> hns_roce_cq.o hns_roce_alloc.o
>>> obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o
>>> diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
>>> index 1085cb2..9ebe839 100644
>>> --- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
>>> +++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
>>> @@ -103,6 +103,7 @@ void hns_roce_cmd_event(struct hns_roce_dev *hr_dev, u16 token, u8 status,
>>> context->out_param = out_param;
>>> complete(&context->done);
>>> }
>>> +EXPORT_SYMBOL_GPL(hns_roce_cmd_event);
>>
>> Are you sure that you need these symbols to be exported (used in other modules)?
>>
>> Thanks
>>
> Yes, this symbol has been used in asynchronous event handle to query
> the result of mailbox cmd execution. It is in a different module.
>
As we have moved the eq code from hns_roce_eq.c to hns_roce_hw_v1.c,
that is, from the hns-roce.ko module into the separate hns-roce-hw-v1.ko
module, these symbols in hns-roce.ko must be exported so that the eq
code can call them.
* Re: [PATCH for-next 0/2] Revise eq support for hip06 & hip08
[not found] ` <1510651577-20794-1-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
` (2 preceding siblings ...)
2017-12-04 9:33 ` [PATCH for-next 0/2] Revise eq support for hip06 & hip08 Liuyixian (Eason)
@ 2017-12-22 16:37 ` Jason Gunthorpe
[not found] ` <20171222163732.GA15021-uk2M96/98Pc@public.gmane.org>
3 siblings, 1 reply; 14+ messages in thread
From: Jason Gunthorpe @ 2017-12-22 16:37 UTC (permalink / raw)
To: Yixian Liu
Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, leon-DgEjT+Ai2ygdnm+yROfE0A,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
On Tue, Nov 14, 2017 at 05:26:15PM +0800, Yixian Liu wrote:
> This patch-set refactor eq code for hip06 and add eq
> support for hip08.
>
> Yixian Liu (2):
> RDMA/hns: Refactor eq code for hip06
> RDMA/hns: Add eq support of hip08
I read this whole giant patch and will apply it to for-next.
I did notice some existing bothersome things:
1) Bad commenting around all memory barriers:
/* Memory barrier */
rmb();
Everyone knows that rmb/wmb/mb are memory barriers.
If you use these calls you *MUST* have a comment explaining
what the locking situation is. Explain where the *OTHER SIDE* of
the barrier is.
2) This construct
static inline void hns_roce_write64_k(__be32 val[2], void __iomem *dest)
{
__raw_writeq(*(u64 *) val, dest);
}
static void set_eq_cons_index_v2(struct hns_roce_eq *eq)
{
u32 doorbell[2];
[..]
hns_roce_write64_k(doorbell, eq->doorbell);
Is not OK. doorbell is technically not guaranteed to be 64 bit
aligned, and the deref *(u64 *) requires 64 bit alignment in the
kernel.
I purged this stupid array thing out of the mellanox drivers in
userspace some time ago, you might do the same, or minimually a
union is needed:
union doorbell {
u64 doorbell64;
u32 doorbell[2];
}
Please consider sending followup patches.
Jason
* Re: [PATCH for-next 0/2] Revise eq support for hip06 & hip08
[not found] ` <20171222163732.GA15021-uk2M96/98Pc@public.gmane.org>
@ 2017-12-23 8:57 ` Liuyixian (Eason)
0 siblings, 0 replies; 14+ messages in thread
From: Liuyixian (Eason) @ 2017-12-23 8:57 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, leon-DgEjT+Ai2ygdnm+yROfE0A,
linux-rdma-u79uwXL29TY76Z2rM5mHXA
On 2017/12/23 0:37, Jason Gunthorpe wrote:
> On Tue, Nov 14, 2017 at 05:26:15PM +0800, Yixian Liu wrote:
>> This patch-set refactor eq code for hip06 and add eq
>> support for hip08.
>>
>> Yixian Liu (2):
>> RDMA/hns: Refactor eq code for hip06
>> RDMA/hns: Add eq support of hip08
>
> I read this whole giant patch and will apply it to for-next.
>
> I did notice some existing bothersome things:
>
> 1) Bad commenting around all memory barriers:
>
> /* Memory barrier */
> rmb();
>
> Everyone knows that rmb/wmb/mb are memory barriers.
> If you use these calls you *MUST* have a comment explaining
> what the locking situation is. Explain where the *OTHER SIDE* of
> the barrier is.
>
> 2) This construct
>
> static inline void hns_roce_write64_k(__be32 val[2], void __iomem *dest)
> {
> __raw_writeq(*(u64 *) val, dest);
> }
>
> static void set_eq_cons_index_v2(struct hns_roce_eq *eq)
> {
> u32 doorbell[2];
> [..]
> hns_roce_write64_k(doorbell, eq->doorbell);
>
> Is not OK. doorbell is technically not guaranteed to be 64 bit
> aligned, and the deref *(u64 *) requires 64 bit alignment in the
> kernel.
>
> I purged this stupid array thing out of the mellanox drivers in
> userspace some time ago, you might do the same, or minimally a
> union is needed:
>
> union doorbell {
> u64 doorbell64;
> u32 doorbell[2];
> }
>
> Please consider sending followup patches.
>
Thanks! I will send out bugfix patches according to your suggestions.
> Jason
>
>
Thread overview: 14+ messages (newest: 2017-12-23 8:57 UTC)
2017-11-14 9:26 [PATCH for-next 0/2] Revise eq support for hip06 & hip08 Yixian Liu
[not found] ` <1510651577-20794-1-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-11-14 9:26 ` [PATCH for-next 1/2] RDMA/hns: Refactor eq code for hip06 Yixian Liu
[not found] ` <1510651577-20794-2-git-send-email-liuyixian-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-11-14 8:53 ` Liuyixian (Eason)
[not found] ` <ad1fff67-3511-8252-5b6f-aa1ab0c90078-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-11-14 9:09 ` Liuyixian (Eason)
2017-11-14 9:11 ` Leon Romanovsky
[not found] ` <20171114091146.GT18825-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2017-11-14 9:40 ` Liuyixian (Eason)
[not found] ` <7a93b747-4745-86ab-4366-7b883dc2f133-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-12-19 12:38 ` Liuyixian (Eason)
2017-11-14 9:26 ` [PATCH for-next 2/2] RDMA/hns: Add eq support of hip08 Yixian Liu
2017-12-04 9:33 ` [PATCH for-next 0/2] Revise eq support for hip06 & hip08 Liuyixian (Eason)
[not found] ` <4636858b-4ae8-e37e-d2d3-44441560f266-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-12-11 8:55 ` Liuyixian (Eason)
[not found] ` <6d0a2aee-54e0-90dd-5293-7f8a5dac7d36-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-12-11 15:49 ` Jason Gunthorpe
[not found] ` <20171211154902.GA27709-uk2M96/98Pc@public.gmane.org>
2017-12-12 10:42 ` Liuyixian (Eason)
2017-12-22 16:37 ` Jason Gunthorpe
[not found] ` <20171222163732.GA15021-uk2M96/98Pc@public.gmane.org>
2017-12-23 8:57 ` Liuyixian (Eason)