* [PATCH rdma-core 0/6] libqedr: userspace library for qedr
@ 2016-10-20  9:49 Ram Amrani
       [not found] ` <1476956952-17388-1-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 9+ messages in thread
From: Ram Amrani @ 2016-10-20  9:49 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA,
	Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA, Ram Amrani

From: Ram Amrani <Ram.Amrani-74tsMCuadCbQT0dZR+AlfA@public.gmane.org>

This series introduces a RoCE RDMA userspace library for the 579xx RDMA products
by QLogic.
The libqedr library will support both RoCE and iWARP, although this series
adds RoCE support only.

This libqedr version matches the qedr patches recently submitted for upstream,
commit 993d1b52615e1a549e55875c3b74308391672d9f.

Built on top of rdma-core 29e6d0ebc56cd089d9a200ba33a958899fc84bbe.

Ram Amrani (6):
  libqedr (qelr) chains
  libqedr (qelr) verbs
  libqedr (qelr) HSI
  libqedr (qelr) main
  libqedr (qelr) abi
  libqedr (qelr) addition to consolidated repo

 CMakeLists.txt                 |    1 +
 MAINTAINERS                    |    7 +
 README.md                      |    1 +
 providers/qedr/CMakeLists.txt  |    5 +
 providers/qedr/common_hsi.h    | 1502 +++++++++++++++++++++++++++++++
 providers/qedr/qelr.h          |  320 +++++++
 providers/qedr/qelr_abi.h      |  120 +++
 providers/qedr/qelr_chain.c    |  107 +++
 providers/qedr/qelr_chain.h    |  163 ++++
 providers/qedr/qelr_hsi.h      |   67 ++
 providers/qedr/qelr_hsi_rdma.h |  914 +++++++++++++++++++
 providers/qedr/qelr_main.c     |  286 ++++++
 providers/qedr/qelr_main.h     |   83 ++
 providers/qedr/qelr_verbs.c    | 1948 ++++++++++++++++++++++++++++++++++++++++
 providers/qedr/qelr_verbs.h    |   83 ++
 providers/qedr/rdma_common.h   |   74 ++
 providers/qedr/roce_common.h   |   50 ++
 17 files changed, 5731 insertions(+)
 create mode 100644 providers/qedr/CMakeLists.txt
 create mode 100644 providers/qedr/common_hsi.h
 create mode 100644 providers/qedr/qelr.h
 create mode 100644 providers/qedr/qelr_abi.h
 create mode 100644 providers/qedr/qelr_chain.c
 create mode 100644 providers/qedr/qelr_chain.h
 create mode 100644 providers/qedr/qelr_hsi.h
 create mode 100644 providers/qedr/qelr_hsi_rdma.h
 create mode 100644 providers/qedr/qelr_main.c
 create mode 100644 providers/qedr/qelr_main.h
 create mode 100644 providers/qedr/qelr_verbs.c
 create mode 100644 providers/qedr/qelr_verbs.h
 create mode 100644 providers/qedr/rdma_common.h
 create mode 100644 providers/qedr/roce_common.h

-- 
2.7.4


* [PATCH rdma-core 1/6] libqedr: chains
       [not found] ` <1476956952-17388-1-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
@ 2016-10-20  9:49   ` Ram Amrani
  2016-10-20  9:49   ` [PATCH rdma-core 2/6] libqedr: verbs Ram Amrani
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Ram Amrani @ 2016-10-20  9:49 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA,
	Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA, Ram Amrani, Ram Amrani

From: Ram Amrani <Ram.Amrani-74tsMCuadCbQT0dZR+AlfA@public.gmane.org>

Introduce chains - the ring buffers used to manage the user queues (SQ, RQ and CQ).
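
A minimal usage sketch (illustration only, not part of this patch;
struct rdma_sq_sge comes from the qelr HSI headers), with the chain
API used exactly as declared in qelr_chain.h:

	#include "qelr_chain.h"

	static int chain_example(void)
	{
		struct qelr_chain chain;
		struct rdma_sq_sge *sge;

		/* one page of SQ SGEs; the size is rounded up to page_size */
		if (qelr_chain_alloc(&chain, 4096, 4096, sizeof(*sge)))
			return -1;

		sge = qelr_chain_produce(&chain); /* claim next free element */
		/* ... fill *sge and ring the doorbell; later: */
		sge = qelr_chain_consume(&chain); /* retire oldest element */

		qelr_chain_free(&chain);
		return 0;
	}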

Signed-off-by: Ram Amrani <Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
---
 providers/qedr/qelr_chain.c | 107 +++++++++++++++++++++++++++++
 providers/qedr/qelr_chain.h | 163 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 270 insertions(+)
 create mode 100644 providers/qedr/qelr_chain.c
 create mode 100644 providers/qedr/qelr_chain.h

diff --git a/providers/qedr/qelr_chain.c b/providers/qedr/qelr_chain.c
new file mode 100644
index 0000000..6101f74
--- /dev/null
+++ b/providers/qedr/qelr_chain.c
@@ -0,0 +1,107 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <sys/types.h>
+#include <sys/mman.h>
+#include <stdio.h>
+#include <string.h>
+#include <endian.h>
+#include <errno.h>
+
+#include "qelr.h"
+
+void *qelr_chain_get_last_elem(struct qelr_chain *p_chain)
+{
+	void			*p_virt_addr	= NULL;
+	uint32_t		size;
+
+	if (!p_chain->addr)
+		goto out;
+
+	size		= p_chain->elem_size * (p_chain->n_elems - 1);
+	p_virt_addr	= ((uint8_t *)p_chain->addr + size);
+out:
+	return p_virt_addr;
+}
+
+void qelr_chain_reset(struct qelr_chain *p_chain)
+{
+	p_chain->prod_idx	= 0;
+	p_chain->cons_idx	= 0;
+
+	p_chain->p_cons_elem	= p_chain->addr;
+	p_chain->p_prod_elem	= p_chain->addr;
+}
+
+#define QELR_ANON_FD		(-1)	/* MAP_ANONYMOUS => file desc.= -1  */
+#define QELR_ANON_OFFSET	(0)	/* MAP_ANONYMOUS => offset    = d/c */
+
+int qelr_chain_alloc(struct qelr_chain *chain, int chain_size, int page_size,
+		     uint16_t elem_size)
+{
+	int ret, a_chain_size;
+	void *addr;
+
+	/* allocate a page-aligned chain */
+	a_chain_size = (chain_size + page_size - 1) & ~(page_size - 1);
+	addr = mmap(NULL, a_chain_size, PROT_READ | PROT_WRITE,
+			 MAP_PRIVATE | MAP_ANONYMOUS, QELR_ANON_FD,
+			 QELR_ANON_OFFSET);
+	if (addr == MAP_FAILED)
+		return errno;
+
+	ret = ibv_dontfork_range(addr, a_chain_size);
+	if (ret) {
+		munmap(addr, a_chain_size);
+		return ret;
+	}
+
+	/* init chain */
+	memset(chain, 0, sizeof(*chain));
+	chain->addr = addr;
+	chain->size = a_chain_size;
+	memset(chain->addr, 0, chain->size);
+	chain->p_cons_elem = chain->addr;
+	chain->p_prod_elem = chain->addr;
+	chain->elem_size = elem_size;
+	chain->n_elems = chain->size / elem_size;
+
+	return 0;
+}
+
+void qelr_chain_free(struct qelr_chain *chain)
+{
+	if (chain->size) {
+		ibv_dofork_range(chain->addr, chain->size);
+		munmap(chain->addr, chain->size);
+	}
+}
diff --git a/providers/qedr/qelr_chain.h b/providers/qedr/qelr_chain.h
new file mode 100644
index 0000000..2ff5bf5
--- /dev/null
+++ b/providers/qedr/qelr_chain.h
@@ -0,0 +1,163 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __QELR_CHAIN_H__
+#define __QELR_CHAIN_H__
+
+struct qelr_chain {
+	/* Address of first page of the chain */
+	void		*addr;
+
+	/* Point to next element to produce/consume */
+	void		*p_prod_elem;
+	void		*p_cons_elem;
+
+	uint32_t	prod_idx;
+	uint32_t	cons_idx;
+
+	uint32_t	n_elems;
+	uint32_t	size;
+	uint16_t	elem_size;
+};
+
+/* fast path functions are inline */
+
+static inline uint32_t qelr_chain_get_cons_idx_u32(struct qelr_chain *p_chain)
+{
+	return p_chain->cons_idx;
+}
+
+static inline void *qelr_chain_produce(struct qelr_chain *p_chain)
+{
+	void *p_ret = NULL;
+
+	p_chain->prod_idx++;
+
+	p_ret = p_chain->p_prod_elem;
+
+	if (p_chain->prod_idx % p_chain->n_elems == 0)
+		p_chain->p_prod_elem = p_chain->addr;
+	else
+		p_chain->p_prod_elem = (void *)(((uint8_t *)p_chain->p_prod_elem) +
+				       p_chain->elem_size);
+
+	return p_ret;
+}
+
+static inline void *qelr_chain_produce_n(struct qelr_chain *p_chain, int n)
+{
+	void *p_ret = NULL;
+	int n_wrap;
+
+	p_chain->prod_idx += n;
+	p_ret = p_chain->p_prod_elem;
+
+	n_wrap = p_chain->prod_idx % p_chain->n_elems;
+	if (n_wrap < n)
+		p_chain->p_prod_elem = (void *)(((uint8_t *)p_chain->addr) +
+				       (p_chain->elem_size * n_wrap));
+	else
+		p_chain->p_prod_elem = (void *)(((uint8_t *)p_chain->p_prod_elem) +
+				       (p_chain->elem_size * n));
+
+	return p_ret;
+}
+
+static inline void *qelr_chain_consume(struct qelr_chain *p_chain)
+{
+	void *p_ret = NULL;
+
+	p_chain->cons_idx++;
+
+	p_ret = p_chain->p_cons_elem;
+
+	if (p_chain->cons_idx % p_chain->n_elems == 0)
+		p_chain->p_cons_elem = p_chain->addr;
+	else
+		p_chain->p_cons_elem	= (void *)
+					  (((uint8_t *)p_chain->p_cons_elem) +
+					   p_chain->elem_size);
+
+	return p_ret;
+}
+
+static inline void *qelr_chain_consume_n(struct qelr_chain *p_chain, int n)
+{
+	void *p_ret = NULL;
+	int n_wrap;
+
+	p_chain->cons_idx += n;
+	p_ret = p_chain->p_cons_elem;
+
+	n_wrap = p_chain->cons_idx % p_chain->n_elems;
+	if (n_wrap < n)
+		p_chain->p_cons_elem = (void *)(((uint8_t *)p_chain->addr) +
+				       (p_chain->elem_size * n_wrap));
+	else
+		p_chain->p_cons_elem = (void *)(((uint8_t *)p_chain->p_cons_elem) +
+				       (p_chain->elem_size * n));
+
+	return p_ret;
+}
+
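+/* prod_idx and cons_idx are free-running counters, so the subtraction
+ * below is modulo 2^32 and stays correct across 32-bit wraparound.
+ */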
+static inline uint32_t qelr_chain_get_elem_left_u32(struct qelr_chain *p_chain)
+{
+	uint32_t used;
+
+	used = (uint32_t)(((uint64_t)((uint64_t) ~0U) + 1 +
+			  (uint64_t)(p_chain->prod_idx)) -
+			  (uint64_t)p_chain->cons_idx);
+
+	return p_chain->n_elems - used;
+}
+
+static inline uint8_t qelr_chain_is_full(struct qelr_chain *p_chain)
+{
+	return qelr_chain_get_elem_left_u32(p_chain) == p_chain->n_elems;
+}
+
+static inline void qelr_chain_set_prod(
+		struct qelr_chain *p_chain,
+		uint32_t prod_idx,
+		void *p_prod_elem)
+{
+	p_chain->prod_idx = prod_idx;
+	p_chain->p_prod_elem = p_prod_elem;
+}
+
+void *qelr_chain_get_last_elem(struct qelr_chain *p_chain);
+void qelr_chain_reset(struct qelr_chain *p_chain);
+int qelr_chain_alloc(struct qelr_chain *chain, int chain_size, int page_size,
+		     uint16_t elem_size);
+void qelr_chain_free(struct qelr_chain *buf);
+
+#endif /* __QELR_CHAIN_H__ */
-- 
2.7.4


* [PATCH rdma-core 2/6] libqedr: verbs
       [not found] ` <1476956952-17388-1-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
  2016-10-20  9:49   ` [PATCH rdma-core 1/6] libqedr: chains Ram Amrani
@ 2016-10-20  9:49   ` Ram Amrani
  2016-10-20  9:49   ` [PATCH rdma-core 3/6] libqedr: HSI Ram Amrani
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Ram Amrani @ 2016-10-20  9:49 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA,
	Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA, Ram Amrani, Ram Amrani

From: Ram Amrani <Ram.Amrani-74tsMCuadCbQT0dZR+AlfA@public.gmane.org>

Introduce the verbs - create, modify, query and destroy for QPs, CQs, PDs
and MRs, as well as post_send, post_recv and poll_cq.
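
For illustration, a minimal libibverbs sequence that exercises these
entry points (application code, not part of this patch; ctx is assumed
to be an open struct ibv_context * on a qedr device):

	struct ibv_pd *pd = ibv_alloc_pd(ctx);         /* qelr_alloc_pd() */
	struct ibv_cq *cq =
		ibv_create_cq(ctx, 16, NULL, NULL, 0); /* qelr_create_cq() */
	struct ibv_qp_init_attr attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.cap = { .max_send_wr = 16, .max_recv_wr = 16,
			 .max_send_sge = 1, .max_recv_sge = 1 },
		.qp_type = IBV_QPT_RC,
	};
	struct ibv_qp *qp = ibv_create_qp(pd, &attr);  /* qelr_create_qp() */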

Signed-off-by: Ram Amrani <Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
---
 providers/qedr/qelr_verbs.c | 1948 +++++++++++++++++++++++++++++++++++++++++++
 providers/qedr/qelr_verbs.h |   83 ++
 2 files changed, 2031 insertions(+)
 create mode 100644 providers/qedr/qelr_verbs.c
 create mode 100644 providers/qedr/qelr_verbs.h

diff --git a/providers/qedr/qelr_verbs.c b/providers/qedr/qelr_verbs.c
new file mode 100644
index 0000000..496493a
--- /dev/null
+++ b/providers/qedr/qelr_verbs.c
@@ -0,0 +1,1948 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <config.h>
+
+#include <assert.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <signal.h>
+#include <errno.h>
+#include <pthread.h>
+#include <malloc.h>
+#include <sys/mman.h>
+#include <netinet/in.h>
+
+#include "qelr.h"
+#include "qelr_abi.h"
+#include "qelr_chain.h"
+#include "qelr_verbs.h"
+
+#define PTR_LO(x) ((uint32_t)(((uint64_t)(x)) & 0xffffffff))
+#define PTR_HI(x) ((uint32_t)(((uint64_t)(x)) >> 32))
+
+
+/* Fast path debug prints */
+#define FP_DP_VERBOSE(...)
+/* #define FP_DP_VERBOSE(...)	DP_VERBOSE(__VA_ARGS__) */
+
+#define QELR_SQE_ELEMENT_SIZE	(sizeof(struct rdma_sq_sge))
+#define QELR_RQE_ELEMENT_SIZE	(sizeof(struct rdma_rq_sge))
+#define QELR_CQE_SIZE		(sizeof(union rdma_cqe))
+
+static void qelr_inc_sw_cons_u16(struct qelr_qp_hwq_info *info)
+{
+	info->cons = (info->cons + 1) % info->max_wr;
+	info->wqe_cons++;
+}
+
+static void qelr_inc_sw_prod_u16(struct qelr_qp_hwq_info *info)
+{
+	info->prod = (info->prod + 1) % info->max_wr;
+}
+
+int qelr_query_device(struct ibv_context *context,
+		      struct ibv_device_attr *attr)
+{
+	struct ibv_query_device cmd;
+	uint64_t fw_ver;
+	unsigned int major, minor, revision, eng;
+	int status;
+
+	bzero(attr, sizeof(*attr));
+	status = ibv_cmd_query_device(context, attr, &fw_ver, &cmd,
+				      sizeof(cmd));
+	if (status)
+		return status;
+
+	major = (fw_ver >> 24) & 0xff;
+	minor = (fw_ver >> 16) & 0xff;
+	revision = (fw_ver >> 8) & 0xff;
+	eng = fw_ver & 0xff;
+
+	snprintf(attr->fw_ver, sizeof(attr->fw_ver),
+		 "%d.%d.%d.%d", major, minor, revision, eng);
+
+	return status;
+}
+
+int qelr_query_port(struct ibv_context *context, uint8_t port,
+		    struct ibv_port_attr *attr)
+{
+	struct ibv_query_port cmd;
+	int status;
+
+	status = ibv_cmd_query_port(context, port, attr, &cmd, sizeof(cmd));
+	return status;
+}
+
+struct ibv_pd *qelr_alloc_pd(struct ibv_context *context)
+{
+	struct qelr_alloc_pd_req cmd;
+	struct qelr_alloc_pd_resp resp;
+	struct qelr_pd *pd;
+	struct qelr_devctx *cxt = get_qelr_ctx(context);
+
+	pd = malloc(sizeof(*pd));
+	if (!pd)
+		return NULL;
+
+	bzero(pd, sizeof(*pd));
+	memset(&cmd, 0, sizeof(cmd));
+
+	if (ibv_cmd_alloc_pd(context, &pd->ibv_pd, &cmd.cmd, sizeof(cmd),
+			     &resp.ibv_resp, sizeof(resp))) {
+		free(pd);
+		return NULL;
+	}
+
+	pd->pd_id = resp.pd_id;
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_INIT, "Allocated pd: %d\n", pd->pd_id);
+
+	return &pd->ibv_pd;
+}
+
+int qelr_dealloc_pd(struct ibv_pd *ibpd)
+{
+	int rc = 0;
+	struct qelr_pd *pd = get_qelr_pd(ibpd);
+	struct qelr_devctx *cxt = get_qelr_ctx(ibpd->context);
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_INIT, "Deallocated pd: %d\n",
+		   pd->pd_id);
+
+	rc = ibv_cmd_dealloc_pd(ibpd);
+
+	if (rc)
+		return rc;
+
+	free(pd);
+
+	return rc;
+}
+
+struct ibv_mr *qelr_reg_mr(struct ibv_pd *ibpd, void *addr,
+			   size_t len, int access)
+{
+	struct qelr_mr *mr;
+	struct ibv_reg_mr cmd;
+	struct qelr_reg_mr_resp resp;
+	struct qelr_pd *pd = get_qelr_pd(ibpd);
+	struct qelr_devctx *cxt = get_qelr_ctx(ibpd->context);
+
+	uint64_t hca_va = (uintptr_t) addr;
+
+	mr = malloc(sizeof(*mr));
+	if (!mr)
+		return NULL;
+
+	bzero(mr, sizeof(*mr));
+
+	if (ibv_cmd_reg_mr(ibpd, addr, len, hca_va,
+			   access, &mr->ibv_mr, &cmd, sizeof(cmd),
+			   &resp.ibv_resp, sizeof(resp))) {
+		free(mr);
+		return NULL;
+	}
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_MR,
+		   "MR Register %p completed successfully pd_id=%d addr=%p len=%zu access=%d lkey=%x rkey=%x\n",
+		   mr, pd->pd_id, addr, len, access, mr->ibv_mr.lkey,
+		   mr->ibv_mr.rkey);
+
+	return &mr->ibv_mr;
+}
+
+int qelr_dereg_mr(struct ibv_mr *mr)
+{
+	struct qelr_devctx *cxt = get_qelr_ctx(mr->context);
+	int rc;
+
+	rc = ibv_cmd_dereg_mr(mr);
+	if (rc)
+		return rc;
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_MR,
+		   "MR Deregister %p completed successfully\n", mr);
+
+	free(mr);
+
+	return 0;
+}
+
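+/* A CQE is valid when its toggle bit matches cq->chain_toggle; the
+ * toggle flips each time consumption wraps past the last element of
+ * the chain.
+ */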
+static void consume_cqe(struct qelr_cq *cq)
+{
+	if (cq->latest_cqe == cq->toggle_cqe)
+		cq->chain_toggle ^= RDMA_CQE_REQUESTER_TOGGLE_BIT_MASK;
+
+	cq->latest_cqe = qelr_chain_consume(&cq->chain);
+}
+
+static inline int qelr_cq_entries(int entries)
+{
+	/* FW requires an extra entry */
+	return entries + 1;
+}
+
+struct ibv_cq *qelr_create_cq(struct ibv_context *context, int cqe,
+			      struct ibv_comp_channel *channel,
+			      int comp_vector)
+{
+	struct qelr_devctx *cxt = get_qelr_ctx(context);
+	struct qelr_create_cq_resp resp;
+	struct qelr_create_cq_req cmd;
+	struct qelr_cq *cq;
+	int chain_size;
+	int rc;
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+		   "create cq: context=%p, cqe=%d, channel=%p, comp_vector=%d\n",
+		   context, cqe, channel, comp_vector);
+
+	if (!cqe || cqe > cxt->max_cqes) {
+		DP_ERR(cxt->dbg_fp,
+		       "create cq: failed. attempted to allocate %d cqes but valid range is 1...%d\n",
+		       cqe, cxt->max_cqes);
+		return NULL;
+	}
+
+	/* allocate CQ structure */
+	cq = calloc(1, sizeof(*cq));
+	if (!cq)
+		return NULL;
+
+	/* allocate CQ buffer */
+	chain_size = qelr_cq_entries(cqe) * QELR_CQE_SIZE;
+	rc = qelr_chain_alloc(&cq->chain, chain_size, cxt->kernel_page_size,
+			      QELR_CQE_SIZE);
+	if (rc)
+		goto err_0;
+
+	cmd.addr = (uintptr_t) cq->chain.addr;
+	cmd.len = cq->chain.size;
+	rc = ibv_cmd_create_cq(context, cqe, channel, comp_vector,
+			       &cq->ibv_cq, &cmd.ibv_cmd, sizeof(cmd),
+			       &resp.ibv_resp, sizeof(resp));
+	if (rc) {
+		DP_ERR(cxt->dbg_fp, "create cq: failed with rc = %d\n", rc);
+		goto err_1;
+	}
+
+	/* map the doorbell and prepare its data */
+	cq->db.data.icid = htole16(resp.icid);
+	cq->db.data.params = DB_AGG_CMD_SET <<
+		RDMA_PWM_VAL32_DATA_AGG_CMD_SHIFT;
+	cq->db_addr = cxt->db_addr + resp.db_offset;
+
+	/* point to the very last element; once we pass it, the toggle flips */
+	cq->toggle_cqe = qelr_chain_get_last_elem(&cq->chain);
+	cq->chain_toggle = RDMA_CQE_REQUESTER_TOGGLE_BIT_MASK;
+	cq->latest_cqe = NULL; /* must be different from chain_toggle */
+	consume_cqe(cq);
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+		   "create cq: successfully created %p\n", cq);
+
+	return &cq->ibv_cq;
+
+err_1:
+	qelr_chain_free(&cq->chain);
+err_0:
+	free(cq);
+
+	return NULL;
+}
+
+int qelr_destroy_cq(struct ibv_cq *ibv_cq)
+{
+	struct qelr_devctx *cxt = get_qelr_ctx(ibv_cq->context);
+	struct qelr_cq *cq = get_qelr_cq(ibv_cq);
+	int rc;
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ, "destroy cq: %p\n", cq);
+
+	rc = ibv_cmd_destroy_cq(ibv_cq);
+	if (rc) {
+		DP_ERR(cxt->dbg_fp,
+		       "destroy cq: failed to destroy %p, got %d.\n", cq, rc);
+		return rc;
+	}
+
+	qelr_chain_free(&cq->chain);
+	free(cq);
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+		   "destroy cq: successfully destroyed %p\n", cq);
+
+	return 0;
+}
+
+static void qelr_free_rq(struct qelr_qp *qp)
+{
+	free(qp->rqe_wr_id);
+}
+
+static void qelr_free_sq(struct qelr_qp *qp)
+{
+	free(qp->wqe_wr_id);
+}
+
+static void qelr_chain_free_sq(struct qelr_qp *qp)
+{
+	qelr_chain_free(&qp->sq.chain);
+}
+
+static void qelr_chain_free_rq(struct qelr_qp *qp)
+{
+	qelr_chain_free(&qp->rq.chain);
+}
+
+static inline int qelr_create_qp_buffers_sq(struct qelr_devctx *cxt,
+					    struct qelr_qp *qp,
+					    struct ibv_qp_init_attr *attrs)
+{
+	uint32_t max_send_wr, max_send_sges, max_send_buf;
+	int chain_size;
+	int rc;
+
+	/* SQ */
+	max_send_wr = attrs->cap.max_send_wr;
+	max_send_wr = max_t(uint32_t, max_send_wr, 1);
+	max_send_wr = min_t(uint32_t, max_send_wr, cxt->max_send_wr);
+	max_send_sges = max_send_wr * cxt->sges_per_send_wr;
+	max_send_buf = max_send_sges * QELR_SQE_ELEMENT_SIZE;
+
+	chain_size = max_send_buf;
+	rc = qelr_chain_alloc(&qp->sq.chain, chain_size, cxt->kernel_page_size,
+			      QELR_SQE_ELEMENT_SIZE);
+	if (rc)
+		DP_ERR(cxt->dbg_fp, "create qp: failed to map SQ, got %d", rc);
+
+	qp->sq.max_wr = max_send_wr;
+	qp->sq.max_sges = cxt->sges_per_send_wr;
+
+	return rc;
+}
+
+static inline int qelr_create_qp_buffers_rq(struct qelr_devctx *cxt,
+					    struct qelr_qp *qp,
+					    struct ibv_qp_init_attr *attrs)
+{
+	uint32_t max_recv_wr, max_recv_sges, max_recv_buf;
+	int chain_size;
+	int rc;
+
+	/* RQ */
+	max_recv_wr = attrs->cap.max_recv_wr;
+	max_recv_wr = max_t(uint32_t, max_recv_wr, 1);
+	max_recv_wr = min_t(uint32_t, max_recv_wr, cxt->max_recv_wr);
+	max_recv_sges = max_recv_wr * cxt->sges_per_recv_wr;
+	max_recv_buf = max_recv_sges * QELR_RQE_ELEMENT_SIZE;
+	qp->rq.max_wr = max_recv_wr;
+	qp->rq.max_sges = RDMA_MAX_SGE_PER_RQ_WQE;
+
+	chain_size = max_recv_buf;
+	rc = qelr_chain_alloc(&qp->rq.chain, chain_size, cxt->kernel_page_size,
+			      QELR_RQE_ELEMENT_SIZE);
+	if (rc)
+		DP_ERR(cxt->dbg_fp, "create qp: failed to map RQ, got %d", rc);
+
+	qp->rq.max_wr = max_recv_wr;
+	qp->rq.max_sges = cxt->sges_per_recv_wr;
+
+	return rc;
+}
+
+static inline int qelr_create_qp_buffers(struct qelr_devctx *cxt,
+					 struct qelr_qp *qp,
+					 struct ibv_qp_init_attr *attrs)
+{
+	int rc;
+
+	rc = qelr_create_qp_buffers_sq(cxt, qp, attrs);
+	if (rc)
+		return rc;
+
+	rc = qelr_create_qp_buffers_rq(cxt, qp, attrs);
+	if (rc) {
+		qelr_chain_free_sq(qp);
+		return rc;
+	}
+
+	return 0;
+}
+
+static inline int qelr_configure_qp_sq(struct qelr_devctx *cxt,
+				       struct qelr_qp *qp,
+				       struct ibv_qp_init_attr *attrs,
+				       struct qelr_create_qp_resp *resp)
+{
+	qp->sq.icid = resp->sq_icid;
+	qp->sq.db_data.data.icid = htole16(resp->sq_icid);
+	qp->sq.prod = 0;
+	qp->sq.db = cxt->db_addr + resp->sq_db_offset;
+	qp->sq.edpm_db = cxt->db_addr;
+
+	/* shadow SQ */
+	qp->wqe_wr_id = calloc(qp->sq.max_wr, sizeof(*qp->wqe_wr_id));
+	if (!qp->wqe_wr_id) {
+		DP_ERR(cxt->dbg_fp,
+		       "create qp: failed shadow SQ memory allocation\n");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+static inline int qelr_configure_qp_rq(struct qelr_devctx *cxt,
+				       struct qelr_qp *qp,
+				       struct ibv_qp_init_attr *attrs,
+				       struct qelr_create_qp_resp *resp)
+{
+	/* RQ */
+	qp->rq.icid = resp->rq_icid;
+	qp->rq.db_data.data.icid = htole16(resp->rq_icid);
+	qp->rq.db = cxt->db_addr + resp->rq_db_offset;
+	qp->rq.prod = 0;
+
+	/* shadow RQ */
+	qp->rqe_wr_id = calloc(qp->rq.max_wr, sizeof(*qp->rqe_wr_id));
+	if (!qp->rqe_wr_id) {
+		DP_ERR(cxt->dbg_fp,
+		       "create qp: failed shadow RQ memory allocation\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static inline int qelr_configure_qp(struct qelr_devctx *cxt, struct qelr_qp *qp,
+				    struct ibv_qp_init_attr *attrs,
+				    struct qelr_create_qp_resp *resp)
+{
+	int rc;
+
+	/* general */
+	pthread_spin_init(&qp->q_lock, PTHREAD_PROCESS_PRIVATE);
+	qp->qp_id = resp->qp_id;
+	qp->state = QELR_QPS_RST;
+	qp->sq_sig_all = attrs->sq_sig_all;
+	qp->atomic_supported = resp->atomic_supported;
+
+	rc = qelr_configure_qp_sq(cxt, qp, attrs, resp);
+	if (rc)
+		return rc;
+	rc = qelr_configure_qp_rq(cxt, qp, attrs, resp);
+	if (rc)
+		qelr_free_sq(qp);
+
+	return rc;
+}
+
+static inline void qelr_print_qp_init_attr(
+		struct qelr_devctx *cxt,
+		struct ibv_qp_init_attr *attr)
+{
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP,
+		   "create qp: send_cq=%p, recv_cq=%p, srq=%p, max_inline_data=%d, max_recv_sge=%d, max_recv_wr=%d, max_send_sge=%d, max_send_wr=%d, qp_type=%d, sq_sig_all=%d\n",
+		   attr->send_cq, attr->recv_cq, attr->srq,
+		   attr->cap.max_inline_data, attr->cap.max_recv_sge,
+		   attr->cap.max_recv_wr, attr->cap.max_send_sge,
+		   attr->cap.max_send_wr, attr->qp_type, attr->sq_sig_all);
+}
+
+static inline void
+qelr_create_qp_configure_sq_req(struct qelr_qp *qp,
+				struct qelr_create_qp_req *req)
+{
+	req->sq_addr = (uintptr_t)qp->sq.chain.addr;
+	req->sq_len = qp->sq.chain.size;
+}
+
+static inline void
+qelr_create_qp_configure_rq_req(struct qelr_qp *qp,
+				struct qelr_create_qp_req *req)
+{
+	req->rq_addr = (uintptr_t)qp->rq.chain.addr;
+	req->rq_len = qp->rq.chain.size;
+}
+
+static inline void
+qelr_create_qp_configure_req(struct qelr_qp *qp,
+			     struct qelr_create_qp_req *req)
+{
+	memset(req, 0, sizeof(*req));
+	req->qp_handle_hi = PTR_HI(qp);
+	req->qp_handle_lo = PTR_LO(qp);
+	qelr_create_qp_configure_sq_req(qp, req);
+	qelr_create_qp_configure_rq_req(qp, req);
+}
+
+struct ibv_qp *qelr_create_qp(struct ibv_pd *pd,
+			      struct ibv_qp_init_attr *attrs)
+{
+	struct qelr_devctx *cxt = get_qelr_ctx(pd->context);
+	struct qelr_create_qp_resp resp;
+	struct qelr_create_qp_req req;
+	struct qelr_qp *qp;
+	int rc;
+
+	qelr_print_qp_init_attr(cxt, attrs);
+
+	qp = calloc(1, sizeof(*qp));
+	if (!qp)
+		return NULL;
+
+	rc = qelr_create_qp_buffers(cxt, qp, attrs);
+	if (rc)
+		goto err0;
+
+	qelr_create_qp_configure_req(qp, &req);
+
+	rc = ibv_cmd_create_qp(pd, &qp->ibv_qp, attrs, &req.ibv_qp, sizeof(req),
+			       &resp.ibv_resp, sizeof(resp));
+	if (rc) {
+		DP_ERR(cxt->dbg_fp,
+		       "create qp: failed on ibv_cmd_create_qp with %d\n", rc);
+		goto err1;
+	}
+
+	rc = qelr_configure_qp(cxt, qp, attrs, &resp);
+	if (rc)
+		goto err2;
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP,
+		   "create qp: successfully created %p. handle_hi=%x handle_lo=%x\n",
+		   qp, req.qp_handle_hi, req.qp_handle_lo);
+
+	return &qp->ibv_qp;
+
+err2:
+	rc = ibv_cmd_destroy_qp(&qp->ibv_qp);
+	if (rc)
+		DP_ERR(cxt->dbg_fp, "create qp: fatal fault. rc=%d\n", rc);
+err1:
+	qelr_chain_free_sq(qp);
+	qelr_chain_free_rq(qp);
+err0:
+	free(qp);
+
+	return NULL;
+}
+
+static void qelr_print_ah_attr(struct qelr_devctx *cxt, struct ibv_ah_attr *attr)
+{
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP,
+		   "grh.dgid=[%lx:%lx], grh.flow_label=%d, grh.sgid_index=%d, grh.hop_limit=%d, grh.traffic_class=%d, dlid=%d, sl=%d, src_path_bits=%d, static_rate = %d, port_num=%d\n",
+		   attr->grh.dgid.global.interface_id,
+		   attr->grh.dgid.global.subnet_prefix,
+		   attr->grh.flow_label, attr->grh.sgid_index,
+		   attr->grh.hop_limit, attr->grh.traffic_class, attr->dlid,
+		   attr->sl, attr->src_path_bits,
+		   attr->static_rate, attr->port_num);
+}
+
+static void qelr_print_qp_attr(struct qelr_devctx *cxt, struct ibv_qp_attr *attr)
+{
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP,
+		   "\tqp_state=%d\tcur_qp_state=%d\tpath_mtu=%d\tpath_mig_state=%d\tqkey=%d\trq_psn=%d\tsq_psn=%d\tdest_qp_num=%d\tqp_access_flags=%d\tmax_inline_data=%d\tmax_recv_sge=%d\tmax_recv_wr=%d\tmax_send_sge=%d\tmax_send_wr=%d\tpkey_index=%d\talt_pkey_index=%d\ten_sqd_async_notify=%d\tsq_draining=%d\tmax_rd_atomic=%d\tmax_dest_rd_atomic=%d\tmin_rnr_timer=%d\tport_num=%d\ttimeout=%d\tretry_cnt=%d\trnr_retry=%d\talt_port_num=%d\talt_timeout=%d\n",
+		   attr->qp_state, attr->cur_qp_state, attr->path_mtu,
+		   attr->path_mig_state, attr->qkey, attr->rq_psn, attr->sq_psn,
+		   attr->dest_qp_num, attr->qp_access_flags,
+		   attr->cap.max_inline_data, attr->cap.max_recv_sge,
+		   attr->cap.max_recv_wr, attr->cap.max_send_sge,
+		   attr->cap.max_send_wr, attr->pkey_index,
+		   attr->alt_pkey_index, attr->en_sqd_async_notify,
+		   attr->sq_draining, attr->max_rd_atomic,
+		   attr->max_dest_rd_atomic, attr->min_rnr_timer,
+		   attr->port_num, attr->timeout, attr->retry_cnt,
+		   attr->rnr_retry, attr->alt_port_num, attr->alt_timeout);
+
+	qelr_print_ah_attr(cxt, &attr->ah_attr);
+	qelr_print_ah_attr(cxt, &attr->alt_ah_attr);
+}
+
+int qelr_query_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr,
+		    int attr_mask, struct ibv_qp_init_attr *init_attr)
+{
+	struct ibv_query_qp cmd;
+	struct qelr_devctx *cxt = get_qelr_ctx(qp->context);
+	int rc;
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP, "QP Query %p, attr_mask=0x%x\n",
+		   get_qelr_qp(qp), attr_mask);
+
+	rc = ibv_cmd_query_qp(qp, attr, attr_mask,
+			      init_attr, &cmd, sizeof(cmd));
+
+	qelr_print_qp_attr(cxt, attr);
+
+	return rc;
+}
+
+static enum qelr_qp_state get_qelr_qp_state(enum ibv_qp_state qps)
+{
+	switch (qps) {
+	case IBV_QPS_RESET:
+		return QELR_QPS_RST;
+	case IBV_QPS_INIT:
+		return QELR_QPS_INIT;
+	case IBV_QPS_RTR:
+		return QELR_QPS_RTR;
+	case IBV_QPS_RTS:
+		return QELR_QPS_RTS;
+	case IBV_QPS_SQD:
+		return QELR_QPS_SQD;
+	case IBV_QPS_SQE:
+		return QELR_QPS_SQE;
+	case IBV_QPS_ERR:
+	default:
+		return QELR_QPS_ERR;
+	};
+}
+
+static void qelr_reset_qp_hwq_info(struct qelr_qp_hwq_info *q)
+{
+	qelr_chain_reset(&q->chain);
+	q->prod = 0;
+	q->cons = 0;
+	q->wqe_cons = 0;
+	q->db_data.data.value = 0;
+}
+
+static int qelr_update_qp_state(struct qelr_qp *qp,
+				enum ibv_qp_state new_ib_state)
+{
+	int status = 0;
+	enum qelr_qp_state new_state;
+
+	new_state = get_qelr_qp_state(new_ib_state);
+
+	pthread_spin_lock(&qp->q_lock);
+
+	if (new_state == qp->state) {
+		pthread_spin_unlock(&qp->q_lock);
+		return 0;
+	}
+
+	switch (qp->state) {
+	case QELR_QPS_RST:
+		switch (new_state) {
+		case QELR_QPS_INIT:
+			qp->prev_wqe_size = 0;
+			qelr_reset_qp_hwq_info(&qp->sq);
+			qelr_reset_qp_hwq_info(&qp->rq);
+			break;
+		default:
+			status = -EINVAL;
+			break;
+		};
+		break;
+	case QELR_QPS_INIT:
+		/* INIT->XXX */
+		switch (new_state) {
+		case QELR_QPS_RTR:
+			/* Update doorbell (in case post_recv was done before
+			 * move to RTR)
+			 */
+			wmb();
+			writel(qp->rq.db_data.raw, qp->rq.db);
+			wc_wmb();
+			break;
+		case QELR_QPS_ERR:
+			break;
+		default:
+			/* invalid state change. */
+			status = -EINVAL;
+			break;
+		};
+		break;
+	case QELR_QPS_RTR:
+		/* RTR->XXX */
+		switch (new_state) {
+		case QELR_QPS_RTS:
+			break;
+		case QELR_QPS_ERR:
+			break;
+		default:
+			/* invalid state change. */
+			status = -EINVAL;
+			break;
+		};
+		break;
+	case QELR_QPS_RTS:
+		/* RTS->XXX */
+		switch (new_state) {
+		case QELR_QPS_SQD:
+		case QELR_QPS_SQE:
+			break;
+		case QELR_QPS_ERR:
+			break;
+		default:
+			/* invalid state change. */
+			status = -EINVAL;
+			break;
+		};
+		break;
+	case QELR_QPS_SQD:
+		/* SQD->XXX */
+		switch (new_state) {
+		case QELR_QPS_RTS:
+		case QELR_QPS_SQE:
+		case QELR_QPS_ERR:
+			break;
+		default:
+			/* invalid state change. */
+			status = -EINVAL;
+			break;
+		};
+		break;
+	case QELR_QPS_SQE:
+		switch (new_state) {
+		case QELR_QPS_RTS:
+		case QELR_QPS_ERR:
+			break;
+		default:
+			/* invalid state change. */
+			status = -EINVAL;
+			break;
+		};
+		break;
+	case QELR_QPS_ERR:
+		/* ERR->XXX */
+		switch (new_state) {
+		case QELR_QPS_RST:
+			break;
+		default:
+			status = -EINVAL;
+			break;
+		};
+		break;
+	default:
+		status = -EINVAL;
+		break;
+	};
+	if (!status)
+		qp->state = new_state;
+
+	pthread_spin_unlock(&qp->q_lock);
+
+	return status;
+}
+
+int qelr_modify_qp(struct ibv_qp *ibqp, struct ibv_qp_attr *attr,
+		     int attr_mask)
+{
+	struct ibv_modify_qp cmd;
+	struct qelr_qp *qp = get_qelr_qp(ibqp);
+	struct qelr_devctx *cxt = get_qelr_ctx(ibqp->context);
+	int rc;
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP, "QP Modify %p, attr_mask=0x%x\n",
+		   qp, attr_mask);
+
+	qelr_print_qp_attr(cxt, attr);
+
+	rc = ibv_cmd_modify_qp(ibqp, attr, attr_mask, &cmd, sizeof(cmd));
+
+	if (!rc && (attr_mask & IBV_QP_STATE)) {
+		DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP, "QP Modify state %d->%d\n",
+			   qp->state, attr->qp_state);
+		qelr_update_qp_state(qp, attr->qp_state);
+	}
+
+	return rc;
+}
+
+int qelr_destroy_qp(struct ibv_qp *ibqp)
+{
+	struct qelr_devctx *cxt = get_qelr_ctx(ibqp->context);
+	struct qelr_qp *qp = get_qelr_qp(ibqp);
+	int rc = 0;
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP, "destroy qp: %p\n", qp);
+
+	rc = ibv_cmd_destroy_qp(ibqp);
+	if (rc) {
+		DP_ERR(cxt->dbg_fp,
+		       "destroy qp: failed to destroy %p, got %d.\n", qp, rc);
+		return rc;
+	}
+
+	qelr_free_sq(qp);
+	qelr_free_rq(qp);
+	qelr_chain_free_sq(qp);
+	qelr_chain_free_rq(qp);
+	free(qp);
+
+	DP_VERBOSE(cxt->dbg_fp, QELR_MSG_QP,
+		   "destroy qp: successfully destroyed %p\n", qp);
+
+	return 0;
+}
+
+static int sge_data_len(struct ibv_sge *sg_list, int num_sge)
+{
+	int i, len = 0;
+
+	for (i = 0; i < num_sge; i++)
+		len += sg_list[i].length;
+	return len;
+}
+
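+/* Byte-swap each 64-bit word of an inline WQE segment; on a
+ * little-endian host htole64() is a no-op, so this reduces to
+ * htobe64() on each word.
+ */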
+static void swap_wqe_data64(uint64_t *p)
+{
+	int i;
+
+	for (i = 0; i < ROCE_WQE_ELEM_SIZE / sizeof(uint64_t); i++, p++)
+		*p = htobe64(htole64(*p));
+}
+
+static void qelr_init_edpm_info(struct qelr_qp *qp, struct qelr_devctx *cxt)
+{
+	memset(&qp->edpm, 0, sizeof(qp->edpm));
+
+	qp->edpm.rdma_ext = (struct qelr_rdma_ext *)&qp->edpm.dpm_payload;
+	if (qelr_chain_is_full(&qp->sq.chain))
+		qp->edpm.is_edpm = 1;
+}
+
+#define QELR_IB_OPCODE_SEND_ONLY                         0x04
+#define QELR_IB_OPCODE_SEND_ONLY_WITH_IMMEDIATE          0x05
+#define QELR_IB_OPCODE_RDMA_WRITE_ONLY                   0x0a
+#define QELR_IB_OPCODE_RDMA_WRITE_ONLY_WITH_IMMEDIATE    0x0b
+#define QELR_IS_IMM(opcode) \
+	((opcode == QELR_IB_OPCODE_SEND_ONLY_WITH_IMMEDIATE) || \
+	 (opcode == QELR_IB_OPCODE_RDMA_WRITE_ONLY_WITH_IMMEDIATE))
+
+static inline void qelr_edpm_set_msg_data(struct qelr_qp *qp,
+					  uint8_t opcode,
+					  uint16_t length,
+					  uint8_t se,
+					  uint8_t comp)
+{
+	uint32_t wqe_size = length +
+		(QELR_IS_IMM(opcode) ? sizeof(uint32_t) : 0);
+	uint32_t dpm_size = wqe_size + sizeof(struct db_roce_dpm_data);
+
+	if (!qp->edpm.is_edpm)
+		return;
+
+	SET_FIELD(qp->edpm.msg.data.params.params,
+		  DB_ROCE_DPM_PARAMS_SIZE,
+		  (dpm_size + sizeof(uint64_t) - 1) / sizeof(uint64_t));
+
+	SET_FIELD(qp->edpm.msg.data.params.params,
+		  DB_ROCE_DPM_PARAMS_DPM_TYPE, DPM_ROCE);
+
+	SET_FIELD(qp->edpm.msg.data.params.params,
+		  DB_ROCE_DPM_PARAMS_OPCODE,
+		  opcode);
+
+	SET_FIELD(qp->edpm.msg.data.params.params,
+		  DB_ROCE_DPM_PARAMS_WQE_SIZE,
+		  wqe_size);
+
+	SET_FIELD(qp->edpm.msg.data.params.params,
+		  DB_ROCE_DPM_PARAMS_COMPLETION_FLG, comp ? 1 : 0);
+
+	SET_FIELD(qp->edpm.msg.data.params.params,
+		  DB_ROCE_DPM_PARAMS_S_FLG,
+		  se ? 1 : 0);
+}
+
+static inline void qelr_edpm_set_inv_imm(struct qelr_qp *qp,
+					 uint32_t inv_key_or_imm_data)
+{
+	if (!qp->edpm.is_edpm)
+		return;
+
+	memcpy(&qp->edpm.dpm_payload[qp->edpm.dpm_payload_offset],
+	       &inv_key_or_imm_data, sizeof(inv_key_or_imm_data));
+
+	qp->edpm.dpm_payload_offset += sizeof(inv_key_or_imm_data);
+	qp->edpm.dpm_payload_size += sizeof(inv_key_or_imm_data);
+}
+
+static inline void qelr_edpm_set_rdma_ext(struct qelr_qp *qp,
+					  uint64_t remote_addr,
+					  uint32_t rkey)
+{
+	if (!qp->edpm.is_edpm)
+		return;
+
+	qp->edpm.rdma_ext->remote_va = htonll(remote_addr);
+	qp->edpm.rdma_ext->remote_key = htonl(rkey);
+	qp->edpm.dpm_payload_offset += sizeof(*qp->edpm.rdma_ext);
+	qp->edpm.dpm_payload_size += sizeof(*qp->edpm.rdma_ext);
+}
+
+static inline void qelr_edpm_set_payload(struct qelr_qp *qp, char *buf,
+					 uint32_t length)
+{
+	if (!qp->edpm.is_edpm)
+		return;
+
+	memcpy(&qp->edpm.dpm_payload[qp->edpm.dpm_payload_offset],
+	       buf,
+	       length);
+
+	qp->edpm.dpm_payload_offset += length;
+}
+
+#define MIN(X, Y) (((X) < (Y)) ? (X) : (Y))
+
+static uint32_t qelr_prepare_sq_inline_data(struct qelr_qp *qp,
+					    uint8_t *wqe_size,
+					    struct ibv_send_wr *wr,
+					    struct ibv_send_wr **bad_wr,
+					    uint8_t *bits, uint8_t bit)
+{
+	int i, seg_siz;
+	char *seg_prt, *wqe;
+	uint32_t data_size = sge_data_len(wr->sg_list, wr->num_sge);
+
+	if (data_size > ROCE_REQ_MAX_INLINE_DATA_SIZE) {
+		DP_ERR(stderr, "Too much inline data in WR: %d\n", data_size);
+		*bad_wr = wr;
+		return 0;
+	}
+
+	if (!data_size)
+		return data_size;
+
+	/* set the bit */
+	*bits |= bit;
+
+	seg_prt = NULL;
+	wqe = NULL;
+	seg_siz = 0;
+
+	/* copy data inline */
+	for (i = 0; i < wr->num_sge; i++) {
+		uint32_t len = wr->sg_list[i].length;
+		void *src = (void *)wr->sg_list[i].addr;
+
+		qelr_edpm_set_payload(qp, src, wr->sg_list[i].length);
+
+		while (len > 0) {
+			uint32_t cur;
+
+			/* new segment required */
+			if (!seg_siz) {
+				wqe = (char *)qelr_chain_produce(&qp->sq.chain);
+				seg_prt = wqe;
+				seg_siz = sizeof(struct rdma_sq_common_wqe);
+				(*wqe_size)++;
+			}
+
+			/* calculate currently allowed length */
+			cur = MIN(len, seg_siz);
+
+			memcpy(seg_prt, src, cur);
+
+			/* update segment variables */
+			seg_prt += cur;
+			seg_siz -= cur;
+			/* update sge variables */
+			src += cur;
+			len -= cur;
+
+			/* swap fully-completed segments */
+			if (!seg_siz)
+				swap_wqe_data64((uint64_t *)wqe);
+		}
+	}
+
+	/* swap last not completed segment */
+	if (seg_siz)
+		swap_wqe_data64((uint64_t *)wqe);
+
+	if (qp->edpm.is_edpm) {
+		qp->edpm.dpm_payload_size += data_size;
+		qp->edpm.rdma_ext->dma_length = htonl(data_size);
+	}
+
+	return data_size;
+}
+
+static uint32_t qelr_prepare_sq_sges(struct qelr_qp *qp,
+				     uint8_t *wqe_size,
+				     struct ibv_send_wr *wr)
+{
+	uint32_t data_size = 0;
+	int i;
+
+	for (i = 0; i < wr->num_sge; i++) {
+		struct rdma_sq_sge *sge = qelr_chain_produce(&qp->sq.chain);
+
+		TYPEPTR_ADDR_SET(sge, addr, wr->sg_list[i].addr);
+		sge->l_key = htole32(wr->sg_list[i].lkey);
+		sge->length = htole32(wr->sg_list[i].length);
+		data_size += wr->sg_list[i].length;
+	}
+
+	if (wqe_size)
+		*wqe_size += wr->num_sge;
+
+	return data_size;
+}
+
+static uint32_t qelr_prepare_sq_rdma_data(struct qelr_qp *qp,
+					  struct rdma_sq_rdma_wqe_1st *rwqe,
+					  struct rdma_sq_rdma_wqe_2nd *rwqe2,
+					  struct ibv_send_wr *wr,
+					  struct ibv_send_wr **bad_wr)
+{
+	memset(rwqe2, 0, sizeof(*rwqe2));
+	rwqe2->r_key = htole32(wr->wr.rdma.rkey);
+	TYPEPTR_ADDR_SET(rwqe2, remote_va, wr->wr.rdma.remote_addr);
+
+	if (wr->send_flags & IBV_SEND_INLINE) {
+		uint8_t flags = 0;
+
+		SET_FIELD2(flags, RDMA_SQ_RDMA_WQE_1ST_INLINE_FLG, 1);
+		return qelr_prepare_sq_inline_data(qp, &rwqe->wqe_size, wr,
+						   bad_wr, &rwqe->flags, flags);
+	}
+	/* non-inline path; edpm cannot be used */
+	qp->edpm.is_edpm = 0;
+
+	return qelr_prepare_sq_sges(qp, &rwqe->wqe_size, wr);
+}
+
+static uint32_t qelr_prepare_sq_send_data(struct qelr_qp *qp,
+					  struct rdma_sq_send_wqe_1st *swqe,
+					  struct rdma_sq_send_wqe_2st *swqe2,
+					  struct ibv_send_wr *wr,
+					  struct ibv_send_wr **bad_wr)
+{
+	memset(swqe2, 0, sizeof(*swqe2));
+	if (wr->send_flags & IBV_SEND_INLINE) {
+		uint8_t flags = 0;
+
+		SET_FIELD2(flags, RDMA_SQ_SEND_WQE_INLINE_FLG, 1);
+		return qelr_prepare_sq_inline_data(qp, &swqe->wqe_size, wr,
+						   bad_wr, &swqe->flags, flags);
+	}
+
+	/* non-inline path; edpm cannot be used */
+	qp->edpm.is_edpm = 0;
+
+	return qelr_prepare_sq_sges(qp, &swqe->wqe_size, wr);
+}
+
+static enum ibv_wc_opcode qelr_ibv_to_wc_opcode(enum ibv_wr_opcode opcode)
+{
+	switch (opcode) {
+	case IBV_WR_RDMA_WRITE:
+	case IBV_WR_RDMA_WRITE_WITH_IMM:
+		return IBV_WC_RDMA_WRITE;
+	case IBV_WR_SEND_WITH_IMM:
+	case IBV_WR_SEND:
+		return IBV_WC_SEND;
+	case IBV_WR_RDMA_READ:
+		return IBV_WC_RDMA_READ;
+	case IBV_WR_ATOMIC_CMP_AND_SWP:
+		return IBV_WC_COMP_SWAP;
+	case IBV_WR_ATOMIC_FETCH_AND_ADD:
+		return IBV_WC_FETCH_ADD;
+	default:
+		return IBV_WC_SEND;
+	}
+}
+
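+/* EDPM: push the doorbell record and the WQE payload directly through
+ * the doorbell BAR, flushing the write-combining buffer after every
+ * 64 bytes.
+ */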
+static void doorbell_edpm_qp(struct qelr_qp *qp)
+{
+	uint32_t offset = 0;
+	uint64_t data;
+	uint64_t *dpm_payload = (uint64_t *)qp->edpm.dpm_payload;
+	uint32_t num_dwords;
+	int bytes = 0;
+
+	if (!qp->edpm.is_edpm)
+		return;
+
+	wmb();
+
+	qp->edpm.msg.data.icid = qp->sq.db_data.data.icid;
+	qp->edpm.msg.data.prod_val = qp->sq.db_data.data.value;
+
+	writeq(qp->edpm.msg.raw, qp->sq.edpm_db);
+
+	bytes += sizeof(uint64_t);
+
+	num_dwords = (qp->edpm.dpm_payload_size + sizeof(uint64_t) - 1) /
+		sizeof(uint64_t);
+
+	while (offset < num_dwords) {
+		data = dpm_payload[offset];
+
+		writeq(data,
+		       qp->sq.edpm_db + sizeof(qp->edpm.msg.data) + offset *
+		       sizeof(uint64_t));
+
+		bytes += sizeof(uint64_t);
+		/* Need to place a barrier after every 64 bytes */
+		if (bytes == 64) {
+			wc_wmb();
+			bytes = 0;
+		}
+		offset++;
+	}
+
+	wc_wmb();
+}
+
+int qelr_post_send(struct ibv_qp *ib_qp, struct ibv_send_wr *wr,
+		   struct ibv_send_wr **bad_wr)
+{
+	int status = 0;
+	struct qelr_qp *qp = get_qelr_qp(ib_qp);
+	struct qelr_devctx *cxt = get_qelr_ctx(ib_qp->context);
+	uint8_t se, comp, fence;
+	uint16_t db_val;
+	*bad_wr = NULL;
+
+	pthread_spin_lock(&qp->q_lock);
+
+	if (qp->state != QELR_QPS_RTS && qp->state != QELR_QPS_SQD) {
+		pthread_spin_unlock(&qp->q_lock);
+		*bad_wr = wr;
+		return -EINVAL;
+	}
+
+	while (wr) {
+		struct rdma_sq_common_wqe *wqe;
+		struct rdma_sq_send_wqe_1st *swqe;
+		struct rdma_sq_send_wqe_2st *swqe2;
+		struct rdma_sq_rdma_wqe_1st *rwqe;
+		struct rdma_sq_rdma_wqe_2nd *rwqe2;
+		struct rdma_sq_atomic_wqe_1st *awqe1;
+		struct rdma_sq_atomic_wqe_2nd *awqe2;
+		struct rdma_sq_atomic_wqe_3rd *awqe3;
+
+		if ((qelr_chain_get_elem_left_u32(&qp->sq.chain) <
+					QELR_MAX_SQ_WQE_SIZE) ||
+		     (wr->num_sge > qp->sq.max_sges)) {
+			status = -ENOMEM;
+			*bad_wr = wr;
+			break;
+		}
+
+		qelr_init_edpm_info(qp, cxt);
+
+		wqe = qelr_chain_produce(&qp->sq.chain);
+
+		comp = (!!(wr->send_flags & IBV_SEND_SIGNALED)) ||
+				(!!qp->sq_sig_all);
+		qp->wqe_wr_id[qp->sq.prod].signaled = comp;
+
+		/* common fields */
+		wqe->flags = 0;
+		se = !!(wr->send_flags & IBV_SEND_SOLICITED);
+		fence = !!(wr->send_flags & IBV_SEND_FENCE);
+		SET_FIELD2(wqe->flags, RDMA_SQ_COMMON_WQE_SE_FLG, se);
+		SET_FIELD2(wqe->flags, RDMA_SQ_COMMON_WQE_COMP_FLG, comp);
+		SET_FIELD2(wqe->flags, RDMA_SQ_COMMON_WQE_RD_FENCE_FLG, fence);
+		wqe->prev_wqe_size = qp->prev_wqe_size;
+
+		qp->wqe_wr_id[qp->sq.prod].opcode =
+		qelr_ibv_to_wc_opcode(wr->opcode);
+
+		switch (wr->opcode) {
+		case IBV_WR_SEND_WITH_IMM:
+			wqe->req_type = RDMA_SQ_REQ_TYPE_SEND_WITH_IMM;
+			swqe = (struct rdma_sq_send_wqe_1st *)wqe;
+
+			swqe->wqe_size = 2;
+			swqe2 = (struct rdma_sq_send_wqe_2st *)
+					qelr_chain_produce(&qp->sq.chain);
+			swqe->inv_key_or_imm_data =
+					htonl(htole32(wr->imm_data));
+			qelr_edpm_set_inv_imm(qp, swqe->inv_key_or_imm_data);
+			swqe->length = htole32(
+					qelr_prepare_sq_send_data(qp, swqe,
+								  swqe2, wr,
+								  bad_wr));
+			qelr_edpm_set_msg_data(qp,
+					       QELR_IB_OPCODE_SEND_ONLY_WITH_IMMEDIATE,
+					       swqe->length,
+					       se, comp);
+			qp->wqe_wr_id[qp->sq.prod].wqe_size = swqe->wqe_size;
+			qp->prev_wqe_size = swqe->wqe_size;
+			qp->wqe_wr_id[qp->sq.prod].bytes_len = swqe->length;
+			FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				      "SEND w/ IMM length = %d imm data=%x\n",
+				      swqe->length, wr->imm_data);
+			break;
+
+		case IBV_WR_SEND:
+			wqe->req_type = RDMA_SQ_REQ_TYPE_SEND;
+			swqe = (struct rdma_sq_send_wqe_1st *)wqe;
+
+			swqe->wqe_size = 2;
+			swqe2 = (struct rdma_sq_send_wqe_2st *)
+					qelr_chain_produce(&qp->sq.chain);
+			swqe->length = htole32(
+					qelr_prepare_sq_send_data(qp, swqe,
+								  swqe2, wr,
+								  bad_wr));
+			qelr_edpm_set_msg_data(qp, QELR_IB_OPCODE_SEND_ONLY,
+					       swqe->length,
+					       se, comp);
+			qp->wqe_wr_id[qp->sq.prod].wqe_size = swqe->wqe_size;
+			qp->prev_wqe_size = swqe->wqe_size;
+			qp->wqe_wr_id[qp->sq.prod].bytes_len = swqe->length;
+			FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				      "SEND w/o IMM length = %d\n",
+				      swqe->length);
+			break;
+
+		case IBV_WR_RDMA_WRITE_WITH_IMM:
+			wqe->req_type = RDMA_SQ_REQ_TYPE_RDMA_WR_WITH_IMM;
+			rwqe = (struct rdma_sq_rdma_wqe_1st *)wqe;
+
+			rwqe->wqe_size = 2;
+			rwqe->imm_data = htonl(htole32(wr->imm_data));
+			qelr_edpm_set_rdma_ext(qp, wr->wr.rdma.remote_addr,
+					       wr->wr.rdma.rkey);
+			qelr_edpm_set_inv_imm(qp, rwqe->imm_data);
+			rwqe2 = (struct rdma_sq_rdma_wqe_2nd *)
+					qelr_chain_produce(&qp->sq.chain);
+			rwqe->length = htole32(
+					qelr_prepare_sq_rdma_data(qp, rwqe,
+								  rwqe2, wr,
+								  bad_wr));
+			qelr_edpm_set_msg_data(qp,
+					       QELR_IB_OPCODE_RDMA_WRITE_ONLY_WITH_IMMEDIATE,
+					       rwqe->length + sizeof(*qp->edpm.rdma_ext),
+					       se, comp);
+			qp->wqe_wr_id[qp->sq.prod].wqe_size = rwqe->wqe_size;
+			qp->prev_wqe_size = rwqe->wqe_size;
+			qp->wqe_wr_id[qp->sq.prod].bytes_len = rwqe->length;
+			FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				      "RDMA WRITE w/ IMM length = %d imm data=%x\n",
+				      rwqe->length, rwqe->imm_data);
+			break;
+
+		case IBV_WR_RDMA_WRITE:
+			wqe->req_type = RDMA_SQ_REQ_TYPE_RDMA_WR;
+			rwqe = (struct rdma_sq_rdma_wqe_1st *)wqe;
+
+			rwqe->wqe_size = 2;
+			qelr_edpm_set_rdma_ext(qp, wr->wr.rdma.remote_addr,
+					       wr->wr.rdma.rkey);
+			rwqe2 = (struct rdma_sq_rdma_wqe_2nd *)
+					qelr_chain_produce(&qp->sq.chain);
+			rwqe->length = htole32(
+				qelr_prepare_sq_rdma_data(qp, rwqe, rwqe2, wr,
+							  bad_wr));
+			qelr_edpm_set_msg_data(qp,
+					       QELR_IB_OPCODE_RDMA_WRITE_ONLY,
+					       rwqe->length + sizeof(*qp->edpm.rdma_ext),
+					       se, comp);
+			qp->wqe_wr_id[qp->sq.prod].wqe_size = rwqe->wqe_size;
+			qp->prev_wqe_size = rwqe->wqe_size;
+			qp->wqe_wr_id[qp->sq.prod].bytes_len = rwqe->length;
+			FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				      "RDMA WRITE w/o IMM length = %d\n",
+				      rwqe->length);
+			break;
+
+		case IBV_WR_RDMA_READ:
+			wqe->req_type = RDMA_SQ_REQ_TYPE_RDMA_RD;
+			rwqe = (struct rdma_sq_rdma_wqe_1st *)wqe;
+
+			rwqe->wqe_size = 2;
+			rwqe2 = (struct rdma_sq_rdma_wqe_2nd *)
+					qelr_chain_produce(&qp->sq.chain);
+			rwqe->length = htole32(
+					qelr_prepare_sq_rdma_data(qp, rwqe,
+								  rwqe2, wr,
+								  bad_wr));
+
+			qp->wqe_wr_id[qp->sq.prod].wqe_size = rwqe->wqe_size;
+			qp->prev_wqe_size = rwqe->wqe_size;
+			qp->wqe_wr_id[qp->sq.prod].bytes_len = rwqe->length;
+			FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				      "RDMA READ length = %d\n", rwqe->length);
+			break;
+
+		case IBV_WR_ATOMIC_CMP_AND_SWP:
+		case IBV_WR_ATOMIC_FETCH_AND_ADD:
+			FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ, "ATOMIC\n");
+			if (!qp->atomic_supported) {
+				DP_ERR(cxt->dbg_fp,
+				       "Atomic not supported on this machine\n");
+				status = -EINVAL;
+				*bad_wr = wr;
+				break;
+			}
+			awqe1 = (struct rdma_sq_atomic_wqe_1st *)wqe;
+			awqe1->wqe_size = 4;
+
+			awqe2 = (struct rdma_sq_atomic_wqe_2nd *)
+					qelr_chain_produce(&qp->sq.chain);
+			TYPEPTR_ADDR_SET(awqe2, remote_va,
+					 wr->wr.atomic.remote_addr);
+			awqe2->r_key = htole32(wr->wr.atomic.rkey);
+
+			awqe3 = (struct rdma_sq_atomic_wqe_3rd *)
+				qelr_chain_produce(&qp->sq.chain);
+
+			if (wr->opcode == IBV_WR_ATOMIC_FETCH_AND_ADD) {
+				wqe->req_type = RDMA_SQ_REQ_TYPE_ATOMIC_ADD;
+				TYPEPTR_ADDR_SET(awqe3, swap_data,
+						 wr->wr.atomic.compare_add);
+			} else {
+				wqe->req_type =
+					RDMA_SQ_REQ_TYPE_ATOMIC_CMP_AND_SWAP;
+				TYPEPTR_ADDR_SET(awqe3, swap_data,
+						 wr->wr.atomic.swap);
+				TYPEPTR_ADDR_SET(awqe3, cmp_data,
+						 wr->wr.atomic.compare_add);
+			}
+
+			qelr_prepare_sq_sges(qp, NULL, wr);
+
+			qp->wqe_wr_id[qp->sq.prod].wqe_size = awqe1->wqe_size;
+			qp->prev_wqe_size = awqe1->wqe_size;
+
+			break;
+
+		default:
+			*bad_wr = wr;
+			break;
+		}
+
+		if (*bad_wr) {
+			/* restore prod to its position before this WR was
+			 * processed
+			 */
+			qelr_chain_set_prod(&qp->sq.chain,
+					    le16toh(qp->sq.db_data.data.value),
+					    wqe);
+			/* restore prev_wqe_size */
+			qp->prev_wqe_size = wqe->prev_wqe_size;
+			status = -EINVAL;
+			DP_ERR(cxt->dbg_fp, "POST SEND FAILED\n");
+			break; /* out of the loop */
+		}
+
+		qp->wqe_wr_id[qp->sq.prod].wr_id = wr->wr_id;
+
+		qelr_inc_sw_prod_u16(&qp->sq);
+
+		db_val = le16toh(qp->sq.db_data.data.value) + 1;
+		qp->sq.db_data.data.value = htole16(db_val);
+
+		wr = wr->next;
+
+		/* Doorbell */
+		doorbell_edpm_qp(qp);
+	}
+
+	if (!qp->edpm.is_edpm) {
+		wmb();
+
+		writel(qp->sq.db_data.raw, qp->sq.db);
+
+		wc_wmb();
+	}
+
+	pthread_spin_unlock(&qp->q_lock);
+
+	return status;
+}
+
+int qelr_post_recv(struct ibv_qp *ibqp, struct ibv_recv_wr *wr,
+		   struct ibv_recv_wr **bad_wr)
+{
+	int status = 0;
+	struct qelr_qp *qp =  get_qelr_qp(ibqp);
+	struct qelr_devctx *cxt = get_qelr_ctx(ibqp->context);
+	uint16_t db_val;
+
+	pthread_spin_lock(&qp->q_lock);
+
+	if (qp->state == QELR_QPS_RST || qp->state == QELR_QPS_ERR) {
+		pthread_spin_unlock(&qp->q_lock);
+		*bad_wr = wr;
+		return -EINVAL;
+	}
+
+	while (wr) {
+		int i;
+
+		if (qelr_chain_get_elem_left_u32(&qp->rq.chain) <
+		    QELR_MAX_RQ_WQE_SIZE || wr->num_sge > qp->rq.max_sges) {
+			DP_ERR(cxt->dbg_fp,
+			       "Can't post WR (%d < %d) || (%d > %d)\n",
+			       qelr_chain_get_elem_left_u32(&qp->rq.chain),
+			       QELR_MAX_RQ_WQE_SIZE, wr->num_sge,
+			       qp->rq.max_sges);
+			status = -ENOMEM;
+			*bad_wr = wr;
+			break;
+		}
+		FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+			      "RQ WR: SGEs: %d with wr_id[%d] = %lx\n",
+			      wr->num_sge, qp->rq.prod, wr->wr_id);
+		for (i = 0; i < wr->num_sge; i++) {
+			uint32_t flags = 0;
+			struct rdma_rq_sge *rqe;
+
+			/* first one must include the number of SGE in the
+			 * list
+			 */
+			if (!i)
+				SET_FIELD(flags, RDMA_RQ_SGE_NUM_SGES,
+					  wr->num_sge);
+
+			SET_FIELD(flags, RDMA_RQ_SGE_L_KEY,
+				  wr->sg_list[i].lkey);
+			rqe = qelr_chain_produce(&qp->rq.chain);
+			RQ_SGE_SET(rqe, wr->sg_list[i].addr,
+				   wr->sg_list[i].length, flags);
+			FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				      "[%d]: len %d key %x addr %x:%x\n", i,
+				      rqe->length, rqe->flags, rqe->addr.hi,
+				      rqe->addr.lo);
+		}
+		/* Special case of no sges. FW requires between 1-4 sges...
+		 * in this case we need to post 1 sge with length zero. this is
+		 * because rdma write with immediate consumes an RQ.
+		 */
+		if (!wr->num_sge) {
+			uint32_t flags = 0;
+			struct rdma_rq_sge *rqe;
+
+			/* first one must include the number of SGE in the
+			 * list
+			 */
+			SET_FIELD(flags, RDMA_RQ_SGE_L_KEY, 0);
+			SET_FIELD(flags, RDMA_RQ_SGE_NUM_SGES, 1);
+
+			rqe = qelr_chain_produce(&qp->rq.chain);
+			RQ_SGE_SET(rqe, 0, 0, flags);
+			i = 1;
+		}
+
+		qp->rqe_wr_id[qp->rq.prod].wr_id = wr->wr_id;
+		qp->rqe_wr_id[qp->rq.prod].wqe_size = i;
+
+		qelr_inc_sw_prod_u16(&qp->rq);
+
+		wmb();
+
+		db_val = le16toh(qp->rq.db_data.data.value) + 1;
+		qp->rq.db_data.data.value = htole16(db_val);
+
+		writel(qp->rq.db_data.raw, qp->rq.db);
+
+		wc_wmb();
+
+		wr = wr->next;
+	}
+
+	FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ, "POST: Elements in RespQ: %d\n",
+		      qelr_chain_get_elem_left_u32(&qp->rq.chain));
+	pthread_spin_unlock(&qp->q_lock);
+
+	return status;
+}
+
+static int is_valid_cqe(struct qelr_cq *cq, union rdma_cqe *cqe)
+{
+	struct rdma_cqe_requester *resp_cqe = &cqe->req;
+
+	return (resp_cqe->flags & RDMA_CQE_REQUESTER_TOGGLE_BIT_MASK) ==
+		cq->chain_toggle;
+}
+
+static enum rdma_cqe_type cqe_get_type(union rdma_cqe *cqe)
+{
+	struct rdma_cqe_requester *resp_cqe = &cqe->req;
+
+	return GET_FIELD(resp_cqe->flags, RDMA_CQE_REQUESTER_TYPE);
+}
+
+static struct qelr_qp *cqe_get_qp(union rdma_cqe *cqe)
+{
+	struct rdma_cqe_requester *resp_cqe = &cqe->req;
+	struct qelr_qp *qp;
+
+	qp = (struct qelr_qp *)HILO_U64(resp_cqe->qp_handle.hi,
+					resp_cqe->qp_handle.lo);
+	return qp;
+}
+
+static int process_req(struct qelr_qp *qp, struct qelr_cq *cq, int num_entries,
+		       struct ibv_wc *wc, uint16_t hw_cons,
+		       enum ibv_wc_status status, int force)
+{
+	struct qelr_devctx *cxt = get_qelr_ctx(qp->ibv_qp.context);
+	uint16_t cnt = 0;
+
+	while (num_entries && qp->sq.wqe_cons != hw_cons) {
+		if (!qp->wqe_wr_id[qp->sq.cons].signaled && !force) {
+			/* skip WC */
+			goto next_cqe;
+		}
+
+		/* fill WC */
+		wc->status = status;
+		wc->wc_flags = 0;
+		wc->qp_num = qp->qp_id;
+
+		/* common section */
+		wc->wr_id = qp->wqe_wr_id[qp->sq.cons].wr_id;
+		wc->opcode = qp->wqe_wr_id[qp->sq.cons].opcode;
+
+		switch (wc->opcode) {
+		case IBV_WC_RDMA_WRITE:
+			wc->byte_len = qp->wqe_wr_id[qp->sq.cons].bytes_len;
+			DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				   "POLL REQ CQ: IBV_WC_RDMA_WRITE byte_len=%d\n",
+				   qp->wqe_wr_id[qp->sq.cons].bytes_len);
+			break;
+		case IBV_WC_COMP_SWAP:
+		case IBV_WC_FETCH_ADD:
+			wc->byte_len = 8;
+			break;
+		case IBV_WC_RDMA_READ:
+		case IBV_WC_SEND:
+		case IBV_WC_BIND_MW:
+			DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				   "POLL REQ CQ: IBV_WC_RDMA_READ / IBV_WC_SEND\n");
+			break;
+		default:
+			break;
+		}
+
+		num_entries--;
+		wc++;
+		cnt++;
+next_cqe:
+		while (qp->wqe_wr_id[qp->sq.cons].wqe_size--)
+			qelr_chain_consume(&qp->sq.chain);
+		qelr_inc_sw_cons_u16(&qp->sq);
+	}
+
+	return cnt;
+}
+
+static int qelr_poll_cq_req(struct qelr_qp *qp, struct qelr_cq *cq,
+			    int num_entries, struct ibv_wc *wc,
+			    struct rdma_cqe_requester *req)
+{
+	struct qelr_devctx *cxt = get_qelr_ctx(qp->ibv_qp.context);
+	int cnt = 0;
+
+	switch (req->status) {
+	case RDMA_CQE_REQ_STS_OK:
+		cnt = process_req(qp, cq, num_entries, wc, req->sq_cons,
+				  IBV_WC_SUCCESS, 0);
+		break;
+	case RDMA_CQE_REQ_STS_WORK_REQUEST_FLUSHED_ERR:
+		DP_ERR(cxt->dbg_fp,
+		       "Error: POLL CQ with ROCE_CQE_REQ_STS_WORK_REQUEST_FLUSHED_ERR. QP icid=0x%x\n",
+		       qp->sq.icid);
+		cnt = process_req(qp, cq, num_entries, wc, req->sq_cons,
+				  IBV_WC_WR_FLUSH_ERR, 0);
+		break;
+	default: /* other errors case */
+		/* process all WQE before the consumer */
+		qp->state = QELR_QPS_ERR;
+		cnt = process_req(qp, cq, num_entries, wc, req->sq_cons - 1,
+				  IBV_WC_SUCCESS, 0);
+		wc += cnt;
+		/* if we have extra WC fill it with actual error info */
+		if (cnt < num_entries) {
+			enum ibv_wc_status wc_status;
+
+			switch (req->status) {
+			case	RDMA_CQE_REQ_STS_BAD_RESPONSE_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_BAD_RESPONSE_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_BAD_RESP_ERR;
+				break;
+			case	RDMA_CQE_REQ_STS_LOCAL_LENGTH_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_LOCAL_LENGTH_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_LOC_LEN_ERR;
+				break;
+			case    RDMA_CQE_REQ_STS_LOCAL_QP_OPERATION_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_LOCAL_QP_OPERATION_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_LOC_QP_OP_ERR;
+				break;
+			case    RDMA_CQE_REQ_STS_LOCAL_PROTECTION_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_LOCAL_PROTECTION_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_LOC_PROT_ERR;
+				break;
+			case    RDMA_CQE_REQ_STS_MEMORY_MGT_OPERATION_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_MEMORY_MGT_OPERATION_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_MW_BIND_ERR;
+				break;
+			case    RDMA_CQE_REQ_STS_REMOTE_INVALID_REQUEST_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_REMOTE_INVALID_REQUEST_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_REM_INV_REQ_ERR;
+				break;
+			case    RDMA_CQE_REQ_STS_REMOTE_ACCESS_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_REMOTE_ACCESS_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_REM_ACCESS_ERR;
+				break;
+			case    RDMA_CQE_REQ_STS_REMOTE_OPERATION_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_REMOTE_OPERATION_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_REM_OP_ERR;
+				break;
+			case    RDMA_CQE_REQ_STS_RNR_NAK_RETRY_CNT_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "Error: POLL CQ with RDMA_CQE_REQ_STS_RNR_NAK_RETRY_CNT_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_RNR_RETRY_EXC_ERR;
+				break;
+			case    RDMA_CQE_REQ_STS_TRANSPORT_RETRY_CNT_ERR:
+				DP_ERR(cxt->dbg_fp,
+				       "RDMA_CQE_REQ_STS_TRANSPORT_RETRY_CNT_ERR. QP icid=0x%x\n",
+				       qp->sq.icid);
+				wc_status = IBV_WC_RETRY_EXC_ERR;
+				break;
+			default:
+				DP_ERR(cxt->dbg_fp,
+				       "IBV_WC_GENERAL_ERR. QP icid=0x%x\n",
+					qp->sq.icid);
+				wc_status = IBV_WC_GENERAL_ERR;
+			}
+
+			cnt += process_req(qp, cq, 1, wc, req->sq_cons,
+					   wc_status, 1 /* force use of WC */);
+		}
+	}
+
+	return cnt;
+}
+
+static void __process_resp_one(struct qelr_qp *qp, struct qelr_cq *cq,
+			       struct ibv_wc *wc,
+			       struct rdma_cqe_responder *resp, uint64_t wr_id)
+{
+	struct qelr_devctx *cxt = get_qelr_ctx(qp->ibv_qp.context);
+	enum ibv_wc_status wc_status = IBV_WC_SUCCESS;
+	uint8_t flags;
+
+	wc->opcode = IBV_WC_RECV;
+
+	FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ, "\n");
+
+	switch (resp->status) {
+	case RDMA_CQE_RESP_STS_LOCAL_ACCESS_ERR:
+		wc_status = IBV_WC_LOC_ACCESS_ERR;
+		break;
+	case RDMA_CQE_RESP_STS_LOCAL_LENGTH_ERR:
+		wc_status = IBV_WC_LOC_LEN_ERR;
+		break;
+	case RDMA_CQE_RESP_STS_LOCAL_QP_OPERATION_ERR:
+		wc_status = IBV_WC_LOC_QP_OP_ERR;
+		break;
+	case RDMA_CQE_RESP_STS_LOCAL_PROTECTION_ERR:
+		wc_status = IBV_WC_LOC_PROT_ERR;
+		break;
+	case RDMA_CQE_RESP_STS_MEMORY_MGT_OPERATION_ERR:
+		wc_status = IBV_WC_MW_BIND_ERR;
+		break;
+	case RDMA_CQE_RESP_STS_REMOTE_INVALID_REQUEST_ERR:
+		wc_status = IBV_WC_REM_INV_RD_REQ_ERR;
+		break;
+	case RDMA_CQE_RESP_STS_OK:
+		wc_status = IBV_WC_SUCCESS;
+		wc->byte_len = le32toh(resp->length);
+
+		flags = resp->flags & QELR_RESP_RDMA_IMM;
+
+		switch (flags) {
+		case QELR_RESP_RDMA_IMM:
+			/* update opcode */
+			wc->opcode = IBV_WC_RECV_RDMA_WITH_IMM;
+			/* fall through to also set the immediate data */
+		case QELR_RESP_IMM:
+			wc->imm_data =
+				ntohl(le32toh(resp->imm_data_or_inv_r_Key));
+			wc->wc_flags |= IBV_WC_WITH_IMM;
+			FP_DP_VERBOSE(cxt->dbg_fp, QELR_MSG_CQ,
+				      "POLL CQ RQ2: RESP_RDMA_IMM imm_data = %x resp_len=%d\n",
+				      wc->imm_data, wc->byte_len);
+			break;
+		case QELR_RESP_RDMA:
+			DP_ERR(cxt->dbg_fp, "Invalid flags detected\n");
+			break;
+		default:
+			/* valid configuration, but nothing to do here */
+			break;
+		}
+
+		wc->wr_id = wr_id;
+		break;
+	default:
+		wc_status = IBV_WC_GENERAL_ERR;
+		DP_ERR(cxt->dbg_fp, "Invalid CQE status detected\n");
+	}
+
+	/* fill WC */
+	wc->status = wc_status;
+	wc->qp_num = qp->qp_id;
+}
+
+static int process_resp_one(struct qelr_qp *qp, struct qelr_cq *cq,
+			    struct ibv_wc *wc, struct rdma_cqe_responder *resp)
+{
+	uint64_t wr_id = qp->rqe_wr_id[qp->rq.cons].wr_id;
+
+	__process_resp_one(qp, cq, wc, resp, wr_id);
+
+	while (qp->rqe_wr_id[qp->rq.cons].wqe_size--)
+		qelr_chain_consume(&qp->rq.chain);
+
+	qelr_inc_sw_cons_u16(&qp->rq);
+
+	return 1;
+}
+
+static int process_resp_flush(struct qelr_qp *qp, struct qelr_cq *cq,
+			      int num_entries, struct ibv_wc *wc,
+			      uint16_t hw_cons)
+{
+	uint16_t cnt = 0;
+
+	while (num_entries && qp->rq.wqe_cons != hw_cons) {
+		/* fill WC */
+		wc->status = IBV_WC_WR_FLUSH_ERR;
+		wc->qp_num = qp->qp_id;
+		wc->byte_len = 0;
+		wc->wr_id = qp->rqe_wr_id[qp->rq.cons].wr_id;
+		num_entries--;
+		wc++;
+		cnt++;
+		while (qp->rqe_wr_id[qp->rq.cons].wqe_size--)
+			qelr_chain_consume(&qp->rq.chain);
+		qelr_inc_sw_cons_u16(&qp->rq);
+	}
+
+	return cnt;
+}
+
+/* return latest CQE (needs processing) */
+static union rdma_cqe *get_cqe(struct qelr_cq *cq)
+{
+	return cq->latest_cqe;
+}
+
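+/*
+ * A requester CQE can report completion of several SQ WQEs at once;
+ * consume it from the CQ chain only when the SW consumer has caught up
+ * with the consumer index in the CQE, so a partially processed CQE is
+ * revisited on the next poll.
+ */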
+static void try_consume_req_cqe(struct qelr_cq *cq, struct qelr_qp *qp,
+				struct rdma_cqe_requester *req, int *update)
+{
+	if (le16toh(req->sq_cons) == qp->sq.wqe_cons) {
+		consume_cqe(cq);
+		*update |= 1;
+	}
+}
+
+/* used with flush only, when resp->rq_cons is valid */
+static void try_consume_resp_cqe(struct qelr_cq *cq, struct qelr_qp *qp,
+				 struct rdma_cqe_responder *resp, int *update)
+{
+	if (le16toh(resp->rq_cons) == qp->rq.wqe_cons) {
+		consume_cqe(cq);
+		*update |= 1;
+	}
+}
+
+static int qelr_poll_cq_resp(struct qelr_qp *qp, struct qelr_cq *cq,
+			     int num_entries, struct ibv_wc *wc,
+			     struct rdma_cqe_responder *resp, int *update)
+{
+	int cnt;
+
+	if (resp->status == RDMA_CQE_RESP_STS_WORK_REQUEST_FLUSHED_ERR) {
+		cnt = process_resp_flush(qp, cq, num_entries, wc,
+					 resp->rq_cons);
+		try_consume_resp_cqe(cq, qp, resp, update);
+	} else {
+		cnt = process_resp_one(qp, cq, wc, resp);
+		consume_cqe(cq);
+		*update |= 1;
+	}
+
+	return cnt;
+}
+
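+/*
+ * Ring the CQ doorbell: wmb() makes the CQE processing above globally
+ * visible before the doorbell data is updated, and wc_wmb() flushes the
+ * write-combining buffer so the doorbell write reaches the HW.
+ */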
+static void doorbell_cq(struct qelr_cq *cq, uint32_t cons, uint8_t flags)
+{
+	wmb();
+	cq->db.data.agg_flags = flags;
+	cq->db.data.value = htole32(cons);
+
+	writeq(cq->db.raw, cq->db_addr);
+	wc_wmb();
+}
+
+int qelr_poll_cq(struct ibv_cq *ibcq, int num_entries, struct ibv_wc *wc)
+{
+	struct qelr_cq *cq = get_qelr_cq(ibcq);
+	int done = 0;
+	union rdma_cqe *cqe = get_cqe(cq);
+	int update = 0;
+	uint32_t db_cons;
+
+	while (num_entries && is_valid_cqe(cq, cqe)) {
+		int cnt = 0;
+		struct qelr_qp *qp;
+
+		/* prevent speculative reads of any field of CQE */
+		rmb();
+
+		qp = cqe_get_qp(cqe);
+		if (!qp) {
+			DP_ERR(stderr,
+			       "Error: CQE QP pointer is NULL. CQE=%p\n", cqe);
+			break;
+		}
+
+		switch (cqe_get_type(cqe)) {
+		case RDMA_CQE_TYPE_REQUESTER:
+			cnt = qelr_poll_cq_req(qp, cq, num_entries, wc,
+					       &cqe->req);
+			try_consume_req_cqe(cq, qp, &cqe->req, &update);
+			break;
+		case RDMA_CQE_TYPE_RESPONDER_RQ:
+			cnt = qelr_poll_cq_resp(qp, cq, num_entries, wc,
+						&cqe->resp, &update);
+			break;
+		case RDMA_CQE_TYPE_INVALID:
+		default:
+			fprintf(stderr, "Error: invalid CQE type = %d\n",
+				cqe_get_type(cqe));
+		}
+		num_entries -= cnt;
+		wc += cnt;
+		done += cnt;
+
+		cqe = get_cqe(cq);
+	}
+
+	db_cons = qelr_chain_get_cons_idx_u32(&cq->chain) - 1;
+	if (update) {
+		/* the doorbell notifies about the latest VALID entry,
+		 * but the chain already points to the next INVALID one
+		 */
+		doorbell_cq(cq, db_cons, cq->arm_flags);
+		FP_DP_VERBOSE(stderr, QELR_MSG_CQ, "doorbell_cq cons=%x\n",
+			      db_cons);
+	}
+
+	return done;
+}
+
+void qelr_cq_event(struct ibv_cq *ibcq)
+{
+	/* Trigger received, can reset arm flags */
+	struct qelr_cq *cq = get_qelr_cq(ibcq);
+
+	cq->arm_flags = 0;
+}
+
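+/*
+ * Arm the CQ for the next completion event: with 'solicited' set only
+ * solicited completions raise an event (the ARM_SE command), otherwise
+ * any completion does. qelr_cq_event() clears the flags once the event
+ * fires.
+ */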
+int qelr_arm_cq(struct ibv_cq *ibcq, int solicited)
+{
+	struct qelr_cq *cq = get_qelr_cq(ibcq);
+	uint32_t db_cons;
+
+	db_cons = qelr_chain_get_cons_idx_u32(&cq->chain) - 1;
+	FP_DP_VERBOSE(get_qelr_ctx(ibcq->context)->dbg_fp, QELR_MSG_CQ,
+		      "Arm CQ cons=%x solicited=%d\n", db_cons, solicited);
+
+	cq->arm_flags = solicited ? DQ_UCM_ROCE_CQ_ARM_SE_CF_CMD :
+				    DQ_UCM_ROCE_CQ_ARM_CF_CMD;
+
+	doorbell_cq(cq, db_cons, cq->arm_flags);
+
+	return 0;
+}
+
+void qelr_async_event(struct ibv_async_event *event)
+{
+	struct qelr_cq *cq = NULL;
+	struct qelr_qp *qp = NULL;
+
+	switch (event->event_type) {
+	case IBV_EVENT_CQ_ERR:
+		cq = get_qelr_cq(event->element.cq);
+		break;
+	case IBV_EVENT_QP_FATAL:
+	case IBV_EVENT_QP_REQ_ERR:
+	case IBV_EVENT_QP_ACCESS_ERR:
+	case IBV_EVENT_PATH_MIG_ERR:
+		qp = get_qelr_qp(event->element.qp);
+		break;
+	case IBV_EVENT_SQ_DRAINED:
+	case IBV_EVENT_PATH_MIG:
+	case IBV_EVENT_COMM_EST:
+	case IBV_EVENT_QP_LAST_WQE_REACHED:
+		break;
+	case IBV_EVENT_PORT_ACTIVE:
+	case IBV_EVENT_PORT_ERR:
+		break;
+	default:
+		break;
+	}
+
+	fprintf(stderr, "qelr_async_event not implemented yet cq=%p qp=%p\n",
+		cq, qp);
+}
diff --git a/providers/qedr/qelr_verbs.h b/providers/qedr/qelr_verbs.h
new file mode 100644
index 0000000..f10b76b
--- /dev/null
+++ b/providers/qedr/qelr_verbs.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __QELR_VERBS_H__
+#define __QELR_VERBS_H__
+
+#include <inttypes.h>
+#include <stddef.h>
+#include <endian.h>
+
+#include <infiniband/driver.h>
+#include <infiniband/arch.h>
+
+struct ibv_device *qelr_driver_init(const char *, int);
+
+int qelr_query_device(struct ibv_context *, struct ibv_device_attr *);
+int qelr_query_port(struct ibv_context *, uint8_t, struct ibv_port_attr *);
+
+struct ibv_pd *qelr_alloc_pd(struct ibv_context *);
+int qelr_dealloc_pd(struct ibv_pd *);
+
+struct ibv_mr *qelr_reg_mr(struct ibv_pd *, void *, size_t,
+			   int ibv_access_flags);
+int qelr_dereg_mr(struct ibv_mr *);
+
+struct ibv_cq *qelr_create_cq(struct ibv_context *, int,
+			      struct ibv_comp_channel *, int);
+int qelr_destroy_cq(struct ibv_cq *);
+int qelr_poll_cq(struct ibv_cq *, int, struct ibv_wc *);
+void qelr_cq_event(struct ibv_cq *);
+int qelr_arm_cq(struct ibv_cq *, int);
+
+int qelr_query_srq(struct ibv_srq *ibv_srq, struct ibv_srq_attr *attr);
+int qelr_modify_srq(struct ibv_srq *ibv_srq, struct ibv_srq_attr *attr,
+		    int attr_mask);
+struct ibv_srq *qelr_create_srq(struct ibv_pd *, struct ibv_srq_init_attr *);
+int qelr_destroy_srq(struct ibv_srq *ibv_srq);
+int qelr_post_srq_recv(struct ibv_srq *, struct ibv_recv_wr *,
+		       struct ibv_recv_wr **bad_wr);
+
+struct ibv_qp *qelr_create_qp(struct ibv_pd *, struct ibv_qp_init_attr *);
+int qelr_modify_qp(struct ibv_qp *, struct ibv_qp_attr *,
+		   int ibv_qp_attr_mask);
+int qelr_query_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr, int attr_mask,
+		  struct ibv_qp_init_attr *init_attr);
+int qelr_destroy_qp(struct ibv_qp *);
+
+int qelr_post_send(struct ibv_qp *, struct ibv_send_wr *,
+		   struct ibv_send_wr **);
+int qelr_post_recv(struct ibv_qp *, struct ibv_recv_wr *,
+		   struct ibv_recv_wr **);
+
+void qelr_async_event(struct ibv_async_event *event);
+#endif /* __QELR_VERBS_H__ */
-- 
2.7.4


* [PATCH rdma-core 3/6] libqedr: HSI
       [not found] ` <1476956952-17388-1-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
  2016-10-20  9:49   ` [PATCH rdma-core 1/6] libqedr: chains Ram Amrani
  2016-10-20  9:49   ` [PATCH rdma-core 2/6] libqedr: verbs Ram Amrani
@ 2016-10-20  9:49   ` Ram Amrani
  2016-10-20  9:49   ` [PATCH rdma-core 4/6] libqedr: main Ram Amrani
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Ram Amrani @ 2016-10-20  9:49 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA,
	Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA, Ram Amrani

From: Ram Amrani <Ram.Amrani-74tsMCuadCbQT0dZR+AlfA@public.gmane.org>

Introduce the HSI (hardware-software interface) headers that allow
interfacing directly with the NIC.
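
The field layout convention used throughout these headers is a value
accompanied by <FIELD>_MASK / <FIELD>_SHIFT define pairs. A minimal
sketch of the accessor pattern the library relies on (the exact
GET_FIELD/SET_FIELD definitions below are illustrative assumptions;
qelr_verbs.c depends only on their semantics):

	/* extract a field named by its MASK/SHIFT define pair */
	#define GET_FIELD(value, name) \
		(((value) >> name##_SHIFT) & name##_MASK)

	/* clear a field, then insert a new value into it */
	#define SET_FIELD(value, name, field)				   \
	do {								   \
		(value) &= ~((name##_MASK) << (name##_SHIFT));		   \
		(value) |= (((field) & (name##_MASK)) << (name##_SHIFT)); \
	} while (0)

	/* e.g. the CQE type is read as
	 * GET_FIELD(resp_cqe->flags, RDMA_CQE_REQUESTER_TYPE)
	 */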

Signed-off-by: Ram Amrani <Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
---
 providers/qedr/common_hsi.h    | 1502 ++++++++++++++++++++++++++++++++++++++++
 providers/qedr/qelr_hsi.h      |   67 ++
 providers/qedr/qelr_hsi_rdma.h |  914 ++++++++++++++++++++++++
 providers/qedr/rdma_common.h   |   74 ++
 providers/qedr/roce_common.h   |   50 ++
 5 files changed, 2607 insertions(+)
 create mode 100644 providers/qedr/common_hsi.h
 create mode 100644 providers/qedr/qelr_hsi.h
 create mode 100644 providers/qedr/qelr_hsi_rdma.h
 create mode 100644 providers/qedr/rdma_common.h
 create mode 100644 providers/qedr/roce_common.h

diff --git a/providers/qedr/common_hsi.h b/providers/qedr/common_hsi.h
new file mode 100644
index 0000000..8027866
--- /dev/null
+++ b/providers/qedr/common_hsi.h
@@ -0,0 +1,1502 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __COMMON_HSI__
+#define __COMMON_HSI__
+/********************************/
+/* PROTOCOL COMMON FW CONSTANTS */
+/********************************/
+
+/* Temporarily here; should be added to the HSI automatically by the resource allocation tool. */
+#define T_TEST_AGG_INT_TEMP    6
+#define	M_TEST_AGG_INT_TEMP    8
+#define	U_TEST_AGG_INT_TEMP    6
+#define	X_TEST_AGG_INT_TEMP    14
+#define	Y_TEST_AGG_INT_TEMP    4
+#define	P_TEST_AGG_INT_TEMP    4
+
+#define X_FINAL_CLEANUP_AGG_INT  1
+
+#define EVENT_RING_PAGE_SIZE_BYTES          4096
+
+#define NUM_OF_GLOBAL_QUEUES				128
+#define COMMON_QUEUE_ENTRY_MAX_BYTE_SIZE	64
+
+#define ISCSI_CDU_TASK_SEG_TYPE       0
+#define FCOE_CDU_TASK_SEG_TYPE        0
+#define RDMA_CDU_TASK_SEG_TYPE        1
+
+#define FW_ASSERT_GENERAL_ATTN_IDX    32
+
+#define MAX_PINNED_CCFC			32
+
+#define EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE	3
+
+/* Queue Zone sizes in bytes */
+#define TSTORM_QZONE_SIZE    8	 /*tstorm_scsi_queue_zone*/
+#define MSTORM_QZONE_SIZE    16  /*mstorm_eth_queue_zone. Used only for RX producer of VFs in backward compatibility mode.*/
+#define USTORM_QZONE_SIZE    8	 /*ustorm_eth_queue_zone*/
+#define XSTORM_QZONE_SIZE    8	 /*xstorm_eth_queue_zone*/
+#define YSTORM_QZONE_SIZE    0
+#define PSTORM_QZONE_SIZE    0
+
+#define MSTORM_VF_ZONE_DEFAULT_SIZE_LOG       7     /*Log of mstorm default VF zone size.*/
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_DEFAULT  16    /*Maximum number of RX queues that can be allocated to VF by default*/
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_DOUBLE   48    /*Maximum number of RX queues that can be allocated to VF with doubled VF zone size. Up to 96 VF supported in this mode*/
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD     112   /*Maximum number of RX queues that can be allocated to VF with 4 VF zone size. Up to 48 VF supported in this mode*/
+
+
+/********************************/
+/* CORE (LIGHT L2) FW CONSTANTS */
+/********************************/
+
+#define CORE_LL2_MAX_RAMROD_PER_CON				8
+#define CORE_LL2_TX_BD_PAGE_SIZE_BYTES			4096
+#define CORE_LL2_RX_BD_PAGE_SIZE_BYTES			4096
+#define CORE_LL2_RX_CQE_PAGE_SIZE_BYTES			4096
+#define CORE_LL2_RX_NUM_NEXT_PAGE_BDS			1
+
+#define CORE_LL2_TX_MAX_BDS_PER_PACKET				12
+
+#define CORE_SPQE_PAGE_SIZE_BYTES			4096
+
+#define MAX_NUM_LL2_RX_QUEUES					32
+#define MAX_NUM_LL2_TX_STATS_COUNTERS			32
+
+
+///////////////////////////////////////////////////////////////////////////////////////////////////
+// Include the firmware version number only - do not add constants here to avoid redundant compilations
+///////////////////////////////////////////////////////////////////////////////////////////////////
+
+
+#define FW_MAJOR_VERSION		8
+#define FW_MINOR_VERSION		10
+#define FW_REVISION_VERSION		9
+#define FW_ENGINEERING_VERSION	0
+
+/***********************/
+/* COMMON HW CONSTANTS */
+/***********************/
+
+/* PCI functions */
+#define MAX_NUM_PORTS_K2		(4)
+#define MAX_NUM_PORTS_BB		(2)
+#define MAX_NUM_PORTS			(MAX_NUM_PORTS_K2)
+
+#define MAX_NUM_PFS_K2			(16)
+#define MAX_NUM_PFS_BB			(8)
+#define MAX_NUM_PFS				(MAX_NUM_PFS_K2)
+#define MAX_NUM_OF_PFS_IN_CHIP	(16) /* On both engines */
+
+#define MAX_NUM_VFS_K2			(192)
+#define MAX_NUM_VFS_BB			(120)
+#define MAX_NUM_VFS				(MAX_NUM_VFS_K2)
+
+#define MAX_NUM_FUNCTIONS_BB	(MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
+#define MAX_NUM_FUNCTIONS_K2	(MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
+#define MAX_NUM_FUNCTIONS		(MAX_NUM_PFS + MAX_NUM_VFS)
+
+/* in both BB and K2, the VF number starts from 16. so for arrays containing all */
+/* possible PFs and VFs - we need a constant for this size */
+#define MAX_FUNCTION_NUMBER_BB	(MAX_NUM_PFS + MAX_NUM_VFS_BB)
+#define MAX_FUNCTION_NUMBER_K2	(MAX_NUM_PFS + MAX_NUM_VFS_K2)
+#define MAX_FUNCTION_NUMBER		(MAX_NUM_PFS + MAX_NUM_VFS)
+
+#define MAX_NUM_VPORTS_K2		(208)
+#define MAX_NUM_VPORTS_BB		(160)
+#define MAX_NUM_VPORTS			(MAX_NUM_VPORTS_K2)
+
+#define MAX_NUM_L2_QUEUES_K2	(320)
+#define MAX_NUM_L2_QUEUES_BB	(256)
+#define MAX_NUM_L2_QUEUES		(MAX_NUM_L2_QUEUES_K2)
+
+/* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
+// 4-Port K2.
+#define NUM_PHYS_TCS_4PORT_K2	(4)
+#define NUM_OF_PHYS_TCS			(8)
+
+#define NUM_TCS_4PORT_K2		(NUM_PHYS_TCS_4PORT_K2 + 1)
+#define NUM_OF_TCS				(NUM_OF_PHYS_TCS + 1)
+
+#define LB_TC					(NUM_OF_PHYS_TCS)
+
+/* Num of possible traffic priority values */
+#define NUM_OF_PRIO				(8)
+
+#define MAX_NUM_VOQS_K2			(NUM_TCS_4PORT_K2 * MAX_NUM_PORTS_K2)
+#define MAX_NUM_VOQS_BB         (NUM_OF_TCS * MAX_NUM_PORTS_BB)
+#define MAX_NUM_VOQS			(MAX_NUM_VOQS_K2)
+#define MAX_PHYS_VOQS			(NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
+
+/* CIDs */
+#define NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_LCIDS			(320)
+#define NUM_OF_LTIDS			(320)
+
+/* Clock values */
+#define MASTER_CLK_FREQ_E4		(375e6)
+#define STORM_CLK_FREQ_E4		(1000e6)
+#define CLK25M_CLK_FREQ_E4		(25e6)
+
+/* Global PXP windows (GTT) */
+#define NUM_OF_GTT			19
+#define GTT_DWORD_SIZE_BITS	10
+#define GTT_BYTE_SIZE_BITS	(GTT_DWORD_SIZE_BITS + 2)
+#define GTT_DWORD_SIZE		(1 << GTT_DWORD_SIZE_BITS)
+
+/* Tools Version */
+#define TOOLS_VERSION 10
+/*****************/
+/* CDU CONSTANTS */
+/*****************/
+
+#define CDU_SEG_TYPE_OFFSET_REG_TYPE_SHIFT		(17)
+#define CDU_SEG_TYPE_OFFSET_REG_OFFSET_MASK		(0x1ffff)
+
+#define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT	(12)
+#define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK	(0xfff)
+
+
+/*****************/
+/* DQ CONSTANTS  */
+/*****************/
+
+/* DEMS */
+#define	DQ_DEMS_LEGACY						0
+#define DQ_DEMS_TOE_MORE_TO_SEND			3
+#define DQ_DEMS_TOE_LOCAL_ADV_WND			4
+#define DQ_DEMS_ROCE_CQ_CONS				7
+
+/* XCM agg val selection (HW) */
+#define DQ_XCM_AGG_VAL_SEL_WORD2  0
+#define DQ_XCM_AGG_VAL_SEL_WORD3  1
+#define DQ_XCM_AGG_VAL_SEL_WORD4  2
+#define DQ_XCM_AGG_VAL_SEL_WORD5  3
+#define DQ_XCM_AGG_VAL_SEL_REG3   4
+#define DQ_XCM_AGG_VAL_SEL_REG4   5
+#define DQ_XCM_AGG_VAL_SEL_REG5   6
+#define DQ_XCM_AGG_VAL_SEL_REG6   7
+
+/* XCM agg val selection (FW) */
+#define DQ_XCM_CORE_TX_BD_CONS_CMD          DQ_XCM_AGG_VAL_SEL_WORD3
+#define DQ_XCM_CORE_TX_BD_PROD_CMD          DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_CORE_SPQ_PROD_CMD            DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_ETH_EDPM_NUM_BDS_CMD         DQ_XCM_AGG_VAL_SEL_WORD2
+#define DQ_XCM_ETH_TX_BD_CONS_CMD           DQ_XCM_AGG_VAL_SEL_WORD3
+#define DQ_XCM_ETH_TX_BD_PROD_CMD           DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_ETH_GO_TO_BD_CONS_CMD        DQ_XCM_AGG_VAL_SEL_WORD5
+#define DQ_XCM_FCOE_SQ_CONS_CMD             DQ_XCM_AGG_VAL_SEL_WORD3
+#define DQ_XCM_FCOE_SQ_PROD_CMD             DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_FCOE_X_FERQ_PROD_CMD         DQ_XCM_AGG_VAL_SEL_WORD5
+#define DQ_XCM_ISCSI_SQ_CONS_CMD            DQ_XCM_AGG_VAL_SEL_WORD3
+#define DQ_XCM_ISCSI_SQ_PROD_CMD            DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_ISCSI_MORE_TO_SEND_SEQ_CMD   DQ_XCM_AGG_VAL_SEL_REG3
+#define DQ_XCM_ISCSI_EXP_STAT_SN_CMD        DQ_XCM_AGG_VAL_SEL_REG6
+#define DQ_XCM_ROCE_SQ_PROD_CMD             DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_TOE_TX_BD_PROD_CMD           DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_TOE_MORE_TO_SEND_SEQ_CMD     DQ_XCM_AGG_VAL_SEL_REG3
+#define DQ_XCM_TOE_LOCAL_ADV_WND_SEQ_CMD    DQ_XCM_AGG_VAL_SEL_REG4
+
+/* UCM agg val selection (HW) */
+#define DQ_UCM_AGG_VAL_SEL_WORD0  0
+#define DQ_UCM_AGG_VAL_SEL_WORD1  1
+#define DQ_UCM_AGG_VAL_SEL_WORD2  2
+#define DQ_UCM_AGG_VAL_SEL_WORD3  3
+#define DQ_UCM_AGG_VAL_SEL_REG0   4
+#define DQ_UCM_AGG_VAL_SEL_REG1   5
+#define DQ_UCM_AGG_VAL_SEL_REG2   6
+#define DQ_UCM_AGG_VAL_SEL_REG3   7
+
+/* UCM agg val selection (FW) */
+#define DQ_UCM_ETH_PMD_TX_CONS_CMD			DQ_UCM_AGG_VAL_SEL_WORD2
+#define DQ_UCM_ETH_PMD_RX_CONS_CMD			DQ_UCM_AGG_VAL_SEL_WORD3
+#define DQ_UCM_ROCE_CQ_CONS_CMD				DQ_UCM_AGG_VAL_SEL_REG0
+#define DQ_UCM_ROCE_CQ_PROD_CMD				DQ_UCM_AGG_VAL_SEL_REG2
+
+/* TCM agg val selection (HW) */
+#define DQ_TCM_AGG_VAL_SEL_WORD0  0
+#define DQ_TCM_AGG_VAL_SEL_WORD1  1
+#define DQ_TCM_AGG_VAL_SEL_WORD2  2
+#define DQ_TCM_AGG_VAL_SEL_WORD3  3
+#define DQ_TCM_AGG_VAL_SEL_REG1   4
+#define DQ_TCM_AGG_VAL_SEL_REG2   5
+#define DQ_TCM_AGG_VAL_SEL_REG6   6
+#define DQ_TCM_AGG_VAL_SEL_REG9   7
+
+/* TCM agg val selection (FW) */
+#define DQ_TCM_L2B_BD_PROD_CMD				DQ_TCM_AGG_VAL_SEL_WORD1
+#define DQ_TCM_ROCE_RQ_PROD_CMD				DQ_TCM_AGG_VAL_SEL_WORD0
+
+/* XCM agg counter flag selection (HW) */
+#define DQ_XCM_AGG_FLG_SHIFT_BIT14  0
+#define DQ_XCM_AGG_FLG_SHIFT_BIT15  1
+#define DQ_XCM_AGG_FLG_SHIFT_CF12   2
+#define DQ_XCM_AGG_FLG_SHIFT_CF13   3
+#define DQ_XCM_AGG_FLG_SHIFT_CF18   4
+#define DQ_XCM_AGG_FLG_SHIFT_CF19   5
+#define DQ_XCM_AGG_FLG_SHIFT_CF22   6
+#define DQ_XCM_AGG_FLG_SHIFT_CF23   7
+
+/* XCM agg counter flag selection (FW) */
+#define DQ_XCM_CORE_DQ_CF_CMD               (1 << DQ_XCM_AGG_FLG_SHIFT_CF18)
+#define DQ_XCM_CORE_TERMINATE_CMD           (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
+#define DQ_XCM_CORE_SLOW_PATH_CMD           (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+#define DQ_XCM_ETH_DQ_CF_CMD                (1 << DQ_XCM_AGG_FLG_SHIFT_CF18)
+#define DQ_XCM_ETH_TERMINATE_CMD            (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
+#define DQ_XCM_ETH_SLOW_PATH_CMD            (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+#define DQ_XCM_ETH_TPH_EN_CMD               (1 << DQ_XCM_AGG_FLG_SHIFT_CF23)
+#define DQ_XCM_FCOE_SLOW_PATH_CMD           (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+#define DQ_XCM_ISCSI_DQ_FLUSH_CMD           (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
+#define DQ_XCM_ISCSI_SLOW_PATH_CMD          (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+#define DQ_XCM_ISCSI_PROC_ONLY_CLEANUP_CMD  (1 << DQ_XCM_AGG_FLG_SHIFT_CF23)
+#define DQ_XCM_TOE_DQ_FLUSH_CMD             (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
+#define DQ_XCM_TOE_SLOW_PATH_CMD            (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+
+/* UCM agg counter flag selection (HW) */
+#define DQ_UCM_AGG_FLG_SHIFT_CF0       0
+#define DQ_UCM_AGG_FLG_SHIFT_CF1       1
+#define DQ_UCM_AGG_FLG_SHIFT_CF3       2
+#define DQ_UCM_AGG_FLG_SHIFT_CF4       3
+#define DQ_UCM_AGG_FLG_SHIFT_CF5       4
+#define DQ_UCM_AGG_FLG_SHIFT_CF6       5
+#define DQ_UCM_AGG_FLG_SHIFT_RULE0EN   6
+#define DQ_UCM_AGG_FLG_SHIFT_RULE1EN   7
+
+/* UCM agg counter flag selection (FW) */
+#define DQ_UCM_ETH_PMD_TX_ARM_CMD           (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
+#define DQ_UCM_ETH_PMD_RX_ARM_CMD           (1 << DQ_UCM_AGG_FLG_SHIFT_CF5)
+#define DQ_UCM_ROCE_CQ_ARM_SE_CF_CMD        (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
+#define DQ_UCM_ROCE_CQ_ARM_CF_CMD           (1 << DQ_UCM_AGG_FLG_SHIFT_CF5)
+#define DQ_UCM_TOE_TIMER_STOP_ALL_CMD       (1 << DQ_UCM_AGG_FLG_SHIFT_CF3)
+#define DQ_UCM_TOE_SLOW_PATH_CF_CMD         (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
+#define DQ_UCM_TOE_DQ_CF_CMD                (1 << DQ_UCM_AGG_FLG_SHIFT_CF5)
+
+/* TCM agg counter flag selection (HW) */
+#define DQ_TCM_AGG_FLG_SHIFT_CF0  0
+#define DQ_TCM_AGG_FLG_SHIFT_CF1  1
+#define DQ_TCM_AGG_FLG_SHIFT_CF2  2
+#define DQ_TCM_AGG_FLG_SHIFT_CF3  3
+#define DQ_TCM_AGG_FLG_SHIFT_CF4  4
+#define DQ_TCM_AGG_FLG_SHIFT_CF5  5
+#define DQ_TCM_AGG_FLG_SHIFT_CF6  6
+#define DQ_TCM_AGG_FLG_SHIFT_CF7  7
+
+/* TCM agg counter flag selection (FW) */
+#define DQ_TCM_FCOE_FLUSH_Q0_CMD            (1 << DQ_TCM_AGG_FLG_SHIFT_CF1)
+#define DQ_TCM_FCOE_DUMMY_TIMER_CMD         (1 << DQ_TCM_AGG_FLG_SHIFT_CF2)
+#define DQ_TCM_FCOE_TIMER_STOP_ALL_CMD      (1 << DQ_TCM_AGG_FLG_SHIFT_CF3)
+#define DQ_TCM_ISCSI_FLUSH_Q0_CMD           (1 << DQ_TCM_AGG_FLG_SHIFT_CF1)
+#define DQ_TCM_ISCSI_TIMER_STOP_ALL_CMD     (1 << DQ_TCM_AGG_FLG_SHIFT_CF3)
+#define DQ_TCM_TOE_FLUSH_Q0_CMD             (1 << DQ_TCM_AGG_FLG_SHIFT_CF1)
+#define DQ_TCM_TOE_TIMER_STOP_ALL_CMD       (1 << DQ_TCM_AGG_FLG_SHIFT_CF3)
+#define DQ_TCM_IWARP_POST_RQ_CF_CMD         (1 << DQ_TCM_AGG_FLG_SHIFT_CF1)
+
+/* PWM address mapping */
+#define DQ_PWM_OFFSET_DPM_BASE				0x0
+#define DQ_PWM_OFFSET_DPM_END				0x27
+#define DQ_PWM_OFFSET_XCM16_BASE			0x40
+#define DQ_PWM_OFFSET_XCM32_BASE			0x44
+#define DQ_PWM_OFFSET_UCM16_BASE			0x48
+#define DQ_PWM_OFFSET_UCM32_BASE			0x4C
+#define DQ_PWM_OFFSET_UCM16_4				0x50
+#define DQ_PWM_OFFSET_TCM16_BASE			0x58
+#define DQ_PWM_OFFSET_TCM32_BASE			0x5C
+#define DQ_PWM_OFFSET_XCM_FLAGS				0x68
+#define DQ_PWM_OFFSET_UCM_FLAGS				0x69
+#define DQ_PWM_OFFSET_TCM_FLAGS				0x6B
+
+#define DQ_PWM_OFFSET_XCM_RDMA_SQ_PROD			(DQ_PWM_OFFSET_XCM16_BASE + 2)
+#define DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_32BIT	(DQ_PWM_OFFSET_UCM32_BASE)
+#define DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_16BIT	(DQ_PWM_OFFSET_UCM16_4)
+#define DQ_PWM_OFFSET_UCM_RDMA_INT_TIMEOUT		(DQ_PWM_OFFSET_UCM16_BASE + 2)
+#define DQ_PWM_OFFSET_UCM_RDMA_ARM_FLAGS		(DQ_PWM_OFFSET_UCM_FLAGS)
+#define DQ_PWM_OFFSET_TCM_ROCE_RQ_PROD			(DQ_PWM_OFFSET_TCM16_BASE + 1)
+#define DQ_PWM_OFFSET_TCM_IWARP_RQ_PROD			(DQ_PWM_OFFSET_TCM16_BASE + 3)
+
+#define DQ_REGION_SHIFT				        (12)
+
+/* DPM */
+#define	DQ_DPM_WQE_BUFF_SIZE			    (320)
+
+// Conn type ranges
+#define DQ_CONN_TYPE_RANGE_SHIFT			(4)
+
+/*****************/
+/* QM CONSTANTS  */
+/*****************/
+
+/* number of TX queues in the QM */
+#define MAX_QM_TX_QUEUES_K2			512
+#define MAX_QM_TX_QUEUES_BB			448
+#define MAX_QM_TX_QUEUES			MAX_QM_TX_QUEUES_K2
+
+/* number of Other queues in the QM */
+#define MAX_QM_OTHER_QUEUES_BB		64
+#define MAX_QM_OTHER_QUEUES_K2		128
+#define MAX_QM_OTHER_QUEUES			MAX_QM_OTHER_QUEUES_K2
+
+/* number of queues in a PF queue group */
+#define QM_PF_QUEUE_GROUP_SIZE		8
+
+/* the size of a single queue element in bytes */
+#define QM_PQ_ELEMENT_SIZE			4
+
+/* base number of Tx PQs in the CM PQ representation.
+   should be used when storing PQ IDs in CM PQ registers and context */
+#define CM_TX_PQ_BASE               0x200
+
+/* number of global Vport/QCN rate limiters */
+#define MAX_QM_GLOBAL_RLS			256
+
+/* QM registers data */
+#define QM_LINE_CRD_REG_WIDTH		16
+#define QM_LINE_CRD_REG_SIGN_BIT	(1 << (QM_LINE_CRD_REG_WIDTH - 1))
+#define QM_BYTE_CRD_REG_WIDTH		24
+#define QM_BYTE_CRD_REG_SIGN_BIT	(1 << (QM_BYTE_CRD_REG_WIDTH - 1))
+#define QM_WFQ_CRD_REG_WIDTH		32
+#define QM_WFQ_CRD_REG_SIGN_BIT		(1 << (QM_WFQ_CRD_REG_WIDTH - 1))
+#define QM_RL_CRD_REG_WIDTH			32
+#define QM_RL_CRD_REG_SIGN_BIT		(1 << (QM_RL_CRD_REG_WIDTH - 1))
+
+/*****************/
+/* CAU CONSTANTS */
+/*****************/
+
+#define CAU_FSM_ETH_RX  0
+#define CAU_FSM_ETH_TX  1
+
+/* Number of Protocol Indices per Status Block */
+#define PIS_PER_SB    12
+
+
+#define CAU_HC_STOPPED_STATE		3			/* fsm is stopped or not valid for this sb */
+#define CAU_HC_DISABLE_STATE		4			/* fsm is working without interrupt coalescing for this sb*/
+#define CAU_HC_ENABLE_STATE			0			/* fsm is working with interrupt coalescing for this sb*/
+
+
+/*****************/
+/* IGU CONSTANTS */
+/*****************/
+
+#define MAX_SB_PER_PATH_K2					(368)
+#define MAX_SB_PER_PATH_BB					(288)
+#define MAX_TOT_SB_PER_PATH					MAX_SB_PER_PATH_K2
+
+#define MAX_SB_PER_PF_MIMD					129
+#define MAX_SB_PER_PF_SIMD					64
+#define MAX_SB_PER_VF						64
+
+/* Memory addresses on the BAR for the IGU Sub Block */
+#define IGU_MEM_BASE						0x0000
+
+#define IGU_MEM_MSIX_BASE					0x0000
+#define IGU_MEM_MSIX_UPPER					0x0101
+#define IGU_MEM_MSIX_RESERVED_UPPER			0x01ff
+
+#define IGU_MEM_PBA_MSIX_BASE				0x0200
+#define IGU_MEM_PBA_MSIX_UPPER				0x0202
+#define IGU_MEM_PBA_MSIX_RESERVED_UPPER		0x03ff
+
+#define IGU_CMD_INT_ACK_BASE				0x0400
+#define IGU_CMD_INT_ACK_UPPER				(IGU_CMD_INT_ACK_BASE + MAX_TOT_SB_PER_PATH - 1)
+#define IGU_CMD_INT_ACK_RESERVED_UPPER		0x05ff
+
+#define IGU_CMD_ATTN_BIT_UPD_UPPER			0x05f0
+#define IGU_CMD_ATTN_BIT_SET_UPPER			0x05f1
+#define IGU_CMD_ATTN_BIT_CLR_UPPER			0x05f2
+
+#define IGU_REG_SISR_MDPC_WMASK_UPPER		0x05f3
+#define IGU_REG_SISR_MDPC_WMASK_LSB_UPPER	0x05f4
+#define IGU_REG_SISR_MDPC_WMASK_MSB_UPPER	0x05f5
+#define IGU_REG_SISR_MDPC_WOMASK_UPPER		0x05f6
+
+#define IGU_CMD_PROD_UPD_BASE				0x0600
+#define IGU_CMD_PROD_UPD_UPPER				(IGU_CMD_PROD_UPD_BASE + MAX_TOT_SB_PER_PATH  - 1)
+#define IGU_CMD_PROD_UPD_RESERVED_UPPER		0x07ff
+
+/*****************/
+/* PXP CONSTANTS */
+/*****************/
+
+/* Bars for Blocks */
+#define PXP_BAR_GRC                                         0
+#define PXP_BAR_TSDM                                        0
+#define PXP_BAR_USDM                                        0
+#define PXP_BAR_XSDM                                        0
+#define PXP_BAR_MSDM                                        0
+#define PXP_BAR_YSDM                                        0
+#define PXP_BAR_PSDM                                        0
+#define PXP_BAR_IGU                                         0
+#define PXP_BAR_DQ                                          1
+
+/* PTT and GTT */
+#define PXP_NUM_PF_WINDOWS                                  12
+#define PXP_PER_PF_ENTRY_SIZE                               8
+#define PXP_NUM_GLOBAL_WINDOWS                              243
+#define PXP_GLOBAL_ENTRY_SIZE                               4
+#define PXP_ADMIN_WINDOW_ALLOWED_LENGTH                     4
+#define PXP_PF_WINDOW_ADMIN_START                           0
+#define PXP_PF_WINDOW_ADMIN_LENGTH                          0x1000
+#define PXP_PF_WINDOW_ADMIN_END                             (PXP_PF_WINDOW_ADMIN_START + PXP_PF_WINDOW_ADMIN_LENGTH - 1)
+#define PXP_PF_WINDOW_ADMIN_PER_PF_START                    0
+#define PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH                   (PXP_NUM_PF_WINDOWS * PXP_PER_PF_ENTRY_SIZE)
+#define PXP_PF_WINDOW_ADMIN_PER_PF_END                      (PXP_PF_WINDOW_ADMIN_PER_PF_START + PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH - 1)
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_START                    0x200
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH                   (PXP_NUM_GLOBAL_WINDOWS * PXP_GLOBAL_ENTRY_SIZE)
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_END                      (PXP_PF_WINDOW_ADMIN_GLOBAL_START + PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH - 1)
+#define PXP_PF_GLOBAL_PRETEND_ADDR                          0x1f0
+#define PXP_PF_ME_OPAQUE_MASK_ADDR                          0xf4
+#define PXP_PF_ME_OPAQUE_ADDR                               0x1f8
+#define PXP_PF_ME_CONCRETE_ADDR                             0x1fc
+
+#define PXP_EXTERNAL_BAR_PF_WINDOW_START                    0x1000
+#define PXP_EXTERNAL_BAR_PF_WINDOW_NUM                      PXP_NUM_PF_WINDOWS
+#define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE              0x1000
+#define PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH                   (PXP_EXTERNAL_BAR_PF_WINDOW_NUM * PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE)
+#define PXP_EXTERNAL_BAR_PF_WINDOW_END                      (PXP_EXTERNAL_BAR_PF_WINDOW_START + PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH - 1)
+
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START                (PXP_EXTERNAL_BAR_PF_WINDOW_END + 1)
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM                  PXP_NUM_GLOBAL_WINDOWS
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE          0x1000
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH               (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM * PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE)
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END                  (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START + PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
+
+/* PF BAR */
+//#define PXP_BAR0_START_GRC 0x1000
+//#define PXP_BAR0_GRC_LENGTH 0xBFF000
+#define PXP_BAR0_START_GRC                      0x0000
+#define PXP_BAR0_GRC_LENGTH                     0x1C00000
+#define PXP_BAR0_END_GRC                        (PXP_BAR0_START_GRC + PXP_BAR0_GRC_LENGTH - 1)
+
+#define PXP_BAR0_START_IGU                      0x1C00000
+#define PXP_BAR0_IGU_LENGTH                     0x10000
+#define PXP_BAR0_END_IGU                        (PXP_BAR0_START_IGU + PXP_BAR0_IGU_LENGTH - 1)
+
+#define PXP_BAR0_START_TSDM                     0x1C80000
+#define PXP_BAR0_SDM_LENGTH                     0x40000
+#define PXP_BAR0_SDM_RESERVED_LENGTH            0x40000
+#define PXP_BAR0_END_TSDM                       (PXP_BAR0_START_TSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_MSDM                     0x1D00000
+#define PXP_BAR0_END_MSDM                       (PXP_BAR0_START_MSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_USDM                     0x1D80000
+#define PXP_BAR0_END_USDM                       (PXP_BAR0_START_USDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_XSDM                     0x1E00000
+#define PXP_BAR0_END_XSDM                       (PXP_BAR0_START_XSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_YSDM                     0x1E80000
+#define PXP_BAR0_END_YSDM                       (PXP_BAR0_START_YSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_PSDM                     0x1F00000
+#define PXP_BAR0_END_PSDM                       (PXP_BAR0_START_PSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_FIRST_INVALID_ADDRESS          (PXP_BAR0_END_PSDM + 1)
+
+/* VF BAR */
+#define PXP_VF_BAR0                             0
+
+#define PXP_VF_BAR0_START_GRC                   0x3E00
+#define PXP_VF_BAR0_GRC_LENGTH                  0x200
+#define PXP_VF_BAR0_END_GRC                     (PXP_VF_BAR0_START_GRC + PXP_VF_BAR0_GRC_LENGTH - 1)
+
+#define PXP_VF_BAR0_START_IGU                   0
+#define PXP_VF_BAR0_IGU_LENGTH                  0x3000
+#define PXP_VF_BAR0_END_IGU                     (PXP_VF_BAR0_START_IGU + PXP_VF_BAR0_IGU_LENGTH - 1)
+
+#define PXP_VF_BAR0_START_DQ                    0x3000
+#define PXP_VF_BAR0_DQ_LENGTH                   0x200
+#define PXP_VF_BAR0_DQ_OPAQUE_OFFSET            0
+#define PXP_VF_BAR0_ME_OPAQUE_ADDRESS           (PXP_VF_BAR0_START_DQ + PXP_VF_BAR0_DQ_OPAQUE_OFFSET)
+#define PXP_VF_BAR0_ME_CONCRETE_ADDRESS         (PXP_VF_BAR0_ME_OPAQUE_ADDRESS + 4)
+#define PXP_VF_BAR0_END_DQ                      (PXP_VF_BAR0_START_DQ + PXP_VF_BAR0_DQ_LENGTH - 1)
+
+#define PXP_VF_BAR0_START_TSDM_ZONE_B           0x3200
+#define PXP_VF_BAR0_SDM_LENGTH_ZONE_B           0x200
+#define PXP_VF_BAR0_END_TSDM_ZONE_B             (PXP_VF_BAR0_START_TSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1)
+
+#define PXP_VF_BAR0_START_MSDM_ZONE_B           0x3400
+#define PXP_VF_BAR0_END_MSDM_ZONE_B             (PXP_VF_BAR0_START_MSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1)
+
+#define PXP_VF_BAR0_START_USDM_ZONE_B           0x3600
+#define PXP_VF_BAR0_END_USDM_ZONE_B             (PXP_VF_BAR0_START_USDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1)
+
+#define PXP_VF_BAR0_START_XSDM_ZONE_B           0x3800
+#define PXP_VF_BAR0_END_XSDM_ZONE_B             (PXP_VF_BAR0_START_XSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1)
+
+#define PXP_VF_BAR0_START_YSDM_ZONE_B           0x3a00
+#define PXP_VF_BAR0_END_YSDM_ZONE_B             (PXP_VF_BAR0_START_YSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1)
+
+#define PXP_VF_BAR0_START_PSDM_ZONE_B           0x3c00
+#define PXP_VF_BAR0_END_PSDM_ZONE_B             (PXP_VF_BAR0_START_PSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1)
+
+#define PXP_VF_BAR0_START_SDM_ZONE_A            0x4000
+#define PXP_VF_BAR0_END_SDM_ZONE_A              0x10000
+
+#define PXP_VF_BAR0_GRC_WINDOW_LENGTH           32
+
+#define PXP_ILT_PAGE_SIZE_NUM_BITS_MIN          12
+#define PXP_ILT_BLOCK_FACTOR_MULTIPLIER         1024
+
+// ILT Records
+#define PXP_NUM_ILT_RECORDS_BB 7600
+#define PXP_NUM_ILT_RECORDS_K2 11000
+#define MAX_NUM_ILT_RECORDS MAX(PXP_NUM_ILT_RECORDS_BB,PXP_NUM_ILT_RECORDS_K2)
+
+
+// Host Interface
+#define PXP_QUEUES_ZONE_MAX_NUM	320
+
+
+
+
+/*****************/
+/* PRM CONSTANTS */
+/*****************/
+#define PRM_DMA_PAD_BYTES_NUM  2
+/*****************/
+/* SDMs CONSTANTS  */
+/*****************/
+
+
+#define SDM_OP_GEN_TRIG_NONE			0
+#define SDM_OP_GEN_TRIG_WAKE_THREAD		1
+#define SDM_OP_GEN_TRIG_AGG_INT			2
+#define SDM_OP_GEN_TRIG_LOADER			4
+#define SDM_OP_GEN_TRIG_INDICATE_ERROR	6
+#define SDM_OP_GEN_TRIG_RELEASE_THREAD	7
+
+/////////////////////////////////////////////////////////////
+// Completion types
+/////////////////////////////////////////////////////////////
+
+#define SDM_COMP_TYPE_NONE				0
+#define SDM_COMP_TYPE_WAKE_THREAD		1
+#define SDM_COMP_TYPE_AGG_INT			2
+#define SDM_COMP_TYPE_CM				3		// Send direct message to local CM and/or remote CMs. Destinations are defined by vector in CompParams.
+#define SDM_COMP_TYPE_LOADER			4
+#define SDM_COMP_TYPE_PXP				5		// Send direct message to PXP (like "internal write" command) to write to remote Storm RAM via remote SDM
+#define SDM_COMP_TYPE_INDICATE_ERROR	6		// Indicate error per thread
+#define SDM_COMP_TYPE_RELEASE_THREAD	7
+#define SDM_COMP_TYPE_RAM				8		// Write to local RAM as a completion
+
+
+/******************/
+/* PBF CONSTANTS  */
+/******************/
+
+/* Number of PBF command queue lines. Each line is 32B. */
+#define PBF_MAX_CMD_LINES 3328
+
+/* Number of BTB blocks. Each block is 256B. */
+#define BTB_MAX_BLOCKS 1440
+
+/*****************/
+/* PRS CONSTANTS */
+/*****************/
+
+#define PRS_GFT_CAM_LINES_NO_MATCH  31
+
+/*
+ * Async data KCQ CQE
+ */
+struct async_data
+{
+	__le32 cid /* Context ID of the connection */;
+	__le16 itid /* Task ID of the task (for an error that happened on a task) */;
+	uint8_t error_code /* error code - relevant only if the opcode indicates it's an error */;
+	uint8_t fw_debug_param /* internal fw debug parameter */;
+};
+
+
+/*
+ * Interrupt coalescing TimeSet
+ */
+struct coalescing_timeset
+{
+	uint8_t value;
+#define COALESCING_TIMESET_TIMESET_MASK  0x7F /* Interrupt coalescing TimeSet (timeout_ticks = TimeSet shl (TimerRes+1)) */
+#define COALESCING_TIMESET_TIMESET_SHIFT 0
+#define COALESCING_TIMESET_VALID_MASK    0x1 /* Only if this flag is set, timeset will take effect */
+#define COALESCING_TIMESET_VALID_SHIFT   7
+};
+
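+/*
+ * Worked example of the TimeSet formula above: with TIMESET = 0x10 and
+ * a timer resolution of TimerRes = 1,
+ * timeout_ticks = 0x10 << (1 + 1) = 0x40 ticks.
+ */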
+
+struct common_queue_zone
+{
+	__le16 ring_drv_data_consumer;
+	__le16 reserved;
+};
+
+
+/*
+ * ETH Rx producers data
+ */
+struct eth_rx_prod_data
+{
+	__le16 bd_prod /* BD producer. */;
+	__le16 cqe_prod /* CQE producer. */;
+};
+
+
+struct regpair
+{
+	__le32 lo /* low word for reg-pair */;
+	__le32 hi /* high word for reg-pair */;
+};
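+
+/*
+ * A regpair carries a 64-bit value split into little-endian 32-bit
+ * halves; e.g. qelr_verbs.c rebuilds the QP pointer from a CQE via
+ * HILO_U64(qp_handle.hi, qp_handle.lo).
+ */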
+
+/*
+ * Event Ring VF-PF Channel data
+ */
+struct vf_pf_channel_eqe_data
+{
+	struct regpair msg_addr /* VF-PF message address */;
+};
+
+struct iscsi_eqe_data
+{
+	__le32 cid /* Context ID of the connection */;
+	__le16 conn_id /* Connection ID (for an error that happened on a task) */;
+	uint8_t error_code /* error code - relevant only if the opcode indicates it's an error */;
+	uint8_t error_pdu_opcode_reserved;
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_MASK        0x3F /* The opcode of the processed PDU on which the error happened - updated for specific error codes; by default = 0xFF */
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_SHIFT       0
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_VALID_MASK  0x1 /* Indicates to the driver that the error_pdu_opcode field has a valid value */
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_VALID_SHIFT 6
+#define ISCSI_EQE_DATA_RESERVED0_MASK               0x1
+#define ISCSI_EQE_DATA_RESERVED0_SHIFT              7
+};
+
+/*
+ * Event Ring malicious VF data
+ */
+struct malicious_vf_eqe_data
+{
+	uint8_t vfId /* Malicious VF ID */;
+	uint8_t errId /* Malicious VF error */;
+	__le16 reserved[3];
+};
+
+/*
+ * Event Ring initial cleanup data
+ */
+struct initial_cleanup_eqe_data
+{
+	uint8_t vfId /* VF ID */;
+	uint8_t reserved[7];
+};
+
+/*
+ * Event Data Union
+ */
+union event_ring_data
+{
+	uint8_t bytes[8] /* Byte Array */;
+	struct vf_pf_channel_eqe_data vf_pf_channel /* VF-PF Channel data */;
+	struct iscsi_eqe_data iscsi_info /* Dedicated fields to iscsi data */;
+	struct regpair roceHandle /* Dedicated field for RoCE affiliated asynchronous error */;
+	struct malicious_vf_eqe_data malicious_vf /* Malicious VF data */;
+	struct initial_cleanup_eqe_data vf_init_cleanup /* VF Initial Cleanup data */;
+	struct regpair iwarp_handle /* Host handle for the Async Completions */;
+};
+
+
+/*
+ * Event Ring Entry
+ */
+struct event_ring_entry
+{
+	uint8_t protocol_id /* Event Protocol ID */;
+	uint8_t opcode /* Event Opcode */;
+	__le16 reserved0 /* Reserved */;
+	__le16 echo /* Echo value from ramrod data on the host */;
+	uint8_t fw_return_code /* FW return code for SP ramrods */;
+	uint8_t flags;
+#define EVENT_RING_ENTRY_ASYNC_MASK      0x1 /* 0: synchronous EQE - a completion of SP message. 1: asynchronous EQE */
+#define EVENT_RING_ENTRY_ASYNC_SHIFT     0
+#define EVENT_RING_ENTRY_RESERVED1_MASK  0x7F
+#define EVENT_RING_ENTRY_RESERVED1_SHIFT 1
+	union event_ring_data data;
+};
+
+
+
+
+
+/*
+ * Multi function mode
+ */
+enum mf_mode
+{
+	ERROR_MODE /* Unsupported mode */,
+	MF_OVLAN /* Multi function based on outer VLAN */,
+	MF_NPAR /* Multi function based on MAC address (NIC partitioning) */,
+	MAX_MF_MODE
+};
+
+
+/*
+ * Per-protocol connection types
+ */
+enum protocol_type
+{
+	PROTOCOLID_ISCSI /* iSCSI */,
+	PROTOCOLID_FCOE /* FCoE */,
+	PROTOCOLID_ROCE /* RoCE */,
+	PROTOCOLID_CORE /* Core (light L2, slow path core) */,
+	PROTOCOLID_ETH /* Ethernet */,
+	PROTOCOLID_IWARP /* iWARP */,
+	PROTOCOLID_TOE /* TOE */,
+	PROTOCOLID_PREROCE /* Pre (tapeout) RoCE */,
+	PROTOCOLID_COMMON /* ProtocolCommon */,
+	PROTOCOLID_TCP /* TCP */,
+	MAX_PROTOCOL_TYPE
+};
+
+
+/*
+ * Ustorm Queue Zone
+ */
+struct ustorm_eth_queue_zone
+{
+	struct coalescing_timeset int_coalescing_timeset /* Rx interrupt coalescing TimeSet */;
+	uint8_t reserved[3];
+};
+
+
+struct ustorm_queue_zone
+{
+	struct ustorm_eth_queue_zone eth;
+	struct common_queue_zone common;
+};
+
+
+
+/*
+ * status block structure
+ */
+struct cau_pi_entry
+{
+	__le32 prod;
+#define CAU_PI_ENTRY_PROD_VAL_MASK    0xFFFF /* A per-protocol-index PROD value. */
+#define CAU_PI_ENTRY_PROD_VAL_SHIFT   0
+#define CAU_PI_ENTRY_PI_TIMESET_MASK  0x7F /* This value determines the TimeSet that the PI is associated with  */
+#define CAU_PI_ENTRY_PI_TIMESET_SHIFT 16
+#define CAU_PI_ENTRY_FSM_SEL_MASK     0x1 /* Select the FSM within the SB */
+#define CAU_PI_ENTRY_FSM_SEL_SHIFT    23
+#define CAU_PI_ENTRY_RESERVED_MASK    0xFF /* Reserved */
+#define CAU_PI_ENTRY_RESERVED_SHIFT   24
+};
+
+
+/*
+ * status block structure
+ */
+struct cau_sb_entry
+{
+	__le32 data;
+#define CAU_SB_ENTRY_SB_PROD_MASK      0xFFFFFF /* The SB PROD index which is sent to the IGU. */
+#define CAU_SB_ENTRY_SB_PROD_SHIFT     0
+#define CAU_SB_ENTRY_STATE0_MASK       0xF /* RX state */
+#define CAU_SB_ENTRY_STATE0_SHIFT      24
+#define CAU_SB_ENTRY_STATE1_MASK       0xF /* TX state */
+#define CAU_SB_ENTRY_STATE1_SHIFT      28
+	__le32 params;
+#define CAU_SB_ENTRY_SB_TIMESET0_MASK  0x7F /* Indicates the RX TimeSet that this SB is associated with. */
+#define CAU_SB_ENTRY_SB_TIMESET0_SHIFT 0
+#define CAU_SB_ENTRY_SB_TIMESET1_MASK  0x7F /* Indicates the TX TimeSet that this SB is associated with. */
+#define CAU_SB_ENTRY_SB_TIMESET1_SHIFT 7
+#define CAU_SB_ENTRY_TIMER_RES0_MASK   0x3 /* This value will determine the RX FSM timer resolution in ticks  */
+#define CAU_SB_ENTRY_TIMER_RES0_SHIFT  14
+#define CAU_SB_ENTRY_TIMER_RES1_MASK   0x3 /* This value will determine the TX FSM timer resolution in ticks  */
+#define CAU_SB_ENTRY_TIMER_RES1_SHIFT  16
+#define CAU_SB_ENTRY_VF_NUMBER_MASK    0xFF
+#define CAU_SB_ENTRY_VF_NUMBER_SHIFT   18
+#define CAU_SB_ENTRY_VF_VALID_MASK     0x1
+#define CAU_SB_ENTRY_VF_VALID_SHIFT    26
+#define CAU_SB_ENTRY_PF_NUMBER_MASK    0xF
+#define CAU_SB_ENTRY_PF_NUMBER_SHIFT   27
+#define CAU_SB_ENTRY_TPH_MASK          0x1 /* If set then indicates that the TPH STAG is equal to the SB number. Otherwise the STAG will be equal to all ones. */
+#define CAU_SB_ENTRY_TPH_SHIFT         31
+};
+
+
+/*
+ * core doorbell data
+ */
+struct core_db_data
+{
+	uint8_t params;
+#define CORE_DB_DATA_DEST_MASK         0x3 /* destination of doorbell (use enum db_dest) */
+#define CORE_DB_DATA_DEST_SHIFT        0
+#define CORE_DB_DATA_AGG_CMD_MASK      0x3 /* aggregative command to CM (use enum db_agg_cmd_sel) */
+#define CORE_DB_DATA_AGG_CMD_SHIFT     2
+#define CORE_DB_DATA_BYPASS_EN_MASK    0x1 /* enable QM bypass */
+#define CORE_DB_DATA_BYPASS_EN_SHIFT   4
+#define CORE_DB_DATA_RESERVED_MASK     0x1
+#define CORE_DB_DATA_RESERVED_SHIFT    5
+#define CORE_DB_DATA_AGG_VAL_SEL_MASK  0x3 /* aggregative value selection */
+#define CORE_DB_DATA_AGG_VAL_SEL_SHIFT 6
+	uint8_t agg_flags /* bit for every DQ counter flags in CM context that DQ can increment */;
+	__le16 spq_prod;
+};
+
+
+/*
+ * Enum of doorbell aggregative command selection
+ */
+enum db_agg_cmd_sel
+{
+	DB_AGG_CMD_NOP /* No operation */,
+	DB_AGG_CMD_SET /* Set the value */,
+	DB_AGG_CMD_ADD /* Add the value */,
+	DB_AGG_CMD_MAX /* Set max of current and new value */,
+	MAX_DB_AGG_CMD_SEL
+};
+
+
+/*
+ * Enum of doorbell destination
+ */
+enum db_dest
+{
+	DB_DEST_XCM /* TX doorbell to XCM */,
+	DB_DEST_UCM /* RX doorbell to UCM */,
+	DB_DEST_TCM /* RX doorbell to TCM */,
+	DB_NUM_DESTINATIONS,
+	MAX_DB_DEST
+};
+
+
+/*
+ * Enum of doorbell DPM types
+ */
+enum db_dpm_type
+{
+	DPM_LEGACY /* Legacy DPM- to Xstorm RAM */,
+	DPM_ROCE /* RoCE DPM- to NIG */,
+	DPM_L2_INLINE /* L2 DPM inline- to PBF, with packet data on doorbell */,
+	DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */,
+	MAX_DB_DPM_TYPE
+};
+
+
+/*
+ * Structure for doorbell data, in L2 DPM mode, for the first doorbell in a DPM burst
+ */
+struct db_l2_dpm_data
+{
+	__le16 icid /* internal CID */;
+	__le16 bd_prod /* bd producer value to update */;
+	__le32 params;
+#define DB_L2_DPM_DATA_SIZE_MASK       0x3F /* Size in QWORD-s of the DPM burst */
+#define DB_L2_DPM_DATA_SIZE_SHIFT      0
+#define DB_L2_DPM_DATA_DPM_TYPE_MASK   0x3 /* Type of DPM transaction (DPM_L2_INLINE or DPM_L2_BD) (use enum db_dpm_type) */
+#define DB_L2_DPM_DATA_DPM_TYPE_SHIFT  6
+#define DB_L2_DPM_DATA_NUM_BDS_MASK    0xFF /* number of BD-s */
+#define DB_L2_DPM_DATA_NUM_BDS_SHIFT   8
+#define DB_L2_DPM_DATA_PKT_SIZE_MASK   0x7FF /* size of the packet to be transmitted in bytes */
+#define DB_L2_DPM_DATA_PKT_SIZE_SHIFT  16
+#define DB_L2_DPM_DATA_RESERVED0_MASK  0x1
+#define DB_L2_DPM_DATA_RESERVED0_SHIFT 27
+#define DB_L2_DPM_DATA_SGE_NUM_MASK    0x7 /* In DPM_L2_BD mode: the number of SGE-s */
+#define DB_L2_DPM_DATA_SGE_NUM_SHIFT   28
+#define DB_L2_DPM_DATA_RESERVED1_MASK  0x1
+#define DB_L2_DPM_DATA_RESERVED1_SHIFT 31
+};
+
+
+/*
+ * Structure for SGE in a DPM doorbell of type DPM_L2_BD
+ */
+struct db_l2_dpm_sge
+{
+	struct regpair addr /* Single continuous buffer */;
+	__le16 nbytes /* Number of bytes in this BD. */;
+	__le16 bitfields;
+#define DB_L2_DPM_SGE_TPH_ST_INDEX_MASK  0x1FF /* The TPH STAG index value */
+#define DB_L2_DPM_SGE_TPH_ST_INDEX_SHIFT 0
+#define DB_L2_DPM_SGE_RESERVED0_MASK     0x3
+#define DB_L2_DPM_SGE_RESERVED0_SHIFT    9
+#define DB_L2_DPM_SGE_ST_VALID_MASK      0x1 /* Indicate if ST hint is requested or not */
+#define DB_L2_DPM_SGE_ST_VALID_SHIFT     11
+#define DB_L2_DPM_SGE_RESERVED1_MASK     0xF
+#define DB_L2_DPM_SGE_RESERVED1_SHIFT    12
+	__le32 reserved2;
+};
+
+
+/*
+ * Structure for doorbell address, in legacy mode
+ */
+struct db_legacy_addr
+{
+	__le32 addr;
+#define DB_LEGACY_ADDR_RESERVED0_MASK  0x3
+#define DB_LEGACY_ADDR_RESERVED0_SHIFT 0
+#define DB_LEGACY_ADDR_DEMS_MASK       0x7 /* doorbell extraction mode specifier- 0 if not used */
+#define DB_LEGACY_ADDR_DEMS_SHIFT      2
+#define DB_LEGACY_ADDR_ICID_MASK       0x7FFFFFF /* internal CID */
+#define DB_LEGACY_ADDR_ICID_SHIFT      5
+};
+
+
+/*
+ * Structure for doorbell address, in PWM mode
+ */
+struct db_pwm_addr
+{
+	__le32 addr;
+#define DB_PWM_ADDR_RESERVED0_MASK  0x7
+#define DB_PWM_ADDR_RESERVED0_SHIFT 0
+#define DB_PWM_ADDR_OFFSET_MASK     0x7F /* Offset in PWM address space */
+#define DB_PWM_ADDR_OFFSET_SHIFT    3
+#define DB_PWM_ADDR_WID_MASK        0x3 /* Window ID */
+#define DB_PWM_ADDR_WID_SHIFT       10
+#define DB_PWM_ADDR_DPI_MASK        0xFFFF /* Doorbell page ID */
+#define DB_PWM_ADDR_DPI_SHIFT       12
+#define DB_PWM_ADDR_RESERVED1_MASK  0xF
+#define DB_PWM_ADDR_RESERVED1_SHIFT 28
+};
+
+
+/*
+ * Parameters to RoCE firmware, passed in EDPM doorbell
+ */
+struct db_roce_dpm_params
+{
+	__le32 params;
+#define DB_ROCE_DPM_PARAMS_SIZE_MASK            0x3F /* Size in QWORD-s of the DPM burst */
+#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT           0
+#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK        0x3 /* Type of DPM transaction (DPM_ROCE) (use enum db_dpm_type) */
+#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT       6
+#define DB_ROCE_DPM_PARAMS_OPCODE_MASK          0xFF /* opcode for ROCE operation */
+#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT         8
+#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK        0x7FF /* the size of the WQE payload in bytes */
+#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT       16
+#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK       0x1
+#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT      27
+#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK  0x1 /* RoCE completion flag */
+#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+#define DB_ROCE_DPM_PARAMS_S_FLG_MASK           0x1 /* RoCE S flag */
+#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT          29
+#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK       0x3
+#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT      30
+};
+
+/*
+ * Structure for doorbell data, in ROCE DPM mode, for the first doorbell in a DPM burst
+ */
+struct db_roce_dpm_data
+{
+	__le16 icid /* internal CID */;
+	__le16 prod_val /* aggregated value to update */;
+	struct db_roce_dpm_params params /* parameters passed to RoCE firmware */;
+};
+
+
+
+/*
+ * Igu interrupt command
+ */
+enum igu_int_cmd
+{
+	IGU_INT_ENABLE=0,
+	IGU_INT_DISABLE=1,
+	IGU_INT_NOP=2,
+	IGU_INT_NOP2=3,
+	MAX_IGU_INT_CMD
+};
+
+
+/*
+ * IGU producer or consumer update command
+ */
+struct igu_prod_cons_update
+{
+	__le32 sb_id_and_flags;
+#define IGU_PROD_CONS_UPDATE_SB_INDEX_MASK        0xFFFFFF
+#define IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT       0
+#define IGU_PROD_CONS_UPDATE_UPDATE_FLAG_MASK     0x1
+#define IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT    24
+#define IGU_PROD_CONS_UPDATE_ENABLE_INT_MASK      0x3 /* interrupt enable/disable/nop (use enum igu_int_cmd) */
+#define IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT     25
+#define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_MASK  0x1 /*  (use enum igu_seg_access) */
+#define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT 27
+#define IGU_PROD_CONS_UPDATE_TIMER_MASK_MASK      0x1
+#define IGU_PROD_CONS_UPDATE_TIMER_MASK_SHIFT     28
+#define IGU_PROD_CONS_UPDATE_RESERVED0_MASK       0x3
+#define IGU_PROD_CONS_UPDATE_RESERVED0_SHIFT      29
+#define IGU_PROD_CONS_UPDATE_COMMAND_TYPE_MASK    0x1 /* must always be cleared (use enum command_type_bit) */
+#define IGU_PROD_CONS_UPDATE_COMMAND_TYPE_SHIFT   31
+	__le32 reserved1;
+};
+
+
+/*
+ * Igu segments access for default status block only
+ */
+enum igu_seg_access
+{
+	IGU_SEG_ACCESS_REG=0,
+	IGU_SEG_ACCESS_ATTN=1,
+	MAX_IGU_SEG_ACCESS
+};
+
+
+/*
+ * Enumeration for the L3 type field of parsing_and_err_flags_union. L3Type: 0 - unknown (not IP), 1 - IPv4, 2 - IPv6 (this field can be filled according to the last ethertype)
+ */
+enum l3_type
+{
+	e_l3Type_unknown,
+	e_l3Type_ipv4,
+	e_l3Type_ipv6,
+	MAX_L3_TYPE
+};
+
+
+/*
+ * Enumeration for the l4Protocol field of parsing_and_err_flags_union. L4 protocol: 0 - none, 1 - TCP, 2 - UDP. If the packet is an IPv4 fragment and it is not the first fragment, the protocol type should be set to none.
+ */
+enum l4_protocol
+{
+	e_l4Protocol_none,
+	e_l4Protocol_tcp,
+	e_l4Protocol_udp,
+	MAX_L4_PROTOCOL
+};
+
+
+/*
+ * Parsing and error flags field.
+ */
+struct parsing_and_err_flags
+{
+	__le16 flags;
+#define PARSING_AND_ERR_FLAGS_L3TYPE_MASK                      0x3 /* L3Type: 0 - unknown (not IP), 1 - IPv4, 2 - IPv6 (this field can be filled according to the last ethertype) (use enum l3_type) */
+#define PARSING_AND_ERR_FLAGS_L3TYPE_SHIFT                     0
+#define PARSING_AND_ERR_FLAGS_L4PROTOCOL_MASK                  0x3 /* L4 protocol: 0 - none, 1 - TCP, 2 - UDP. If the packet is an IPv4 fragment and it is not the first fragment, the protocol type should be set to none. (use enum l4_protocol) */
+#define PARSING_AND_ERR_FLAGS_L4PROTOCOL_SHIFT                 2
+#define PARSING_AND_ERR_FLAGS_IPV4FRAG_MASK                    0x1 /* Set if the packet is IPv4 fragment. */
+#define PARSING_AND_ERR_FLAGS_IPV4FRAG_SHIFT                   4
+#define PARSING_AND_ERR_FLAGS_TAG8021QEXIST_MASK               0x1 /* Set if a VLAN tag exists. Invalid if the tunnel type is IP GRE or IP GENEVE. */
+#define PARSING_AND_ERR_FLAGS_TAG8021QEXIST_SHIFT              5
+#define PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK        0x1 /* Set if L4 checksum was calculated. */
+#define PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT       6
+#define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_MASK                 0x1 /* Set for PTP packet. */
+#define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_SHIFT                7
+#define PARSING_AND_ERR_FLAGS_TIMESTAMPRECORDED_MASK           0x1 /* Set if PTP timestamp recorded. */
+#define PARSING_AND_ERR_FLAGS_TIMESTAMPRECORDED_SHIFT          8
+#define PARSING_AND_ERR_FLAGS_IPHDRERROR_MASK                  0x1 /* Set on IPv4 version mismatch, header length error or checksum error, or on IPv6 version mismatch */
+#define PARSING_AND_ERR_FLAGS_IPHDRERROR_SHIFT                 9
+#define PARSING_AND_ERR_FLAGS_L4CHKSMERROR_MASK                0x1 /* Set if L4 checksum validation failed. Valid only if L4 checksum was calculated. */
+#define PARSING_AND_ERR_FLAGS_L4CHKSMERROR_SHIFT               10
+#define PARSING_AND_ERR_FLAGS_TUNNELEXIST_MASK                 0x1 /* Set if GRE/VXLAN/GENEVE tunnel detected. */
+#define PARSING_AND_ERR_FLAGS_TUNNELEXIST_SHIFT                11
+#define PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_MASK         0x1 /* Set if VLAN tag exists in tunnel header. */
+#define PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_SHIFT        12
+#define PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_MASK            0x1 /* Set on tunnel IPv4 version mismatch, header length error or checksum error, or on tunnel IPv6 version mismatch */
+#define PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_SHIFT           13
+#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_MASK  0x1 /* Set if GRE or VXLAN/GENEVE UDP checksum was calculated. */
+#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_SHIFT 14
+#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_MASK          0x1 /* Set if tunnel L4 checksum validation failed. Valid only if tunnel L4 checksum was calculated. */
+#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_SHIFT         15
+};
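+
+/*
+ * For illustration only, not part of this patch: a consumer could classify
+ * a received packet from these flags as below, using le16toh from endian.h
+ * (pkt is a hypothetical pointer to a struct parsing_and_err_flags, and
+ * handle_tcp_ipv4 a hypothetical handler).
+ *
+ *   __u16 f = le16toh(pkt->flags);
+ *   enum l3_type l3 = (f >> PARSING_AND_ERR_FLAGS_L3TYPE_SHIFT) &
+ *                     PARSING_AND_ERR_FLAGS_L3TYPE_MASK;
+ *   enum l4_protocol l4 = (f >> PARSING_AND_ERR_FLAGS_L4PROTOCOL_SHIFT) &
+ *                         PARSING_AND_ERR_FLAGS_L4PROTOCOL_MASK;
+ *
+ *   if (l3 == e_l3Type_ipv4 && l4 == e_l4Protocol_tcp)
+ *           handle_tcp_ipv4(pkt);
+ */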
+
+
+/*
+ * Pb context
+ */
+struct pb_context
+{
+	__le32 crc[4];
+};
+
+
+/*
+ * Concrete Function ID.
+ */
+struct pxp_concrete_fid
+{
+	__le16 fid;
+#define PXP_CONCRETE_FID_PFID_MASK     0xF /* Parent PFID */
+#define PXP_CONCRETE_FID_PFID_SHIFT    0
+#define PXP_CONCRETE_FID_PORT_MASK     0x3 /* port number */
+#define PXP_CONCRETE_FID_PORT_SHIFT    4
+#define PXP_CONCRETE_FID_PATH_MASK     0x1 /* path number */
+#define PXP_CONCRETE_FID_PATH_SHIFT    6
+#define PXP_CONCRETE_FID_VFVALID_MASK  0x1
+#define PXP_CONCRETE_FID_VFVALID_SHIFT 7
+#define PXP_CONCRETE_FID_VFID_MASK     0xFF
+#define PXP_CONCRETE_FID_VFID_SHIFT    8
+};
+
+
+/*
+ * Concrete Function ID.
+ */
+struct pxp_pretend_concrete_fid
+{
+	__le16 fid;
+#define PXP_PRETEND_CONCRETE_FID_PFID_MASK      0xF /* Parent PFID */
+#define PXP_PRETEND_CONCRETE_FID_PFID_SHIFT     0
+#define PXP_PRETEND_CONCRETE_FID_RESERVED_MASK  0x7 /* Port number; only valid when part of the ME register. */
+#define PXP_PRETEND_CONCRETE_FID_RESERVED_SHIFT 4
+#define PXP_PRETEND_CONCRETE_FID_VFVALID_MASK   0x1
+#define PXP_PRETEND_CONCRETE_FID_VFVALID_SHIFT  7
+#define PXP_PRETEND_CONCRETE_FID_VFID_MASK      0xFF
+#define PXP_PRETEND_CONCRETE_FID_VFID_SHIFT     8
+};
+
+/*
+ * Function ID.
+ */
+union pxp_pretend_fid
+{
+	struct pxp_pretend_concrete_fid concrete_fid;
+	__le16 opaque_fid;
+};
+
+/*
+ * Pxp Pretend Command Register.
+ */
+struct pxp_pretend_cmd
+{
+	union pxp_pretend_fid fid;
+	__le16 control;
+#define PXP_PRETEND_CMD_PATH_MASK              0x1
+#define PXP_PRETEND_CMD_PATH_SHIFT             0
+#define PXP_PRETEND_CMD_USE_PORT_MASK          0x1
+#define PXP_PRETEND_CMD_USE_PORT_SHIFT         1
+#define PXP_PRETEND_CMD_PORT_MASK              0x3
+#define PXP_PRETEND_CMD_PORT_SHIFT             2
+#define PXP_PRETEND_CMD_RESERVED0_MASK         0xF
+#define PXP_PRETEND_CMD_RESERVED0_SHIFT        4
+#define PXP_PRETEND_CMD_RESERVED1_MASK         0xF
+#define PXP_PRETEND_CMD_RESERVED1_SHIFT        8
+#define PXP_PRETEND_CMD_PRETEND_PATH_MASK      0x1 /* is pretend mode? */
+#define PXP_PRETEND_CMD_PRETEND_PATH_SHIFT     12
+#define PXP_PRETEND_CMD_PRETEND_PORT_MASK      0x1 /* is pretend mode? */
+#define PXP_PRETEND_CMD_PRETEND_PORT_SHIFT     13
+#define PXP_PRETEND_CMD_PRETEND_FUNCTION_MASK  0x1 /* is pretend mode? */
+#define PXP_PRETEND_CMD_PRETEND_FUNCTION_SHIFT 14
+#define PXP_PRETEND_CMD_IS_CONCRETE_MASK       0x1 /* is fid concrete? */
+#define PXP_PRETEND_CMD_IS_CONCRETE_SHIFT      15
+};
+
+
+
+
+/*
+ * PTT Record in PXP Admin Window.
+ */
+struct pxp_ptt_entry
+{
+	__le32 offset;
+#define PXP_PTT_ENTRY_OFFSET_MASK     0x7FFFFF
+#define PXP_PTT_ENTRY_OFFSET_SHIFT    0
+#define PXP_PTT_ENTRY_RESERVED0_MASK  0x1FF
+#define PXP_PTT_ENTRY_RESERVED0_SHIFT 23
+	struct pxp_pretend_cmd pretend;
+};
+
+
+/*
+ * VF Zone A Permission Register.
+ */
+struct pxp_vf_zone_a_permission
+{
+	__le32 control;
+#define PXP_VF_ZONE_A_PERMISSION_VFID_MASK       0xFF
+#define PXP_VF_ZONE_A_PERMISSION_VFID_SHIFT      0
+#define PXP_VF_ZONE_A_PERMISSION_VALID_MASK      0x1
+#define PXP_VF_ZONE_A_PERMISSION_VALID_SHIFT     8
+#define PXP_VF_ZONE_A_PERMISSION_RESERVED0_MASK  0x7F
+#define PXP_VF_ZONE_A_PERMISSION_RESERVED0_SHIFT 9
+#define PXP_VF_ZONE_A_PERMISSION_RESERVED1_MASK  0xFFFF
+#define PXP_VF_ZONE_A_PERMISSION_RESERVED1_SHIFT 16
+};
+
+
+/*
+ * Rdif context
+ */
+struct rdif_task_context
+{
+	__le32 initialRefTag;
+	__le16 appTagValue;
+	__le16 appTagMask;
+	uint8_t flags0;
+#define RDIF_TASK_CONTEXT_IGNOREAPPTAG_MASK            0x1
+#define RDIF_TASK_CONTEXT_IGNOREAPPTAG_SHIFT           0
+#define RDIF_TASK_CONTEXT_INITIALREFTAGVALID_MASK      0x1
+#define RDIF_TASK_CONTEXT_INITIALREFTAGVALID_SHIFT     1
+#define RDIF_TASK_CONTEXT_HOSTGUARDTYPE_MASK           0x1 /* 0 = IP checksum, 1 = CRC */
+#define RDIF_TASK_CONTEXT_HOSTGUARDTYPE_SHIFT          2
+#define RDIF_TASK_CONTEXT_SETERRORWITHEOP_MASK         0x1
+#define RDIF_TASK_CONTEXT_SETERRORWITHEOP_SHIFT        3
+#define RDIF_TASK_CONTEXT_PROTECTIONTYPE_MASK          0x3 /* 1/2/3 - Protection Type */
+#define RDIF_TASK_CONTEXT_PROTECTIONTYPE_SHIFT         4
+#define RDIF_TASK_CONTEXT_CRC_SEED_MASK                0x1 /* 0=0x0000, 1=0xffff */
+#define RDIF_TASK_CONTEXT_CRC_SEED_SHIFT               6
+#define RDIF_TASK_CONTEXT_KEEPREFTAGCONST_MASK         0x1 /* Keep reference tag constant */
+#define RDIF_TASK_CONTEXT_KEEPREFTAGCONST_SHIFT        7
+	uint8_t partialDifData[7];
+	__le16 partialCrcValue;
+	__le16 partialChecksumValue;
+	__le32 offsetInIO;
+	__le16 flags1;
+#define RDIF_TASK_CONTEXT_VALIDATEGUARD_MASK           0x1
+#define RDIF_TASK_CONTEXT_VALIDATEGUARD_SHIFT          0
+#define RDIF_TASK_CONTEXT_VALIDATEAPPTAG_MASK          0x1
+#define RDIF_TASK_CONTEXT_VALIDATEAPPTAG_SHIFT         1
+#define RDIF_TASK_CONTEXT_VALIDATEREFTAG_MASK          0x1
+#define RDIF_TASK_CONTEXT_VALIDATEREFTAG_SHIFT         2
+#define RDIF_TASK_CONTEXT_FORWARDGUARD_MASK            0x1
+#define RDIF_TASK_CONTEXT_FORWARDGUARD_SHIFT           3
+#define RDIF_TASK_CONTEXT_FORWARDAPPTAG_MASK           0x1
+#define RDIF_TASK_CONTEXT_FORWARDAPPTAG_SHIFT          4
+#define RDIF_TASK_CONTEXT_FORWARDREFTAG_MASK           0x1
+#define RDIF_TASK_CONTEXT_FORWARDREFTAG_SHIFT          5
+#define RDIF_TASK_CONTEXT_INTERVALSIZE_MASK            0x7 /* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define RDIF_TASK_CONTEXT_INTERVALSIZE_SHIFT           6
+#define RDIF_TASK_CONTEXT_HOSTINTERFACE_MASK           0x3 /* 0=None, 1=DIF, 2=DIX */
+#define RDIF_TASK_CONTEXT_HOSTINTERFACE_SHIFT          9
+#define RDIF_TASK_CONTEXT_DIFBEFOREDATA_MASK           0x1 /* DIF tag right at the beginning of DIF interval */
+#define RDIF_TASK_CONTEXT_DIFBEFOREDATA_SHIFT          11
+#define RDIF_TASK_CONTEXT_RESERVED0_MASK               0x1
+#define RDIF_TASK_CONTEXT_RESERVED0_SHIFT              12
+#define RDIF_TASK_CONTEXT_NETWORKINTERFACE_MASK        0x1 /* 0=None, 1=DIF */
+#define RDIF_TASK_CONTEXT_NETWORKINTERFACE_SHIFT       13
+#define RDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_MASK   0x1 /* Forward application tag with mask */
+#define RDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_SHIFT  14
+#define RDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_MASK   0x1 /* Forward reference tag with mask */
+#define RDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_SHIFT  15
+	__le16 state;
+#define RDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFT_MASK    0xF
+#define RDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFT_SHIFT   0
+#define RDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFT_MASK  0xF
+#define RDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFT_SHIFT 4
+#define RDIF_TASK_CONTEXT_ERRORINIO_MASK               0x1
+#define RDIF_TASK_CONTEXT_ERRORINIO_SHIFT              8
+#define RDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_MASK        0x1
+#define RDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_SHIFT       9
+#define RDIF_TASK_CONTEXT_REFTAGMASK_MASK              0xF /* mask for reference tag handling */
+#define RDIF_TASK_CONTEXT_REFTAGMASK_SHIFT             10
+#define RDIF_TASK_CONTEXT_RESERVED1_MASK               0x3
+#define RDIF_TASK_CONTEXT_RESERVED1_SHIFT              14
+	__le32 reserved2;
+};
+
+
+
+/*
+ * RSS hash type
+ */
+enum rss_hash_type
+{
+	RSS_HASH_TYPE_DEFAULT=0,
+	RSS_HASH_TYPE_IPV4=1,
+	RSS_HASH_TYPE_TCP_IPV4=2,
+	RSS_HASH_TYPE_IPV6=3,
+	RSS_HASH_TYPE_TCP_IPV6=4,
+	RSS_HASH_TYPE_UDP_IPV4=5,
+	RSS_HASH_TYPE_UDP_IPV6=6,
+	MAX_RSS_HASH_TYPE
+};
+
+
+/*
+ * status block structure
+ */
+struct status_block
+{
+	__le16 pi_array[PIS_PER_SB];
+	__le32 sb_num;
+#define STATUS_BLOCK_SB_NUM_MASK      0x1FF
+#define STATUS_BLOCK_SB_NUM_SHIFT     0
+#define STATUS_BLOCK_ZERO_PAD_MASK    0x7F
+#define STATUS_BLOCK_ZERO_PAD_SHIFT   9
+#define STATUS_BLOCK_ZERO_PAD2_MASK   0xFFFF
+#define STATUS_BLOCK_ZERO_PAD2_SHIFT  16
+	__le32 prod_index;
+#define STATUS_BLOCK_PROD_INDEX_MASK  0xFFFFFF
+#define STATUS_BLOCK_PROD_INDEX_SHIFT 0
+#define STATUS_BLOCK_ZERO_PAD3_MASK   0xFF
+#define STATUS_BLOCK_ZERO_PAD3_SHIFT  24
+};
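+
+/*
+ * Illustrative sketch, not part of the interface: the producer index
+ * occupies the low 24 bits of prod_index, so a consumer would typically
+ * read it as below (sb is a hypothetical pointer to a DMA-coherent status
+ * block):
+ *
+ *   __u32 prod = le32toh(sb->prod_index) & STATUS_BLOCK_PROD_INDEX_MASK;
+ */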
+
+
+/*
+ * Tdif context
+ */
+struct tdif_task_context
+{
+	__le32 initialRefTag;
+	__le16 appTagValue;
+	__le16 appTagMask;
+	__le16 partialCrcValueB;
+	__le16 partialChecksumValueB;
+	__le16 stateB;
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTB_MASK    0xF
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTB_SHIFT   0
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTB_MASK  0xF
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTB_SHIFT 4
+#define TDIF_TASK_CONTEXT_ERRORINIOB_MASK               0x1
+#define TDIF_TASK_CONTEXT_ERRORINIOB_SHIFT              8
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_MASK         0x1
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_SHIFT        9
+#define TDIF_TASK_CONTEXT_RESERVED0_MASK                0x3F
+#define TDIF_TASK_CONTEXT_RESERVED0_SHIFT               10
+	uint8_t reserved1;
+	uint8_t flags0;
+#define TDIF_TASK_CONTEXT_IGNOREAPPTAG_MASK             0x1
+#define TDIF_TASK_CONTEXT_IGNOREAPPTAG_SHIFT            0
+#define TDIF_TASK_CONTEXT_INITIALREFTAGVALID_MASK       0x1
+#define TDIF_TASK_CONTEXT_INITIALREFTAGVALID_SHIFT      1
+#define TDIF_TASK_CONTEXT_HOSTGUARDTYPE_MASK            0x1 /* 0 = IP checksum, 1 = CRC */
+#define TDIF_TASK_CONTEXT_HOSTGUARDTYPE_SHIFT           2
+#define TDIF_TASK_CONTEXT_SETERRORWITHEOP_MASK          0x1
+#define TDIF_TASK_CONTEXT_SETERRORWITHEOP_SHIFT         3
+#define TDIF_TASK_CONTEXT_PROTECTIONTYPE_MASK           0x3 /* 1/2/3 - Protection Type */
+#define TDIF_TASK_CONTEXT_PROTECTIONTYPE_SHIFT          4
+#define TDIF_TASK_CONTEXT_CRC_SEED_MASK                 0x1 /* 0=0x0000, 1=0xffff */
+#define TDIF_TASK_CONTEXT_CRC_SEED_SHIFT                6
+#define TDIF_TASK_CONTEXT_RESERVED2_MASK                0x1
+#define TDIF_TASK_CONTEXT_RESERVED2_SHIFT               7
+	__le32 flags1;
+#define TDIF_TASK_CONTEXT_VALIDATEGUARD_MASK            0x1
+#define TDIF_TASK_CONTEXT_VALIDATEGUARD_SHIFT           0
+#define TDIF_TASK_CONTEXT_VALIDATEAPPTAG_MASK           0x1
+#define TDIF_TASK_CONTEXT_VALIDATEAPPTAG_SHIFT          1
+#define TDIF_TASK_CONTEXT_VALIDATEREFTAG_MASK           0x1
+#define TDIF_TASK_CONTEXT_VALIDATEREFTAG_SHIFT          2
+#define TDIF_TASK_CONTEXT_FORWARDGUARD_MASK             0x1
+#define TDIF_TASK_CONTEXT_FORWARDGUARD_SHIFT            3
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAG_MASK            0x1
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAG_SHIFT           4
+#define TDIF_TASK_CONTEXT_FORWARDREFTAG_MASK            0x1
+#define TDIF_TASK_CONTEXT_FORWARDREFTAG_SHIFT           5
+#define TDIF_TASK_CONTEXT_INTERVALSIZE_MASK             0x7 /* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define TDIF_TASK_CONTEXT_INTERVALSIZE_SHIFT            6
+#define TDIF_TASK_CONTEXT_HOSTINTERFACE_MASK            0x3 /* 0=None, 1=DIF, 2=DIX */
+#define TDIF_TASK_CONTEXT_HOSTINTERFACE_SHIFT           9
+#define TDIF_TASK_CONTEXT_DIFBEFOREDATA_MASK            0x1 /* DIF tag right at the beginning of DIF interval */
+#define TDIF_TASK_CONTEXT_DIFBEFOREDATA_SHIFT           11
+#define TDIF_TASK_CONTEXT_RESERVED3_MASK                0x1 /* reserved */
+#define TDIF_TASK_CONTEXT_RESERVED3_SHIFT               12
+#define TDIF_TASK_CONTEXT_NETWORKINTERFACE_MASK         0x1 /* 0=None, 1=DIF */
+#define TDIF_TASK_CONTEXT_NETWORKINTERFACE_SHIFT        13
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTA_MASK    0xF
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTA_SHIFT   14
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTA_MASK  0xF
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTA_SHIFT 18
+#define TDIF_TASK_CONTEXT_ERRORINIOA_MASK               0x1
+#define TDIF_TASK_CONTEXT_ERRORINIOA_SHIFT              22
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOWA_MASK        0x1
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOWA_SHIFT       23
+#define TDIF_TASK_CONTEXT_REFTAGMASK_MASK               0xF /* mask for reference tag handling */
+#define TDIF_TASK_CONTEXT_REFTAGMASK_SHIFT              24
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_MASK    0x1 /* Forward application tag with mask */
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_SHIFT   28
+#define TDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_MASK    0x1 /* Forward reference tag with mask */
+#define TDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_SHIFT   29
+#define TDIF_TASK_CONTEXT_KEEPREFTAGCONST_MASK          0x1 /* Keep reference tag constant */
+#define TDIF_TASK_CONTEXT_KEEPREFTAGCONST_SHIFT         30
+#define TDIF_TASK_CONTEXT_RESERVED4_MASK                0x1
+#define TDIF_TASK_CONTEXT_RESERVED4_SHIFT               31
+	__le32 offsetInIOB;
+	__le16 partialCrcValueA;
+	__le16 partialChecksumValueA;
+	__le32 offsetInIOA;
+	uint8_t partialDifDataA[8];
+	uint8_t partialDifDataB[8];
+};
+
+
+/*
+ * Timers context
+ */
+struct timers_context
+{
+	__le32 logical_client_0;
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK     0xFFFFFFF /* Expiration time of logical client 0 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT    0
+#define TIMERS_CONTEXT_VALIDLC0_MASK              0x1 /* Valid bit of logical client 0 */
+#define TIMERS_CONTEXT_VALIDLC0_SHIFT             28
+#define TIMERS_CONTEXT_ACTIVELC0_MASK             0x1 /* Active bit of logical client 0 */
+#define TIMERS_CONTEXT_ACTIVELC0_SHIFT            29
+#define TIMERS_CONTEXT_RESERVED0_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED0_SHIFT            30
+	__le32 logical_client_1;
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK     0xFFFFFFF /* Expiration time of logical client 1 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT    0
+#define TIMERS_CONTEXT_VALIDLC1_MASK              0x1 /* Valid bit of logical client 1 */
+#define TIMERS_CONTEXT_VALIDLC1_SHIFT             28
+#define TIMERS_CONTEXT_ACTIVELC1_MASK             0x1 /* Active bit of logical client 1 */
+#define TIMERS_CONTEXT_ACTIVELC1_SHIFT            29
+#define TIMERS_CONTEXT_RESERVED1_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED1_SHIFT            30
+	__le32 logical_client_2;
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK     0xFFFFFFF /* Expiration time of logical client 2 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT    0
+#define TIMERS_CONTEXT_VALIDLC2_MASK              0x1 /* Valid bit of logical client 2 */
+#define TIMERS_CONTEXT_VALIDLC2_SHIFT             28
+#define TIMERS_CONTEXT_ACTIVELC2_MASK             0x1 /* Active bit of logical client 2 */
+#define TIMERS_CONTEXT_ACTIVELC2_SHIFT            29
+#define TIMERS_CONTEXT_RESERVED2_MASK             0x3
+#define TIMERS_CONTEXT_RESERVED2_SHIFT            30
+	__le32 host_expiration_fields;
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK  0xFFFFFFF /* Expiration time on host (closest one) */
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK  0x1 /* Valid bit of host expiration */
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
+#define TIMERS_CONTEXT_RESERVED3_MASK             0x7
+#define TIMERS_CONTEXT_RESERVED3_SHIFT            29
+};
+
+
+/*
+ * Enum for next_protocol field of tunnel_parsing_flags
+ */
+enum tunnel_next_protocol
+{
+	e_unknown=0,
+	e_l2=1,
+	e_ipv4=2,
+	e_ipv6=3,
+	MAX_TUNNEL_NEXT_PROTOCOL
+};
+
+#endif /* __COMMON_HSI__ */
diff --git a/providers/qedr/qelr_hsi.h b/providers/qedr/qelr_hsi.h
new file mode 100644
index 0000000..8eaf183
--- /dev/null
+++ b/providers/qedr/qelr_hsi.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __QED_HSI_ROCE__
+#define __QED_HSI_ROCE__
+/********************************/
+/* Add include to common target */
+/********************************/
+#include "common_hsi.h"
+
+/************************************************************************/
+/* Add include to common roce target for both eCore and protocol roce driver */
+/************************************************************************/
+#include "roce_common.h"
+/************************************************************************/
+/* Add include to qed hsi rdma target for both roce and iwarp qed driver */
+/************************************************************************/
+#include "qelr_hsi_rdma.h"
+
+/* Affiliated asynchronous events / errors enumeration */
+enum roce_async_events_type
+{
+	ROCE_ASYNC_EVENT_NONE,
+	ROCE_ASYNC_EVENT_COMM_EST,
+	ROCE_ASYNC_EVENT_SQ_DRAINED,
+	ROCE_ASYNC_EVENT_SRQ_LIMIT,
+	ROCE_ASYNC_EVENT_LAST_WQE_REACHED,
+	ROCE_ASYNC_EVENT_CQ_ERR,
+	ROCE_ASYNC_EVENT_LOCAL_INVALID_REQUEST_ERR,
+	ROCE_ASYNC_EVENT_LOCAL_CATASTROPHIC_ERR,
+	ROCE_ASYNC_EVENT_LOCAL_ACCESS_ERR,
+	ROCE_ASYNC_EVENT_QP_CATASTROPHIC_ERR,
+	ROCE_ASYNC_EVENT_CQ_OVERFLOW_ERR,
+	ROCE_ASYNC_EVENT_SRQ_EMPTY,
+	MAX_ROCE_ASYNC_EVENTS_TYPE
+};
+
+#endif /* __QED_HSI_ROCE__ */
diff --git a/providers/qedr/qelr_hsi_rdma.h b/providers/qedr/qelr_hsi_rdma.h
new file mode 100644
index 0000000..c18ce86
--- /dev/null
+++ b/providers/qedr/qelr_hsi_rdma.h
@@ -0,0 +1,914 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __QED_HSI_RDMA__
+#define __QED_HSI_RDMA__
+/************************************************************************/
+/* Add include to common rdma target for both eCore and protocol rdma driver */
+/************************************************************************/
+#include "rdma_common.h"
+
+/*
+ * rdma completion notification queue element
+ */
+struct rdma_cnqe
+{
+	struct regpair cq_handle;
+};
+
+
+struct rdma_cqe_responder
+{
+	struct regpair srq_wr_id;
+	struct regpair qp_handle;
+	__le32 imm_data_or_inv_r_Key /* immediate data in case imm_flg is set, or invalidated r_key in case inv_flg is set */;
+	__le32 length;
+	__le32 imm_data_hi /* High bytes of the immediate data, in case imm_flg is set (iWARP only) */;
+	__le16 rq_cons /* Valid only when status is WORK_REQUEST_FLUSHED_ERR. Indicates an aggregative flush on all posted RQ WQEs until the reported rq_cons. */;
+	uint8_t flags;
+#define RDMA_CQE_RESPONDER_TOGGLE_BIT_MASK  0x1 /* indicates a valid completion written by FW. FW toggles this bit each time it finishes producing all PBL entries */
+#define RDMA_CQE_RESPONDER_TOGGLE_BIT_SHIFT 0
+#define RDMA_CQE_RESPONDER_TYPE_MASK        0x3 /*  (use enum rdma_cqe_type) */
+#define RDMA_CQE_RESPONDER_TYPE_SHIFT       1
+#define RDMA_CQE_RESPONDER_INV_FLG_MASK     0x1 /* r_key invalidated indicator */
+#define RDMA_CQE_RESPONDER_INV_FLG_SHIFT    3
+#define RDMA_CQE_RESPONDER_IMM_FLG_MASK     0x1 /* immediate data indicator */
+#define RDMA_CQE_RESPONDER_IMM_FLG_SHIFT    4
+#define RDMA_CQE_RESPONDER_RDMA_FLG_MASK    0x1 /* 1=this CQE relates to an RDMA Write. 0=Send. */
+#define RDMA_CQE_RESPONDER_RDMA_FLG_SHIFT   5
+#define RDMA_CQE_RESPONDER_RESERVED2_MASK   0x3
+#define RDMA_CQE_RESPONDER_RESERVED2_SHIFT  6
+	uint8_t status;
+};
+
+struct rdma_cqe_requester
+{
+	__le16 sq_cons;
+	__le16 reserved0;
+	__le32 reserved1;
+	struct regpair qp_handle;
+	struct regpair reserved2;
+	__le32 reserved3;
+	__le16 reserved4;
+	uint8_t flags;
+#define RDMA_CQE_REQUESTER_TOGGLE_BIT_MASK  0x1 /* indicates a valid completion written by FW. FW toggles this bit each time it finishes producing all PBL entries */
+#define RDMA_CQE_REQUESTER_TOGGLE_BIT_SHIFT 0
+#define RDMA_CQE_REQUESTER_TYPE_MASK        0x3 /*  (use enum rdma_cqe_type) */
+#define RDMA_CQE_REQUESTER_TYPE_SHIFT       1
+#define RDMA_CQE_REQUESTER_RESERVED5_MASK   0x1F
+#define RDMA_CQE_REQUESTER_RESERVED5_SHIFT  3
+	uint8_t status;
+};
+
+struct rdma_cqe_common
+{
+	struct regpair reserved0;
+	struct regpair qp_handle;
+	__le16 reserved1[7];
+	uint8_t flags;
+#define RDMA_CQE_COMMON_TOGGLE_BIT_MASK  0x1 /* indicates a valid completion written by FW. FW toggles this bit each time it finishes producing all PBL entries */
+#define RDMA_CQE_COMMON_TOGGLE_BIT_SHIFT 0
+#define RDMA_CQE_COMMON_TYPE_MASK        0x3 /*  (use enum rdma_cqe_type) */
+#define RDMA_CQE_COMMON_TYPE_SHIFT       1
+#define RDMA_CQE_COMMON_RESERVED2_MASK   0x1F
+#define RDMA_CQE_COMMON_RESERVED2_SHIFT  3
+	uint8_t status;
+};
+
+/*
+ * rdma completion queue element
+ */
+union rdma_cqe
+{
+	struct rdma_cqe_responder resp;
+	struct rdma_cqe_requester req;
+	struct rdma_cqe_common cmn;
+};
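+
+/*
+ * Illustrative CQ polling sketch, not a definitive implementation: a
+ * consumer keeps an expected toggle phase per CQ and treats a CQE as new
+ * only when TOGGLE_BIT matches that phase; the TYPE field (see enum
+ * rdma_cqe_type below) then selects the union member. cq->expected_toggle,
+ * process_req and process_resp are hypothetical names.
+ *
+ *   uint8_t flags = cqe->cmn.flags;
+ *
+ *   if (((flags >> RDMA_CQE_COMMON_TOGGLE_BIT_SHIFT) &
+ *        RDMA_CQE_COMMON_TOGGLE_BIT_MASK) != cq->expected_toggle)
+ *           return 0; /* no new completion yet */
+ *
+ *   switch ((flags >> RDMA_CQE_COMMON_TYPE_SHIFT) &
+ *           RDMA_CQE_COMMON_TYPE_MASK) {
+ *   case RDMA_CQE_TYPE_REQUESTER:
+ *           process_req(&cqe->req);
+ *           break;
+ *   case RDMA_CQE_TYPE_RESPONDER_RQ:
+ *   case RDMA_CQE_TYPE_RESPONDER_SRQ:
+ *           process_resp(&cqe->resp);
+ *           break;
+ *   }
+ */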
+
+
+
+
+/*
+ * CQE requester status enumeration
+ */
+enum rdma_cqe_requester_status_enum
+{
+	RDMA_CQE_REQ_STS_OK,
+	RDMA_CQE_REQ_STS_BAD_RESPONSE_ERR,
+	RDMA_CQE_REQ_STS_LOCAL_LENGTH_ERR,
+	RDMA_CQE_REQ_STS_LOCAL_QP_OPERATION_ERR,
+	RDMA_CQE_REQ_STS_LOCAL_PROTECTION_ERR,
+	RDMA_CQE_REQ_STS_MEMORY_MGT_OPERATION_ERR,
+	RDMA_CQE_REQ_STS_REMOTE_INVALID_REQUEST_ERR,
+	RDMA_CQE_REQ_STS_REMOTE_ACCESS_ERR,
+	RDMA_CQE_REQ_STS_REMOTE_OPERATION_ERR,
+	RDMA_CQE_REQ_STS_RNR_NAK_RETRY_CNT_ERR,
+	RDMA_CQE_REQ_STS_TRANSPORT_RETRY_CNT_ERR,
+	RDMA_CQE_REQ_STS_WORK_REQUEST_FLUSHED_ERR,
+	MAX_RDMA_CQE_REQUESTER_STATUS_ENUM
+};
+
+
+
+/*
+ * CQE responder status enumeration
+ */
+enum rdma_cqe_responder_status_enum
+{
+	RDMA_CQE_RESP_STS_OK,
+	RDMA_CQE_RESP_STS_LOCAL_ACCESS_ERR,
+	RDMA_CQE_RESP_STS_LOCAL_LENGTH_ERR,
+	RDMA_CQE_RESP_STS_LOCAL_QP_OPERATION_ERR,
+	RDMA_CQE_RESP_STS_LOCAL_PROTECTION_ERR,
+	RDMA_CQE_RESP_STS_MEMORY_MGT_OPERATION_ERR,
+	RDMA_CQE_RESP_STS_REMOTE_INVALID_REQUEST_ERR,
+	RDMA_CQE_RESP_STS_WORK_REQUEST_FLUSHED_ERR,
+	MAX_RDMA_CQE_RESPONDER_STATUS_ENUM
+};
+
+
+/*
+ * CQE type enumeration
+ */
+enum rdma_cqe_type
+{
+	RDMA_CQE_TYPE_REQUESTER,
+	RDMA_CQE_TYPE_RESPONDER_RQ,
+	RDMA_CQE_TYPE_RESPONDER_SRQ,
+	RDMA_CQE_TYPE_INVALID,
+	MAX_RDMA_CQE_TYPE
+};
+
+
+/*
+ * DIF Block size options
+ */
+enum rdma_dif_block_size
+{
+	RDMA_DIF_BLOCK_512=0,
+	RDMA_DIF_BLOCK_4096=1,
+	MAX_RDMA_DIF_BLOCK_SIZE
+};
+
+
+/*
+ * DIF CRC initial value
+ */
+enum rdma_dif_crc_seed
+{
+	RDMA_DIF_CRC_SEED_0000=0,
+	RDMA_DIF_CRC_SEED_FFFF=1,
+	MAX_RDMA_DIF_CRC_SEED
+};
+
+
+/*
+ * RDMA DIF Error Result Structure
+ */
+struct rdma_dif_error_result
+{
+	__le32 error_intervals /* Total number of error intervals in the IO. */;
+	__le32 dif_error_1st_interval /* Number of the first interval that contained an error. Set to 0xFFFFFFFF if the error occurred in the Runt Block. */;
+	uint8_t flags;
+#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_CRC_MASK      0x1 /* CRC error occurred. */
+#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_CRC_SHIFT     0
+#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_APP_TAG_MASK  0x1 /* App Tag error occurred. */
+#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_APP_TAG_SHIFT 1
+#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_REF_TAG_MASK  0x1 /* Ref Tag error occurred. */
+#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_REF_TAG_SHIFT 2
+#define RDMA_DIF_ERROR_RESULT_RESERVED0_MASK               0xF
+#define RDMA_DIF_ERROR_RESULT_RESERVED0_SHIFT              3
+#define RDMA_DIF_ERROR_RESULT_TOGGLE_BIT_MASK              0x1 /* Used to indicate the structure is valid. Toggles each time an invalidate region is performed. */
+#define RDMA_DIF_ERROR_RESULT_TOGGLE_BIT_SHIFT             7
+	uint8_t reserved1[55] /* Pad to 64 bytes to ensure efficient word line writing. */;
+};
+
+
+/*
+ * DIF IO direction
+ */
+enum rdma_dif_io_direction_flg
+{
+	RDMA_DIF_DIR_RX=0,
+	RDMA_DIF_DIR_TX=1,
+	MAX_RDMA_DIF_IO_DIRECTION_FLG
+};
+
+
+/*
+ * RDMA DIF Runt Result Structure
+ */
+struct rdma_dif_runt_result
+{
+	__le16 guard_tag /* CRC result of received IO. */;
+	__le16 reserved[3];
+};
+
+
+/*
+ * memory window type enumeration
+ */
+enum rdma_mw_type
+{
+	RDMA_MW_TYPE_1,
+	RDMA_MW_TYPE_2A,
+	MAX_RDMA_MW_TYPE
+};
+
+
+struct rdma_rq_sge
+{
+	struct regpair addr;
+	__le32 length;
+	__le32 flags;
+#define RDMA_RQ_SGE_L_KEY_MASK      0x3FFFFFF /* key of memory relating to this RQ */
+#define RDMA_RQ_SGE_L_KEY_SHIFT     0
+#define RDMA_RQ_SGE_NUM_SGES_MASK   0x7 /* In the first SGE: the number of SGEs in this RQ WQE. In other SGEs: should be set to 0 */
+#define RDMA_RQ_SGE_NUM_SGES_SHIFT  26
+#define RDMA_RQ_SGE_RESERVED0_MASK  0x7
+#define RDMA_RQ_SGE_RESERVED0_SHIFT 29
+};
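+
+/*
+ * Illustrative sketch of the NUM_SGES rule above, not part of the
+ * interface: when posting an RQ WQE with n SGEs, only the first SGE
+ * carries the SGE count (buf[], n and the lkey values are hypothetical;
+ * regpair is assumed to be the usual lo/hi pair from rdma_common.h).
+ *
+ *   for (i = 0; i < n; i++) {
+ *           __u32 flags = buf[i].lkey & RDMA_RQ_SGE_L_KEY_MASK;
+ *
+ *           if (i == 0)
+ *                   flags |= (n & RDMA_RQ_SGE_NUM_SGES_MASK) <<
+ *                            RDMA_RQ_SGE_NUM_SGES_SHIFT;
+ *           sge[i].addr.lo = htole32(buf[i].addr & 0xffffffff);
+ *           sge[i].addr.hi = htole32(buf[i].addr >> 32);
+ *           sge[i].length  = htole32(buf[i].len);
+ *           sge[i].flags   = htole32(flags);
+ *   }
+ */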
+
+
+struct rdma_sq_atomic_wqe
+{
+	__le32 reserved1;
+	__le32 length /* Total data length (8 bytes for Atomic) */;
+	__le32 xrc_srq /* Valid only when XRC is set for the QP */;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_ATOMIC_WQE_COMP_FLG_MASK         0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_ATOMIC_WQE_COMP_FLG_SHIFT        0
+#define RDMA_SQ_ATOMIC_WQE_RD_FENCE_FLG_MASK     0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_ATOMIC_WQE_RD_FENCE_FLG_SHIFT    1
+#define RDMA_SQ_ATOMIC_WQE_INV_FENCE_FLG_MASK    0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_ATOMIC_WQE_INV_FENCE_FLG_SHIFT   2
+#define RDMA_SQ_ATOMIC_WQE_SE_FLG_MASK           0x1 /* Don't care for atomic wqe */
+#define RDMA_SQ_ATOMIC_WQE_SE_FLG_SHIFT          3
+#define RDMA_SQ_ATOMIC_WQE_INLINE_FLG_MASK       0x1 /* Should be 0 for atomic wqe */
+#define RDMA_SQ_ATOMIC_WQE_INLINE_FLG_SHIFT      4
+#define RDMA_SQ_ATOMIC_WQE_DIF_ON_HOST_FLG_MASK  0x1 /* Should be 0 for atomic wqe */
+#define RDMA_SQ_ATOMIC_WQE_DIF_ON_HOST_FLG_SHIFT 5
+#define RDMA_SQ_ATOMIC_WQE_RESERVED0_MASK        0x3
+#define RDMA_SQ_ATOMIC_WQE_RESERVED0_SHIFT       6
+	uint8_t wqe_size /* Size of WQE in 16B chunks including SGE */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+	struct regpair remote_va /* remote virtual address */;
+	__le32 r_key /* Remote key */;
+	__le32 reserved2;
+	struct regpair cmp_data /* Data to compare in case of ATOMIC_CMP_AND_SWAP */;
+	struct regpair swap_data /* Swap or add data */;
+};
+
+
+/*
+ * First element (16 bytes) of atomic wqe
+ */
+struct rdma_sq_atomic_wqe_1st
+{
+	__le32 reserved1;
+	__le32 length /* Total data length (8 bytes for Atomic) */;
+	__le32 xrc_srq /* Valid only when XRC is set for the QP */;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_ATOMIC_WQE_1ST_COMP_FLG_MASK       0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_ATOMIC_WQE_1ST_COMP_FLG_SHIFT      0
+#define RDMA_SQ_ATOMIC_WQE_1ST_RD_FENCE_FLG_MASK   0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_ATOMIC_WQE_1ST_RD_FENCE_FLG_SHIFT  1
+#define RDMA_SQ_ATOMIC_WQE_1ST_INV_FENCE_FLG_MASK  0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_ATOMIC_WQE_1ST_INV_FENCE_FLG_SHIFT 2
+#define RDMA_SQ_ATOMIC_WQE_1ST_SE_FLG_MASK         0x1 /* Don't care for atomic wqe */
+#define RDMA_SQ_ATOMIC_WQE_1ST_SE_FLG_SHIFT        3
+#define RDMA_SQ_ATOMIC_WQE_1ST_INLINE_FLG_MASK     0x1 /* Should be 0 for atomic wqe */
+#define RDMA_SQ_ATOMIC_WQE_1ST_INLINE_FLG_SHIFT    4
+#define RDMA_SQ_ATOMIC_WQE_1ST_RESERVED0_MASK      0x7
+#define RDMA_SQ_ATOMIC_WQE_1ST_RESERVED0_SHIFT     5
+	uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs. Set to number of SGEs + 1. */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+};
+
+
+/*
+ * Second element (16 bytes) of atomic wqe
+ */
+struct rdma_sq_atomic_wqe_2nd
+{
+	struct regpair remote_va /* remote virtual address */;
+	__le32 r_key /* Remote key */;
+	__le32 reserved2;
+};
+
+
+/*
+ * Third element (16 bytes) of atomic wqe
+ */
+struct rdma_sq_atomic_wqe_3rd
+{
+	struct regpair cmp_data /* Data to compare in case of ATOMIC_CMP_AND_SWAP */;
+	struct regpair swap_data /* Swap or add data */;
+};
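+
+/*
+ * Taken together, the three 16-byte elements above cover the same 48 bytes
+ * as struct rdma_sq_atomic_wqe, split along SQ element boundaries. A small
+ * illustrative sketch, not part of the interface, of filling the first
+ * element for a compare-and-swap (first is a hypothetical pointer):
+ *
+ *   first->req_type = RDMA_SQ_REQ_TYPE_ATOMIC_CMP_AND_SWAP;
+ *   first->wqe_size = 3;           /* wqe_1st + wqe_2nd + wqe_3rd */
+ *   first->length   = htole32(8);  /* atomics operate on 8 bytes */
+ */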
+
+
+struct rdma_sq_bind_wqe
+{
+	struct regpair addr;
+	__le32 l_key;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_BIND_WQE_COMP_FLG_MASK       0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_BIND_WQE_COMP_FLG_SHIFT      0
+#define RDMA_SQ_BIND_WQE_RD_FENCE_FLG_MASK   0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_BIND_WQE_RD_FENCE_FLG_SHIFT  1
+#define RDMA_SQ_BIND_WQE_INV_FENCE_FLG_MASK  0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_BIND_WQE_INV_FENCE_FLG_SHIFT 2
+#define RDMA_SQ_BIND_WQE_SE_FLG_MASK         0x1 /* Don't care for bind wqe */
+#define RDMA_SQ_BIND_WQE_SE_FLG_SHIFT        3
+#define RDMA_SQ_BIND_WQE_INLINE_FLG_MASK     0x1 /* Should be 0 for bind wqe */
+#define RDMA_SQ_BIND_WQE_INLINE_FLG_SHIFT    4
+#define RDMA_SQ_BIND_WQE_RESERVED0_MASK      0x7
+#define RDMA_SQ_BIND_WQE_RESERVED0_SHIFT     5
+	uint8_t wqe_size /* Size of WQE in 16B chunks */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+	uint8_t bind_ctrl;
+#define RDMA_SQ_BIND_WQE_ZERO_BASED_MASK     0x1 /* zero based indication */
+#define RDMA_SQ_BIND_WQE_ZERO_BASED_SHIFT    0
+#define RDMA_SQ_BIND_WQE_MW_TYPE_MASK        0x1 /*  (use enum rdma_mw_type) */
+#define RDMA_SQ_BIND_WQE_MW_TYPE_SHIFT       1
+#define RDMA_SQ_BIND_WQE_RESERVED1_MASK      0x3F
+#define RDMA_SQ_BIND_WQE_RESERVED1_SHIFT     2
+	uint8_t access_ctrl;
+#define RDMA_SQ_BIND_WQE_REMOTE_READ_MASK    0x1
+#define RDMA_SQ_BIND_WQE_REMOTE_READ_SHIFT   0
+#define RDMA_SQ_BIND_WQE_REMOTE_WRITE_MASK   0x1
+#define RDMA_SQ_BIND_WQE_REMOTE_WRITE_SHIFT  1
+#define RDMA_SQ_BIND_WQE_ENABLE_ATOMIC_MASK  0x1
+#define RDMA_SQ_BIND_WQE_ENABLE_ATOMIC_SHIFT 2
+#define RDMA_SQ_BIND_WQE_LOCAL_READ_MASK     0x1
+#define RDMA_SQ_BIND_WQE_LOCAL_READ_SHIFT    3
+#define RDMA_SQ_BIND_WQE_LOCAL_WRITE_MASK    0x1
+#define RDMA_SQ_BIND_WQE_LOCAL_WRITE_SHIFT   4
+#define RDMA_SQ_BIND_WQE_RESERVED2_MASK      0x7
+#define RDMA_SQ_BIND_WQE_RESERVED2_SHIFT     5
+	uint8_t reserved3;
+	uint8_t length_hi /* upper 8 bits of the registered MW length */;
+	__le32 length_lo /* lower 32 bits of the registered MW length */;
+	__le32 parent_l_key /* l_key of the parent MR */;
+	__le32 reserved4;
+};
+
+
+/*
+ * First element (16 bytes) of bind wqe
+ */
+struct rdma_sq_bind_wqe_1st
+{
+	struct regpair addr;
+	__le32 l_key;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_BIND_WQE_1ST_COMP_FLG_MASK       0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_BIND_WQE_1ST_COMP_FLG_SHIFT      0
+#define RDMA_SQ_BIND_WQE_1ST_RD_FENCE_FLG_MASK   0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_BIND_WQE_1ST_RD_FENCE_FLG_SHIFT  1
+#define RDMA_SQ_BIND_WQE_1ST_INV_FENCE_FLG_MASK  0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_BIND_WQE_1ST_INV_FENCE_FLG_SHIFT 2
+#define RDMA_SQ_BIND_WQE_1ST_SE_FLG_MASK         0x1 /* Don't care for bind wqe */
+#define RDMA_SQ_BIND_WQE_1ST_SE_FLG_SHIFT        3
+#define RDMA_SQ_BIND_WQE_1ST_INLINE_FLG_MASK     0x1 /* Should be 0 for bind wqe */
+#define RDMA_SQ_BIND_WQE_1ST_INLINE_FLG_SHIFT    4
+#define RDMA_SQ_BIND_WQE_1ST_RESERVED0_MASK      0x7
+#define RDMA_SQ_BIND_WQE_1ST_RESERVED0_SHIFT     5
+	uint8_t wqe_size /* Size of WQE in 16B chunks */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+};
+
+
+/*
+ * Second element (16 bytes) of bind wqe
+ */
+struct rdma_sq_bind_wqe_2nd
+{
+	uint8_t bind_ctrl;
+#define RDMA_SQ_BIND_WQE_2ND_ZERO_BASED_MASK     0x1 /* zero based indication */
+#define RDMA_SQ_BIND_WQE_2ND_ZERO_BASED_SHIFT    0
+#define RDMA_SQ_BIND_WQE_2ND_MW_TYPE_MASK        0x1 /*  (use enum rdma_mw_type) */
+#define RDMA_SQ_BIND_WQE_2ND_MW_TYPE_SHIFT       1
+#define RDMA_SQ_BIND_WQE_2ND_RESERVED1_MASK      0x3F
+#define RDMA_SQ_BIND_WQE_2ND_RESERVED1_SHIFT     2
+	uint8_t access_ctrl;
+#define RDMA_SQ_BIND_WQE_2ND_REMOTE_READ_MASK    0x1
+#define RDMA_SQ_BIND_WQE_2ND_REMOTE_READ_SHIFT   0
+#define RDMA_SQ_BIND_WQE_2ND_REMOTE_WRITE_MASK   0x1
+#define RDMA_SQ_BIND_WQE_2ND_REMOTE_WRITE_SHIFT  1
+#define RDMA_SQ_BIND_WQE_2ND_ENABLE_ATOMIC_MASK  0x1
+#define RDMA_SQ_BIND_WQE_2ND_ENABLE_ATOMIC_SHIFT 2
+#define RDMA_SQ_BIND_WQE_2ND_LOCAL_READ_MASK     0x1
+#define RDMA_SQ_BIND_WQE_2ND_LOCAL_READ_SHIFT    3
+#define RDMA_SQ_BIND_WQE_2ND_LOCAL_WRITE_MASK    0x1
+#define RDMA_SQ_BIND_WQE_2ND_LOCAL_WRITE_SHIFT   4
+#define RDMA_SQ_BIND_WQE_2ND_RESERVED2_MASK      0x7
+#define RDMA_SQ_BIND_WQE_2ND_RESERVED2_SHIFT     5
+	uint8_t reserved3;
+	uint8_t length_hi /* upper 8 bits of the registered MW length */;
+	__le32 length_lo /* lower 32 bits of the registered MW length */;
+	__le32 parent_l_key /* l_key of the parent MR */;
+	__le32 reserved4;
+};
+
+
+/*
+ * Structure with only the SQ WQE common fields. Size is of one SQ element (16B)
+ */
+struct rdma_sq_common_wqe
+{
+	__le32 reserved1[3];
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_COMMON_WQE_COMP_FLG_MASK       0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_COMMON_WQE_COMP_FLG_SHIFT      0
+#define RDMA_SQ_COMMON_WQE_RD_FENCE_FLG_MASK   0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_COMMON_WQE_RD_FENCE_FLG_SHIFT  1
+#define RDMA_SQ_COMMON_WQE_INV_FENCE_FLG_MASK  0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_COMMON_WQE_INV_FENCE_FLG_SHIFT 2
+#define RDMA_SQ_COMMON_WQE_SE_FLG_MASK         0x1 /* If set, signal the responder to generate a solicited event on this WQE (only relevant in SENDs and RDMA write with Imm) */
+#define RDMA_SQ_COMMON_WQE_SE_FLG_SHIFT        3
+#define RDMA_SQ_COMMON_WQE_INLINE_FLG_MASK     0x1 /* if set, indicates inline data is following this WQE instead of SGEs (only relevant in SENDs and RDMA writes) */
+#define RDMA_SQ_COMMON_WQE_INLINE_FLG_SHIFT    4
+#define RDMA_SQ_COMMON_WQE_RESERVED0_MASK      0x7
+#define RDMA_SQ_COMMON_WQE_RESERVED0_SHIFT     5
+	uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs or inline data. In case there are SGEs: set to the number of SGEs + 1. In case of inline data: set to the number of 16B chunks that hold the inline data, plus 1. */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+};
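+
+/*
+ * Illustrative wqe_size computation mirroring the comment above, assuming
+ * a single 16B header element as in this common layout (data_len, num_sges
+ * and inline_flg are hypothetical inputs):
+ *
+ *   if (inline_flg)
+ *           wqe->wqe_size = ((data_len + 15) / 16) + 1;
+ *   else
+ *           wqe->wqe_size = num_sges + 1;
+ */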
+
+
+struct rdma_sq_fmr_wqe
+{
+	struct regpair addr;
+	__le32 l_key;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_FMR_WQE_COMP_FLG_MASK                0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_FMR_WQE_COMP_FLG_SHIFT               0
+#define RDMA_SQ_FMR_WQE_RD_FENCE_FLG_MASK            0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_FMR_WQE_RD_FENCE_FLG_SHIFT           1
+#define RDMA_SQ_FMR_WQE_INV_FENCE_FLG_MASK           0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_FMR_WQE_INV_FENCE_FLG_SHIFT          2
+#define RDMA_SQ_FMR_WQE_SE_FLG_MASK                  0x1 /* Don't care for FMR wqe */
+#define RDMA_SQ_FMR_WQE_SE_FLG_SHIFT                 3
+#define RDMA_SQ_FMR_WQE_INLINE_FLG_MASK              0x1 /* Should be 0 for FMR wqe */
+#define RDMA_SQ_FMR_WQE_INLINE_FLG_SHIFT             4
+#define RDMA_SQ_FMR_WQE_DIF_ON_HOST_FLG_MASK         0x1 /* If set, indicates that the host memory of this WQE is DIF protected. */
+#define RDMA_SQ_FMR_WQE_DIF_ON_HOST_FLG_SHIFT        5
+#define RDMA_SQ_FMR_WQE_RESERVED0_MASK               0x3
+#define RDMA_SQ_FMR_WQE_RESERVED0_SHIFT              6
+	uint8_t wqe_size /* Size of WQE in 16B chunks */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+	uint8_t fmr_ctrl;
+#define RDMA_SQ_FMR_WQE_PAGE_SIZE_LOG_MASK           0x1F /* 0 is 4k, 1 is 8k... */
+#define RDMA_SQ_FMR_WQE_PAGE_SIZE_LOG_SHIFT          0
+#define RDMA_SQ_FMR_WQE_ZERO_BASED_MASK              0x1 /* zero based indication */
+#define RDMA_SQ_FMR_WQE_ZERO_BASED_SHIFT             5
+#define RDMA_SQ_FMR_WQE_BIND_EN_MASK                 0x1 /* indication whether bind is enabled for this MR */
+#define RDMA_SQ_FMR_WQE_BIND_EN_SHIFT                6
+#define RDMA_SQ_FMR_WQE_RESERVED1_MASK               0x1
+#define RDMA_SQ_FMR_WQE_RESERVED1_SHIFT              7
+	uint8_t access_ctrl;
+#define RDMA_SQ_FMR_WQE_REMOTE_READ_MASK             0x1
+#define RDMA_SQ_FMR_WQE_REMOTE_READ_SHIFT            0
+#define RDMA_SQ_FMR_WQE_REMOTE_WRITE_MASK            0x1
+#define RDMA_SQ_FMR_WQE_REMOTE_WRITE_SHIFT           1
+#define RDMA_SQ_FMR_WQE_ENABLE_ATOMIC_MASK           0x1
+#define RDMA_SQ_FMR_WQE_ENABLE_ATOMIC_SHIFT          2
+#define RDMA_SQ_FMR_WQE_LOCAL_READ_MASK              0x1
+#define RDMA_SQ_FMR_WQE_LOCAL_READ_SHIFT             3
+#define RDMA_SQ_FMR_WQE_LOCAL_WRITE_MASK             0x1
+#define RDMA_SQ_FMR_WQE_LOCAL_WRITE_SHIFT            4
+#define RDMA_SQ_FMR_WQE_RESERVED2_MASK               0x7
+#define RDMA_SQ_FMR_WQE_RESERVED2_SHIFT              5
+	uint8_t reserved3;
+	uint8_t length_hi /* upper 8 bits of the registered MR length */;
+	__le32 length_lo /* lower 32 bits of the registered MR length. In case of DIF the length is specified including the DIF guards. */;
+	struct regpair pbl_addr /* Address of PBL */;
+	__le32 dif_base_ref_tag /* Ref tag of the first DIF Block. */;
+	__le16 dif_app_tag /* App tag of all DIF Blocks. */;
+	__le16 dif_app_tag_mask /* Bitmask for verifying dif_app_tag. */;
+	__le16 dif_runt_crc_value /* In TX IO, in case the runt_valid_flg is set, this value is used to validate the last Block in the IO. */;
+	__le16 dif_flags;
+#define RDMA_SQ_FMR_WQE_DIF_IO_DIRECTION_FLG_MASK    0x1 /* 0=RX, 1=TX (use enum rdma_dif_io_direction_flg) */
+#define RDMA_SQ_FMR_WQE_DIF_IO_DIRECTION_FLG_SHIFT   0
+#define RDMA_SQ_FMR_WQE_DIF_BLOCK_SIZE_MASK          0x1 /* DIF block size. 0=512B 1=4096B (use enum rdma_dif_block_size) */
+#define RDMA_SQ_FMR_WQE_DIF_BLOCK_SIZE_SHIFT         1
+#define RDMA_SQ_FMR_WQE_DIF_RUNT_VALID_FLG_MASK      0x1 /* In TX IO, indicates the runt_value field is valid. In RX IO, indicates the calculated runt value is to be placed on host buffer. */
+#define RDMA_SQ_FMR_WQE_DIF_RUNT_VALID_FLG_SHIFT     2
+#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_CRC_GUARD_MASK  0x1 /* In TX IO, indicates CRC of each DIF guard tag is checked. */
+#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_CRC_GUARD_SHIFT 3
+#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_REF_TAG_MASK    0x1 /* In TX IO, indicates Ref tag of each DIF guard tag is checked. */
+#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_REF_TAG_SHIFT   4
+#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_APP_TAG_MASK    0x1 /* In TX IO, indicates App tag of each DIF guard tag is checked. */
+#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_APP_TAG_SHIFT   5
+#define RDMA_SQ_FMR_WQE_DIF_CRC_SEED_MASK            0x1 /* DIF CRC Seed to use. 0=0x0000, 1=0xFFFF (use enum rdma_dif_crc_seed) */
+#define RDMA_SQ_FMR_WQE_DIF_CRC_SEED_SHIFT           6
+#define RDMA_SQ_FMR_WQE_RESERVED4_MASK               0x1FF
+#define RDMA_SQ_FMR_WQE_RESERVED4_SHIFT              7
+	__le32 Reserved5;
+};
+
+
+/*
+ * First element (16 bytes) of fmr wqe
+ */
+struct rdma_sq_fmr_wqe_1st
+{
+	struct regpair addr;
+	__le32 l_key;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_FMR_WQE_1ST_COMP_FLG_MASK         0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_FMR_WQE_1ST_COMP_FLG_SHIFT        0
+#define RDMA_SQ_FMR_WQE_1ST_RD_FENCE_FLG_MASK     0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_FMR_WQE_1ST_RD_FENCE_FLG_SHIFT    1
+#define RDMA_SQ_FMR_WQE_1ST_INV_FENCE_FLG_MASK    0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_FMR_WQE_1ST_INV_FENCE_FLG_SHIFT   2
+#define RDMA_SQ_FMR_WQE_1ST_SE_FLG_MASK          0x1 /* Don't care for FMR wqe */
+#define RDMA_SQ_FMR_WQE_1ST_SE_FLG_SHIFT          3
+#define RDMA_SQ_FMR_WQE_1ST_INLINE_FLG_MASK       0x1 /* Should be 0 for FMR wqe */
+#define RDMA_SQ_FMR_WQE_1ST_INLINE_FLG_SHIFT      4
+#define RDMA_SQ_FMR_WQE_1ST_DIF_ON_HOST_FLG_MASK  0x1 /* If set, indicates that the host memory of this WQE is DIF protected. */
+#define RDMA_SQ_FMR_WQE_1ST_DIF_ON_HOST_FLG_SHIFT 5
+#define RDMA_SQ_FMR_WQE_1ST_RESERVED0_MASK        0x3
+#define RDMA_SQ_FMR_WQE_1ST_RESERVED0_SHIFT       6
+	uint8_t wqe_size /* Size of WQE in 16B chunks */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+};
+
+
+/*
+ * Second element (16 bytes) of fmr wqe
+ */
+struct rdma_sq_fmr_wqe_2nd
+{
+	uint8_t fmr_ctrl;
+#define RDMA_SQ_FMR_WQE_2ND_PAGE_SIZE_LOG_MASK  0x1F /* 0 is 4k, 1 is 8k... */
+#define RDMA_SQ_FMR_WQE_2ND_PAGE_SIZE_LOG_SHIFT 0
+#define RDMA_SQ_FMR_WQE_2ND_ZERO_BASED_MASK     0x1 /* zero based indication */
+#define RDMA_SQ_FMR_WQE_2ND_ZERO_BASED_SHIFT    5
+#define RDMA_SQ_FMR_WQE_2ND_BIND_EN_MASK        0x1 /* indication whether bind is enabled for this MR */
+#define RDMA_SQ_FMR_WQE_2ND_BIND_EN_SHIFT       6
+#define RDMA_SQ_FMR_WQE_2ND_RESERVED1_MASK      0x1
+#define RDMA_SQ_FMR_WQE_2ND_RESERVED1_SHIFT     7
+	uint8_t access_ctrl;
+#define RDMA_SQ_FMR_WQE_2ND_REMOTE_READ_MASK    0x1
+#define RDMA_SQ_FMR_WQE_2ND_REMOTE_READ_SHIFT   0
+#define RDMA_SQ_FMR_WQE_2ND_REMOTE_WRITE_MASK   0x1
+#define RDMA_SQ_FMR_WQE_2ND_REMOTE_WRITE_SHIFT  1
+#define RDMA_SQ_FMR_WQE_2ND_ENABLE_ATOMIC_MASK  0x1
+#define RDMA_SQ_FMR_WQE_2ND_ENABLE_ATOMIC_SHIFT 2
+#define RDMA_SQ_FMR_WQE_2ND_LOCAL_READ_MASK     0x1
+#define RDMA_SQ_FMR_WQE_2ND_LOCAL_READ_SHIFT    3
+#define RDMA_SQ_FMR_WQE_2ND_LOCAL_WRITE_MASK    0x1
+#define RDMA_SQ_FMR_WQE_2ND_LOCAL_WRITE_SHIFT   4
+#define RDMA_SQ_FMR_WQE_2ND_RESERVED2_MASK      0x7
+#define RDMA_SQ_FMR_WQE_2ND_RESERVED2_SHIFT     5
+	uint8_t reserved3;
+	uint8_t length_hi /* upper 8 bits of the registered MR length */;
+	__le32 length_lo /* lower 32 bits of the registered MR length. In case of zero based MR, will hold FBO */;
+	struct regpair pbl_addr /* Address of PBL */;
+};
+
+
+/*
+ * Third element (16 bytes) of fmr wqe
+ */
+struct rdma_sq_fmr_wqe_3rd
+{
+	__le32 dif_base_ref_tag /* Ref tag of the first DIF Block. */;
+	__le16 dif_app_tag /* App tag of all DIF Blocks. */;
+	__le16 dif_app_tag_mask /* Bitmask for verifying dif_app_tag. */;
+	__le16 dif_runt_crc_value /* In TX IO, in case the runt_valid_flg is set, this value is used to validate the last Block in the IO. */;
+	__le16 dif_flags;
+#define RDMA_SQ_FMR_WQE_3RD_DIF_IO_DIRECTION_FLG_MASK    0x1 /* 0=RX, 1=TX (use enum rdma_dif_io_direction_flg) */
+#define RDMA_SQ_FMR_WQE_3RD_DIF_IO_DIRECTION_FLG_SHIFT   0
+#define RDMA_SQ_FMR_WQE_3RD_DIF_BLOCK_SIZE_MASK          0x1 /* DIF block size. 0=512B 1=4096B (use enum rdma_dif_block_size) */
+#define RDMA_SQ_FMR_WQE_3RD_DIF_BLOCK_SIZE_SHIFT         1
+#define RDMA_SQ_FMR_WQE_3RD_DIF_RUNT_VALID_FLG_MASK      0x1 /* In TX IO, indicates the runt_value field is valid. In RX IO, indicates the calculated runt value is to be placed on host buffer. */
+#define RDMA_SQ_FMR_WQE_3RD_DIF_RUNT_VALID_FLG_SHIFT     2
+#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_CRC_GUARD_MASK  0x1 /* In TX IO, indicates CRC of each DIF guard tag is checked. */
+#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_CRC_GUARD_SHIFT 3
+#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_REF_TAG_MASK    0x1 /* In TX IO, indicates Ref tag of each DIF guard tag is checked. */
+#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_REF_TAG_SHIFT   4
+#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_APP_TAG_MASK    0x1 /* In TX IO, indicates App tag of each DIF guard tag is checked. */
+#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_APP_TAG_SHIFT   5
+#define RDMA_SQ_FMR_WQE_3RD_DIF_CRC_SEED_MASK            0x1 /* DIF CRC Seed to use. 0=0x0000, 1=0xFFFF (use enum rdma_dif_crc_seed) */
+#define RDMA_SQ_FMR_WQE_3RD_DIF_CRC_SEED_SHIFT           6
+#define RDMA_SQ_FMR_WQE_3RD_RESERVED4_MASK               0x1FF
+#define RDMA_SQ_FMR_WQE_3RD_RESERVED4_SHIFT              7
+	__le32 Reserved5;
+};
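+
+/*
+ * Illustrative note, not part of the interface: PAGE_SIZE_LOG in the
+ * second element encodes the MR page size relative to 4KB (0 = 4KB,
+ * 1 = 8KB, ...), so for a page size of 2^n bytes one could program
+ * (second is a hypothetical pointer to the second element):
+ *
+ *   second->fmr_ctrl |= ((n - 12) &
+ *                        RDMA_SQ_FMR_WQE_2ND_PAGE_SIZE_LOG_MASK) <<
+ *                       RDMA_SQ_FMR_WQE_2ND_PAGE_SIZE_LOG_SHIFT;
+ */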
+
+
+struct rdma_sq_local_inv_wqe
+{
+	struct regpair reserved;
+	__le32 inv_l_key /* The invalidate local key */;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_LOCAL_INV_WQE_COMP_FLG_MASK         0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_LOCAL_INV_WQE_COMP_FLG_SHIFT        0
+#define RDMA_SQ_LOCAL_INV_WQE_RD_FENCE_FLG_MASK     0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_LOCAL_INV_WQE_RD_FENCE_FLG_SHIFT    1
+#define RDMA_SQ_LOCAL_INV_WQE_INV_FENCE_FLG_MASK    0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_LOCAL_INV_WQE_INV_FENCE_FLG_SHIFT   2
+#define RDMA_SQ_LOCAL_INV_WQE_SE_FLG_MASK           0x1 /* Don't care for local invalidate wqe */
+#define RDMA_SQ_LOCAL_INV_WQE_SE_FLG_SHIFT          3
+#define RDMA_SQ_LOCAL_INV_WQE_INLINE_FLG_MASK       0x1 /* Should be 0 for local invalidate wqe */
+#define RDMA_SQ_LOCAL_INV_WQE_INLINE_FLG_SHIFT      4
+#define RDMA_SQ_LOCAL_INV_WQE_DIF_ON_HOST_FLG_MASK  0x1 /* If set, indicates that the host memory of this WQE is DIF protected. */
+#define RDMA_SQ_LOCAL_INV_WQE_DIF_ON_HOST_FLG_SHIFT 5
+#define RDMA_SQ_LOCAL_INV_WQE_RESERVED0_MASK        0x3
+#define RDMA_SQ_LOCAL_INV_WQE_RESERVED0_SHIFT       6
+	uint8_t wqe_size /* Size of WQE in 16B chunks */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+};
+
+
+struct rdma_sq_rdma_wqe
+{
+	__le32 imm_data /* The immediate data in case of RDMA_WITH_IMM */;
+	__le32 length /* Total data length. If DIF on host is enabled, length does NOT include DIF guards. */;
+	__le32 xrc_srq /* Valid only when XRC is set for the QP */;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_RDMA_WQE_COMP_FLG_MASK                  0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_RDMA_WQE_COMP_FLG_SHIFT                 0
+#define RDMA_SQ_RDMA_WQE_RD_FENCE_FLG_MASK              0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_RDMA_WQE_RD_FENCE_FLG_SHIFT             1
+#define RDMA_SQ_RDMA_WQE_INV_FENCE_FLG_MASK             0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_RDMA_WQE_INV_FENCE_FLG_SHIFT            2
+#define RDMA_SQ_RDMA_WQE_SE_FLG_MASK                    0x1 /* If set, signal the responder to generate a solicited event on this WQE */
+#define RDMA_SQ_RDMA_WQE_SE_FLG_SHIFT                   3
+#define RDMA_SQ_RDMA_WQE_INLINE_FLG_MASK                0x1 /* if set, indicates inline data is following this WQE instead of SGEs. Applicable for RDMA_WR or RDMA_WR_WITH_IMM. Should be 0 for RDMA_RD */
+#define RDMA_SQ_RDMA_WQE_INLINE_FLG_SHIFT               4
+#define RDMA_SQ_RDMA_WQE_DIF_ON_HOST_FLG_MASK           0x1 /* If set, indicates that the host memory of this WQE is DIF protected. */
+#define RDMA_SQ_RDMA_WQE_DIF_ON_HOST_FLG_SHIFT          5
+#define RDMA_SQ_RDMA_WQE_RESERVED0_MASK                 0x3
+#define RDMA_SQ_RDMA_WQE_RESERVED0_SHIFT                6
+	uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs or inline data. In case there are SGEs: set to the number of SGEs + 1. In case of inline data: set to the number of 16B chunks that hold the inline data, plus 1. */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+	struct regpair remote_va /* Remote virtual address */;
+	__le32 r_key /* Remote key */;
+	uint8_t dif_flags;
+#define RDMA_SQ_RDMA_WQE_DIF_BLOCK_SIZE_MASK            0x1 /* if dif_on_host_flg set: DIF block size. 0=512B 1=4096B (use enum rdma_dif_block_size) */
+#define RDMA_SQ_RDMA_WQE_DIF_BLOCK_SIZE_SHIFT           0
+#define RDMA_SQ_RDMA_WQE_DIF_FIRST_RDMA_IN_IO_FLG_MASK  0x1 /* if dif_on_host_flg set: WQE executes first RDMA on related IO. */
+#define RDMA_SQ_RDMA_WQE_DIF_FIRST_RDMA_IN_IO_FLG_SHIFT 1
+#define RDMA_SQ_RDMA_WQE_DIF_LAST_RDMA_IN_IO_FLG_MASK   0x1 /* if dif_on_host_flg set: WQE executes last RDMA on related IO. */
+#define RDMA_SQ_RDMA_WQE_DIF_LAST_RDMA_IN_IO_FLG_SHIFT  2
+#define RDMA_SQ_RDMA_WQE_RESERVED1_MASK                 0x1F
+#define RDMA_SQ_RDMA_WQE_RESERVED1_SHIFT                3
+	uint8_t reserved2[3];
+};
+
+
+/*
+ * First element (16 bytes) of rdma wqe
+ */
+struct rdma_sq_rdma_wqe_1st
+{
+	__le32 imm_data /* The immediate data in case of RDMA_WITH_IMM */;
+	__le32 length /* Total data length */;
+	__le32 xrc_srq /* Valid only when XRC is set for the QP */;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_RDMA_WQE_1ST_COMP_FLG_MASK         0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_RDMA_WQE_1ST_COMP_FLG_SHIFT        0
+#define RDMA_SQ_RDMA_WQE_1ST_RD_FENCE_FLG_MASK     0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_RDMA_WQE_1ST_RD_FENCE_FLG_SHIFT    1
+#define RDMA_SQ_RDMA_WQE_1ST_INV_FENCE_FLG_MASK    0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_RDMA_WQE_1ST_INV_FENCE_FLG_SHIFT   2
+#define RDMA_SQ_RDMA_WQE_1ST_SE_FLG_MASK           0x1 /* If set, signal the responder to generate a solicited event on this WQE */
+#define RDMA_SQ_RDMA_WQE_1ST_SE_FLG_SHIFT          3
+#define RDMA_SQ_RDMA_WQE_1ST_INLINE_FLG_MASK       0x1 /* if set, indicates inline data is following this WQE instead of SGEs. Applicable for RDMA_WR or RDMA_WR_WITH_IMM. Should be 0 for RDMA_RD */
+#define RDMA_SQ_RDMA_WQE_1ST_INLINE_FLG_SHIFT      4
+#define RDMA_SQ_RDMA_WQE_1ST_DIF_ON_HOST_FLG_MASK  0x1 /* If set, indicates that the host memory of this WQE is DIF protected. */
+#define RDMA_SQ_RDMA_WQE_1ST_DIF_ON_HOST_FLG_SHIFT 5
+#define RDMA_SQ_RDMA_WQE_1ST_RESERVED0_MASK        0x3
+#define RDMA_SQ_RDMA_WQE_1ST_RESERVED0_SHIFT       6
+	uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs or inline data. In case there are SGEs: set to the number of SGEs + 1. In case of inline data: set to the number of 16B chunks that hold the inline data, plus 1. */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+};
+
+
+/*
+ * Second element (16 bytes) of rdma wqe
+ */
+struct rdma_sq_rdma_wqe_2nd
+{
+	struct regpair remote_va /* Remote virtual address */;
+	__le32 r_key /* Remote key */;
+	uint8_t dif_flags;
+#define RDMA_SQ_RDMA_WQE_2ND_DIF_BLOCK_SIZE_MASK         0x1 /* if dif_on_host_flg set: DIF block size. 0=512B 1=4096B (use enum rdma_dif_block_size) */
+#define RDMA_SQ_RDMA_WQE_2ND_DIF_BLOCK_SIZE_SHIFT        0
+#define RDMA_SQ_RDMA_WQE_2ND_DIF_FIRST_SEGMENT_FLG_MASK  0x1 /* if dif_on_host_flg set: WQE executes first DIF on related MR. */
+#define RDMA_SQ_RDMA_WQE_2ND_DIF_FIRST_SEGMENT_FLG_SHIFT 1
+#define RDMA_SQ_RDMA_WQE_2ND_DIF_LAST_SEGMENT_FLG_MASK   0x1 /* if dif_on_host_flg set: WQE executes last DIF on related MR. */
+#define RDMA_SQ_RDMA_WQE_2ND_DIF_LAST_SEGMENT_FLG_SHIFT  2
+#define RDMA_SQ_RDMA_WQE_2ND_RESERVED1_MASK              0x1F
+#define RDMA_SQ_RDMA_WQE_2ND_RESERVED1_SHIFT             3
+	uint8_t reserved2[3];
+};
+
+
+/*
+ * SQ WQE req type enumeration
+ */
+enum rdma_sq_req_type
+{
+	RDMA_SQ_REQ_TYPE_SEND,
+	RDMA_SQ_REQ_TYPE_SEND_WITH_IMM,
+	RDMA_SQ_REQ_TYPE_SEND_WITH_INVALIDATE,
+	RDMA_SQ_REQ_TYPE_RDMA_WR,
+	RDMA_SQ_REQ_TYPE_RDMA_WR_WITH_IMM,
+	RDMA_SQ_REQ_TYPE_RDMA_RD,
+	RDMA_SQ_REQ_TYPE_ATOMIC_CMP_AND_SWAP,
+	RDMA_SQ_REQ_TYPE_ATOMIC_ADD,
+	RDMA_SQ_REQ_TYPE_LOCAL_INVALIDATE,
+	RDMA_SQ_REQ_TYPE_FAST_MR,
+	RDMA_SQ_REQ_TYPE_BIND,
+	RDMA_SQ_REQ_TYPE_INVALID,
+	MAX_RDMA_SQ_REQ_TYPE
+};
+
+
+struct rdma_sq_send_wqe
+{
+	__le32 inv_key_or_imm_data /* the r_key to invalidate in case of SEND_WITH_INVALIDATE, or the immediate data in case of SEND_WITH_IMM */;
+	__le32 length /* Total data length */;
+	__le32 xrc_srq /* Valid only when XRC is set for the QP */;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_SEND_WQE_COMP_FLG_MASK         0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_SEND_WQE_COMP_FLG_SHIFT        0
+#define RDMA_SQ_SEND_WQE_RD_FENCE_FLG_MASK     0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */
+#define RDMA_SQ_SEND_WQE_RD_FENCE_FLG_SHIFT    1
+#define RDMA_SQ_SEND_WQE_INV_FENCE_FLG_MASK    0x1 /* If set, all pending operations will be completed before start processing this WQE */
+#define RDMA_SQ_SEND_WQE_INV_FENCE_FLG_SHIFT   2
+#define RDMA_SQ_SEND_WQE_SE_FLG_MASK           0x1 /* If set, signal the responder to generate a solicited event on this WQE */
+#define RDMA_SQ_SEND_WQE_SE_FLG_SHIFT          3
+#define RDMA_SQ_SEND_WQE_INLINE_FLG_MASK       0x1 /* if set, indicates that inline data follows this WQE instead of SGEs */
+#define RDMA_SQ_SEND_WQE_INLINE_FLG_SHIFT      4
+#define RDMA_SQ_SEND_WQE_DIF_ON_HOST_FLG_MASK  0x1 /* Should be 0 for send wqe */
+#define RDMA_SQ_SEND_WQE_DIF_ON_HOST_FLG_SHIFT 5
+#define RDMA_SQ_SEND_WQE_RESERVED0_MASK        0x3
+#define RDMA_SQ_SEND_WQE_RESERVED0_SHIFT       6
+	uint8_t wqe_size /* Size of WQE in 16B chunks, including all SGEs or inline data. With SGEs: set to the number of SGEs + 1. With inline data: set to the number of whole 16B chunks that contain the inline data, plus 1. */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+	__le32 reserved1[4];
+};
+
+
+struct rdma_sq_send_wqe_1st
+{
+	__le32 inv_key_or_imm_data /* the r_key to invalidate in case of SEND_WITH_INVALIDATE, or the immediate data in case of SEND_WITH_IMM */;
+	__le32 length /* Total data length */;
+	__le32 xrc_srq /* Valid only when XRC is set for the QP */;
+	uint8_t req_type /* Type of WQE */;
+	uint8_t flags;
+#define RDMA_SQ_SEND_WQE_1ST_COMP_FLG_MASK       0x1 /* If set, completion will be generated when the WQE is completed */
+#define RDMA_SQ_SEND_WQE_1ST_COMP_FLG_SHIFT      0
+#define RDMA_SQ_SEND_WQE_1ST_RD_FENCE_FLG_MASK   0x1 /* If set, all pending RDMA read or Atomic operations will be completed before this WQE is processed */
+#define RDMA_SQ_SEND_WQE_1ST_RD_FENCE_FLG_SHIFT  1
+#define RDMA_SQ_SEND_WQE_1ST_INV_FENCE_FLG_MASK  0x1 /* If set, all pending operations will be completed before this WQE is processed */
+#define RDMA_SQ_SEND_WQE_1ST_INV_FENCE_FLG_SHIFT 2
+#define RDMA_SQ_SEND_WQE_1ST_SE_FLG_MASK         0x1 /* If set, signal the responder to generate a solicited event on this WQE */
+#define RDMA_SQ_SEND_WQE_1ST_SE_FLG_SHIFT        3
+#define RDMA_SQ_SEND_WQE_1ST_INLINE_FLG_MASK     0x1 /* if set, indicates that inline data follows this WQE instead of SGEs */
+#define RDMA_SQ_SEND_WQE_1ST_INLINE_FLG_SHIFT    4
+#define RDMA_SQ_SEND_WQE_1ST_RESERVED0_MASK      0x7
+#define RDMA_SQ_SEND_WQE_1ST_RESERVED0_SHIFT     5
+	uint8_t wqe_size /* Size of WQE in 16B chunks, including all SGEs or inline data. With SGEs: set to the number of SGEs + 1. With inline data: set to the number of whole 16B chunks that contain the inline data, plus 1. */;
+	uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */;
+};
+
+
+struct rdma_sq_send_wqe_2st
+{
+	__le32 reserved1[4];
+};
+
+
+struct rdma_sq_sge
+{
+	__le32 length /* Total length of the send. If DIF on host is enabled, SGE length includes the DIF guards. */;
+	struct regpair addr;
+	__le32 l_key;
+};
+
+
+struct rdma_srq_wqe_header
+{
+	struct regpair wr_id;
+	uint8_t num_sges /* number of SGEs in WQE */;
+	uint8_t reserved2[7];
+};
+
+struct rdma_srq_sge
+{
+	struct regpair addr;
+	__le32 length;
+	__le32 l_key;
+};
+
+/*
+ * rdma srq element: WQE header or SGE
+ */
+union rdma_srq_elm
+{
+	struct rdma_srq_wqe_header header;
+	struct rdma_srq_sge sge;
+};
+
+
+
+
+/*
+ * Rdma doorbell data for flags update
+ */
+struct rdma_pwm_flags_data
+{
+	__le16 icid /* internal CID */;
+	uint8_t agg_flags /* aggregative flags */;
+	uint8_t reserved;
+};
+
+
+/*
+ * Rdma doorbell data for SQ and RQ
+ */
+struct rdma_pwm_val16_data
+{
+	__le16 icid /* internal CID */;
+	__le16 value /* aggregated value to update */;
+};
+
+
+union rdma_pwm_val16_data_union
+{
+	struct rdma_pwm_val16_data as_struct /* Parameters field */;
+	__le32 as_dword;
+};
+
+
+/*
+ * Rdma doorbell data for CQ
+ */
+struct rdma_pwm_val32_data
+{
+	__le16 icid /* internal CID */;
+	uint8_t agg_flags /* bit for every DQ counter flags in CM context that DQ can increment */;
+	uint8_t params;
+#define RDMA_PWM_VAL32_DATA_AGG_CMD_MASK    0x3 /* aggregative command to CM (use enum db_agg_cmd_sel) */
+#define RDMA_PWM_VAL32_DATA_AGG_CMD_SHIFT   0
+#define RDMA_PWM_VAL32_DATA_BYPASS_EN_MASK  0x1 /* enable QM bypass */
+#define RDMA_PWM_VAL32_DATA_BYPASS_EN_SHIFT 2
+#define RDMA_PWM_VAL32_DATA_RESERVED_MASK   0x1F
+#define RDMA_PWM_VAL32_DATA_RESERVED_SHIFT  3
+	__le32 value /* aggregated value to update */;
+};
+
+
+union rdma_pwm_val32_data_union
+{
+	struct rdma_pwm_val32_data as_struct /* Parameters field */;
+	struct regpair as_repair;
+};
+
+#endif /* __QED_HSI_RDMA__ */
diff --git a/providers/qedr/rdma_common.h b/providers/qedr/rdma_common.h
new file mode 100644
index 0000000..0c25793
--- /dev/null
+++ b/providers/qedr/rdma_common.h
@@ -0,0 +1,74 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __RDMA_COMMON__
+#define __RDMA_COMMON__
+/************************/
+/* RDMA FW CONSTANTS */
+/************************/
+
+#define RDMA_RESERVED_LKEY			(0)			//Reserved lkey
+#define RDMA_RING_PAGE_SIZE			(0x1000)	//4KB pages
+
+#define	RDMA_MAX_SGE_PER_SQ_WQE		(4)		//max number of SGEs in a single request
+#define	RDMA_MAX_SGE_PER_RQ_WQE		(4)		//max number of SGEs in a single request
+
+#define RDMA_MAX_DATA_SIZE_IN_WQE	(0x7FFFFFFF)	//max size of data in single request
+
+#define RDMA_REQ_RD_ATOMIC_ELM_SIZE		(0x50)
+#define RDMA_RESP_RD_ATOMIC_ELM_SIZE	(0x20)
+
+#define RDMA_MAX_CQS				(64*1024)
+#define RDMA_MAX_TIDS				(128*1024-1)
+#define RDMA_MAX_PDS				(64*1024)
+
+#define RDMA_NUM_STATISTIC_COUNTERS			MAX_NUM_VPORTS
+#define RDMA_NUM_STATISTIC_COUNTERS_K2			MAX_NUM_VPORTS_K2
+#define RDMA_NUM_STATISTIC_COUNTERS_BB			MAX_NUM_VPORTS_BB
+
+#define RDMA_TASK_TYPE (PROTOCOLID_ROCE)
+
+
+struct rdma_srq_id
+{
+	__le16 srq_idx /* SRQ index */;
+	__le16 opaque_fid;
+};
+
+
+struct rdma_srq_producers
+{
+	__le32 sge_prod /* Current produced sge in SRQ */;
+	__le32 wqe_prod /* Current produced WQE to SRQ */;
+};
+
+#endif /* __RDMA_COMMON__ */
diff --git a/providers/qedr/roce_common.h b/providers/qedr/roce_common.h
new file mode 100644
index 0000000..b01c2ad
--- /dev/null
+++ b/providers/qedr/roce_common.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __ROCE_COMMON__
+#define __ROCE_COMMON__
+/************************************************************************/
+/* Add include to common rdma target for both eCore and protocol rdma driver */
+/************************************************************************/
+#include "rdma_common.h"
+/************************/
+/* ROCE FW CONSTANTS */
+/************************/
+
+#define ROCE_REQ_MAX_INLINE_DATA_SIZE (256)	//max size of inline data in single request
+#define ROCE_REQ_MAX_SINGLE_SQ_WQE_SIZE	(288)	//Maximum size of single SQ WQE (rdma wqe and inline data)
+
+#define ROCE_MAX_QPS				(32*1024)
+#define ROCE_DCQCN_NP_MAX_QPS  (64)	/* notification point max QPs*/
+#define ROCE_DCQCN_RP_MAX_QPS  (64)		/* reaction point max QPs*/
+
+#endif /* __ROCE_COMMON__ */
-- 
2.7.4
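
Not part of the patch: a minimal, self-contained sketch of how the
MASK/SHIFT pairs defined throughout this HSI header are meant to be
used. The SET_FIELD helper below mirrors the macro that patch 4/6 adds
in qelr.h; the two defines are copied from rdma_sq_send_wqe_1st above.

#include <stdint.h>
#include <stdio.h>

/* Same pattern as the SET_FIELD macro introduced in qelr.h (patch 4/6) */
#define SET_FIELD(value, name, flag)					\
	do {								\
		(value) &= ~((name ## _MASK) << (name ## _SHIFT));	\
		(value) |= ((flag) << (name ## _SHIFT));		\
	} while (0)

/* Copied from the send WQE flags above */
#define RDMA_SQ_SEND_WQE_1ST_COMP_FLG_MASK    0x1
#define RDMA_SQ_SEND_WQE_1ST_COMP_FLG_SHIFT   0
#define RDMA_SQ_SEND_WQE_1ST_SE_FLG_MASK      0x1
#define RDMA_SQ_SEND_WQE_1ST_SE_FLG_SHIFT     3

int main(void)
{
	uint8_t flags = 0;

	/* Request a completion and a solicited event for this WQE */
	SET_FIELD(flags, RDMA_SQ_SEND_WQE_1ST_COMP_FLG, 1);
	SET_FIELD(flags, RDMA_SQ_SEND_WQE_1ST_SE_FLG, 1);

	printf("flags = 0x%02x\n", flags); /* prints 0x09 */
	return 0;
}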


* [PATCH rdma-core 4/6] libqedr: main
       [not found] ` <1476956952-17388-1-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
                     ` (2 preceding siblings ...)
  2016-10-20  9:49   ` [PATCH rdma-core 3/6] libqedr: HSI Ram Amrani
@ 2016-10-20  9:49   ` Ram Amrani
       [not found]     ` <1476956952-17388-5-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
  2016-10-20  9:49   ` [PATCH rdma-core 5/6] libqedr: abi Ram Amrani
  2016-10-20  9:49   ` [PATCH rdma-core 6/6] libqedr: addition to consolidated repo Ram Amrani
  5 siblings, 1 reply; 9+ messages in thread
From: Ram Amrani @ 2016-10-20  9:49 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA,
	Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA, Ram Amrani, Ram Amrani

From: Ram Amrani <Ram.Amrani-74tsMCuadCbQT0dZR+AlfA@public.gmane.org>

Introducing main, responsible for initializing the driver
and allocating the user context.

Signed-off-by: Ram Amrani <Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
---
 providers/qedr/qelr.h      | 320 +++++++++++++++++++++++++++++++++++++++++++++
 providers/qedr/qelr_main.c | 286 ++++++++++++++++++++++++++++++++++++++++
 providers/qedr/qelr_main.h |  83 ++++++++++++
 3 files changed, 689 insertions(+)
 create mode 100644 providers/qedr/qelr.h
 create mode 100644 providers/qedr/qelr_main.c
 create mode 100644 providers/qedr/qelr_main.h

diff --git a/providers/qedr/qelr.h b/providers/qedr/qelr.h
new file mode 100644
index 0000000..e321195
--- /dev/null
+++ b/providers/qedr/qelr.h
@@ -0,0 +1,320 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __QELR_H__
+#define __QELR_H__
+
+#include <inttypes.h>
+#include <stddef.h>
+#include <endian.h>
+#include <ccan/minmax.h>
+
+#include <infiniband/driver.h>
+#include <infiniband/arch.h>
+
+#define writel(b, p) (*(uint32_t *)(p) = (b))
+#define writeq(b, p) (*(uint64_t *)(p) = (b))
+
+#include "qelr_hsi.h"
+#include "qelr_chain.h"
+
+#define qelr_err(format, arg...) printf(format, ##arg)
+
+extern uint32_t qelr_dp_level;
+extern uint32_t qelr_dp_module;
+
+enum DP_MODULE {
+	QELR_MSG_CQ		= 0x10000,
+	QELR_MSG_RQ		= 0x20000,
+	QELR_MSG_SQ		= 0x40000,
+	QELR_MSG_QP		= (QELR_MSG_SQ | QELR_MSG_RQ),
+	QELR_MSG_MR		= 0x80000,
+	QELR_MSG_INIT		= 0x100000,
+	/* to be added...up to 0x8000000 */
+};
+
+enum DP_LEVEL {
+	QELR_LEVEL_VERBOSE	= 0x0,
+	QELR_LEVEL_INFO		= 0x1,
+	QELR_LEVEL_NOTICE	= 0x2,
+	QELR_LEVEL_ERR		= 0x3,
+};
+
+#define DP_ERR(fd, fmt, ...)					\
+do {								\
+	fprintf(fd, "[%s:%d]" fmt,				\
+		__func__, __LINE__,				\
+		##__VA_ARGS__);					\
+	fflush(fd); \
+} while (0)
+
+#define DP_NOTICE(fd, fmt, ...)					\
+do {								\
+	if (qelr_dp_level <= QELR_LEVEL_NOTICE)	{\
+		fprintf(fd, "[%s:%d]" fmt,			\
+		      __func__, __LINE__,			\
+		      ##__VA_ARGS__);				\
+		      fflush(fd); }				\
+} while (0)
+
+#define DP_INFO(fd, fmt, ...)					\
+do {								\
+	if (qelr_dp_level <= QELR_LEVEL_INFO)	{		\
+		fprintf(fd, "[%s:%d]" fmt,			\
+		      __func__, __LINE__,			\
+		      ##__VA_ARGS__); fflush(fd);		\
+	}							\
+} while (0)
+
+#define DP_VERBOSE(fd, module, fmt, ...)			\
+do {								\
+	if ((qelr_dp_level <= QELR_LEVEL_VERBOSE) &&		\
+		     (qelr_dp_module & (module))) {		\
+		fprintf(fd, "[%s:%d]" fmt,			\
+		      __func__, __LINE__,			\
+		      ##__VA_ARGS__);	fflush(fd); }		\
+} while (0)
+
+#define ROUND_UP_X(_val, _x) \
+	(((unsigned long)(_val) + ((_x)-1)) & (long)~((_x)-1))
+
+struct qelr_buf {
+	void		*addr;
+	size_t		len;		/* a 64-bit value is used in
+					 * preparation for a double-layer PBL.
+					 */
+};
+
+struct qelr_device {
+	struct ibv_device ibv_dev;
+};
+
+struct qelr_devctx {
+	struct ibv_context	ibv_ctx;
+	FILE			*dbg_fp;
+	void			*db_addr;
+	uint64_t		db_pa;
+	uint32_t		db_size;
+	uint8_t			disable_edpm;
+	uint32_t		kernel_page_size;
+
+	uint32_t		max_send_wr;
+	uint32_t		max_recv_wr;
+	uint32_t		sges_per_send_wr;
+	uint32_t		sges_per_recv_wr;
+	int			max_cqes;
+};
+
+struct qelr_pd {
+	struct ibv_pd		ibv_pd;
+	uint32_t		pd_id;
+};
+
+struct qelr_mr {
+	struct ibv_mr		ibv_mr;
+};
+
+union db_prod64 {
+	struct rdma_pwm_val32_data data;
+	uint64_t raw;
+};
+
+struct qelr_cq {
+	struct ibv_cq		ibv_cq;	/* must be first */
+
+	struct qelr_chain	chain;
+
+	void			*db_addr;
+	union db_prod64		db;
+
+	uint8_t			chain_toggle;
+	union rdma_cqe		*latest_cqe;
+	union rdma_cqe		*toggle_cqe;
+
+	uint8_t			arm_flags;
+};
+
+enum qelr_qp_state {
+	QELR_QPS_RST,
+	QELR_QPS_INIT,
+	QELR_QPS_RTR,
+	QELR_QPS_RTS,
+	QELR_QPS_SQD,
+	QELR_QPS_ERR,
+	QELR_QPS_SQE
+};
+
+union db_prod32 {
+	struct rdma_pwm_val16_data	data;
+	uint32_t			raw;
+};
+
+struct qelr_qp_hwq_info {
+	/* WQE */
+	struct qelr_chain			chain;
+	uint8_t					max_sges;
+
+	/* WQ */
+	uint16_t				prod;
+	uint16_t				wqe_cons;
+	uint16_t				cons;
+	uint16_t				max_wr;
+
+	/* DB */
+	void					*db;      /* Doorbell address */
+	void					*edpm_db;
+	union db_prod32				db_data;  /* Doorbell data */
+
+	uint16_t				icid;
+};
+
+struct qelr_rdma_ext {
+	uint64_t remote_va;
+	uint32_t remote_key;
+	uint32_t dma_length;
+};
+
+/* rdma extension, invalidate / immediate data + padding, inline data... */
+#define QELR_MAX_DPM_PAYLOAD (sizeof(struct qelr_rdma_ext) + sizeof(uint64_t) +\
+			       ROCE_REQ_MAX_INLINE_DATA_SIZE)
+struct qelr_edpm {
+	union {
+		struct db_roce_dpm_data	data;
+		uint64_t raw;
+	} msg;
+
+	uint8_t			dpm_payload[QELR_MAX_DPM_PAYLOAD];
+	uint32_t		dpm_payload_size;
+	uint32_t		dpm_payload_offset;
+	uint8_t			is_edpm;
+	struct qelr_rdma_ext    *rdma_ext;
+};
+
+struct qelr_qp {
+	struct ibv_qp				ibv_qp;
+	pthread_spinlock_t			q_lock;
+	enum qelr_qp_state			state;   /*  QP state */
+
+	struct qelr_qp_hwq_info			sq;
+	struct qelr_qp_hwq_info			rq;
+	struct {
+		uint64_t wr_id;
+		enum ibv_wc_opcode opcode;
+		uint32_t bytes_len;
+		uint8_t wqe_size;
+		uint8_t signaled;
+	} *wqe_wr_id;
+
+	struct {
+		uint64_t wr_id;
+		uint8_t wqe_size;
+	} *rqe_wr_id;
+
+	struct qelr_edpm			edpm;
+	uint8_t					prev_wqe_size;
+	uint32_t				max_inline_data;
+	uint32_t				qp_id;
+	int					sq_sig_all;
+	int					atomic_supported;
+
+};
+
+static inline struct qelr_devctx *get_qelr_ctx(struct ibv_context *ibctx)
+{
+	return container_of(ibctx, struct qelr_devctx, ibv_ctx);
+}
+
+static inline struct qelr_device *get_qelr_dev(struct ibv_device *ibdev)
+{
+	return container_of(ibdev, struct qelr_device, ibv_dev);
+}
+
+static inline struct qelr_qp *get_qelr_qp(struct ibv_qp *ibqp)
+{
+	return container_of(ibqp, struct qelr_qp, ibv_qp);
+}
+
+static inline struct qelr_pd *get_qelr_pd(struct ibv_pd *ibpd)
+{
+	return container_of(ibpd, struct qelr_pd, ibv_pd);
+}
+
+static inline struct qelr_cq *get_qelr_cq(struct ibv_cq *ibcq)
+{
+	return container_of(ibcq, struct qelr_cq, ibv_cq);
+}
+
+#define SET_FIELD(value, name, flag)				\
+	do {							\
+		(value) &= ~(name ## _MASK << name ## _SHIFT);	\
+		(value) |= ((flag) << (name ## _SHIFT));	\
+	} while (0)
+
+#define SET_FIELD2(value, name, flag)				\
+		((value) |= ((flag) << (name ## _SHIFT)))
+
+#define GET_FIELD(value, name) \
+	(((value) >> (name ## _SHIFT)) & name ## _MASK)
+
+#define ROCE_WQE_ELEM_SIZE	sizeof(struct rdma_sq_sge)
+
+#define QELR_RESP_IMM (RDMA_CQE_RESPONDER_IMM_FLG_MASK <<	\
+			RDMA_CQE_RESPONDER_IMM_FLG_SHIFT)
+#define QELR_RESP_RDMA (RDMA_CQE_RESPONDER_RDMA_FLG_MASK <<	\
+			RDMA_CQE_RESPONDER_RDMA_FLG_SHIFT)
+#define QELR_RESP_RDMA_IMM (QELR_RESP_IMM | QELR_RESP_RDMA)
+
+#define round_up(_val, _x) \
+	(((unsigned long)(_val) + ((_x)-1)) & (long)~((_x)-1))
+
+#define TYPEPTR_ADDR_SET(type_ptr, field, vaddr)			\
+	do {								\
+		(type_ptr)->field.hi = htole32(U64_HI(vaddr));	\
+		(type_ptr)->field.lo = htole32(U64_LO(vaddr));	\
+	} while (0)
+
+#define RQ_SGE_SET(sge, vaddr, vlength, vflags)			\
+	do {							\
+		TYPEPTR_ADDR_SET(sge, addr, vaddr);		\
+		(sge)->length = htole32(vlength);		\
+		(sge)->flags = htole32(vflags);		\
+	} while (0)
+
+#define U64_HI(val) ((uint32_t)(((uint64_t)(val)) >> 32))
+#define U64_LO(val) ((uint32_t)(((uint64_t)(val)) & 0xffffffff))
+#define HILO_U64(hi, lo)		((((uint64_t)(hi)) << 32) + (lo))
+
+#define QELR_MAX_RQ_WQE_SIZE (RDMA_MAX_SGE_PER_RQ_WQE)
+#define QELR_MAX_SQ_WQE_SIZE (ROCE_REQ_MAX_SINGLE_SQ_WQE_SIZE /	\
+			      ROCE_WQE_ELEM_SIZE)
+
+#endif /* __QELR_H__ */
diff --git a/providers/qedr/qelr_main.c b/providers/qedr/qelr_main.c
new file mode 100644
index 0000000..386d7e6
--- /dev/null
+++ b/providers/qedr/qelr_main.c
@@ -0,0 +1,286 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <pthread.h>
+
+#include "qelr.h"
+#include "qelr_main.h"
+#include "qelr_abi.h"
+#include "qelr_chain.h"
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+
+#define PCI_VENDOR_ID_QLOGIC           (0x1077)
+#define PCI_DEVICE_ID_QLOGIC_57980S    (0x1629)
+#define PCI_DEVICE_ID_QLOGIC_57980S_40 (0x1634)
+#define PCI_DEVICE_ID_QLOGIC_57980S_10 (0x1666)
+#define PCI_DEVICE_ID_QLOGIC_57980S_MF (0x1636)
+#define PCI_DEVICE_ID_QLOGIC_57980S_100 (0x1644)
+#define PCI_DEVICE_ID_QLOGIC_57980S_50  (0x1654)
+#define PCI_DEVICE_ID_QLOGIC_57980S_25  (0x1656)
+#define PCI_DEVICE_ID_QLOGIC_57980S_IOV (0x1664)
+#define PCI_DEVICE_ID_QLOGIC_AH_50G     (0x8070)
+#define PCI_DEVICE_ID_QLOGIC_AH_10G     (0x8071)
+#define PCI_DEVICE_ID_QLOGIC_AH_40G     (0x8072)
+#define PCI_DEVICE_ID_QLOGIC_AH_25G     (0x8073)
+#define PCI_DEVICE_ID_QLOGIC_AH_IOV     (0x8090)
+
+uint32_t qelr_dp_level;
+uint32_t qelr_dp_module;
+
+#define QHCA(d)					\
+	{ .vendor = PCI_VENDOR_ID_QLOGIC,	\
+	  .device = PCI_DEVICE_ID_QLOGIC_##d }
+
+struct {
+	unsigned int vendor;
+	unsigned int device;
+} hca_table[] = {
+	QHCA(57980S),
+	QHCA(57980S_40),
+	QHCA(57980S_10),
+	QHCA(57980S_MF),
+	QHCA(57980S_100),
+	QHCA(57980S_50),
+	QHCA(57980S_25),
+	QHCA(57980S_IOV),
+	QHCA(AH_50G),
+	QHCA(AH_10G),
+	QHCA(AH_40G),
+	QHCA(AH_25G),
+	QHCA(AH_IOV),
+};
+
+static struct ibv_context *qelr_alloc_context(struct ibv_device *, int);
+static void qelr_free_context(struct ibv_context *);
+
+static struct ibv_context_ops qelr_ctx_ops = {
+	.query_device = qelr_query_device,
+	.query_port = qelr_query_port,
+	.alloc_pd = qelr_alloc_pd,
+	.dealloc_pd = qelr_dealloc_pd,
+	.reg_mr = qelr_reg_mr,
+	.dereg_mr = qelr_dereg_mr,
+	.create_cq = qelr_create_cq,
+	.poll_cq = qelr_poll_cq,
+	.req_notify_cq = qelr_arm_cq,
+	.cq_event = qelr_cq_event,
+	.destroy_cq = qelr_destroy_cq,
+	.create_qp = qelr_create_qp,
+	.query_qp = qelr_query_qp,
+	.modify_qp = qelr_modify_qp,
+	.destroy_qp = qelr_destroy_qp,
+	.post_send = qelr_post_send,
+	.post_recv = qelr_post_recv,
+	.async_event = qelr_async_event,
+};
+
+static struct ibv_device_ops qelr_dev_ops = {
+	.alloc_context = qelr_alloc_context,
+	.free_context = qelr_free_context
+};
+
+static void qelr_open_debug_file(struct qelr_devctx *ctx)
+{
+	char *env;
+
+	env = getenv("QELR_DEBUG_FILE");
+	if (!env) {
+		ctx->dbg_fp = stderr;
+		DP_VERBOSE(ctx->dbg_fp, QELR_MSG_INIT,
+			   "Debug file opened: stderr\n");
+		return;
+	}
+
+	ctx->dbg_fp = fopen(env, "a+");
+	if (!ctx->dbg_fp) {
+		fprintf(stderr, "Failed opening debug file %s, using stderr\n",
+			env);
+		ctx->dbg_fp = stderr;
+		DP_VERBOSE(ctx->dbg_fp, QELR_MSG_INIT,
+			   "Debug file opened: stderr\n");
+		return;
+	}
+
+	DP_VERBOSE(ctx->dbg_fp, QELR_MSG_INIT, "Debug file opened: %s\n", env);
+}
+
+static void qelr_close_debug_file(struct qelr_devctx *ctx)
+{
+	if (ctx->dbg_fp && ctx->dbg_fp != stderr)
+		fclose(ctx->dbg_fp);
+}
+
+static void qelr_set_debug_mask(void)
+{
+	char *env;
+
+	qelr_dp_level = QELR_LEVEL_NOTICE;
+	qelr_dp_module = 0;
+
+	env = getenv("QELR_DP_LEVEL");
+	if (env)
+		qelr_dp_level = atoi(env);
+
+	env = getenv("QELR_DP_MODULE");
+	if (env)
+		qelr_dp_module = atoi(env);
+}
+
+static struct ibv_context *qelr_alloc_context(struct ibv_device *ibdev,
+					      int cmd_fd)
+{
+	struct qelr_devctx *ctx;
+	struct qelr_get_context cmd;
+	struct qelr_alloc_ucontext_resp resp;
+
+	ctx = calloc(1, sizeof(struct qelr_devctx));
+	if (!ctx)
+		return NULL;
+	memset(&resp, 0, sizeof(resp));
+
+	ctx->ibv_ctx.cmd_fd = cmd_fd;
+
+	qelr_open_debug_file(ctx);
+	qelr_set_debug_mask();
+
+	if (ibv_cmd_get_context(&ctx->ibv_ctx,
+				(struct ibv_get_context *)&cmd, sizeof(cmd),
+				&resp.ibv_resp, sizeof(resp)))
+		goto cmd_err;
+
+	ctx->kernel_page_size = sysconf(_SC_PAGESIZE);
+	ctx->ibv_ctx.device = ibdev;
+	ctx->ibv_ctx.ops = qelr_ctx_ops;
+	ctx->db_pa = resp.db_pa;
+	ctx->db_size = resp.db_size;
+	ctx->max_send_wr = resp.max_send_wr;
+	ctx->max_recv_wr = resp.max_recv_wr;
+	ctx->sges_per_send_wr = resp.sges_per_send_wr;
+	ctx->sges_per_recv_wr = resp.sges_per_recv_wr;
+	ctx->max_cqes = resp.max_cqes;
+
+	ctx->db_addr = mmap(NULL, ctx->db_size, PROT_WRITE, MAP_SHARED,
+			    cmd_fd, ctx->db_pa);
+
+	if (ctx->db_addr == MAP_FAILED) {
+		int errsv = errno;
+
+		DP_ERR(ctx->dbg_fp,
+		       "alloc context: doorbell mapping failed resp.db_pa = %llx resp.db_size=%d context->cmd_fd=%d errno=%d\n",
+		       resp.db_pa, resp.db_size, cmd_fd, errsv);
+		goto cmd_err;
+	}
+
+	return &ctx->ibv_ctx;
+
+cmd_err:
+	qelr_err("%s: Failed to allocate context for device.\n", __func__);
+	qelr_close_debug_file(ctx);
+	free(ctx);
+	return NULL;
+}
+
+static void qelr_free_context(struct ibv_context *ibctx)
+{
+	struct qelr_devctx *ctx = get_qelr_ctx(ibctx);
+
+	if (ctx->db_addr)
+		munmap(ctx->db_addr, ctx->db_size);
+
+	qelr_close_debug_file(ctx);
+	free(ctx);
+}
+
+struct ibv_device *qelr_driver_init(const char *uverbs_sys_path,
+				    int abi_version)
+{
+	char value[16];
+	struct qelr_device *dev;
+	unsigned int vendor, device;
+	int i;
+
+	if (ibv_read_sysfs_file(uverbs_sys_path, "device/vendor",
+				value, sizeof(value)) < 0)
+		return NULL;
+
+	sscanf(value, "%i", &vendor);
+
+	if (ibv_read_sysfs_file(uverbs_sys_path, "device/device",
+				value, sizeof(value)) < 0)
+		return NULL;
+
+	sscanf(value, "%i", &device);
+
+	for (i = 0; i < sizeof(hca_table) / sizeof(hca_table[0]); ++i)
+		if (vendor == hca_table[i].vendor &&
+		    device == hca_table[i].device)
+			goto found;
+
+	return NULL;
+found:
+	if (abi_version != QELR_ABI_VERSION) {
+		fprintf(stderr,
+			"Fatal: libqedr ABI version %d of %s is not supported.\n",
+			abi_version, uverbs_sys_path);
+		return NULL;
+	}
+
+	dev = malloc(sizeof(*dev));
+	if (!dev) {
+		qelr_err("%s() Fatal: failed to allocate device for libqedr\n",
+			 __func__);
+		return NULL;
+	}
+
+	bzero(dev, sizeof(*dev));
+
+	dev->ibv_dev.ops = qelr_dev_ops;
+
+	return &dev->ibv_dev;
+}
+
+static __attribute__ ((constructor))
+void qelr_register_driver(void)
+{
+	ibv_register_driver("qelr", qelr_driver_init);
+}
diff --git a/providers/qedr/qelr_main.h b/providers/qedr/qelr_main.h
new file mode 100644
index 0000000..1f65be6
--- /dev/null
+++ b/providers/qedr/qelr_main.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __QELR_MAIN_H__
+#define __QELR_MAIN_H__
+
+#include <inttypes.h>
+#include <stddef.h>
+#include <endian.h>
+
+#include <infiniband/driver.h>
+#include <infiniband/arch.h>
+
+struct ibv_device *qelr_driver_init(const char *, int);
+
+int qelr_query_device(struct ibv_context *, struct ibv_device_attr *);
+int qelr_query_port(struct ibv_context *, uint8_t, struct ibv_port_attr *);
+
+struct ibv_pd *qelr_alloc_pd(struct ibv_context *);
+int qelr_dealloc_pd(struct ibv_pd *);
+
+struct ibv_mr *qelr_reg_mr(struct ibv_pd *, void *, size_t,
+			   int ibv_access_flags);
+int qelr_dereg_mr(struct ibv_mr *);
+
+struct ibv_cq *qelr_create_cq(struct ibv_context *, int,
+			      struct ibv_comp_channel *, int);
+int qelr_destroy_cq(struct ibv_cq *);
+int qelr_poll_cq(struct ibv_cq *, int, struct ibv_wc *);
+void qelr_cq_event(struct ibv_cq *);
+int qelr_arm_cq(struct ibv_cq *, int);
+
+int qelr_query_srq(struct ibv_srq *ibv_srq, struct ibv_srq_attr *attr);
+int qelr_modify_srq(struct ibv_srq *ibv_srq, struct ibv_srq_attr *attr,
+		    int attr_mask);
+struct ibv_srq *qelr_create_srq(struct ibv_pd *, struct ibv_srq_init_attr *);
+int qelr_destroy_srq(struct ibv_srq *ibv_srq);
+int qelr_post_srq_recv(struct ibv_srq *, struct ibv_recv_wr *,
+		       struct ibv_recv_wr **bad_wr);
+
+struct ibv_qp *qelr_create_qp(struct ibv_pd *, struct ibv_qp_init_attr *);
+int qelr_modify_qp(struct ibv_qp *, struct ibv_qp_attr *,
+		   int ibv_qp_attr_mask);
+int qelr_query_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr, int attr_mask,
+		  struct ibv_qp_init_attr *init_attr);
+int qelr_destroy_qp(struct ibv_qp *);
+
+int qelr_post_send(struct ibv_qp *, struct ibv_send_wr *,
+		   struct ibv_send_wr **);
+int qelr_post_recv(struct ibv_qp *, struct ibv_recv_wr *,
+		   struct ibv_recv_wr **);
+
+void qelr_async_event(struct ibv_async_event *event);
+#endif /* __QELR_MAIN_H__ */
-- 
2.7.4
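
A usage sketch, not part of the patch: because qelr_register_driver()
runs as a library constructor, any plain libibverbs application
exercises qelr_alloc_context() simply by opening the device. The log
file path and the assumption that device 0 is the qedr NIC are
illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **list;
	struct ibv_context *ctx;
	int num;

	/* Optional: route qelr debug output to a file and raise verbosity.
	 * QELR_DEBUG_FILE / QELR_DP_LEVEL / QELR_DP_MODULE are read by
	 * qelr_open_debug_file() and qelr_set_debug_mask() above; they must
	 * be set before the context is allocated. */
	setenv("QELR_DEBUG_FILE", "/tmp/qelr.log", 1);
	setenv("QELR_DP_LEVEL", "0", 1);	/* QELR_LEVEL_VERBOSE */
	setenv("QELR_DP_MODULE", "1048576", 1);	/* QELR_MSG_INIT = 0x100000 */

	list = ibv_get_device_list(&num);
	if (!list || !num)
		return 1;

	/* Opening the device ends up in qelr_alloc_context(), which maps
	 * the doorbell BAR and copies the queue limits from the kernel. */
	ctx = ibv_open_device(list[0]);
	if (!ctx)
		return 1;

	printf("opened %s\n", ibv_get_device_name(list[0]));

	ibv_close_device(ctx);	/* qelr_free_context(): munmap + free */
	ibv_free_device_list(list);
	return 0;
}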


* [PATCH rdma-core 5/6] libqedr: abi
       [not found] ` <1476956952-17388-1-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
                     ` (3 preceding siblings ...)
  2016-10-20  9:49   ` [PATCH rdma-core 4/6] libqedr: main Ram Amrani
@ 2016-10-20  9:49   ` Ram Amrani
  2016-10-20  9:49   ` [PATCH rdma-core 6/6] libqedr: addition to consolidated repo Ram Amrani
  5 siblings, 0 replies; 9+ messages in thread
From: Ram Amrani @ 2016-10-20  9:49 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA,
	Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA, Ram Amrani, Ram Amrani

From: Ram Amrani <Ram.Amrani-74tsMCuadCbQT0dZR+AlfA@public.gmane.org>

Introducing ABI structures that allow interfacing with the kernel.

Signed-off-by: Ram Amrani <Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
---
 providers/qedr/qelr_abi.h | 120 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)
 create mode 100644 providers/qedr/qelr_abi.h

diff --git a/providers/qedr/qelr_abi.h b/providers/qedr/qelr_abi.h
new file mode 100644
index 0000000..a7a0638
--- /dev/null
+++ b/providers/qedr/qelr_abi.h
@@ -0,0 +1,120 @@
+/*
+ * Copyright (c) 2015-2016  QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and /or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __QELR_ABI_H__
+#define __QELR_ABI_H__
+
+#include <infiniband/kern-abi.h>
+
+#define QELR_ABI_VERSION			(8)
+
+struct qelr_get_context {
+	struct ibv_get_context cmd;		/* must be first */
+};
+
+struct qelr_alloc_ucontext_resp {
+	struct ibv_get_context_resp ibv_resp;	/* must be first */
+	__u64 db_pa;
+	__u32 db_size;
+
+	__u32 max_send_wr;
+	__u32 max_recv_wr;
+	__u32 max_srq_wr;
+	__u32 sges_per_send_wr;
+	__u32 sges_per_recv_wr;
+	__u32 sges_per_srq_wr;
+	__u32 max_cqes;
+};
+
+struct qelr_alloc_pd_req {
+	struct ibv_alloc_pd cmd;		/* must be first */
+};
+
+struct qelr_alloc_pd_resp {
+	struct ibv_alloc_pd_resp ibv_resp;	/* must be first */
+	__u32 pd_id;
+};
+
+struct qelr_create_cq_req {
+	struct ibv_create_cq ibv_cmd;		/* must be first */
+
+	__u64 addr;	/* user space virtual address of CQ buffer */
+	__u64 len;	/* size of CQ buffer */
+};
+
+struct qelr_create_cq_resp {
+	struct ibv_create_cq_resp ibv_resp;	/* must be first */
+	__u32 db_offset;
+	__u16 icid;
+};
+
+struct qelr_reg_mr {
+	struct ibv_reg_mr ibv_cmd;		/* must be first */
+};
+
+struct qelr_reg_mr_resp {
+	struct ibv_reg_mr_resp ibv_resp;	/* must be first */
+};
+
+struct qelr_create_qp_req {
+	struct ibv_create_qp ibv_qp;	/* must be first */
+
+	__u32 qp_handle_hi;
+	__u32 qp_handle_lo;
+
+	/* SQ */
+	__u64 sq_addr;	/* user space virtual address of SQ buffer */
+	__u64 sq_len;		/* length of SQ buffer */
+
+	/* RQ */
+	__u64 rq_addr;	/* user space virtual address of RQ buffer */
+	__u64 rq_len;		/* length of RQ buffer */
+};
+
+struct qelr_create_qp_resp {
+	struct ibv_create_qp_resp ibv_resp;	/* must be first */
+
+	__u32 qp_id;
+	__u32 atomic_supported;
+
+	/* SQ */
+	__u32 sq_db_offset;
+	__u16 sq_icid;
+
+	/* RQ */
+	__u32 rq_db_offset;
+	__u16 rq_icid;
+
+	__u32 rq_db2_offset;
+};
+
+#endif /* __QELR_ABI_H__ */
-- 
2.7.4
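
A sketch of how these wrappers are consumed, modeled on the alloc_pd
path (the actual qelr_alloc_pd lives in qelr_verbs.c, patch 2/6; this
illustrates the "must be first" layout rule, not the submitted code).
The common ibv_* member sits at offset zero, so ibv_cmd_alloc_pd() can
treat the provider structs as plain extensions of the common ABI
structs: the kernel fills the common header, and the provider-private
fields (here pd_id) follow it in memory.

#include <stdlib.h>
#include <infiniband/driver.h>

#include "qelr.h"	/* struct qelr_pd */
#include "qelr_abi.h"	/* struct qelr_alloc_pd_req/_resp */

static struct ibv_pd *example_alloc_pd(struct ibv_context *ctx)
{
	struct qelr_alloc_pd_req cmd;
	struct qelr_alloc_pd_resp resp;
	struct qelr_pd *pd;

	pd = calloc(1, sizeof(*pd));
	if (!pd)
		return NULL;

	if (ibv_cmd_alloc_pd(ctx, &pd->ibv_pd, &cmd.cmd, sizeof(cmd),
			     &resp.ibv_resp, sizeof(resp))) {
		free(pd);
		return NULL;
	}

	pd->pd_id = resp.pd_id;	/* provider-private field from the kernel */
	return &pd->ibv_pd;
}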


* [PATCH rdma-core 6/6] libqedr: addition to consolidated repo
       [not found] ` <1476956952-17388-1-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
                     ` (4 preceding siblings ...)
  2016-10-20  9:49   ` [PATCH rdma-core 5/6] libqedr: abi Ram Amrani
@ 2016-10-20  9:49   ` Ram Amrani
  5 siblings, 0 replies; 9+ messages in thread
From: Ram Amrani @ 2016-10-20  9:49 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA,
	Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA, Ram Amrani, Ram Amrani

From: Ram Amrani <Ram.Amrani-74tsMCuadCbQT0dZR+AlfA@public.gmane.org>

Configure the consolidated repo to build libqedr (qelr).

Signed-off-by: Ram Amrani <Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
---
 CMakeLists.txt                | 1 +
 MAINTAINERS                   | 7 +++++++
 README.md                     | 1 +
 providers/qedr/CMakeLists.txt | 5 +++++
 4 files changed, 14 insertions(+)
 create mode 100644 providers/qedr/CMakeLists.txt

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 375859d..fef7f03 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -335,6 +335,7 @@ add_subdirectory(providers/mlx5)
 add_subdirectory(providers/mthca)
 add_subdirectory(providers/nes)
 add_subdirectory(providers/ocrdma)
+add_subdirectory(providers/qedr)
 add_subdirectory(providers/rxe)
 add_subdirectory(providers/rxe/man)
 
diff --git a/MAINTAINERS b/MAINTAINERS
index fb15276..65fad74 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -139,6 +139,13 @@ M:	Devesh Sharma <Devesh.sharma-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
 S:	Supported
 F:	providers/ocrdma/
 
+QEDR USERSPACE PROVIDER (for qedr.ko)
+M:	Ram Amrani <Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
+M:	Ariel Elior <Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
+S:	Supported
+F:	providers/qedr/
+P:	Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
+
 RXE SOFT ROCEE USERSPACE PROVIDER (for rdma_rxe.ko)
 M:	Moni Shoua <monis-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
 S:	Supported
diff --git a/README.md b/README.md
index 66aee3f..3a13042 100644
--- a/README.md
+++ b/README.md
@@ -25,6 +25,7 @@ is included:
  - ib_mthca.ko
  - iw_nes.ko
  - ocrdma.ko
+ - qedr.ko
  - rdma_rxe.ko
 
 Additional service daemons are provided for:
diff --git a/providers/qedr/CMakeLists.txt b/providers/qedr/CMakeLists.txt
new file mode 100644
index 0000000..8d4f3ce
--- /dev/null
+++ b/providers/qedr/CMakeLists.txt
@@ -0,0 +1,5 @@
+rdma_provider(qedr
+  qelr_main.c
+  qelr_verbs.c
+  qelr_chain.c
+  )
-- 
2.7.4


* Re: [PATCH rdma-core 4/6] libqedr: main
       [not found]     ` <1476956952-17388-5-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
@ 2016-10-20 17:08       ` Jason Gunthorpe
       [not found]         ` <20161020170828.GC28181-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2016-10-20 17:08 UTC (permalink / raw)
  To: Ram Amrani
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA,
	Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA

On Thu, Oct 20, 2016 at 12:49:10PM +0300, Ram Amrani wrote:
> +struct {
> +	unsigned int vendor;
> +	unsigned int device;
> +} hca_table[] = {

needs static const; please check all your stuff for static and const.
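
A sketch of the requested change, assuming the table and QHCA macro
from the patch: file-scope tables that are never modified should be
declared both static and const.

#define QHCA(d)					\
	{ .vendor = PCI_VENDOR_ID_QLOGIC,	\
	  .device = PCI_DEVICE_ID_QLOGIC_##d }

static const struct {
	unsigned int vendor;
	unsigned int device;
} hca_table[] = {
	QHCA(57980S),
	QHCA(57980S_40),
	/* ...remaining IDs as in the patch... */
};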

> +int qelr_modify_qp(struct ibv_qp *, struct ibv_qp_attr *,
> +		   int ibv_qp_attr_mask);
> +int qelr_query_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr, int attr_mask,
> +		  struct ibv_qp_init_attr *init_attr);

It would be nice to be consistent; I prefer the argument names to be in
the prototypes.

Jason

* RE: [PATCH rdma-core 4/6] libqedr: main
       [not found]         ` <20161020170828.GC28181-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2016-10-25 12:39           ` Amrani, Ram
  0 siblings, 0 replies; 9+ messages in thread
From: Amrani, Ram @ 2016-10-25 12:39 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Elior, Ariel,
	Kalderon, Michal

I've just submitted fixes via github.

Thanks for the feedback,

Ram


> -----Original Message-----
> From: Jason Gunthorpe [mailto:jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org]
> Sent: Thursday, October 20, 2016 8:08 PM
> To: Amrani, Ram <Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
> Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org; linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Elior, Ariel
> <Ariel.Elior-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>; Kalderon, Michal <Michal.Kalderon-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
> Subject: Re: [PATCH rdma-core 4/6] libqedr: main
> 
> On Thu, Oct 20, 2016 at 12:49:10PM +0300, Ram Amrani wrote:
> > +struct {
> > +	unsigned int vendor;
> > +	unsigned int device;
> > +} hca_table[] = {
> 
> needs static const; please check all your stuff for static and const.
> 
> > +int qelr_modify_qp(struct ibv_qp *, struct ibv_qp_attr *,
> > +		   int ibv_qp_attr_mask);
> > +int qelr_query_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr, int attr_mask,
> > +		  struct ibv_qp_init_attr *init_attr);
> 
> It would be nice to be consistent; I prefer the argument names to be in
> the prototypes.
> 
> Jason

end of thread (newest: 2016-10-25 12:39 UTC)

Thread overview: 9+ messages
2016-10-20  9:49 [PATCH rdma-core 0/6] libqedr: userspace library for qedr Ram Amrani
     [not found] ` <1476956952-17388-1-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
2016-10-20  9:49   ` [PATCH rdma-core 1/6] libqedr: chains Ram Amrani
2016-10-20  9:49   ` [PATCH rdma-core 2/6] libqedr: verbs Ram Amrani
2016-10-20  9:49   ` [PATCH rdma-core 3/6] libqedr: HSI Ram Amrani
2016-10-20  9:49   ` [PATCH rdma-core 4/6] libqedr: main Ram Amrani
     [not found]     ` <1476956952-17388-5-git-send-email-Ram.Amrani-YGCgFSpz5w/QT0dZR+AlfA@public.gmane.org>
2016-10-20 17:08       ` Jason Gunthorpe
     [not found]         ` <20161020170828.GC28181-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2016-10-25 12:39           ` Amrani, Ram
2016-10-20  9:49   ` [PATCH rdma-core 5/6] libqedr: abi Ram Amrani
2016-10-20  9:49   ` [PATCH rdma-core 6/6] libqedr: addition to consolidated repo Ram Amrani
