* [PATCH 00/45] Update bna driver version to 3.0.2.0
@ 2011-07-18  8:19 Rasesh Mody
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Hello David,

   The following patch set adds support for the Brocade-1860 Fabric Adapter,
   along with a code reorganization and cleanup.

   It updates the Brocade BNA driver to v3.0.2.0.

   The driver has been compiled and tested against net-next-2.6 (3.0.0-rc7).

   Note: patches 0007 to 0009 are split for ease of review.

Thanks,
Rasesh

Rasesh Mody (45):
  bna: Checkpatch Cleanup
  bna: Update ASIC Register Definition to Support New Hardware
  bna: Change IOC Event Notification Call Back Mechanism
  bna: State Machine Fault Handling Cleanup
  bna: MSGQ Implementation
  bna: Use DEFINE_PCI_DEVICE_TABLE and Print Driver Version
  bna: Introduce ENET as New Driver and FW Interface
  bna: Tx and Rx Redesign
  bna: ENET and Tx Rx Redesign Update
  bna: Remove Obsolete Files
  bna: Brocade-1860 Fabric Adapter Enablement
  bna: Hardware Clock Setup
  bna: IOC PLL changes and init cleanup
  bna: Brocade 1860 Register and ASIC Mode Changes
  bna: Set MBOX MSIX Index to Zero
  bna: IOC PCI Init & Enable Changes
  bna: Mailbox Interface Changes and FW MBOX fix
  bna: Implement Polling Mechanism for FW Ready
  bna: HW Type Check Fix
  bna: MBOX IRQ Sync Vector Num Fix
  bna: Remove Reset Call Back
  bna: Capability Map and MFG Block Changes for New HW
  bna: Added Defines for Multi TXQ Support
  bna: Mboxq Flush When Ioc Disabled
  bna: Move FW Init to HW Init and Disable Hang Unmapped Fix
  bna: Ethfn LPU DMA Read Fix
  bna: IOC Event Name Change
  bna: Add New IOC event
  bna: Add Sub-System Device ID Info
  bna: Add HW Semaphore Unlock Logic
  bna: Configuration changes
  bna: TxRx Coalesce Settings Fix and Reorg PCI Probe Failure
  bna: Device Init Fix
  bna: Add Multiple Tx Queue Support
  bna: Change TxQ Select Logic and Interrupt Handling
  bna: Data Path and API Changes
  bna: Adapter and Port Mode Settings
  bna: HW Error Counter Fix
  bna: RX Path Changes
  bna: Add IOC MBOX Call Back to Client
  bna: Ethtool Ring Param Set changes and Add Stats Attr
  bna: PLL Init Fix and Add Stats Attributes
  bna: Dropped BUG_ONs and Rx id init fix
  bna: Header File and Unused Code Cleanup
  bna: Driver Version changed to 3.0.2.0

 drivers/net/bna/Makefile            |    5 +-
 drivers/net/bna/bfa_cee.c           |   70 +-
 drivers/net/bna/bfa_cee.h           |    3 +-
 drivers/net/bna/bfa_cs.h            |  148 ++
 drivers/net/bna/bfa_defs.h          |   77 +-
 drivers/net/bna/bfa_defs_cna.h      |    8 +-
 drivers/net/bna/bfa_defs_mfg_comm.h |   93 +-
 drivers/net/bna/bfa_defs_status.h   |  143 +-
 drivers/net/bna/bfa_ioc.c           |  712 ++++--
 drivers/net/bna/bfa_ioc.h           |  106 +-
 drivers/net/bna/bfa_ioc_ct.c        |  498 ++++-
 drivers/net/bna/bfa_msgq.c          |  669 ++++++
 drivers/net/bna/bfa_msgq.h          |  130 +
 drivers/net/bna/bfi.h               |  280 ++-
 drivers/net/bna/bfi_ctreg.h         |  646 -----
 drivers/net/bna/bfi_enet.h          |  902 +++++++
 drivers/net/bna/bfi_ll.h            |  438 ----
 drivers/net/bna/bfi_reg.h           |  452 ++++
 drivers/net/bna/bna.h               |  456 +++--
 drivers/net/bna/bna_ctrl.c          | 3077 ------------------------
 drivers/net/bna/bna_enet.c          | 2202 +++++++++++++++++
 drivers/net/bna/bna_hw.h            | 1516 ++----------
 drivers/net/bna/bna_txrx.c          | 4417 +++++++++++++++++------------------
 drivers/net/bna/bna_types.h         |  696 +++----
 drivers/net/bna/bnad.c              | 1249 ++++++----
 drivers/net/bna/bnad.h              |  157 +-
 drivers/net/bna/bnad_ethtool.c      |  481 +---
 drivers/net/bna/cna.h               |   44 +-
 include/linux/pci_ids.h             |    1 +
 29 files changed, 9824 insertions(+), 9852 deletions(-)
 create mode 100644 drivers/net/bna/bfa_cs.h
 create mode 100644 drivers/net/bna/bfa_msgq.c
 create mode 100644 drivers/net/bna/bfa_msgq.h
 delete mode 100644 drivers/net/bna/bfi_ctreg.h
 create mode 100644 drivers/net/bna/bfi_enet.h
 delete mode 100644 drivers/net/bna/bfi_ll.h
 create mode 100644 drivers/net/bna/bfi_reg.h
 delete mode 100644 drivers/net/bna/bna_ctrl.c
 create mode 100644 drivers/net/bna/bna_enet.c



* [PATCH 01/45] bna: Checkpatch Cleanup
@ 2011-07-18  8:19 ` Rasesh Mody
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Driver cleanup as per the new checkpatch script v0.31.

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
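For reviewers: the new bfa_cs.h pulls the common state machine helpers into
one place. Below is a minimal, hypothetical sketch of how the bfa_fsm_*()
macros are meant to be used -- the "widget" object, its states and events
are invented for illustration and do not appear in this patch:

/*
 * The member must be named "fsm"; the bfa_fsm_*() macros expand to
 * accesses of (_fsm)->fsm.
 */
struct widget {
	bfa_fsm_t fsm;
};

enum widget_event {
	WIDGET_E_START = 1,
	WIDGET_E_STOP = 2,
};

/* Each line declares widget_sm_<state>() and widget_sm_<state>_entry() */
bfa_fsm_state_decl(widget, stopped, struct widget, enum widget_event);
bfa_fsm_state_decl(widget, running, struct widget, enum widget_event);

static void
widget_sm_stopped_entry(struct widget *widget)
{
	/* entry action: runs on every transition into this state */
}

static void
widget_sm_stopped(struct widget *widget, enum widget_event event)
{
	if (event == WIDGET_E_START)
		bfa_fsm_set_state(widget, widget_sm_running);
}

static void
widget_sm_running_entry(struct widget *widget)
{
}

static void
widget_sm_running(struct widget *widget, enum widget_event event)
{
	if (event == WIDGET_E_STOP)
		bfa_fsm_set_state(widget, widget_sm_stopped);
}

Callers initialize with bfa_fsm_set_state(w, widget_sm_stopped) and drive
the machine with bfa_fsm_send_event(w, WIDGET_E_START). The generic wait
counter works the same way as in bfa: bfa_wc_init() registers the resume
callback and takes the initial reference, each outstanding operation calls
bfa_wc_up(), each completion calls bfa_wc_down(), and bfa_wc_wait() drops
the initial reference so the callback fires once the count reaches zero.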
 drivers/net/bna/bfa_cs.h            |  148 +++++++++++++++++++++++++++++++++++
 drivers/net/bna/bfa_defs.h          |    4 +-
 drivers/net/bna/bfa_defs_mfg_comm.h |   20 +++---
 drivers/net/bna/bfa_defs_status.h   |  134 ++++++++++++++++----------------
 drivers/net/bna/bfa_ioc.c           |    2 +-
 drivers/net/bna/bfa_ioc.h           |   16 ++--
 drivers/net/bna/bfi.h               |   14 ++--
 drivers/net/bna/bna.h               |   16 ++--
 drivers/net/bna/bna_hw.h            |   92 +++++++++++-----------
 drivers/net/bna/bna_txrx.c          |    4 +-
 drivers/net/bna/bna_types.h         |   58 +++++++-------
 drivers/net/bna/bnad.c              |   42 +++++-----
 drivers/net/bna/bnad.h              |   24 +++---
 drivers/net/bna/bnad_ethtool.c      |    2 +-
 14 files changed, 362 insertions(+), 214 deletions(-)
 create mode 100644 drivers/net/bna/bfa_cs.h

diff --git a/drivers/net/bna/bfa_cs.h b/drivers/net/bna/bfa_cs.h
new file mode 100644
index 0000000..1651e71
--- /dev/null
+++ b/drivers/net/bna/bfa_cs.h
@@ -0,0 +1,148 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/**
+ * @file bfa_cs.h BFA common services
+ */
+
+#ifndef __BFA_CS_H__
+#define __BFA_CS_H__
+
+#include "cna.h"
+
+#define bfa_assert(__cond)	do {					\
+	if (!(__cond))							\
+		pr_err("Assertion failure: %s:%d: %d",			\
+			__FILE__, __LINE__, __cond);			\
+} while (0)
+
+#define bfa_assert_fp(__cond)	bfa_assert(__cond)
+
+/**
+ * @ BFA state machine interfaces
+ */
+
+typedef void (*bfa_sm_t)(void *sm, int event);
+
+/**
+ * oc - object class eg. bfa_ioc
+ * st - state, eg. reset
+ * otype - object type, eg. struct bfa_ioc
+ * etype - object type, eg. enum ioc_event
+ */
+#define bfa_sm_state_decl(oc, st, otype, etype)		\
+	static void oc ## _sm_ ## st(otype * fsm, etype event)
+
+#define bfa_sm_set_state(_sm, _state)	((_sm)->sm = (bfa_sm_t)(_state))
+#define bfa_sm_send_event(_sm, _event)	((_sm)->sm((_sm), (_event)))
+#define bfa_sm_get_state(_sm)		((_sm)->sm)
+#define bfa_sm_cmp_state(_sm, _state)	((_sm)->sm == (bfa_sm_t)(_state))
+
+/**
+ * For converting from state machine function to state encoding.
+ */
+struct bfa_sm_table {
+	bfa_sm_t	sm;	/*!< state machine function	*/
+	int		state;	/*!< state machine encoding	*/
+	char		*name;	/*!< state name for display	*/
+};
+#define BFA_SM(_sm)	((bfa_sm_t)(_sm))
+
+/**
+ * State machine with entry actions.
+ */
+typedef void (*bfa_fsm_t)(void *fsm, int event);
+
+/**
+ * oc - object class eg. bfa_ioc
+ * st - state, eg. reset
+ * otype - object type, eg. struct bfa_ioc
+ * etype - event type, eg. enum ioc_event
+ */
+#define bfa_fsm_state_decl(oc, st, otype, etype)		\
+	static void oc ## _sm_ ## st(otype * fsm, etype event);	\
+	static void oc ## _sm_ ## st ## _entry(otype * fsm)
+
+#define bfa_fsm_set_state(_fsm, _state) do {	\
+	(_fsm)->fsm = (bfa_fsm_t)(_state);	\
+	_state ## _entry(_fsm);			\
+} while (0)
+
+#define bfa_fsm_send_event(_fsm, _event)	((_fsm)->fsm((_fsm), (_event)))
+#define bfa_fsm_get_state(_fsm)			((_fsm)->fsm)
+#define bfa_fsm_cmp_state(_fsm, _state)		\
+	((_fsm)->fsm == (bfa_fsm_t)(_state))
+
+static inline int
+bfa_sm_to_state(struct bfa_sm_table *smt, bfa_sm_t sm)
+{
+	int	i = 0;
+
+	while (smt[i].sm && smt[i].sm != sm)
+		i++;
+	return smt[i].state;
+}
+
+/**
+ * @ Generic wait counter.
+ */
+
+typedef void (*bfa_wc_resume_t) (void *cbarg);
+
+struct bfa_wc {
+	bfa_wc_resume_t wc_resume;
+	void		*wc_cbarg;
+	int		wc_count;
+};
+
+static inline void
+bfa_wc_up(struct bfa_wc *wc)
+{
+	wc->wc_count++;
+}
+
+static inline void
+bfa_wc_down(struct bfa_wc *wc)
+{
+	wc->wc_count--;
+	if (wc->wc_count == 0)
+		wc->wc_resume(wc->wc_cbarg);
+}
+
+/**
+ * Initialize a waiting counter.
+ */
+static inline void
+bfa_wc_init(struct bfa_wc *wc, bfa_wc_resume_t wc_resume, void *wc_cbarg)
+{
+	wc->wc_resume = wc_resume;
+	wc->wc_cbarg = wc_cbarg;
+	wc->wc_count = 0;
+	bfa_wc_up(wc);
+}
+
+/**
+ * Wait for counter to reach zero
+ */
+static inline void
+bfa_wc_wait(struct bfa_wc *wc)
+{
+	bfa_wc_down(wc);
+}
+
+#endif /* __BFA_CS_H__ */
diff --git a/drivers/net/bna/bfa_defs.h b/drivers/net/bna/bfa_defs.h
index 2ea0dfe..fa81f3c 100644
--- a/drivers/net/bna/bfa_defs.h
+++ b/drivers/net/bna/bfa_defs.h
@@ -80,7 +80,7 @@ struct bfa_adapter_attr {
 
 enum {
 	BFA_IOC_DRIVER_LEN	= 16,
-	BFA_IOC_CHIP_REV_LEN 	= 8,
+	BFA_IOC_CHIP_REV_LEN	= 8,
 };
 
 /**
@@ -174,7 +174,7 @@ enum bfa_ioc_type {
  */
 struct bfa_ioc_attr {
 	enum bfa_ioc_type ioc_type;
-	enum bfa_ioc_state 		state;		/*!< IOC state      */
+	enum bfa_ioc_state		state;		/*!< IOC state      */
 	struct bfa_adapter_attr adapter_attr;	/*!< HBA attributes */
 	struct bfa_ioc_driver_attr driver_attr;	/*!< driver attr    */
 	struct bfa_ioc_pci_attr pci_attr;
diff --git a/drivers/net/bna/bfa_defs_mfg_comm.h b/drivers/net/bna/bfa_defs_mfg_comm.h
index fdd6776..885ef3a 100644
--- a/drivers/net/bna/bfa_defs_mfg_comm.h
+++ b/drivers/net/bna/bfa_defs_mfg_comm.h
@@ -192,14 +192,14 @@ do {								\
  * VPD vendor tag
  */
 enum {
-	BFA_MFG_VPD_UNKNOWN	= 0,     /*!< vendor unknown 		*/
-	BFA_MFG_VPD_IBM 	= 1,     /*!< vendor IBM 		*/
-	BFA_MFG_VPD_HP  	= 2,     /*!< vendor HP  		*/
-	BFA_MFG_VPD_DELL  	= 3,     /*!< vendor DELL  		*/
-	BFA_MFG_VPD_PCI_IBM 	= 0x08,  /*!< PCI VPD IBM     		*/
-	BFA_MFG_VPD_PCI_HP  	= 0x10,  /*!< PCI VPD HP		*/
-	BFA_MFG_VPD_PCI_DELL  	= 0x20,  /*!< PCI VPD DELL		*/
-	BFA_MFG_VPD_PCI_BRCD 	= 0xf8,  /*!< PCI VPD Brocade 		*/
+	BFA_MFG_VPD_UNKNOWN	= 0,     /*!< vendor unknown		*/
+	BFA_MFG_VPD_IBM		= 1,     /*!< vendor IBM		*/
+	BFA_MFG_VPD_HP		= 2,     /*!< vendor HP			*/
+	BFA_MFG_VPD_DELL	= 3,     /*!< vendor DELL		*/
+	BFA_MFG_VPD_PCI_IBM	= 0x08,  /*!< PCI VPD IBM		*/
+	BFA_MFG_VPD_PCI_HP	= 0x10,  /*!< PCI VPD HP		*/
+	BFA_MFG_VPD_PCI_DELL	= 0x20,  /*!< PCI VPD DELL		*/
+	BFA_MFG_VPD_PCI_BRCD	= 0xf8,  /*!< PCI VPD Brocade		*/
 };
 
 /**
@@ -212,8 +212,8 @@ struct bfa_mfg_vpd {
 	u8		vpd_sig[3];	/*!< characters 'V', 'P', 'D' */
 	u8		chksum;		/*!< u8 checksum */
 	u8		vendor;		/*!< vendor */
-	u8 	len;		/*!< vpd data length excluding header */
-	u8 	rsv;
+	u8	len;		/*!< vpd data length excluding header */
+	u8	rsv;
 	u8		data[BFA_MFG_VPD_LEN];	/*!< vpd data */
 };
 
diff --git a/drivers/net/bna/bfa_defs_status.h b/drivers/net/bna/bfa_defs_status.h
index af95112..7c5fe6c 100644
--- a/drivers/net/bna/bfa_defs_status.h
+++ b/drivers/net/bna/bfa_defs_status.h
@@ -25,95 +25,95 @@
  * comments are supported
  */
 enum bfa_status {
-	BFA_STATUS_OK 		= 0,
-	BFA_STATUS_FAILED 	= 1,
-	BFA_STATUS_EINVAL 	= 2,
-	BFA_STATUS_ENOMEM 	= 3,
-	BFA_STATUS_ENOSYS 	= 4,
-	BFA_STATUS_ETIMER 	= 5,
-	BFA_STATUS_EPROTOCOL 	= 6,
-	BFA_STATUS_ENOFCPORTS 	= 7,
-	BFA_STATUS_NOFLASH 	= 8,
-	BFA_STATUS_BADFLASH 	= 9,
-	BFA_STATUS_SFP_UNSUPP 	= 10,
+	BFA_STATUS_OK = 0,
+	BFA_STATUS_FAILED = 1,
+	BFA_STATUS_EINVAL = 2,
+	BFA_STATUS_ENOMEM = 3,
+	BFA_STATUS_ENOSYS = 4,
+	BFA_STATUS_ETIMER = 5,
+	BFA_STATUS_EPROTOCOL = 6,
+	BFA_STATUS_ENOFCPORTS = 7,
+	BFA_STATUS_NOFLASH = 8,
+	BFA_STATUS_BADFLASH = 9,
+	BFA_STATUS_SFP_UNSUPP = 10,
 	BFA_STATUS_UNKNOWN_VFID = 11,
 	BFA_STATUS_DATACORRUPTED = 12,
-	BFA_STATUS_DEVBUSY 	= 13,
-	BFA_STATUS_ABORTED 	= 14,
-	BFA_STATUS_NODEV 	= 15,
-	BFA_STATUS_HDMA_FAILED 	= 16,
+	BFA_STATUS_DEVBUSY = 13,
+	BFA_STATUS_ABORTED = 14,
+	BFA_STATUS_NODEV = 15,
+	BFA_STATUS_HDMA_FAILED = 16,
 	BFA_STATUS_FLASH_BAD_LEN = 17,
 	BFA_STATUS_UNKNOWN_LWWN = 18,
 	BFA_STATUS_UNKNOWN_RWWN = 19,
-	BFA_STATUS_FCPT_LS_RJT 	= 20,
+	BFA_STATUS_FCPT_LS_RJT = 20,
 	BFA_STATUS_VPORT_EXISTS = 21,
-	BFA_STATUS_VPORT_MAX 	= 22,
+	BFA_STATUS_VPORT_MAX = 22,
 	BFA_STATUS_UNSUPP_SPEED = 23,
-	BFA_STATUS_INVLD_DFSZ 	= 24,
-	BFA_STATUS_CNFG_FAILED 	= 25,
-	BFA_STATUS_CMD_NOTSUPP 	= 26,
-	BFA_STATUS_NO_ADAPTER 	= 27,
-	BFA_STATUS_LINKDOWN 	= 28,
-	BFA_STATUS_FABRIC_RJT 	= 29,
+	BFA_STATUS_INVLD_DFSZ = 24,
+	BFA_STATUS_CNFG_FAILED = 25,
+	BFA_STATUS_CMD_NOTSUPP = 26,
+	BFA_STATUS_NO_ADAPTER = 27,
+	BFA_STATUS_LINKDOWN = 28,
+	BFA_STATUS_FABRIC_RJT = 29,
 	BFA_STATUS_UNKNOWN_VWWN = 30,
 	BFA_STATUS_NSLOGIN_FAILED = 31,
-	BFA_STATUS_NO_RPORTS 	= 32,
+	BFA_STATUS_NO_RPORTS = 32,
 	BFA_STATUS_NSQUERY_FAILED = 33,
 	BFA_STATUS_PORT_OFFLINE = 34,
 	BFA_STATUS_RPORT_OFFLINE = 35,
 	BFA_STATUS_TGTOPEN_FAILED = 36,
-	BFA_STATUS_BAD_LUNS 	= 37,
-	BFA_STATUS_IO_FAILURE 	= 38,
-	BFA_STATUS_NO_FABRIC 	= 39,
-	BFA_STATUS_EBADF 	= 40,
-	BFA_STATUS_EINTR 	= 41,
-	BFA_STATUS_EIO 		= 42,
-	BFA_STATUS_ENOTTY 	= 43,
-	BFA_STATUS_ENXIO 	= 44,
-	BFA_STATUS_EFOPEN 	= 45,
+	BFA_STATUS_BAD_LUNS = 37,
+	BFA_STATUS_IO_FAILURE = 38,
+	BFA_STATUS_NO_FABRIC = 39,
+	BFA_STATUS_EBADF = 40,
+	BFA_STATUS_EINTR = 41,
+	BFA_STATUS_EIO = 42,
+	BFA_STATUS_ENOTTY = 43,
+	BFA_STATUS_ENXIO = 44,
+	BFA_STATUS_EFOPEN = 45,
 	BFA_STATUS_VPORT_WWN_BP = 46,
 	BFA_STATUS_PORT_NOT_DISABLED = 47,
-	BFA_STATUS_BADFRMHDR 	= 48,
-	BFA_STATUS_BADFRMSZ 	= 49,
-	BFA_STATUS_MISSINGFRM 	= 50,
-	BFA_STATUS_LINKTIMEOUT 	= 51,
+	BFA_STATUS_BADFRMHDR = 48,
+	BFA_STATUS_BADFRMSZ = 49,
+	BFA_STATUS_MISSINGFRM = 50,
+	BFA_STATUS_LINKTIMEOUT = 51,
 	BFA_STATUS_NO_FCPIM_NEXUS = 52,
 	BFA_STATUS_CHECKSUM_FAIL = 53,
-	BFA_STATUS_GZME_FAILED 	= 54,
+	BFA_STATUS_GZME_FAILED = 54,
 	BFA_STATUS_SCSISTART_REQD = 55,
-	BFA_STATUS_IOC_FAILURE 	= 56,
-	BFA_STATUS_INVALID_WWN 	= 57,
-	BFA_STATUS_MISMATCH 	= 58,
-	BFA_STATUS_IOC_ENABLED 	= 59,
+	BFA_STATUS_IOC_FAILURE = 56,
+	BFA_STATUS_INVALID_WWN = 57,
+	BFA_STATUS_MISMATCH = 58,
+	BFA_STATUS_IOC_ENABLED = 59,
 	BFA_STATUS_ADAPTER_ENABLED = 60,
-	BFA_STATUS_IOC_NON_OP 	= 61,
+	BFA_STATUS_IOC_NON_OP = 61,
 	BFA_STATUS_ADDR_MAP_FAILURE = 62,
-	BFA_STATUS_SAME_NAME 	= 63,
-	BFA_STATUS_PENDING      = 64,
-	BFA_STATUS_8G_SPD	= 65,
-	BFA_STATUS_4G_SPD	= 66,
+	BFA_STATUS_SAME_NAME = 63,
+	BFA_STATUS_PENDING = 64,
+	BFA_STATUS_8G_SPD = 65,
+	BFA_STATUS_4G_SPD = 66,
 	BFA_STATUS_AD_IS_ENABLE = 67,
-	BFA_STATUS_EINVAL_TOV 	= 68,
+	BFA_STATUS_EINVAL_TOV = 68,
 	BFA_STATUS_EINVAL_QDEPTH = 69,
 	BFA_STATUS_VERSION_FAIL = 70,
-	BFA_STATUS_DIAG_BUSY    = 71,
-	BFA_STATUS_BEACON_ON	= 72,
-	BFA_STATUS_BEACON_OFF	= 73,
-	BFA_STATUS_LBEACON_ON   = 74,
-	BFA_STATUS_LBEACON_OFF	= 75,
+	BFA_STATUS_DIAG_BUSY = 71,
+	BFA_STATUS_BEACON_ON = 72,
+	BFA_STATUS_BEACON_OFF = 73,
+	BFA_STATUS_LBEACON_ON = 74,
+	BFA_STATUS_LBEACON_OFF = 75,
 	BFA_STATUS_PORT_NOT_INITED = 76,
 	BFA_STATUS_RPSC_ENABLED = 77,
-	BFA_STATUS_ENOFSAVE 	= 78,
-	BFA_STATUS_BAD_FILE		= 79,
-	BFA_STATUS_RLIM_EN		= 80,
-	BFA_STATUS_RLIM_DIS		= 81,
-	BFA_STATUS_IOC_DISABLED  = 82,
-	BFA_STATUS_ADAPTER_DISABLED  = 83,
-	BFA_STATUS_BIOS_DISABLED  = 84,
-	BFA_STATUS_AUTH_ENABLED  = 85,
-	BFA_STATUS_AUTH_DISABLED  = 86,
-	BFA_STATUS_ERROR_TRL_ENABLED  = 87,
-	BFA_STATUS_ERROR_QOS_ENABLED  = 88,
+	BFA_STATUS_ENOFSAVE = 78,
+	BFA_STATUS_BAD_FILE = 79,
+	BFA_STATUS_RLIM_EN = 80,
+	BFA_STATUS_RLIM_DIS = 81,
+	BFA_STATUS_IOC_DISABLED = 82,
+	BFA_STATUS_ADAPTER_DISABLED = 83,
+	BFA_STATUS_BIOS_DISABLED = 84,
+	BFA_STATUS_AUTH_ENABLED = 85,
+	BFA_STATUS_AUTH_DISABLED = 86,
+	BFA_STATUS_ERROR_TRL_ENABLED = 87,
+	BFA_STATUS_ERROR_QOS_ENABLED = 88,
 	BFA_STATUS_NO_SFP_DEV = 89,
 	BFA_STATUS_MEMTEST_FAILED = 90,
 	BFA_STATUS_INVALID_DEVID = 91,
@@ -190,7 +190,7 @@ enum bfa_status {
 	BFA_STATUS_FLASH_CKFAIL = 162,
 	BFA_STATUS_TRUNK_UNSUPP = 163,
 	BFA_STATUS_TRUNK_ENABLED = 164,
-	BFA_STATUS_TRUNK_DISABLED  = 165,
+	BFA_STATUS_TRUNK_DISABLED = 165,
 	BFA_STATUS_TRUNK_ERROR_TRL_ENABLED = 166,
 	BFA_STATUS_BOOT_CODE_UPDATED = 167,
 	BFA_STATUS_BOOT_VERSION = 168,
@@ -198,8 +198,8 @@ enum bfa_status {
 	BFA_STATUS_INVALID_CARDTYPE = 170,
 	BFA_STATUS_NO_TOPOLOGY_FOR_CNA = 171,
 	BFA_STATUS_IM_VLAN_OVER_TEAM_DELETE_FAILED = 172,
-	BFA_STATUS_ETHBOOT_ENABLED  = 173,
-	BFA_STATUS_ETHBOOT_DISABLED  = 174,
+	BFA_STATUS_ETHBOOT_ENABLED = 173,
+	BFA_STATUS_ETHBOOT_DISABLED = 174,
 	BFA_STATUS_IOPROFILE_OFF = 175,
 	BFA_STATUS_NO_PORT_INSTANCE = 176,
 	BFA_STATUS_BOOT_CODE_TIMEDOUT = 177,
diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index fcb9bb3..04bfb29 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -156,7 +156,7 @@ enum iocpf_event {
 	IOCPF_E_ENABLE		= 1,	/*!< IOCPF enable request	*/
 	IOCPF_E_DISABLE		= 2,	/*!< IOCPF disable request	*/
 	IOCPF_E_STOP		= 3,	/*!< stop on driver detach	*/
-	IOCPF_E_FWREADY	 	= 4,	/*!< f/w initialization done	*/
+	IOCPF_E_FWREADY		= 4,	/*!< f/w initialization done	*/
 	IOCPF_E_FWRSP_ENABLE	= 5,	/*!< enable f/w response	*/
 	IOCPF_E_FWRSP_DISABLE	= 6,	/*!< disable f/w response	*/
 	IOCPF_E_FAIL		= 7,	/*!< failure notice by ioc sm	*/
diff --git a/drivers/net/bna/bfa_ioc.h b/drivers/net/bna/bfa_ioc.h
index bd48abe..8473c00 100644
--- a/drivers/net/bna/bfa_ioc.h
+++ b/drivers/net/bna/bfa_ioc.h
@@ -155,11 +155,11 @@ struct bfa_iocpf {
 
 struct bfa_ioc {
 	bfa_fsm_t		fsm;
-	struct bfa 		*bfa;
-	struct bfa_pcidev 	pcidev;
-	struct timer_list 	ioc_timer;
-	struct timer_list 	iocpf_timer;
-	struct timer_list 	sem_timer;
+	struct bfa		*bfa;
+	struct bfa_pcidev	pcidev;
+	struct timer_list	ioc_timer;
+	struct timer_list	iocpf_timer;
+	struct timer_list	sem_timer;
 	struct timer_list	hb_timer;
 	u32			hb_count;
 	struct list_head	hb_notify_q;
@@ -167,13 +167,13 @@ struct bfa_ioc {
 	int			dbg_fwsave_len;
 	bool			dbg_fwsave_once;
 	enum bfi_mclass		ioc_mc;
-	struct bfa_ioc_regs 	ioc_regs;
+	struct bfa_ioc_regs	ioc_regs;
 	struct bfa_ioc_drv_stats stats;
 	bool			fcmode;
 	bool			ctdev;
 	bool			cna;
 	bool			pllinit;
-	bool   			stats_busy;	/*!< outstanding stats */
+	bool			stats_busy;	/*!< outstanding stats */
 	u8			port_id;
 
 	struct bfa_dma		attr_dma;
@@ -219,7 +219,7 @@ struct bfa_ioc_hwif {
 #define bfa_ioc_stats(_ioc, _stats)	((_ioc)->stats._stats++)
 #define BFA_IOC_FWIMG_MINSZ	(16 * 1024)
 #define BFA_IOC_FWIMG_TYPE(__ioc)					\
-	(((__ioc)->ctdev) ? 						\
+	(((__ioc)->ctdev) ?						\
 	 (((__ioc)->fcmode) ? BFI_IMAGE_CT_FC : BFI_IMAGE_CT_CNA) :	\
 	 BFI_IMAGE_CB_FC)
 #define BFA_IOC_FW_SMEM_SIZE(__ioc)					\
diff --git a/drivers/net/bna/bfi.h b/drivers/net/bna/bfi.h
index 6050379..683a7d7 100644
--- a/drivers/net/bna/bfi.h
+++ b/drivers/net/bna/bfi.h
@@ -51,13 +51,13 @@ struct bfi_mhdr {
 };
 
 #define bfi_h2i_set(_mh, _mc, _op, _lpuid) do {		\
-	(_mh).msg_class 		= (_mc);		\
+	(_mh).msg_class			= (_mc);		\
 	(_mh).msg_id			= (_op);		\
 	(_mh).mtag.h2i.lpu_id	= (_lpuid);			\
 } while (0)
 
 #define bfi_i2h_set(_mh, _mc, _op, _i2htok) do {		\
-	(_mh).msg_class 		= (_mc);		\
+	(_mh).msg_class			= (_mc);		\
 	(_mh).msg_id			= (_op);		\
 	(_mh).mtag.i2htok		= (_i2htok);		\
 } while (0)
@@ -66,7 +66,7 @@ struct bfi_mhdr {
  * Message opcodes: 0-127 to firmware, 128-255 to host
  */
 #define BFI_I2H_OPCODE_BASE	128
-#define BFA_I2HM(_x) 			((_x) + BFI_I2H_OPCODE_BASE)
+#define BFA_I2HM(_x)			((_x) + BFI_I2H_OPCODE_BASE)
 
 /**
  ****************************************************************************
@@ -186,7 +186,7 @@ enum bfi_mclass {
 #define BFI_BOOT_TYPE_OFF		8
 #define BFI_BOOT_LOADER_OFF		12
 
-#define BFI_BOOT_TYPE_NORMAL 		0
+#define BFI_BOOT_TYPE_NORMAL		0
 #define	BFI_BOOT_TYPE_FLASH		1
 #define	BFI_BOOT_TYPE_MEMTEST		2
 
@@ -211,9 +211,9 @@ enum bfi_ioc_h2i_msgs {
 
 enum bfi_ioc_i2h_msgs {
 	BFI_IOC_I2H_ENABLE_REPLY	= BFA_I2HM(1),
-	BFI_IOC_I2H_DISABLE_REPLY 	= BFA_I2HM(2),
-	BFI_IOC_I2H_GETATTR_REPLY 	= BFA_I2HM(3),
-	BFI_IOC_I2H_READY_EVENT 	= BFA_I2HM(4),
+	BFI_IOC_I2H_DISABLE_REPLY	= BFA_I2HM(2),
+	BFI_IOC_I2H_GETATTR_REPLY	= BFA_I2HM(3),
+	BFI_IOC_I2H_READY_EVENT		= BFA_I2HM(4),
 	BFI_IOC_I2H_HBEAT		= BFA_I2HM(5),
 };
 
diff --git a/drivers/net/bna/bna.h b/drivers/net/bna/bna.h
index a287f89..6b14c1d 100644
--- a/drivers/net/bna/bna.h
+++ b/drivers/net/bna/bna.h
@@ -88,7 +88,7 @@ do {								\
 } while (0)
 
 #define	containing_rec(addr, type, field)				\
-	((type *)((unsigned char *)(addr) - 				\
+	((type *)((unsigned char *)(addr) -				\
 	(unsigned char *)(&((type *)0)->field)))
 
 #define BNA_TXQ_WI_NEEDED(_vectors)	(((_vectors) + 3) >> 2)
@@ -101,8 +101,8 @@ do {								\
 {									\
 	unsigned int page_index;	/* index within a page */	\
 	void *page_addr;						\
-	page_index = (_qe_idx) & (BNA_TXQ_PAGE_INDEX_MAX - 1); 		\
-	(_qe_ptr_range) = (BNA_TXQ_PAGE_INDEX_MAX - page_index); 	\
+	page_index = (_qe_idx) & (BNA_TXQ_PAGE_INDEX_MAX - 1);		\
+	(_qe_ptr_range) = (BNA_TXQ_PAGE_INDEX_MAX - page_index);	\
 	page_addr = (_qpt_ptr)[((_qe_idx) >>  BNA_TXQ_PAGE_INDEX_MAX_SHIFT)];\
 	(_qe_ptr) = &((struct bna_txq_entry *)(page_addr))[page_index]; \
 }
@@ -166,25 +166,25 @@ do {								\
 		(((_q_ptr)->q.producer_index + (_num)) &		\
 		((_q_ptr)->q.q_depth - 1))
 
-#define BNA_Q_CI_ADD(_q_ptr, _num) 					\
+#define BNA_Q_CI_ADD(_q_ptr, _num)					\
 	(_q_ptr)->q.consumer_index =					\
-		(((_q_ptr)->q.consumer_index + (_num))  		\
+		(((_q_ptr)->q.consumer_index + (_num))			\
 		& ((_q_ptr)->q.q_depth - 1))
 
 #define BNA_Q_FREE_COUNT(_q_ptr)					\
 	(BNA_QE_FREE_CNT(&((_q_ptr)->q), (_q_ptr)->q.q_depth))
 
-#define BNA_Q_IN_USE_COUNT(_q_ptr)  					\
+#define BNA_Q_IN_USE_COUNT(_q_ptr)					\
 	(BNA_QE_IN_USE_CNT(&(_q_ptr)->q, (_q_ptr)->q.q_depth))
 
 /* These macros build the data portion of the TxQ/RxQ doorbell */
-#define BNA_DOORBELL_Q_PRD_IDX(_pi) 	(0x80000000 | (_pi))
+#define BNA_DOORBELL_Q_PRD_IDX(_pi)	(0x80000000 | (_pi))
 #define BNA_DOORBELL_Q_STOP		(0x40000000)
 
 /* These macros build the data portion of the IB doorbell */
 #define BNA_DOORBELL_IB_INT_ACK(_timeout, _events) \
 	(0x80000000 | ((_timeout) << 16) | (_events))
-#define BNA_DOORBELL_IB_INT_DISABLE 	(0x40000000)
+#define BNA_DOORBELL_IB_INT_DISABLE	(0x40000000)
 
 /* Set the coalescing timer for the given ib */
 #define bna_ib_coalescing_timer_set(_i_dbell, _cls_timer)		\
diff --git a/drivers/net/bna/bna_hw.h b/drivers/net/bna/bna_hw.h
index 6cb8969..cad233d 100644
--- a/drivers/net/bna/bna_hw.h
+++ b/drivers/net/bna/bna_hw.h
@@ -67,7 +67,7 @@ static struct bna_ibidx_pool name[BFI_IBIDX_TOTAL_POOLS] =		\
 
 /**
  * There are 2 free RIT segment pools:
- * 	Pool1: 192 segments of 1 RIT entry each
+ *	Pool1: 192 segments of 1 RIT entry each
  *	Pool2: 1 segment of 64 RIT entry
  */
 #define BFI_RIT_SEG_POOL1_SIZE		192
@@ -357,14 +357,14 @@ static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
  * To clear set the value to 0.
  * Range : 0x20 to 0x5c
  */
-#define PSS_SEM_LOCK_REG(_num) 		\
+#define PSS_SEM_LOCK_REG(_num)		\
 	(PSS_BLK_REG_ADDR + 0x020 + ((_num) << 2))
 
 /**
  * PSS Semaphore Status Registers,
  * corresponding to the lock registers above
  */
-#define PSS_SEM_STATUS_REG(_num) 		\
+#define PSS_SEM_STATUS_REG(_num)		\
 	(PSS_BLK_REG_ADDR + 0x060 + ((_num) << 2))
 
 /**
@@ -1044,7 +1044,7 @@ static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
 		__LPU12HOST_MBOX1_STATUS_BITS))
 
 #define BNA_IS_MBOX_INTR(_intr_status)		\
-	((_intr_status) &  			\
+	((_intr_status) &			\
 	(__LPU02HOST_MBOX0_STATUS_BITS |	\
 	 __LPU02HOST_MBOX1_STATUS_BITS |	\
 	 __LPU12HOST_MBOX0_STATUS_BITS |	\
@@ -1070,11 +1070,11 @@ static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
 	  __HALT_MASK_BITS)
 
 #define BNA_IS_ERR_INTR(_intr_status)	\
-	((_intr_status) &  		\
-	(__EMC_ERROR_STATUS_BITS |  	\
-	 __LPU0_ERROR_STATUS_BITS | 	\
-	 __LPU1_ERROR_STATUS_BITS | 	\
-	 __PSS_ERROR_STATUS_BITS  | 	\
+	((_intr_status) &		\
+	(__EMC_ERROR_STATUS_BITS |	\
+	 __LPU0_ERROR_STATUS_BITS |	\
+	 __LPU1_ERROR_STATUS_BITS |	\
+	 __PSS_ERROR_STATUS_BITS  |	\
 	 __HALT_STATUS_BITS))
 
 #define BNA_IS_MBOX_ERR_INTR(_intr_status)	\
@@ -1087,9 +1087,9 @@ static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
 #define BNA_INTR_STATUS_MBOX_CLR(_intr_status)			\
 do {								\
 	(_intr_status) &= ~(__LPU02HOST_MBOX0_STATUS_BITS |	\
-			__LPU02HOST_MBOX1_STATUS_BITS | 	\
-			__LPU12HOST_MBOX0_STATUS_BITS | 	\
-			__LPU12HOST_MBOX1_STATUS_BITS); 	\
+			__LPU02HOST_MBOX1_STATUS_BITS |		\
+			__LPU12HOST_MBOX0_STATUS_BITS |		\
+			__LPU12HOST_MBOX1_STATUS_BITS);		\
 } while (0)
 
 #define BNA_INTR_STATUS_ERR_CLR(_intr_status)		\
@@ -1107,7 +1107,7 @@ do {							\
 	writel(0xffffffff, (_bna)->regs.fn_int_mask);\
 }
 
-#define bna_intx_enable(bna, new_mask) 			\
+#define bna_intx_enable(bna, new_mask)			\
 	writel((new_mask), (bna)->regs.fn_int_mask)
 
 #define bna_mbox_intr_disable(bna)		\
@@ -1179,18 +1179,18 @@ do {\
 #define BNA_DOORBELL_IB_INT_DISABLE		(0x40000000)
 
 /* TxQ Entry Opcodes */
-#define BNA_TXQ_WI_SEND 		(0x402)	/* Single Frame Transmission */
-#define BNA_TXQ_WI_SEND_LSO 		(0x403)	/* Multi-Frame Transmission */
+#define BNA_TXQ_WI_SEND			(0x402)	/* Single Frame Transmission */
+#define BNA_TXQ_WI_SEND_LSO		(0x403)	/* Multi-Frame Transmission */
 #define BNA_TXQ_WI_EXTENSION		(0x104)	/* Extension WI */
 
 /* TxQ Entry Control Flags */
-#define BNA_TXQ_WI_CF_FCOE_CRC  	(1 << 8)
-#define BNA_TXQ_WI_CF_IPID_MODE 	(1 << 5)
-#define BNA_TXQ_WI_CF_INS_PRIO  	(1 << 4)
-#define BNA_TXQ_WI_CF_INS_VLAN  	(1 << 3)
-#define BNA_TXQ_WI_CF_UDP_CKSUM 	(1 << 2)
-#define BNA_TXQ_WI_CF_TCP_CKSUM 	(1 << 1)
-#define BNA_TXQ_WI_CF_IP_CKSUM  	(1 << 0)
+#define BNA_TXQ_WI_CF_FCOE_CRC		(1 << 8)
+#define BNA_TXQ_WI_CF_IPID_MODE		(1 << 5)
+#define BNA_TXQ_WI_CF_INS_PRIO		(1 << 4)
+#define BNA_TXQ_WI_CF_INS_VLAN		(1 << 3)
+#define BNA_TXQ_WI_CF_UDP_CKSUM		(1 << 2)
+#define BNA_TXQ_WI_CF_TCP_CKSUM		(1 << 1)
+#define BNA_TXQ_WI_CF_IP_CKSUM		(1 << 0)
 
 #define BNA_TXQ_WI_L4_HDR_N_OFFSET(_hdr_size, _offset) \
 		(((_hdr_size) << 10) | ((_offset) & 0x3FF))
@@ -1199,30 +1199,30 @@ do {\
  * Completion Q defines
  */
 /* CQ Entry Flags */
-#define	BNA_CQ_EF_MAC_ERROR 	(1 <<  0)
-#define	BNA_CQ_EF_FCS_ERROR 	(1 <<  1)
-#define	BNA_CQ_EF_TOO_LONG  	(1 <<  2)
-#define	BNA_CQ_EF_FC_CRC_OK 	(1 <<  3)
+#define	BNA_CQ_EF_MAC_ERROR	(1 <<  0)
+#define	BNA_CQ_EF_FCS_ERROR	(1 <<  1)
+#define	BNA_CQ_EF_TOO_LONG	(1 <<  2)
+#define	BNA_CQ_EF_FC_CRC_OK	(1 <<  3)
 
-#define	BNA_CQ_EF_RSVD1 	(1 <<  4)
+#define	BNA_CQ_EF_RSVD1		(1 <<  4)
 #define	BNA_CQ_EF_L4_CKSUM_OK	(1 <<  5)
 #define	BNA_CQ_EF_L3_CKSUM_OK	(1 <<  6)
 #define	BNA_CQ_EF_HDS_HEADER	(1 <<  7)
 
-#define	BNA_CQ_EF_UDP   	(1 <<  8)
-#define	BNA_CQ_EF_TCP   	(1 <<  9)
+#define	BNA_CQ_EF_UDP		(1 <<  8)
+#define	BNA_CQ_EF_TCP		(1 <<  9)
 #define	BNA_CQ_EF_IP_OPTIONS	(1 << 10)
-#define	BNA_CQ_EF_IPV6  	(1 << 11)
+#define	BNA_CQ_EF_IPV6		(1 << 11)
 
-#define	BNA_CQ_EF_IPV4  	(1 << 12)
-#define	BNA_CQ_EF_VLAN  	(1 << 13)
-#define	BNA_CQ_EF_RSS   	(1 << 14)
-#define	BNA_CQ_EF_RSVD2 	(1 << 15)
+#define	BNA_CQ_EF_IPV4		(1 << 12)
+#define	BNA_CQ_EF_VLAN		(1 << 13)
+#define	BNA_CQ_EF_RSS		(1 << 14)
+#define	BNA_CQ_EF_RSVD2		(1 << 15)
 
 #define	BNA_CQ_EF_MCAST_MATCH   (1 << 16)
-#define	BNA_CQ_EF_MCAST 	(1 << 17)
-#define BNA_CQ_EF_BCAST 	(1 << 18)
-#define	BNA_CQ_EF_REMOTE 	(1 << 19)
+#define	BNA_CQ_EF_MCAST		(1 << 17)
+#define BNA_CQ_EF_BCAST		(1 << 18)
+#define	BNA_CQ_EF_REMOTE	(1 << 19)
 
 #define	BNA_CQ_EF_LOCAL		(1 << 20)
 
@@ -1257,10 +1257,10 @@ enum ib_flags {
 };
 
 enum rss_hash_type {
-	BFI_RSS_T_V4_TCP    		= (1 << 11),
-	BFI_RSS_T_V4_IP     		= (1 << 10),
-	BFI_RSS_T_V6_TCP    		= (1 <<  9),
-	BFI_RSS_T_V6_IP     		= (1 <<  8)
+	BFI_RSS_T_V4_TCP		= (1 << 11),
+	BFI_RSS_T_V4_IP			= (1 << 10),
+	BFI_RSS_T_V6_TCP		= (1 <<  9),
+	BFI_RSS_T_V6_IP			= (1 <<  8)
 };
 enum hds_header_type {
 	BNA_HDS_T_V4_TCP	= (1 << 11),
@@ -1298,7 +1298,7 @@ struct bna_txq_mem {
 	u32 reserved2;
 	u32 pg_cnt_n_prd_ptr;	/* 31:16->total page count */
 					/* 15:0 ->producer pointer (index?) */
-	u32 entry_n_pg_size; 	/* 31:16->entry size */
+	u32 entry_n_pg_size;	/* 31:16->entry size */
 					/* 15:0 ->page size */
 	u32 int_blk_n_cns_ptr;	/* 31:24->Int Blk Id;  */
 					/* 23:16->Int Blk Offset */
@@ -1326,7 +1326,7 @@ struct bna_rxq_mem {
 	u32 sg_n_cq_n_cns_ptr;	/* 31:28->reserved; 27:24->sg count */
 					/* 23:16->CQ; */
 					/* 15:0->consumer pointer(index?) */
-	u32 buf_sz_n_q_state; 	/* 31:16->buffer size; 15:0-> Q state */
+	u32 buf_sz_n_q_state;	/* 31:16->buffer size; 15:0-> Q state */
 	u32 next_qid;		/* 17:10->next QId */
 	u32 reserved3;
 	u32 reserved4[4];
@@ -1426,8 +1426,8 @@ struct bna_dma_addr {
 };
 
 struct bna_txq_wi_vector {
-	u16 		reserved;
-	u16 		length;		/* Only 14 LSB are valid */
+	u16		reserved;
+	u16		length;		/* Only 14 LSB are valid */
 	struct bna_dma_addr host_addr; /* Tx-Buf DMA addr */
 };
 
@@ -1465,7 +1465,7 @@ struct bna_txq_entry {
 	} hdr;
 	struct bna_txq_wi_vector vector[4];
 };
-#define wi_hdr  	hdr.wi
+#define wi_hdr		hdr.wi
 #define wi_ext_hdr  hdr.wi_ext
 
 /* RxQ Entry Structure */
diff --git a/drivers/net/bna/bna_txrx.c b/drivers/net/bna/bna_txrx.c
index 380085c..2c06e08 100644
--- a/drivers/net/bna/bna_txrx.c
+++ b/drivers/net/bna/bna_txrx.c
@@ -734,7 +734,7 @@ bna_rxf_sm_cam_fltr_clr_wait_entry(struct bna_rxf *rxf)
 	/**
 	 *  Note: Do not add rxf_clear_packet_filter here.
 	 * It will overstep mbox when this transition happens:
-	 * 	cam_fltr_mod_wait -> cam_fltr_clr_wait on RXF_E_STOP event
+	 *	cam_fltr_mod_wait -> cam_fltr_clr_wait on RXF_E_STOP event
 	 */
 }
 
@@ -771,7 +771,7 @@ bna_rxf_sm_stop_wait_entry(struct bna_rxf *rxf)
 	/**
 	 * NOTE: Do not add  rxf_disable here.
 	 * It will overstep mbox when this transition happens:
-	 * 	start_wait -> stop_wait on RXF_E_STOP event
+	 *	start_wait -> stop_wait on RXF_E_STOP event
 	 */
 }
 
diff --git a/drivers/net/bna/bna_types.h b/drivers/net/bna/bna_types.h
index b9c134f..2f89cb2 100644
--- a/drivers/net/bna/bna_types.h
+++ b/drivers/net/bna/bna_types.h
@@ -50,12 +50,12 @@ enum bna_status {
 };
 
 enum bna_cleanup_type {
-	BNA_HARD_CLEANUP 	= 0,
-	BNA_SOFT_CLEANUP 	= 1
+	BNA_HARD_CLEANUP	= 0,
+	BNA_SOFT_CLEANUP	= 1
 };
 
 enum bna_cb_status {
-	BNA_CB_SUCCESS 		= 0,
+	BNA_CB_SUCCESS		= 0,
 	BNA_CB_FAIL		= 1,
 	BNA_CB_INTERRUPT	= 2,
 	BNA_CB_BUSY		= 3,
@@ -72,8 +72,8 @@ enum bna_res_type {
 };
 
 enum bna_mem_type {
-	BNA_MEM_T_KVA 		= 1,
-	BNA_MEM_T_DMA 		= 2
+	BNA_MEM_T_KVA		= 1,
+	BNA_MEM_T_DMA		= 2
 };
 
 enum bna_intr_type {
@@ -82,10 +82,10 @@ enum bna_intr_type {
 };
 
 enum bna_res_req_type {
-	BNA_RES_MEM_T_COM 		= 0,
-	BNA_RES_MEM_T_ATTR 		= 1,
-	BNA_RES_MEM_T_FWTRC 		= 2,
-	BNA_RES_MEM_T_STATS 		= 3,
+	BNA_RES_MEM_T_COM		= 0,
+	BNA_RES_MEM_T_ATTR		= 1,
+	BNA_RES_MEM_T_FWTRC		= 2,
+	BNA_RES_MEM_T_STATS		= 3,
 	BNA_RES_MEM_T_SWSTATS		= 4,
 	BNA_RES_MEM_T_IBIDX		= 5,
 	BNA_RES_MEM_T_IB_ARRAY		= 6,
@@ -107,9 +107,9 @@ enum bna_res_req_type {
 enum bna_tx_res_req_type {
 	BNA_TX_RES_MEM_T_TCB	= 0,
 	BNA_TX_RES_MEM_T_UNMAPQ	= 1,
-	BNA_TX_RES_MEM_T_QPT 	= 2,
+	BNA_TX_RES_MEM_T_QPT	= 2,
 	BNA_TX_RES_MEM_T_SWQPT	= 3,
-	BNA_TX_RES_MEM_T_PAGE 	= 4,
+	BNA_TX_RES_MEM_T_PAGE	= 4,
 	BNA_TX_RES_INTR_T_TXCMPL = 5,
 	BNA_TX_RES_T_MAX,
 };
@@ -158,14 +158,14 @@ enum bna_rx_type {
 };
 
 enum bna_rxp_type {
-	BNA_RXP_SINGLE 		= 1,
-	BNA_RXP_SLR 		= 2,
-	BNA_RXP_HDS 		= 3
+	BNA_RXP_SINGLE		= 1,
+	BNA_RXP_SLR		= 2,
+	BNA_RXP_HDS		= 3
 };
 
 enum bna_rxmode {
-	BNA_RXMODE_PROMISC 	= 1,
-	BNA_RXMODE_ALLMULTI 	= 2
+	BNA_RXMODE_PROMISC	= 1,
+	BNA_RXMODE_ALLMULTI	= 2
 };
 
 enum bna_rx_event {
@@ -202,7 +202,7 @@ enum bna_rxf_oper_state {
 };
 
 enum bna_rxf_flags {
-	BNA_RXF_FL_STOP_PENDING 	= 0x01,
+	BNA_RXF_FL_STOP_PENDING		= 0x01,
 	BNA_RXF_FL_FAILED		= 0x02,
 	BNA_RXF_FL_RSS_CONFIG_PENDING	= 0x04,
 	BNA_RXF_FL_OPERSTATE_CHANGED	= 0x08,
@@ -244,11 +244,11 @@ enum bna_port_type {
 enum bna_link_status {
 	BNA_LINK_DOWN		= 0,
 	BNA_LINK_UP		= 1,
-	BNA_CEE_UP 		= 2
+	BNA_CEE_UP		= 2
 };
 
 enum bna_llport_flags {
-	BNA_LLPORT_F_ADMIN_UP	 	= 1,
+	BNA_LLPORT_F_ADMIN_UP		= 1,
 	BNA_LLPORT_F_PORT_ENABLED	= 2,
 	BNA_LLPORT_F_RX_STARTED		= 4
 };
@@ -304,7 +304,7 @@ struct bna_mem_descr {
 struct bna_mem_info {
 	enum bna_mem_type mem_type;
 	u32		len;
-	u32 		num;
+	u32		num;
 	u32		align_sz; /* 0/1 = no alignment */
 	struct bna_mem_descr *mdl;
 	void			*cookie; /* For bnad to unmap dma later */
@@ -371,10 +371,10 @@ struct bna_mbox_qe {
 	struct list_head			qe;
 
 	struct bfa_mbox_cmd cmd;
-	u32 		cmd_len;
+	u32		cmd_len;
 	/* Callback for port, tx, rx, rxf */
 	void (*cbfn)(void *arg, int status);
-	void 			*cbarg;
+	void			*cbarg;
 };
 
 struct bna_mbox_mod {
@@ -480,7 +480,7 @@ struct bna_ib_dbell {
 
 /* Interrupt timer configuration */
 struct bna_ib_config {
-	u8 		coalescing_timeo;    /* Unit is 5usec. */
+	u8		coalescing_timeo;    /* Unit is 5usec. */
 
 	int			interpkt_count;
 	int			interpkt_timeo;
@@ -576,8 +576,8 @@ struct bna_txq {
 
 	struct bna_tx *tx;
 
-	u64 		tx_packets;
-	u64 		tx_bytes;
+	u64		tx_packets;
+	u64		tx_bytes;
 };
 
 /* TxF structure (hardware Tx Function) */
@@ -739,10 +739,10 @@ struct bna_rxq {
 	struct bna_rxp *rxp;
 	struct bna_rx *rx;
 
-	u64 		rx_packets;
+	u64		rx_packets;
 	u64		rx_bytes;
-	u64 		rx_packets_with_error;
-	u64 		rxbuf_alloc_failed;
+	u64		rx_packets_with_error;
+	u64		rxbuf_alloc_failed;
 };
 
 /* RxQ pair */
@@ -902,7 +902,7 @@ struct bna_rxf {
 	 * callback for:
 	 *	bna_rxf_ucast_set()
 	 *	bna_rxf_{ucast/mcast}_add(),
-	 * 	bna_rxf_{ucast/mcast}_del(),
+	 *	bna_rxf_{ucast/mcast}_del(),
 	 *	bna_rxf_mode_set()
 	 */
 	void (*cam_fltr_cbfn)(struct bnad *bnad, struct bna_rx *rx,
diff --git a/drivers/net/bna/bnad.c b/drivers/net/bna/bnad.c
index fa997bf..f57e491 100644
--- a/drivers/net/bna/bnad.c
+++ b/drivers/net/bna/bnad.c
@@ -58,7 +58,7 @@ static const u8 bnad_bcast_addr[] =  {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
 
 #define BNAD_GET_MBOX_IRQ(_bnad)				\
 	(((_bnad)->cfg_flags & BNAD_CF_MSIX) ?			\
-	 ((_bnad)->msix_table[(_bnad)->msix_num - 1].vector) : 	\
+	 ((_bnad)->msix_table[(_bnad)->msix_num - 1].vector) :	\
 	 ((_bnad)->pcidev->irq))
 
 #define BNAD_FILL_UNMAPQ_MEM_REQ(_res_info, _num, _depth)	\
@@ -110,10 +110,10 @@ static void
 bnad_free_all_txbufs(struct bnad *bnad,
 		 struct bna_tcb *tcb)
 {
-	u32 		unmap_cons;
+	u32		unmap_cons;
 	struct bnad_unmap_q *unmap_q = tcb->unmap_q;
 	struct bnad_skb_unmap *unmap_array;
-	struct sk_buff 		*skb = NULL;
+	struct sk_buff		*skb = NULL;
 	int			i;
 
 	unmap_array = unmap_q->unmap_array;
@@ -163,11 +163,11 @@ static u32
 bnad_free_txbufs(struct bnad *bnad,
 		 struct bna_tcb *tcb)
 {
-	u32 		sent_packets = 0, sent_bytes = 0;
-	u16 		wis, unmap_cons, updated_hw_cons;
+	u32		sent_packets = 0, sent_bytes = 0;
+	u16		wis, unmap_cons, updated_hw_cons;
 	struct bnad_unmap_q *unmap_q = tcb->unmap_q;
 	struct bnad_skb_unmap *unmap_array;
-	struct sk_buff 		*skb;
+	struct sk_buff		*skb;
 	int i;
 
 	/*
@@ -245,7 +245,7 @@ bnad_tx_free_tasklet(unsigned long bnad_ptr)
 {
 	struct bnad *bnad = (struct bnad *)bnad_ptr;
 	struct bna_tcb *tcb;
-	u32 		acked = 0;
+	u32		acked = 0;
 	int			i, j;
 
 	for (i = 0; i < bnad->num_tx; i++) {
@@ -1108,10 +1108,10 @@ static int
 bnad_mbox_irq_alloc(struct bnad *bnad,
 		    struct bna_intr_info *intr_info)
 {
-	int 		err = 0;
-	unsigned long 	irq_flags, flags;
+	int		err = 0;
+	unsigned long	irq_flags, flags;
 	u32	irq;
-	irq_handler_t 	irq_handler;
+	irq_handler_t	irq_handler;
 
 	/* Mbox should use only 1 vector */
 
@@ -1453,7 +1453,7 @@ bnad_iocpf_sem_timeout(unsigned long data)
 /*
  * All timer routines use bnad->bna_lock to protect against
  * the following race, which may occur in case of no locking:
- * 	Time	CPU m  		CPU n
+ *	Time	CPU m	CPU n
  *	0       1 = test_bit
  *	1			clear_bit
  *	2			del_timer_sync
@@ -1918,7 +1918,7 @@ void
 bnad_rx_coalescing_timeo_set(struct bnad *bnad)
 {
 	struct bnad_rx_info *rx_info;
-	int 	i;
+	int	i;
 
 	for (i = 0; i < bnad->num_rx; i++) {
 		rx_info = &bnad->rx_info[i];
@@ -2437,18 +2437,18 @@ bnad_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
 	struct bnad *bnad = netdev_priv(netdev);
 
-	u16 		txq_prod, vlan_tag = 0;
-	u32 		unmap_prod, wis, wis_used, wi_range;
-	u32 		vectors, vect_id, i, acked;
+	u16		txq_prod, vlan_tag = 0;
+	u32		unmap_prod, wis, wis_used, wi_range;
+	u32		vectors, vect_id, i, acked;
 	u32		tx_id;
-	int 			err;
+	int			err;
 
 	struct bnad_tx_info *tx_info;
 	struct bna_tcb *tcb;
 	struct bnad_unmap_q *unmap_q;
-	dma_addr_t 		dma_addr;
+	dma_addr_t		dma_addr;
 	struct bna_txq_entry *txqent;
-	bna_txq_wi_ctrl_flag_t 	flags;
+	bna_txq_wi_ctrl_flag_t	flags;
 
 	if (unlikely
 	    (skb->len <= ETH_HLEN || skb->len > BFI_TX_MAX_DATA_PER_PKT)) {
@@ -3054,8 +3054,8 @@ static int __devinit
 bnad_pci_probe(struct pci_dev *pdev,
 		const struct pci_device_id *pcidev_id)
 {
-	bool 	using_dac = false;
-	int 	err;
+	bool	using_dac = false;
+	int	err;
 	struct bnad *bnad;
 	struct bna *bna;
 	struct net_device *netdev;
@@ -3087,7 +3087,7 @@ bnad_pci_probe(struct pci_dev *pdev,
 
 	/*
 	 * PCI initialization
-	 * 	Output : using_dac = 1 for 64 bit DMA
+	 *	Output : using_dac = 1 for 64 bit DMA
 	 *			   = 0 for 32 bit DMA
 	 */
 	err = bnad_pci_init(bnad, pdev, &using_dac);
diff --git a/drivers/net/bna/bnad.h b/drivers/net/bna/bnad.h
index ccdabad..acf04b6 100644
--- a/drivers/net/bna/bnad.h
+++ b/drivers/net/bna/bnad.h
@@ -69,8 +69,8 @@ struct bnad_rx_ctrl {
 
 #define BNAD_MAILBOX_MSIX_VECTORS	1
 
-#define BNAD_STATS_TIMER_FREQ		1000 	/* in msecs */
-#define BNAD_DIM_TIMER_FREQ		1000 	/* in msecs */
+#define BNAD_STATS_TIMER_FREQ		1000	/* in msecs */
+#define BNAD_DIM_TIMER_FREQ		1000	/* in msecs */
 
 #define BNAD_MAX_Q_DEPTH		0x10000
 #define BNAD_MIN_Q_DEPTH		0x200
@@ -101,12 +101,12 @@ enum bnad_intr_source {
 
 enum bnad_link_state {
 	BNAD_LS_DOWN		= 0,
-	BNAD_LS_UP 		= 1
+	BNAD_LS_UP		= 1
 };
 
 struct bnad_completion {
-	struct completion 	ioc_comp;
-	struct completion 	ucast_comp;
+	struct completion	ioc_comp;
+	struct completion	ucast_comp;
 	struct completion	mcast_comp;
 	struct completion	tx_comp;
 	struct completion	rx_comp;
@@ -124,7 +124,7 @@ struct bnad_completion {
 
 /* Tx Rx Control Stats */
 struct bnad_drv_stats {
-	u64 		netif_queue_stop;
+	u64		netif_queue_stop;
 	u64		netif_queue_wakeup;
 	u64		netif_queue_stopped;
 	u64		tso4;
@@ -187,7 +187,7 @@ struct bnad_skb_unmap {
 struct bnad_unmap_q {
 	u32		producer_index;
 	u32		consumer_index;
-	u32 		q_depth;
+	u32		q_depth;
 	/* This should be the last one */
 	struct bnad_skb_unmap unmap_array[1];
 };
@@ -210,7 +210,7 @@ struct bnad_unmap_q {
 #define BNAD_RF_RX_SHUTDOWN_DELAYED	7
 
 struct bnad {
-	struct net_device 	*netdev;
+	struct net_device	*netdev;
 
 	/* Data path */
 	struct bnad_tx_info tx_info[BNAD_MAX_TXS];
@@ -244,7 +244,7 @@ struct bnad {
 	u32		cfg_flags;
 	unsigned long		run_flags;
 
-	struct pci_dev 		*pcidev;
+	struct pci_dev		*pcidev;
 	u64		mmio_start;
 	u64		mmio_len;
 
@@ -277,7 +277,7 @@ struct bnad {
 	struct bnad_diag *diag;
 
 	char			adapter_name[BNAD_NAME_LEN];
-	char 			port_name[BNAD_NAME_LEN];
+	char			port_name[BNAD_NAME_LEN];
 	char			mbox_irq_name[BNAD_NAME_LEN];
 };
 
@@ -285,7 +285,7 @@ struct bnad {
  * EXTERN VARIABLES
  */
 extern struct firmware *bfi_fw;
-extern u32 		bnad_rxqs_per_cq;
+extern u32		bnad_rxqs_per_cq;
 
 /*
  * EXTERN PROTOTYPES
@@ -331,7 +331,7 @@ extern void bnad_netdev_hwstats_fill(struct bnad *bnad,
 }
 
 #define bnad_dim_timer_running(_bnad)				\
-	(((_bnad)->cfg_flags & BNAD_CF_DIM_ENABLED) && 		\
+	(((_bnad)->cfg_flags & BNAD_CF_DIM_ENABLED) &&		\
 	(test_bit(BNAD_RF_DIM_TIMER_RUNNING, &((_bnad)->run_flags))))
 
 #endif /* __BNAD_H__ */
diff --git a/drivers/net/bna/bnad_ethtool.c b/drivers/net/bna/bnad_ethtool.c
index 3330cd7..fea07f1 100644
--- a/drivers/net/bna/bnad_ethtool.c
+++ b/drivers/net/bna/bnad_ethtool.c
@@ -295,7 +295,7 @@ get_regs(struct bnad *bnad, u32 * regs)
 	u32 reg_addr;
 	unsigned long flags;
 
-#define BNAD_GET_REG(addr) 					\
+#define BNAD_GET_REG(addr)					\
 do {								\
 	if (regs)						\
 		regs[num++] = readl(bnad->bar0 + (addr));	\
-- 
1.7.1



* [PATCH 02/45] bna: Update ASIC Register Definition to Support New Hardware
@ 2011-07-18  8:19 ` Rasesh Mody
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - Introduce a new file, bfi_reg.h, for the new hardware's registers. This
   file will completely replace bfi_ctreg.h in a later patch.

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
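For reviewers: the new header follows one naming convention throughout --
<FIELD>_MK is the field mask, <FIELD>_SH the shift, and <FIELD>(_v) places a
value into the field. A minimal, hypothetical read-modify-write sketch using
the LCLK PLL reset-timer field defined below (the helper name is invented;
"rb" is the usual ioremap()ed register base):

static void
set_lclk_reset_timer(void __iomem *rb, u32 val)
{
	u32 r32 = readl(rb + APP_PLL_LCLK_CTL_REG);

	r32 &= ~__APP_PLL_LCLK_RESET_TIMER_MK;	/* clear the 3-bit field */
	r32 |= __APP_PLL_LCLK_RESET_TIMER(val);	/* shift val into bits 19:17 */
	writel(r32, rb + APP_PLL_LCLK_CTL_REG);
}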
 drivers/net/bna/bfa_ioc_ct.c |   78 ++++----
 drivers/net/bna/bfi_reg.h    |  452 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 492 insertions(+), 38 deletions(-)
 create mode 100644 drivers/net/bna/bfi_reg.h

diff --git a/drivers/net/bna/bfa_ioc_ct.c b/drivers/net/bna/bfa_ioc_ct.c
index 87aecdf..75ecf7a 100644
--- a/drivers/net/bna/bfa_ioc_ct.c
+++ b/drivers/net/bna/bfa_ioc_ct.c
@@ -19,7 +19,7 @@
 #include "bfa_ioc.h"
 #include "cna.h"
 #include "bfi.h"
-#include "bfi_ctreg.h"
+#include "bfi_reg.h"
 #include "bfa_defs.h"
 
 #define bfa_ioc_ct_sync_pos(__ioc)	\
@@ -178,7 +178,7 @@ bfa_ioc_ct_notify_fail(struct bfa_ioc *ioc)
 		readl(ioc->ioc_regs.ll_halt);
 		readl(ioc->ioc_regs.alt_ll_halt);
 	} else {
-		writel(__PSS_ERR_STATUS_SET, ioc->ioc_regs.err_set);
+		writel(~0U, ioc->ioc_regs.err_set);
 		readl(ioc->ioc_regs.err_set);
 	}
 }
@@ -196,21 +196,21 @@ static struct { u32 hfn_mbox, lpu_mbox, hfn_pgn; } iocreg_fnreg[] = {
 /**
  * Host <-> LPU mailbox command/status registers - port 0
  */
-static struct { u32 hfn, lpu; } iocreg_mbcmd_p0[] = {
-	{ HOSTFN0_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN0_MBOX0_CMD_STAT },
-	{ HOSTFN1_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN1_MBOX0_CMD_STAT },
-	{ HOSTFN2_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN2_MBOX0_CMD_STAT },
-	{ HOSTFN3_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN3_MBOX0_CMD_STAT }
+static struct { u32 hfn, lpu; } ct_p0reg[] = {
+	{ HOSTFN0_LPU0_CMD_STAT, LPU0_HOSTFN0_CMD_STAT },
+	{ HOSTFN1_LPU0_CMD_STAT, LPU0_HOSTFN1_CMD_STAT },
+	{ HOSTFN2_LPU0_CMD_STAT, LPU0_HOSTFN2_CMD_STAT },
+	{ HOSTFN3_LPU0_CMD_STAT, LPU0_HOSTFN3_CMD_STAT }
 };
 
 /**
  * Host <-> LPU mailbox command/status registers - port 1
  */
-static struct { u32 hfn, lpu; } iocreg_mbcmd_p1[] = {
-	{ HOSTFN0_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN0_MBOX0_CMD_STAT },
-	{ HOSTFN1_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN1_MBOX0_CMD_STAT },
-	{ HOSTFN2_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN2_MBOX0_CMD_STAT },
-	{ HOSTFN3_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN3_MBOX0_CMD_STAT }
+static struct { u32 hfn, lpu; } ct_p1reg[] = {
+	{ HOSTFN0_LPU1_CMD_STAT, LPU1_HOSTFN0_CMD_STAT },
+	{ HOSTFN1_LPU1_CMD_STAT, LPU1_HOSTFN1_CMD_STAT },
+	{ HOSTFN2_LPU1_CMD_STAT, LPU1_HOSTFN2_CMD_STAT },
+	{ HOSTFN3_LPU1_CMD_STAT, LPU1_HOSTFN3_CMD_STAT }
 };
 
 static void
@@ -229,16 +229,16 @@ bfa_ioc_ct_reg_init(struct bfa_ioc *ioc)
 		ioc->ioc_regs.heartbeat = rb + BFA_IOC0_HBEAT_REG;
 		ioc->ioc_regs.ioc_fwstate = rb + BFA_IOC0_STATE_REG;
 		ioc->ioc_regs.alt_ioc_fwstate = rb + BFA_IOC1_STATE_REG;
-		ioc->ioc_regs.hfn_mbox_cmd = rb + iocreg_mbcmd_p0[pcifn].hfn;
-		ioc->ioc_regs.lpu_mbox_cmd = rb + iocreg_mbcmd_p0[pcifn].lpu;
+		ioc->ioc_regs.hfn_mbox_cmd = rb + ct_p0reg[pcifn].hfn;
+		ioc->ioc_regs.lpu_mbox_cmd = rb + ct_p0reg[pcifn].lpu;
 		ioc->ioc_regs.ll_halt = rb + FW_INIT_HALT_P0;
 		ioc->ioc_regs.alt_ll_halt = rb + FW_INIT_HALT_P1;
 	} else {
 		ioc->ioc_regs.heartbeat = (rb + BFA_IOC1_HBEAT_REG);
 		ioc->ioc_regs.ioc_fwstate = (rb + BFA_IOC1_STATE_REG);
 		ioc->ioc_regs.alt_ioc_fwstate = rb + BFA_IOC0_STATE_REG;
-		ioc->ioc_regs.hfn_mbox_cmd = rb + iocreg_mbcmd_p1[pcifn].hfn;
-		ioc->ioc_regs.lpu_mbox_cmd = rb + iocreg_mbcmd_p1[pcifn].lpu;
+		ioc->ioc_regs.hfn_mbox_cmd = rb + ct_p1reg[pcifn].hfn;
+		ioc->ioc_regs.lpu_mbox_cmd = rb + ct_p1reg[pcifn].lpu;
 		ioc->ioc_regs.ll_halt = rb + FW_INIT_HALT_P1;
 		ioc->ioc_regs.alt_ll_halt = rb + FW_INIT_HALT_P0;
 	}
@@ -248,8 +248,8 @@ bfa_ioc_ct_reg_init(struct bfa_ioc *ioc)
 	 */
 	ioc->ioc_regs.pss_ctl_reg = (rb + PSS_CTL_REG);
 	ioc->ioc_regs.pss_err_status_reg = (rb + PSS_ERR_STATUS_REG);
-	ioc->ioc_regs.app_pll_fast_ctl_reg = (rb + APP_PLL_425_CTL_REG);
-	ioc->ioc_regs.app_pll_slow_ctl_reg = (rb + APP_PLL_312_CTL_REG);
+	ioc->ioc_regs.app_pll_fast_ctl_reg = (rb + APP_PLL_LCLK_CTL_REG);
+	ioc->ioc_regs.app_pll_slow_ctl_reg = (rb + APP_PLL_SCLK_CTL_REG);
 
 	/*
 	 * IOC semaphore registers and serialization
@@ -446,14 +446,15 @@ bfa_ioc_ct_pll_init(void __iomem *rb, bool fcmode)
 {
 	u32	pll_sclk, pll_fclk, r32;
 
-	pll_sclk = __APP_PLL_312_LRESETN | __APP_PLL_312_ENARST |
-		__APP_PLL_312_RSEL200500 | __APP_PLL_312_P0_1(3U) |
-		__APP_PLL_312_JITLMT0_1(3U) |
-		__APP_PLL_312_CNTLMT0_1(1U);
-	pll_fclk = __APP_PLL_425_LRESETN | __APP_PLL_425_ENARST |
-		__APP_PLL_425_RSEL200500 | __APP_PLL_425_P0_1(3U) |
-		__APP_PLL_425_JITLMT0_1(3U) |
-		__APP_PLL_425_CNTLMT0_1(1U);
+	pll_sclk = __APP_PLL_SCLK_LRESETN | __APP_PLL_SCLK_ENARST |
+		__APP_PLL_SCLK_RSEL200500 | __APP_PLL_SCLK_P0_1(3U) |
+		__APP_PLL_SCLK_JITLMT0_1(3U) |
+		__APP_PLL_SCLK_CNTLMT0_1(1U);
+	pll_fclk = __APP_PLL_LCLK_LRESETN | __APP_PLL_LCLK_ENARST |
+		__APP_PLL_LCLK_RSEL200500 | __APP_PLL_LCLK_P0_1(3U) |
+		__APP_PLL_LCLK_JITLMT0_1(3U) |
+		__APP_PLL_LCLK_CNTLMT0_1(1U);
+
 	if (fcmode) {
 		writel(0, (rb + OP_MODE));
 		writel(__APP_EMS_CMLCKSEL |
@@ -474,27 +475,28 @@ bfa_ioc_ct_pll_init(void __iomem *rb, bool fcmode)
 	writel(0xffffffffU, (rb + HOSTFN0_INT_MSK));
 	writel(0xffffffffU, (rb + HOSTFN1_INT_MSK));
 	writel(pll_sclk |
-		__APP_PLL_312_LOGIC_SOFT_RESET,
-		rb + APP_PLL_312_CTL_REG);
+		__APP_PLL_SCLK_LOGIC_SOFT_RESET,
+		rb + APP_PLL_SCLK_CTL_REG);
 	writel(pll_fclk |
-		__APP_PLL_425_LOGIC_SOFT_RESET,
-		rb + APP_PLL_425_CTL_REG);
+		__APP_PLL_LCLK_LOGIC_SOFT_RESET,
+		rb + APP_PLL_LCLK_CTL_REG);
 	writel(pll_sclk |
-		__APP_PLL_312_LOGIC_SOFT_RESET | __APP_PLL_312_ENABLE,
-		rb + APP_PLL_312_CTL_REG);
+		__APP_PLL_SCLK_LOGIC_SOFT_RESET | __APP_PLL_SCLK_ENABLE,
+		rb + APP_PLL_SCLK_CTL_REG);
 	writel(pll_fclk |
-		__APP_PLL_425_LOGIC_SOFT_RESET | __APP_PLL_425_ENABLE,
-		rb + APP_PLL_425_CTL_REG);
+		__APP_PLL_LCLK_LOGIC_SOFT_RESET | __APP_PLL_LCLK_ENABLE,
+		rb + APP_PLL_LCLK_CTL_REG);
 	readl(rb + HOSTFN0_INT_MSK);
 	udelay(2000);
 	writel(0xffffffffU, (rb + HOSTFN0_INT_STATUS));
 	writel(0xffffffffU, (rb + HOSTFN1_INT_STATUS));
 	writel(pll_sclk |
-		__APP_PLL_312_ENABLE,
-		rb + APP_PLL_312_CTL_REG);
+		__APP_PLL_SCLK_ENABLE,
+		rb + APP_PLL_SCLK_CTL_REG);
 	writel(pll_fclk |
-		__APP_PLL_425_ENABLE,
-		rb + APP_PLL_425_CTL_REG);
+		__APP_PLL_LCLK_ENABLE,
+		rb + APP_PLL_LCLK_CTL_REG);
+
 	if (!fcmode) {
 		writel(__PMM_1T_RESET_P, (rb + PMM_1T_RESET_REG_P0));
 		writel(__PMM_1T_RESET_P, (rb + PMM_1T_RESET_REG_P1));
diff --git a/drivers/net/bna/bfi_reg.h b/drivers/net/bna/bfi_reg.h
new file mode 100644
index 0000000..efacff3
--- /dev/null
+++ b/drivers/net/bna/bfi_reg.h
@@ -0,0 +1,452 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/*
+ * bfi_reg.h ASIC register defines for all Brocade adapter ASICs
+ */
+
+#ifndef __BFI_REG_H__
+#define __BFI_REG_H__
+
+#define HOSTFN0_INT_STATUS		0x00014000	/* cb/ct	*/
+#define HOSTFN1_INT_STATUS		0x00014100	/* cb/ct	*/
+#define HOSTFN2_INT_STATUS		0x00014300	/* ct		*/
+#define HOSTFN3_INT_STATUS		0x00014400	/* ct		*/
+#define HOSTFN0_INT_MSK			0x00014004	/* cb/ct	*/
+#define HOSTFN1_INT_MSK			0x00014104	/* cb/ct	*/
+#define HOSTFN2_INT_MSK			0x00014304	/* ct		*/
+#define HOSTFN3_INT_MSK			0x00014404	/* ct		*/
+
+#define HOST_PAGE_NUM_FN0		0x00014008	/* cb/ct	*/
+#define HOST_PAGE_NUM_FN1		0x00014108	/* cb/ct	*/
+#define HOST_PAGE_NUM_FN2		0x00014308	/* ct		*/
+#define HOST_PAGE_NUM_FN3		0x00014408	/* ct		*/
+
+#define APP_PLL_LCLK_CTL_REG		0x00014204	/* cb/ct	*/
+#define __P_LCLK_PLL_LOCK		0x80000000
+#define __APP_PLL_LCLK_SRAM_USE_100MHZ	0x00100000
+#define __APP_PLL_LCLK_RESET_TIMER_MK	0x000e0000
+#define __APP_PLL_LCLK_RESET_TIMER_SH	17
+#define __APP_PLL_LCLK_RESET_TIMER(_v)	((_v) << __APP_PLL_LCLK_RESET_TIMER_SH)
+#define __APP_PLL_LCLK_LOGIC_SOFT_RESET	0x00010000
+#define __APP_PLL_LCLK_CNTLMT0_1_MK	0x0000c000
+#define __APP_PLL_LCLK_CNTLMT0_1_SH	14
+#define __APP_PLL_LCLK_CNTLMT0_1(_v)	((_v) << __APP_PLL_LCLK_CNTLMT0_1_SH)
+#define __APP_PLL_LCLK_JITLMT0_1_MK	0x00003000
+#define __APP_PLL_LCLK_JITLMT0_1_SH	12
+#define __APP_PLL_LCLK_JITLMT0_1(_v)	((_v) << __APP_PLL_LCLK_JITLMT0_1_SH)
+#define __APP_PLL_LCLK_HREF		0x00000800
+#define __APP_PLL_LCLK_HDIV		0x00000400
+#define __APP_PLL_LCLK_P0_1_MK		0x00000300
+#define __APP_PLL_LCLK_P0_1_SH		8
+#define __APP_PLL_LCLK_P0_1(_v)		((_v) << __APP_PLL_LCLK_P0_1_SH)
+#define __APP_PLL_LCLK_Z0_2_MK		0x000000e0
+#define __APP_PLL_LCLK_Z0_2_SH		5
+#define __APP_PLL_LCLK_Z0_2(_v)		((_v) << __APP_PLL_LCLK_Z0_2_SH)
+#define __APP_PLL_LCLK_RSEL200500	0x00000010
+#define __APP_PLL_LCLK_ENARST		0x00000008
+#define __APP_PLL_LCLK_BYPASS		0x00000004
+#define __APP_PLL_LCLK_LRESETN		0x00000002
+#define __APP_PLL_LCLK_ENABLE		0x00000001
+#define APP_PLL_SCLK_CTL_REG		0x00014208	/* cb/ct	*/
+#define __P_SCLK_PLL_LOCK		0x80000000
+#define __APP_PLL_SCLK_RESET_TIMER_MK	0x000e0000
+#define __APP_PLL_SCLK_RESET_TIMER_SH	17
+#define __APP_PLL_SCLK_RESET_TIMER(_v)	((_v) << __APP_PLL_SCLK_RESET_TIMER_SH)
+#define __APP_PLL_SCLK_LOGIC_SOFT_RESET	0x00010000
+#define __APP_PLL_SCLK_CNTLMT0_1_MK	0x0000c000
+#define __APP_PLL_SCLK_CNTLMT0_1_SH	14
+#define __APP_PLL_SCLK_CNTLMT0_1(_v)	((_v) << __APP_PLL_SCLK_CNTLMT0_1_SH)
+#define __APP_PLL_SCLK_JITLMT0_1_MK	0x00003000
+#define __APP_PLL_SCLK_JITLMT0_1_SH	12
+#define __APP_PLL_SCLK_JITLMT0_1(_v)	((_v) << __APP_PLL_SCLK_JITLMT0_1_SH)
+#define __APP_PLL_SCLK_HREF		0x00000800
+#define __APP_PLL_SCLK_HDIV		0x00000400
+#define __APP_PLL_SCLK_P0_1_MK		0x00000300
+#define __APP_PLL_SCLK_P0_1_SH		8
+#define __APP_PLL_SCLK_P0_1(_v)		((_v) << __APP_PLL_SCLK_P0_1_SH)
+#define __APP_PLL_SCLK_Z0_2_MK		0x000000e0
+#define __APP_PLL_SCLK_Z0_2_SH		5
+#define __APP_PLL_SCLK_Z0_2(_v)		((_v) << __APP_PLL_SCLK_Z0_2_SH)
+#define __APP_PLL_SCLK_RSEL200500	0x00000010
+#define __APP_PLL_SCLK_ENARST		0x00000008
+#define __APP_PLL_SCLK_BYPASS		0x00000004
+#define __APP_PLL_SCLK_LRESETN		0x00000002
+#define __APP_PLL_SCLK_ENABLE		0x00000001
+#define __ENABLE_MAC_AHB_1		0x00800000	/* ct		*/
+#define __ENABLE_MAC_AHB_0		0x00400000	/* ct		*/
+#define __ENABLE_MAC_1			0x00200000	/* ct		*/
+#define __ENABLE_MAC_0			0x00100000	/* ct		*/
+
+#define HOST_SEM0_REG			0x00014230	/* cb/ct	*/
+#define HOST_SEM1_REG			0x00014234	/* cb/ct	*/
+#define HOST_SEM2_REG			0x00014238	/* cb/ct	*/
+#define HOST_SEM3_REG			0x0001423c	/* cb/ct	*/
+#define HOST_SEM4_REG			0x00014610	/* cb/ct	*/
+#define HOST_SEM5_REG			0x00014614	/* cb/ct	*/
+#define HOST_SEM6_REG			0x00014618	/* cb/ct	*/
+#define HOST_SEM7_REG			0x0001461c	/* cb/ct	*/
+#define HOST_SEM0_INFO_REG		0x00014240	/* cb/ct	*/
+#define HOST_SEM1_INFO_REG		0x00014244	/* cb/ct	*/
+#define HOST_SEM2_INFO_REG		0x00014248	/* cb/ct	*/
+#define HOST_SEM3_INFO_REG		0x0001424c	/* cb/ct	*/
+#define HOST_SEM4_INFO_REG		0x00014620	/* cb/ct	*/
+#define HOST_SEM5_INFO_REG		0x00014624	/* cb/ct	*/
+#define HOST_SEM6_INFO_REG		0x00014628	/* cb/ct	*/
+#define HOST_SEM7_INFO_REG		0x0001462c	/* cb/ct	*/
+
+#define HOSTFN0_LPU0_CMD_STAT		0x00019000	/* cb/ct	*/
+#define HOSTFN0_LPU1_CMD_STAT		0x00019004	/* cb/ct	*/
+#define HOSTFN1_LPU0_CMD_STAT		0x00019010	/* cb/ct	*/
+#define HOSTFN1_LPU1_CMD_STAT		0x00019014	/* cb/ct	*/
+#define HOSTFN2_LPU0_CMD_STAT		0x00019150	/* ct		*/
+#define HOSTFN2_LPU1_CMD_STAT		0x00019154	/* ct		*/
+#define HOSTFN3_LPU0_CMD_STAT		0x00019160	/* ct		*/
+#define HOSTFN3_LPU1_CMD_STAT		0x00019164	/* ct		*/
+#define LPU0_HOSTFN0_CMD_STAT		0x00019008	/* cb/ct	*/
+#define LPU1_HOSTFN0_CMD_STAT		0x0001900c	/* cb/ct	*/
+#define LPU0_HOSTFN1_CMD_STAT		0x00019018	/* cb/ct	*/
+#define LPU1_HOSTFN1_CMD_STAT		0x0001901c	/* cb/ct	*/
+#define LPU0_HOSTFN2_CMD_STAT		0x00019158	/* ct		*/
+#define LPU1_HOSTFN2_CMD_STAT		0x0001915c	/* ct		*/
+#define LPU0_HOSTFN3_CMD_STAT		0x00019168	/* ct		*/
+#define LPU1_HOSTFN3_CMD_STAT		0x0001916c	/* ct		*/
+
+#define PSS_CTL_REG			0x00018800	/* cb/ct	*/
+#define __PSS_I2C_CLK_DIV_MK		0x007f0000
+#define __PSS_I2C_CLK_DIV_SH		16
+#define __PSS_I2C_CLK_DIV(_v)		((_v) << __PSS_I2C_CLK_DIV_SH)
+#define __PSS_LMEM_INIT_DONE		0x00001000
+#define __PSS_LMEM_RESET		0x00000200
+#define __PSS_LMEM_INIT_EN		0x00000100
+#define __PSS_LPU1_RESET		0x00000002
+#define __PSS_LPU0_RESET		0x00000001
+#define PSS_ERR_STATUS_REG		0x00018810	/* cb/ct	*/
+#define ERR_SET_REG			0x00018818	/* cb/ct	*/
+#define PSS_GPIO_OUT_REG		0x000188c0	/* cb/ct	*/
+#define __PSS_GPIO_OUT_REG		0x00000fff
+#define PSS_GPIO_OE_REG			0x000188c8	/* cb/ct	*/
+#define __PSS_GPIO_OE_REG		0x000000ff
+
+#define HOSTFN0_LPU_MBOX0_0		0x00019200	/* cb/ct	*/
+#define HOSTFN1_LPU_MBOX0_8		0x00019260	/* cb/ct	*/
+#define LPU_HOSTFN0_MBOX0_0		0x00019280	/* cb/ct	*/
+#define LPU_HOSTFN1_MBOX0_8		0x000192e0	/* cb/ct	*/
+#define HOSTFN2_LPU_MBOX0_0		0x00019400	/* ct		*/
+#define HOSTFN3_LPU_MBOX0_8		0x00019460	/* ct		*/
+#define LPU_HOSTFN2_MBOX0_0		0x00019480	/* ct		*/
+#define LPU_HOSTFN3_MBOX0_8		0x000194e0	/* ct		*/
+
+#define HOST_MSIX_ERR_INDEX_FN0		0x0001400c	/* ct		*/
+#define HOST_MSIX_ERR_INDEX_FN1		0x0001410c	/* ct		*/
+#define HOST_MSIX_ERR_INDEX_FN2		0x0001430c	/* ct		*/
+#define HOST_MSIX_ERR_INDEX_FN3		0x0001440c	/* ct		*/
+
+#define MBIST_CTL_REG			0x00014220	/* ct		*/
+#define __EDRAM_BISTR_START		0x00000004
+#define MBIST_STAT_REG			0x00014224	/* ct		*/
+#define ETH_MAC_SER_REG			0x00014288	/* ct		*/
+#define __APP_EMS_CKBUFAMPIN		0x00000020
+#define __APP_EMS_REFCLKSEL		0x00000010
+#define __APP_EMS_CMLCKSEL		0x00000008
+#define __APP_EMS_REFCKBUFEN2		0x00000004
+#define __APP_EMS_REFCKBUFEN1		0x00000002
+#define __APP_EMS_CHANNEL_SEL		0x00000001
+#define FNC_PERS_REG			0x00014604	/* ct		*/
+#define __F3_FUNCTION_ACTIVE		0x80000000
+#define __F3_FUNCTION_MODE		0x40000000
+#define __F3_PORT_MAP_MK		0x30000000
+#define __F3_PORT_MAP_SH		28
+#define __F3_PORT_MAP(_v)		((_v) << __F3_PORT_MAP_SH)
+#define __F3_VM_MODE			0x08000000
+#define __F3_INTX_STATUS_MK		0x07000000
+#define __F3_INTX_STATUS_SH		24
+#define __F3_INTX_STATUS(_v)		((_v) << __F3_INTX_STATUS_SH)
+#define __F2_FUNCTION_ACTIVE		0x00800000
+#define __F2_FUNCTION_MODE		0x00400000
+#define __F2_PORT_MAP_MK		0x00300000
+#define __F2_PORT_MAP_SH		20
+#define __F2_PORT_MAP(_v)		((_v) << __F2_PORT_MAP_SH)
+#define __F2_VM_MODE			0x00080000
+#define __F2_INTX_STATUS_MK		0x00070000
+#define __F2_INTX_STATUS_SH		16
+#define __F2_INTX_STATUS(_v)		((_v) << __F2_INTX_STATUS_SH)
+#define __F1_FUNCTION_ACTIVE		0x00008000
+#define __F1_FUNCTION_MODE		0x00004000
+#define __F1_PORT_MAP_MK		0x00003000
+#define __F1_PORT_MAP_SH		12
+#define __F1_PORT_MAP(_v)		((_v) << __F1_PORT_MAP_SH)
+#define __F1_VM_MODE			0x00000800
+#define __F1_INTX_STATUS_MK		0x00000700
+#define __F1_INTX_STATUS_SH		8
+#define __F1_INTX_STATUS(_v)		((_v) << __F1_INTX_STATUS_SH)
+#define __F0_FUNCTION_ACTIVE		0x00000080
+#define __F0_FUNCTION_MODE		0x00000040
+#define __F0_PORT_MAP_MK		0x00000030
+#define __F0_PORT_MAP_SH		4
+#define __F0_PORT_MAP(_v)		((_v) << __F0_PORT_MAP_SH)
+#define __F0_VM_MODE			0x00000008
+#define __F0_INTX_STATUS		0x00000007
+enum {
+	__F0_INTX_STATUS_MSIX = 0x0,
+	__F0_INTX_STATUS_INTA = 0x1,
+	__F0_INTX_STATUS_INTB = 0x2,
+	__F0_INTX_STATUS_INTC = 0x3,
+	__F0_INTX_STATUS_INTD = 0x4,
+};
+
+#define OP_MODE				0x0001460c
+#define __APP_ETH_CLK_LOWSPEED		0x00000004
+#define __GLOBAL_CORECLK_HALFSPEED	0x00000002
+#define __GLOBAL_FCOE_MODE		0x00000001
+#define FW_INIT_HALT_P0			0x000191ac
+#define __FW_INIT_HALT_P		0x00000001
+#define FW_INIT_HALT_P1			0x000191bc
+#define PMM_1T_RESET_REG_P0		0x0002381c
+#define __PMM_1T_RESET_P		0x00000001
+#define PMM_1T_RESET_REG_P1		0x00023c1c
+
+/*
+ * Brocade 1860 Adapter specific defines
+ */
+#define CT2_PCI_CPQ_BASE		0x00030000
+#define CT2_PCI_APP_BASE		0x00030100
+#define CT2_PCI_ETH_BASE		0x00030400
+
+/*
+ * APP block registers
+ */
+#define CT2_HOSTFN_INT_STATUS		(CT2_PCI_APP_BASE + 0x00)
+#define CT2_HOSTFN_INTR_MASK		(CT2_PCI_APP_BASE + 0x04)
+#define CT2_HOSTFN_PERSONALITY0		(CT2_PCI_APP_BASE + 0x08)
+#define __PME_STATUS_			0x00200000
+#define __PF_VF_BAR_SIZE_MODE__MK	0x00180000
+#define __PF_VF_BAR_SIZE_MODE__SH	19
+#define __PF_VF_BAR_SIZE_MODE_(_v)	((_v) << __PF_VF_BAR_SIZE_MODE__SH)
+#define __FC_LL_PORT_MAP__MK		0x00060000
+#define __FC_LL_PORT_MAP__SH		17
+#define __FC_LL_PORT_MAP_(_v)		((_v) << __FC_LL_PORT_MAP__SH)
+#define __PF_VF_ACTIVE_			0x00010000
+#define __PF_VF_CFG_RDY_		0x00008000
+#define __PF_VF_ENABLE_			0x00004000
+#define __PF_DRIVER_ACTIVE_		0x00002000
+#define __PF_PME_SEND_ENABLE_		0x00001000
+#define __PF_EXROM_OFFSET__MK		0x00000ff0
+#define __PF_EXROM_OFFSET__SH		4
+#define __PF_EXROM_OFFSET_(_v)		((_v) << __PF_EXROM_OFFSET__SH)
+#define __FC_LL_MODE_			0x00000008
+#define __PF_INTX_PIN_			0x00000007
+#define CT2_HOSTFN_PERSONALITY1		(CT2_PCI_APP_BASE + 0x0C)
+#define __PF_NUM_QUEUES1__MK		0xff000000
+#define __PF_NUM_QUEUES1__SH		24
+#define __PF_NUM_QUEUES1_(_v)		((_v) << __PF_NUM_QUEUES1__SH)
+#define __PF_VF_QUE_OFFSET1__MK		0x00ff0000
+#define __PF_VF_QUE_OFFSET1__SH		16
+#define __PF_VF_QUE_OFFSET1_(_v)	((_v) << __PF_VF_QUE_OFFSET1__SH)
+#define __PF_VF_NUM_QUEUES__MK		0x0000ff00
+#define __PF_VF_NUM_QUEUES__SH		8
+#define __PF_VF_NUM_QUEUES_(_v)		((_v) << __PF_VF_NUM_QUEUES__SH)
+#define __PF_VF_QUE_OFFSET_		0x000000ff
+#define CT2_HOSTFN_PAGE_NUM		(CT2_PCI_APP_BASE + 0x18)
+#define CT2_HOSTFN_MSIX_VT_INDEX_MBOX_ERR	(CT2_PCI_APP_BASE + 0x38)
+
+/*
+ * Brocade 1860 adapter CPQ block registers
+ */
+#define CT2_HOSTFN_LPU0_MBOX0		(CT2_PCI_CPQ_BASE + 0x00)
+#define CT2_HOSTFN_LPU1_MBOX0		(CT2_PCI_CPQ_BASE + 0x20)
+#define CT2_LPU0_HOSTFN_MBOX0		(CT2_PCI_CPQ_BASE + 0x40)
+#define CT2_LPU1_HOSTFN_MBOX0		(CT2_PCI_CPQ_BASE + 0x60)
+#define CT2_HOSTFN_LPU0_CMD_STAT	(CT2_PCI_CPQ_BASE + 0x80)
+#define CT2_HOSTFN_LPU1_CMD_STAT	(CT2_PCI_CPQ_BASE + 0x84)
+#define CT2_LPU0_HOSTFN_CMD_STAT	(CT2_PCI_CPQ_BASE + 0x88)
+#define CT2_LPU1_HOSTFN_CMD_STAT	(CT2_PCI_CPQ_BASE + 0x8c)
+#define CT2_HOSTFN_LPU0_READ_STAT	(CT2_PCI_CPQ_BASE + 0x90)
+#define CT2_HOSTFN_LPU1_READ_STAT	(CT2_PCI_CPQ_BASE + 0x94)
+#define CT2_LPU0_HOSTFN_MBOX0_MSK	(CT2_PCI_CPQ_BASE + 0x98)
+#define CT2_LPU1_HOSTFN_MBOX0_MSK	(CT2_PCI_CPQ_BASE + 0x9C)
+#define CT2_HOST_SEM0_REG		0x000148f0
+#define CT2_HOST_SEM1_REG		0x000148f4
+#define CT2_HOST_SEM2_REG		0x000148f8
+#define CT2_HOST_SEM3_REG		0x000148fc
+#define CT2_HOST_SEM4_REG		0x00014900
+#define CT2_HOST_SEM5_REG		0x00014904
+#define CT2_HOST_SEM6_REG		0x00014908
+#define CT2_HOST_SEM7_REG		0x0001490c
+#define CT2_HOST_SEM0_INFO_REG		0x000148b0
+#define CT2_HOST_SEM1_INFO_REG		0x000148b4
+#define CT2_HOST_SEM2_INFO_REG		0x000148b8
+#define CT2_HOST_SEM3_INFO_REG		0x000148bc
+#define CT2_HOST_SEM4_INFO_REG		0x000148c0
+#define CT2_HOST_SEM5_INFO_REG		0x000148c4
+#define CT2_HOST_SEM6_INFO_REG		0x000148c8
+#define CT2_HOST_SEM7_INFO_REG		0x000148cc
+
+#define CT2_APP_PLL_LCLK_CTL_REG	0x00014808
+#define __APP_LPUCLK_HALFSPEED		0x40000000
+#define __APP_PLL_LCLK_LOAD		0x20000000
+#define __APP_PLL_LCLK_FBCNT_MK		0x1fe00000
+#define __APP_PLL_LCLK_FBCNT_SH		21
+#define __APP_PLL_LCLK_FBCNT(_v)	((_v) << __APP_PLL_LCLK_FBCNT_SH)
+enum {
+	__APP_PLL_LCLK_FBCNT_425_MHZ = 6,
+	__APP_PLL_LCLK_FBCNT_468_MHZ = 4,
+};
+#define __APP_PLL_LCLK_EXTFB		0x00000800
+#define __APP_PLL_LCLK_ENOUTS		0x00000400
+#define __APP_PLL_LCLK_RATE		0x00000010
+#define CT2_APP_PLL_SCLK_CTL_REG	0x0001480c
+#define __P_SCLK_PLL_LOCK		0x80000000
+#define __APP_PLL_SCLK_REFCLK_SEL	0x40000000
+#define __APP_PLL_SCLK_CLK_DIV2		0x20000000
+#define __APP_PLL_SCLK_LOAD		0x10000000
+#define __APP_PLL_SCLK_FBCNT_MK		0x0ff00000
+#define __APP_PLL_SCLK_FBCNT_SH		20
+#define __APP_PLL_SCLK_FBCNT(_v)	((_v) << __APP_PLL_SCLK_FBCNT_SH)
+enum {
+	__APP_PLL_SCLK_FBCNT_NORM = 6,
+	__APP_PLL_SCLK_FBCNT_10G_FC = 10,
+};
+#define __APP_PLL_SCLK_EXTFB		0x00000800
+#define __APP_PLL_SCLK_ENOUTS		0x00000400
+#define __APP_PLL_SCLK_RATE		0x00000010
+#define CT2_PCIE_MISC_REG		0x00014804
+#define __ETH_CLK_ENABLE_PORT1		0x00000010
+#define CT2_CHIP_MISC_PRG		0x000148a4
+#define __ETH_CLK_ENABLE_PORT0		0x00004000
+#define __APP_LPU_SPEED			0x00000002
+#define CT2_MBIST_STAT_REG		0x00014818
+#define CT2_MBIST_CTL_REG		0x0001481c
+#define CT2_PMM_1T_CONTROL_REG_P0	0x0002381c
+#define __PMM_1T_PNDB_P			0x00000002
+#define CT2_PMM_1T_CONTROL_REG_P1	0x00023c1c
+#define CT2_WGN_STATUS			0x00014990
+#define __A2T_AHB_LOAD			0x00000800
+#define __WGN_READY			0x00000400
+#define __GLBL_PF_VF_CFG_RDY		0x00000200
+#define CT2_NFC_CSR_SET_REG		0x00027424
+#define __HALT_NFC_CONTROLLER		0x00000002
+#define __NFC_CONTROLLER_HALTED		0x00001000
+
+#define CT2_CSI_MAC0_CONTROL_REG	0x000270d0
+#define __CSI_MAC_RESET			0x00000010
+#define __CSI_MAC_AHB_RESET		0x00000008
+#define CT2_CSI_MAC1_CONTROL_REG	0x000270d4
+#define CT2_CSI_MAC_CONTROL_REG(__n) \
+	(CT2_CSI_MAC0_CONTROL_REG + \
+	(__n) * (CT2_CSI_MAC1_CONTROL_REG - CT2_CSI_MAC0_CONTROL_REG))
+
+/*
+ * Name semaphore registers based on usage
+ */
+#define BFA_IOC0_HBEAT_REG		HOST_SEM0_INFO_REG
+#define BFA_IOC0_STATE_REG		HOST_SEM1_INFO_REG
+#define BFA_IOC1_HBEAT_REG		HOST_SEM2_INFO_REG
+#define BFA_IOC1_STATE_REG		HOST_SEM3_INFO_REG
+#define BFA_FW_USE_COUNT		HOST_SEM4_INFO_REG
+#define BFA_IOC_FAIL_SYNC		HOST_SEM5_INFO_REG
+
+/*
+ * CT2 semaphore register locations changed
+ */
+#define CT2_BFA_IOC0_HBEAT_REG		CT2_HOST_SEM0_INFO_REG
+#define CT2_BFA_IOC0_STATE_REG		CT2_HOST_SEM1_INFO_REG
+#define CT2_BFA_IOC1_HBEAT_REG		CT2_HOST_SEM2_INFO_REG
+#define CT2_BFA_IOC1_STATE_REG		CT2_HOST_SEM3_INFO_REG
+#define CT2_BFA_FW_USE_COUNT		CT2_HOST_SEM4_INFO_REG
+#define CT2_BFA_IOC_FAIL_SYNC		CT2_HOST_SEM5_INFO_REG
+
+#define CPE_Q_NUM(__fn, __q)	(((__fn) << 2) + (__q))
+#define RME_Q_NUM(__fn, __q)	(((__fn) << 2) + (__q))
+
+/*
+ * And corresponding host interrupt status bit field defines
+ */
+#define __HFN_INT_CPE_Q0	0x00000001U
+#define __HFN_INT_CPE_Q1	0x00000002U
+#define __HFN_INT_CPE_Q2	0x00000004U
+#define __HFN_INT_CPE_Q3	0x00000008U
+#define __HFN_INT_CPE_Q4	0x00000010U
+#define __HFN_INT_CPE_Q5	0x00000020U
+#define __HFN_INT_CPE_Q6	0x00000040U
+#define __HFN_INT_CPE_Q7	0x00000080U
+#define __HFN_INT_RME_Q0	0x00000100U
+#define __HFN_INT_RME_Q1	0x00000200U
+#define __HFN_INT_RME_Q2	0x00000400U
+#define __HFN_INT_RME_Q3	0x00000800U
+#define __HFN_INT_RME_Q4	0x00001000U
+#define __HFN_INT_RME_Q5	0x00002000U
+#define __HFN_INT_RME_Q6	0x00004000U
+#define __HFN_INT_RME_Q7	0x00008000U
+#define __HFN_INT_ERR_EMC	0x00010000U
+#define __HFN_INT_ERR_LPU0	0x00020000U
+#define __HFN_INT_ERR_LPU1	0x00040000U
+#define __HFN_INT_ERR_PSS	0x00080000U
+#define __HFN_INT_MBOX_LPU0	0x00100000U
+#define __HFN_INT_MBOX_LPU1	0x00200000U
+#define __HFN_INT_MBOX1_LPU0	0x00400000U
+#define __HFN_INT_MBOX1_LPU1	0x00800000U
+#define __HFN_INT_LL_HALT	0x01000000U
+#define __HFN_INT_CPE_MASK	0x000000ffU
+#define __HFN_INT_RME_MASK	0x0000ff00U
+#define __HFN_INT_ERR_MASK	\
+	(__HFN_INT_ERR_EMC | __HFN_INT_ERR_LPU0 | __HFN_INT_ERR_LPU1 | \
+	 __HFN_INT_ERR_PSS | __HFN_INT_LL_HALT)
+#define __HFN_INT_FN0_MASK	\
+	(__HFN_INT_CPE_Q0 | __HFN_INT_CPE_Q1 | __HFN_INT_CPE_Q2 | \
+	 __HFN_INT_CPE_Q3 | __HFN_INT_RME_Q0 | __HFN_INT_RME_Q1 | \
+	 __HFN_INT_RME_Q2 | __HFN_INT_RME_Q3 | __HFN_INT_MBOX_LPU0)
+#define __HFN_INT_FN1_MASK	\
+	(__HFN_INT_CPE_Q4 | __HFN_INT_CPE_Q5 | __HFN_INT_CPE_Q6 | \
+	 __HFN_INT_CPE_Q7 | __HFN_INT_RME_Q4 | __HFN_INT_RME_Q5 | \
+	 __HFN_INT_RME_Q6 | __HFN_INT_RME_Q7 | __HFN_INT_MBOX_LPU1)
+
+/*
+ * Host interrupt status defines for 1860
+ */
+#define __HFN_INT_MBOX_LPU0_CT2	0x00010000U
+#define __HFN_INT_MBOX_LPU1_CT2	0x00020000U
+#define __HFN_INT_ERR_PSS_CT2	0x00040000U
+#define __HFN_INT_ERR_LPU0_CT2	0x00080000U
+#define __HFN_INT_ERR_LPU1_CT2	0x00100000U
+#define __HFN_INT_CPQ_HALT_CT2	0x00200000U
+#define __HFN_INT_ERR_WGN_CT2	0x00400000U
+#define __HFN_INT_ERR_LEHRX_CT2	0x00800000U
+#define __HFN_INT_ERR_LEHTX_CT2	0x01000000U
+#define __HFN_INT_ERR_MASK_CT2	\
+	(__HFN_INT_ERR_PSS_CT2 | __HFN_INT_ERR_LPU0_CT2 | \
+	 __HFN_INT_ERR_LPU1_CT2 | __HFN_INT_CPQ_HALT_CT2 | \
+	 __HFN_INT_ERR_WGN_CT2 | __HFN_INT_ERR_LEHRX_CT2 | \
+	 __HFN_INT_ERR_LEHTX_CT2)
+#define __HFN_INT_FN0_MASK_CT2	\
+	(__HFN_INT_CPE_Q0 | __HFN_INT_CPE_Q1 | __HFN_INT_CPE_Q2 | \
+	 __HFN_INT_CPE_Q3 | __HFN_INT_RME_Q0 | __HFN_INT_RME_Q1 | \
+	 __HFN_INT_RME_Q2 | __HFN_INT_RME_Q3 | __HFN_INT_MBOX_LPU0_CT2)
+#define __HFN_INT_FN1_MASK_CT2	\
+	(__HFN_INT_CPE_Q4 | __HFN_INT_CPE_Q5 | __HFN_INT_CPE_Q6 | \
+	 __HFN_INT_CPE_Q7 | __HFN_INT_RME_Q4 | __HFN_INT_RME_Q5 | \
+	 __HFN_INT_RME_Q6 | __HFN_INT_RME_Q7 | __HFN_INT_MBOX_LPU1_CT2)
+
+/*
+ * asic memory map.
+ */
+#define PSS_SMEM_PAGE_START		0x8000
+#define PSS_SMEM_PGNUM(_pg0, _ma)	((_pg0) + ((_ma) >> 15))
+#define PSS_SMEM_PGOFF(_ma)		((_ma) & 0x7fff)
+
+#endif /* __BFI_REG_H__ */
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 03/45] bna: Change IOC Event Notification Call Back Mechanism
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
  2011-07-18  8:19 ` [PATCH 01/45] bna: Checkpatch Cleanup Rasesh Mody
  2011-07-18  8:19 ` [PATCH 02/45] bna: Update ASIC Register Definition to Support New Hardware Rasesh Mody
@ 2011-07-18  8:19 ` Rasesh Mody
  2011-07-18  8:19 ` [PATCH 04/45] bna: State Machine Fault Handling Cleanup Rasesh Mody
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - Add a generic event notification callback mechanism that receives
   IOC enable, disable and failed events and notifies the registered
   modules of these events (see the registration sketch below).
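
For reference, a minimal sketch of the client-side registration pattern,
mirroring what bfa_cee does in the hunks below; "my_mod" and its callback
are purely illustrative, while the bfa_ioc_notify calls are the ones this
patch introduces:

	static void
	my_mod_notify(void *arg, enum bfa_ioc_event event)
	{
		struct my_mod *mod = (struct my_mod *)arg;

		switch (event) {
		case BFA_IOC_E_DISABLED:
		case BFA_IOC_E_FAILED:
			/* fail mod's pending requests, as bfa_cee_notify() does */
			break;
		default:
			break;
		}
	}

	/* registration, typically at attach time */
	bfa_q_qe_init(&mod->ioc_notify);
	bfa_ioc_notify_init(&mod->ioc_notify, my_mod_notify, mod);
	bfa_nw_ioc_notify_register(ioc, &mod->ioc_notify);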

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfa_cee.c |   65 +++++++++++++++++++++++++++------------------
 drivers/net/bna/bfa_cee.h |    3 +-
 drivers/net/bna/bfa_ioc.c |   54 +++++++++++++++++++++++--------------
 drivers/net/bna/bfa_ioc.h |   31 ++++++++++++++++++---
 4 files changed, 99 insertions(+), 54 deletions(-)

diff --git a/drivers/net/bna/bfa_cee.c b/drivers/net/bna/bfa_cee.c
index dcfbf08..39e5ab9 100644
--- a/drivers/net/bna/bfa_cee.c
+++ b/drivers/net/bna/bfa_cee.c
@@ -223,44 +223,56 @@ bfa_cee_isr(void *cbarg, struct bfi_mbmsg *m)
 }
 
 /**
- * bfa_cee_hbfail()
+ * bfa_cee_notify()
  *
  * @brief CEE module heart-beat failure handler.
+ * @brief CEE module IOC event handler.
  *
- * @param[in] Pointer to the CEE module data structure.
+ * @param[in] IOC event type
  *
  * @return void
  */
 
 static void
-bfa_cee_hbfail(void *arg)
+bfa_cee_notify(void *arg, enum bfa_ioc_event event)
 {
 	struct bfa_cee *cee;
-	cee = arg;
+	cee = (struct bfa_cee *) arg;
 
-	if (cee->get_attr_pending == true) {
-		cee->get_attr_status = BFA_STATUS_FAILED;
-		cee->get_attr_pending  = false;
-		if (cee->cbfn.get_attr_cbfn) {
-			cee->cbfn.get_attr_cbfn(cee->cbfn.get_attr_cbarg,
-			    BFA_STATUS_FAILED);
+	switch (event) {
+	case BFA_IOC_E_DISABLED:
+	case BFA_IOC_E_FAILED:
+		if (cee->get_attr_pending == true) {
+			cee->get_attr_status = BFA_STATUS_FAILED;
+			cee->get_attr_pending  = false;
+			if (cee->cbfn.get_attr_cbfn) {
+				cee->cbfn.get_attr_cbfn(
+					cee->cbfn.get_attr_cbarg,
+					BFA_STATUS_FAILED);
+			}
 		}
-	}
-	if (cee->get_stats_pending == true) {
-		cee->get_stats_status = BFA_STATUS_FAILED;
-		cee->get_stats_pending  = false;
-		if (cee->cbfn.get_stats_cbfn) {
-			cee->cbfn.get_stats_cbfn(cee->cbfn.get_stats_cbarg,
-			    BFA_STATUS_FAILED);
+		if (cee->get_stats_pending == true) {
+			cee->get_stats_status = BFA_STATUS_FAILED;
+			cee->get_stats_pending  = false;
+			if (cee->cbfn.get_stats_cbfn) {
+				cee->cbfn.get_stats_cbfn(
+					cee->cbfn.get_stats_cbarg,
+					BFA_STATUS_FAILED);
+			}
 		}
-	}
-	if (cee->reset_stats_pending == true) {
-		cee->reset_stats_status = BFA_STATUS_FAILED;
-		cee->reset_stats_pending  = false;
-		if (cee->cbfn.reset_stats_cbfn) {
-			cee->cbfn.reset_stats_cbfn(cee->cbfn.reset_stats_cbarg,
-			    BFA_STATUS_FAILED);
+		if (cee->reset_stats_pending == true) {
+			cee->reset_stats_status = BFA_STATUS_FAILED;
+			cee->reset_stats_pending  = false;
+			if (cee->cbfn.reset_stats_cbfn) {
+				cee->cbfn.reset_stats_cbfn(
+					cee->cbfn.reset_stats_cbarg,
+					BFA_STATUS_FAILED);
+			}
 		}
+		break;
+
+	default:
+		break;
 	}
 }
 
@@ -286,6 +298,7 @@ bfa_nw_cee_attach(struct bfa_cee *cee, struct bfa_ioc *ioc,
 	cee->ioc = ioc;
 
 	bfa_nw_ioc_mbox_regisr(cee->ioc, BFI_MC_CEE, bfa_cee_isr, cee);
-	bfa_ioc_hbfail_init(&cee->hbfail, bfa_cee_hbfail, cee);
-	bfa_nw_ioc_hbfail_register(cee->ioc, &cee->hbfail);
+	bfa_q_qe_init(&cee->ioc_notify);
+	bfa_ioc_notify_init(&cee->ioc_notify, bfa_cee_notify, cee);
+	bfa_nw_ioc_notify_register(cee->ioc, &cee->ioc_notify);
 }
diff --git a/drivers/net/bna/bfa_cee.h b/drivers/net/bna/bfa_cee.h
index 20543d1..58d54e9 100644
--- a/drivers/net/bna/bfa_cee.h
+++ b/drivers/net/bna/bfa_cee.h
@@ -25,7 +25,6 @@
 typedef void (*bfa_cee_get_attr_cbfn_t) (void *dev, enum bfa_status status);
 typedef void (*bfa_cee_get_stats_cbfn_t) (void *dev, enum bfa_status status);
 typedef void (*bfa_cee_reset_stats_cbfn_t) (void *dev, enum bfa_status status);
-typedef void (*bfa_cee_hbfail_cbfn_t) (void *dev, enum bfa_status status);
 
 struct bfa_cee_cbfn {
 	bfa_cee_get_attr_cbfn_t    get_attr_cbfn;
@@ -45,7 +44,7 @@ struct bfa_cee {
 	enum bfa_status get_stats_status;
 	enum bfa_status reset_stats_status;
 	struct bfa_cee_cbfn cbfn;
-	struct bfa_ioc_hbfail_notify hbfail;
+	struct bfa_ioc_notify ioc_notify;
 	struct bfa_cee_attr *attr;
 	struct bfa_cee_stats *stats;
 	struct bfa_dma attr_dma;
diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index 04bfb29..c52ef63 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -71,6 +71,7 @@ static void bfa_ioc_mbox_poll(struct bfa_ioc *ioc);
 static void bfa_ioc_mbox_hbfail(struct bfa_ioc *ioc);
 static void bfa_ioc_recover(struct bfa_ioc *ioc);
 static void bfa_ioc_check_attr_wwns(struct bfa_ioc *ioc);
+static void bfa_ioc_event_notify(struct bfa_ioc *, enum bfa_ioc_event);
 static void bfa_ioc_disable_comp(struct bfa_ioc *ioc);
 static void bfa_ioc_lpu_stop(struct bfa_ioc *ioc);
 static void bfa_ioc_fail_notify(struct bfa_ioc *ioc);
@@ -1123,23 +1124,28 @@ bfa_iocpf_sm_fail(struct bfa_iocpf *iocpf, enum iocpf_event event)
  * BFA IOC private functions
  */
 
+/**
+ * Notify common modules registered for notification.
+ */
 static void
-bfa_ioc_disable_comp(struct bfa_ioc *ioc)
+bfa_ioc_event_notify(struct bfa_ioc *ioc, enum bfa_ioc_event event)
 {
+	struct bfa_ioc_notify *notify;
 	struct list_head			*qe;
-	struct bfa_ioc_hbfail_notify *notify;
 
-	ioc->cbfn->disable_cbfn(ioc->bfa);
-
-	/**
-	 * Notify common modules registered for notification.
-	 */
-	list_for_each(qe, &ioc->hb_notify_q) {
-		notify = (struct bfa_ioc_hbfail_notify *) qe;
-		notify->cbfn(notify->cbarg);
+	list_for_each(qe, &ioc->notify_q) {
+		notify = (struct bfa_ioc_notify *)qe;
+		notify->cbfn(notify->cbarg, event);
 	}
 }
 
+static void
+bfa_ioc_disable_comp(struct bfa_ioc *ioc)
+{
+	ioc->cbfn->disable_cbfn(ioc->bfa);
+	bfa_ioc_event_notify(ioc, BFA_IOC_E_DISABLED);
+}
+
 bool
 bfa_nw_ioc_sem_get(void __iomem *sem_reg)
 {
@@ -1650,17 +1656,11 @@ bfa_ioc_mbox_hbfail(struct bfa_ioc *ioc)
 static void
 bfa_ioc_fail_notify(struct bfa_ioc *ioc)
 {
-	struct list_head		*qe;
-	struct bfa_ioc_hbfail_notify	*notify;
-
 	/**
 	 * Notify driver and common modules registered for notification.
 	 */
 	ioc->cbfn->hbfail_cbfn(ioc->bfa);
-	list_for_each(qe, &ioc->hb_notify_q) {
-		notify = (struct bfa_ioc_hbfail_notify *) qe;
-		notify->cbfn(notify->cbarg);
-	}
+	bfa_ioc_event_notify(ioc, BFA_IOC_E_FAILED);
 }
 
 static void
@@ -1839,7 +1839,7 @@ bfa_nw_ioc_attach(struct bfa_ioc *ioc, void *bfa, struct bfa_ioc_cbfn *cbfn)
 	ioc->iocpf.ioc  = ioc;
 
 	bfa_ioc_mbox_attach(ioc);
-	INIT_LIST_HEAD(&ioc->hb_notify_q);
+	INIT_LIST_HEAD(&ioc->notify_q);
 
 	bfa_fsm_set_state(ioc, bfa_ioc_sm_uninit);
 	bfa_fsm_send_event(ioc, IOC_E_RESET);
@@ -1969,6 +1969,8 @@ bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
 	 * mailbox is free -- queue command to firmware
 	 */
 	bfa_ioc_mbox_send(ioc, cmd->msg, sizeof(cmd->msg));
+
+	return;
 }
 
 /**
@@ -2005,14 +2007,24 @@ bfa_nw_ioc_error_isr(struct bfa_ioc *ioc)
 }
 
 /**
+ * return true if IOC is disabled
+ */
+bool
+bfa_nw_ioc_is_disabled(struct bfa_ioc *ioc)
+{
+	return bfa_fsm_cmp_state(ioc, bfa_ioc_sm_disabling) ||
+		bfa_fsm_cmp_state(ioc, bfa_ioc_sm_disabled);
+}
+
+/**
  * Add to IOC heartbeat failure notification queue. To be used by common
  * modules such as cee, port, diag.
  */
 void
-bfa_nw_ioc_hbfail_register(struct bfa_ioc *ioc,
-			struct bfa_ioc_hbfail_notify *notify)
+bfa_nw_ioc_notify_register(struct bfa_ioc *ioc,
+			struct bfa_ioc_notify *notify)
 {
-	list_add_tail(&notify->qe, &ioc->hb_notify_q);
+	list_add_tail(&notify->qe, &ioc->notify_q);
 }
 
 #define BFA_MFG_NAME "Brocade"
diff --git a/drivers/net/bna/bfa_ioc.h b/drivers/net/bna/bfa_ioc.h
index 8473c00..c6cf218 100644
--- a/drivers/net/bna/bfa_ioc.h
+++ b/drivers/net/bna/bfa_ioc.h
@@ -97,9 +97,12 @@ struct bfa_ioc_regs {
 /**
  * IOC Mailbox structures
  */
+typedef void (*bfa_mbox_cmd_cbfn_t)(void *cbarg);
 struct bfa_mbox_cmd {
 	struct list_head	qe;
-	u32			msg[BFI_IOC_MSGSZ];
+	bfa_mbox_cmd_cbfn_t     cbfn;
+	void		    *cbarg;
+	u32     msg[BFI_IOC_MSGSZ];
 };
 
 /**
@@ -130,6 +133,23 @@ struct bfa_ioc_cbfn {
 };
 
 /**
+ * IOC event notification mechanism.
+ */
+enum bfa_ioc_event {
+	BFA_IOC_E_ENABLED	= 1,
+	BFA_IOC_E_DISABLED	= 2,
+	BFA_IOC_E_FAILED	= 3,
+};
+
+typedef void (*bfa_ioc_notify_cbfn_t)(void *, enum bfa_ioc_event);
+
+struct bfa_ioc_notify {
+	struct list_head	qe;
+	bfa_ioc_notify_cbfn_t	cbfn;
+	void			*cbarg;
+};
+
+/**
  * Heartbeat failure notification queue element.
  */
 struct bfa_ioc_hbfail_notify {
@@ -141,7 +161,7 @@ struct bfa_ioc_hbfail_notify {
 /**
  * Initialize a heartbeat failure notification structure
  */
-#define bfa_ioc_hbfail_init(__notify, __cbfn, __cbarg) do {	\
+#define bfa_ioc_notify_init(__notify, __cbfn, __cbarg) do {	\
 	(__notify)->cbfn = (__cbfn);				\
 	(__notify)->cbarg = (__cbarg);				\
 } while (0)
@@ -162,7 +182,7 @@ struct bfa_ioc {
 	struct timer_list	sem_timer;
 	struct timer_list	hb_timer;
 	u32			hb_count;
-	struct list_head	hb_notify_q;
+	struct list_head	notify_q;
 	void			*dbg_fwsave;
 	int			dbg_fwsave_len;
 	bool			dbg_fwsave_once;
@@ -263,9 +283,10 @@ void bfa_nw_ioc_enable(struct bfa_ioc *ioc);
 void bfa_nw_ioc_disable(struct bfa_ioc *ioc);
 
 void bfa_nw_ioc_error_isr(struct bfa_ioc *ioc);
+bool bfa_nw_ioc_is_disabled(struct bfa_ioc *ioc);
 void bfa_nw_ioc_get_attr(struct bfa_ioc *ioc, struct bfa_ioc_attr *ioc_attr);
-void bfa_nw_ioc_hbfail_register(struct bfa_ioc *ioc,
-	struct bfa_ioc_hbfail_notify *notify);
+void bfa_nw_ioc_notify_register(struct bfa_ioc *ioc,
+	struct bfa_ioc_notify *notify);
 bool bfa_nw_ioc_sem_get(void __iomem *sem_reg);
 void bfa_nw_ioc_sem_release(void __iomem *sem_reg);
 void bfa_nw_ioc_hw_sem_release(struct bfa_ioc *ioc);
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 04/45] bna: State Machine Fault Handling Cleanup
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (2 preceding siblings ...)
  2011-07-18  8:19 ` [PATCH 03/45] bna: Change IOC Event Notification Call Back Mechanism Rasesh Mody
@ 2011-07-18  8:19 ` Rasesh Mody
  2011-07-18  8:19 ` [PATCH 05/45] bna: MSGQ Implementation Rasesh Mody
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - The module name is not used in case of a state machine fault, hence the
   module name is no longer passed to the fault handler (see the before and
   after sketch below).
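
For reference, callers change as sketched here; the enclosing state
machine switch statement is implied:

	/* before: the module pointer was accepted but never used */
	bfa_sm_fault(ioc, event);

	/* after: only the event feeds the pr_err() assertion message */
	bfa_sm_fault(event);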

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfa_ioc.c  |   46 ++++++++++++++++++++++----------------------
 drivers/net/bna/bna_ctrl.c |   42 ++++++++++++++++++++--------------------
 drivers/net/bna/bna_txrx.c |   38 ++++++++++++++++++------------------
 drivers/net/bna/cna.h      |    2 +-
 4 files changed, 64 insertions(+), 64 deletions(-)

diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index c52ef63..a13b4c5 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -240,7 +240,7 @@ bfa_ioc_sm_uninit(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -273,7 +273,7 @@ bfa_ioc_sm_reset(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -317,7 +317,7 @@ bfa_ioc_sm_enabling(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -365,7 +365,7 @@ bfa_ioc_sm_getattr(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -404,7 +404,7 @@ bfa_ioc_sm_op(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -435,7 +435,7 @@ bfa_ioc_sm_disabling(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -466,7 +466,7 @@ bfa_ioc_sm_disabled(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -513,7 +513,7 @@ bfa_ioc_sm_fail_retry(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -547,7 +547,7 @@ bfa_ioc_sm_fail(struct bfa_ioc *ioc, enum ioc_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -580,7 +580,7 @@ bfa_iocpf_sm_reset(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(iocpf->ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -632,7 +632,7 @@ bfa_iocpf_sm_fwcheck(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -676,7 +676,7 @@ bfa_iocpf_sm_mismatch(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -715,7 +715,7 @@ bfa_iocpf_sm_semwait(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -763,7 +763,7 @@ bfa_iocpf_sm_hwinit(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -814,7 +814,7 @@ bfa_iocpf_sm_enabling(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -857,7 +857,7 @@ bfa_iocpf_sm_ready(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -899,7 +899,7 @@ bfa_iocpf_sm_disabling(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -928,7 +928,7 @@ bfa_iocpf_sm_disabling_sync(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -958,7 +958,7 @@ bfa_iocpf_sm_disabled(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1010,7 +1010,7 @@ bfa_iocpf_sm_initfail_sync(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1039,7 +1039,7 @@ bfa_iocpf_sm_initfail(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1094,7 +1094,7 @@ bfa_iocpf_sm_fail_sync(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1116,7 +1116,7 @@ bfa_iocpf_sm_fail(struct bfa_iocpf *iocpf, enum iocpf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(iocpf->ioc, event);
+		bfa_sm_fault(event);
 	}
 }
 
diff --git a/drivers/net/bna/bna_ctrl.c b/drivers/net/bna/bna_ctrl.c
index 53b1416..7d9bc2f 100644
--- a/drivers/net/bna/bna_ctrl.c
+++ b/drivers/net/bna/bna_ctrl.c
@@ -380,7 +380,7 @@ bna_llport_sm_stopped(struct bna_llport *llport,
 		break;
 
 	default:
-		bfa_sm_fault(llport->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -409,7 +409,7 @@ bna_llport_sm_down(struct bna_llport *llport,
 		break;
 
 	default:
-		bfa_sm_fault(llport->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -455,7 +455,7 @@ bna_llport_sm_up_resp_wait(struct bna_llport *llport,
 		break;
 
 	default:
-		bfa_sm_fault(llport->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -497,7 +497,7 @@ bna_llport_sm_down_resp_wait(struct bna_llport *llport,
 		break;
 
 	default:
-		bfa_sm_fault(llport->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -526,7 +526,7 @@ bna_llport_sm_up(struct bna_llport *llport,
 		break;
 
 	default:
-		bfa_sm_fault(llport->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -563,7 +563,7 @@ bna_llport_sm_last_resp_wait(struct bna_llport *llport,
 		break;
 
 	default:
-		bfa_sm_fault(llport->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -916,7 +916,7 @@ bna_port_sm_stopped(struct bna_port *port, enum bna_port_event event)
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -956,7 +956,7 @@ bna_port_sm_mtu_init_wait(struct bna_port *port, enum bna_port_event event)
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1001,7 +1001,7 @@ bna_port_sm_pause_init_wait(struct bna_port *port,
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1022,7 +1022,7 @@ bna_port_sm_last_resp_wait(struct bna_port *port,
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1061,7 +1061,7 @@ bna_port_sm_started(struct bna_port *port,
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1086,7 +1086,7 @@ bna_port_sm_pause_cfg_wait(struct bna_port *port,
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1111,7 +1111,7 @@ bna_port_sm_rx_stop_wait(struct bna_port *port,
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1136,7 +1136,7 @@ bna_port_sm_mtu_cfg_wait(struct bna_port *port, enum bna_port_event event)
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1161,7 +1161,7 @@ bna_port_sm_chld_stop_wait(struct bna_port *port,
 		break;
 
 	default:
-		bfa_sm_fault(port->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1472,7 +1472,7 @@ bna_device_sm_stopped(struct bna_device *device,
 		break;
 
 	default:
-		bfa_sm_fault(device->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1512,7 +1512,7 @@ bna_device_sm_ioc_ready_wait(struct bna_device *device,
 		break;
 
 	default:
-		bfa_sm_fault(device->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1542,7 +1542,7 @@ bna_device_sm_ready(struct bna_device *device, enum bna_device_event event)
 		break;
 
 	default:
-		bfa_sm_fault(device->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1568,7 +1568,7 @@ bna_device_sm_port_stop_wait(struct bna_device *device,
 		break;
 
 	default:
-		bfa_sm_fault(device->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1589,7 +1589,7 @@ bna_device_sm_ioc_disable_wait(struct bna_device *device,
 		break;
 
 	default:
-		bfa_sm_fault(device->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1622,7 +1622,7 @@ bna_device_sm_failed(struct bna_device *device,
 		break;
 
 	default:
-		bfa_sm_fault(device->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
diff --git a/drivers/net/bna/bna_txrx.c b/drivers/net/bna/bna_txrx.c
index 2c06e08..4d4f4d9 100644
--- a/drivers/net/bna/bna_txrx.c
+++ b/drivers/net/bna/bna_txrx.c
@@ -569,7 +569,7 @@ bna_rxf_sm_stopped(struct bna_rxf *rxf, enum bna_rxf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -627,7 +627,7 @@ bna_rxf_sm_start_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -678,7 +678,7 @@ bna_rxf_sm_cam_fltr_mod_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -724,7 +724,7 @@ bna_rxf_sm_started(struct bna_rxf *rxf, enum bna_rxf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -761,7 +761,7 @@ bna_rxf_sm_cam_fltr_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -815,7 +815,7 @@ bna_rxf_sm_stop_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -851,7 +851,7 @@ bna_rxf_sm_pause_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 	 * any other event during these states
 	 */
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -887,7 +887,7 @@ bna_rxf_sm_resume_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 	 * any other event during these states
 	 */
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -907,7 +907,7 @@ bna_rxf_sm_stat_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 		break;
 
 	default:
-		bfa_sm_fault(rxf->rx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -1898,7 +1898,7 @@ static void bna_rx_sm_stopped(struct bna_rx *rx,
 		/* no-op */
 		break;
 	default:
-		bfa_sm_fault(rx->bna, event);
+		bfa_sm_fault(event);
 		break;
 	}
 
@@ -1946,7 +1946,7 @@ static void bna_rx_sm_rxf_start_wait(struct bna_rx *rx,
 		bfa_fsm_set_state(rx, bna_rx_sm_started);
 		break;
 	default:
-		bfa_sm_fault(rx->bna, event);
+		bfa_sm_fault(event);
 		break;
 	}
 }
@@ -1981,7 +1981,7 @@ bna_rx_sm_started(struct bna_rx *rx, enum bna_rx_event event)
 		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
 		break;
 	default:
-		bfa_sm_fault(rx->bna, event);
+		bfa_sm_fault(event);
 		break;
 	}
 }
@@ -2011,7 +2011,7 @@ bna_rx_sm_rxf_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
 		bna_rxf_fail(&rx->rxf);
 		break;
 	default:
-		bfa_sm_fault(rx->bna, event);
+		bfa_sm_fault(event);
 		break;
 	}
 
@@ -2064,7 +2064,7 @@ bna_rx_sm_rxq_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
 		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
 		break;
 	default:
-		bfa_sm_fault(rx->bna, event);
+		bfa_sm_fault(event);
 		break;
 	}
 }
@@ -3216,7 +3216,7 @@ bna_tx_sm_stopped(struct bna_tx *tx, enum bna_tx_event event)
 		break;
 
 	default:
-		bfa_sm_fault(tx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -3261,7 +3261,7 @@ bna_tx_sm_started(struct bna_tx *tx, enum bna_tx_event event)
 		break;
 
 	default:
-		bfa_sm_fault(tx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -3294,7 +3294,7 @@ bna_tx_sm_txq_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
 		break;
 
 	default:
-		bfa_sm_fault(tx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -3335,7 +3335,7 @@ bna_tx_sm_prio_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
 		break;
 
 	default:
-		bfa_sm_fault(tx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
@@ -3355,7 +3355,7 @@ bna_tx_sm_stat_clr_wait(struct bna_tx *tx, enum bna_tx_event event)
 		break;
 
 	default:
-		bfa_sm_fault(tx->bna, event);
+		bfa_sm_fault(event);
 	}
 }
 
diff --git a/drivers/net/bna/cna.h b/drivers/net/bna/cna.h
index 01b4af7..a679e03 100644
--- a/drivers/net/bna/cna.h
+++ b/drivers/net/bna/cna.h
@@ -33,7 +33,7 @@
 
 #include <linux/list.h>
 
-#define bfa_sm_fault(__mod, __event)    do {                            \
+#define bfa_sm_fault(__event)    do {                            \
 	pr_err("SM Assertion failure: %s: %d: event = %d", __FILE__, __LINE__, \
 		__event); \
 } while (0)
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 05/45] bna: MSGQ Implementation
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (3 preceding siblings ...)
  2011-07-18  8:19 ` [PATCH 04/45] bna: State Machine Fault Handling Cleanup Rasesh Mody
@ 2011-07-18  8:19 ` Rasesh Mody
  2011-07-18  8:19 ` [PATCH 06/45] bna: Use DEFINE_PCI_DEVICE_TABLE and Print Driver Version Rasesh Mody
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - Currently, modules communicate with the FW using 32-byte command and
   response registers. This limits the size of the command and response
   messages exchanged with the FW to 32 bytes. We need a mechanism to
   exchange commands and responses with the FW that exceed 32 bytes.

 - The MSGQ implementation provides that facility. It removes the assumption
   that the command/response queue size is precisely calculated to
   accommodate all concurrent FW commands/responses. The queue depth is now
   variable, defined by a macro. A waiting command list is implemented to
   hold the commands when there is no space in the command queue. A callback
   is implemented for each command entry to notify the module that posted the
   command once space frees up and the command is finally posted to the
   queue. Module/object information is embedded in the response for tracking
   purposes (see the posting sketch below).
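
As an illustration, a client module would post an oversized command roughly
as follows; the "my_" names are hypothetical, while the bfa_msgq calls are
the ones introduced by this patch:

	static void
	my_cmd_posted(void *cbarg, enum bfa_status status)
	{
		/* BFA_STATUS_OK once the entry is copied into the command
		 * queue; BFA_STATUS_FAILED if the queue is stopped/failed. */
	}

	void
	my_post_cmd(struct bfa_msgq *msgq, struct bfa_msgq_cmd_entry *cmd,
		    struct bfi_msgq_mhdr *msg, size_t msg_size)
	{
		bfa_msgq_cmd_set(cmd, my_cmd_posted, NULL, msg_size, msg);
		/* copies now if there is room, else waits on pending_q */
		bfa_msgq_cmd_post(msgq, cmd);
	}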

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/Makefile   |    3 +-
 drivers/net/bna/bfa_ioc.c  |   14 +-
 drivers/net/bna/bfa_ioc.h  |    4 +-
 drivers/net/bna/bfa_msgq.c |  669 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bna/bfa_msgq.h |  130 +++++++++
 drivers/net/bna/bfi.h      |  101 +++++++
 drivers/net/bna/bna_ctrl.c |    6 +-
 7 files changed, 918 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/bna/bfa_msgq.c
 create mode 100644 drivers/net/bna/bfa_msgq.h

diff --git a/drivers/net/bna/Makefile b/drivers/net/bna/Makefile
index a5d604d..d501f52 100644
--- a/drivers/net/bna/Makefile
+++ b/drivers/net/bna/Makefile
@@ -6,6 +6,7 @@
 obj-$(CONFIG_BNA) += bna.o
 
 bna-objs := bnad.o bnad_ethtool.o bna_ctrl.o bna_txrx.o
-bna-objs += bfa_ioc.o bfa_ioc_ct.o bfa_cee.o cna_fwimg.o
+bna-objs += bfa_msgq.o bfa_ioc.o bfa_ioc_ct.o bfa_cee.o
+bna-objs += cna_fwimg.o
 
 EXTRA_CFLAGS := -Idrivers/net/bna
diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index a13b4c5..bd833a1 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -1942,18 +1942,22 @@ bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc,
  * @param[in]	ioc	IOC instance
  * @param[i]	cmd	Mailbox command
  */
-void
-bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
+bool
+bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd,
+			bfa_mbox_cmd_cbfn_t cbfn, void *cbarg)
 {
 	struct bfa_ioc_mbox_mod *mod = &ioc->mbox_mod;
 	u32			stat;
 
+	cmd->cbfn = cbfn;
+	cmd->cbarg = cbarg;
+
 	/**
 	 * If a previous command is pending, queue new command
 	 */
 	if (!list_empty(&mod->cmd_q)) {
 		list_add_tail(&cmd->qe, &mod->cmd_q);
-		return;
+		return true;
 	}
 
 	/**
@@ -1962,7 +1966,7 @@ bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
 	stat = readl(ioc->ioc_regs.hfn_mbox_cmd);
 	if (stat) {
 		list_add_tail(&cmd->qe, &mod->cmd_q);
-		return;
+		return true;
 	}
 
 	/**
@@ -1970,7 +1974,7 @@ bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd)
 	 */
 	bfa_ioc_mbox_send(ioc, cmd->msg, sizeof(cmd->msg));
 
-	return;
+	return false;
 }
 
 /**
diff --git a/drivers/net/bna/bfa_ioc.h b/drivers/net/bna/bfa_ioc.h
index c6cf218..b3c3c5e 100644
--- a/drivers/net/bna/bfa_ioc.h
+++ b/drivers/net/bna/bfa_ioc.h
@@ -251,7 +251,9 @@ struct bfa_ioc_hwif {
 /**
  * IOC mailbox interface
  */
-void bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc, struct bfa_mbox_cmd *cmd);
+bool bfa_nw_ioc_mbox_queue(struct bfa_ioc *ioc,
+			struct bfa_mbox_cmd *cmd,
+			bfa_mbox_cmd_cbfn_t cbfn, void *cbarg);
 void bfa_nw_ioc_mbox_isr(struct bfa_ioc *ioc);
 void bfa_nw_ioc_mbox_regisr(struct bfa_ioc *ioc, enum bfi_mclass mc,
 		bfa_ioc_mbox_mcfunc_t cbfn, void *cbarg);
diff --git a/drivers/net/bna/bfa_msgq.c b/drivers/net/bna/bfa_msgq.c
new file mode 100644
index 0000000..ed52187
--- /dev/null
+++ b/drivers/net/bna/bfa_msgq.c
@@ -0,0 +1,669 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/**
+ * @file bfa_msgq.c MSGQ module source file.
+ */
+
+#include "bfi.h"
+#include "bfa_msgq.h"
+#include "bfa_ioc.h"
+
+#define call_cmdq_ent_cbfn(_cmdq_ent, _status)				\
+{									\
+	bfa_msgq_cmdcbfn_t cbfn;					\
+	void *cbarg;							\
+	cbfn = (_cmdq_ent)->cbfn;					\
+	cbarg = (_cmdq_ent)->cbarg;					\
+	(_cmdq_ent)->cbfn = NULL;					\
+	(_cmdq_ent)->cbarg = NULL;					\
+	if (cbfn) {							\
+		cbfn(cbarg, (_status));					\
+	}								\
+}
+
+static void bfa_msgq_cmdq_dbell(struct bfa_msgq_cmdq *cmdq);
+static void bfa_msgq_cmdq_copy_rsp(struct bfa_msgq_cmdq *cmdq);
+
+enum cmdq_event {
+	CMDQ_E_START			= 1,
+	CMDQ_E_STOP			= 2,
+	CMDQ_E_FAIL			= 3,
+	CMDQ_E_POST			= 4,
+	CMDQ_E_INIT_RESP		= 5,
+	CMDQ_E_DB_READY			= 6,
+};
+
+bfa_fsm_state_decl(cmdq, stopped, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, init_wait, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, ready, struct bfa_msgq_cmdq, enum cmdq_event);
+bfa_fsm_state_decl(cmdq, dbell_wait, struct bfa_msgq_cmdq,
+			enum cmdq_event);
+
+static void
+cmdq_sm_stopped_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfa_msgq_cmd_entry *cmdq_ent;
+
+	cmdq->producer_index = 0;
+	cmdq->consumer_index = 0;
+	cmdq->flags = 0;
+	cmdq->token = 0;
+	cmdq->offset = 0;
+	cmdq->bytes_to_copy = 0;
+	while (!list_empty(&cmdq->pending_q)) {
+		bfa_q_deq(&cmdq->pending_q, &cmdq_ent);
+		bfa_q_qe_init(&cmdq_ent->qe);
+		call_cmdq_ent_cbfn(cmdq_ent, BFA_STATUS_FAILED);
+	}
+}
+
+static void
+cmdq_sm_stopped(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_START:
+		bfa_fsm_set_state(cmdq, cmdq_sm_init_wait);
+		break;
+
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		/* No-op */
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_init_wait_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	bfa_wc_down(&cmdq->msgq->init_wc);
+}
+
+static void
+cmdq_sm_init_wait(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	case CMDQ_E_INIT_RESP:
+		if (cmdq->flags & BFA_MSGQ_CMDQ_F_DB_UPDATE) {
+			cmdq->flags &= ~BFA_MSGQ_CMDQ_F_DB_UPDATE;
+			bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(cmdq, cmdq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_ready_entry(struct bfa_msgq_cmdq *cmdq)
+{
+}
+
+static void
+cmdq_sm_ready(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+cmdq_sm_dbell_wait_entry(struct bfa_msgq_cmdq *cmdq)
+{
+	bfa_msgq_cmdq_dbell(cmdq);
+}
+
+static void
+cmdq_sm_dbell_wait(struct bfa_msgq_cmdq *cmdq, enum cmdq_event event)
+{
+	switch (event) {
+	case CMDQ_E_STOP:
+	case CMDQ_E_FAIL:
+		bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+		break;
+
+	case CMDQ_E_POST:
+		cmdq->flags |= BFA_MSGQ_CMDQ_F_DB_UPDATE;
+		break;
+
+	case CMDQ_E_DB_READY:
+		if (cmdq->flags & BFA_MSGQ_CMDQ_F_DB_UPDATE) {
+			cmdq->flags &= ~BFA_MSGQ_CMDQ_F_DB_UPDATE;
+			bfa_fsm_set_state(cmdq, cmdq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(cmdq, cmdq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bfa_msgq_cmdq_dbell_ready(void *arg)
+{
+	struct bfa_msgq_cmdq *cmdq = (struct bfa_msgq_cmdq *)arg;
+	bfa_fsm_send_event(cmdq, CMDQ_E_DB_READY);
+}
+
+static void
+bfa_msgq_cmdq_dbell(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfi_msgq_h2i_db *dbell =
+		(struct bfi_msgq_h2i_db *)(&cmdq->dbell_mb.msg[0]);
+
+	memset(dbell, 0, sizeof(struct bfi_msgq_h2i_db));
+	bfi_h2i_set(dbell->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_DOORBELL_PI, 0);
+	dbell->mh.mtag.i2htok = 0;
+	dbell->idx.cmdq_pi = htons(cmdq->producer_index);
+
+	if (!bfa_nw_ioc_mbox_queue(cmdq->msgq->ioc, &cmdq->dbell_mb,
+				bfa_msgq_cmdq_dbell_ready, cmdq)) {
+		bfa_msgq_cmdq_dbell_ready(cmdq);
+	}
+}
+
+static void
+__cmd_copy(struct bfa_msgq_cmdq *cmdq, struct bfa_msgq_cmd_entry *cmd)
+{
+	size_t len = cmd->msg_size;
+	int num_entries = 0;
+	size_t to_copy;
+	u8 *src, *dst;
+
+	src = (u8 *)cmd->msg_hdr;
+	dst = (u8 *)cmdq->addr.kva;
+	dst += (cmdq->producer_index * BFI_MSGQ_CMD_ENTRY_SIZE);
+
+	while (len) {
+		to_copy = (len < BFI_MSGQ_CMD_ENTRY_SIZE) ?
+				len : BFI_MSGQ_CMD_ENTRY_SIZE;
+		memcpy(dst, src, to_copy);
+		len -= to_copy;
+		src += BFI_MSGQ_CMD_ENTRY_SIZE;
+		BFA_MSGQ_INDX_ADD(cmdq->producer_index, 1, cmdq->depth);
+		dst = (u8 *)cmdq->addr.kva;
+		dst += (cmdq->producer_index * BFI_MSGQ_CMD_ENTRY_SIZE);
+		num_entries++;
+	}
+
+}
+
+static void
+bfa_msgq_cmdq_ci_update(struct bfa_msgq_cmdq *cmdq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_db *dbell = (struct bfi_msgq_i2h_db *)mb;
+	struct bfa_msgq_cmd_entry *cmd;
+	int posted = 0;
+
+	cmdq->consumer_index = ntohs(dbell->idx.cmdq_ci);
+
+	/* Walk through pending list to see if the command can be posted */
+	while (!list_empty(&cmdq->pending_q)) {
+		cmd =
+		(struct bfa_msgq_cmd_entry *)bfa_q_first(&cmdq->pending_q);
+		if (ntohs(cmd->msg_hdr->num_entries) <=
+			BFA_MSGQ_FREE_CNT(cmdq)) {
+			list_del(&cmd->qe);
+			__cmd_copy(cmdq, cmd);
+			posted = 1;
+			call_cmdq_ent_cbfn(cmd, BFA_STATUS_OK);
+		} else {
+			break;
+		}
+	}
+
+	if (posted)
+		bfa_fsm_send_event(cmdq, CMDQ_E_POST);
+}
+
+static void
+bfa_msgq_cmdq_copy_next(void *arg)
+{
+	struct bfa_msgq_cmdq *cmdq = (struct bfa_msgq_cmdq *)arg;
+
+	if (cmdq->bytes_to_copy)
+		bfa_msgq_cmdq_copy_rsp(cmdq);
+}
+
+static void
+bfa_msgq_cmdq_copy_req(struct bfa_msgq_cmdq *cmdq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_cmdq_copy_req *req =
+		(struct bfi_msgq_i2h_cmdq_copy_req *)mb;
+
+	cmdq->token = 0;
+	cmdq->offset = ntohs(req->offset);
+	cmdq->bytes_to_copy = ntohs(req->len);
+	bfa_msgq_cmdq_copy_rsp(cmdq);
+}
+
+static void
+bfa_msgq_cmdq_copy_rsp(struct bfa_msgq_cmdq *cmdq)
+{
+	struct bfi_msgq_h2i_cmdq_copy_rsp *rsp =
+		(struct bfi_msgq_h2i_cmdq_copy_rsp *)&cmdq->copy_mb.msg[0];
+	int copied;
+	u8 *addr = (u8 *)cmdq->addr.kva;
+
+	memset(rsp, 0, sizeof(struct bfi_msgq_h2i_cmdq_copy_rsp));
+	bfi_h2i_set(rsp->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_CMDQ_COPY_RSP, 0);
+	rsp->mh.mtag.i2htok = htons(cmdq->token);
+	copied = (cmdq->bytes_to_copy >= BFI_CMD_COPY_SZ) ? BFI_CMD_COPY_SZ :
+		cmdq->bytes_to_copy;
+	addr += cmdq->offset;
+	memcpy(rsp->data, addr, copied);
+
+	cmdq->token++;
+	cmdq->offset += copied;
+	cmdq->bytes_to_copy -= copied;
+
+	if (!bfa_nw_ioc_mbox_queue(cmdq->msgq->ioc, &cmdq->copy_mb,
+				bfa_msgq_cmdq_copy_next, cmdq)) {
+		bfa_msgq_cmdq_copy_next(cmdq);
+	}
+}
+
+static void
+bfa_msgq_cmdq_attach(struct bfa_msgq_cmdq *cmdq, struct bfa_msgq *msgq)
+{
+	cmdq->depth = BFA_MSGQ_CMDQ_NUM_ENTRY;
+	INIT_LIST_HEAD(&cmdq->pending_q);
+	cmdq->msgq = msgq;
+	bfa_fsm_set_state(cmdq, cmdq_sm_stopped);
+}
+
+static void bfa_msgq_rspq_dbell(struct bfa_msgq_rspq *rspq);
+
+enum rspq_event {
+	RSPQ_E_START			= 1,
+	RSPQ_E_STOP			= 2,
+	RSPQ_E_FAIL			= 3,
+	RSPQ_E_RESP			= 4,
+	RSPQ_E_INIT_RESP		= 5,
+	RSPQ_E_DB_READY			= 6,
+};
+
+bfa_fsm_state_decl(rspq, stopped, struct bfa_msgq_rspq, enum rspq_event);
+bfa_fsm_state_decl(rspq, init_wait, struct bfa_msgq_rspq,
+			enum rspq_event);
+bfa_fsm_state_decl(rspq, ready, struct bfa_msgq_rspq, enum rspq_event);
+bfa_fsm_state_decl(rspq, dbell_wait, struct bfa_msgq_rspq,
+			enum rspq_event);
+
+static void
+rspq_sm_stopped_entry(struct bfa_msgq_rspq *rspq)
+{
+	rspq->producer_index = 0;
+	rspq->consumer_index = 0;
+	rspq->flags = 0;
+}
+
+static void
+rspq_sm_stopped(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_START:
+		bfa_fsm_set_state(rspq, rspq_sm_init_wait);
+		break;
+
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_init_wait_entry(struct bfa_msgq_rspq *rspq)
+{
+	bfa_wc_down(&rspq->msgq->init_wc);
+}
+
+static void
+rspq_sm_init_wait(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_FAIL:
+	case RSPQ_E_STOP:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_INIT_RESP:
+		bfa_fsm_set_state(rspq, rspq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_ready_entry(struct bfa_msgq_rspq *rspq)
+{
+}
+
+static void
+rspq_sm_ready(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_RESP:
+		bfa_fsm_set_state(rspq, rspq_sm_dbell_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+rspq_sm_dbell_wait_entry(struct bfa_msgq_rspq *rspq)
+{
+	if (!bfa_nw_ioc_is_disabled(rspq->msgq->ioc))
+		bfa_msgq_rspq_dbell(rspq);
+}
+
+static void
+rspq_sm_dbell_wait(struct bfa_msgq_rspq *rspq, enum rspq_event event)
+{
+	switch (event) {
+	case RSPQ_E_STOP:
+	case RSPQ_E_FAIL:
+		bfa_fsm_set_state(rspq, rspq_sm_stopped);
+		break;
+
+	case RSPQ_E_RESP:
+		rspq->flags |= BFA_MSGQ_RSPQ_F_DB_UPDATE;
+		break;
+
+	case RSPQ_E_DB_READY:
+		if (rspq->flags & BFA_MSGQ_RSPQ_F_DB_UPDATE) {
+			rspq->flags &= ~BFA_MSGQ_RSPQ_F_DB_UPDATE;
+			bfa_fsm_set_state(rspq, rspq_sm_dbell_wait);
+		} else
+			bfa_fsm_set_state(rspq, rspq_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bfa_msgq_rspq_dbell_ready(void *arg)
+{
+	struct bfa_msgq_rspq *rspq = (struct bfa_msgq_rspq *)arg;
+	bfa_fsm_send_event(rspq, RSPQ_E_DB_READY);
+}
+
+static void
+bfa_msgq_rspq_dbell(struct bfa_msgq_rspq *rspq)
+{
+	struct bfi_msgq_h2i_db *dbell =
+		(struct bfi_msgq_h2i_db *)(&rspq->dbell_mb.msg[0]);
+
+	memset(dbell, 0, sizeof(struct bfi_msgq_h2i_db));
+	bfi_h2i_set(dbell->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_DOORBELL_CI, 0);
+	dbell->mh.mtag.i2htok = 0;
+	dbell->idx.rspq_ci = htons(rspq->consumer_index);
+
+	if (!bfa_nw_ioc_mbox_queue(rspq->msgq->ioc, &rspq->dbell_mb,
+				bfa_msgq_rspq_dbell_ready, rspq)) {
+		bfa_msgq_rspq_dbell_ready(rspq);
+	}
+}
+
+static void
+bfa_msgq_rspq_pi_update(struct bfa_msgq_rspq *rspq, struct bfi_mbmsg *mb)
+{
+	struct bfi_msgq_i2h_db *dbell = (struct bfi_msgq_i2h_db *)mb;
+	struct bfi_msgq_mhdr *msghdr;
+	int num_entries;
+	int mc;
+	u8 *rspq_qe;
+
+	rspq->producer_index = ntohs(dbell->idx.rspq_pi);
+
+	while (rspq->consumer_index != rspq->producer_index) {
+		rspq_qe = (u8 *)rspq->addr.kva;
+		rspq_qe += (rspq->consumer_index * BFI_MSGQ_RSP_ENTRY_SIZE);
+		msghdr = (struct bfi_msgq_mhdr *)rspq_qe;
+
+		mc = msghdr->msg_class;
+		num_entries = ntohs(msghdr->num_entries);
+
+		if ((mc >= BFI_MC_MAX) || (rspq->rsphdlr[mc].cbfn == NULL))
+			break;
+
+		(rspq->rsphdlr[mc].cbfn)(rspq->rsphdlr[mc].cbarg, msghdr);
+
+		BFA_MSGQ_INDX_ADD(rspq->consumer_index, num_entries,
+				rspq->depth);
+	}
+
+	bfa_fsm_send_event(rspq, RSPQ_E_RESP);
+}
+
+static void
+bfa_msgq_rspq_attach(struct bfa_msgq_rspq *rspq, struct bfa_msgq *msgq)
+{
+	rspq->depth = BFA_MSGQ_RSPQ_NUM_ENTRY;
+	rspq->msgq = msgq;
+	bfa_fsm_set_state(rspq, rspq_sm_stopped);
+}
+
+static void
+bfa_msgq_init_rsp(struct bfa_msgq *msgq,
+		 struct bfi_mbmsg *mb)
+{
+	bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_INIT_RESP);
+	bfa_fsm_send_event(&msgq->rspq, RSPQ_E_INIT_RESP);
+}
+
+static void
+bfa_msgq_init(void *arg)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)arg;
+	struct bfi_msgq_cfg_req *msgq_cfg =
+		(struct bfi_msgq_cfg_req *)&msgq->init_mb.msg[0];
+
+	memset(msgq_cfg, 0, sizeof(struct bfi_msgq_cfg_req));
+	bfi_h2i_set(msgq_cfg->mh, BFI_MC_MSGQ, BFI_MSGQ_H2I_INIT_REQ, 0);
+	msgq_cfg->mh.mtag.i2htok = 0;
+
+	bfa_dma_be_addr_set(msgq_cfg->cmdq.addr, msgq->cmdq.addr.pa);
+	msgq_cfg->cmdq.q_depth = htons(msgq->cmdq.depth);
+	bfa_dma_be_addr_set(msgq_cfg->rspq.addr, msgq->rspq.addr.pa);
+	msgq_cfg->rspq.q_depth = htons(msgq->rspq.depth);
+
+	bfa_nw_ioc_mbox_queue(msgq->ioc, &msgq->init_mb, NULL, NULL);
+}
+
+static void
+bfa_msgq_isr(void *cbarg, struct bfi_mbmsg *msg)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)cbarg;
+
+	switch (msg->mh.msg_id) {
+	case BFI_MSGQ_I2H_INIT_RSP:
+		bfa_msgq_init_rsp(msgq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_DOORBELL_PI:
+		bfa_msgq_rspq_pi_update(&msgq->rspq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_DOORBELL_CI:
+		bfa_msgq_cmdq_ci_update(&msgq->cmdq, msg);
+		break;
+
+	case BFI_MSGQ_I2H_CMDQ_COPY_REQ:
+		bfa_msgq_cmdq_copy_req(&msgq->cmdq, msg);
+		break;
+
+	default:
+		BUG_ON(1);
+	}
+}
+
+static void
+bfa_msgq_notify(void *cbarg, enum bfa_ioc_event event)
+{
+	struct bfa_msgq *msgq = (struct bfa_msgq *)cbarg;
+
+	switch (event) {
+	case BFA_IOC_E_ENABLED:
+		bfa_wc_init(&msgq->init_wc, bfa_msgq_init, msgq);
+		bfa_wc_up(&msgq->init_wc);
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_START);
+		bfa_wc_up(&msgq->init_wc);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_START);
+		bfa_wc_wait(&msgq->init_wc);
+		break;
+
+	case BFA_IOC_E_DISABLED:
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_STOP);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_STOP);
+		break;
+
+	case BFA_IOC_E_FAILED:
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_FAIL);
+		bfa_fsm_send_event(&msgq->rspq, RSPQ_E_FAIL);
+		break;
+
+	default:
+		break;
+	}
+}
+
+u32
+bfa_msgq_meminfo(void)
+{
+	return roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ) +
+		roundup(BFA_MSGQ_RSPQ_SIZE, BFA_DMA_ALIGN_SZ);
+}
+
+void
+bfa_msgq_memclaim(struct bfa_msgq *msgq, u8 *kva, u64 pa)
+{
+	msgq->cmdq.addr.kva = kva;
+	msgq->cmdq.addr.pa  = pa;
+
+	kva += roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ);
+	pa += roundup(BFA_MSGQ_CMDQ_SIZE, BFA_DMA_ALIGN_SZ);
+
+	msgq->rspq.addr.kva = kva;
+	msgq->rspq.addr.pa = pa;
+}
+
+void
+bfa_msgq_attach(struct bfa_msgq *msgq, struct bfa_ioc *ioc)
+{
+	msgq->ioc    = ioc;
+
+	bfa_msgq_cmdq_attach(&msgq->cmdq, msgq);
+	bfa_msgq_rspq_attach(&msgq->rspq, msgq);
+
+	bfa_nw_ioc_mbox_regisr(msgq->ioc, BFI_MC_MSGQ, bfa_msgq_isr, msgq);
+	bfa_q_qe_init(&msgq->ioc_notify);
+	bfa_ioc_notify_init(&msgq->ioc_notify, bfa_msgq_notify, msgq);
+	bfa_nw_ioc_notify_register(msgq->ioc, &msgq->ioc_notify);
+}
+
+void
+bfa_msgq_regisr(struct bfa_msgq *msgq, enum bfi_mclass mc,
+		bfa_msgq_mcfunc_t cbfn, void *cbarg)
+{
+	msgq->rspq.rsphdlr[mc].cbfn	= cbfn;
+	msgq->rspq.rsphdlr[mc].cbarg	= cbarg;
+}
+
+void
+bfa_msgq_cmd_post(struct bfa_msgq *msgq,  struct bfa_msgq_cmd_entry *cmd)
+{
+	if (ntohs(cmd->msg_hdr->num_entries) <=
+		BFA_MSGQ_FREE_CNT(&msgq->cmdq)) {
+		__cmd_copy(&msgq->cmdq, cmd);
+		call_cmdq_ent_cbfn(cmd, BFA_STATUS_OK);
+		bfa_fsm_send_event(&msgq->cmdq, CMDQ_E_POST);
+	} else {
+		list_add_tail(&cmd->qe, &msgq->cmdq.pending_q);
+	}
+}
+
+void
+bfa_msgq_rsp_copy(struct bfa_msgq *msgq, u8 *buf, size_t buf_len)
+{
+	struct bfa_msgq_rspq *rspq = &msgq->rspq;
+	size_t len = buf_len;
+	size_t to_copy;
+	int ci;
+	u8 *src, *dst;
+
+	ci = rspq->consumer_index;
+	src = (u8 *)rspq->addr.kva;
+	src += (ci * BFI_MSGQ_RSP_ENTRY_SIZE);
+	dst = buf;
+
+	while (len) {
+		to_copy = (len < BFI_MSGQ_RSP_ENTRY_SIZE) ?
+				len : BFI_MSGQ_RSP_ENTRY_SIZE;
+		memcpy(dst, src, to_copy);
+		len -= to_copy;
+		dst += BFI_MSGQ_RSP_ENTRY_SIZE;
+		BFA_MSGQ_INDX_ADD(ci, 1, rspq->depth);
+		src = (u8 *)rspq->addr.kva;
+		src += (ci * BFI_MSGQ_RSP_ENTRY_SIZE);
+	}
+}
diff --git a/drivers/net/bna/bfa_msgq.h b/drivers/net/bna/bfa_msgq.h
new file mode 100644
index 0000000..93170d0
--- /dev/null
+++ b/drivers/net/bna/bfa_msgq.h
@@ -0,0 +1,130 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+#ifndef __BFA_MSGQ_H__
+#define __BFA_MSGQ_H__
+
+#include "bfa_defs.h"
+#include "bfi.h"
+#include "bfa_ioc.h"
+#include "bfa_wc.h"
+
+#define BFA_MSGQ_FREE_CNT(_q)						\
+	(((_q)->consumer_index - (_q)->producer_index - 1) & ((_q)->depth - 1))
+
+#define BFA_MSGQ_INDX_ADD(_q_indx, _qe_num, _q_depth)			\
+	((_q_indx) = (((_q_indx) + (_qe_num)) & ((_q_depth) - 1)))
+
+#define BFA_MSGQ_CMDQ_NUM_ENTRY		128
+#define BFA_MSGQ_CMDQ_SIZE						\
+	(BFI_MSGQ_CMD_ENTRY_SIZE * BFA_MSGQ_CMDQ_NUM_ENTRY)
+
+#define BFA_MSGQ_RSPQ_NUM_ENTRY		128
+#define BFA_MSGQ_RSPQ_SIZE						\
+	(BFI_MSGQ_RSP_ENTRY_SIZE * BFA_MSGQ_RSPQ_NUM_ENTRY)
+
+#define bfa_msgq_cmd_set(_cmd, _cbfn, _cbarg, _msg_size, _msg_hdr)	\
+do {									\
+	(_cmd)->cbfn = (_cbfn);						\
+	(_cmd)->cbarg = (_cbarg);					\
+	(_cmd)->msg_size = (_msg_size);					\
+	(_cmd)->msg_hdr = (_msg_hdr);					\
+} while (0)
+
+struct bfa_msgq;
+
+typedef void (*bfa_msgq_cmdcbfn_t)(void *cbarg, enum bfa_status status);
+
+struct bfa_msgq_cmd_entry {
+	struct list_head				qe;
+	bfa_msgq_cmdcbfn_t		cbfn;
+	void				*cbarg;
+	size_t				msg_size;
+	struct bfi_msgq_mhdr *msg_hdr;
+};
+
+enum bfa_msgq_cmdq_flags {
+	BFA_MSGQ_CMDQ_F_DB_UPDATE	= 1,
+};
+
+struct bfa_msgq_cmdq {
+	bfa_fsm_t			fsm;
+	enum bfa_msgq_cmdq_flags flags;
+
+	u16			producer_index;
+	u16			consumer_index;
+	u16			depth; /* FW Q depth is 16 bits */
+	struct bfa_dma addr;
+	struct bfa_mbox_cmd dbell_mb;
+
+	u16			token;
+	int				offset;
+	int				bytes_to_copy;
+	struct bfa_mbox_cmd copy_mb;
+
+	struct list_head		pending_q; /* pending command queue */
+
+	struct bfa_msgq *msgq;
+};
+
+enum bfa_msgq_rspq_flags {
+	BFA_MSGQ_RSPQ_F_DB_UPDATE	= 1,
+};
+
+typedef void (*bfa_msgq_mcfunc_t)(void *cbarg, struct bfi_msgq_mhdr *mhdr);
+
+struct bfa_msgq_rspq {
+	bfa_fsm_t			fsm;
+	enum bfa_msgq_rspq_flags flags;
+
+	u16			producer_index;
+	u16			consumer_index;
+	u16			depth; /* FW Q depth is 16 bits */
+	struct bfa_dma addr;
+	struct bfa_mbox_cmd dbell_mb;
+
+	int				nmclass;
+	struct {
+		bfa_msgq_mcfunc_t	cbfn;
+		void			*cbarg;
+	} rsphdlr[BFI_MC_MAX];
+
+	struct bfa_msgq *msgq;
+};
+
+struct bfa_msgq {
+	struct bfa_msgq_cmdq cmdq;
+	struct bfa_msgq_rspq rspq;
+
+	struct bfa_wc			init_wc;
+	struct bfa_mbox_cmd init_mb;
+
+	struct bfa_ioc_notify ioc_notify;
+	struct bfa_ioc *ioc;
+};
+
+u32 bfa_msgq_meminfo(void);
+void bfa_msgq_memclaim(struct bfa_msgq *msgq, u8 *kva, u64 pa);
+void bfa_msgq_attach(struct bfa_msgq *msgq, struct bfa_ioc *ioc);
+void bfa_msgq_regisr(struct bfa_msgq *msgq, enum bfi_mclass mc,
+		     bfa_msgq_mcfunc_t cbfn, void *cbarg);
+void bfa_msgq_cmd_post(struct bfa_msgq *msgq,
+		       struct bfa_msgq_cmd_entry *cmd);
+void bfa_msgq_rsp_copy(struct bfa_msgq *msgq, u8 *buf, size_t buf_len);
+
+#endif
diff --git a/drivers/net/bna/bfi.h b/drivers/net/bna/bfi.h
index 683a7d7..6640174 100644
--- a/drivers/net/bna/bfi.h
+++ b/drivers/net/bna/bfi.h
@@ -192,6 +192,8 @@ enum bfi_mclass {
 
 #define BFI_BOOT_LOADER_OS		0
 
+#define BFI_FWBOOT_ENV_OS		0
+
 #define BFI_BOOT_MEMTEST_RES_ADDR   0x900
 #define BFI_BOOT_MEMTEST_RES_SIG    0xA0A1A2A3
 
@@ -389,6 +391,105 @@ union bfi_ioc_i2h_msg_u {
 	u32			mboxmsg[BFI_IOC_MSGSZ];
 };
 
+/**
+ *----------------------------------------------------------------------
+ *				MSGQ
+ *----------------------------------------------------------------------
+ */
+
+enum bfi_msgq_h2i_msgs {
+	BFI_MSGQ_H2I_INIT_REQ	   = 1,
+	BFI_MSGQ_H2I_DOORBELL_PI	= 2,
+	BFI_MSGQ_H2I_DOORBELL_CI	= 3,
+	BFI_MSGQ_H2I_CMDQ_COPY_RSP      = 4,
+};
+
+enum bfi_msgq_i2h_msgs {
+	BFI_MSGQ_I2H_INIT_RSP	   = BFA_I2HM(BFI_MSGQ_H2I_INIT_REQ),
+	BFI_MSGQ_I2H_DOORBELL_PI	= BFA_I2HM(BFI_MSGQ_H2I_DOORBELL_PI),
+	BFI_MSGQ_I2H_DOORBELL_CI	= BFA_I2HM(BFI_MSGQ_H2I_DOORBELL_CI),
+	BFI_MSGQ_I2H_CMDQ_COPY_REQ      = BFA_I2HM(BFI_MSGQ_H2I_CMDQ_COPY_RSP),
+};
+
+/* Messages (commands/responses/AENs) will have the following header */
+struct bfi_msgq_mhdr {
+	u8	msg_class;
+	u8	msg_id;
+	u16	msg_token;
+	u16	num_entries;
+	u8	enet_id;
+	u8	rsvd[1];
+};
+
+#define bfi_msgq_mhdr_set(_mh, _mc, _mid, _tok, _enet_id) do {		\
+	(_mh).msg_class		= (_mc);				\
+	(_mh).msg_id		= (_mid);				\
+	(_mh).msg_token		= (_tok);				\
+	(_mh).enet_id		= (_enet_id);				\
+} while (0)
+
+/*
+ * Mailbox  for messaging interface
+ */
+#define BFI_MSGQ_CMD_ENTRY_SIZE	 (64)    /* TBD */
+#define BFI_MSGQ_RSP_ENTRY_SIZE	 (64)    /* TBD */
+
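+/* Number of 64-byte command entries a message of _size bytes occupies */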
+#define bfi_msgq_num_cmd_entries(_size)				 \
+	(((_size) + BFI_MSGQ_CMD_ENTRY_SIZE - 1) / BFI_MSGQ_CMD_ENTRY_SIZE)
+
+struct bfi_msgq {
+	union bfi_addr_u addr;
+	u16 q_depth;     /* Total num of entries in the queue */
+	u8 rsvd[2];
+};
+
+/* BFI_ENET_MSGQ_CFG_REQ TBD init or cfg? */
+struct bfi_msgq_cfg_req {
+	struct bfi_mhdr mh;
+	struct bfi_msgq cmdq;
+	struct bfi_msgq rspq;
+};
+
+/* BFI_ENET_MSGQ_CFG_RSP */
+struct bfi_msgq_cfg_rsp {
+	struct bfi_mhdr mh;
+	u8 cmd_status;
+	u8 rsvd[3];
+};
+
+/* BFI_MSGQ_H2I_DOORBELL */
+struct bfi_msgq_h2i_db {
+	struct bfi_mhdr mh;
+	union {
+		u16 cmdq_pi;
+		u16 rspq_ci;
+	} idx;
+};
+
+/* BFI_MSGQ_I2H_DOORBELL */
+struct bfi_msgq_i2h_db {
+	struct bfi_mhdr mh;
+	union {
+		u16 rspq_pi;
+		u16 cmdq_ci;
+	} idx;
+};
+
+#define BFI_CMD_COPY_SZ 28
+
+/* BFI_MSGQ_H2I_CMD_COPY_RSP */
+struct bfi_msgq_h2i_cmdq_copy_rsp {
+	struct bfi_mhdr mh;
+	u8	      data[BFI_CMD_COPY_SZ];
+};
+
+/* BFI_MSGQ_I2H_CMD_COPY_REQ */
+struct bfi_msgq_i2h_cmdq_copy_req {
+	struct bfi_mhdr mh;
+	u16     offset;
+	u16     len;
+};
+
 #pragma pack()
 
 #endif /* __BFI_H__ */
diff --git a/drivers/net/bna/bna_ctrl.c b/drivers/net/bna/bna_ctrl.c
index 7d9bc2f..7d6099f 100644
--- a/drivers/net/bna/bna_ctrl.c
+++ b/drivers/net/bna/bna_ctrl.c
@@ -184,7 +184,8 @@ bna_ll_isr(void *llarg, struct bfi_mbmsg *msg)
 			if (to_post) {
 				mb_qe = bfa_q_first(&bna->mbox_mod.posted_q);
 				bfa_nw_ioc_mbox_queue(&bna->device.ioc,
-							&mb_qe->cmd);
+							&mb_qe->cmd, NULL,
+							NULL);
 			}
 		} else {
 			snprintf(message, BNA_MESSAGE_SIZE,
@@ -235,7 +236,8 @@ bna_mbox_send(struct bna *bna, struct bna_mbox_qe *mbox_qe)
 	bna->mbox_mod.msg_pending++;
 	if (bna->mbox_mod.state == BNA_MBOX_FREE) {
 		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-		bfa_nw_ioc_mbox_queue(&bna->device.ioc, &mbox_qe->cmd);
+		bfa_nw_ioc_mbox_queue(&bna->device.ioc, &mbox_qe->cmd,
+					NULL, NULL);
 		bna->mbox_mod.state = BNA_MBOX_POSTED;
 	} else {
 		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-- 
1.7.1



* [PATCH 06/45] bna: Use DEFINE_PCI_DEVICE_TABLE and Print Driver Version
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (4 preceding siblings ...)
  2011-07-18  8:19 ` [PATCH 05/45] bna: MSGQ Implementation Rasesh Mody
@ 2011-07-18  8:19 ` Rasesh Mody
  2011-07-18  8:19 ` [PATCH 07/45] bna: Introduce ENET as New Driver and FW Interface Rasesh Mody
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - Use DEFINE_PCI_DEVICE_TABLE in place of struct pci_device_id
 - Print the driver version when the module is loaded

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bnad.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bna/bnad.c b/drivers/net/bna/bnad.c
index f57e491..abad4e3 100644
--- a/drivers/net/bna/bnad.c
+++ b/drivers/net/bna/bnad.c
@@ -3230,7 +3230,7 @@ bnad_pci_remove(struct pci_dev *pdev)
 	free_netdev(netdev);
 }
 
-static const struct pci_device_id bnad_pci_id_table[] = {
+static DEFINE_PCI_DEVICE_TABLE(bnad_pci_id_table) = {
 	{
 		PCI_DEVICE(PCI_VENDOR_ID_BROCADE,
 			PCI_DEVICE_ID_BROCADE_CT),
@@ -3253,7 +3253,7 @@ bnad_module_init(void)
 {
 	int err;
 
-	pr_info("Brocade 10G Ethernet driver\n");
+	pr_info("Brocade 10G Ethernet driver %s\n", BNAD_VERSION);
 
 	bfa_nw_ioc_auto_recover(bnad_ioc_auto_recover);
 
-- 
1.7.1



* [PATCH 07/45] bna: Introduce ENET as New Driver and FW Interface
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (5 preceding siblings ...)
  2011-07-18  8:19 ` [PATCH 06/45] bna: Use DEFINE_PCI_DEVICE_TABLE and Print Driver Version Rasesh Mody
@ 2011-07-18  8:19 ` Rasesh Mody
  2011-07-18  8:19 ` [PATCH 08/45] bna: Tx and Rx Redesign Rasesh Mody
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - This patch contains the messages, opcodes and structure formats for the
   messages and responses exchanged between the driver and the FW. In
   addition, it contains the state machine implementations for Ethport,
   Enet and IOCEth (the common FSM pattern is sketched after this list).
 - Ethport object is responsible for receiving link state events, sending
   port enable/disable commands to FW.
 - Enet object is responsible for synchronizing initialization/teardown of
   tx & rx datapath configuration.
 - IOCEth object is responsible for init/un-init of IOC in the adapter
   which runs the FW.
 - This patch also contains code for initialization and resource assignment
   for Ethport, Enet, IOCEth, Tx, Rx objects.
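
All three objects use the common bfa FSM helpers. A rough sketch of the
pattern (illustration only; "demo" and its event are made-up names, the
real declarations are in bna_enet.c below):

	struct demo { bfa_fsm_t fsm; };
	enum demo_event { DEMO_E_START = 1 };

	bfa_fsm_state_decl(demo, stopped, struct demo, enum demo_event);
	bfa_fsm_state_decl(demo, started, struct demo, enum demo_event);

	static void demo_sm_stopped_entry(struct demo *obj) { }

	static void
	demo_sm_stopped(struct demo *obj, enum demo_event event)
	{
		switch (event) {
		case DEMO_E_START:
			/* runs demo_sm_started_entry(), then waits */
			bfa_fsm_set_state(obj, demo_sm_started);
			break;
		default:
			bfa_sm_fault(event);
		}
	}

	/* demo_sm_started{,_entry}() are defined the same way; drive with */
	bfa_fsm_set_state(obj, demo_sm_stopped);	/* initial state */
	bfa_fsm_send_event(obj, DEMO_E_START);		/* stopped -> started */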

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfi_enet.h |  902 ++++++++++++++++++
 drivers/net/bna/bna_enet.c | 2199 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3101 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/bna/bfi_enet.h
 create mode 100644 drivers/net/bna/bna_enet.c

diff --git a/drivers/net/bna/bfi_enet.h b/drivers/net/bna/bfi_enet.h
new file mode 100644
index 0000000..17b34ab
--- /dev/null
+++ b/drivers/net/bna/bfi_enet.h
@@ -0,0 +1,902 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+
+/**
+ * @file bfi_enet.h BNA Hardware and Firmware Interface
+ */
+
+/**
+ * Skipping statistics collection to avoid clutter.
+ * Command is no longer needed:
+ *	MTU
+ *	TxQ Stop
+ *	RxQ Stop
+ *	RxF Enable/Disable
+ *
+ * HDS-off request is dynamic.
+ * Keep structures as multiples of 32-bit fields for alignment.
+ * All values must be written in big-endian.
+ */
+#ifndef __BFI_ENET_H__
+#define __BFI_ENET_H__
+
+#include "bfa_defs.h"
+#include "bfi.h"
+
+#pragma pack(1)
+
+#define BFI_ENET_CFG_MAX		32	/* Max resources per PF */
+
+#define BFI_ENET_TXQ_PRIO_MAX		8
+#define BFI_ENET_RX_QSET_MAX		16
+#define BFI_ENET_TXQ_WI_VECT_MAX	4
+
+#define BFI_ENET_VLAN_ID_MAX		4096
+#define BFI_ENET_VLAN_BLOCK_SIZE	512	/* in bits */
+#define BFI_ENET_VLAN_BLOCKS_MAX					\
+	(BFI_ENET_VLAN_ID_MAX / BFI_ENET_VLAN_BLOCK_SIZE)
+#define BFI_ENET_VLAN_WORD_SIZE		32	/* in bits */
+#define BFI_ENET_VLAN_WORDS_MAX						\
+	(BFI_ENET_VLAN_BLOCK_SIZE / BFI_ENET_VLAN_WORD_SIZE)
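+/*
+ * 4096 VLAN IDs split into 512-bit blocks gives 8 blocks of 16 32-bit
+ * words each; bfi_enet_rx_vlan_req carries one such block at a time,
+ * selected by block_idx.
+ */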
+
+#define BFI_ENET_RSS_RIT_MAX		64	/* entries */
+#define BFI_ENET_RSS_KEY_LEN		10	/* 32-bit words */
+
+union bfi_addr_be_u {
+	struct {
+		u32	addr_hi;	/* Most Significant 32-bits */
+		u32	addr_lo;	/* Least Significant 32-bits */
+	} a32;
+};
+
+/**
+ *	T X   Q U E U E   D E F I N E S
+ */
+/* TxQ Vector (a.k.a. Tx-Buffer Descriptor) */
+/* TxQ Entry Opcodes */
+#define BFI_ENET_TXQ_WI_SEND		(0x402)	/* Single Frame Transmission */
+#define BFI_ENET_TXQ_WI_SEND_LSO	(0x403)	/* Multi-Frame Transmission */
+#define BFI_ENET_TXQ_WI_EXTENSION	(0x104)	/* Extension WI */
+
+/* TxQ Entry Control Flags */
+#define BFI_ENET_TXQ_WI_CF_FCOE_CRC	(1 << 8)
+#define BFI_ENET_TXQ_WI_CF_IPID_MODE	(1 << 5)
+#define BFI_ENET_TXQ_WI_CF_INS_PRIO	(1 << 4)
+#define BFI_ENET_TXQ_WI_CF_INS_VLAN	(1 << 3)
+#define BFI_ENET_TXQ_WI_CF_UDP_CKSUM	(1 << 2)
+#define BFI_ENET_TXQ_WI_CF_TCP_CKSUM	(1 << 1)
+#define BFI_ENET_TXQ_WI_CF_IP_CKSUM	(1 << 0)
+
+struct bfi_enet_txq_wi_base {
+	u8			reserved;
+	u8			num_vectors;	/* number of vectors present */
+	u16			opcode;
+			/* BFI_ENET_TXQ_WI_SEND or BFI_ENET_TXQ_WI_SEND_LSO */
+	u16			flags;		/* OR of all the flags */
+	u16			l4_hdr_size_n_offset;
+	u16			vlan_tag;
+	u16			lso_mss;	/* Only 14 LSB are valid */
+	u32			frame_length;	/* Only 24 LSB are valid */
+};
+
+struct bfi_enet_txq_wi_ext {
+	u16			reserved;
+	u16			opcode;		/* BFI_ENET_TXQ_WI_EXTENSION */
+	u32			reserved2[3];
+};
+
+struct bfi_enet_txq_wi_vector {			/* Tx Buffer Descriptor */
+	u16			reserved;
+	u16			length;		/* Only 14 LSB are valid */
+	union bfi_addr_be_u	addr;
+};
+
+/**
+ *  TxQ Entry Structure
+ */
+struct bfi_enet_txq_entry {
+	union {
+		struct bfi_enet_txq_wi_base	base;
+		struct bfi_enet_txq_wi_ext	ext;
+	} wi;
+	struct bfi_enet_txq_wi_vector vector[BFI_ENET_TXQ_WI_VECT_MAX];
+};
+
+#define wi_hdr		wi.base
+#define wi_ext_hdr	wi.ext
+
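+/* Pack the L4 header size into bits 15:10 and its offset into bits 9:0 */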
+#define BFI_ENET_TXQ_WI_L4_HDR_N_OFFSET(_hdr_size, _offset) \
+		(((_hdr_size) << 10) | ((_offset) & 0x3FF))
+
+/**
+ *   R X   Q U E U E   D E F I N E S
+ */
+struct bfi_enet_rxq_entry {
+	union bfi_addr_be_u  rx_buffer;
+};
+
+/**
+ *   R X   C O M P L E T I O N   Q U E U E   D E F I N E S
+ */
+/* CQ Entry Flags */
+#define	BFI_ENET_CQ_EF_MAC_ERROR	(1 <<  0)
+#define	BFI_ENET_CQ_EF_FCS_ERROR	(1 <<  1)
+#define	BFI_ENET_CQ_EF_TOO_LONG		(1 <<  2)
+#define	BFI_ENET_CQ_EF_FC_CRC_OK	(1 <<  3)
+
+#define	BFI_ENET_CQ_EF_RSVD1		(1 <<  4)
+#define	BFI_ENET_CQ_EF_L4_CKSUM_OK	(1 <<  5)
+#define	BFI_ENET_CQ_EF_L3_CKSUM_OK	(1 <<  6)
+#define	BFI_ENET_CQ_EF_HDS_HEADER	(1 <<  7)
+
+#define	BFI_ENET_CQ_EF_UDP		(1 <<  8)
+#define	BFI_ENET_CQ_EF_TCP		(1 <<  9)
+#define	BFI_ENET_CQ_EF_IP_OPTIONS	(1 << 10)
+#define	BFI_ENET_CQ_EF_IPV6		(1 << 11)
+
+#define	BFI_ENET_CQ_EF_IPV4		(1 << 12)
+#define	BFI_ENET_CQ_EF_VLAN		(1 << 13)
+#define	BFI_ENET_CQ_EF_RSS		(1 << 14)
+#define	BFI_ENET_CQ_EF_RSVD2		(1 << 15)
+
+#define	BFI_ENET_CQ_EF_MCAST_MATCH	(1 << 16)
+#define	BFI_ENET_CQ_EF_MCAST		(1 << 17)
+#define BFI_ENET_CQ_EF_BCAST		(1 << 18)
+#define	BFI_ENET_CQ_EF_REMOTE		(1 << 19)
+
+#define	BFI_ENET_CQ_EF_LOCAL		(1 << 20)
+
+/* CQ Entry Structure */
+struct bfi_enet_cq_entry {
+	u32 flags;
+	u16	vlan_tag;
+	u16	length;
+	u32	rss_hash;
+	u8	valid;
+	u8	reserved1;
+	u8	reserved2;
+	u8	rxq_id;
+};
+
+/**
+ *   E N E T   C O N T R O L   P A T H   C O M M A N D S
+ */
+struct bfi_enet_q {
+	union bfi_addr_u	pg_tbl;
+	union bfi_addr_u	first_entry;
+	u16		pages;	/* # of pages */
+	u16		page_sz;
+};
+
+struct bfi_enet_txq {
+	struct bfi_enet_q	q;
+	u8			priority;
+	u8			rsvd[3];
+};
+
+struct bfi_enet_rxq {
+	struct bfi_enet_q	q;
+	u16		rx_buffer_size;
+	u16		rsvd;
+};
+
+struct bfi_enet_cq {
+	struct bfi_enet_q	q;
+};
+
+struct bfi_enet_ib_cfg {
+	u8		int_pkt_dma;
+	u8		int_enabled;
+	u8		int_pkt_enabled;
+	u8		continuous_coalescing;
+	u8		msix;
+	u8		rsvd[3];
+	u32	coalescing_timeout;
+	u32	inter_pkt_timeout;
+	u8		inter_pkt_count;
+	u8		rsvd1[3];
+};
+
+struct bfi_enet_ib {
+	union bfi_addr_u	index_addr;
+	union {
+		u16	msix_index;
+		u16	intx_bitmask;
+	} intr;
+	u16		rsvd;
+};
+
+/**
+ * ENET command messages
+ */
+enum bfi_enet_h2i_msgs {
+	/* Rx Commands */
+	BFI_ENET_H2I_RX_CFG_SET_REQ = 1,
+	BFI_ENET_H2I_RX_CFG_CLR_REQ = 2,
+
+	BFI_ENET_H2I_RIT_CFG_REQ = 3,
+	BFI_ENET_H2I_RSS_CFG_REQ = 4,
+	BFI_ENET_H2I_RSS_ENABLE_REQ = 5,
+	BFI_ENET_H2I_RX_PROMISCUOUS_REQ = 6,
+	BFI_ENET_H2I_RX_DEFAULT_REQ = 7,
+
+	BFI_ENET_H2I_MAC_UCAST_SET_REQ = 8,
+	BFI_ENET_H2I_MAC_UCAST_CLR_REQ = 9,
+	BFI_ENET_H2I_MAC_UCAST_ADD_REQ = 10,
+	BFI_ENET_H2I_MAC_UCAST_DEL_REQ = 11,
+
+	BFI_ENET_H2I_MAC_MCAST_ADD_REQ = 12,
+	BFI_ENET_H2I_MAC_MCAST_DEL_REQ = 13,
+	BFI_ENET_H2I_MAC_MCAST_FILTER_REQ = 14,
+
+	BFI_ENET_H2I_RX_VLAN_SET_REQ = 15,
+	BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ = 16,
+
+	/* Tx Commands */
+	BFI_ENET_H2I_TX_CFG_SET_REQ = 17,
+	BFI_ENET_H2I_TX_CFG_CLR_REQ = 18,
+
+	/* Port Commands */
+	BFI_ENET_H2I_PORT_ADMIN_UP_REQ = 19,
+	BFI_ENET_H2I_SET_PAUSE_REQ = 20,
+	BFI_ENET_H2I_DIAG_LOOPBACK_REQ = 21,
+
+	/* Get Attributes Command */
+	BFI_ENET_H2I_GET_ATTR_REQ = 22,
+
+	/*  Statistics Commands */
+	BFI_ENET_H2I_STATS_GET_REQ = 23,
+	BFI_ENET_H2I_STATS_CLR_REQ = 24,
+
+	BFI_ENET_H2I_WOL_MAGIC_REQ = 25,
+	BFI_ENET_H2I_WOL_FRAME_REQ = 26,
+
+	BFI_ENET_H2I_MAX = 27,
+};
+
+enum bfi_enet_i2h_msgs {
+	/* Rx Responses */
+	BFI_ENET_I2H_RX_CFG_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_CFG_SET_REQ),
+	BFI_ENET_I2H_RX_CFG_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_CFG_CLR_REQ),
+
+	BFI_ENET_I2H_RIT_CFG_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RIT_CFG_REQ),
+	BFI_ENET_I2H_RSS_CFG_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RSS_CFG_REQ),
+	BFI_ENET_I2H_RSS_ENABLE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RSS_ENABLE_REQ),
+	BFI_ENET_I2H_RX_PROMISCUOUS_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_PROMISCUOUS_REQ),
+	BFI_ENET_I2H_RX_DEFAULT_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_DEFAULT_REQ),
+
+	BFI_ENET_I2H_MAC_UCAST_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_SET_REQ),
+	BFI_ENET_I2H_MAC_UCAST_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_CLR_REQ),
+	BFI_ENET_I2H_MAC_UCAST_ADD_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_ADD_REQ),
+	BFI_ENET_I2H_MAC_UCAST_DEL_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_UCAST_DEL_REQ),
+
+	BFI_ENET_I2H_MAC_MCAST_ADD_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_ADD_REQ),
+	BFI_ENET_I2H_MAC_MCAST_DEL_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_DEL_REQ),
+	BFI_ENET_I2H_MAC_MCAST_FILTER_RSP =
+		BFA_I2HM(BFI_ENET_H2I_MAC_MCAST_FILTER_REQ),
+
+	BFI_ENET_I2H_RX_VLAN_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_VLAN_SET_REQ),
+
+	BFI_ENET_I2H_RX_VLAN_STRIP_ENABLE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ),
+
+	/* Tx Responses */
+	BFI_ENET_I2H_TX_CFG_SET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_TX_CFG_SET_REQ),
+	BFI_ENET_I2H_TX_CFG_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_TX_CFG_CLR_REQ),
+
+	/* Port Responses */
+	BFI_ENET_I2H_PORT_ADMIN_RSP =
+		BFA_I2HM(BFI_ENET_H2I_PORT_ADMIN_UP_REQ),
+
+	BFI_ENET_I2H_SET_PAUSE_RSP =
+		BFA_I2HM(BFI_ENET_H2I_SET_PAUSE_REQ),
+	BFI_ENET_I2H_DIAG_LOOPBACK_RSP =
+		BFA_I2HM(BFI_ENET_H2I_DIAG_LOOPBACK_REQ),
+
+	/*  Attributes Response */
+	BFI_ENET_I2H_GET_ATTR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_GET_ATTR_REQ),
+
+	/* Statistics Responses */
+	BFI_ENET_I2H_STATS_GET_RSP =
+		BFA_I2HM(BFI_ENET_H2I_STATS_GET_REQ),
+	BFI_ENET_I2H_STATS_CLR_RSP =
+		BFA_I2HM(BFI_ENET_H2I_STATS_CLR_REQ),
+
+	BFI_ENET_I2H_WOL_MAGIC_RSP =
+		BFA_I2HM(BFI_ENET_H2I_WOL_MAGIC_REQ),
+	BFI_ENET_I2H_WOL_FRAME_RSP =
+		BFA_I2HM(BFI_ENET_H2I_WOL_FRAME_REQ),
+
+	/* AENs */
+	BFI_ENET_I2H_LINK_DOWN_AEN = BFA_I2HM(BFI_ENET_H2I_MAX),
+	BFI_ENET_I2H_LINK_UP_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 1),
+
+	BFI_ENET_I2H_PORT_ENABLE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 2),
+	BFI_ENET_I2H_PORT_DISABLE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 3),
+
+	BFI_ENET_I2H_BW_UPDATE_AEN = BFA_I2HM(BFI_ENET_H2I_MAX + 4),
+};
+
+/**
+ *  The following error codes can be returned by the enet commands
+ */
+enum bfi_enet_err {
+	BFI_ENET_CMD_OK		= 0,
+	BFI_ENET_CMD_FAIL	= 1,
+	BFI_ENET_CMD_DUP_ENTRY	= 2,	/* !< Duplicate entry in CAM */
+	BFI_ENET_CMD_CAM_FULL	= 3,	/* !< CAM is full */
+	BFI_ENET_CMD_NOT_OWNER	= 4,	/* !< Not permitted because not owner */
+	BFI_ENET_CMD_NOT_EXEC	= 5,	/* !< Was not sent to f/w at all */
+	BFI_ENET_CMD_WAITING	= 6,	/* !< Waiting for completion */
+	BFI_ENET_CMD_PORT_DISABLED = 7,	/* !< port in disabled state */
+};
+
+/**
+ * Generic Request
+ *
+ * bfi_enet_req is used by:
+ *	BFI_ENET_H2I_RX_CFG_CLR_REQ
+ *	BFI_ENET_H2I_TX_CFG_CLR_REQ
+ */
+struct bfi_enet_req {
+	struct bfi_msgq_mhdr mh;
+};
+
+/**
+ * Enable/Disable Request
+ *
+ * bfi_enet_enable_req is used by:
+ *	BFI_ENET_H2I_RSS_ENABLE_REQ	(enet_id must be zero)
+ *	BFI_ENET_H2I_RX_PROMISCUOUS_REQ (enet_id must be zero)
+ *	BFI_ENET_H2I_RX_DEFAULT_REQ	(enet_id must be zero)
+ *	BFI_ENET_H2I_MAC_MCAST_FILTER_REQ
+ *	BFI_ENET_H2I_PORT_ADMIN_UP_REQ	(enet_id must be zero)
+ */
+struct bfi_enet_enable_req {
+	struct		bfi_msgq_mhdr mh;
+	u8		enable;		/* 1 = enable;  0 = disable */
+	u8		rsvd[3];
+};
+
+/**
+ * Generic Response
+ */
+struct bfi_enet_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;		/*!< if error see cmd_offset */
+	u8		rsvd;
+	u16		cmd_offset;	/*!< offset to invalid parameter */
+};
+
+/**
+ * GLOBAL CONFIGURATION
+ */
+
+/**
+ * bfi_enet_attr_req is used by:
+ *	BFI_ENET_H2I_GET_ATTR_REQ
+ */
+struct bfi_enet_attr_req {
+	struct bfi_msgq_mhdr	mh;
+};
+
+/**
+ * bfi_enet_attr_rsp is used by:
+ *	BFI_ENET_I2H_GET_ATTR_RSP
+ */
+struct bfi_enet_attr_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;		/*!< if error see cmd_offset */
+	u8		rsvd;
+	u16		cmd_offset;	/*!< offset to invalid parameter */
+	u32		max_cfg;
+	u32		max_ucmac;
+	u32		rit_size;
+};
+
+/**
+ * Tx Configuration
+ *
+ * bfi_enet_tx_cfg is used by:
+ *	BFI_ENET_H2I_TX_CFG_SET_REQ
+ */
+enum bfi_enet_tx_vlan_mode {
+	BFI_ENET_TX_VLAN_NOP	= 0,
+	BFI_ENET_TX_VLAN_INS	= 1,
+	BFI_ENET_TX_VLAN_WI	= 2,
+};
+
+struct bfi_enet_tx_cfg {
+	u8		vlan_mode;	/*!< processing mode */
+	u8		rsvd;
+	u16		vlan_id;
+	u8		admit_tagged_frame;
+	u8		apply_vlan_filter;
+	u8		add_to_vswitch;
+	u8		rsvd1[1];
+};
+
+struct bfi_enet_tx_cfg_req {
+	struct bfi_msgq_mhdr mh;
+	u8			num_queues;	/* # of Tx Queues */
+	u8			rsvd[3];
+
+	struct {
+		struct bfi_enet_txq	q;
+		struct bfi_enet_ib	ib;
+	} q_cfg[BFI_ENET_TXQ_PRIO_MAX];
+
+	struct bfi_enet_ib_cfg	ib_cfg;
+
+	struct bfi_enet_tx_cfg	tx_cfg;
+};
+
+struct bfi_enet_tx_cfg_rsp {
+	struct		bfi_msgq_mhdr mh;
+	u8		error;
+	u8		hw_id;		/* For debugging */
+	u8		rsvd[2];
+	struct {
+		u32	q_dbell;	/* PCI base address offset */
+		u32	i_dbell;	/* PCI base address offset */
+		u8	hw_qid;		/* For debugging */
+		u8	rsvd[3];
+	} q_handles[BFI_ENET_TXQ_PRIO_MAX];
+};
+
+/**
+ * Rx Configuration
+ *
+ * bfi_enet_rx_cfg is used by:
+ *	BFI_ENET_H2I_RX_CFG_SET_REQ
+ */
+enum bfi_enet_rxq_type {
+	BFI_ENET_RXQ_SINGLE		= 1,
+	BFI_ENET_RXQ_LARGE_SMALL	= 2,
+	BFI_ENET_RXQ_HDS		= 3,
+	BFI_ENET_RXQ_HDS_OPT_BASED	= 4,
+};
+
+enum bfi_enet_hds_type {
+	BFI_ENET_HDS_FORCED	= 0x01,
+	BFI_ENET_HDS_IPV6_UDP	= 0x02,
+	BFI_ENET_HDS_IPV6_TCP	= 0x04,
+	BFI_ENET_HDS_IPV4_TCP	= 0x08,
+	BFI_ENET_HDS_IPV4_UDP	= 0x10,
+};
+
+struct bfi_enet_rx_cfg {
+	u8		rxq_type;
+	u8		rsvd[3];
+
+	struct {
+		u8			max_header_size;
+		u8			force_offset;
+		u8			type;
+		u8			rsvd1;
+	} hds;
+
+	u8		multi_buffer;
+	u8		strip_vlan;
+	u8		drop_untagged;
+	u8		rsvd2;
+};
+
+/*
+ * Multicast frames are received on the ql of q-set index zero.
+ * On the completion queue, RxQ ID = even is for large/data buffer queues
+ * and RxQ ID = odd is for small/header buffer queues.
+ */
+struct bfi_enet_rx_cfg_req {
+	struct bfi_msgq_mhdr mh;
+	u8			num_queue_sets;	/* # of Rx Queue Sets */
+	u8			rsvd[3];
+
+	struct {
+		struct bfi_enet_rxq	ql;	/* large/data/single buffers */
+		struct bfi_enet_rxq	qs;	/* small/header buffers */
+		struct bfi_enet_cq	cq;
+		struct bfi_enet_ib	ib;
+	} q_cfg[BFI_ENET_RX_QSET_MAX];
+
+	struct bfi_enet_ib_cfg	ib_cfg;
+
+	struct bfi_enet_rx_cfg	rx_cfg;
+};
+
+struct bfi_enet_rx_cfg_rsp {
+	struct bfi_msgq_mhdr mh;
+	u8		error;
+	u8		hw_id;	 /* For debugging */
+	u8		rsvd[2];
+	struct {
+		u32	ql_dbell; /* PCI base address offset */
+		u32	qs_dbell; /* PCI base address offset */
+		u32	i_dbell;  /* PCI base address offset */
+		u8		hw_lqid;  /* For debugging */
+		u8		hw_sqid;  /* For debugging */
+		u8		hw_cqid;  /* For debugging */
+		u8		rsvd;
+	} q_handles[BFI_ENET_RX_QSET_MAX];
+};
+
+/**
+ * RIT
+ *
+ * bfi_enet_rit_req is used by:
+ *	BFI_ENET_H2I_RIT_CFG_REQ
+ */
+struct bfi_enet_rit_req {
+	struct	bfi_msgq_mhdr mh;
+	u16	size;			/* number of table-entries used */
+	u8	rsvd[2];
+	u8	table[BFI_ENET_RSS_RIT_MAX];
+};
+
+/**
+ * RSS
+ *
+ * bfi_enet_rss_cfg_req is used by:
+ *	BFI_ENET_H2I_RSS_CFG_REQ
+ */
+enum bfi_enet_rss_type {
+	BFI_ENET_RSS_IPV6	= 0x01,
+	BFI_ENET_RSS_IPV6_TCP	= 0x02,
+	BFI_ENET_RSS_IPV4	= 0x04,
+	BFI_ENET_RSS_IPV4_TCP	= 0x08
+};
+
+struct bfi_enet_rss_cfg {
+	u8	type;
+	u8	mask;
+	u8	rsvd[2];
+	u32	key[BFI_ENET_RSS_KEY_LEN];
+};
+
+struct bfi_enet_rss_cfg_req {
+	struct bfi_msgq_mhdr	mh;
+	struct bfi_enet_rss_cfg	cfg;
+};
+
+/**
+ * MAC Unicast
+ *
+ * bfi_enet_ucast_req is used by:
+ *	BFI_ENET_H2I_MAC_UCAST_SET_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_CLR_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_ADD_REQ
+ *	BFI_ENET_H2I_MAC_UCAST_DEL_REQ
+ */
+struct bfi_enet_ucast_req {
+	struct bfi_msgq_mhdr	mh;
+	mac_t			mac_addr;
+	u8			rsvd[2];
+};
+
+/**
+ * MAC Unicast + VLAN
+ */
+struct bfi_enet_mac_n_vlan_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			vlan_id;
+	mac_t			mac_addr;
+};
+
+/**
+ * MAC Multicast
+ *
+ * bfi_enet_mcast_add_req is used by:
+ *	BFI_ENET_H2I_MAC_MCAST_ADD_REQ
+ */
+struct bfi_enet_mcast_add_req {
+	struct bfi_msgq_mhdr	mh;
+	mac_t			mac_addr;
+	u8			rsvd[2];
+};
+
+/**
+ * bfi_enet_mcast_add_rsp is used by:
+ *	BFI_ENET_I2H_MAC_MCAST_ADD_RSP
+ */
+struct bfi_enet_mcast_add_rsp {
+	struct bfi_msgq_mhdr	mh;
+	u8			error;
+	u8			rsvd;
+	u16			cmd_offset;
+	u16			handle;
+	u8			rsvd1[2];
+};
+
+/**
+ * bfi_enet_mcast_del_req is used by:
+ *	BFI_ENET_H2I_MAC_MCAST_DEL_REQ
+ */
+struct bfi_enet_mcast_del_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			handle;
+	u8			rsvd[2];
+};
+
+/**
+ * VLAN
+ *
+ * bfi_enet_rx_vlan_req is used by:
+ *	BFI_ENET_H2I_RX_VLAN_SET_REQ
+ */
+struct bfi_enet_rx_vlan_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			block_idx;
+	u8			rsvd[3];
+	u32			bit_mask[BFI_ENET_VLAN_WORDS_MAX];
+};
+
+/**
+ * PAUSE
+ *
+ * bfi_enet_set_pause_req is used by:
+ *	BFI_ENET_H2I_SET_PAUSE_REQ
+ */
+struct bfi_enet_set_pause_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			rsvd[2];
+	u8			tx_pause;	/* 1 = enable;  0 = disable */
+	u8			rx_pause;	/* 1 = enable;  0 = disable */
+};
+
+/**
+ * DIAGNOSTICS
+ *
+ * bfi_enet_diag_lb_req is used by:
+ *      BFI_ENET_H2I_DIAG_LOOPBACK_REQ
+ */
+struct bfi_enet_diag_lb_req {
+	struct bfi_msgq_mhdr	mh;
+	u8			rsvd[2];
+	u8			mode;		/* cable or Serdes */
+	u8			enable;		/* 1 = enable;  0 = disable */
+};
+
+/**
+ * enum for Loopback opmodes
+ */
+enum {
+	BFI_ENET_DIAG_LB_OPMODE_EXT = 0,
+	BFI_ENET_DIAG_LB_OPMODE_CBL = 1,
+};
+
+/**
+ * STATISTICS
+ *
+ * bfi_enet_stats_req is used by:
+ *    BFI_ENET_H2I_STATS_GET_REQ
+ *    BFI_ENET_H2I_STATS_CLR_REQ
+ */
+struct bfi_enet_stats_req {
+	struct bfi_msgq_mhdr	mh;
+	u16			stats_mask;
+	u8			rsvd[2];
+	u32			rx_enet_mask;
+	u32			tx_enet_mask;
+	union bfi_addr_u	host_buffer;
+};
+
+/**
+ * defines for "stats_mask" above.
+ */
+#define BFI_ENET_STATS_MAC    (1 << 0)    /* !< MAC Statistics */
+#define BFI_ENET_STATS_BPC    (1 << 1)    /* !< Pause Stats from BPC */
+#define BFI_ENET_STATS_RAD    (1 << 2)    /* !< Rx Admission Statistics */
+#define BFI_ENET_STATS_RX_FC  (1 << 3)    /* !< Rx FC Stats from RxA */
+#define BFI_ENET_STATS_TX_FC  (1 << 4)    /* !< Tx FC Stats from TxA */
+
+#define BFI_ENET_STATS_ALL    0x1f
+
+/* TxF Frame Statistics */
+struct bfi_enet_stats_txf {
+	u64 ucast_octets;
+	u64 ucast;
+	u64 ucast_vlan;
+
+	u64 mcast_octets;
+	u64 mcast;
+	u64 mcast_vlan;
+
+	u64 bcast_octets;
+	u64 bcast;
+	u64 bcast_vlan;
+
+	u64 errors;
+	u64 filter_vlan;      /* frames filtered due to VLAN */
+	u64 filter_mac_sa;    /* frames filtered due to SA check */
+};
+
+/* RxF Frame Statistics */
+struct bfi_enet_stats_rxf {
+	u64 ucast_octets;
+	u64 ucast;
+	u64 ucast_vlan;
+
+	u64 mcast_octets;
+	u64 mcast;
+	u64 mcast_vlan;
+
+	u64 bcast_octets;
+	u64 bcast;
+	u64 bcast_vlan;
+	u64 frame_drops;
+};
+
+/* Fixme combine fc_tx & fc_rx */
+/* FC Tx Frame Statistics */
+struct bfi_enet_stats_fc_tx {
+	u64 txf_ucast_octets;
+	u64 txf_ucast;
+	u64 txf_ucast_vlan;
+
+	u64 txf_mcast_octets;
+	u64 txf_mcast;
+	u64 txf_mcast_vlan;
+
+	u64 txf_bcast_octets;
+	u64 txf_bcast;
+	u64 txf_bcast_vlan;
+
+	u64 txf_parity_errors;
+	u64 txf_timeout;
+	u64 txf_fid_parity_errors;
+};
+
+/* FC Rx Frame Statistics */
+struct bfi_enet_stats_fc_rx {
+	u64 rxf_ucast_octets;
+	u64 rxf_ucast;
+	u64 rxf_ucast_vlan;
+
+	u64 rxf_mcast_octets;
+	u64 rxf_mcast;
+	u64 rxf_mcast_vlan;
+
+	u64 rxf_bcast_octets;
+	u64 rxf_bcast;
+	u64 rxf_bcast_vlan;
+};
+
+/* RAD Frame Statistics */
+struct bfi_enet_stats_rad {
+	u64 rx_frames;
+	u64 rx_octets;
+	u64 rx_vlan_frames;
+
+	u64 rx_ucast;
+	u64 rx_ucast_octets;
+	u64 rx_ucast_vlan;
+
+	u64 rx_mcast;
+	u64 rx_mcast_octets;
+	u64 rx_mcast_vlan;
+
+	u64 rx_bcast;
+	u64 rx_bcast_octets;
+	u64 rx_bcast_vlan;
+
+	u64 rx_drops;
+};
+
+/* BPC Tx Registers */
+struct bfi_enet_stats_bpc {
+	/* transmit stats */
+	u64 tx_pause[8];
+	u64 tx_zero_pause[8];	/*!< Pause cancellation */
+	u64 tx_first_pause[8];	/*!< Pause initiation rather than retention */
+
+	/* receive stats */
+	u64 rx_pause[8];
+	u64 rx_zero_pause[8];	/*!< Pause cancellation */
+	u64 rx_first_pause[8];	/*!< Pause initiation rather than retention */
+};
+
+/* MAC Rx Statistics */
+struct bfi_enet_stats_mac {
+	u64 frame_64;		/* both rx and tx counter */
+	u64 frame_65_127;		/* both rx and tx counter */
+	u64 frame_128_255;		/* both rx and tx counter */
+	u64 frame_256_511;		/* both rx and tx counter */
+	u64 frame_512_1023;	/* both rx and tx counter */
+	u64 frame_1024_1518;	/* both rx and tx counter */
+	u64 frame_1519_1522;	/* both rx and tx counter */
+
+	/* receive stats */
+	u64 rx_bytes;
+	u64 rx_packets;
+	u64 rx_fcs_error;
+	u64 rx_multicast;
+	u64 rx_broadcast;
+	u64 rx_control_frames;
+	u64 rx_pause;
+	u64 rx_unknown_opcode;
+	u64 rx_alignment_error;
+	u64 rx_frame_length_error;
+	u64 rx_code_error;
+	u64 rx_carrier_sense_error;
+	u64 rx_undersize;
+	u64 rx_oversize;
+	u64 rx_fragments;
+	u64 rx_jabber;
+	u64 rx_drop;
+
+	/* transmit stats */
+	u64 tx_bytes;
+	u64 tx_packets;
+	u64 tx_multicast;
+	u64 tx_broadcast;
+	u64 tx_pause;
+	u64 tx_deferral;
+	u64 tx_excessive_deferral;
+	u64 tx_single_collision;
+	u64 tx_muliple_collision;
+	u64 tx_late_collision;
+	u64 tx_excessive_collision;
+	u64 tx_total_collision;
+	u64 tx_pause_honored;
+	u64 tx_drop;
+	u64 tx_jabber;
+	u64 tx_fcs_error;
+	u64 tx_control_frame;
+	u64 tx_oversize;
+	u64 tx_undersize;
+	u64 tx_fragments;
+};
+
+/**
+ * Complete statistics, DMAed from fw to host, followed by
+ * BFI_ENET_I2H_STATS_GET_RSP
+ */
+struct bfi_enet_stats {
+	struct bfi_enet_stats_mac	mac_stats;
+	struct bfi_enet_stats_bpc	bpc_stats;
+	struct bfi_enet_stats_rad	rad_stats;
+	struct bfi_enet_stats_rad	rlb_stats;
+	struct bfi_enet_stats_fc_rx	fc_rx_stats;
+	struct bfi_enet_stats_fc_tx	fc_tx_stats;
+	struct bfi_enet_stats_rxf	rxf_stats[BFI_ENET_CFG_MAX];
+	struct bfi_enet_stats_txf	txf_stats[BFI_ENET_CFG_MAX];
+};
+
+#pragma pack()
+
+#endif  /* __BFI_ENET_H__ */
diff --git a/drivers/net/bna/bna_enet.c b/drivers/net/bna/bna_enet.c
new file mode 100644
index 0000000..668c72e
--- /dev/null
+++ b/drivers/net/bna/bna_enet.c
@@ -0,0 +1,2199 @@
+/*
+ * Linux network driver for Brocade Converged Network Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+/*
+ * Copyright (c) 2005-2011 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ */
+#include "bna.h"
+
+static void
+bna_err_handler(struct bna *bna, u32 intr_status)
+{
+	if (BNA_IS_HALT_INTR(bna, intr_status))
+		bna_halt_clear(bna);
+
+	bfa_nw_ioc_error_isr(&bna->ioceth.ioc);
+}
+
+void
+bna_mbox_handler(struct bna *bna, u32 intr_status)
+{
+	if (BNA_IS_ERR_INTR(bna, intr_status)) {
+		bna_err_handler(bna, intr_status);
+		return;
+	}
+	if (BNA_IS_MBOX_INTR(bna, intr_status))
+		bfa_nw_ioc_mbox_isr(&bna->ioceth.ioc);
+}
+
+static void
+bna_msgq_rsp_handler(void *arg, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bna *bna = (struct bna *)arg;
+	struct bna_tx *tx;
+	struct bna_rx *rx;
+
+	switch (msghdr->msg_id) {
+	case BFI_ENET_I2H_RX_CFG_SET_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rx_enet_start_rsp(rx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_RX_CFG_CLR_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rx_enet_stop_rsp(rx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_RIT_CFG_RSP:
+	case BFI_ENET_I2H_RSS_CFG_RSP:
+	case BFI_ENET_I2H_RSS_ENABLE_RSP:
+	case BFI_ENET_I2H_RX_PROMISCUOUS_RSP:
+	case BFI_ENET_I2H_RX_DEFAULT_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_SET_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_CLR_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_ADD_RSP:
+	case BFI_ENET_I2H_MAC_UCAST_DEL_RSP:
+	case BFI_ENET_I2H_MAC_MCAST_DEL_RSP:
+	case BFI_ENET_I2H_MAC_MCAST_FILTER_RSP:
+	case BFI_ENET_I2H_RX_VLAN_SET_RSP:
+	case BFI_ENET_I2H_RX_VLAN_STRIP_ENABLE_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rxf_cfg_rsp(&rx->rxf, msghdr);
+		break;
+
+	case BFI_ENET_I2H_MAC_MCAST_ADD_RSP:
+		bna_rx_from_rid(bna, msghdr->enet_id, rx);
+		if (rx)
+			bna_bfi_rxf_mcast_add_rsp(&rx->rxf, msghdr);
+		break;
+
+	case BFI_ENET_I2H_TX_CFG_SET_RSP:
+		bna_tx_from_rid(bna, msghdr->enet_id, tx);
+		if (tx)
+			bna_bfi_tx_enet_start_rsp(tx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_TX_CFG_CLR_RSP:
+		bna_tx_from_rid(bna, msghdr->enet_id, tx);
+		if (tx)
+			bna_bfi_tx_enet_stop_rsp(tx, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_ADMIN_RSP:
+		bna_bfi_ethport_admin_rsp(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_DIAG_LOOPBACK_RSP:
+		bna_bfi_ethport_lpbk_rsp(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_SET_PAUSE_RSP:
+		bna_bfi_pause_set_rsp(&bna->enet, msghdr);
+		break;
+
+	case BFI_ENET_I2H_GET_ATTR_RSP:
+		bna_bfi_attr_get_rsp(&bna->ioceth, msghdr);
+		break;
+
+	case BFI_ENET_I2H_STATS_GET_RSP:
+		bna_bfi_stats_get_rsp(bna, msghdr);
+		break;
+
+	case BFI_ENET_I2H_STATS_CLR_RSP:
+		/* No-op */
+		break;
+
+	case BFI_ENET_I2H_LINK_UP_AEN:
+		bna_bfi_ethport_linkup_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_LINK_DOWN_AEN:
+		bna_bfi_ethport_linkdown_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_ENABLE_AEN:
+		bna_bfi_ethport_enable_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_PORT_DISABLE_AEN:
+		bna_bfi_ethport_disable_aen(&bna->ethport, msghdr);
+		break;
+
+	case BFI_ENET_I2H_BW_UPDATE_AEN:
+		bna_bfi_bw_update_aen(&bna->tx_mod);
+		break;
+
+	default:
+		break;
+	}
+}
+
+/**
+ * ETHPORT
+ */
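+/*
+ * The cached callback is cleared before it is invoked, so each callback
+ * is dispatched exactly once and may safely re-arm itself.
+ */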
+#define call_ethport_stop_cbfn(_ethport)				\
+do {									\
+	if ((_ethport)->stop_cbfn) {					\
+		void (*cbfn)(struct bna_enet *);			\
+		cbfn = (_ethport)->stop_cbfn;				\
+		(_ethport)->stop_cbfn = NULL;				\
+		cbfn(&(_ethport)->bna->enet);				\
+	}								\
+} while (0)
+
+#define call_ethport_adminup_cbfn(ethport, status)			\
+do {									\
+	if ((ethport)->adminup_cbfn) {					\
+		void (*cbfn)(struct bnad *, enum bna_cb_status);	\
+		cbfn = (ethport)->adminup_cbfn;				\
+		(ethport)->adminup_cbfn = NULL;				\
+		cbfn((ethport)->bna->bnad, status);			\
+	}								\
+} while (0)
+
+static inline int
+ethport_can_be_up(struct bna_ethport *ethport)
+{
+	int ready = 0;
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		ready = ((ethport->flags & BNA_ETHPORT_F_ADMIN_UP) &&
+			 (ethport->flags & BNA_ETHPORT_F_RX_STARTED) &&
+			 (ethport->flags & BNA_ETHPORT_F_PORT_ENABLED));
+	else
+		ready = ((ethport->flags & BNA_ETHPORT_F_ADMIN_UP) &&
+			 (ethport->flags & BNA_ETHPORT_F_RX_STARTED) &&
+			 !(ethport->flags & BNA_ETHPORT_F_PORT_ENABLED));
+	return ready;
+}
+
+#define ethport_is_up ethport_can_be_up
+
+static void bna_bfi_ethport_up(struct bna_ethport *ethport);
+static void bna_bfi_ethport_down(struct bna_ethport *ethport);
+
+enum bna_ethport_event {
+	ETHPORT_E_START			= 1,
+	ETHPORT_E_STOP			= 2,
+	ETHPORT_E_FAIL			= 3,
+	ETHPORT_E_UP			= 4,
+	ETHPORT_E_DOWN			= 5,
+	ETHPORT_E_FWRESP_UP_OK		= 6,
+	ETHPORT_E_FWRESP_DOWN		= 7,
+	ETHPORT_E_FWRESP_UP_FAIL	= 8,
+};
+
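+/*
+ * Ethport state machine, happy path:
+ *   stopped -> down -> up_resp_wait -> up -> last_resp_wait -> stopped
+ * ETHPORT_E_FAIL drops back to stopped from any state.
+ */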
+bfa_fsm_state_decl(bna_ethport, stopped, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, down, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, up_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, down_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, up, struct bna_ethport,
+			enum bna_ethport_event);
+bfa_fsm_state_decl(bna_ethport, last_resp_wait, struct bna_ethport,
+			enum bna_ethport_event);
+
+static void
+bna_ethport_sm_stopped_entry(struct bna_ethport *ethport)
+{
+	call_ethport_stop_cbfn(ethport);
+}
+
+static void
+bna_ethport_sm_stopped(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_START:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	case ETHPORT_E_STOP:
+		call_ethport_stop_cbfn(ethport);
+		break;
+
+	case ETHPORT_E_FAIL:
+		/* No-op */
+		break;
+
+	case ETHPORT_E_DOWN:
+		/* This event is received due to Rx objects failing */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_down_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_down(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_UP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up_resp_wait);
+		bna_bfi_ethport_up(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_up_resp_wait_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_up_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		break;
+
+	case ETHPORT_E_FAIL:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_FAIL);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_INTERRUPT);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down_resp_wait);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_SUCCESS);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+		call_ethport_adminup_cbfn(ethport, BNA_CB_FAIL);
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	case ETHPORT_E_FWRESP_DOWN:
+		/* down_resp_wait -> up_resp_wait transition on ETHPORT_E_UP */
+		bna_bfi_ethport_up(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_down_resp_wait_entry(struct bna_ethport *ethport)
+{
+	/**
+	 * NOTE: Do not call bna_bfi_ethport_down() here. That would overrun
+	 * the mbox, since we reach this state via the up_resp_wait ->
+	 * down_resp_wait transition on event ETHPORT_E_DOWN while the up
+	 * request is still outstanding.
+	 */
+}
+
+static void
+bna_ethport_sm_down_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_UP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_up_resp_wait);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		/* up_resp_wait->down_resp_wait transition on ETHPORT_E_DOWN */
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+	case ETHPORT_E_FWRESP_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_up_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_up(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_STOP:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_last_resp_wait);
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_down_resp_wait);
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ethport_sm_last_resp_wait_entry(struct bna_ethport *ethport)
+{
+}
+
+static void
+bna_ethport_sm_last_resp_wait(struct bna_ethport *ethport,
+			enum bna_ethport_event event)
+{
+	switch (event) {
+	case ETHPORT_E_FAIL:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	case ETHPORT_E_DOWN:
+		/**
+		 * This event is received due to Rx objects stopping in
+		 * parallel to ethport
+		 */
+		/* No-op */
+		break;
+
+	case ETHPORT_E_FWRESP_UP_OK:
+		/* up_resp_wait->last_resp_wait transition on ETHPORT_E_STOP */
+		bna_bfi_ethport_down(ethport);
+		break;
+
+	case ETHPORT_E_FWRESP_UP_FAIL:
+	case ETHPORT_E_FWRESP_DOWN:
+		bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_ethport_admin_up(struct bna_ethport *ethport)
+{
+	struct bfi_enet_enable_req *admin_up_req =
+		&ethport->bfi_enet_cmd.admin_req;
+
+	bfi_msgq_mhdr_set(admin_up_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_PORT_ADMIN_UP_REQ, 0, 0);
+	admin_up_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	admin_up_req->enable = BNA_STATUS_T_ENABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &admin_up_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_admin_down(struct bna_ethport *ethport)
+{
+	struct bfi_enet_enable_req *admin_down_req =
+		&ethport->bfi_enet_cmd.admin_req;
+
+	bfi_msgq_mhdr_set(admin_down_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_PORT_ADMIN_UP_REQ, 0, 0);
+	admin_down_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	admin_down_req->enable = BNA_STATUS_T_DISABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &admin_down_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_lpbk_up(struct bna_ethport *ethport)
+{
+	struct bfi_enet_diag_lb_req *lpbk_up_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+
+	bfi_msgq_mhdr_set(lpbk_up_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_DIAG_LOOPBACK_REQ, 0, 0);
+	lpbk_up_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_diag_lb_req)));
+	lpbk_up_req->mode = (ethport->bna->enet.type ==
+				BNA_ENET_T_LOOPBACK_INTERNAL) ?
+				BFI_ENET_DIAG_LB_OPMODE_EXT :
+				BFI_ENET_DIAG_LB_OPMODE_CBL;
+	lpbk_up_req->enable = BNA_STATUS_T_ENABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_diag_lb_req), &lpbk_up_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_lpbk_down(struct bna_ethport *ethport)
+{
+	struct bfi_enet_diag_lb_req *lpbk_down_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+
+	bfi_msgq_mhdr_set(lpbk_down_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_DIAG_LOOPBACK_REQ, 0, 0);
+	lpbk_down_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_diag_lb_req)));
+	lpbk_down_req->enable = BNA_STATUS_T_DISABLED;
+
+	bfa_msgq_cmd_set(&ethport->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_diag_lb_req), &lpbk_down_req->mh);
+	bfa_msgq_cmd_post(&ethport->bna->msgq, &ethport->msgq_cmd);
+}
+
+static void
+bna_bfi_ethport_up(struct bna_ethport *ethport)
+{
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		bna_bfi_ethport_admin_up(ethport);
+	else
+		bna_bfi_ethport_lpbk_up(ethport);
+}
+
+static void
+bna_bfi_ethport_down(struct bna_ethport *ethport)
+{
+	if (ethport->bna->enet.type == BNA_ENET_T_REGULAR)
+		bna_bfi_ethport_admin_down(ethport);
+	else
+		bna_bfi_ethport_lpbk_down(ethport);
+}
+
+void
+bna_ethport_init(struct bna_ethport *ethport, struct bna *bna)
+{
+	ethport->flags |= (BNA_ETHPORT_F_ADMIN_UP | BNA_ETHPORT_F_PORT_ENABLED);
+	ethport->bna = bna;
+
+	ethport->link_status = BNA_LINK_DOWN;
+	ethport->link_cbfn = bnad_cb_ethport_link_status;
+
+	ethport->rx_started_count = 0;
+
+	ethport->stop_cbfn = NULL;
+	ethport->adminup_cbfn = NULL;
+
+	bfa_fsm_set_state(ethport, bna_ethport_sm_stopped);
+}
+
+void
+bna_ethport_uninit(struct bna_ethport *ethport)
+{
+	ethport->flags &= ~BNA_ETHPORT_F_ADMIN_UP;
+	ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+
+	ethport->bna = NULL;
+}
+
+void
+bna_ethport_start(struct bna_ethport *ethport)
+{
+	bfa_fsm_send_event(ethport, ETHPORT_E_START);
+}
+
+void
+bna_ethport_stop(struct bna_ethport *ethport)
+{
+	ethport->stop_cbfn = bna_enet_cb_ethport_stopped;
+	bfa_fsm_send_event(ethport, ETHPORT_E_STOP);
+}
+
+void
+bna_ethport_fail(struct bna_ethport *ethport)
+{
+	/* Reset the physical port status to enabled */
+	ethport->flags |= BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport->link_status != BNA_LINK_DOWN) {
+		ethport->link_status = BNA_LINK_DOWN;
+		ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+	}
+	bfa_fsm_send_event(ethport, ETHPORT_E_FAIL);
+}
+
+void
+bna_ethport_admin_up(struct bna_ethport *ethport,
+		void (*cbfn)(struct bnad *, enum bna_cb_status))
+{
+	ethport->adminup_cbfn = cbfn;
+
+	if (ethport->flags & BNA_ETHPORT_F_ADMIN_UP) {
+		call_ethport_adminup_cbfn(ethport, BNA_CB_SUCCESS);
+		return;
+	}
+
+	if ((ethport->bna->enet.type != BNA_ENET_T_REGULAR) &&
+		(ethport->flags & BNA_ETHPORT_F_PORT_ENABLED)) {
+		call_ethport_adminup_cbfn(ethport, BNA_CB_FAIL);
+		return;
+	}
+
+	ethport->flags |= BNA_ETHPORT_F_ADMIN_UP;
+
+	if (ethport_can_be_up(ethport))
+		bfa_fsm_send_event(ethport, ETHPORT_E_UP);
+}
+
+void
+bna_ethport_admin_down(struct bna_ethport *ethport)
+{
+	int ethport_up = ethport_is_up(ethport);
+
+	if (!(ethport->flags & BNA_ETHPORT_F_ADMIN_UP))
+		return;
+
+	ethport->flags &= ~BNA_ETHPORT_F_ADMIN_UP;
+
+	if (ethport_up)
+		bfa_fsm_send_event(ethport, ETHPORT_E_DOWN);
+}
+
+/* Should be called only when ethport is disabled */
+void
+bna_ethport_linkcbfn_set(struct bna_ethport *ethport,
+		      void (*linkcbfn)(struct bnad *, enum bna_link_status))
+{
+	ethport->link_cbfn = linkcbfn;
+}
+
+void
+bna_bfi_ethport_enable_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->flags |= BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport_can_be_up(ethport))
+		bfa_fsm_send_event(ethport, ETHPORT_E_UP);
+}
+
+void
+bna_bfi_ethport_disable_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	int ethport_up = ethport_is_up(ethport);
+
+	ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+
+	if (ethport_up)
+		bfa_fsm_send_event(ethport, ETHPORT_E_DOWN);
+}
+
+void
+bna_ethport_cb_rx_started(struct bna_ethport *ethport)
+{
+	ethport->rx_started_count++;
+
+	if (ethport->rx_started_count == 1) {
+		ethport->flags |= BNA_ETHPORT_F_RX_STARTED;
+
+		if (ethport_can_be_up(ethport))
+			bfa_fsm_send_event(ethport, ETHPORT_E_UP);
+	}
+}
+
+void
+bna_ethport_cb_rx_stopped(struct bna_ethport *ethport)
+{
+	int ethport_up = ethport_is_up(ethport);
+
+	ethport->rx_started_count--;
+
+	if (ethport->rx_started_count == 0) {
+		ethport->flags &= ~BNA_ETHPORT_F_RX_STARTED;
+
+		if (ethport_up)
+			bfa_fsm_send_event(ethport, ETHPORT_E_DOWN);
+	}
+}
+
+void
+bna_bfi_ethport_admin_rsp(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_enable_req *admin_req =
+		&ethport->bfi_enet_cmd.admin_req;
+	struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+
+	switch (admin_req->enable) {
+	case BNA_STATUS_T_ENABLED:
+		if (rsp->error == BFI_ENET_CMD_OK)
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_OK);
+		else {
+			ethport->flags &= ~BNA_ETHPORT_F_PORT_ENABLED;
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_FAIL);
+		}
+		break;
+
+	case BNA_STATUS_T_DISABLED:
+		bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_DOWN);
+		ethport->link_status = BNA_LINK_DOWN;
+		ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+		break;
+	}
+}
+
+void
+bna_bfi_ethport_lpbk_rsp(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_diag_lb_req *diag_lb_req =
+		&ethport->bfi_enet_cmd.lpbk_req;
+	struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+
+	switch (diag_lb_req->enable) {
+	case BNA_STATUS_T_ENABLED:
+		if (rsp->error == BFI_ENET_CMD_OK)
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_OK);
+		else {
+			ethport->flags &= ~BNA_ETHPORT_F_ADMIN_UP;
+			bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_UP_FAIL);
+		}
+		break;
+
+	case BNA_STATUS_T_DISABLED:
+		bfa_fsm_send_event(ethport, ETHPORT_E_FWRESP_DOWN);
+		break;
+	}
+}
+
+void
+bna_bfi_ethport_linkup_aen(struct bna_ethport *ethport,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->link_status = BNA_LINK_UP;
+
+	/* Dispatch events */
+	ethport->link_cbfn(ethport->bna->bnad, ethport->link_status);
+}
+
+void
+bna_bfi_ethport_linkdown_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr)
+{
+	ethport->link_status = BNA_LINK_DOWN;
+
+	/* Dispatch events */
+	ethport->link_cbfn(ethport->bna->bnad, BNA_LINK_DOWN);
+}
+
+int
+bna_ethport_is_disabled(struct bna_ethport *ethport)
+{
+	int ret = (!(ethport->flags & BNA_ETHPORT_F_PORT_ENABLED));
+	return ret;
+}
+
+/**
+ * ENET
+ */
+#define bna_enet_chld_start(enet)					\
+do {									\
+	enum bna_tx_type tx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;			\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bna_ethport_start(&(enet)->bna->ethport);			\
+	bna_tx_mod_start(&(enet)->bna->tx_mod, tx_type);		\
+	bna_rx_mod_start(&(enet)->bna->rx_mod, rx_type);		\
+} while (0)
+
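+/*
+ * chld_stop_wc counts one outstanding stop per child (ethport, Tx mod,
+ * Rx mod) and invokes bna_enet_cb_chld_stopped() once all three have
+ * reported back via bfa_wc_down().
+ */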
+#define bna_enet_chld_stop(enet)					\
+do {									\
+	enum bna_tx_type tx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;			\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bfa_wc_init(&(enet)->chld_stop_wc, bna_enet_cb_chld_stopped, (enet));\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_ethport_stop(&(enet)->bna->ethport);			\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_tx_mod_stop(&(enet)->bna->tx_mod, tx_type);			\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_rx_mod_stop(&(enet)->bna->rx_mod, rx_type);			\
+	bfa_wc_wait(&(enet)->chld_stop_wc);				\
+} while (0)
+
+#define bna_enet_chld_fail(enet)					\
+do {									\
+	bna_ethport_fail(&(enet)->bna->ethport);			\
+	bna_tx_mod_fail(&(enet)->bna->tx_mod);				\
+	bna_rx_mod_fail(&(enet)->bna->rx_mod);				\
+} while (0)
+
+#define bna_enet_rx_start(enet)						\
+do {									\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bna_rx_mod_start(&(enet)->bna->rx_mod, rx_type);		\
+} while (0)
+
+#define bna_enet_rx_stop(enet)						\
+do {									\
+	enum bna_rx_type rx_type =					\
+		((enet)->type == BNA_ENET_T_REGULAR) ?			\
+		BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;			\
+	bfa_wc_init(&(enet)->chld_stop_wc, bna_enet_cb_chld_stopped, (enet));\
+	bfa_wc_up(&(enet)->chld_stop_wc);				\
+	bna_rx_mod_stop(&(enet)->bna->rx_mod, rx_type);			\
+	bfa_wc_wait(&(enet)->chld_stop_wc);				\
+} while (0)
+
+#define call_enet_stop_cbfn(enet)					\
+do {									\
+	if ((enet)->stop_cbfn) {					\
+		void (*cbfn)(void *);					\
+		void *cbarg;						\
+		cbfn = (enet)->stop_cbfn;				\
+		cbarg = (enet)->stop_cbarg;				\
+		(enet)->stop_cbfn = NULL;				\
+		(enet)->stop_cbarg = NULL;				\
+		cbfn(cbarg);						\
+	}								\
+} while (0)
+
+#define call_enet_pause_cbfn(enet)					\
+do {									\
+	if ((enet)->pause_cbfn) {					\
+		void (*cbfn)(struct bnad *);				\
+		cbfn = (enet)->pause_cbfn;				\
+		(enet)->pause_cbfn = NULL;				\
+		cbfn((enet)->bna->bnad);				\
+	}								\
+} while (0)
+
+#define call_enet_mtu_cbfn(enet)					\
+do {									\
+	if ((enet)->mtu_cbfn) {						\
+		void (*cbfn)(struct bnad *);				\
+		cbfn = (enet)->mtu_cbfn;				\
+		(enet)->mtu_cbfn = NULL;				\
+		cbfn((enet)->bna->bnad);				\
+	}								\
+} while (0)
+
+static void bna_enet_cb_chld_stopped(void *arg);
+static void bna_bfi_pause_set(struct bna_enet *enet);
+
+enum bna_enet_event {
+	ENET_E_START			= 1,
+	ENET_E_STOP			= 2,
+	ENET_E_FAIL			= 3,
+	ENET_E_PAUSE_CFG		= 4,
+	ENET_E_MTU_CFG			= 5,
+	ENET_E_FWRESP_PAUSE		= 6,
+	ENET_E_CHLD_STOPPED		= 7,
+};
+
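+/*
+ * Enet state machine, happy path:
+ *   stopped -> pause_init_wait -> started -> chld_stop_wait -> stopped
+ * ENET_E_FAIL drops back to stopped from any state.
+ */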
+bfa_fsm_state_decl(bna_enet, stopped, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, pause_init_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, last_resp_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, started, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, cfg_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, cfg_stop_wait, struct bna_enet,
+			enum bna_enet_event);
+bfa_fsm_state_decl(bna_enet, chld_stop_wait, struct bna_enet,
+			enum bna_enet_event);
+
+static void
+bna_enet_sm_stopped_entry(struct bna_enet *enet)
+{
+	call_enet_pause_cbfn(enet);
+	call_enet_mtu_cbfn(enet);
+	call_enet_stop_cbfn(enet);
+}
+
+static void
+bna_enet_sm_stopped(struct bna_enet *enet, enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_START:
+		bfa_fsm_set_state(enet, bna_enet_sm_pause_init_wait);
+		break;
+
+	case ENET_E_STOP:
+		call_enet_stop_cbfn(enet);
+		break;
+
+	case ENET_E_FAIL:
+		/* No-op */
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		call_enet_pause_cbfn(enet);
+		break;
+
+	case ENET_E_MTU_CFG:
+		call_enet_mtu_cbfn(enet);
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		/**
+		 * This event is received due to Ethport, Tx and Rx objects
+		 * failing
+		 */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_pause_init_wait_entry(struct bna_enet *enet)
+{
+	bna_bfi_pause_set(enet);
+}
+
+static void
+bna_enet_sm_pause_init_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_last_resp_wait);
+		break;
+
+	case ENET_E_FAIL:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		enet->flags |= BNA_ENET_F_PAUSE_CHANGED;
+		break;
+
+	case ENET_E_MTU_CFG:
+		/* No-op */
+		break;
+
+	case ENET_E_FWRESP_PAUSE:
+		if (enet->flags & BNA_ENET_F_PAUSE_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+			bna_bfi_pause_set(enet);
+		} else {
+			bfa_fsm_set_state(enet, bna_enet_sm_started);
+			bna_enet_chld_start(enet);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_last_resp_wait_entry(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+}
+
+static void
+bna_enet_sm_last_resp_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+	case ENET_E_FWRESP_PAUSE:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_started_entry(struct bna_enet *enet)
+{
+	/**
+	 * NOTE: Do not call bna_enet_chld_start() here, since it will be
+	 * inadvertently called during cfg_wait->started transition as well
+	 */
+	call_enet_pause_cbfn(enet);
+	call_enet_mtu_cbfn(enet);
+}
+
+static void
+bna_enet_sm_started(struct bna_enet *enet,
+			enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		bfa_fsm_set_state(enet, bna_enet_sm_chld_stop_wait);
+		break;
+
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_wait);
+		bna_bfi_pause_set(enet);
+		break;
+
+	case ENET_E_MTU_CFG:
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_wait);
+		bna_enet_rx_stop(enet);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_cfg_wait_entry(struct bna_enet *enet)
+{
+}
+
+static void
+bna_enet_sm_cfg_wait(struct bna_enet *enet,
+			enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_STOP:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_cfg_stop_wait);
+		break;
+
+	case ENET_E_FAIL:
+		enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+		enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_PAUSE_CFG:
+		enet->flags |= BNA_ENET_F_PAUSE_CHANGED;
+		break;
+
+	case ENET_E_MTU_CFG:
+		enet->flags |= BNA_ENET_F_MTU_CHANGED;
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		bna_enet_rx_start(enet);
+		/* Fall through */
+	case ENET_E_FWRESP_PAUSE:
+		if (enet->flags & BNA_ENET_F_PAUSE_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+			bna_bfi_pause_set(enet);
+		} else if (enet->flags & BNA_ENET_F_MTU_CHANGED) {
+			enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+			bna_enet_rx_stop(enet);
+		} else {
+			bfa_fsm_set_state(enet, bna_enet_sm_started);
+		}
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_cfg_stop_wait_entry(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_PAUSE_CHANGED;
+	enet->flags &= ~BNA_ENET_F_MTU_CHANGED;
+}
+
+static void
+bna_enet_sm_cfg_stop_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_FWRESP_PAUSE:
+	case ENET_E_CHLD_STOPPED:
+		bfa_fsm_set_state(enet, bna_enet_sm_chld_stop_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_enet_sm_chld_stop_wait_entry(struct bna_enet *enet)
+{
+	bna_enet_chld_stop(enet);
+}
+
+static void
+bna_enet_sm_chld_stop_wait(struct bna_enet *enet,
+				enum bna_enet_event event)
+{
+	switch (event) {
+	case ENET_E_FAIL:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		bna_enet_chld_fail(enet);
+		break;
+
+	case ENET_E_CHLD_STOPPED:
+		bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_pause_set(struct bna_enet *enet)
+{
+	struct bfi_enet_set_pause_req *pause_req = &enet->pause_req;
+
+	bfi_msgq_mhdr_set(pause_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_SET_PAUSE_REQ, 0, 0);
+	pause_req->mh.num_entries = htons(
+	bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_set_pause_req)));
+	pause_req->tx_pause = enet->pause_config.tx_pause;
+	pause_req->rx_pause = enet->pause_config.rx_pause;
+
+	bfa_msgq_cmd_set(&enet->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_set_pause_req), &pause_req->mh);
+	bfa_msgq_cmd_post(&enet->bna->msgq, &enet->msgq_cmd);
+}
+
+static void
+bna_enet_cb_chld_stopped(void *arg)
+{
+	struct bna_enet *enet = (struct bna_enet *)arg;
+
+	bfa_fsm_send_event(enet, ENET_E_CHLD_STOPPED);
+}
+
+void
+bna_enet_init(struct bna_enet *enet, struct bna *bna)
+{
+	enet->bna = bna;
+	enet->flags = 0;
+	enet->mtu = 0;
+	enet->type = BNA_ENET_T_REGULAR;
+
+	enet->stop_cbfn = NULL;
+	enet->stop_cbarg = NULL;
+
+	enet->pause_cbfn = NULL;
+
+	enet->mtu_cbfn = NULL;
+
+	bfa_fsm_set_state(enet, bna_enet_sm_stopped);
+}
+
+void
+bna_enet_uninit(struct bna_enet *enet)
+{
+	enet->flags = 0;
+
+	enet->bna = NULL;
+}
+
+void
+bna_enet_start(struct bna_enet *enet)
+{
+	enet->flags |= BNA_ENET_F_IOCETH_READY;
+	if (enet->flags & BNA_ENET_F_ENABLED)
+		bfa_fsm_send_event(enet, ENET_E_START);
+}
+
+void
+bna_enet_stop(struct bna_enet *enet)
+{
+	enet->stop_cbfn = bna_ioceth_cb_enet_stopped;
+	enet->stop_cbarg = &enet->bna->ioceth;
+
+	enet->flags &= ~BNA_ENET_F_IOCETH_READY;
+	bfa_fsm_send_event(enet, ENET_E_STOP);
+}
+
+void
+bna_enet_fail(struct bna_enet *enet)
+{
+	enet->flags &= ~BNA_ENET_F_IOCETH_READY;
+	bfa_fsm_send_event(enet, ENET_E_FAIL);
+}
+
+void
+bna_enet_cb_ethport_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+void
+bna_enet_cb_tx_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+void
+bna_enet_cb_rx_stopped(struct bna_enet *enet)
+{
+	bfa_wc_down(&enet->chld_stop_wc);
+}
+
+void
+bna_bfi_pause_set_rsp(struct bna_enet *enet, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(enet, ENET_E_FWRESP_PAUSE);
+}
+
+int
+bna_enet_mtu_get(struct bna_enet *enet)
+{
+	return enet->mtu;
+}
+
+void
+bna_enet_enable(struct bna_enet *enet)
+{
+	if (enet->fsm != (bfa_fsm_t)bna_enet_sm_stopped)
+		return;
+
+	enet->flags |= BNA_ENET_F_ENABLED;
+
+	if (enet->flags & BNA_ENET_F_IOCETH_READY)
+		bfa_fsm_send_event(enet, ENET_E_START);
+}
+
+void
+bna_enet_disable(struct bna_enet *enet, enum bna_cleanup_type type,
+		 void (*cbfn)(void *))
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		(*cbfn)(enet->bna->bnad);
+		return;
+	}
+
+	enet->stop_cbfn = cbfn;
+	enet->stop_cbarg = enet->bna->bnad;
+
+	enet->flags &= ~BNA_ENET_F_ENABLED;
+
+	bfa_fsm_send_event(enet, ENET_E_STOP);
+}
+
+void
+bna_enet_pause_config(struct bna_enet *enet,
+		      struct bna_pause_config *pause_config,
+		      void (*cbfn)(struct bnad *))
+{
+	enet->pause_config = *pause_config;
+
+	enet->pause_cbfn = cbfn;
+
+	bfa_fsm_send_event(enet, ENET_E_PAUSE_CFG);
+}
+
+void
+bna_enet_mtu_set(struct bna_enet *enet, int mtu,
+		 void (*cbfn)(struct bnad *))
+{
+	enet->mtu = mtu;
+
+	enet->mtu_cbfn = cbfn;
+
+	bfa_fsm_send_event(enet, ENET_E_MTU_CFG);
+}
+
+void
+bna_enet_perm_mac_get(struct bna_enet *enet, mac_t *mac)
+{
+	*mac = bfa_nw_ioc_get_mac(&enet->bna->ioceth.ioc);
+}
+
+/* Should be called only when enet is disabled */
+void
+bna_enet_type_set(struct bna_enet *enet, enum bna_enet_type type)
+{
+	enet->type = type;
+}
+
+enum bna_enet_type
+bna_enet_type_get(struct bna_enet *enet)
+{
+	return enet->type;
+}
+
+/**
+ * IOCETH
+ */
+#define enable_mbox_intr(_ioceth)					\
+do {									\
+	u32 intr_status;						\
+	bna_intr_status_get((_ioceth)->bna, intr_status);		\
+	bnad_cb_mbox_intr_enable((_ioceth)->bna->bnad);			\
+	bna_mbox_intr_enable((_ioceth)->bna);				\
+} while (0)
+
+#define disable_mbox_intr(_ioceth)					\
+do {									\
+	bna_mbox_intr_disable((_ioceth)->bna);				\
+	bnad_cb_mbox_intr_disable((_ioceth)->bna->bnad);		\
+} while (0)
+
+#define call_ioceth_stop_cbfn(_ioceth)					\
+do {									\
+	if ((_ioceth)->stop_cbfn) {					\
+		void (*cbfn)(struct bnad *);				\
+		struct bnad *cbarg;					\
+		cbfn = (_ioceth)->stop_cbfn;				\
+		cbarg = (_ioceth)->stop_cbarg;				\
+		(_ioceth)->stop_cbfn = NULL;				\
+		(_ioceth)->stop_cbarg = NULL;				\
+		cbfn(cbarg);						\
+	}								\
+} while (0)
+
+#define bna_stats_mod_uninit(_stats_mod)				\
+do {									\
+} while (0)
+
+#define bna_stats_mod_start(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = true;					\
+} while (0)
+
+#define bna_stats_mod_stop(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = false;				\
+} while (0)
+
+#define bna_stats_mod_fail(_stats_mod)					\
+do {									\
+	(_stats_mod)->ioc_ready = false;				\
+	(_stats_mod)->stats_get_busy = false;				\
+	(_stats_mod)->stats_clr_busy = false;				\
+} while (0)
+
+static void bna_bfi_attr_get(struct bna_ioceth *ioceth);
+
+enum bna_ioceth_event {
+	IOCETH_E_ENABLE			= 1,
+	IOCETH_E_DISABLE		= 2,
+	IOCETH_E_IOC_RESET		= 3,
+	IOCETH_E_IOC_FAILED		= 4,
+	IOCETH_E_IOC_READY		= 5,
+	IOCETH_E_ENET_ATTR_RESP		= 6,
+	IOCETH_E_ENET_STOPPED		= 7,
+	IOCETH_E_IOC_DISABLED		= 8,
+};
+
+bfa_fsm_state_decl(bna_ioceth, stopped, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ioc_ready_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, enet_attr_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ready, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, last_resp_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, enet_stop_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, ioc_disable_wait, struct bna_ioceth,
+			enum bna_ioceth_event);
+bfa_fsm_state_decl(bna_ioceth, failed, struct bna_ioceth,
+			enum bna_ioceth_event);
+
+static void
+bna_ioceth_sm_stopped_entry(struct bna_ioceth *ioceth)
+{
+	call_ioceth_stop_cbfn(ioceth);
+}
+
+static void
+bna_ioceth_sm_stopped(struct bna_ioceth *ioceth,
+			enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_ENABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_ready_wait);
+		bfa_nw_ioc_enable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ioc_ready_wait_entry(struct bna_ioceth *ioceth)
+{
+	/**
+	 * Do not call bfa_nw_ioc_enable() here. It must be called in the
+	 * previous state due to failed -> ioc_ready_wait transition.
+	 */
+}
+
+static void
+bna_ioceth_sm_ioc_ready_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	case IOCETH_E_IOC_READY:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_enet_attr_wait);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_enet_attr_wait_entry(struct bna_ioceth *ioceth)
+{
+	bna_bfi_attr_get(ioceth);
+}
+
+static void
+bna_ioceth_sm_enet_attr_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_last_resp_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	case IOCETH_E_ENET_ATTR_RESP:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ready);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ready_entry(struct bna_ioceth *ioceth)
+{
+	bna_enet_start(&ioceth->bna->enet);
+	bna_stats_mod_start(&ioceth->bna->stats_mod);
+	bnad_cb_ioceth_ready(ioceth->bna->bnad);
+}
+
+static void
+bna_ioceth_sm_ready(struct bna_ioceth *ioceth, enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_enet_stop_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		disable_mbox_intr(ioceth);
+		bna_enet_fail(&ioceth->bna->enet);
+		bna_stats_mod_fail(&ioceth->bna->stats_mod);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_failed);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_last_resp_wait_entry(struct bna_ioceth *ioceth)
+{
+}
+
+static void
+bna_ioceth_sm_last_resp_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_FAILED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		disable_mbox_intr(ioceth);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_ENET_ATTR_RESP:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_enet_stop_wait_entry(struct bna_ioceth *ioceth)
+{
+	bna_stats_mod_stop(&ioceth->bna->stats_mod);
+	bna_enet_stop(&ioceth->bna->enet);
+}
+
+static void
+bna_ioceth_sm_enet_stop_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_FAILED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		disable_mbox_intr(ioceth);
+		bna_enet_fail(&ioceth->bna->enet);
+		bna_stats_mod_fail(&ioceth->bna->stats_mod);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_ENET_STOPPED:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_ioc_disable_wait_entry(struct bna_ioceth *ioceth)
+{
+}
+
+static void
+bna_ioceth_sm_ioc_disable_wait(struct bna_ioceth *ioceth,
+				enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_IOC_DISABLED:
+		disable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+		break;
+
+	case IOCETH_E_ENET_STOPPED:
+		/* This event is received due to enet failing */
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_ioceth_sm_failed_entry(struct bna_ioceth *ioceth)
+{
+	bnad_cb_ioceth_failed(ioceth->bna->bnad);
+}
+
+static void
+bna_ioceth_sm_failed(struct bna_ioceth *ioceth,
+			enum bna_ioceth_event event)
+{
+	switch (event) {
+	case IOCETH_E_DISABLE:
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_disable_wait);
+		bfa_nw_ioc_disable(&ioceth->ioc);
+		break;
+
+	case IOCETH_E_IOC_RESET:
+		enable_mbox_intr(ioceth);
+		bfa_fsm_set_state(ioceth, bna_ioceth_sm_ioc_ready_wait);
+		break;
+
+	case IOCETH_E_IOC_FAILED:
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_bfi_attr_get(struct bna_ioceth *ioceth)
+{
+	struct bfi_enet_attr_req *attr_req = &ioceth->attr_req;
+
+	bfi_msgq_mhdr_set(attr_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_GET_ATTR_REQ, 0, 0);
+	attr_req->mh.num_entries = htons(
+	bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_attr_req)));
+	bfa_msgq_cmd_set(&ioceth->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_attr_req), &attr_req->mh);
+	bfa_msgq_cmd_post(&ioceth->bna->msgq, &ioceth->msgq_cmd);
+}
+
+/* IOC callback functions */
+
+static void
+bna_cb_ioceth_enable(void *arg, enum bfa_status error)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	if (error)
+		bfa_fsm_send_event(ioceth, IOCETH_E_IOC_FAILED);
+	else
+		bfa_fsm_send_event(ioceth, IOCETH_E_IOC_READY);
+}
+
+static void
+bna_cb_ioceth_disable(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_DISABLED);
+}
+
+static void
+bna_cb_ioceth_hbfail(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_FAILED);
+}
+
+static void
+bna_cb_ioceth_reset(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_IOC_RESET);
+}
+
+static struct bfa_ioc_cbfn bna_ioceth_cbfn = {
+	bna_cb_ioceth_enable,
+	bna_cb_ioceth_disable,
+	bna_cb_ioceth_hbfail,
+	bna_cb_ioceth_reset
+};
+
+void
+bna_ioceth_init(struct bna_ioceth *ioceth, struct bna *bna,
+		struct bna_res_info *res_info)
+{
+	u64 dma;
+	u8 *kva;
+
+	ioceth->bna = bna;
+
+	/**
+	 * Attach IOC and claim:
+	 *	1. DMA memory for IOC attributes
+	 *	2. Kernel memory for FW trace
+	 */
+	bfa_nw_ioc_attach(&ioceth->ioc, ioceth, &bna_ioceth_cbfn);
+	bfa_nw_ioc_pci_init(&ioceth->ioc, &bna->pcidev, BFI_PCIFN_CLASS_ETH);
+
+	BNA_GET_DMA_ADDR(
+		&res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].dma, dma);
+	kva = res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].kva;
+	bfa_nw_ioc_mem_claim(&ioceth->ioc, kva, dma);
+
+	kva = res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mdl[0].kva;
+
+	/**
+	 * Attach common modules (Diag, SFP, CEE, Port) and claim respective
+	 * DMA memory.
+	 */
+	BNA_GET_DMA_ADDR(
+		&res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].dma, dma);
+	kva = res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].kva;
+	bfa_nw_cee_attach(&bna->cee, &ioceth->ioc, bna);
+	bfa_nw_cee_mem_claim(&bna->cee, kva, dma);
+	kva += bfa_nw_cee_meminfo();
+	dma += bfa_nw_cee_meminfo();
+
+	bfa_msgq_attach(&bna->msgq, &ioceth->ioc);
+	bfa_msgq_memclaim(&bna->msgq, kva, dma);
+	bfa_msgq_regisr(&bna->msgq, BFI_MC_ENET, bna_msgq_rsp_handler, bna);
+	kva += bfa_msgq_meminfo();
+	dma += bfa_msgq_meminfo();
+
+	ioceth->stop_cbfn = NULL;
+	ioceth->stop_cbarg = NULL;
+
+	bfa_fsm_set_state(ioceth, bna_ioceth_sm_stopped);
+}
+
+void
+bna_ioceth_uninit(struct bna_ioceth *ioceth)
+{
+	bfa_nw_ioc_detach(&ioceth->ioc);
+
+	ioceth->bna = NULL;
+}
+
+void
+bna_bfi_attr_get_rsp(struct bna_ioceth *ioceth,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_attr_rsp *rsp = (struct bfi_enet_attr_rsp *)msghdr;
+
+	/**
+	 * Store only if not set earlier, since BNAD can override the HW
+	 * attributes
+	 */
+	if (!ioceth->attr.num_txq)
+		ioceth->attr.num_txq = ntohl(rsp->max_cfg);
+	if (!ioceth->attr.num_rxp)
+		ioceth->attr.num_rxp = ntohl(rsp->max_cfg);
+	ioceth->attr.num_ucmac = ntohl(rsp->max_ucmac);
+	ioceth->attr.num_mcmac = BFI_ENET_MAX_MCAM;
+	ioceth->attr.max_rit_size = ntohl(rsp->rit_size);
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_ENET_ATTR_RESP);
+}
+
+void
+bna_ioceth_cb_enet_stopped(void *arg)
+{
+	struct bna_ioceth *ioceth = (struct bna_ioceth *)arg;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_ENET_STOPPED);
+}
+
+void
+bna_ioceth_enable(struct bna_ioceth *ioceth)
+{
+	if (ioceth->fsm == (bfa_fsm_t)bna_ioceth_sm_ready) {
+		bnad_cb_ioceth_ready(ioceth->bna->bnad);
+		return;
+	}
+
+	if (ioceth->fsm == (bfa_fsm_t)bna_ioceth_sm_stopped)
+		bfa_fsm_send_event(ioceth, IOCETH_E_ENABLE);
+}
+
+void
+bna_ioceth_disable(struct bna_ioceth *ioceth, enum bna_cleanup_type type)
+{
+	if (type == BNA_SOFT_CLEANUP) {
+		bnad_cb_ioceth_disabled(ioceth->bna->bnad);
+		return;
+	}
+
+	ioceth->stop_cbfn = bnad_cb_ioceth_disabled;
+	ioceth->stop_cbarg = ioceth->bna->bnad;
+
+	bfa_fsm_send_event(ioceth, IOCETH_E_DISABLE);
+}
+
+bool
+bna_ioceth_state_is_failed(struct bna_ioceth *ioceth)
+{
+	return ioceth->fsm == (bfa_fsm_t)bna_ioceth_sm_failed;
+}
+
+static void
+bna_ucam_mod_init(struct bna_ucam_mod *ucam_mod, struct bna *bna,
+		  struct bna_res_info *res_info)
+{
+	int i;
+
+	ucam_mod->ucmac = (struct bna_mac *)
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&ucam_mod->free_q);
+	for (i = 0; i < bna->ioceth.attr.num_ucmac; i++) {
+		bfa_q_qe_init(&ucam_mod->ucmac[i].qe);
+		list_add_tail(&ucam_mod->ucmac[i].qe, &ucam_mod->free_q);
+	}
+
+	ucam_mod->bna = bna;
+}
+
+static void
+bna_ucam_mod_uninit(struct bna_ucam_mod *ucam_mod)
+{
+	struct list_head *qe;
+	int i = 0;
+
+	list_for_each(qe, &ucam_mod->free_q)
+		i++;
+
+	ucam_mod->bna = NULL;
+}
+
+static void
+bna_mcam_mod_init(struct bna_mcam_mod *mcam_mod, struct bna *bna,
+		  struct bna_res_info *res_info)
+{
+	int i;
+
+	mcam_mod->mcmac = (struct bna_mac *)
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&mcam_mod->free_q);
+	for (i = 0; i < bna->ioceth.attr.num_mcmac; i++) {
+		bfa_q_qe_init(&mcam_mod->mcmac[i].qe);
+		list_add_tail(&mcam_mod->mcmac[i].qe, &mcam_mod->free_q);
+	}
+
+	mcam_mod->mchandle = (struct bna_mcam_handle *)
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.mdl[0].kva;
+
+	INIT_LIST_HEAD(&mcam_mod->free_handle_q);
+	for (i = 0; i < bna->ioceth.attr.num_mcmac; i++) {
+		bfa_q_qe_init(&mcam_mod->mchandle[i].qe);
+		list_add_tail(&mcam_mod->mchandle[i].qe,
+				&mcam_mod->free_handle_q);
+	}
+
+	mcam_mod->bna = bna;
+}
+
+static void
+bna_mcam_mod_uninit(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+	int i;
+
+	i = 0;
+	list_for_each(qe, &mcam_mod->free_q)
+		i++;
+
+	i = 0;
+	list_for_each(qe, &mcam_mod->free_handle_q)
+		i++;
+
+	mcam_mod->bna = NULL;
+}
+
+static void
+bna_bfi_stats_get(struct bna *bna)
+{
+	struct bfi_enet_stats_req *stats_req = &bna->stats_mod.stats_get;
+
+	bna->stats_mod.stats_get_busy = true;
+
+	bfi_msgq_mhdr_set(stats_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_STATS_GET_REQ, 0, 0);
+	stats_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_stats_req)));
+	stats_req->stats_mask = htons(BFI_ENET_STATS_ALL);
+	stats_req->tx_enet_mask = htonl(bna->tx_mod.rid_mask);
+	stats_req->rx_enet_mask = htonl(bna->rx_mod.rid_mask);
+	stats_req->host_buffer.a32.addr_hi = bna->stats.hw_stats_dma.msb;
+	stats_req->host_buffer.a32.addr_lo = bna->stats.hw_stats_dma.lsb;
+
+	bfa_msgq_cmd_set(&bna->stats_mod.stats_get_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_stats_req), &stats_req->mh);
+	bfa_msgq_cmd_post(&bna->msgq, &bna->stats_mod.stats_get_cmd);
+}
+
+#define bna_stats_copy(_name, _type)					\
+do {									\
+	count = sizeof(struct bfi_enet_stats_ ## _type) / sizeof(u64);	\
+	stats_src = (u64 *)&bna->stats.hw_stats_kva->_name ## _stats;	\
+	stats_dst = (u64 *)&bna->stats.hw_stats._name ## _stats;	\
+	for (i = 0; i < count; i++)					\
+		stats_dst[i] = be64_to_cpu(stats_src[i]);		\
+} while (0)
+
+void
+bna_bfi_stats_get_rsp(struct bna *bna, struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_stats_req *stats_req = &bna->stats_mod.stats_get;
+	u64 *stats_src;
+	u64 *stats_dst;
+	u32 tx_enet_mask = ntohl(stats_req->tx_enet_mask);
+	u32 rx_enet_mask = ntohl(stats_req->rx_enet_mask);
+	int count;
+	int i;
+
+	bna_stats_copy(mac, mac);
+	bna_stats_copy(bpc, bpc);
+	bna_stats_copy(rad, rad);
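+	/* rlb stats reuse the rad stats layout (see bna_stats_copy) */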
+	bna_stats_copy(rlb, rad);
+	bna_stats_copy(fc_rx, fc_rx);
+	bna_stats_copy(fc_tx, fc_tx);
+
+	stats_src = (u64 *)&(bna->stats.hw_stats_kva->rxf_stats[0]);
+
+	/* Copy Rxf stats to SW area, scatter them while copying */
+	for (i = 0; i < BFI_ENET_CFG_MAX; i++) {
+		stats_dst = (u64 *)&(bna->stats.hw_stats.rxf_stats[i]);
+		memset(stats_dst, 0, sizeof(struct bfi_enet_stats_rxf));
+		if (rx_enet_mask & ((u32)(1 << i))) {
+			int k;
+			count = sizeof(struct bfi_enet_stats_rxf) /
+				sizeof(u64);
+			for (k = 0; k < count; k++) {
+				stats_dst[k] = be64_to_cpu(*stats_src);
+				stats_src++;
+			}
+		}
+	}
+
+	/* Copy Txf stats to SW area, scatter them while copying */
+	for (i = 0; i < BFI_ENET_CFG_MAX; i++) {
+		stats_dst = (u64 *)&(bna->stats.hw_stats.txf_stats[i]);
+		memset(stats_dst, 0, sizeof(struct bfi_enet_stats_txf));
+		if (tx_enet_mask & ((u32)(1 << i))) {
+			int k;
+			count = sizeof(struct bfi_enet_stats_txf) /
+				sizeof(u64);
+			for (k = 0; k < count; k++) {
+				stats_dst[k] = be64_to_cpu(*stats_src);
+				stats_src++;
+			}
+		}
+	}
+
+	bna->stats_mod.stats_get_busy = false;
+	bnad_cb_stats_get(bna->bnad, BNA_CB_SUCCESS, &bna->stats);
+}
+
+void
+bna_res_req(struct bna_res_info *res_info)
+{
+	/* DMA memory for COMMON_MODULE */
+	res_info[BNA_RES_MEM_T_COM].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.len = ALIGN(
+				(bfa_nw_cee_meminfo() +
+				bfa_msgq_meminfo()), PAGE_SIZE);
+
+	/* DMA memory for retrieving IOC attributes */
+	res_info[BNA_RES_MEM_T_ATTR].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.len =
+				ALIGN(bfa_nw_ioc_meminfo(), PAGE_SIZE);
+
+	/* Virtual memory for retrieving fw_trc */
+	res_info[BNA_RES_MEM_T_FWTRC].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mem_type = BNA_MEM_T_KVA;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.num = 0;
+	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.len = 0;
+
+	/* DMA memory for retrieving stats */
+	res_info[BNA_RES_MEM_T_STATS].res_type = BNA_RES_T_MEM;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.num = 1;
+	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.len =
+				ALIGN(sizeof(struct bfi_enet_stats),
+					PAGE_SIZE);
+}
+
+void
+bna_mod_res_req(struct bna *bna, struct bna_res_info *res_info)
+{
+	struct bna_attr *attr = &bna->ioceth.attr;
+
+	/* Virtual memory for Tx objects - stored by Tx module */
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.len =
+		attr->num_txq * sizeof(struct bna_tx);
+
+	/* Virtual memory for TxQ - stored by Tx module */
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.len =
+		attr->num_txq * sizeof(struct bna_txq);
+
+	/* Virtual memory for Rx objects - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.len =
+		attr->num_rxp * sizeof(struct bna_rx);
+
+	/* Virtual memory for RxPath - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.len =
+		attr->num_rxp * sizeof(struct bna_rxp);
+
+	/* Virtual memory for RxQ - stored by Rx module */
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.len =
+		(attr->num_rxp * 2) * sizeof(struct bna_rxq);
+
+	/* Virtual memory for Unicast MAC address - stored by ucam module */
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.len =
+		attr->num_ucmac * sizeof(struct bna_mac);
+
+	/* Virtual memory for Multicast MAC address - stored by mcam module */
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.len =
+		attr->num_mcmac * sizeof(struct bna_mac);
+
+	/* Virtual memory for Multicast handle - stored by mcam module */
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_type = BNA_RES_T_MEM;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.mem_type =
+		BNA_MEM_T_KVA;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.num = 1;
+	res_info[BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY].res_u.mem_info.len =
+		attr->num_mcmac * sizeof(struct bna_mcam_handle);
+}
+
+void
+bna_init(struct bna *bna, struct bnad *bnad,
+		struct bfa_pcidev *pcidev, struct bna_res_info *res_info)
+{
+	bna->bnad = bnad;
+	bna->pcidev = *pcidev;
+
+	bna->stats.hw_stats_kva = (struct bfi_enet_stats *)
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].kva;
+	bna->stats.hw_stats_dma.msb =
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.msb;
+	bna->stats.hw_stats_dma.lsb =
+		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.lsb;
+
+	bna_reg_addr_init(bna, &bna->pcidev);
+
+	/* Also initializes diag, cee, sfp, phy_port, msgq */
+	bna_ioceth_init(&bna->ioceth, bna, res_info);
+
+	bna_enet_init(&bna->enet, bna);
+	bna_ethport_init(&bna->ethport, bna);
+}
+
+void
+bna_mod_init(struct bna *bna, struct bna_res_info *res_info)
+{
+	bna_tx_mod_init(&bna->tx_mod, bna, res_info);
+
+	bna_rx_mod_init(&bna->rx_mod, bna, res_info);
+
+	bna_ucam_mod_init(&bna->ucam_mod, bna, res_info);
+
+	bna_mcam_mod_init(&bna->mcam_mod, bna, res_info);
+
+	bna->default_mode_rid = BFI_INVALID_RID;
+	bna->promisc_rid = BFI_INVALID_RID;
+
+	bna->mod_flags |= BNA_MOD_F_INIT_DONE;
+}
+
+void
+bna_uninit(struct bna *bna)
+{
+	if (bna->mod_flags & BNA_MOD_F_INIT_DONE) {
+		bna_mcam_mod_uninit(&bna->mcam_mod);
+		bna_ucam_mod_uninit(&bna->ucam_mod);
+		bna_rx_mod_uninit(&bna->rx_mod);
+		bna_tx_mod_uninit(&bna->tx_mod);
+		bna->mod_flags &= ~BNA_MOD_F_INIT_DONE;
+	}
+
+	bna_stats_mod_uninit(&bna->stats_mod);
+	bna_ethport_uninit(&bna->ethport);
+	bna_enet_uninit(&bna->enet);
+
+	bna_ioceth_uninit(&bna->ioceth);
+
+	bna->bnad = NULL;
+}
+
+int
+bna_num_txq_set(struct bna *bna, int num_txq)
+{
+	if (num_txq > 0 && (num_txq <= bna->ioceth.attr.num_txq)) {
+		bna->ioceth.attr.num_txq = num_txq;
+		return BNA_CB_SUCCESS;
+	}
+
+	return BNA_CB_FAIL;
+}
+
+int
+bna_num_rxp_set(struct bna *bna, int num_rxp)
+{
+	if (num_rxp > 0 && (num_rxp <= bna->ioceth.attr.num_rxp)) {
+		bna->ioceth.attr.num_rxp = num_rxp;
+		return BNA_CB_SUCCESS;
+	}
+
+	return BNA_CB_FAIL;
+}
+
+struct bna_mac *
+bna_ucam_mod_mac_get(struct bna_ucam_mod *ucam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&ucam_mod->free_q))
+		return NULL;
+
+	bfa_q_deq(&ucam_mod->free_q, &qe);
+
+	return (struct bna_mac *)qe;
+}
+
+void
+bna_ucam_mod_mac_put(struct bna_ucam_mod *ucam_mod, struct bna_mac *mac)
+{
+	list_add_tail(&mac->qe, &ucam_mod->free_q);
+}
+
+struct bna_mac *
+bna_mcam_mod_mac_get(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&mcam_mod->free_q))
+		return NULL;
+
+	bfa_q_deq(&mcam_mod->free_q, &qe);
+
+	return (struct bna_mac *)qe;
+}
+
+void
+bna_mcam_mod_mac_put(struct bna_mcam_mod *mcam_mod, struct bna_mac *mac)
+{
+	list_add_tail(&mac->qe, &mcam_mod->free_q);
+}
+
+struct bna_mcam_handle *
+bna_mcam_mod_handle_get(struct bna_mcam_mod *mcam_mod)
+{
+	struct list_head *qe;
+
+	if (list_empty(&mcam_mod->free_handle_q))
+		return NULL;
+
+	bfa_q_deq(&mcam_mod->free_handle_q, &qe);
+
+	return (struct bna_mcam_handle *)qe;
+}
+
+void
+bna_mcam_mod_handle_put(struct bna_mcam_mod *mcam_mod,
+			struct bna_mcam_handle *handle)
+{
+	list_add_tail(&handle->qe, &mcam_mod->free_handle_q);
+}
+
+void
+bna_hw_stats_get(struct bna *bna)
+{
+	if (!bna->stats_mod.ioc_ready) {
+		bnad_cb_stats_get(bna->bnad, BNA_CB_FAIL, &bna->stats);
+		return;
+	}
+	if (bna->stats_mod.stats_get_busy) {
+		bnad_cb_stats_get(bna->bnad, BNA_CB_BUSY, &bna->stats);
+		return;
+	}
+
+	bna_bfi_stats_get(bna);
+}
-- 
1.7.1



* [PATCH 08/45] bna: Tx and Rx Redesign
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (6 preceding siblings ...)
  2011-07-18  8:19 ` [PATCH 07/45] bna: Introduce ENET as New Driver and FW Interface Rasesh Mody
@ 2011-07-18  8:19 ` Rasesh Mody
  2011-07-18  8:19 ` [PATCH 09/45] bna: ENET and Tx Rx Redesign Update Rasesh Mody
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - This patch contains the changes resulting from the redesign of the Tx and
   Rx data path setup. In the old design, setting up TxQs and RxQs was done in
   the driver. With the new design, most of the hardware setup steps for the
   TxQs and RxQs are moved to FW. The host driver issues commands to FW
   through the message queue to set up / tear down the Tx and Rx data paths
   (see the sketch after this list). FW performs the necessary steps and
   responds to the driver with a status.
 - As a result of this redesign, the state machine implementation for the Tx
   and Rx objects has changed significantly. Instead of doing raw register
   accesses, these state machines mostly send a command to FW, wait for the
   response, and then take the next action. In addition to Tx and Rx data
   path setup, this patch also deals with Rx filter configuration, such as
   unicast addresses, multicast addresses, VLAN filters and promiscuous mode.
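
The command/response pattern all of these state machines follow is the one
the message queue introduced earlier in the series. Below is a minimal
sketch of one config-apply step; BFI_ENET_H2I_EXAMPLE_REQ and struct
bfi_enet_example_req are placeholder names (the real opcodes and request
layouts live in bfi_enet.h), and the flow mirrors bna_bfi_pause_set() /
bna_bfi_pause_set_rsp() from the ENET patch:

	static void
	bna_bfi_example_req(struct bna_enet *enet)
	{
		struct bfi_enet_example_req *req = &enet->example_req;

		/* Fill the message header: class, opcode, resource id */
		bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
			BFI_ENET_H2I_EXAMPLE_REQ, 0, 0);
		req->mh.num_entries = htons(
			bfi_msgq_num_cmd_entries(sizeof(*req)));

		/* Post the command; the msgq layer owns completion */
		bfa_msgq_cmd_set(&enet->msgq_cmd, NULL, NULL,
			sizeof(*req), &req->mh);
		bfa_msgq_cmd_post(&enet->bna->msgq, &enet->msgq_cmd);
	}

The msgq response handler then feeds an event (for example
ENET_E_FWRESP_PAUSE) back into the object's FSM, which either applies the
next pending configuration or settles into the started state.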

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bna_txrx.c | 4385 +++++++++++++++++++++-----------------------
 1 files changed, 2083 insertions(+), 2302 deletions(-)

diff --git a/drivers/net/bna/bna_txrx.c b/drivers/net/bna/bna_txrx.c
index 4d4f4d9..a44ee12 100644
--- a/drivers/net/bna/bna_txrx.c
+++ b/drivers/net/bna/bna_txrx.c
@@ -16,516 +16,68 @@
  * www.brocade.com
  */
 #include "bna.h"
-#include "bfa_sm.h"
 #include "bfi.h"
 
 /**
  * IB
  */
-#define bna_ib_find_free_ibidx(_mask, _pos)\
-do {\
-	(_pos) = 0;\
-	while (((_pos) < (BFI_IBIDX_MAX_SEGSIZE)) &&\
-		((1 << (_pos)) & (_mask)))\
-		(_pos)++;\
-} while (0)
-
-#define bna_ib_count_ibidx(_mask, _count)\
-do {\
-	int pos = 0;\
-	(_count) = 0;\
-	while (pos < (BFI_IBIDX_MAX_SEGSIZE)) {\
-		if ((1 << pos) & (_mask))\
-			(_count) = pos + 1;\
-		pos++;\
-	} \
-} while (0)
-
-#define bna_ib_select_segpool(_count, _q_idx)\
-do {\
-	int i;\
-	(_q_idx) = -1;\
-	for (i = 0; i < BFI_IBIDX_TOTAL_POOLS; i++) {\
-		if ((_count <= ibidx_pool[i].pool_entry_size)) {\
-			(_q_idx) = i;\
-			break;\
-		} \
-	} \
-} while (0)
-
-struct bna_ibidx_pool {
-	int	pool_size;
-	int	pool_entry_size;
-};
-init_ibidx_pool(ibidx_pool);
-
-static struct bna_intr *
-bna_intr_get(struct bna_ib_mod *ib_mod, enum bna_intr_type intr_type,
-		int vector)
-{
-	struct bna_intr *intr;
-	struct list_head *qe;
-
-	list_for_each(qe, &ib_mod->intr_active_q) {
-		intr = (struct bna_intr *)qe;
-
-		if ((intr->intr_type == intr_type) &&
-			(intr->vector == vector)) {
-			intr->ref_count++;
-			return intr;
-		}
-	}
-
-	if (list_empty(&ib_mod->intr_free_q))
-		return NULL;
-
-	bfa_q_deq(&ib_mod->intr_free_q, &intr);
-	bfa_q_qe_init(&intr->qe);
-
-	intr->ref_count = 1;
-	intr->intr_type = intr_type;
-	intr->vector = vector;
-
-	list_add_tail(&intr->qe, &ib_mod->intr_active_q);
-
-	return intr;
-}
-
-static void
-bna_intr_put(struct bna_ib_mod *ib_mod,
-		struct bna_intr *intr)
-{
-	intr->ref_count--;
-
-	if (intr->ref_count == 0) {
-		intr->ib = NULL;
-		list_del(&intr->qe);
-		bfa_q_qe_init(&intr->qe);
-		list_add_tail(&intr->qe, &ib_mod->intr_free_q);
-	}
-}
-
 void
-bna_ib_mod_init(struct bna_ib_mod *ib_mod, struct bna *bna,
-		struct bna_res_info *res_info)
-{
-	int i;
-	int j;
-	int count;
-	u8 offset;
-	struct bna_doorbell_qset *qset;
-	unsigned long off;
-
-	ib_mod->bna = bna;
-
-	ib_mod->ib = (struct bna_ib *)
-		res_info[BNA_RES_MEM_T_IB_ARRAY].res_u.mem_info.mdl[0].kva;
-	ib_mod->intr = (struct bna_intr *)
-		res_info[BNA_RES_MEM_T_INTR_ARRAY].res_u.mem_info.mdl[0].kva;
-	ib_mod->idx_seg = (struct bna_ibidx_seg *)
-		res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_u.mem_info.mdl[0].kva;
-
-	INIT_LIST_HEAD(&ib_mod->ib_free_q);
-	INIT_LIST_HEAD(&ib_mod->intr_free_q);
-	INIT_LIST_HEAD(&ib_mod->intr_active_q);
-
-	for (i = 0; i < BFI_IBIDX_TOTAL_POOLS; i++)
-		INIT_LIST_HEAD(&ib_mod->ibidx_seg_pool[i]);
-
-	for (i = 0; i < BFI_MAX_IB; i++) {
-		ib_mod->ib[i].ib_id = i;
-
-		ib_mod->ib[i].ib_seg_host_addr_kva =
-		res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].kva;
-		ib_mod->ib[i].ib_seg_host_addr.lsb =
-		res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.lsb;
-		ib_mod->ib[i].ib_seg_host_addr.msb =
-		res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.msb;
-
-		qset = (struct bna_doorbell_qset *)0;
-		off = (unsigned long)(&qset[i >> 1].ib0[(i & 0x1)
-					* (0x20 >> 2)]);
-		ib_mod->ib[i].door_bell.doorbell_addr = off +
-			BNA_GET_DOORBELL_BASE_ADDR(bna->pcidev.pci_bar_kva);
-
-		bfa_q_qe_init(&ib_mod->ib[i].qe);
-		list_add_tail(&ib_mod->ib[i].qe, &ib_mod->ib_free_q);
-
-		bfa_q_qe_init(&ib_mod->intr[i].qe);
-		list_add_tail(&ib_mod->intr[i].qe, &ib_mod->intr_free_q);
-	}
-
-	count = 0;
-	offset = 0;
-	for (i = 0; i < BFI_IBIDX_TOTAL_POOLS; i++) {
-		for (j = 0; j < ibidx_pool[i].pool_size; j++) {
-			bfa_q_qe_init(&ib_mod->idx_seg[count]);
-			ib_mod->idx_seg[count].ib_seg_size =
-					ibidx_pool[i].pool_entry_size;
-			ib_mod->idx_seg[count].ib_idx_tbl_offset = offset;
-			list_add_tail(&ib_mod->idx_seg[count].qe,
-				&ib_mod->ibidx_seg_pool[i]);
-			count++;
-			offset += ibidx_pool[i].pool_entry_size;
-		}
-	}
-}
-
-void
-bna_ib_mod_uninit(struct bna_ib_mod *ib_mod)
-{
-	int i;
-	int j;
-	struct list_head *qe;
-
-	i = 0;
-	list_for_each(qe, &ib_mod->ib_free_q)
-		i++;
-
-	i = 0;
-	list_for_each(qe, &ib_mod->intr_free_q)
-		i++;
-
-	for (i = 0; i < BFI_IBIDX_TOTAL_POOLS; i++) {
-		j = 0;
-		list_for_each(qe, &ib_mod->ibidx_seg_pool[i])
-			j++;
-	}
-
-	ib_mod->bna = NULL;
-}
-
-static struct bna_ib *
-bna_ib_get(struct bna_ib_mod *ib_mod,
-		enum bna_intr_type intr_type,
-		int vector)
-{
-	struct bna_ib *ib;
-	struct bna_intr *intr;
-
-	if (intr_type == BNA_INTR_T_INTX)
-		vector = (1 << vector);
-
-	intr = bna_intr_get(ib_mod, intr_type, vector);
-	if (intr == NULL)
-		return NULL;
-
-	if (intr->ib) {
-		if (intr->ib->ref_count == BFI_IBIDX_MAX_SEGSIZE) {
-			bna_intr_put(ib_mod, intr);
-			return NULL;
-		}
-		intr->ib->ref_count++;
-		return intr->ib;
-	}
-
-	if (list_empty(&ib_mod->ib_free_q)) {
-		bna_intr_put(ib_mod, intr);
-		return NULL;
-	}
-
-	bfa_q_deq(&ib_mod->ib_free_q, &ib);
-	bfa_q_qe_init(&ib->qe);
-
-	ib->ref_count = 1;
-	ib->start_count = 0;
-	ib->idx_mask = 0;
-
-	ib->intr = intr;
-	ib->idx_seg = NULL;
-	intr->ib = ib;
-
-	ib->bna = ib_mod->bna;
-
-	return ib;
-}
-
-static void
-bna_ib_put(struct bna_ib_mod *ib_mod, struct bna_ib *ib)
-{
-	bna_intr_put(ib_mod, ib->intr);
-
-	ib->ref_count--;
-
-	if (ib->ref_count == 0) {
-		ib->intr = NULL;
-		ib->bna = NULL;
-		list_add_tail(&ib->qe, &ib_mod->ib_free_q);
-	}
-}
-
-/* Returns index offset - starting from 0 */
-static int
-bna_ib_reserve_idx(struct bna_ib *ib)
-{
-	struct bna_ib_mod *ib_mod = &ib->bna->ib_mod;
-	struct bna_ibidx_seg *idx_seg;
-	int idx;
-	int num_idx;
-	int q_idx;
-
-	/* Find the first free index position */
-	bna_ib_find_free_ibidx(ib->idx_mask, idx);
-	if (idx == BFI_IBIDX_MAX_SEGSIZE)
-		return -1;
-
-	/*
-	 * Calculate the total number of indexes held by this IB,
-	 * including the index newly reserved above.
-	 */
-	bna_ib_count_ibidx((ib->idx_mask | (1 << idx)), num_idx);
-
-	/* See if there is a free space in the index segment held by this IB */
-	if (ib->idx_seg && (num_idx <= ib->idx_seg->ib_seg_size)) {
-		ib->idx_mask |= (1 << idx);
-		return idx;
-	}
-
-	if (ib->start_count)
-		return -1;
-
-	/* Allocate a new segment */
-	bna_ib_select_segpool(num_idx, q_idx);
-	while (1) {
-		if (q_idx == BFI_IBIDX_TOTAL_POOLS)
-			return -1;
-		if (!list_empty(&ib_mod->ibidx_seg_pool[q_idx]))
-			break;
-		q_idx++;
-	}
-	bfa_q_deq(&ib_mod->ibidx_seg_pool[q_idx], &idx_seg);
-	bfa_q_qe_init(&idx_seg->qe);
-
-	/* Free the old segment */
-	if (ib->idx_seg) {
-		bna_ib_select_segpool(ib->idx_seg->ib_seg_size, q_idx);
-		list_add_tail(&ib->idx_seg->qe, &ib_mod->ibidx_seg_pool[q_idx]);
-	}
-
-	ib->idx_seg = idx_seg;
-
-	ib->idx_mask |= (1 << idx);
-
-	return idx;
-}
-
-static void
-bna_ib_release_idx(struct bna_ib *ib, int idx)
-{
-	struct bna_ib_mod *ib_mod = &ib->bna->ib_mod;
-	struct bna_ibidx_seg *idx_seg;
-	int num_idx;
-	int cur_q_idx;
-	int new_q_idx;
-
-	ib->idx_mask &= ~(1 << idx);
-
-	if (ib->start_count)
-		return;
-
-	bna_ib_count_ibidx(ib->idx_mask, num_idx);
-
-	/*
-	 * Free the segment, if there are no more indexes in the segment
-	 * held by this IB
-	 */
-	if (!num_idx) {
-		bna_ib_select_segpool(ib->idx_seg->ib_seg_size, cur_q_idx);
-		list_add_tail(&ib->idx_seg->qe,
-			&ib_mod->ibidx_seg_pool[cur_q_idx]);
-		ib->idx_seg = NULL;
-		return;
-	}
-
-	/* See if we can move to a smaller segment */
-	bna_ib_select_segpool(num_idx, new_q_idx);
-	bna_ib_select_segpool(ib->idx_seg->ib_seg_size, cur_q_idx);
-	while (new_q_idx < cur_q_idx) {
-		if (!list_empty(&ib_mod->ibidx_seg_pool[new_q_idx]))
-			break;
-		new_q_idx++;
-	}
-	if (new_q_idx < cur_q_idx) {
-		/* Select the new smaller segment */
-		bfa_q_deq(&ib_mod->ibidx_seg_pool[new_q_idx], &idx_seg);
-		bfa_q_qe_init(&idx_seg->qe);
-		/* Free the old segment */
-		list_add_tail(&ib->idx_seg->qe,
-			&ib_mod->ibidx_seg_pool[cur_q_idx]);
-		ib->idx_seg = idx_seg;
-	}
-}
-
-static int
-bna_ib_config(struct bna_ib *ib, struct bna_ib_config *ib_config)
+bna_ib_coalescing_timeo_set(struct bna_ib *ib, u8 coalescing_timeo)
 {
-	if (ib->start_count)
-		return -1;
-
-	ib->ib_config.coalescing_timeo = ib_config->coalescing_timeo;
-	ib->ib_config.interpkt_timeo = ib_config->interpkt_timeo;
-	ib->ib_config.interpkt_count = ib_config->interpkt_count;
-	ib->ib_config.ctrl_flags = ib_config->ctrl_flags;
-
-	ib->ib_config.ctrl_flags |= BFI_IB_CF_MASTER_ENABLE;
-	if (ib->intr->intr_type == BNA_INTR_T_MSIX)
-		ib->ib_config.ctrl_flags |= BFI_IB_CF_MSIX_MODE;
-
-	return 0;
-}
-
-static void
-bna_ib_start(struct bna_ib *ib)
-{
-	struct bna_ib_blk_mem ib_cfg;
-	struct bna_ib_blk_mem *ib_mem;
-	u32 pg_num;
-	u32 intx_mask;
-	int i;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	ib->start_count++;
-
-	if (ib->start_count > 1)
-		return;
-
-	ib_cfg.host_addr_lo = (u32)(ib->ib_seg_host_addr.lsb);
-	ib_cfg.host_addr_hi = (u32)(ib->ib_seg_host_addr.msb);
-
-	ib_cfg.clsc_n_ctrl_n_msix = (((u32)
-				     ib->ib_config.coalescing_timeo << 16) |
-				((u32)ib->ib_config.ctrl_flags << 8) |
-				(ib->intr->vector));
-	ib_cfg.ipkt_n_ent_n_idxof =
-				((u32)
-				 (ib->ib_config.interpkt_timeo & 0xf) << 16) |
-				((u32)ib->idx_seg->ib_seg_size << 8) |
-				(ib->idx_seg->ib_idx_tbl_offset);
-	ib_cfg.ipkt_cnt_cfg_n_unacked = ((u32)
-					 ib->ib_config.interpkt_count << 24);
-
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + ib->bna->port_num,
-				HQM_IB_RAM_BASE_OFFSET);
-	writel(pg_num, ib->bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(ib->bna->pcidev.pci_bar_kva,
-					HQM_IB_RAM_BASE_OFFSET);
-
-	ib_mem = (struct bna_ib_blk_mem *)0;
-	off = (unsigned long)&ib_mem[ib->ib_id].host_addr_lo;
-	writel(htonl(ib_cfg.host_addr_lo), base_addr + off);
-
-	off = (unsigned long)&ib_mem[ib->ib_id].host_addr_hi;
-	writel(htonl(ib_cfg.host_addr_hi), base_addr + off);
-
-	off = (unsigned long)&ib_mem[ib->ib_id].clsc_n_ctrl_n_msix;
-	writel(ib_cfg.clsc_n_ctrl_n_msix, base_addr + off);
-
-	off = (unsigned long)&ib_mem[ib->ib_id].ipkt_n_ent_n_idxof;
-	writel(ib_cfg.ipkt_n_ent_n_idxof, base_addr + off);
-
-	off = (unsigned long)&ib_mem[ib->ib_id].ipkt_cnt_cfg_n_unacked;
-	writel(ib_cfg.ipkt_cnt_cfg_n_unacked, base_addr + off);
-
+	ib->coalescing_timeo = coalescing_timeo;
 	ib->door_bell.doorbell_ack = BNA_DOORBELL_IB_INT_ACK(
-				(u32)ib->ib_config.coalescing_timeo, 0);
-
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + ib->bna->port_num,
-				HQM_INDX_TBL_RAM_BASE_OFFSET);
-	writel(pg_num, ib->bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(ib->bna->pcidev.pci_bar_kva,
-					HQM_INDX_TBL_RAM_BASE_OFFSET);
-	for (i = 0; i < ib->idx_seg->ib_seg_size; i++) {
-		off = (unsigned long)
-		((ib->idx_seg->ib_idx_tbl_offset + i) * BFI_IBIDX_SIZE);
-		writel(0, base_addr + off);
-	}
-
-	if (ib->intr->intr_type == BNA_INTR_T_INTX) {
-		bna_intx_disable(ib->bna, intx_mask);
-		intx_mask &= ~(ib->intr->vector);
-		bna_intx_enable(ib->bna, intx_mask);
-	}
-}
-
-static void
-bna_ib_stop(struct bna_ib *ib)
-{
-	u32 intx_mask;
-
-	ib->start_count--;
-
-	if (ib->start_count == 0) {
-		writel(BNA_DOORBELL_IB_INT_DISABLE,
-				ib->door_bell.doorbell_addr);
-		if (ib->intr->intr_type == BNA_INTR_T_INTX) {
-			bna_intx_disable(ib->bna, intx_mask);
-			intx_mask |= (ib->intr->vector);
-			bna_intx_enable(ib->bna, intx_mask);
-		}
-	}
-}
-
-static void
-bna_ib_fail(struct bna_ib *ib)
-{
-	ib->start_count = 0;
+				(u32)ib->coalescing_timeo, 0);
 }
 
 /**
  * RXF
  */
-static void rxf_enable(struct bna_rxf *rxf);
-static void rxf_disable(struct bna_rxf *rxf);
-static void __rxf_config_set(struct bna_rxf *rxf);
-static void __rxf_rit_set(struct bna_rxf *rxf);
-static void __bna_rxf_stat_clr(struct bna_rxf *rxf);
-static int rxf_process_packet_filter(struct bna_rxf *rxf);
-static int rxf_clear_packet_filter(struct bna_rxf *rxf);
-static void rxf_reset_packet_filter(struct bna_rxf *rxf);
-static void rxf_cb_enabled(void *arg, int status);
-static void rxf_cb_disabled(void *arg, int status);
-static void bna_rxf_cb_stats_cleared(void *arg, int status);
-static void __rxf_enable(struct bna_rxf *rxf);
-static void __rxf_disable(struct bna_rxf *rxf);
+
+#define bna_rxf_vlan_cfg_soft_reset(rxf)				\
+do {									\
+	(rxf)->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;		\
+	(rxf)->vlan_strip_pending = true;				\
+} while (0)
+
+#define bna_rxf_rss_cfg_soft_reset(rxf)					\
+do {									\
+	if ((rxf)->rss_status == BNA_STATUS_T_ENABLED)			\
+		(rxf)->rss_pending = (BNA_RSS_F_RIT_PENDING |		\
+				BNA_RSS_F_CFG_PENDING |			\
+				BNA_RSS_F_STATUS_PENDING);		\
+} while (0)
+
+static int bna_rxf_cfg_apply(struct bna_rxf *rxf);
+static void bna_rxf_cfg_reset(struct bna_rxf *rxf);
+static int bna_rxf_fltr_clear(struct bna_rxf *rxf);
+static int bna_rxf_ucast_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_promisc_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_allmulti_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_vlan_strip_cfg_apply(struct bna_rxf *rxf);
+static int bna_rxf_ucast_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
+static int bna_rxf_promisc_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
+static int bna_rxf_allmulti_cfg_reset(struct bna_rxf *rxf,
+					enum bna_cleanup_type cleanup);
 
 bfa_fsm_state_decl(bna_rxf, stopped, struct bna_rxf,
 			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, start_wait, struct bna_rxf,
+bfa_fsm_state_decl(bna_rxf, paused, struct bna_rxf,
 			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, cam_fltr_mod_wait, struct bna_rxf,
+bfa_fsm_state_decl(bna_rxf, cfg_wait, struct bna_rxf,
 			enum bna_rxf_event);
 bfa_fsm_state_decl(bna_rxf, started, struct bna_rxf,
 			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, cam_fltr_clr_wait, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, stop_wait, struct bna_rxf,
-			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, pause_wait, struct bna_rxf,
+bfa_fsm_state_decl(bna_rxf, fltr_clr_wait, struct bna_rxf,
 			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, resume_wait, struct bna_rxf,
+bfa_fsm_state_decl(bna_rxf, last_resp_wait, struct bna_rxf,
 			enum bna_rxf_event);
-bfa_fsm_state_decl(bna_rxf, stat_clr_wait, struct bna_rxf,
-			enum bna_rxf_event);
-
-static struct bfa_sm_table rxf_sm_table[] = {
-	{BFA_SM(bna_rxf_sm_stopped), BNA_RXF_STOPPED},
-	{BFA_SM(bna_rxf_sm_start_wait), BNA_RXF_START_WAIT},
-	{BFA_SM(bna_rxf_sm_cam_fltr_mod_wait), BNA_RXF_CAM_FLTR_MOD_WAIT},
-	{BFA_SM(bna_rxf_sm_started), BNA_RXF_STARTED},
-	{BFA_SM(bna_rxf_sm_cam_fltr_clr_wait), BNA_RXF_CAM_FLTR_CLR_WAIT},
-	{BFA_SM(bna_rxf_sm_stop_wait), BNA_RXF_STOP_WAIT},
-	{BFA_SM(bna_rxf_sm_pause_wait), BNA_RXF_PAUSE_WAIT},
-	{BFA_SM(bna_rxf_sm_resume_wait), BNA_RXF_RESUME_WAIT},
-	{BFA_SM(bna_rxf_sm_stat_clr_wait), BNA_RXF_STAT_CLR_WAIT}
-};
 
 static void
 bna_rxf_sm_stopped_entry(struct bna_rxf *rxf)
 {
-	call_rxf_stop_cbfn(rxf, BNA_CB_SUCCESS);
+	call_rxf_stop_cbfn(rxf);
 }
 
 static void
@@ -533,39 +85,33 @@ bna_rxf_sm_stopped(struct bna_rxf *rxf, enum bna_rxf_event event)
 {
 	switch (event) {
 	case RXF_E_START:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_start_wait);
+		if (rxf->flags & BNA_RXF_F_PAUSED) {
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
+			call_rxf_start_cbfn(rxf);
+		} else
+			bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
 		break;
 
 	case RXF_E_STOP:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
+		call_rxf_stop_cbfn(rxf);
 		break;
 
 	case RXF_E_FAIL:
 		/* No-op */
 		break;
 
-	case RXF_E_CAM_FLTR_MOD:
-		call_rxf_cam_fltr_cbfn(rxf, BNA_CB_SUCCESS);
-		break;
-
-	case RXF_E_STARTED:
-	case RXF_E_STOPPED:
-	case RXF_E_CAM_FLTR_RESP:
-		/**
-		 * These events are received due to flushing of mbox
-		 * when device fails
-		 */
-		/* No-op */
+	case RXF_E_CONFIG:
+		call_rxf_cam_fltr_cbfn(rxf);
 		break;
 
 	case RXF_E_PAUSE:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_PAUSED;
-		call_rxf_pause_cbfn(rxf, BNA_CB_SUCCESS);
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		call_rxf_pause_cbfn(rxf);
 		break;
 
 	case RXF_E_RESUME:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_RUNNING;
-		call_rxf_resume_cbfn(rxf, BNA_CB_SUCCESS);
+		rxf->flags &= ~BNA_RXF_F_PAUSED;
+		call_rxf_resume_cbfn(rxf);
 		break;
 
 	default:
@@ -574,56 +120,27 @@ bna_rxf_sm_stopped(struct bna_rxf *rxf, enum bna_rxf_event event)
 }
 
 static void
-bna_rxf_sm_start_wait_entry(struct bna_rxf *rxf)
+bna_rxf_sm_paused_entry(struct bna_rxf *rxf)
 {
-	__rxf_config_set(rxf);
-	__rxf_rit_set(rxf);
-	rxf_enable(rxf);
+	call_rxf_pause_cbfn(rxf);
 }
 
 static void
-bna_rxf_sm_start_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+bna_rxf_sm_paused(struct bna_rxf *rxf, enum bna_rxf_event event)
 {
 	switch (event) {
 	case RXF_E_STOP:
-		/**
-		 * STOP is originated from bnad. When this happens,
-		 * it can not be waiting for filter update
-		 */
-		call_rxf_start_cbfn(rxf, BNA_CB_INTERRUPT);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stop_wait);
-		break;
-
 	case RXF_E_FAIL:
-		call_rxf_cam_fltr_cbfn(rxf, BNA_CB_SUCCESS);
-		call_rxf_start_cbfn(rxf, BNA_CB_FAIL);
 		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
 		break;
 
-	case RXF_E_CAM_FLTR_MOD:
-		/* No-op */
+	case RXF_E_CONFIG:
+		call_rxf_cam_fltr_cbfn(rxf);
 		break;
 
-	case RXF_E_STARTED:
-		/**
-		 * Force rxf_process_filter() to go through initial
-		 * config
-		 */
-		if ((rxf->ucast_active_mac != NULL) &&
-			(rxf->ucast_pending_set == 0))
-			rxf->ucast_pending_set = 1;
-
-		if (rxf->rss_status == BNA_STATUS_T_ENABLED)
-			rxf->rxf_flags |= BNA_RXF_FL_RSS_CONFIG_PENDING;
-
-		rxf->rxf_flags |= BNA_RXF_FL_VLAN_CONFIG_PENDING;
-
-		bfa_fsm_set_state(rxf, bna_rxf_sm_cam_fltr_mod_wait);
-		break;
-
-	case RXF_E_PAUSE:
 	case RXF_E_RESUME:
-		rxf->rxf_flags |= BNA_RXF_FL_OPERSTATE_CHANGED;
+		rxf->flags &= ~BNA_RXF_F_PAUSED;
+		bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
 		break;
 
 	default:
@@ -632,49 +149,45 @@ bna_rxf_sm_start_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 }
 
 static void
-bna_rxf_sm_cam_fltr_mod_wait_entry(struct bna_rxf *rxf)
+bna_rxf_sm_cfg_wait_entry(struct bna_rxf *rxf)
 {
-	if (!rxf_process_packet_filter(rxf)) {
-		/* No more pending CAM entries to update */
+	if (!bna_rxf_cfg_apply(rxf)) {
+		/* No more pending config updates */
 		bfa_fsm_set_state(rxf, bna_rxf_sm_started);
 	}
 }
 
 static void
-bna_rxf_sm_cam_fltr_mod_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+bna_rxf_sm_cfg_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 {
 	switch (event) {
 	case RXF_E_STOP:
-		/**
-		 * STOP is originated from bnad. When this happens,
-		 * it can not be waiting for filter update
-		 */
-		call_rxf_start_cbfn(rxf, BNA_CB_INTERRUPT);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_cam_fltr_clr_wait);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_last_resp_wait);
 		break;
 
 	case RXF_E_FAIL:
-		rxf_reset_packet_filter(rxf);
-		call_rxf_cam_fltr_cbfn(rxf, BNA_CB_SUCCESS);
-		call_rxf_start_cbfn(rxf, BNA_CB_FAIL);
+		bna_rxf_cfg_reset(rxf);
+		call_rxf_start_cbfn(rxf);
+		call_rxf_cam_fltr_cbfn(rxf);
+		call_rxf_resume_cbfn(rxf);
 		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
 		break;
 
-	case RXF_E_CAM_FLTR_MOD:
+	case RXF_E_CONFIG:
 		/* No-op */
 		break;
 
-	case RXF_E_CAM_FLTR_RESP:
-		if (!rxf_process_packet_filter(rxf)) {
-			/* No more pending CAM entries to update */
-			call_rxf_cam_fltr_cbfn(rxf, BNA_CB_SUCCESS);
-			bfa_fsm_set_state(rxf, bna_rxf_sm_started);
-		}
+	case RXF_E_PAUSE:
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		call_rxf_start_cbfn(rxf);
+		bfa_fsm_set_state(rxf, bna_rxf_sm_fltr_clr_wait);
 		break;
 
-	case RXF_E_PAUSE:
-	case RXF_E_RESUME:
-		rxf->rxf_flags |= BNA_RXF_FL_OPERSTATE_CHANGED;
+	case RXF_E_FW_RESP:
+		if (!bna_rxf_cfg_apply(rxf)) {
+			/* No more pending config updates */
+			bfa_fsm_set_state(rxf, bna_rxf_sm_started);
+		}
 		break;
 
 	default:
@@ -685,15 +198,9 @@ bna_rxf_sm_cam_fltr_mod_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 static void
 bna_rxf_sm_started_entry(struct bna_rxf *rxf)
 {
-	call_rxf_start_cbfn(rxf, BNA_CB_SUCCESS);
-
-	if (rxf->rxf_flags & BNA_RXF_FL_OPERSTATE_CHANGED) {
-		if (rxf->rxf_oper_state == BNA_RXF_OPER_STATE_PAUSED)
-			bfa_fsm_send_event(rxf, RXF_E_PAUSE);
-		else
-			bfa_fsm_send_event(rxf, RXF_E_RESUME);
-	}
-
+	call_rxf_start_cbfn(rxf);
+	call_rxf_cam_fltr_cbfn(rxf);
+	call_rxf_resume_cbfn(rxf);
 }
 
 static void
@@ -701,26 +208,21 @@ bna_rxf_sm_started(struct bna_rxf *rxf, enum bna_rxf_event event)
 {
 	switch (event) {
 	case RXF_E_STOP:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_cam_fltr_clr_wait);
-		/* Hack to get FSM start clearing CAM entries */
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_RESP);
-		break;
-
 	case RXF_E_FAIL:
-		rxf_reset_packet_filter(rxf);
+		bna_rxf_cfg_reset(rxf);
 		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
 		break;
 
-	case RXF_E_CAM_FLTR_MOD:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_cam_fltr_mod_wait);
+	case RXF_E_CONFIG:
+		bfa_fsm_set_state(rxf, bna_rxf_sm_cfg_wait);
 		break;
 
 	case RXF_E_PAUSE:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_pause_wait);
-		break;
-
-	case RXF_E_RESUME:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_resume_wait);
+		rxf->flags |= BNA_RXF_F_PAUSED;
+		if (!bna_rxf_fltr_clear(rxf))
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
+		else
+			bfa_fsm_set_state(rxf, bna_rxf_sm_fltr_clr_wait);
 		break;
 
 	default:
@@ -729,34 +231,24 @@ bna_rxf_sm_started(struct bna_rxf *rxf, enum bna_rxf_event event)
 }
 
 static void
-bna_rxf_sm_cam_fltr_clr_wait_entry(struct bna_rxf *rxf)
+bna_rxf_sm_fltr_clr_wait_entry(struct bna_rxf *rxf)
 {
-	/**
-	 *  Note: Do not add rxf_clear_packet_filter here.
-	 * It will overstep mbox when this transition happens:
-	 *	cam_fltr_mod_wait -> cam_fltr_clr_wait on RXF_E_STOP event
-	 */
 }
 
 static void
-bna_rxf_sm_cam_fltr_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+bna_rxf_sm_fltr_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 {
 	switch (event) {
 	case RXF_E_FAIL:
-		/**
-		 * FSM was in the process of stopping, initiated by
-		 * bnad. When this happens, no one can be waiting for
-		 * start or filter update
-		 */
-		rxf_reset_packet_filter(rxf);
+		bna_rxf_cfg_reset(rxf);
+		call_rxf_pause_cbfn(rxf);
 		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
 		break;
 
-	case RXF_E_CAM_FLTR_RESP:
-		if (!rxf_clear_packet_filter(rxf)) {
+	case RXF_E_FW_RESP:
+		if (!bna_rxf_fltr_clear(rxf)) {
 			/* No more pending CAM entries to clear */
-			bfa_fsm_set_state(rxf, bna_rxf_sm_stop_wait);
-			rxf_disable(rxf);
+			bfa_fsm_set_state(rxf, bna_rxf_sm_paused);
 		}
 		break;
 
@@ -766,674 +258,518 @@ bna_rxf_sm_cam_fltr_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 }
 
 static void
-bna_rxf_sm_stop_wait_entry(struct bna_rxf *rxf)
+bna_rxf_sm_last_resp_wait_entry(struct bna_rxf *rxf)
 {
-	/**
-	 * NOTE: Do not add  rxf_disable here.
-	 * It will overstep mbox when this transition happens:
-	 *	start_wait -> stop_wait on RXF_E_STOP event
-	 */
 }
 
 static void
-bna_rxf_sm_stop_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+bna_rxf_sm_last_resp_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
 {
 	switch (event) {
 	case RXF_E_FAIL:
-		/**
-		 * FSM was in the process of stopping, initiated by
-		 * bnad. When this happens, no one can be waiting for
-		 * start or filter update
-		 */
+	case RXF_E_FW_RESP:
+		bna_rxf_cfg_reset(rxf);
 		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
 		break;
 
-	case RXF_E_STARTED:
-		/**
-		 * This event is received due to abrupt transition from
-		 * bna_rxf_sm_start_wait state on receiving
-		 * RXF_E_STOP event
-		 */
-		rxf_disable(rxf);
-		break;
-
-	case RXF_E_STOPPED:
-		/**
-		 * FSM was in the process of stopping, initiated by
-		 * bnad. When this happens, no one can be waiting for
-		 * start or filter update
-		 */
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stat_clr_wait);
-		break;
-
-	case RXF_E_PAUSE:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_PAUSED;
-		break;
-
-	case RXF_E_RESUME:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_RUNNING;
-		break;
-
 	default:
 		bfa_sm_fault(event);
 	}
 }
 
 static void
-bna_rxf_sm_pause_wait_entry(struct bna_rxf *rxf)
+bna_bfi_ucast_req(struct bna_rxf *rxf, struct bna_mac *mac,
+		enum bfi_enet_h2i_msgs req_type)
 {
-	rxf->rxf_flags &=
-		~(BNA_RXF_FL_OPERSTATE_CHANGED | BNA_RXF_FL_RXF_ENABLED);
-	__rxf_disable(rxf);
-}
-
-static void
-bna_rxf_sm_pause_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_FAIL:
-		/**
-		 * FSM was in the process of disabling rxf, initiated by
-		 * bnad.
-		 */
-		call_rxf_pause_cbfn(rxf, BNA_CB_FAIL);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_STOPPED:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_PAUSED;
-		call_rxf_pause_cbfn(rxf, BNA_CB_SUCCESS);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_started);
-		break;
+	struct bfi_enet_ucast_req *req = &rxf->bfi_enet_cmd.ucast_req;
 
-	/*
-	 * Since PAUSE/RESUME can only be sent by bnad, we don't expect
-	 * any other event during these states
-	 */
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_rxf_sm_resume_wait_entry(struct bna_rxf *rxf)
-{
-	rxf->rxf_flags &= ~(BNA_RXF_FL_OPERSTATE_CHANGED);
-	rxf->rxf_flags |= BNA_RXF_FL_RXF_ENABLED;
-	__rxf_enable(rxf);
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, req_type, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+	bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_ucast_req)));
+	memcpy(&req->mac_addr, &mac->addr, sizeof(mac_t));
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_ucast_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-bna_rxf_sm_resume_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
+bna_bfi_mcast_add_req(struct bna_rxf *rxf, struct bna_mac *mac)
 {
-	switch (event) {
-	case RXF_E_FAIL:
-		/**
-		 * FSM was in the process of disabling rxf, initiated by
-		 * bnad.
-		 */
-		call_rxf_resume_cbfn(rxf, BNA_CB_FAIL);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	case RXF_E_STARTED:
-		rxf->rxf_oper_state = BNA_RXF_OPER_STATE_RUNNING;
-		call_rxf_resume_cbfn(rxf, BNA_CB_SUCCESS);
-		bfa_fsm_set_state(rxf, bna_rxf_sm_started);
-		break;
+	struct bfi_enet_mcast_add_req *req =
+		&rxf->bfi_enet_cmd.mcast_add_req;
 
-	/*
-	 * Since PAUSE/RESUME can only be sent by bnad, we don't expect
-	 * any other event during these states
-	 */
-	default:
-		bfa_sm_fault(event);
-	}
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, BFI_ENET_H2I_MAC_MCAST_ADD_REQ,
+		0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_mcast_add_req)));
+	memcpy(&req->mac_addr, &mac->addr, sizeof(mac_t));
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_mcast_add_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-bna_rxf_sm_stat_clr_wait_entry(struct bna_rxf *rxf)
+bna_bfi_mcast_del_req(struct bna_rxf *rxf, u16 handle)
 {
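+	/* Post a mcast delete for a fw-assigned handle */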
-	__bna_rxf_stat_clr(rxf);
-}
+	struct bfi_enet_mcast_del_req *req =
+		&rxf->bfi_enet_cmd.mcast_del_req;
 
-static void
-bna_rxf_sm_stat_clr_wait(struct bna_rxf *rxf, enum bna_rxf_event event)
-{
-	switch (event) {
-	case RXF_E_FAIL:
-	case RXF_E_STAT_CLEARED:
-		bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET, BFI_ENET_H2I_MAC_MCAST_DEL_REQ,
+		0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_mcast_del_req)));
+	req->handle = htons(handle);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_mcast_del_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-__rxf_enable(struct bna_rxf *rxf)
+bna_bfi_mcast_filter_req(struct bna_rxf *rxf, enum bna_status status)
 {
-	struct bfi_ll_rxf_multi_req ll_req;
-	u32 bm[2] = {0, 0};
-
-	if (rxf->rxf_id < 32)
-		bm[0] = 1 << rxf->rxf_id;
-	else
-		bm[1] = 1 << (rxf->rxf_id - 32);
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
 
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_RX_REQ, 0);
-	ll_req.rxf_id_mask[0] = htonl(bm[0]);
-	ll_req.rxf_id_mask[1] = htonl(bm[1]);
-	ll_req.enable = 1;
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &ll_req, sizeof(ll_req),
-			rxf_cb_enabled, rxf);
-
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_MAC_MCAST_FILTER_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-__rxf_disable(struct bna_rxf *rxf)
+bna_bfi_rx_promisc_req(struct bna_rxf *rxf, enum bna_status status)
 {
-	struct bfi_ll_rxf_multi_req ll_req;
-	u32 bm[2] = {0, 0};
-
-	if (rxf->rxf_id < 32)
-		bm[0] = 1 << rxf->rxf_id;
-	else
-		bm[1] = 1 << (rxf->rxf_id - 32);
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_RX_REQ, 0);
-	ll_req.rxf_id_mask[0] = htonl(bm[0]);
-	ll_req.rxf_id_mask[1] = htonl(bm[1]);
-	ll_req.enable = 0;
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &ll_req, sizeof(ll_req),
-			rxf_cb_disabled, rxf);
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
 
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_PROMISCUOUS_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-__rxf_config_set(struct bna_rxf *rxf)
-{
-	u32 i;
-	struct bna_rss_mem *rss_mem;
-	struct bna_rx_fndb_ram *rx_fndb_ram;
-	struct bna *bna = rxf->rx->bna;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-			RSS_TABLE_BASE_OFFSET);
-
-	rss_mem = (struct bna_rss_mem *)0;
-
-	/* Configure RSS if required */
-	if (rxf->ctrl_flags & BNA_RXF_CF_RSS_ENABLE) {
-		/* configure RSS Table */
-		writel(BNA_GET_PAGE_NUM(RAD0_MEM_BLK_BASE_PG_NUM +
-			bna->port_num, RSS_TABLE_BASE_OFFSET),
-					bna->regs.page_addr);
-
-		/* temporarily disable RSS, while hash value is written */
-		off = (unsigned long)&rss_mem[0].type_n_hash;
-		writel(0, base_addr + off);
-
-		for (i = 0; i < BFI_RSS_HASH_KEY_LEN; i++) {
-			off = (unsigned long)
-			&rss_mem[0].hash_key[(BFI_RSS_HASH_KEY_LEN - 1) - i];
-			writel(htonl(rxf->rss_cfg.toeplitz_hash_key[i]),
-			base_addr + off);
-		}
-
-		off = (unsigned long)&rss_mem[0].type_n_hash;
-		writel(rxf->rss_cfg.hash_type | rxf->rss_cfg.hash_mask,
-			base_addr + off);
-	}
-
-	/* Configure RxF */
-	writel(BNA_GET_PAGE_NUM(
-		LUT0_MEM_BLK_BASE_PG_NUM + (bna->port_num * 2),
-		RX_FNDB_RAM_BASE_OFFSET),
-		bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-		RX_FNDB_RAM_BASE_OFFSET);
-
-	rx_fndb_ram = (struct bna_rx_fndb_ram *)0;
-
-	/* We always use RSS table 0 */
-	off = (unsigned long)&rx_fndb_ram[rxf->rxf_id].rss_prop;
-	writel(rxf->ctrl_flags & BNA_RXF_CF_RSS_ENABLE,
-		base_addr + off);
-
-	/* small large buffer enable/disable */
-	off = (unsigned long)&rx_fndb_ram[rxf->rxf_id].size_routing_props;
-	writel((rxf->ctrl_flags & BNA_RXF_CF_SM_LG_RXQ) | 0x80,
-		base_addr + off);
-
-	/* RIT offset,  HDS forced offset, multicast RxQ Id */
-	off = (unsigned long)&rx_fndb_ram[rxf->rxf_id].rit_hds_mcastq;
-	writel((rxf->rit_segment->rit_offset << 16) |
-		(rxf->forced_offset << 8) |
-		(rxf->hds_cfg.hdr_type & BNA_HDS_FORCED) | rxf->mcast_rxq_id,
-		base_addr + off);
-
-	/*
-	 * default vlan tag, default function enable, strip vlan bytes,
-	 * HDS type, header size
-	 */
-
-	off = (unsigned long)&rx_fndb_ram[rxf->rxf_id].control_flags;
-	 writel(((u32)rxf->default_vlan_tag << 16) |
-		(rxf->ctrl_flags &
-			(BNA_RXF_CF_DEFAULT_VLAN |
-			BNA_RXF_CF_DEFAULT_FUNCTION_ENABLE |
-			BNA_RXF_CF_VLAN_STRIP)) |
-		(rxf->hds_cfg.hdr_type & ~BNA_HDS_FORCED) |
-		rxf->hds_cfg.header_size,
-		base_addr + off);
-}
-
-void
-__rxf_vlan_filter_set(struct bna_rxf *rxf, enum bna_status status)
+bna_bfi_rx_vlan_filter_set(struct bna_rxf *rxf, u8 block_idx)
 {
-	struct bna *bna = rxf->rx->bna;
+	struct bfi_enet_rx_vlan_req *req = &rxf->bfi_enet_cmd.vlan_req;
 	int i;
+	int j;
 
-	writel(BNA_GET_PAGE_NUM(LUT0_MEM_BLK_BASE_PG_NUM +
-			(bna->port_num * 2), VLAN_RAM_BASE_OFFSET),
-			bna->regs.page_addr);
-
-	if (status == BNA_STATUS_T_ENABLED) {
-		/* enable VLAN filtering on this function */
-		for (i = 0; i <= BFI_MAX_VLAN / 32; i++) {
-			writel(rxf->vlan_filter_table[i],
-					BNA_GET_VLAN_MEM_ENTRY_ADDR
-					(bna->pcidev.pci_bar_kva, rxf->rxf_id,
-						i * 32));
-		}
-	} else {
-		/* disable VLAN filtering on this function */
-		for (i = 0; i <= BFI_MAX_VLAN / 32; i++) {
-			writel(0xffffffff,
-					BNA_GET_VLAN_MEM_ENTRY_ADDR
-					(bna->pcidev.pci_bar_kva, rxf->rxf_id,
-						i * 32));
-		}
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_VLAN_SET_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rx_vlan_req)));
+	req->block_idx = block_idx;
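+	/* With filtering disabled, open this block to all vlans */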
+	for (i = 0; i < (BFI_ENET_VLAN_BLOCK_SIZE / 32); i++) {
+		j = (block_idx * (BFI_ENET_VLAN_BLOCK_SIZE / 32)) + i;
+		if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED)
+			req->bit_mask[i] =
+				htonl(rxf->vlan_filter_table[j]);
+		else
+			req->bit_mask[i] = 0xFFFFFFFF;
 	}
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rx_vlan_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-__rxf_rit_set(struct bna_rxf *rxf)
+bna_bfi_vlan_strip_enable(struct bna_rxf *rxf)
 {
-	struct bna *bna = rxf->rx->bna;
-	struct bna_rit_mem *rit_mem;
-	int i;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-			FUNCTION_TO_RXQ_TRANSLATE);
-
-	rit_mem = (struct bna_rit_mem *)0;
-
-	writel(BNA_GET_PAGE_NUM(RXA0_MEM_BLK_BASE_PG_NUM + bna->port_num,
-		FUNCTION_TO_RXQ_TRANSLATE),
-		bna->regs.page_addr);
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
 
-	for (i = 0; i < rxf->rit_segment->rit_size; i++) {
-		off = (unsigned long)&rit_mem[i + rxf->rit_segment->rit_offset];
-		writel(rxf->rit_segment->rit[i].large_rxq_id << 6 |
-			rxf->rit_segment->rit[i].small_rxq_id,
-			base_addr + off);
-	}
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_VLAN_STRIP_ENABLE_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = rxf->vlan_strip_status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-__bna_rxf_stat_clr(struct bna_rxf *rxf)
+bna_bfi_rit_cfg(struct bna_rxf *rxf)
 {
-	struct bfi_ll_stats_req ll_req;
-	u32 bm[2] = {0, 0};
-
-	if (rxf->rxf_id < 32)
-		bm[0] = 1 << rxf->rxf_id;
-	else
-		bm[1] = 1 << (rxf->rxf_id - 32);
+	struct bfi_enet_rit_req *req = &rxf->bfi_enet_cmd.rit_req;
 
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_STATS_CLEAR_REQ, 0);
-	ll_req.stats_mask = 0;
-	ll_req.txf_id_mask[0] = 0;
-	ll_req.txf_id_mask[1] =	0;
-
-	ll_req.rxf_id_mask[0] = htonl(bm[0]);
-	ll_req.rxf_id_mask[1] = htonl(bm[1]);
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_rxf_cb_stats_cleared, rxf);
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RIT_CFG_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rit_req)));
+	req->size = htons(rxf->rit_size);
+	memcpy(&req->table[0], rxf->rit, rxf->rit_size);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rit_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-rxf_enable(struct bna_rxf *rxf)
+bna_bfi_rss_cfg(struct bna_rxf *rxf)
 {
-	if (rxf->rxf_oper_state == BNA_RXF_OPER_STATE_PAUSED)
-		bfa_fsm_send_event(rxf, RXF_E_STARTED);
-	else {
-		rxf->rxf_flags |= BNA_RXF_FL_RXF_ENABLED;
-		__rxf_enable(rxf);
-	}
+	struct bfi_enet_rss_cfg_req *req = &rxf->bfi_enet_cmd.rss_req;
+	int i;
+
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RSS_CFG_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rss_cfg_req)));
+	req->cfg.type = rxf->rss_cfg.hash_type;
+	req->cfg.mask = rxf->rss_cfg.hash_mask;
+	for (i = 0; i < BFI_ENET_RSS_KEY_LEN; i++)
+		req->cfg.key[i] =
+			htonl(rxf->rss_cfg.toeplitz_hash_key[i]);
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rss_cfg_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
 static void
-rxf_cb_enabled(void *arg, int status)
+bna_bfi_rss_enable(struct bna_rxf *rxf)
 {
-	struct bna_rxf *rxf = (struct bna_rxf *)arg;
+	struct bfi_enet_enable_req *req = &rxf->bfi_enet_cmd.req;
 
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-	bfa_fsm_send_event(rxf, RXF_E_STARTED);
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RSS_ENABLE_REQ, 0, rxf->rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_enable_req)));
+	req->enable = rxf->rss_status;
+	bfa_msgq_cmd_set(&rxf->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_enable_req), &req->mh);
+	bfa_msgq_cmd_post(&rxf->rx->bna->msgq, &rxf->msgq_cmd);
 }
 
-static void
-rxf_disable(struct bna_rxf *rxf)
+/* Find the mcast MAC already in CAM (active or pending-delete lists) */
+static struct bna_mac *
+bna_rxf_mcmac_get(struct bna_rxf *rxf, u8 *mac_addr)
 {
-	if (rxf->rxf_oper_state == BNA_RXF_OPER_STATE_PAUSED)
-		bfa_fsm_send_event(rxf, RXF_E_STOPPED);
-	else
-		rxf->rxf_flags &= ~BNA_RXF_FL_RXF_ENABLED;
-		__rxf_disable(rxf);
-}
+	struct bna_mac *mac;
+	struct list_head *qe;
 
-static void
-rxf_cb_disabled(void *arg, int status)
-{
-	struct bna_rxf *rxf = (struct bna_rxf *)arg;
+	list_for_each(qe, &rxf->mcast_active_q) {
+		mac = (struct bna_mac *)qe;
+		if (BNA_MAC_IS_EQUAL(&mac->addr, mac_addr))
+			return mac;
+	}
 
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-	bfa_fsm_send_event(rxf, RXF_E_STOPPED);
+	list_for_each(qe, &rxf->mcast_pending_del_q) {
+		mac = (struct bna_mac *)qe;
+		if (BNA_MAC_IS_EQUAL(&mac->addr, mac_addr))
+			return mac;
+	}
+
+	return NULL;
 }
 
-void
-rxf_cb_cam_fltr_mbox_cmd(void *arg, int status)
+static struct bna_mcam_handle *
+bna_rxf_mchandle_get(struct bna_rxf *rxf, int handle)
 {
-	struct bna_rxf *rxf = (struct bna_rxf *)arg;
+	struct bna_mcam_handle *mchandle;
+	struct list_head *qe;
 
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
+	list_for_each(qe, &rxf->mcast_handle_q) {
+		mchandle = (struct bna_mcam_handle *)qe;
+		if (mchandle->handle == handle)
+			return mchandle;
+	}
 
-	bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_RESP);
+	return NULL;
 }
 
 static void
-bna_rxf_cb_stats_cleared(void *arg, int status)
+bna_rxf_mchandle_attach(struct bna_rxf *rxf, u8 *mac_addr, int handle)
 {
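+	/* A fw handle may be shared by several macs; refcnt tracks users */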
-	struct bna_rxf *rxf = (struct bna_rxf *)arg;
+	struct bna_mac *mcmac;
+	struct bna_mcam_handle *mchandle;
 
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-	bfa_fsm_send_event(rxf, RXF_E_STAT_CLEARED);
+	mcmac = bna_rxf_mcmac_get(rxf, mac_addr);
+	mchandle = bna_rxf_mchandle_get(rxf, handle);
+	if (mchandle == NULL) {
+		mchandle = bna_mcam_mod_handle_get(&rxf->rx->bna->mcam_mod);
+		mchandle->handle = handle;
+		mchandle->refcnt = 0;
+		list_add_tail(&mchandle->qe, &rxf->mcast_handle_q);
+	}
+	mchandle->refcnt++;
+	mcmac->handle = mchandle;
 }
 
-void
-rxf_cam_mbox_cmd(struct bna_rxf *rxf, u8 cmd,
-		const struct bna_mac *mac_addr)
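+/* Drop a ref on @mac's fw handle; returns 1 if a fw delete was posted */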
+static int
+bna_rxf_mcast_del(struct bna_rxf *rxf, struct bna_mac *mac,
+		enum bna_cleanup_type cleanup)
 {
-	struct bfi_ll_mac_addr_req req;
-
-	bfi_h2i_set(req.mh, BFI_MC_LL, cmd, 0);
+	struct bna_mcam_handle *mchandle;
+	int ret = 0;
 
-	req.rxf_id = rxf->rxf_id;
-	memcpy(&req.mac_addr, (void *)&mac_addr->addr, ETH_ALEN);
+	mchandle = mac->handle;
+	if (mchandle == NULL)
+		return ret;
 
-	bna_mbox_qe_fill(&rxf->mbox_qe, &req, sizeof(req),
-				rxf_cb_cam_fltr_mbox_cmd, rxf);
+	mchandle->refcnt--;
+	if (mchandle->refcnt == 0) {
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_del_req(rxf, mchandle->handle);
+			ret = 1;
+		}
+		list_del(&mchandle->qe);
+		bfa_q_qe_init(&mchandle->qe);
+		bna_mcam_mod_handle_put(&rxf->rx->bna->mcam_mod, mchandle);
+	}
+	mac->handle = NULL;
 
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
+	return ret;
 }
 
 static int
-rxf_process_packet_filter_mcast(struct bna_rxf *rxf)
+bna_rxf_mcast_cfg_apply(struct bna_rxf *rxf)
 {
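+	/* Issue at most one fw command per call: deletes drain before adds */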
 	struct bna_mac *mac = NULL;
 	struct list_head *qe;
+	int ret;
+
+	/* Delete multicast entries previously added */
+	while (!list_empty(&rxf->mcast_pending_del_q)) {
+		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		ret = bna_rxf_mcast_del(rxf, mac, BNA_HARD_CLEANUP);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+		if (ret)
+			return ret;
+	}
 
 	/* Add multicast entries */
 	if (!list_empty(&rxf->mcast_pending_add_q)) {
 		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
 		bfa_q_qe_init(qe);
 		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_ADD_REQ, mac);
 		list_add_tail(&mac->qe, &rxf->mcast_active_q);
+		bna_bfi_mcast_add_req(rxf, mac);
 		return 1;
 	}
 
-	/* Delete multicast entries previousely added */
-	if (!list_empty(&rxf->mcast_pending_del_q)) {
+	return 0;
+}
+
+static int
+bna_rxf_vlan_cfg_apply(struct bna_rxf *rxf)
+{
+	u8 vlan_pending_bitmask;
+	int block_idx = 0;
+
+	if (rxf->vlan_pending_bitmask) {
+		vlan_pending_bitmask = rxf->vlan_pending_bitmask;
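+		/* Find the lowest-numbered block with a pending update */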
+		while (!(vlan_pending_bitmask & 0x1)) {
+			block_idx++;
+			vlan_pending_bitmask >>= 1;
+		}
+		rxf->vlan_pending_bitmask &= ~(1 << block_idx);
+		bna_bfi_rx_vlan_filter_set(rxf, block_idx);
+		return 1;
+	}
+
+	return 0;
+}
+
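+/* Undo mcast config; hard cleanup also posts fw delete requests */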
+static int
+bna_rxf_mcast_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct list_head *qe;
+	struct bna_mac *mac;
+	int ret;
+
+	/* Throw away pending-delete mcast entries */
+	while (!list_empty(&rxf->mcast_pending_del_q)) {
 		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
 		bfa_q_qe_init(qe);
 		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_DEL_REQ, mac);
+		ret = bna_rxf_mcast_del(rxf, mac, cleanup);
 		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-		return 1;
+		if (ret)
+			return ret;
+	}
+
+	/* Move active mcast entries to pending_add_q */
+	while (!list_empty(&rxf->mcast_active_q)) {
+		bfa_q_deq(&rxf->mcast_active_q, &qe);
+		bfa_q_qe_init(qe);
+		list_add_tail(qe, &rxf->mcast_pending_add_q);
+		mac = (struct bna_mac *)qe;
+		if (bna_rxf_mcast_del(rxf, mac, cleanup))
+			return 1;
 	}
 
 	return 0;
 }
 
 static int
-rxf_process_packet_filter_vlan(struct bna_rxf *rxf)
+bna_rxf_rss_cfg_apply(struct bna_rxf *rxf)
 {
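+	/* Send pending RSS pieces in order: RIT, hash config, then enable */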
-	/* Apply the VLAN filter */
-	if (rxf->rxf_flags & BNA_RXF_FL_VLAN_CONFIG_PENDING) {
-		rxf->rxf_flags &= ~BNA_RXF_FL_VLAN_CONFIG_PENDING;
-		if (!(rxf->rxmode_active & BNA_RXMODE_PROMISC))
-			__rxf_vlan_filter_set(rxf, rxf->vlan_filter_status);
-	}
+	if (rxf->rss_pending) {
+		if (rxf->rss_pending & BNA_RSS_F_RIT_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_RIT_PENDING;
+			bna_bfi_rit_cfg(rxf);
+			return 1;
+		}
+
+		if (rxf->rss_pending & BNA_RSS_F_CFG_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_CFG_PENDING;
+			bna_bfi_rss_cfg(rxf);
+			return 1;
+		}
 
-	/* Apply RSS configuration */
-	if (rxf->rxf_flags & BNA_RXF_FL_RSS_CONFIG_PENDING) {
-		rxf->rxf_flags &= ~BNA_RXF_FL_RSS_CONFIG_PENDING;
-		if (rxf->rss_status == BNA_STATUS_T_DISABLED) {
-			/* RSS is being disabled */
-			rxf->ctrl_flags &= ~BNA_RXF_CF_RSS_ENABLE;
-			__rxf_rit_set(rxf);
-			__rxf_config_set(rxf);
-		} else {
-			/* RSS is being enabled or reconfigured */
-			rxf->ctrl_flags |= BNA_RXF_CF_RSS_ENABLE;
-			__rxf_rit_set(rxf);
-			__rxf_config_set(rxf);
+		if (rxf->rss_pending & BNA_RSS_F_STATUS_PENDING) {
+			rxf->rss_pending &= ~BNA_RSS_F_STATUS_PENDING;
+			bna_bfi_rss_enable(rxf);
+			return 1;
 		}
 	}
 
 	return 0;
 }
 
-/**
- * Processes pending ucast, mcast entry addition/deletion and issues mailbox
- * command. Also processes pending filter configuration - promiscuous mode,
- * default mode, allmutli mode and issues mailbox command or directly applies
- * to h/w
- */
 static int
-rxf_process_packet_filter(struct bna_rxf *rxf)
+bna_rxf_cfg_apply(struct bna_rxf *rxf)
 {
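+	/* Apply the next pending change; returns 1 while fw commands remain */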
-	/* Set the default MAC first */
-	if (rxf->ucast_pending_set > 0) {
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_SET_REQ,
-				rxf->ucast_active_mac);
-		rxf->ucast_pending_set--;
+	if (bna_rxf_ucast_cfg_apply(rxf))
 		return 1;
-	}
 
-	if (rxf_process_packet_filter_ucast(rxf))
+	if (bna_rxf_mcast_cfg_apply(rxf))
 		return 1;
 
-	if (rxf_process_packet_filter_mcast(rxf))
+	if (bna_rxf_promisc_cfg_apply(rxf))
 		return 1;
 
-	if (rxf_process_packet_filter_promisc(rxf))
+	if (bna_rxf_allmulti_cfg_apply(rxf))
 		return 1;
 
-	if (rxf_process_packet_filter_allmulti(rxf))
+	if (bna_rxf_vlan_cfg_apply(rxf))
 		return 1;
 
-	if (rxf_process_packet_filter_vlan(rxf))
+	if (bna_rxf_vlan_strip_cfg_apply(rxf))
 		return 1;
 
-	return 0;
-}
-
-static int
-rxf_clear_packet_filter_mcast(struct bna_rxf *rxf)
-{
-	struct bna_mac *mac = NULL;
-	struct list_head *qe;
-
-	/* 3. delete pending mcast entries */
-	if (!list_empty(&rxf->mcast_pending_del_q)) {
-		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_DEL_REQ, mac);
-		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	if (bna_rxf_rss_cfg_apply(rxf))
 		return 1;
-	}
-
-	/* 4. clear active mcast entries; move them to pending_add_q */
-	if (!list_empty(&rxf->mcast_active_q)) {
-		bfa_q_deq(&rxf->mcast_active_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_DEL_REQ, mac);
-		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
-		return 1;
-	}
 
 	return 0;
 }
 
-/**
- * In the rxf stop path, processes pending ucast/mcast delete queue and issues
- * the mailbox command. Moves the active ucast/mcast entries to pending add q,
- * so that they are added to CAM again in the rxf start path. Moves the current
- * filter settings - promiscuous, default, allmutli - to pending filter
- * configuration
- */
+/* Software-initiated reset: hard-clears every active filter in fw */
 static int
-rxf_clear_packet_filter(struct bna_rxf *rxf)
+bna_rxf_fltr_clear(struct bna_rxf *rxf)
 {
-	if (rxf_clear_packet_filter_ucast(rxf))
+	if (bna_rxf_ucast_cfg_reset(rxf, BNA_HARD_CLEANUP))
 		return 1;
 
-	if (rxf_clear_packet_filter_mcast(rxf))
+	if (bna_rxf_mcast_cfg_reset(rxf, BNA_HARD_CLEANUP))
 		return 1;
 
-	/* 5. clear active default MAC in the CAM */
-	if (rxf->ucast_pending_set > 0)
-		rxf->ucast_pending_set = 0;
-
-	if (rxf_clear_packet_filter_promisc(rxf))
+	if (bna_rxf_promisc_cfg_reset(rxf, BNA_HARD_CLEANUP))
 		return 1;
 
-	if (rxf_clear_packet_filter_allmulti(rxf))
+	if (bna_rxf_allmulti_cfg_reset(rxf, BNA_HARD_CLEANUP))
 		return 1;
 
 	return 0;
 }
 
 static void
-rxf_reset_packet_filter_mcast(struct bna_rxf *rxf)
+bna_rxf_cfg_reset(struct bna_rxf *rxf)
 {
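+	/* Soft cleanup: move all active config back to pending, no fw I/O */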
+	bna_rxf_ucast_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_mcast_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_promisc_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_allmulti_cfg_reset(rxf, BNA_SOFT_CLEANUP);
+	bna_rxf_vlan_cfg_soft_reset(rxf);
+	bna_rxf_rss_cfg_soft_reset(rxf);
+}
+
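+/* Fill the RIT with one CQ id per rx path, in rxp_q order */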
+static void
+bna_rit_init(struct bna_rxf *rxf, int rit_size)
+{
+	struct bna_rx *rx = rxf->rx;
+	struct bna_rxp *rxp;
 	struct list_head *qe;
-	struct bna_mac *mac;
+	int offset = 0;
 
-	/* 3. Move active mcast entries to pending_add_q */
-	while (!list_empty(&rxf->mcast_active_q)) {
-		bfa_q_deq(&rxf->mcast_active_q, &qe);
-		bfa_q_qe_init(qe);
-		list_add_tail(qe, &rxf->mcast_pending_add_q);
+	rxf->rit_size = rit_size;
+	list_for_each(qe, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe;
+		rxf->rit[offset] = rxp->cq.ccb->id;
+		offset++;
 	}
 
-	/* 4. Throw away delete pending mcast entries */
-	while (!list_empty(&rxf->mcast_pending_del_q)) {
-		bfa_q_deq(&rxf->mcast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-	}
 }
 
-/**
- * In the rxf fail path, throws away the ucast/mcast entries pending for
- * deletion, moves all active ucast/mcast entries to pending queue so that
- * they are added back to CAM in the rxf start path. Also moves the current
- * filter configuration to pending filter configuration.
- */
-static void
-rxf_reset_packet_filter(struct bna_rxf *rxf)
+void
+bna_bfi_rxf_cfg_rsp(struct bna_rxf *rxf, struct bfi_msgq_mhdr *msghdr)
 {
-	rxf_reset_packet_filter_ucast(rxf);
-
-	rxf_reset_packet_filter_mcast(rxf);
-
-	/* 5. Turn off ucast set flag */
-	rxf->ucast_pending_set = 0;
+	bfa_fsm_send_event(rxf, RXF_E_FW_RESP);
+}
 
-	rxf_reset_packet_filter_promisc(rxf);
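+/* mcast add response: attach the returned fw handle, resume the FSM */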
+void
+bna_bfi_rxf_mcast_add_rsp(struct bna_rxf *rxf,
+			struct bfi_msgq_mhdr *msghdr)
+{
+	struct bfi_enet_mcast_add_req *req =
+		&rxf->bfi_enet_cmd.mcast_add_req;
+	struct bfi_enet_mcast_add_rsp *rsp =
+		(struct bfi_enet_mcast_add_rsp *)msghdr;
 
-	rxf_reset_packet_filter_allmulti(rxf);
+	bna_rxf_mchandle_attach(rxf, (u8 *)&req->mac_addr,
+		ntohs(rsp->handle));
+	bfa_fsm_send_event(rxf, RXF_E_FW_RESP);
 }
 
 static void
 bna_rxf_init(struct bna_rxf *rxf,
 		struct bna_rx *rx,
-		struct bna_rx_config *q_config)
+		struct bna_rx_config *q_config,
+		struct bna_res_info *res_info)
 {
-	struct list_head *qe;
-	struct bna_rxp *rxp;
-
-	/* rxf_id is initialized during rx_mod init */
 	rxf->rx = rx;
 
 	INIT_LIST_HEAD(&rxf->ucast_pending_add_q);
 	INIT_LIST_HEAD(&rxf->ucast_pending_del_q);
 	rxf->ucast_pending_set = 0;
+	rxf->ucast_active_set = 0;
 	INIT_LIST_HEAD(&rxf->ucast_active_q);
-	rxf->ucast_active_mac = NULL;
+	rxf->ucast_pending_mac = NULL;
 
 	INIT_LIST_HEAD(&rxf->mcast_pending_add_q);
 	INIT_LIST_HEAD(&rxf->mcast_pending_del_q);
 	INIT_LIST_HEAD(&rxf->mcast_active_q);
+	INIT_LIST_HEAD(&rxf->mcast_handle_q);
 
-	bfa_q_qe_init(&rxf->mbox_qe.qe);
-
-	if (q_config->vlan_strip_status == BNA_STATUS_T_ENABLED)
-		rxf->ctrl_flags |= BNA_RXF_CF_VLAN_STRIP;
-
-	rxf->rxf_oper_state = (q_config->paused) ?
-		BNA_RXF_OPER_STATE_PAUSED : BNA_RXF_OPER_STATE_RUNNING;
+	if (q_config->paused)
+		rxf->flags |= BNA_RXF_F_PAUSED;
 
-	bna_rxf_adv_init(rxf, rx, q_config);
+	rxf->rit = (u8 *)
+		res_info[BNA_RX_RES_MEM_T_RIT].res_u.mem_info.mdl[0].kva;
+	bna_rit_init(rxf, q_config->num_paths);
 
-	rxf->rit_segment = bna_rit_mod_seg_get(&rxf->rx->bna->rit_mod,
-					q_config->num_paths);
-
-	list_for_each(qe, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe;
-		if (q_config->rxp_type == BNA_RXP_SINGLE)
-			rxf->mcast_rxq_id = rxp->rxq.single.only->rxq_id;
-		else
-			rxf->mcast_rxq_id = rxp->rxq.slr.large->rxq_id;
-		break;
+	rxf->rss_status = q_config->rss_status;
+	if (rxf->rss_status == BNA_STATUS_T_ENABLED) {
+		rxf->rss_cfg = q_config->rss_config;
+		rxf->rss_pending |= BNA_RSS_F_CFG_PENDING;
+		rxf->rss_pending |= BNA_RSS_F_RIT_PENDING;
+		rxf->rss_pending |= BNA_RSS_F_STATUS_PENDING;
 	}
 
 	rxf->vlan_filter_status = BNA_STATUS_T_DISABLED;
 	memset(rxf->vlan_filter_table, 0,
-			(sizeof(u32) * ((BFI_MAX_VLAN + 1) / 32)));
+			(sizeof(u32) * ((BFI_ENET_VLAN_ID_MAX + 1) / 32)));
+	rxf->vlan_filter_table[0] |= 1; /* for pure priority tagged frames */
+	rxf->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;
 
-	/* Set up VLAN 0 for pure priority tagged packets */
-	rxf->vlan_filter_table[0] |= 1;
+	rxf->vlan_strip_status = q_config->vlan_strip_status;
 
 	bfa_fsm_set_state(rxf, bna_rxf_sm_stopped);
 }
@@ -1441,13 +777,10 @@ bna_rxf_init(struct bna_rxf *rxf,
 static void
 bna_rxf_uninit(struct bna_rxf *rxf)
 {
-	struct bna *bna = rxf->rx->bna;
 	struct bna_mac *mac;
 
-	bna_rit_mod_seg_put(&rxf->rx->bna->rit_mod, rxf->rit_segment);
-	rxf->rit_segment = NULL;
-
 	rxf->ucast_pending_set = 0;
+	rxf->ucast_active_set = 0;
 
 	while (!list_empty(&rxf->ucast_pending_add_q)) {
 		bfa_q_deq(&rxf->ucast_pending_add_q, &mac);
@@ -1455,11 +788,11 @@ bna_rxf_uninit(struct bna_rxf *rxf)
 		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
 	}
 
-	if (rxf->ucast_active_mac) {
-		bfa_q_qe_init(&rxf->ucast_active_mac->qe);
+	if (rxf->ucast_pending_mac) {
+		bfa_q_qe_init(&rxf->ucast_pending_mac->qe);
 		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod,
-			rxf->ucast_active_mac);
-		rxf->ucast_active_mac = NULL;
+			rxf->ucast_pending_mac);
+		rxf->ucast_pending_mac = NULL;
 	}
 
 	while (!list_empty(&rxf->mcast_pending_add_q)) {
@@ -1468,39 +801,25 @@ bna_rxf_uninit(struct bna_rxf *rxf)
 		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
 	}
 
-	/* Turn off pending promisc mode */
-	if (is_promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* system promisc state should be pending */
-		BUG_ON(!(bna->rxf_promisc_id == rxf->rxf_id));
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		 bna->rxf_promisc_id = BFI_MAX_RXF;
-	}
-	/* Promisc mode should not be active */
-	BUG_ON(rxf->rxmode_active & BNA_RXMODE_PROMISC);
+	rxf->rxmode_pending = 0;
+	rxf->rxmode_pending_bitmask = 0;
+	if (rxf->rx->bna->promisc_rid == rxf->rx->rid)
+		rxf->rx->bna->promisc_rid = BFI_INVALID_RID;
+	if (rxf->rx->bna->default_mode_rid == rxf->rx->rid)
+		rxf->rx->bna->default_mode_rid = BFI_INVALID_RID;
 
-	/* Turn off pending all-multi mode */
-	if (is_allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-	}
-	/* Allmulti mode should not be active */
-	BUG_ON(rxf->rxmode_active & BNA_RXMODE_ALLMULTI);
+	rxf->rss_pending = 0;
+	rxf->vlan_strip_pending = false;
+
+	rxf->flags = 0;
 
 	rxf->rx = NULL;
 }
 
 static void
-bna_rx_cb_rxf_started(struct bna_rx *rx, enum bna_cb_status status)
+bna_rx_cb_rxf_started(struct bna_rx *rx)
 {
 	bfa_fsm_send_event(rx, RX_E_RXF_STARTED);
-	if (rx->rxf.rxf_id < 32)
-		rx->bna->rx_mod.rxf_bmap[0] |= ((u32)1 << rx->rxf.rxf_id);
-	else
-		rx->bna->rx_mod.rxf_bmap[1] |= ((u32)
-				1 << (rx->rxf.rxf_id - 32));
 }
 
 static void
@@ -1508,19 +827,13 @@ bna_rxf_start(struct bna_rxf *rxf)
 {
 	rxf->start_cbfn = bna_rx_cb_rxf_started;
 	rxf->start_cbarg = rxf->rx;
-	rxf->rxf_flags &= ~BNA_RXF_FL_FAILED;
 	bfa_fsm_send_event(rxf, RXF_E_START);
 }
 
 static void
-bna_rx_cb_rxf_stopped(struct bna_rx *rx, enum bna_cb_status status)
+bna_rx_cb_rxf_stopped(struct bna_rx *rx)
 {
 	bfa_fsm_send_event(rx, RX_E_RXF_STOPPED);
-	if (rx->rxf.rxf_id < 32)
-		rx->bna->rx_mod.rxf_bmap[0] &= ~(u32)1 << rx->rxf.rxf_id;
-	else
-		rx->bna->rx_mod.rxf_bmap[1] &= ~(u32)
-				1 << (rx->rxf.rxf_id - 32);
 }
 
 static void
@@ -1534,68 +847,46 @@ bna_rxf_stop(struct bna_rxf *rxf)
 static void
 bna_rxf_fail(struct bna_rxf *rxf)
 {
-	rxf->rxf_flags |= BNA_RXF_FL_FAILED;
 	bfa_fsm_send_event(rxf, RXF_E_FAIL);
 }
 
-int
-bna_rxf_state_get(struct bna_rxf *rxf)
-{
-	return bfa_sm_to_state(rxf_sm_table, rxf->fsm);
-}
-
 enum bna_cb_status
 bna_rx_ucast_set(struct bna_rx *rx, u8 *ucmac,
-		 void (*cbfn)(struct bnad *, struct bna_rx *,
-			      enum bna_cb_status))
+		 void (*cbfn)(struct bnad *, struct bna_rx *))
 {
 	struct bna_rxf *rxf = &rx->rxf;
 
-	if (rxf->ucast_active_mac == NULL) {
-		rxf->ucast_active_mac =
+	if (rxf->ucast_pending_mac == NULL) {
+		rxf->ucast_pending_mac =
 				bna_ucam_mod_mac_get(&rxf->rx->bna->ucam_mod);
-		if (rxf->ucast_active_mac == NULL)
+		if (rxf->ucast_pending_mac == NULL)
 			return BNA_CB_UCAST_CAM_FULL;
-		bfa_q_qe_init(&rxf->ucast_active_mac->qe);
+		bfa_q_qe_init(&rxf->ucast_pending_mac->qe);
 	}
 
-	memcpy(rxf->ucast_active_mac->addr, ucmac, ETH_ALEN);
-	rxf->ucast_pending_set++;
+	memcpy(rxf->ucast_pending_mac->addr, ucmac, ETH_ALEN);
+	rxf->ucast_pending_set = 1;
 	rxf->cam_fltr_cbfn = cbfn;
 	rxf->cam_fltr_cbarg = rx->bna->bnad;
 
-	bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
 
 	return BNA_CB_SUCCESS;
 }
 
 enum bna_cb_status
 bna_rx_mcast_add(struct bna_rx *rx, u8 *addr,
-		 void (*cbfn)(struct bnad *, struct bna_rx *,
-			      enum bna_cb_status))
+		 void (*cbfn)(struct bnad *, struct bna_rx *))
 {
 	struct bna_rxf *rxf = &rx->rxf;
-	struct list_head	*qe;
 	struct bna_mac *mac;
 
-	/* Check if already added */
-	list_for_each(qe, &rxf->mcast_active_q) {
-		mac = (struct bna_mac *)qe;
-		if (BNA_MAC_IS_EQUAL(mac->addr, addr)) {
-			if (cbfn)
-				(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
-			return BNA_CB_SUCCESS;
-		}
-	}
-
-	/* Check if pending addition */
-	list_for_each(qe, &rxf->mcast_pending_add_q) {
-		mac = (struct bna_mac *)qe;
-		if (BNA_MAC_IS_EQUAL(mac->addr, addr)) {
-			if (cbfn)
-				(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
-			return BNA_CB_SUCCESS;
-		}
+	/* Check if already added or pending addition */
+	if (bna_mac_find(&rxf->mcast_active_q, addr) ||
+		bna_mac_find(&rxf->mcast_pending_add_q, addr)) {
+		if (cbfn)
+			cbfn(rx->bna->bnad, rx);
+		return BNA_CB_SUCCESS;
 	}
 
 	mac = bna_mcam_mod_mac_get(&rxf->rx->bna->mcam_mod);
@@ -1608,25 +899,20 @@ bna_rx_mcast_add(struct bna_rx *rx, u8 *addr,
 	rxf->cam_fltr_cbfn = cbfn;
 	rxf->cam_fltr_cbarg = rx->bna->bnad;
 
-	bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
 
 	return BNA_CB_SUCCESS;
 }
 
 enum bna_cb_status
 bna_rx_mcast_listset(struct bna_rx *rx, int count, u8 *mclist,
-		     void (*cbfn)(struct bnad *, struct bna_rx *,
-				  enum bna_cb_status))
+		     void (*cbfn)(struct bnad *, struct bna_rx *))
 {
 	struct bna_rxf *rxf = &rx->rxf;
 	struct list_head list_head;
 	struct list_head *qe;
 	u8 *mcaddr;
 	struct bna_mac *mac;
-	struct bna_mac *mac1;
-	int skip;
-	int delete;
-	int need_hw_config = 0;
 	int i;
 
 	/* Allocate nodes */
@@ -1642,108 +928,33 @@ bna_rx_mcast_listset(struct bna_rx *rx, int count, u8 *mclist,
 		mcaddr += ETH_ALEN;
 	}
 
-	/* Schedule for addition */
-	while (!list_empty(&list_head)) {
-		bfa_q_deq(&list_head, &qe);
-		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-
-		skip = 0;
-
-		/* Skip if already added */
-		list_for_each(qe, &rxf->mcast_active_q) {
-			mac1 = (struct bna_mac *)qe;
-			if (BNA_MAC_IS_EQUAL(mac1->addr, mac->addr)) {
-				bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod,
-							mac);
-				skip = 1;
-				break;
-			}
-		}
-
-		if (skip)
-			continue;
-
-		/* Skip if pending addition */
-		list_for_each(qe, &rxf->mcast_pending_add_q) {
-			mac1 = (struct bna_mac *)qe;
-			if (BNA_MAC_IS_EQUAL(mac1->addr, mac->addr)) {
-				bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod,
-							mac);
-				skip = 1;
-				break;
-			}
-		}
-
-		if (skip)
-			continue;
-
-		need_hw_config = 1;
-		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
-	}
-
-	/**
-	 * Delete the entries that are in the pending_add_q but not
-	 * in the new list
-	 */
+	/* Purge the pending_add_q */
 	while (!list_empty(&rxf->mcast_pending_add_q)) {
 		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
+		bfa_q_qe_init(qe);
 		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-		for (i = 0, mcaddr = mclist, delete = 1; i < count; i++) {
-			if (BNA_MAC_IS_EQUAL(mcaddr, mac->addr)) {
-				delete = 0;
-				break;
-			}
-			mcaddr += ETH_ALEN;
-		}
-		if (delete)
-			bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
-		else
-			list_add_tail(&mac->qe, &list_head);
-	}
-	while (!list_empty(&list_head)) {
-		bfa_q_deq(&list_head, &qe);
-		mac = (struct bna_mac *)qe;
-		bfa_q_qe_init(&mac->qe);
-		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
 	}
 
-	/**
-	 * Schedule entries for deletion that are in the active_q but not
-	 * in the new list
-	 */
+	/* Schedule active_q entries for deletion */
 	while (!list_empty(&rxf->mcast_active_q)) {
 		bfa_q_deq(&rxf->mcast_active_q, &qe);
 		mac = (struct bna_mac *)qe;
 		bfa_q_qe_init(&mac->qe);
-		for (i = 0, mcaddr = mclist, delete = 1; i < count; i++) {
-			if (BNA_MAC_IS_EQUAL(mcaddr, mac->addr)) {
-				delete = 0;
-				break;
-			}
-			mcaddr += ETH_ALEN;
-		}
-		if (delete) {
-			list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
-			need_hw_config = 1;
-		} else {
-			list_add_tail(&mac->qe, &list_head);
-		}
+		list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
 	}
+
+	/* Add the new entries */
 	while (!list_empty(&list_head)) {
 		bfa_q_deq(&list_head, &qe);
 		mac = (struct bna_mac *)qe;
 		bfa_q_qe_init(&mac->qe);
-		list_add_tail(&mac->qe, &rxf->mcast_active_q);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_add_q);
 	}
 
-	if (need_hw_config) {
-		rxf->cam_fltr_cbfn = cbfn;
-		rxf->cam_fltr_cbarg = rx->bna->bnad;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-	} else if (cbfn)
-		(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
+	rxf->cam_fltr_cbfn = cbfn;
+	rxf->cam_fltr_cbarg = rx->bna->bnad;
+	bfa_fsm_send_event(rxf, RXF_E_CONFIG);
 
 	return BNA_CB_SUCCESS;
 
@@ -1762,13 +973,14 @@ void
 bna_rx_vlan_add(struct bna_rx *rx, int vlan_id)
 {
 	struct bna_rxf *rxf = &rx->rxf;
-	int index = (vlan_id >> 5);
-	int bit = (1 << (vlan_id & 0x1F));
+	int index = (vlan_id >> BFI_VLAN_WORD_SHIFT);
+	int bit = (1 << (vlan_id & BFI_VLAN_WORD_MASK));
+	int group_id = (vlan_id >> BFI_VLAN_BLOCK_SHIFT);
 
 	rxf->vlan_filter_table[index] |= bit;
 	if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED) {
-		rxf->rxf_flags |= BNA_RXF_FL_VLAN_CONFIG_PENDING;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
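+		/* mark this block dirty so cfg_apply re-sends it to fw */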
+		rxf->vlan_pending_bitmask |= (1 << group_id);
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
 	}
 }
 
@@ -1776,34 +988,403 @@ void
 bna_rx_vlan_del(struct bna_rx *rx, int vlan_id)
 {
 	struct bna_rxf *rxf = &rx->rxf;
-	int index = (vlan_id >> 5);
-	int bit = (1 << (vlan_id & 0x1F));
+	int index = (vlan_id >> BFI_VLAN_WORD_SHIFT);
+	int bit = (1 << (vlan_id & BFI_VLAN_WORD_MASK));
+	int group_id = (vlan_id >> BFI_VLAN_BLOCK_SHIFT);
 
 	rxf->vlan_filter_table[index] &= ~bit;
 	if (rxf->vlan_filter_status == BNA_STATUS_T_ENABLED) {
-		rxf->rxf_flags |= BNA_RXF_FL_VLAN_CONFIG_PENDING;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
+		rxf->vlan_pending_bitmask |= (1 << group_id);
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	}
+}
+
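+/* Apply one pending ucast change; returns 1 if a fw command was posted */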
+static int
+bna_rxf_ucast_cfg_apply(struct bna_rxf *rxf)
+{
+	struct bna_mac *mac = NULL;
+	struct list_head *qe;
+
+	/* Delete MAC addresses previously added */
+	if (!list_empty(&rxf->ucast_pending_del_q)) {
+		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		bna_bfi_ucast_req(rxf, mac, BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+		return 1;
+	}
+
+	/* Set default unicast MAC */
+	if (rxf->ucast_pending_set) {
+		rxf->ucast_pending_set = 0;
+		memcpy(rxf->ucast_active_mac.addr,
+			rxf->ucast_pending_mac->addr, ETH_ALEN);
+		rxf->ucast_active_set = 1;
+		bna_bfi_ucast_req(rxf, &rxf->ucast_active_mac,
+			BFI_ENET_H2I_MAC_UCAST_SET_REQ);
+		return 1;
+	}
+
+	/* Add additional MAC entries */
+	if (!list_empty(&rxf->ucast_pending_add_q)) {
+		bfa_q_deq(&rxf->ucast_pending_add_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		list_add_tail(&mac->qe, &rxf->ucast_active_q);
+		bna_bfi_ucast_req(rxf, mac, BFI_ENET_H2I_MAC_UCAST_ADD_REQ);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_ucast_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct list_head *qe;
+	struct bna_mac *mac;
+
+	/* Throw away pending-delete ucast entries */
+	while (!list_empty(&rxf->ucast_pending_del_q)) {
+		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
+		bfa_q_qe_init(qe);
+		mac = (struct bna_mac *)qe;
+		if (cleanup == BNA_SOFT_CLEANUP)
+			bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+		else {
+			bna_bfi_ucast_req(rxf, mac,
+				BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+			bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
+			return 1;
+		}
+	}
+
+	/* Move active ucast entries to pending_add_q */
+	while (!list_empty(&rxf->ucast_active_q)) {
+		bfa_q_deq(&rxf->ucast_active_q, &qe);
+		bfa_q_qe_init(qe);
+		list_add_tail(qe, &rxf->ucast_pending_add_q);
+		if (cleanup == BNA_HARD_CLEANUP) {
+			mac = (struct bna_mac *)qe;
+			bna_bfi_ucast_req(rxf, mac,
+				BFI_ENET_H2I_MAC_UCAST_DEL_REQ);
+			return 1;
+		}
+	}
+
+	if (rxf->ucast_active_set) {
+		rxf->ucast_pending_set = 1;
+		rxf->ucast_active_set = 0;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_ucast_req(rxf, &rxf->ucast_active_mac,
+				BFI_ENET_H2I_MAC_UCAST_CLR_REQ);
+			return 1;
+		}
 	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_cfg_apply(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+
+	/* Enable/disable promiscuous mode */
+	if (is_promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move promisc configuration from pending -> active */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active |= BNA_RXMODE_PROMISC;
+		bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_ENABLED);
+		return 1;
+	} else if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move promisc configuration from pending -> active */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		bna->promisc_rid = BFI_INVALID_RID;
+		bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	struct bna *bna = rxf->rx->bna;
+
+	/* Clear pending promisc mode disable */
+	if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		bna->promisc_rid = BFI_INVALID_RID;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+			return 1;
+		}
+	}
+
+	/* Move promisc mode config from active -> pending */
+	if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
+		promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_rx_promisc_req(rxf, BNA_STATUS_T_DISABLED);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_allmulti_cfg_apply(struct bna_rxf *rxf)
+{
+	/* Enable/disable allmulti mode */
+	if (is_allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		/* move allmulti configuration from pending -> active */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active |= BNA_RXMODE_ALLMULTI;
+		bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_DISABLED);
+		return 1;
+	} else if (is_allmulti_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* move allmulti configuration from pending -> active */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_allmulti_cfg_reset(struct bna_rxf *rxf, enum bna_cleanup_type cleanup)
+{
+	/* Clear pending allmulti mode disable */
+	if (is_allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask)) {
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+			return 1;
+		}
+	}
+
+	/* Move allmulti mode config from active -> pending */
+	if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
+		allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
+		if (cleanup == BNA_HARD_CLEANUP) {
+			bna_bfi_mcast_filter_req(rxf, BNA_STATUS_T_ENABLED);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bna_rxf_promisc_enable(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+	int ret = 0;
+
+	if (is_promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(rxf->rxmode_active & BNA_RXMODE_PROMISC)) {
+		/* Do nothing if pending enable or already enabled */
+	} else if (is_promisc_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending disable command */
+		promisc_inactive(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask);
+	} else {
+		/* Schedule enable */
+		promisc_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		bna->promisc_rid = rxf->rx->rid;
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_promisc_disable(struct bna_rxf *rxf)
+{
+	struct bna *bna = rxf->rx->bna;
+	int ret = 0;
+
+	if (is_promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(!(rxf->rxmode_active & BNA_RXMODE_PROMISC))) {
+		/* Do nothing if pending disable or already disabled */
+	} else if (is_promisc_enable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending enable command */
+		promisc_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		bna->promisc_rid = BFI_INVALID_RID;
+	} else if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
+		/* Schedule disable */
+		promisc_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_allmulti_enable(struct bna_rxf *rxf)
+{
+	int ret = 0;
+
+	if (is_allmulti_enable(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask) ||
+			(rxf->rxmode_active & BNA_RXMODE_ALLMULTI)) {
+		/* Do nothing if pending enable or already enabled */
+	} else if (is_allmulti_disable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending disable command */
+		allmulti_inactive(rxf->rxmode_pending,
+			rxf->rxmode_pending_bitmask);
+	} else {
+		/* Schedule enable */
+		allmulti_enable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_allmulti_disable(struct bna_rxf *rxf)
+{
+	int ret = 0;
+
+	if (is_allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask) ||
+		(!(rxf->rxmode_active & BNA_RXMODE_ALLMULTI))) {
+		/* Do nothing if pending disable or already disabled */
+	} else if (is_allmulti_enable(rxf->rxmode_pending,
+					rxf->rxmode_pending_bitmask)) {
+		/* Turn off pending enable command */
+		allmulti_inactive(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+	} else if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
+		/* Schedule disable */
+		allmulti_disable(rxf->rxmode_pending,
+				rxf->rxmode_pending_bitmask);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int
+bna_rxf_vlan_strip_cfg_apply(struct bna_rxf *rxf)
+{
+	if (rxf->vlan_strip_pending) {
+		rxf->vlan_strip_pending = false;
+		bna_bfi_vlan_strip_enable(rxf);
+		return 1;
+	}
+
+	return 0;
+}
+
+enum bna_cb_status
+bna_rx_mcast_del(struct bna_rx *rx, u8 *addr,
+		 void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	struct bna_mac *mac;
+
+	mac = bna_mac_find(&rxf->mcast_pending_add_q, addr);
+	if (mac) {
+		list_del(&mac->qe);
+		bfa_q_qe_init(&mac->qe);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+		if (cbfn)
+			(*cbfn)(rx->bna->bnad, rx);
+		return BNA_CB_SUCCESS;
+	}
+
+	mac = bna_mac_find(&rxf->mcast_active_q, addr);
+	if (mac) {
+		list_del(&mac->qe);
+		bfa_q_qe_init(&mac->qe);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
+		rxf->cam_fltr_cbfn = cbfn;
+		rxf->cam_fltr_cbarg = rx->bna->bnad;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+		return BNA_CB_SUCCESS;
+	}
+
+	return BNA_CB_INVALID_MAC;
+}
+
+void
+bna_rx_mcast_delall(struct bna_rx *rx,
+		    void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	struct list_head *qe;
+	struct bna_mac *mac;
+	int need_hw_config = 0;
+
+	/* Purge all entries from pending_add_q */
+	while (!list_empty(&rxf->mcast_pending_add_q)) {
+		bfa_q_deq(&rxf->mcast_pending_add_q, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		bna_mcam_mod_mac_put(&rxf->rx->bna->mcam_mod, mac);
+	}
+
+	/* Schedule all entries in active_q for deletion */
+	while (!list_empty(&rxf->mcast_active_q)) {
+		bfa_q_deq(&rxf->mcast_active_q, &qe);
+		mac = (struct bna_mac *)qe;
+		bfa_q_qe_init(&mac->qe);
+		list_add_tail(&mac->qe, &rxf->mcast_pending_del_q);
+		need_hw_config = 1;
+	}
+
+	if (need_hw_config) {
+		rxf->cam_fltr_cbfn = cbfn;
+		rxf->cam_fltr_cbarg = rx->bna->bnad;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+		return;
+	}
+
+	if (cbfn)
+		(*cbfn)(rx->bna->bnad, rx);
 }
 
 /**
  * RX
  */
-#define	RXQ_RCB_INIT(q, rxp, qdepth, bna, _id, unmapq_mem)	do {	\
-	struct bna_doorbell_qset *_qset;				\
-	unsigned long off;						\
-	(q)->rcb->producer_index = (q)->rcb->consumer_index = 0;	\
-	(q)->rcb->q_depth = (qdepth);					\
-	(q)->rcb->unmap_q = unmapq_mem;					\
-	(q)->rcb->rxq = (q);						\
-	(q)->rcb->cq = &(rxp)->cq;					\
-	(q)->rcb->bnad = (bna)->bnad;					\
-	_qset = (struct bna_doorbell_qset *)0;			\
-	off = (unsigned long)&_qset[(q)->rxq_id].rxq[0];		\
-	(q)->rcb->q_dbell = off +					\
-		BNA_GET_DOORBELL_BASE_ADDR((bna)->pcidev.pci_bar_kva);	\
-	(q)->rcb->id = _id;						\
-} while (0)
 
 #define	BNA_GET_RXQS(qcfg)	(((qcfg)->rxp_type == BNA_RXP_SINGLE) ?	\
 	(qcfg)->num_paths : ((qcfg)->num_paths * 2))
@@ -1811,77 +1392,57 @@ bna_rx_vlan_del(struct bna_rx *rx, int vlan_id)
 #define	SIZE_TO_PAGES(size)	(((size) >> PAGE_SHIFT) + ((((size) &\
 	(PAGE_SIZE - 1)) + (PAGE_SIZE - 1)) >> PAGE_SHIFT))
 
-#define	call_rx_stop_callback(rx, status)				\
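+/* Clear the saved callback before invoking it, so it can be re-armed */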
+#define	call_rx_stop_cbfn(rx)						\
+do {									\
 	if ((rx)->stop_cbfn) {						\
-		(*(rx)->stop_cbfn)((rx)->stop_cbarg, rx, (status));	\
+		void (*cbfn)(void *, struct bna_rx *);			\
+		void *cbarg;						\
+		cbfn = (rx)->stop_cbfn;					\
+		cbarg = (rx)->stop_cbarg;				\
 		(rx)->stop_cbfn = NULL;					\
 		(rx)->stop_cbarg = NULL;				\
-	}
-
-/*
- * Since rx_enable is synchronous callback, there is no start_cbfn required.
- * Instead, we'll call bnad_rx_post(rxp) so that bnad can post the buffers
- * for each rxpath.
- */
+		cbfn(cbarg, rx);					\
+	}								\
+} while (0)
 
-#define	call_rx_disable_cbfn(rx, status)				\
-		if ((rx)->disable_cbfn)	{				\
-			(*(rx)->disable_cbfn)((rx)->disable_cbarg,	\
-					status);			\
-			(rx)->disable_cbfn = NULL;			\
-			(rx)->disable_cbarg = NULL;			\
-		}							\
-
-#define	rxqs_reqd(type, num_rxqs)					\
-	(((type) == BNA_RXP_SINGLE) ? (num_rxqs) : ((num_rxqs) * 2))
-
-#define rx_ib_fail(rx)						\
-do {								\
-	struct bna_rxp *rxp;					\
-	struct list_head *qe;						\
-	list_for_each(qe, &(rx)->rxp_q) {				\
-		rxp = (struct bna_rxp *)qe;			\
-		bna_ib_fail(rxp->cq.ib);			\
-	}							\
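+/* Program a fw queue: page table address, first page, count and size */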
+#define bfi_enet_datapath_q_init(bfi_q, bna_qpt)			\
+do {									\
+	struct bna_dma_addr cur_q_addr =				\
+		*((struct bna_dma_addr *)((bna_qpt)->kv_qpt_ptr));	\
+	(bfi_q)->pg_tbl.a32.addr_lo = (bna_qpt)->hw_qpt_ptr.lsb;	\
+	(bfi_q)->pg_tbl.a32.addr_hi = (bna_qpt)->hw_qpt_ptr.msb;	\
+	(bfi_q)->first_entry.a32.addr_lo = cur_q_addr.lsb;		\
+	(bfi_q)->first_entry.a32.addr_hi = cur_q_addr.msb;		\
+	(bfi_q)->pages = htons((u16)(bna_qpt)->page_count);	\
+	(bfi_q)->page_sz = htons((u16)(bna_qpt)->page_size);\
 } while (0)
 
-static void __bna_multi_rxq_stop(struct bna_rxp *, u32 *);
-static void __bna_rxq_start(struct bna_rxq *rxq);
-static void __bna_cq_start(struct bna_cq *cq);
-static void bna_rit_create(struct bna_rx *rx);
-static void bna_rx_cb_multi_rxq_stopped(void *arg, int status);
-static void bna_rx_cb_rxq_stopped_all(void *arg);
+static void bna_bfi_rx_enet_start(struct bna_rx *rx);
+static void bna_rx_enet_stop(struct bna_rx *rx);
+static void bna_rx_mod_cb_rx_stopped(void *arg, struct bna_rx *rx);
 
 bfa_fsm_state_decl(bna_rx, stopped,
 	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, start_wait,
+	struct bna_rx, enum bna_rx_event);
 bfa_fsm_state_decl(bna_rx, rxf_start_wait,
 	struct bna_rx, enum bna_rx_event);
 bfa_fsm_state_decl(bna_rx, started,
 	struct bna_rx, enum bna_rx_event);
 bfa_fsm_state_decl(bna_rx, rxf_stop_wait,
 	struct bna_rx, enum bna_rx_event);
-bfa_fsm_state_decl(bna_rx, rxq_stop_wait,
+bfa_fsm_state_decl(bna_rx, stop_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, cleanup_wait,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, failed,
+	struct bna_rx, enum bna_rx_event);
+bfa_fsm_state_decl(bna_rx, quiesce_wait,
 	struct bna_rx, enum bna_rx_event);
-
-static const struct bfa_sm_table rx_sm_table[] = {
-	{BFA_SM(bna_rx_sm_stopped), BNA_RX_STOPPED},
-	{BFA_SM(bna_rx_sm_rxf_start_wait), BNA_RX_RXF_START_WAIT},
-	{BFA_SM(bna_rx_sm_started), BNA_RX_STARTED},
-	{BFA_SM(bna_rx_sm_rxf_stop_wait), BNA_RX_RXF_STOP_WAIT},
-	{BFA_SM(bna_rx_sm_rxq_stop_wait), BNA_RX_RXQ_STOP_WAIT},
-};
 
 static void bna_rx_sm_stopped_entry(struct bna_rx *rx)
 {
-	struct bna_rxp *rxp;
-	struct list_head *qe_rxp;
-
-	list_for_each(qe_rxp, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe_rxp;
-		rx->rx_cleanup_cbfn(rx->bna->bnad, rxp->cq.ccb);
-	}
-
-	call_rx_stop_callback(rx, BNA_CB_SUCCESS);
+	call_rx_stop_cbfn(rx);
 }
 
 static void bna_rx_sm_stopped(struct bna_rx *rx,
@@ -1889,44 +1450,53 @@ static void bna_rx_sm_stopped(struct bna_rx *rx,
 {
 	switch (event) {
 	case RX_E_START:
-		bfa_fsm_set_state(rx, bna_rx_sm_rxf_start_wait);
+		bfa_fsm_set_state(rx, bna_rx_sm_start_wait);
 		break;
+
 	case RX_E_STOP:
-		call_rx_stop_callback(rx, BNA_CB_SUCCESS);
+		call_rx_stop_cbfn(rx);
 		break;
+
 	case RX_E_FAIL:
 		/* no-op */
 		break;
+
 	default:
 		bfa_sm_fault(event);
 		break;
 	}
+}
 
+static void bna_rx_sm_start_wait_entry(struct bna_rx *rx)
+{
+	bna_bfi_rx_enet_start(rx);
 }
 
-static void bna_rx_sm_rxf_start_wait_entry(struct bna_rx *rx)
+static void bna_rx_sm_start_wait(struct bna_rx *rx,
+				enum bna_rx_event event)
 {
-	struct bna_rxp *rxp;
-	struct list_head *qe_rxp;
-	struct bna_rxq *q0 = NULL, *q1 = NULL;
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_stop_wait);
+		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
 
-	/* Setup the RIT */
-	bna_rit_create(rx);
+	case RX_E_STARTED:
+		bfa_fsm_set_state(rx, bna_rx_sm_rxf_start_wait);
+		break;
 
-	list_for_each(qe_rxp, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe_rxp;
-		bna_ib_start(rxp->cq.ib);
-		GET_RXQS(rxp, q0, q1);
-		q0->buffer_size = bna_port_mtu_get(&rx->bna->port);
-		__bna_rxq_start(q0);
-		rx->rx_post_cbfn(rx->bna->bnad, q0->rcb);
-		if (q1)  {
-			__bna_rxq_start(q1);
-			rx->rx_post_cbfn(rx->bna->bnad, q1->rcb);
-		}
-		__bna_cq_start(&rxp->cq);
+	default:
+		bfa_sm_fault(event);
+		break;
 	}
+}
 
+static void bna_rx_sm_rxf_start_wait_entry(struct bna_rx *rx)
+{
+	rx->rx_post_cbfn(rx->bna->bnad, rx);
 	bna_rxf_start(&rx->rxf);
 }
 
@@ -1937,14 +1507,17 @@ static void bna_rx_sm_rxf_start_wait(struct bna_rx *rx,
 	case RX_E_STOP:
 		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
 		break;
+
 	case RX_E_FAIL:
-		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
-		rx_ib_fail(rx);
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
 		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
 		break;
+
 	case RX_E_RXF_STARTED:
 		bfa_fsm_set_state(rx, bna_rx_sm_started);
 		break;
+
 	default:
 		bfa_sm_fault(event);
 		break;
@@ -1956,30 +1529,34 @@ bna_rx_sm_started_entry(struct bna_rx *rx)
 {
 	struct bna_rxp *rxp;
 	struct list_head *qe_rxp;
+	int is_regular = (rx->type == BNA_RX_T_REGULAR);
 
 	/* Start IB */
 	list_for_each(qe_rxp, &rx->rxp_q) {
 		rxp = (struct bna_rxp *)qe_rxp;
-		bna_ib_ack(&rxp->cq.ib->door_bell, 0);
+		bna_ib_start(rx->bna, &rxp->cq.ib, is_regular);
 	}
 
-	bna_llport_rx_started(&rx->bna->port.llport);
+	bna_ethport_cb_rx_started(&rx->bna->ethport);
 }
 
 void
 bna_rx_sm_started(struct bna_rx *rx, enum bna_rx_event event)
 {
 	switch (event) {
-	case RX_E_FAIL:
-		bna_llport_rx_stopped(&rx->bna->port.llport);
-		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
-		rx_ib_fail(rx);
-		bna_rxf_fail(&rx->rxf);
-		break;
 	case RX_E_STOP:
-		bna_llport_rx_stopped(&rx->bna->port.llport);
 		bfa_fsm_set_state(rx, bna_rx_sm_rxf_stop_wait);
+		bna_ethport_cb_rx_stopped(&rx->bna->ethport);
+		bna_rxf_stop(&rx->rxf);
 		break;
+
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
+		bna_ethport_cb_rx_stopped(&rx->bna->ethport);
+		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
 	default:
 		bfa_sm_fault(event);
 		break;
@@ -1989,27 +1566,27 @@ bna_rx_sm_started(struct bna_rx *rx, enum bna_rx_event event)
 void
 bna_rx_sm_rxf_stop_wait_entry(struct bna_rx *rx)
 {
-	bna_rxf_stop(&rx->rxf);
 }
 
 void
 bna_rx_sm_rxf_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
 {
 	switch (event) {
-	case RX_E_RXF_STOPPED:
-		bfa_fsm_set_state(rx, bna_rx_sm_rxq_stop_wait);
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		bna_rxf_fail(&rx->rxf);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
 		break;
+
 	case RX_E_RXF_STARTED:
-		/**
-		 * RxF was in the process of starting up when
-		 * RXF_E_STOP was issued. Ignore this event
-		 */
+		bna_rxf_stop(&rx->rxf);
 		break;
-	case RX_E_FAIL:
-		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
-		rx_ib_fail(rx);
-		bna_rxf_fail(&rx->rxf);
+
+	case RX_E_RXF_STOPPED:
+		bfa_fsm_set_state(rx, bna_rx_sm_stop_wait);
+		bna_rx_enet_stop(rx);
 		break;
+
 	default:
 		bfa_sm_fault(event);
 		break;
@@ -2018,51 +1595,24 @@ bna_rx_sm_rxf_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
 }
 
 void
-bna_rx_sm_rxq_stop_wait_entry(struct bna_rx *rx)
+bna_rx_sm_stop_wait_entry(struct bna_rx *rx)
 {
-	struct bna_rxp *rxp = NULL;
-	struct bna_rxq *q0 = NULL;
-	struct bna_rxq *q1 = NULL;
-	struct list_head	*qe;
-	u32 rxq_mask[2] = {0, 0};
-
-	/* Only one call to multi-rxq-stop for all RXPs in this RX */
-	bfa_wc_up(&rx->rxq_stop_wc);
-	list_for_each(qe, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe;
-		GET_RXQS(rxp, q0, q1);
-		if (q0->rxq_id < 32)
-			rxq_mask[0] |= ((u32)1 << q0->rxq_id);
-		else
-			rxq_mask[1] |= ((u32)1 << (q0->rxq_id - 32));
-		if (q1) {
-			if (q1->rxq_id < 32)
-				rxq_mask[0] |= ((u32)1 << q1->rxq_id);
-			else
-				rxq_mask[1] |= ((u32)
-						1 << (q1->rxq_id - 32));
-		}
-	}
-
-	__bna_multi_rxq_stop(rxp, rxq_mask);
 }
 
 void
-bna_rx_sm_rxq_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
+bna_rx_sm_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
 {
-	struct bna_rxp *rxp = NULL;
-	struct list_head	*qe;
-
 	switch (event) {
-	case RX_E_RXQ_STOPPED:
-		list_for_each(qe, &rx->rxp_q) {
-			rxp = (struct bna_rxp *)qe;
-			bna_ib_stop(rxp->cq.ib);
-		}
-		/* Fall through */
 	case RX_E_FAIL:
-		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+	case RX_E_STOPPED:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		rx->rx_cleanup_cbfn(rx->bna->bnad, rx);
+		break;
+
+	case RX_E_STARTED:
+		bna_rx_enet_stop(rx);
 		break;
+
 	default:
 		bfa_sm_fault(event);
 		break;
@@ -2070,184 +1620,214 @@ bna_rx_sm_rxq_stop_wait(struct bna_rx *rx, enum bna_rx_event event)
 }
 
 void
-__bna_multi_rxq_stop(struct bna_rxp *rxp, u32 * rxq_id_mask)
+bna_rx_sm_cleanup_wait_entry(struct bna_rx *rx)
 {
-	struct bfi_ll_q_stop_req ll_req;
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_RXQ_STOP_REQ, 0);
-	ll_req.q_id_mask[0] = htonl(rxq_id_mask[0]);
-	ll_req.q_id_mask[1] = htonl(rxq_id_mask[1]);
-	bna_mbox_qe_fill(&rxp->mbox_qe, &ll_req, sizeof(ll_req),
-		bna_rx_cb_multi_rxq_stopped, rxp);
-	bna_mbox_send(rxp->rx->bna, &rxp->mbox_qe);
 }
 
 void
-__bna_rxq_start(struct bna_rxq *rxq)
+bna_rx_sm_cleanup_wait(struct bna_rx *rx, enum bna_rx_event event)
 {
-	struct bna_rxtx_q_mem *q_mem;
-	struct bna_rxq_mem rxq_cfg, *rxq_mem;
-	struct bna_dma_addr cur_q_addr;
-	/* struct bna_doorbell_qset *qset; */
-	struct bna_qpt *qpt;
-	u32 pg_num;
-	struct bna *bna = rxq->rx->bna;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	qpt = &rxq->qpt;
-	cur_q_addr = *((struct bna_dma_addr *)(qpt->kv_qpt_ptr));
-
-	rxq_cfg.pg_tbl_addr_lo = qpt->hw_qpt_ptr.lsb;
-	rxq_cfg.pg_tbl_addr_hi = qpt->hw_qpt_ptr.msb;
-	rxq_cfg.cur_q_entry_lo = cur_q_addr.lsb;
-	rxq_cfg.cur_q_entry_hi = cur_q_addr.msb;
-
-	rxq_cfg.pg_cnt_n_prd_ptr = ((u32)qpt->page_count << 16) | 0x0;
-	rxq_cfg.entry_n_pg_size = ((u32)(BFI_RXQ_WI_SIZE >> 2) << 16) |
-		(qpt->page_size >> 2);
-	rxq_cfg.sg_n_cq_n_cns_ptr =
-		((u32)(rxq->rxp->cq.cq_id & 0xff) << 16) | 0x0;
-	rxq_cfg.buf_sz_n_q_state = ((u32)rxq->buffer_size << 16) |
-		BNA_Q_IDLE_STATE;
-	rxq_cfg.next_qid = 0x0 | (0x3 << 8);
-
-	/* Write the page number register */
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + bna->port_num,
-			HQM_RXTX_Q_RAM_BASE_OFFSET);
-	writel(pg_num, bna->regs.page_addr);
-
-	/* Write to h/w */
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-					HQM_RXTX_Q_RAM_BASE_OFFSET);
-
-	q_mem = (struct bna_rxtx_q_mem *)0;
-	rxq_mem = &q_mem[rxq->rxq_id].rxq;
-
-	off = (unsigned long)&rxq_mem->pg_tbl_addr_lo;
-	writel(htonl(rxq_cfg.pg_tbl_addr_lo), base_addr + off);
+	switch (event) {
+	case RX_E_FAIL:
+	case RX_E_RXF_STOPPED:
+		/* No-op */
+		break;
 
-	off = (unsigned long)&rxq_mem->pg_tbl_addr_hi;
-	writel(htonl(rxq_cfg.pg_tbl_addr_hi), base_addr + off);
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
 
-	off = (unsigned long)&rxq_mem->cur_q_entry_lo;
-	writel(htonl(rxq_cfg.cur_q_entry_lo), base_addr + off);
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
 
-	off = (unsigned long)&rxq_mem->cur_q_entry_hi;
-	writel(htonl(rxq_cfg.cur_q_entry_hi), base_addr + off);
+static void
+bna_rx_sm_failed_entry(struct bna_rx *rx)
+{
+}
 
-	off = (unsigned long)&rxq_mem->pg_cnt_n_prd_ptr;
-	writel(rxq_cfg.pg_cnt_n_prd_ptr, base_addr + off);
+static void
+bna_rx_sm_failed(struct bna_rx *rx, enum bna_rx_event event)
+{
+	switch (event) {
+	case RX_E_START:
+		bfa_fsm_set_state(rx, bna_rx_sm_quiesce_wait);
+		break;
 
-	off = (unsigned long)&rxq_mem->entry_n_pg_size;
-	writel(rxq_cfg.entry_n_pg_size, base_addr + off);
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		break;
 
-	off = (unsigned long)&rxq_mem->sg_n_cq_n_cns_ptr;
-	writel(rxq_cfg.sg_n_cq_n_cns_ptr, base_addr + off);
+	case RX_E_FAIL:
+	case RX_E_RXF_STARTED:
+	case RX_E_RXF_STOPPED:
+		/* No-op */
+		break;
 
-	off = (unsigned long)&rxq_mem->buf_sz_n_q_state;
-	writel(rxq_cfg.buf_sz_n_q_state, base_addr + off);
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_stopped);
+		break;
 
-	off = (unsigned long)&rxq_mem->next_qid;
-	writel(rxq_cfg.next_qid, base_addr + off);
+	default:
+		bfa_sm_fault(event);
+		break;
+}	}
 
-	rxq->rcb->producer_index = 0;
-	rxq->rcb->consumer_index = 0;
+static void
+bna_rx_sm_quiesce_wait_entry(struct bna_rx *rx)
+{
 }
 
-void
-__bna_cq_start(struct bna_cq *cq)
+static void
+bna_rx_sm_quiesce_wait(struct bna_rx *rx, enum bna_rx_event event)
 {
-	struct bna_cq_mem cq_cfg, *cq_mem;
-	const struct bna_qpt *qpt;
-	struct bna_dma_addr cur_q_addr;
-	u32 pg_num;
-	struct bna *bna = cq->rx->bna;
-	void __iomem *base_addr;
-	unsigned long off;
+	switch (event) {
+	case RX_E_STOP:
+		bfa_fsm_set_state(rx, bna_rx_sm_cleanup_wait);
+		break;
 
-	qpt = &cq->qpt;
-	cur_q_addr = *((struct bna_dma_addr *)(qpt->kv_qpt_ptr));
+	case RX_E_FAIL:
+		bfa_fsm_set_state(rx, bna_rx_sm_failed);
+		break;
 
-	/*
-	 * Fill out structure, to be subsequently written
-	 * to hardware
-	 */
-	cq_cfg.pg_tbl_addr_lo = qpt->hw_qpt_ptr.lsb;
-	cq_cfg.pg_tbl_addr_hi = qpt->hw_qpt_ptr.msb;
-	cq_cfg.cur_q_entry_lo = cur_q_addr.lsb;
-	cq_cfg.cur_q_entry_hi = cur_q_addr.msb;
+	case RX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(rx, bna_rx_sm_start_wait);
+		break;
 
-	cq_cfg.pg_cnt_n_prd_ptr = (qpt->page_count << 16) | 0x0;
-	cq_cfg.entry_n_pg_size =
-		((u32)(BFI_CQ_WI_SIZE >> 2) << 16) | (qpt->page_size >> 2);
-	cq_cfg.int_blk_n_cns_ptr = ((((u32)cq->ib_seg_offset) << 24) |
-			((u32)(cq->ib->ib_id & 0xff)  << 16) | 0x0);
-	cq_cfg.q_state = BNA_Q_IDLE_STATE;
+	default:
+		bfa_sm_fault(event);
+		break;
+	}
+}
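
Reviewer note: each of the new Rx states above is a plain C function; bfa_fsm_set_state() stores the state's function pointer and immediately runs the matching *_entry() hook, while bfa_fsm_send_event() just calls through the stored pointer. A minimal standalone sketch of that dispatch pattern, simplified from what bfa_cs.h appears to provide (all names below are illustrative, not driver code):

#include <stdio.h>

enum event { E_START = 1, E_STOP = 2 };

struct obj;
typedef void (*fsm_t)(struct obj *, enum event);
struct obj { fsm_t fsm; };

static void sm_stopped(struct obj *o, enum event e);
static void sm_started(struct obj *o, enum event e);

static void sm_stopped_entry(struct obj *o) { printf("stopped\n"); }
static void sm_started_entry(struct obj *o) { printf("started\n"); }

/* Setting a state mimics bfa_fsm_set_state(): store the pointer,
 * then run the state's entry hook. */
static void sm_stopped(struct obj *o, enum event e)
{
	if (e == E_START) {
		o->fsm = sm_started;
		sm_started_entry(o);
	}
}

static void sm_started(struct obj *o, enum event e)
{
	if (e == E_STOP) {
		o->fsm = sm_stopped;
		sm_stopped_entry(o);
	}
}

int main(void)
{
	struct obj o = { .fsm = sm_stopped };

	o.fsm(&o, E_START);	/* like bfa_fsm_send_event(&o, E_START) */
	o.fsm(&o, E_STOP);
	return 0;
}
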
 
-	/* Write the page number register */
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + bna->port_num,
-				  HQM_CQ_RAM_BASE_OFFSET);
+static void
+bna_bfi_rx_enet_start(struct bna_rx *rx)
+{
+	struct bfi_enet_rx_cfg_req *cfg_req = &rx->bfi_enet_cmd.cfg_req;
+	struct bna_rxp *rxp = NULL;
+	struct bna_rxq *q0 = NULL, *q1 = NULL;
+	struct list_head *rxp_qe;
+	int i;
 
-	writel(pg_num, bna->regs.page_addr);
+	bfi_msgq_mhdr_set(cfg_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_CFG_SET_REQ, 0, rx->rid);
+	cfg_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_rx_cfg_req)));
 
-	/* H/W write */
-	base_addr = BNA_GET_MEM_BASE_ADDR(bna->pcidev.pci_bar_kva,
-					HQM_CQ_RAM_BASE_OFFSET);
+	cfg_req->num_queue_sets = rx->num_paths;
+	for (i = 0, rxp_qe = bfa_q_first(&rx->rxp_q);
+		i < rx->num_paths;
+		i++, rxp_qe = bfa_q_next(rxp_qe)) {
+		rxp = (struct bna_rxp *)rxp_qe;
 
-	cq_mem = (struct bna_cq_mem *)0;
+		GET_RXQS(rxp, q0, q1);
+		switch (rxp->type) {
+		case BNA_RXP_SLR:
+		case BNA_RXP_HDS:
+			/* Small RxQ */
+			bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].qs.q,
+						&q1->qpt);
+			cfg_req->q_cfg[i].qs.rx_buffer_size =
+				htons((u16)q1->buffer_size);
+			/* Fall through */
+
+		case BNA_RXP_SINGLE:
+			/* Large/Single RxQ */
+			bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].ql.q,
+						&q0->qpt);
+			q0->buffer_size =
+				bna_enet_mtu_get(&rx->bna->enet);
+			cfg_req->q_cfg[i].ql.rx_buffer_size =
+				htons((u16)q0->buffer_size);
+			break;
 
-	off = (unsigned long)&cq_mem[cq->cq_id].pg_tbl_addr_lo;
-	writel(htonl(cq_cfg.pg_tbl_addr_lo), base_addr + off);
+		default:
+			BUG_ON(1);
+		}
 
-	off = (unsigned long)&cq_mem[cq->cq_id].pg_tbl_addr_hi;
-	writel(htonl(cq_cfg.pg_tbl_addr_hi), base_addr + off);
+		bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].cq.q,
+					&rxp->cq.qpt);
+
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_lo =
+			rxp->cq.ib.ib_seg_host_addr.lsb;
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_hi =
+			rxp->cq.ib.ib_seg_host_addr.msb;
+		cfg_req->q_cfg[i].ib.intr.msix_index =
+			htons((u16)rxp->cq.ib.intr_vector);
+	}
+
+	cfg_req->ib_cfg.int_pkt_dma = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.int_enabled = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_pkt_enabled = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.continuous_coalescing = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.msix = (rxp->cq.ib.intr_type == BNA_INTR_T_MSIX)
+				? BNA_STATUS_T_ENABLED
+				: BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.coalescing_timeout =
+			htonl((u32)rxp->cq.ib.coalescing_timeo);
+	cfg_req->ib_cfg.inter_pkt_timeout =
+			htonl((u32)rxp->cq.ib.interpkt_timeo);
+	cfg_req->ib_cfg.inter_pkt_count = (u8)rxp->cq.ib.interpkt_count;
 
-	off = (unsigned long)&cq_mem[cq->cq_id].cur_q_entry_lo;
-	writel(htonl(cq_cfg.cur_q_entry_lo), base_addr + off);
+	switch (rxp->type) {
+	case BNA_RXP_SLR:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_LARGE_SMALL;
+		break;
 
-	off = (unsigned long)&cq_mem[cq->cq_id].cur_q_entry_hi;
-	writel(htonl(cq_cfg.cur_q_entry_hi), base_addr + off);
+	case BNA_RXP_HDS:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_HDS;
+		cfg_req->rx_cfg.hds.type = rx->hds_cfg.hdr_type;
+		cfg_req->rx_cfg.hds.force_offset = rx->hds_cfg.forced_offset;
+		cfg_req->rx_cfg.hds.max_header_size = rx->hds_cfg.forced_offset;
+		break;
 
-	off = (unsigned long)&cq_mem[cq->cq_id].pg_cnt_n_prd_ptr;
-	writel(cq_cfg.pg_cnt_n_prd_ptr, base_addr + off);
+	case BNA_RXP_SINGLE:
+		cfg_req->rx_cfg.rxq_type = BFI_ENET_RXQ_SINGLE;
+		break;
 
-	off = (unsigned long)&cq_mem[cq->cq_id].entry_n_pg_size;
-	writel(cq_cfg.entry_n_pg_size, base_addr + off);
+	default:
+		BUG_ON(1);
+	}
+	cfg_req->rx_cfg.strip_vlan = rx->rxf.vlan_strip_status;
 
-	off = (unsigned long)&cq_mem[cq->cq_id].int_blk_n_cns_ptr;
-	writel(cq_cfg.int_blk_n_cns_ptr, base_addr + off);
+	bfa_msgq_cmd_set(&rx->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_rx_cfg_req), &cfg_req->mh);
+	bfa_msgq_cmd_post(&rx->bna->msgq, &rx->msgq_cmd);
+}
 
-	off = (unsigned long)&cq_mem[cq->cq_id].q_state;
-	writel(cq_cfg.q_state, base_addr + off);
+static void
+bna_bfi_rx_enet_stop(struct bna_rx *rx)
+{
+	struct bfi_enet_req *req = &rx->bfi_enet_cmd.req;
 
-	cq->ccb->producer_index = 0;
-	*(cq->ccb->hw_producer_index) = 0;
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_RX_CFG_CLR_REQ, 0, rx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_req)));
+	bfa_msgq_cmd_set(&rx->msgq_cmd, NULL, NULL, sizeof(struct bfi_enet_req),
+		&req->mh);
+	bfa_msgq_cmd_post(&rx->bna->msgq, &rx->msgq_cmd);
 }
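
Both bna_bfi_rx_enet_start() and bna_bfi_rx_enet_stop() above follow the same MSGQ recipe: set the BFI header, record how many fixed-size command entries the request occupies (in network byte order), then post the command. A standalone sketch of the entry-count arithmetic; the 64-byte entry size is an assumption here, and the real constant plus the exact definition of bfi_msgq_num_cmd_entries() live in the BFI headers:

#include <stddef.h>
#include <stdio.h>

#define CMD_ENTRY_SIZE 64	/* hypothetical BFI_MSGQ_CMD_ENTRY_SIZE */

/* Round a request size up to whole command entries */
static unsigned int num_cmd_entries(size_t req_size)
{
	return (req_size + CMD_ENTRY_SIZE - 1) / CMD_ENTRY_SIZE;
}

int main(void)
{
	printf("%u\n", num_cmd_entries(200));	/* 200 bytes -> 4 entries */
	return 0;
}
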
 
-void
-bna_rit_create(struct bna_rx *rx)
+static void
+bna_rx_enet_stop(struct bna_rx *rx)
 {
-	struct list_head	*qe_rxp;
 	struct bna_rxp *rxp;
-	struct bna_rxq *q0 = NULL;
-	struct bna_rxq *q1 = NULL;
-	int offset;
+	struct list_head		 *qe_rxp;
 
-	offset = 0;
+	/* Stop IB */
 	list_for_each(qe_rxp, &rx->rxp_q) {
 		rxp = (struct bna_rxp *)qe_rxp;
-		GET_RXQS(rxp, q0, q1);
-		rx->rxf.rit_segment->rit[offset].large_rxq_id = q0->rxq_id;
-		rx->rxf.rit_segment->rit[offset].small_rxq_id =
-						(q1 ? q1->rxq_id : 0);
-		offset++;
+		bna_ib_stop(rx->bna, &rxp->cq.ib);
 	}
+
+	bna_bfi_rx_enet_stop(rx);
 }
 
 static int
-_rx_can_satisfy(struct bna_rx_mod *rx_mod,
-		struct bna_rx_config *rx_cfg)
+bna_rx_res_check(struct bna_rx_mod *rx_mod, struct bna_rx_config *rx_cfg)
 {
 	if ((rx_mod->rx_free_count == 0) ||
 		(rx_mod->rxp_free_count == 0) ||
@@ -2264,28 +1844,25 @@ _rx_can_satisfy(struct bna_rx_mod *rx_mod,
 			return 0;
 	}
 
-	if (!bna_rit_mod_can_satisfy(&rx_mod->bna->rit_mod, rx_cfg->num_paths))
-		return 0;
-
 	return 1;
 }
 
 static struct bna_rxq *
-_get_free_rxq(struct bna_rx_mod *rx_mod)
+bna_rxq_get(struct bna_rx_mod *rx_mod)
 {
 	struct bna_rxq *rxq = NULL;
 	struct list_head	*qe = NULL;
 
 	bfa_q_deq(&rx_mod->rxq_free_q, &qe);
-	if (qe) {
-		rx_mod->rxq_free_count--;
-		rxq = (struct bna_rxq *)qe;
-	}
+	rx_mod->rxq_free_count--;
+	rxq = (struct bna_rxq *)qe;
+	bfa_q_qe_init(&rxq->qe);
+
 	return rxq;
 }
 
 static void
-_put_free_rxq(struct bna_rx_mod *rx_mod, struct bna_rxq *rxq)
+bna_rxq_put(struct bna_rx_mod *rx_mod, struct bna_rxq *rxq)
 {
 	bfa_q_qe_init(&rxq->qe);
 	list_add_tail(&rxq->qe, &rx_mod->rxq_free_q);
@@ -2293,23 +1870,21 @@ _put_free_rxq(struct bna_rx_mod *rx_mod, struct bna_rxq *rxq)
 }
 
 static struct bna_rxp *
-_get_free_rxp(struct bna_rx_mod *rx_mod)
+bna_rxp_get(struct bna_rx_mod *rx_mod)
 {
 	struct list_head	*qe = NULL;
 	struct bna_rxp *rxp = NULL;
 
 	bfa_q_deq(&rx_mod->rxp_free_q, &qe);
-	if (qe) {
-		rx_mod->rxp_free_count--;
-
-		rxp = (struct bna_rxp *)qe;
-	}
+	rx_mod->rxp_free_count--;
+	rxp = (struct bna_rxp *)qe;
+	bfa_q_qe_init(&rxp->qe);
 
 	return rxp;
 }
 
 static void
-_put_free_rxp(struct bna_rx_mod *rx_mod, struct bna_rxp *rxp)
+bna_rxp_put(struct bna_rx_mod *rx_mod, struct bna_rxp *rxp)
 {
 	bfa_q_qe_init(&rxp->qe);
 	list_add_tail(&rxp->qe, &rx_mod->rxp_free_q);
@@ -2317,50 +1892,59 @@ _put_free_rxp(struct bna_rx_mod *rx_mod, struct bna_rxp *rxp)
 }
 
 static struct bna_rx *
-_get_free_rx(struct bna_rx_mod *rx_mod)
+bna_rx_get(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
 {
 	struct list_head	*qe = NULL;
 	struct bna_rx *rx = NULL;
 
-	bfa_q_deq(&rx_mod->rx_free_q, &qe);
-	if (qe) {
-		rx_mod->rx_free_count--;
+	if (type == BNA_RX_T_REGULAR)
+		bfa_q_deq(&rx_mod->rx_free_q, &qe);
+	else
+		bfa_q_deq_tail(&rx_mod->rx_free_q, &qe);
 
-		rx = (struct bna_rx *)qe;
-		bfa_q_qe_init(qe);
-		list_add_tail(&rx->qe, &rx_mod->rx_active_q);
-	}
+	rx_mod->rx_free_count--;
+	rx = (struct bna_rx *)qe;
+	bfa_q_qe_init(&rx->qe);
+	list_add_tail(&rx->qe, &rx_mod->rx_active_q);
+	rx->type = type;
 
 	return rx;
 }
 
 static void
-_put_free_rx(struct bna_rx_mod *rx_mod, struct bna_rx *rx)
+bna_rx_put(struct bna_rx_mod *rx_mod, struct bna_rx *rx)
 {
-	bfa_q_qe_init(&rx->qe);
-	list_add_tail(&rx->qe, &rx_mod->rx_free_q);
-	rx_mod->rx_free_count++;
-}
+	struct list_head *prev_qe = NULL;
+	struct list_head *qe;
 
-static void
-_rx_init(struct bna_rx *rx, struct bna *bna)
-{
-	rx->bna = bna;
-	rx->rx_flags = 0;
+	bfa_q_qe_init(&rx->qe);
 
-	INIT_LIST_HEAD(&rx->rxp_q);
+	list_for_each(qe, &rx_mod->rx_free_q) {
+		if (((struct bna_rx *)qe)->rid < rx->rid)
+			prev_qe = qe;
+		else
+			break;
+	}
 
-	rx->rxq_stop_wc.wc_resume = bna_rx_cb_rxq_stopped_all;
-	rx->rxq_stop_wc.wc_cbarg = rx;
-	rx->rxq_stop_wc.wc_count = 0;
+	if (prev_qe == NULL) {
+		/* This is the first entry */
+		bfa_q_enq_head(&rx_mod->rx_free_q, &rx->qe);
+	} else if (bfa_q_next(prev_qe) == &rx_mod->rx_free_q) {
+		/* This is the last entry */
+		list_add_tail(&rx->qe, &rx_mod->rx_free_q);
+	} else {
+		/* Somewhere in the middle */
+		bfa_q_next(&rx->qe) = bfa_q_next(prev_qe);
+		bfa_q_prev(&rx->qe) = prev_qe;
+		bfa_q_next(prev_qe) = &rx->qe;
+		bfa_q_prev(bfa_q_next(&rx->qe)) = &rx->qe;
+	}
 
-	rx->stop_cbfn = NULL;
-	rx->stop_cbarg = NULL;
+	rx_mod->rx_free_count++;
 }
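
bna_rx_put() above re-inserts a freed rx into rx_free_q sorted by rid, so bna_rx_get() (which dequeues regular Rx objects from the head) always recycles the lowest free id. A minimal userspace sketch of the same ordered insert, using a plain singly linked list instead of the driver's queue macros (illustrative only):

#include <stdio.h>

struct rx { int rid; struct rx *next; };

/* Walk to the first entry with a larger rid and splice in before it,
 * mirroring the prev_qe walk in bna_rx_put() */
static void put_sorted(struct rx **head, struct rx *rx)
{
	struct rx **pp = head;

	while (*pp && (*pp)->rid < rx->rid)
		pp = &(*pp)->next;
	rx->next = *pp;
	*pp = rx;
}

int main(void)
{
	struct rx a = { .rid = 2 }, b = { .rid = 0 }, c = { .rid = 1 };
	struct rx *head = NULL, *p;

	put_sorted(&head, &a);
	put_sorted(&head, &b);
	put_sorted(&head, &c);
	for (p = head; p; p = p->next)
		printf("%d ", p->rid);	/* prints: 0 1 2 */
	printf("\n");
	return 0;
}
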
 
 static void
-_rxp_add_rxqs(struct bna_rxp *rxp,
-		struct bna_rxq *q0,
+bna_rxp_add_rxqs(struct bna_rxp *rxp, struct bna_rxq *q0,
 		struct bna_rxq *q1)
 {
 	switch (rxp->type) {
@@ -2382,7 +1966,7 @@ _rxp_add_rxqs(struct bna_rxp *rxp,
 }
 
 static void
-_rxq_qpt_init(struct bna_rxq *rxq,
+bna_rxq_qpt_setup(struct bna_rxq *rxq,
 		struct bna_rxp *rxp,
 		u32 page_count,
 		u32 page_size,
@@ -2406,12 +1990,11 @@ _rxq_qpt_init(struct bna_rxq *rxq,
 			page_mem[i].dma.lsb;
 		((struct bna_dma_addr *)rxq->qpt.kv_qpt_ptr)[i].msb =
 			page_mem[i].dma.msb;
-
 	}
 }
 
 static void
-_rxp_cqpt_setup(struct bna_rxp *rxp,
+bna_rxp_cqpt_setup(struct bna_rxp *rxp,
 		u32 page_count,
 		u32 page_size,
 		struct bna_mem_descr *qpt_mem,
@@ -2435,64 +2018,11 @@ _rxp_cqpt_setup(struct bna_rxp *rxp,
 			page_mem[i].dma.lsb;
 		((struct bna_dma_addr *)rxp->cq.qpt.kv_qpt_ptr)[i].msb =
 			page_mem[i].dma.msb;
-
 	}
 }
 
 static void
-_rx_add_rxp(struct bna_rx *rx, struct bna_rxp *rxp)
-{
-	list_add_tail(&rxp->qe, &rx->rxp_q);
-}
-
-static void
-_init_rxmod_queues(struct bna_rx_mod *rx_mod)
-{
-	INIT_LIST_HEAD(&rx_mod->rx_free_q);
-	INIT_LIST_HEAD(&rx_mod->rxq_free_q);
-	INIT_LIST_HEAD(&rx_mod->rxp_free_q);
-	INIT_LIST_HEAD(&rx_mod->rx_active_q);
-
-	rx_mod->rx_free_count = 0;
-	rx_mod->rxq_free_count = 0;
-	rx_mod->rxp_free_count = 0;
-}
-
-static void
-_rx_ctor(struct bna_rx *rx, int id)
-{
-	bfa_q_qe_init(&rx->qe);
-	INIT_LIST_HEAD(&rx->rxp_q);
-	rx->bna = NULL;
-
-	rx->rxf.rxf_id = id;
-
-	/* FIXME: mbox_qe ctor()?? */
-	bfa_q_qe_init(&rx->mbox_qe.qe);
-
-	rx->stop_cbfn = NULL;
-	rx->stop_cbarg = NULL;
-}
-
-void
-bna_rx_cb_multi_rxq_stopped(void *arg, int status)
-{
-	struct bna_rxp *rxp = (struct bna_rxp *)arg;
-
-	bfa_wc_down(&rxp->rx->rxq_stop_wc);
-}
-
-void
-bna_rx_cb_rxq_stopped_all(void *arg)
-{
-	struct bna_rx *rx = (struct bna_rx *)arg;
-
-	bfa_fsm_send_event(rx, RX_E_RXQ_STOPPED);
-}
-
-static void
-bna_rx_mod_cb_rx_stopped(void *arg, struct bna_rx *rx,
-			 enum bna_cb_status status)
+bna_rx_mod_cb_rx_stopped(void *arg, struct bna_rx *rx)
 {
 	struct bna_rx_mod *rx_mod = (struct bna_rx_mod *)arg;
 
@@ -2505,24 +2035,24 @@ bna_rx_mod_cb_rx_stopped_all(void *arg)
 	struct bna_rx_mod *rx_mod = (struct bna_rx_mod *)arg;
 
 	if (rx_mod->stop_cbfn)
-		rx_mod->stop_cbfn(&rx_mod->bna->port, BNA_CB_SUCCESS);
+		rx_mod->stop_cbfn(&rx_mod->bna->enet);
 	rx_mod->stop_cbfn = NULL;
 }
 
 static void
 bna_rx_start(struct bna_rx *rx)
 {
-	rx->rx_flags |= BNA_RX_F_PORT_ENABLED;
-	if (rx->rx_flags & BNA_RX_F_ENABLE)
+	rx->rx_flags |= BNA_RX_F_ENET_STARTED;
+	if (rx->rx_flags & BNA_RX_F_ENABLED)
 		bfa_fsm_send_event(rx, RX_E_START);
 }
 
 static void
 bna_rx_stop(struct bna_rx *rx)
 {
-	rx->rx_flags &= ~BNA_RX_F_PORT_ENABLED;
+	rx->rx_flags &= ~BNA_RX_F_ENET_STARTED;
 	if (rx->fsm == (bfa_fsm_t) bna_rx_sm_stopped)
-		bna_rx_mod_cb_rx_stopped(&rx->bna->rx_mod, rx, BNA_CB_SUCCESS);
+		bna_rx_mod_cb_rx_stopped(&rx->bna->rx_mod, rx);
 	else {
 		rx->stop_cbfn = bna_rx_mod_cb_rx_stopped;
 		rx->stop_cbarg = &rx->bna->rx_mod;
@@ -2533,9 +2063,8 @@ bna_rx_stop(struct bna_rx *rx)
 static void
 bna_rx_fail(struct bna_rx *rx)
 {
-	/* Indicate port is not enabled, and failed */
-	rx->rx_flags &= ~BNA_RX_F_PORT_ENABLED;
-	rx->rx_flags |= BNA_RX_F_PORT_FAILED;
+	/* Indicate Enet is not enabled, and failed */
+	rx->rx_flags &= ~BNA_RX_F_ENET_STARTED;
 	bfa_fsm_send_event(rx, RX_E_FAIL);
 }
 
@@ -2545,9 +2074,9 @@ bna_rx_mod_start(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
 	struct bna_rx *rx;
 	struct list_head *qe;
 
-	rx_mod->flags |= BNA_RX_MOD_F_PORT_STARTED;
+	rx_mod->flags |= BNA_RX_MOD_F_ENET_STARTED;
 	if (type == BNA_RX_T_LOOPBACK)
-		rx_mod->flags |= BNA_RX_MOD_F_PORT_LOOPBACK;
+		rx_mod->flags |= BNA_RX_MOD_F_ENET_LOOPBACK;
 
 	list_for_each(qe, &rx_mod->rx_active_q) {
 		rx = (struct bna_rx *)qe;
@@ -2562,32 +2091,22 @@ bna_rx_mod_stop(struct bna_rx_mod *rx_mod, enum bna_rx_type type)
 	struct bna_rx *rx;
 	struct list_head *qe;
 
-	rx_mod->flags &= ~BNA_RX_MOD_F_PORT_STARTED;
-	rx_mod->flags &= ~BNA_RX_MOD_F_PORT_LOOPBACK;
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_STARTED;
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_LOOPBACK;
 
-	rx_mod->stop_cbfn = bna_port_cb_rx_stopped;
+	rx_mod->stop_cbfn = bna_enet_cb_rx_stopped;
 
-	/**
-	 * Before calling bna_rx_stop(), increment rx_stop_wc as many times
-	 * as we are going to call bna_rx_stop
-	 */
-	list_for_each(qe, &rx_mod->rx_active_q) {
-		rx = (struct bna_rx *)qe;
-		if (rx->type == type)
-			bfa_wc_up(&rx_mod->rx_stop_wc);
-	}
-
-	if (rx_mod->rx_stop_wc.wc_count == 0) {
-		rx_mod->stop_cbfn(&rx_mod->bna->port, BNA_CB_SUCCESS);
-		rx_mod->stop_cbfn = NULL;
-		return;
-	}
+	bfa_wc_init(&rx_mod->rx_stop_wc, bna_rx_mod_cb_rx_stopped_all, rx_mod);
 
 	list_for_each(qe, &rx_mod->rx_active_q) {
 		rx = (struct bna_rx *)qe;
-		if (rx->type == type)
+		if (rx->type == type) {
+			bfa_wc_up(&rx_mod->rx_stop_wc);
 			bna_rx_stop(rx);
+		}
 	}
+
+	bfa_wc_wait(&rx_mod->rx_stop_wc);
 }
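
bna_rx_mod_stop() now relies on the bfa_wc wait counter instead of pre-counting matching Rx objects: bfa_wc_init() takes an initial reference, each bna_rx_stop() in flight takes another, and bfa_wc_wait() drops the initial one, so the resume callback fires exactly once after the last Rx reports stopped, or immediately when nothing matched. A standalone sketch of that reference-counted completion pattern (simplified; the real helpers are in bfa_cs.h):

#include <stdio.h>

struct wc {
	void (*resume)(void *arg);
	void *arg;
	int count;
};

static void wc_up(struct wc *wc) { wc->count++; }

static void wc_down(struct wc *wc)
{
	if (--wc->count == 0)
		wc->resume(wc->arg);
}

/* init takes the initial reference that wait later releases */
static void wc_init(struct wc *wc, void (*resume)(void *), void *arg)
{
	wc->resume = resume;
	wc->arg = arg;
	wc->count = 0;
	wc_up(wc);
}

static void all_stopped(void *arg) { printf("all stopped\n"); }

int main(void)
{
	struct wc wc;

	wc_init(&wc, all_stopped, NULL);	/* count = 1 */
	wc_up(&wc);	/* one rx is being stopped: count = 2 */
	wc_down(&wc);	/* bfa_wc_wait() drops the init ref: count = 1 */
	wc_down(&wc);	/* rx stop completes: count = 0, fires all_stopped() */
	return 0;
}
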
 
 void
@@ -2596,8 +2115,8 @@ bna_rx_mod_fail(struct bna_rx_mod *rx_mod)
 	struct bna_rx *rx;
 	struct list_head *qe;
 
-	rx_mod->flags &= ~BNA_RX_MOD_F_PORT_STARTED;
-	rx_mod->flags &= ~BNA_RX_MOD_F_PORT_LOOPBACK;
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_STARTED;
+	rx_mod->flags &= ~BNA_RX_MOD_F_ENET_LOOPBACK;
 
 	list_for_each(qe, &rx_mod->rx_active_q) {
 		rx = (struct bna_rx *)qe;
@@ -2617,45 +2136,51 @@ void bna_rx_mod_init(struct bna_rx_mod *rx_mod, struct bna *bna,
 	rx_mod->flags = 0;
 
 	rx_mod->rx = (struct bna_rx *)
-		res_info[BNA_RES_MEM_T_RX_ARRAY].res_u.mem_info.mdl[0].kva;
+		res_info[BNA_MOD_RES_MEM_T_RX_ARRAY].res_u.mem_info.mdl[0].kva;
 	rx_mod->rxp = (struct bna_rxp *)
-		res_info[BNA_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mdl[0].kva;
+		res_info[BNA_MOD_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mdl[0].kva;
 	rx_mod->rxq = (struct bna_rxq *)
-		res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mdl[0].kva;
+		res_info[BNA_MOD_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mdl[0].kva;
 
 	/* Initialize the queues */
-	_init_rxmod_queues(rx_mod);
+	INIT_LIST_HEAD(&rx_mod->rx_free_q);
+	rx_mod->rx_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rxq_free_q);
+	rx_mod->rxq_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rxp_free_q);
+	rx_mod->rxp_free_count = 0;
+	INIT_LIST_HEAD(&rx_mod->rx_active_q);
 
 	/* Build RX queues */
-	for (index = 0; index < BFI_MAX_RXQ; index++) {
+	for (index = 0; index < bna->ioceth.attr.num_rxp; index++) {
 		rx_ptr = &rx_mod->rx[index];
-		_rx_ctor(rx_ptr, index);
+
+		bfa_q_qe_init(&rx_ptr->qe);
+		INIT_LIST_HEAD(&rx_ptr->rxp_q);
+		rx_ptr->bna = NULL;
+		rx_ptr->rid = index;
+		rx_ptr->stop_cbfn = NULL;
+		rx_ptr->stop_cbarg = NULL;
+
 		list_add_tail(&rx_ptr->qe, &rx_mod->rx_free_q);
 		rx_mod->rx_free_count++;
 	}
 
 	/* build RX-path queue */
-	for (index = 0; index < BFI_MAX_RXQ; index++) {
+	for (index = 0; index < bna->ioceth.attr.num_rxp; index++) {
 		rxp_ptr = &rx_mod->rxp[index];
-		rxp_ptr->cq.cq_id = index;
 		bfa_q_qe_init(&rxp_ptr->qe);
 		list_add_tail(&rxp_ptr->qe, &rx_mod->rxp_free_q);
 		rx_mod->rxp_free_count++;
 	}
 
 	/* build RXQ queue */
-	for (index = 0; index < BFI_MAX_RXQ; index++) {
+	for (index = 0; index < (bna->ioceth.attr.num_rxp * 2); index++) {
 		rxq_ptr = &rx_mod->rxq[index];
-		rxq_ptr->rxq_id = index;
-
 		bfa_q_qe_init(&rxq_ptr->qe);
 		list_add_tail(&rxq_ptr->qe, &rx_mod->rxq_free_q);
 		rx_mod->rxq_free_count++;
 	}
-
-	rx_mod->rx_stop_wc.wc_resume = bna_rx_mod_cb_rx_stopped_all;
-	rx_mod->rx_stop_wc.wc_cbarg = rx_mod;
-	rx_mod->rx_stop_wc.wc_count = 0;
 }
 
 void
@@ -2679,10 +2204,57 @@ bna_rx_mod_uninit(struct bna_rx_mod *rx_mod)
 	rx_mod->bna = NULL;
 }
 
-int
-bna_rx_state_get(struct bna_rx *rx)
+void
+bna_bfi_rx_enet_start_rsp(struct bna_rx *rx, struct bfi_msgq_mhdr *msghdr)
 {
-	return bfa_sm_to_state(rx_sm_table, rx->fsm);
+	struct bfi_enet_rx_cfg_rsp *cfg_rsp = &rx->bfi_enet_cmd.cfg_rsp;
+	struct bna_rxp *rxp = NULL;
+	struct bna_rxq *q0 = NULL, *q1 = NULL;
+	struct list_head *rxp_qe;
+	int i;
+
+	bfa_msgq_rsp_copy(&rx->bna->msgq, (u8 *)cfg_rsp,
+		sizeof(struct bfi_enet_rx_cfg_rsp));
+
+	rx->hw_id = cfg_rsp->hw_id;
+
+	for (i = 0, rxp_qe = bfa_q_first(&rx->rxp_q);
+		i < rx->num_paths;
+		i++, rxp_qe = bfa_q_next(rxp_qe)) {
+		rxp = (struct bna_rxp *)rxp_qe;
+		GET_RXQS(rxp, q0, q1);
+
+		/* Setup doorbells */
+		rxp->cq.ccb->i_dbell->doorbell_addr =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].i_dbell);
+		rxp->hw_id = cfg_rsp->q_handles[i].hw_cqid;
+		q0->rcb->q_dbell =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].ql_dbell);
+		q0->hw_id = cfg_rsp->q_handles[i].hw_lqid;
+		if (q1) {
+			q1->rcb->q_dbell =
+			rx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].qs_dbell);
+			q1->hw_id = cfg_rsp->q_handles[i].hw_sqid;
+		}
+
+		/* Initialize producer/consumer indexes */
+		(*rxp->cq.ccb->hw_producer_index) = 0;
+		rxp->cq.ccb->producer_index = 0;
+		q0->rcb->producer_index = q0->rcb->consumer_index = 0;
+		if (q1)
+			q1->rcb->producer_index = q1->rcb->consumer_index = 0;
+	}
+
+	bfa_fsm_send_event(rx, RX_E_STARTED);
+}
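
bna_bfi_rx_enet_start_rsp() above converts the firmware's big-endian doorbell offsets into CPU pointers by adding them to the mapped PCI BAR. A short standalone sketch of that offset-to-pointer step (illustrative; the real BAR mapping comes from ioremap() and is elided here):

#include <stdio.h>
#include <arpa/inet.h>	/* htonl()/ntohl() */

int main(void)
{
	unsigned char bar[4096];		/* stand-in for pci_bar_kva */
	unsigned int wire_off = htonl(0x80);	/* offset as the FW sends it */
	unsigned char *dbell = bar + ntohl(wire_off);

	printf("doorbell at BAR offset %ld\n", (long)(dbell - bar));
	return 0;
}
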
+
+void
+bna_bfi_rx_enet_stop_rsp(struct bna_rx *rx, struct bfi_msgq_mhdr *msghdr)
+{
+	bfa_fsm_send_event(rx, RX_E_STOPPED);
 }
 
 void
@@ -2714,88 +2286,87 @@ bna_rx_res_req(struct bna_rx_config *q_cfg, struct bna_res_info *res_info)
 		hq_size = hq_depth * BFI_RXQ_WI_SIZE;
 		hq_size = ALIGN(hq_size, PAGE_SIZE);
 		hpage_count = SIZE_TO_PAGES(hq_size);
-	} else {
+	} else
 		hpage_count = 0;
-	}
 
-	/* CCB structures */
 	res_info[BNA_RX_RES_MEM_T_CCB].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_CCB].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_KVA;
 	mem_info->len = sizeof(struct bna_ccb);
 	mem_info->num = q_cfg->num_paths;
 
-	/* RCB structures */
 	res_info[BNA_RX_RES_MEM_T_RCB].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_RCB].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_KVA;
 	mem_info->len = sizeof(struct bna_rcb);
 	mem_info->num = BNA_GET_RXQS(q_cfg);
 
-	/* Completion QPT */
 	res_info[BNA_RX_RES_MEM_T_CQPT].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_CQPT].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_DMA;
 	mem_info->len = cpage_count * sizeof(struct bna_dma_addr);
 	mem_info->num = q_cfg->num_paths;
 
-	/* Completion s/w QPT */
 	res_info[BNA_RX_RES_MEM_T_CSWQPT].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_CSWQPT].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_KVA;
 	mem_info->len = cpage_count * sizeof(void *);
 	mem_info->num = q_cfg->num_paths;
 
-	/* Completion QPT pages */
 	res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_DMA;
 	mem_info->len = PAGE_SIZE;
 	mem_info->num = cpage_count * q_cfg->num_paths;
 
-	/* Data QPTs */
 	res_info[BNA_RX_RES_MEM_T_DQPT].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_DQPT].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_DMA;
 	mem_info->len = dpage_count * sizeof(struct bna_dma_addr);
 	mem_info->num = q_cfg->num_paths;
 
-	/* Data s/w QPTs */
 	res_info[BNA_RX_RES_MEM_T_DSWQPT].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_DSWQPT].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_KVA;
 	mem_info->len = dpage_count * sizeof(void *);
 	mem_info->num = q_cfg->num_paths;
 
-	/* Data QPT pages */
 	res_info[BNA_RX_RES_MEM_T_DPAGE].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_DMA;
 	mem_info->len = PAGE_SIZE;
 	mem_info->num = dpage_count * q_cfg->num_paths;
 
-	/* Hdr QPTs */
 	res_info[BNA_RX_RES_MEM_T_HQPT].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_HQPT].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_DMA;
 	mem_info->len = hpage_count * sizeof(struct bna_dma_addr);
 	mem_info->num = (hpage_count ? q_cfg->num_paths : 0);
 
-	/* Hdr s/w QPTs */
 	res_info[BNA_RX_RES_MEM_T_HSWQPT].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_HSWQPT].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_KVA;
 	mem_info->len = hpage_count * sizeof(void *);
 	mem_info->num = (hpage_count ? q_cfg->num_paths : 0);
 
-	/* Hdr QPT pages */
 	res_info[BNA_RX_RES_MEM_T_HPAGE].res_type = BNA_RES_T_MEM;
 	mem_info = &res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info;
 	mem_info->mem_type = BNA_MEM_T_DMA;
 	mem_info->len = (hpage_count ? PAGE_SIZE : 0);
 	mem_info->num = (hpage_count ? (hpage_count * q_cfg->num_paths) : 0);
 
-	/* RX Interrupts */
+	res_info[BNA_RX_RES_MEM_T_IBIDX].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = BFI_IBIDX_SIZE;
+	mem_info->num = q_cfg->num_paths;
+
+	res_info[BNA_RX_RES_MEM_T_RIT].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_RX_RES_MEM_T_RIT].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_KVA;
+	mem_info->len = BFI_ENET_RSS_RIT_MAX;
+	mem_info->num = 1;
+
 	res_info[BNA_RX_RES_T_INTR].res_type = BNA_RES_T_INTR;
 	res_info[BNA_RX_RES_T_INTR].res_u.intr_info.intr_type = BNA_INTR_T_MSIX;
 	res_info[BNA_RX_RES_T_INTR].res_u.intr_info.num = q_cfg->num_paths;
@@ -2821,20 +2392,18 @@ bna_rx_create(struct bna *bna, struct bnad *bnad,
 	struct bna_mem_descr *cqpt_mem;
 	struct bna_mem_descr *cswqpt_mem;
 	struct bna_mem_descr *cpage_mem;
-	struct bna_mem_descr *hqpt_mem;	/* Header/Small Q qpt */
-	struct bna_mem_descr *dqpt_mem;	/* Data/Large Q qpt */
-	struct bna_mem_descr *hsqpt_mem;	/* s/w qpt for hdr */
-	struct bna_mem_descr *dsqpt_mem;	/* s/w qpt for data */
-	struct bna_mem_descr *hpage_mem;	/* hdr page mem */
-	struct bna_mem_descr *dpage_mem;	/* data page mem */
+	struct bna_mem_descr *hqpt_mem;
+	struct bna_mem_descr *dqpt_mem;
+	struct bna_mem_descr *hsqpt_mem;
+	struct bna_mem_descr *dsqpt_mem;
+	struct bna_mem_descr *hpage_mem;
+	struct bna_mem_descr *dpage_mem;
 	int i, cpage_idx = 0, dpage_idx = 0, hpage_idx = 0;
 	int dpage_count, hpage_count, rcb_idx;
-	struct bna_ib_config ibcfg;
-	/* Fail if we don't have enough RXPs, RXQs */
-	if (!_rx_can_satisfy(rx_mod, rx_cfg))
+
+	if (!bna_rx_res_check(rx_mod, rx_cfg))
 		return NULL;
 
-	/* Initialize resource pointers */
 	intr_info = &res_info[BNA_RX_RES_T_INTR].res_u.intr_info;
 	ccb_mem = &res_info[BNA_RX_RES_MEM_T_CCB].res_u.mem_info.mdl[0];
 	rcb_mem = &res_info[BNA_RX_RES_MEM_T_RCB].res_u.mem_info.mdl[0];
@@ -2849,7 +2418,6 @@ bna_rx_create(struct bna *bna, struct bnad *bnad,
 	hpage_mem = &res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info.mdl[0];
 	dpage_mem = &res_info[BNA_RX_RES_MEM_T_DPAGE].res_u.mem_info.mdl[0];
 
-	/* Compute q depth & page count */
 	page_count = res_info[BNA_RX_RES_MEM_T_CQPT_PAGE].res_u.mem_info.num /
 			rx_cfg->num_paths;
 
@@ -2858,11 +2426,14 @@ bna_rx_create(struct bna *bna, struct bnad *bnad,
 
 	hpage_count = res_info[BNA_RX_RES_MEM_T_HPAGE].res_u.mem_info.num /
 			rx_cfg->num_paths;
-	/* Get RX pointer */
-	rx = _get_free_rx(rx_mod);
-	_rx_init(rx, bna);
+
+	rx = bna_rx_get(rx_mod, rx_cfg->rx_type);
+	rx->bna = bna;
+	rx->rx_flags = 0;
+	INIT_LIST_HEAD(&rx->rxp_q);
+	rx->stop_cbfn = NULL;
+	rx->stop_cbarg = NULL;
 	rx->priv = priv;
-	rx->type = rx_cfg->rx_type;
 
 	rx->rcb_setup_cbfn = rx_cbfn->rcb_setup_cbfn;
 	rx->rcb_destroy_cbfn = rx_cbfn->rcb_destroy_cbfn;
@@ -2872,152 +2443,155 @@ bna_rx_create(struct bna *bna, struct bnad *bnad,
 	rx->rx_cleanup_cbfn = rx_cbfn->rx_cleanup_cbfn;
 	rx->rx_post_cbfn = rx_cbfn->rx_post_cbfn;
 
-	if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_PORT_STARTED) {
+	if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_ENET_STARTED) {
 		switch (rx->type) {
 		case BNA_RX_T_REGULAR:
 			if (!(rx->bna->rx_mod.flags &
-				BNA_RX_MOD_F_PORT_LOOPBACK))
-				rx->rx_flags |= BNA_RX_F_PORT_ENABLED;
+				BNA_RX_MOD_F_ENET_LOOPBACK))
+				rx->rx_flags |= BNA_RX_F_ENET_STARTED;
 			break;
 		case BNA_RX_T_LOOPBACK:
-			if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_PORT_LOOPBACK)
-				rx->rx_flags |= BNA_RX_F_PORT_ENABLED;
+			if (rx->bna->rx_mod.flags & BNA_RX_MOD_F_ENET_LOOPBACK)
+				rx->rx_flags |= BNA_RX_F_ENET_STARTED;
 			break;
 		}
 	}
 
-	for (i = 0, rcb_idx = 0; i < rx_cfg->num_paths; i++) {
-		rxp = _get_free_rxp(rx_mod);
+	rx->num_paths = rx_cfg->num_paths;
+	for (i = 0, rcb_idx = 0; i < rx->num_paths; i++) {
+		rxp = bna_rxp_get(rx_mod);
+		list_add_tail(&rxp->qe, &rx->rxp_q);
 		rxp->type = rx_cfg->rxp_type;
 		rxp->rx = rx;
 		rxp->cq.rx = rx;
 
-		/* Get required RXQs, and queue them to rx-path */
-		q0 = _get_free_rxq(rx_mod);
+		q0 = bna_rxq_get(rx_mod);
 		if (BNA_RXP_SINGLE == rx_cfg->rxp_type)
 			q1 = NULL;
 		else
-			q1 = _get_free_rxq(rx_mod);
+			q1 = bna_rxq_get(rx_mod);
 
-		/* Initialize IB */
-		if (1 == intr_info->num) {
-			rxp->cq.ib = bna_ib_get(&bna->ib_mod,
-					intr_info->intr_type,
-					intr_info->idl[0].vector);
+		if (1 == intr_info->num)
 			rxp->vector = intr_info->idl[0].vector;
-		} else {
-			rxp->cq.ib = bna_ib_get(&bna->ib_mod,
-					intr_info->intr_type,
-					intr_info->idl[i].vector);
-
-			/* Map the MSI-x vector used for this RXP */
+		else
 			rxp->vector = intr_info->idl[i].vector;
-		}
-
-		rxp->cq.ib_seg_offset = bna_ib_reserve_idx(rxp->cq.ib);
-
-		ibcfg.coalescing_timeo = BFI_RX_COALESCING_TIMEO;
-		ibcfg.interpkt_count = BFI_RX_INTERPKT_COUNT;
-		ibcfg.interpkt_timeo = BFI_RX_INTERPKT_TIMEO;
-		ibcfg.ctrl_flags = BFI_IB_CF_INT_ENABLE;
 
-		bna_ib_config(rxp->cq.ib, &ibcfg);
+		/* Setup IB */
+
+		rxp->cq.ib.ib_seg_host_addr.lsb =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.lsb;
+		rxp->cq.ib.ib_seg_host_addr.msb =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.msb;
+		rxp->cq.ib.ib_seg_host_addr_kva =
+		res_info[BNA_RX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].kva;
+		rxp->cq.ib.intr_type = intr_info->intr_type;
+		if (intr_info->intr_type == BNA_INTR_T_MSIX)
+			rxp->cq.ib.intr_vector = rxp->vector;
+		else
+			rxp->cq.ib.intr_vector = (1 << rxp->vector);
+		rxp->cq.ib.coalescing_timeo = rx_cfg->coalescing_timeo;
+		rxp->cq.ib.interpkt_count = BFI_RX_INTERPKT_COUNT;
+		rxp->cq.ib.interpkt_timeo = BFI_RX_INTERPKT_TIMEO;
 
-		/* Link rxqs to rxp */
-		_rxp_add_rxqs(rxp, q0, q1);
+		bna_rxp_add_rxqs(rxp, q0, q1);
 
-		/* Link rxp to rx */
-		_rx_add_rxp(rx, rxp);
+		/* Setup large Q */
 
 		q0->rx = rx;
 		q0->rxp = rxp;
 
-		/* Initialize RCB for the large / data q */
 		q0->rcb = (struct bna_rcb *) rcb_mem[rcb_idx].kva;
-		RXQ_RCB_INIT(q0, rxp, rx_cfg->q_depth, bna, 0,
-			(void *)unmapq_mem[rcb_idx].kva);
+		q0->rcb->unmap_q = (void *)unmapq_mem[rcb_idx].kva;
 		rcb_idx++;
-		(q0)->rx_packets = (q0)->rx_bytes = 0;
-		(q0)->rx_packets_with_error = (q0)->rxbuf_alloc_failed = 0;
-
-		/* Initialize RXQs */
-		_rxq_qpt_init(q0, rxp, dpage_count, PAGE_SIZE,
+		q0->rcb->q_depth = rx_cfg->q_depth;
+		q0->rcb->rxq = q0;
+		q0->rcb->bnad = bna->bnad;
+		q0->rcb->id = 0;
+		q0->rx_packets = q0->rx_bytes = 0;
+		q0->rx_packets_with_error = q0->rxbuf_alloc_failed = 0;
+
+		bna_rxq_qpt_setup(q0, rxp, dpage_count, PAGE_SIZE,
 			&dqpt_mem[i], &dsqpt_mem[i], &dpage_mem[dpage_idx]);
 		q0->rcb->page_idx = dpage_idx;
 		q0->rcb->page_count = dpage_count;
 		dpage_idx += dpage_count;
 
-		/* Call bnad to complete rcb setup */
 		if (rx->rcb_setup_cbfn)
 			rx->rcb_setup_cbfn(bnad, q0->rcb);
 
+		/* Setup small Q */
+
 		if (q1) {
 			q1->rx = rx;
 			q1->rxp = rxp;
 
 			q1->rcb = (struct bna_rcb *) rcb_mem[rcb_idx].kva;
-			RXQ_RCB_INIT(q1, rxp, rx_cfg->q_depth, bna, 1,
-				(void *)unmapq_mem[rcb_idx].kva);
+			q1->rcb->unmap_q = (void *)unmapq_mem[rcb_idx].kva;
 			rcb_idx++;
-			(q1)->buffer_size = (rx_cfg)->small_buff_size;
-			(q1)->rx_packets = (q1)->rx_bytes = 0;
-			(q1)->rx_packets_with_error =
-				(q1)->rxbuf_alloc_failed = 0;
-
-			_rxq_qpt_init(q1, rxp, hpage_count, PAGE_SIZE,
+			q1->rcb->q_depth = rx_cfg->q_depth;
+			q1->rcb->rxq = q1;
+			q1->rcb->bnad = bna->bnad;
+			q1->rcb->id = 1;
+			q1->buffer_size = (rx_cfg->rxp_type == BNA_RXP_HDS) ?
+					rx_cfg->hds_config.forced_offset
+					: rx_cfg->small_buff_size;
+			q1->rx_packets = q1->rx_bytes = 0;
+			q1->rx_packets_with_error = q1->rxbuf_alloc_failed = 0;
+
+			bna_rxq_qpt_setup(q1, rxp, hpage_count, PAGE_SIZE,
 				&hqpt_mem[i], &hsqpt_mem[i],
 				&hpage_mem[hpage_idx]);
 			q1->rcb->page_idx = hpage_idx;
 			q1->rcb->page_count = hpage_count;
 			hpage_idx += hpage_count;
 
-			/* Call bnad to complete rcb setup */
 			if (rx->rcb_setup_cbfn)
 				rx->rcb_setup_cbfn(bnad, q1->rcb);
 		}
-		/* Setup RXP::CQ */
-		rxp->cq.ccb = (struct bna_ccb *) ccb_mem[i].kva;
-		_rxp_cqpt_setup(rxp, page_count, PAGE_SIZE,
-			&cqpt_mem[i], &cswqpt_mem[i], &cpage_mem[cpage_idx]);
-		rxp->cq.ccb->page_idx = cpage_idx;
-		rxp->cq.ccb->page_count = page_count;
-		cpage_idx += page_count;
 
-		rxp->cq.ccb->pkt_rate.small_pkt_cnt = 0;
-		rxp->cq.ccb->pkt_rate.large_pkt_cnt = 0;
+		/* Setup CQ */
 
-		rxp->cq.ccb->producer_index = 0;
+		rxp->cq.ccb = (struct bna_ccb *) ccb_mem[i].kva;
 		rxp->cq.ccb->q_depth =	rx_cfg->q_depth +
 					((rx_cfg->rxp_type == BNA_RXP_SINGLE) ?
 					0 : rx_cfg->q_depth);
-		rxp->cq.ccb->i_dbell = &rxp->cq.ib->door_bell;
+		rxp->cq.ccb->cq = &rxp->cq;
 		rxp->cq.ccb->rcb[0] = q0->rcb;
-		if (q1)
+		q0->rcb->ccb = rxp->cq.ccb;
+		if (q1) {
 			rxp->cq.ccb->rcb[1] = q1->rcb;
-		rxp->cq.ccb->cq = &rxp->cq;
-		rxp->cq.ccb->bnad = bna->bnad;
+			q1->rcb->ccb = rxp->cq.ccb;
+		}
 		rxp->cq.ccb->hw_producer_index =
-			((volatile u32 *)rxp->cq.ib->ib_seg_host_addr_kva +
-				      (rxp->cq.ib_seg_offset * BFI_IBIDX_SIZE));
-		*(rxp->cq.ccb->hw_producer_index) = 0;
-		rxp->cq.ccb->intr_type = intr_info->intr_type;
-		rxp->cq.ccb->intr_vector = (intr_info->num == 1) ?
-						intr_info->idl[0].vector :
-						intr_info->idl[i].vector;
+			(u32 *)rxp->cq.ib.ib_seg_host_addr_kva;
+		rxp->cq.ccb->i_dbell = &rxp->cq.ib.door_bell;
+		rxp->cq.ccb->intr_type = rxp->cq.ib.intr_type;
+		rxp->cq.ccb->intr_vector = rxp->cq.ib.intr_vector;
 		rxp->cq.ccb->rx_coalescing_timeo =
-					rxp->cq.ib->ib_config.coalescing_timeo;
+			rxp->cq.ib.coalescing_timeo;
+		rxp->cq.ccb->pkt_rate.small_pkt_cnt = 0;
+		rxp->cq.ccb->pkt_rate.large_pkt_cnt = 0;
+		rxp->cq.ccb->bnad = bna->bnad;
 		rxp->cq.ccb->id = i;
 
-		/* Call bnad to complete CCB setup */
+		bna_rxp_cqpt_setup(rxp, page_count, PAGE_SIZE,
+			&cqpt_mem[i], &cswqpt_mem[i], &cpage_mem[cpage_idx]);
+		rxp->cq.ccb->page_idx = cpage_idx;
+		rxp->cq.ccb->page_count = page_count;
+		cpage_idx += page_count;
+
 		if (rx->ccb_setup_cbfn)
 			rx->ccb_setup_cbfn(bnad, rxp->cq.ccb);
+	}
 
-	} /* for each rx-path */
+	rx->hds_cfg = rx_cfg->hds_config;
 
-	bna_rxf_init(&rx->rxf, rx, rx_cfg);
+	bna_rxf_init(&rx->rxf, rx, rx_cfg, res_info);
 
 	bfa_fsm_set_state(rx, bna_rx_sm_stopped);
 
+	rx_mod->rid_mask |= (1 << rx->rid);
+
 	return rx;
 }
 
@@ -3025,7 +2599,6 @@ void
 bna_rx_destroy(struct bna_rx *rx)
 {
 	struct bna_rx_mod *rx_mod = &rx->bna->rx_mod;
-	struct bna_ib_mod *ib_mod = &rx->bna->ib_mod;
 	struct bna_rxq *q0 = NULL;
 	struct bna_rxq *q1 = NULL;
 	struct bna_rxp *rxp;
@@ -3036,37 +2609,29 @@ bna_rx_destroy(struct bna_rx *rx)
 	while (!list_empty(&rx->rxp_q)) {
 		bfa_q_deq(&rx->rxp_q, &rxp);
 		GET_RXQS(rxp, q0, q1);
-		/* Callback to bnad for destroying RCB */
 		if (rx->rcb_destroy_cbfn)
 			rx->rcb_destroy_cbfn(rx->bna->bnad, q0->rcb);
 		q0->rcb = NULL;
 		q0->rxp = NULL;
 		q0->rx = NULL;
-		_put_free_rxq(rx_mod, q0);
+		bna_rxq_put(rx_mod, q0);
+
 		if (q1) {
-			/* Callback to bnad for destroying RCB */
 			if (rx->rcb_destroy_cbfn)
 				rx->rcb_destroy_cbfn(rx->bna->bnad, q1->rcb);
 			q1->rcb = NULL;
 			q1->rxp = NULL;
 			q1->rx = NULL;
-			_put_free_rxq(rx_mod, q1);
+			bna_rxq_put(rx_mod, q1);
 		}
 		rxp->rxq.slr.large = NULL;
 		rxp->rxq.slr.small = NULL;
-		if (rxp->cq.ib) {
-			if (rxp->cq.ib_seg_offset != 0xff)
-				bna_ib_release_idx(rxp->cq.ib,
-						rxp->cq.ib_seg_offset);
-			bna_ib_put(ib_mod, rxp->cq.ib);
-			rxp->cq.ib = NULL;
-		}
-		/* Callback to bnad for destroying CCB */
+
 		if (rx->ccb_destroy_cbfn)
 			rx->ccb_destroy_cbfn(rx->bna->bnad, rxp->cq.ccb);
 		rxp->cq.ccb = NULL;
 		rxp->rx = NULL;
-		_put_free_rxp(rx_mod, rxp);
+		bna_rxp_put(rx_mod, rxp);
 	}
 
 	list_for_each(qe, &rx_mod->rx_active_q) {
@@ -3077,9 +2642,11 @@ bna_rx_destroy(struct bna_rx *rx)
 		}
 	}
 
+	rx_mod->rid_mask &= ~(1 << rx->rid);
+
 	rx->bna = NULL;
 	rx->priv = NULL;
-	_put_free_rx(rx_mod, rx);
+	bna_rx_put(rx_mod, rx);
 }
 
 void
@@ -3088,103 +2655,260 @@ bna_rx_enable(struct bna_rx *rx)
 	if (rx->fsm != (bfa_sm_t)bna_rx_sm_stopped)
 		return;
 
-	rx->rx_flags |= BNA_RX_F_ENABLE;
-	if (rx->rx_flags & BNA_RX_F_PORT_ENABLED)
+	rx->rx_flags |= BNA_RX_F_ENABLED;
+	if (rx->rx_flags & BNA_RX_F_ENET_STARTED)
 		bfa_fsm_send_event(rx, RX_E_START);
 }
 
 void
 bna_rx_disable(struct bna_rx *rx, enum bna_cleanup_type type,
-		void (*cbfn)(void *, struct bna_rx *,
-				enum bna_cb_status))
+		void (*cbfn)(void *, struct bna_rx *))
 {
 	if (type == BNA_SOFT_CLEANUP) {
 		/* h/w should not be accessed. Treat it as if we're stopped */
-		(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
+		(*cbfn)(rx->bna->bnad, rx);
 	} else {
 		rx->stop_cbfn = cbfn;
 		rx->stop_cbarg = rx->bna->bnad;
 
-		rx->rx_flags &= ~BNA_RX_F_ENABLE;
+		rx->rx_flags &= ~BNA_RX_F_ENABLED;
 
 		bfa_fsm_send_event(rx, RX_E_STOP);
 	}
 }
 
+void
+bna_rx_cleanup_complete(struct bna_rx *rx)
+{
+	bfa_fsm_send_event(rx, RX_E_CLEANUP_DONE);
+}
+
+enum bna_cb_status
+bna_rx_mode_set(struct bna_rx *rx, enum bna_rxmode new_mode,
+		enum bna_rxmode bitmask,
+		void (*cbfn)(struct bnad *, struct bna_rx *))
+{
+	struct bna_rxf *rxf = &rx->rxf;
+	int need_hw_config = 0;
+
+	/* Error checks */
+
+	if (is_promisc_enable(new_mode, bitmask)) {
+		/* If promisc mode is already enabled elsewhere in the system */
+		if ((rx->bna->promisc_rid != BFI_INVALID_RID) &&
+			(rx->bna->promisc_rid != rxf->rx->rid))
+			goto err_return;
+
+		/* If default mode is already enabled in the system */
+		if (rx->bna->default_mode_rid != BFI_INVALID_RID)
+			goto err_return;
+
+		/* Trying to enable promiscuous and default mode together */
+		if (is_default_enable(new_mode, bitmask))
+			goto err_return;
+	}
+
+	if (is_default_enable(new_mode, bitmask)) {
+		/* If default mode is already enabled elsewhere in the system */
+		if ((rx->bna->default_mode_rid != BFI_INVALID_RID) &&
+			(rx->bna->default_mode_rid != rxf->rx->rid))
+			goto err_return;
+
+		/* If promiscuous mode is already enabled in the system */
+		if (rx->bna->promisc_rid != BFI_INVALID_RID)
+			goto err_return;
+	}
+
+	/* Process the commands */
+
+	if (is_promisc_enable(new_mode, bitmask)) {
+		if (bna_rxf_promisc_enable(rxf))
+			need_hw_config = 1;
+	} else if (is_promisc_disable(new_mode, bitmask)) {
+		if (bna_rxf_promisc_disable(rxf))
+			need_hw_config = 1;
+	}
+
+	if (is_allmulti_enable(new_mode, bitmask)) {
+		if (bna_rxf_allmulti_enable(rxf))
+			need_hw_config = 1;
+	} else if (is_allmulti_disable(new_mode, bitmask)) {
+		if (bna_rxf_allmulti_disable(rxf))
+			need_hw_config = 1;
+	}
+
+	/* Trigger h/w if needed */
+
+	if (need_hw_config) {
+		rxf->cam_fltr_cbfn = cbfn;
+		rxf->cam_fltr_cbarg = rx->bna->bnad;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	} else if (cbfn)
+		(*cbfn)(rx->bna->bnad, rx);
+
+	return BNA_CB_SUCCESS;
+
+err_return:
+	return BNA_CB_FAIL;
+}
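
In bna_rx_mode_set() above, 'bitmask' selects which rxmode bits the caller wants to touch and 'new_mode' carries their desired values, so unselected modes are left untouched; the promisc/default checks then enforce that only one Rx in the system owns each of those modes. A small standalone sketch of the two-mask convention (the is_*_enable()/is_*_disable() helpers are assumed to test exactly this):

#include <stdio.h>

enum rxmode { PROMISC = 1, ALLMULTI = 2, DEFAULT = 4 };

/* a bit is being enabled if selected by the bitmask and set in mode */
static int is_enable(int mode, int bitmask, int bit)
{
	return (bitmask & bit) && (mode & bit);
}

/* ...and disabled if selected by the bitmask but clear in mode */
static int is_disable(int mode, int bitmask, int bit)
{
	return (bitmask & bit) && !(mode & bit);
}

int main(void)
{
	/* turn promisc off and allmulti on in a single call */
	int bitmask = PROMISC | ALLMULTI;
	int mode = ALLMULTI;

	printf("promisc disable: %d\n", is_disable(mode, bitmask, PROMISC));
	printf("allmulti enable: %d\n", is_enable(mode, bitmask, ALLMULTI));
	return 0;
}
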
+
+void
+bna_rx_vlanfilter_enable(struct bna_rx *rx)
+{
+	struct bna_rxf *rxf = &rx->rxf;
+
+	if (rxf->vlan_filter_status == BNA_STATUS_T_DISABLED) {
+		rxf->vlan_filter_status = BNA_STATUS_T_ENABLED;
+		rxf->vlan_pending_bitmask = (u8)BFI_VLAN_BMASK_ALL;
+		bfa_fsm_send_event(rxf, RXF_E_CONFIG);
+	}
+}
+
+void
+bna_rx_coalescing_timeo_set(struct bna_rx *rx, int coalescing_timeo)
+{
+	struct bna_rxp *rxp;
+	struct list_head *qe;
+
+	list_for_each(qe, &rx->rxp_q) {
+		rxp = (struct bna_rxp *)qe;
+		rxp->cq.ccb->rx_coalescing_timeo = coalescing_timeo;
+		bna_ib_coalescing_timeo_set(&rxp->cq.ib, coalescing_timeo);
+	}
+}
+
+void
+bna_rx_dim_reconfig(struct bna *bna, u32 vector[][BNA_BIAS_T_MAX])
+{
+	int i, j;
+
+	for (i = 0; i < BNA_LOAD_T_MAX; i++)
+		for (j = 0; j < BNA_BIAS_T_MAX; j++)
+			bna->rx_mod.dim_vector[i][j] = vector[i][j];
+}
+
+void
+bna_rx_dim_update(struct bna_ccb *ccb)
+{
+	struct bna *bna = ccb->cq->rx->bna;
+	u32 load, bias;
+	u32 pkt_rt, small_rt, large_rt;
+	u8 coalescing_timeo;
+
+	if ((ccb->pkt_rate.small_pkt_cnt == 0) &&
+		(ccb->pkt_rate.large_pkt_cnt == 0))
+		return;
+
+	/* Arrive at preconfigured coalescing timeo value based on pkt rate */
+
+	small_rt = ccb->pkt_rate.small_pkt_cnt;
+	large_rt = ccb->pkt_rate.large_pkt_cnt;
+
+	pkt_rt = small_rt + large_rt;
+
+	if (pkt_rt < BNA_PKT_RATE_10K)
+		load = BNA_LOAD_T_LOW_4;
+	else if (pkt_rt < BNA_PKT_RATE_20K)
+		load = BNA_LOAD_T_LOW_3;
+	else if (pkt_rt < BNA_PKT_RATE_30K)
+		load = BNA_LOAD_T_LOW_2;
+	else if (pkt_rt < BNA_PKT_RATE_40K)
+		load = BNA_LOAD_T_LOW_1;
+	else if (pkt_rt < BNA_PKT_RATE_50K)
+		load = BNA_LOAD_T_HIGH_1;
+	else if (pkt_rt < BNA_PKT_RATE_60K)
+		load = BNA_LOAD_T_HIGH_2;
+	else if (pkt_rt < BNA_PKT_RATE_80K)
+		load = BNA_LOAD_T_HIGH_3;
+	else
+		load = BNA_LOAD_T_HIGH_4;
+
+	if (small_rt > (large_rt << 1))
+		bias = 0;
+	else
+		bias = 1;
+
+	ccb->pkt_rate.small_pkt_cnt = 0;
+	ccb->pkt_rate.large_pkt_cnt = 0;
+
+	coalescing_timeo = bna->rx_mod.dim_vector[load][bias];
+	ccb->rx_coalescing_timeo = coalescing_timeo;
+
+	/* Set it to IB */
+	bna_ib_coalescing_timeo_set(&ccb->cq->ib, coalescing_timeo);
+}
+
+u32 bna_napi_dim_vector[BNA_LOAD_T_MAX][BNA_BIAS_T_MAX] = {
+	{12, 12},
+	{6, 10},
+	{5, 10},
+	{4, 8},
+	{3, 6},
+	{3, 6},
+	{2, 4},
+	{1, 2},
+};
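
bna_rx_dim_update() above buckets the combined small+large packet rate into one of eight load levels, picks a small- vs large-packet bias, and reads the new coalescing timeout from this 8x2 table. A standalone sketch of the lookup; the 10K-80K thresholds and the assumption that row 0 corresponds to the highest load (i.e. BNA_LOAD_T_HIGH_4 == 0) are inferred from how the table is indexed, so treat them as illustrative:

#include <stdio.h>

/* assumed BNA_PKT_RATE_* threshold values */
enum { R10K = 10000, R20K = 20000, R30K = 30000, R40K = 40000,
       R50K = 50000, R60K = 60000, R80K = 80000 };

/* same values as bna_napi_dim_vector; row 0 assumed highest load */
static const unsigned int dim[8][2] = {
	{12, 12}, {6, 10}, {5, 10}, {4, 8},
	{3, 6},   {3, 6},  {2, 4},  {1, 2},
};

static unsigned int dim_timeo(unsigned int small_rt, unsigned int large_rt)
{
	unsigned int rt = small_rt + large_rt;
	unsigned int load, bias;

	if (rt >= R80K)		load = 0;
	else if (rt >= R60K)	load = 1;
	else if (rt >= R50K)	load = 2;
	else if (rt >= R40K)	load = 3;
	else if (rt >= R30K)	load = 4;
	else if (rt >= R20K)	load = 5;
	else if (rt >= R10K)	load = 6;
	else			load = 7;

	/* mostly small packets -> bias 0, otherwise bias 1 */
	bias = (small_rt > (large_rt << 1)) ? 0 : 1;
	return dim[load][bias];
}

int main(void)
{
	printf("%u\n", dim_timeo(1000, 2000));	 /* light load -> 2  */
	printf("%u\n", dim_timeo(50000, 45000)); /* heavy load -> 12 */
	return 0;
}
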
+
 /**
  * TX
  */
-#define call_tx_stop_cbfn(tx, status)\
-do {\
-	if ((tx)->stop_cbfn)\
-		(tx)->stop_cbfn((tx)->stop_cbarg, (tx), status);\
-	(tx)->stop_cbfn = NULL;\
-	(tx)->stop_cbarg = NULL;\
+#define call_tx_stop_cbfn(tx)						\
+do {									\
+	if ((tx)->stop_cbfn) {						\
+		void (*cbfn)(void *, struct bna_tx *);		\
+		void *cbarg;						\
+		cbfn = (tx)->stop_cbfn;					\
+		cbarg = (tx)->stop_cbarg;				\
+		(tx)->stop_cbfn = NULL;					\
+		(tx)->stop_cbarg = NULL;				\
+		cbfn(cbarg, (tx));					\
+	}								\
 } while (0)
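
The reworked call_tx_stop_cbfn() copies the callback and its argument into locals and clears the stored pointers before invoking, so a callback that immediately re-arms stop_cbfn (e.g. by stopping the Tx again) is neither overwritten nor re-entered. A tiny standalone sketch of that copy-then-clear pattern (illustrative):

#include <stdio.h>

struct tx {
	void (*stop_cbfn)(void *arg, struct tx *tx);
	void *stop_cbarg;
};

/* clear the slot first so the callback may safely re-register itself */
static void call_stop_cbfn(struct tx *tx)
{
	if (tx->stop_cbfn) {
		void (*cbfn)(void *, struct tx *) = tx->stop_cbfn;
		void *cbarg = tx->stop_cbarg;

		tx->stop_cbfn = NULL;
		tx->stop_cbarg = NULL;
		cbfn(cbarg, tx);
	}
}

static void stopped(void *arg, struct tx *tx) { printf("tx stopped\n"); }

int main(void)
{
	struct tx tx = { .stop_cbfn = stopped, .stop_cbarg = NULL };

	call_stop_cbfn(&tx);	/* prints once */
	call_stop_cbfn(&tx);	/* no-op: slot was cleared */
	return 0;
}
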
 
-#define call_tx_prio_change_cbfn(tx, status)\
-do {\
-	if ((tx)->prio_change_cbfn)\
-		(tx)->prio_change_cbfn((tx)->bna->bnad, (tx), status);\
-	(tx)->prio_change_cbfn = NULL;\
+#define call_tx_prio_change_cbfn(tx)					\
+do {									\
+	if ((tx)->prio_change_cbfn) {					\
+		void (*cbfn)(struct bnad *, struct bna_tx *);	\
+		cbfn = (tx)->prio_change_cbfn;				\
+		(tx)->prio_change_cbfn = NULL;				\
+		cbfn((tx)->bna->bnad, (tx));				\
+	}								\
 } while (0)
 
-static void bna_tx_mod_cb_tx_stopped(void *tx_mod, struct bna_tx *tx,
-					enum bna_cb_status status);
-static void bna_tx_cb_txq_stopped(void *arg, int status);
-static void bna_tx_cb_stats_cleared(void *arg, int status);
-static void __bna_tx_stop(struct bna_tx *tx);
-static void __bna_tx_start(struct bna_tx *tx);
-static void __bna_txf_stat_clr(struct bna_tx *tx);
+static void bna_tx_mod_cb_tx_stopped(void *tx_mod, struct bna_tx *tx);
+static void bna_bfi_tx_enet_start(struct bna_tx *tx);
+static void bna_tx_enet_stop(struct bna_tx *tx);
 
 enum bna_tx_event {
 	TX_E_START			= 1,
 	TX_E_STOP			= 2,
 	TX_E_FAIL			= 3,
-	TX_E_TXQ_STOPPED		= 4,
-	TX_E_PRIO_CHANGE		= 5,
-	TX_E_STAT_CLEARED		= 6,
+	TX_E_STARTED			= 4,
+	TX_E_STOPPED			= 5,
+	TX_E_PRIO_CHANGE		= 6,
+	TX_E_CLEANUP_DONE		= 7,
+	TX_E_BW_UPDATE			= 8,
 };
 
-enum bna_tx_state {
-	BNA_TX_STOPPED			= 1,
-	BNA_TX_STARTED			= 2,
-	BNA_TX_TXQ_STOP_WAIT		= 3,
-	BNA_TX_PRIO_STOP_WAIT		= 4,
-	BNA_TX_STAT_CLR_WAIT		= 5,
-};
-
-bfa_fsm_state_decl(bna_tx, stopped, struct bna_tx,
-			enum bna_tx_event);
-bfa_fsm_state_decl(bna_tx, started, struct bna_tx,
-			enum bna_tx_event);
-bfa_fsm_state_decl(bna_tx, txq_stop_wait, struct bna_tx,
+bfa_fsm_state_decl(bna_tx, stopped, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, start_wait, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, started, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, stop_wait, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, cleanup_wait, struct bna_tx,
 			enum bna_tx_event);
 bfa_fsm_state_decl(bna_tx, prio_stop_wait, struct bna_tx,
 			enum bna_tx_event);
-bfa_fsm_state_decl(bna_tx, stat_clr_wait, struct bna_tx,
+bfa_fsm_state_decl(bna_tx, prio_cleanup_wait, struct bna_tx,
+			enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, failed, struct bna_tx, enum bna_tx_event);
+bfa_fsm_state_decl(bna_tx, quiesce_wait, struct bna_tx,
 			enum bna_tx_event);
-
-static struct bfa_sm_table tx_sm_table[] = {
-	{BFA_SM(bna_tx_sm_stopped), BNA_TX_STOPPED},
-	{BFA_SM(bna_tx_sm_started), BNA_TX_STARTED},
-	{BFA_SM(bna_tx_sm_txq_stop_wait), BNA_TX_TXQ_STOP_WAIT},
-	{BFA_SM(bna_tx_sm_prio_stop_wait), BNA_TX_PRIO_STOP_WAIT},
-	{BFA_SM(bna_tx_sm_stat_clr_wait), BNA_TX_STAT_CLR_WAIT},
-};
 
 static void
 bna_tx_sm_stopped_entry(struct bna_tx *tx)
 {
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		(tx->tx_cleanup_cbfn)(tx->bna->bnad, txq->tcb);
-	}
-
-	call_tx_stop_cbfn(tx, BNA_CB_SUCCESS);
+	call_tx_stop_cbfn(tx);
 }
 
 static void
@@ -3192,11 +2916,11 @@ bna_tx_sm_stopped(struct bna_tx *tx, enum bna_tx_event event)
 {
 	switch (event) {
 	case TX_E_START:
-		bfa_fsm_set_state(tx, bna_tx_sm_started);
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
 		break;
 
 	case TX_E_STOP:
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		call_tx_stop_cbfn(tx);
 		break;
 
 	case TX_E_FAIL:
@@ -3204,14 +2928,10 @@ bna_tx_sm_stopped(struct bna_tx *tx, enum bna_tx_event event)
 		break;
 
 	case TX_E_PRIO_CHANGE:
-		call_tx_prio_change_cbfn(tx, BNA_CB_SUCCESS);
+		call_tx_prio_change_cbfn(tx);
 		break;
 
-	case TX_E_TXQ_STOPPED:
-		/**
-		 * This event is received due to flushing of mbox when
-		 * device fails
-		 */
+	case TX_E_BW_UPDATE:
 		/* No-op */
 		break;
 
@@ -3221,42 +2941,81 @@ bna_tx_sm_stopped(struct bna_tx *tx, enum bna_tx_event event)
 }
 
 static void
+bna_tx_sm_start_wait_entry(struct bna_tx *tx)
+{
+	bna_bfi_tx_enet_start(tx);
+}
+
+static void
+bna_tx_sm_start_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_STOP:
+		tx->flags &= ~(BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED);
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
+		break;
+
+	case TX_E_FAIL:
+		tx->flags &= ~(BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED);
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
+
+	case TX_E_STARTED:
+		if (tx->flags & (BNA_TX_F_PRIO_CHANGED | BNA_TX_F_BW_UPDATED)) {
+			tx->flags &= ~(BNA_TX_F_PRIO_CHANGED |
+				BNA_TX_F_BW_UPDATED);
+			bfa_fsm_set_state(tx, bna_tx_sm_prio_stop_wait);
+		} else
+			bfa_fsm_set_state(tx, bna_tx_sm_started);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+		tx->flags |= BNA_TX_F_PRIO_CHANGED;
+		break;
+
+	case TX_E_BW_UPDATE:
+		tx->flags |= BNA_TX_F_BW_UPDATED;
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
 bna_tx_sm_started_entry(struct bna_tx *tx)
 {
 	struct bna_txq *txq;
 	struct list_head		 *qe;
+	int is_regular = (tx->type == BNA_TX_T_REGULAR);
 
-	__bna_tx_start(tx);
-
-	/* Start IB */
 	list_for_each(qe, &tx->txq_q) {
 		txq = (struct bna_txq *)qe;
-		bna_ib_ack(&txq->ib->door_bell, 0);
+		txq->tcb->priority = txq->priority;
+		/* Start IB */
+		bna_ib_start(tx->bna, &txq->ib, is_regular);
 	}
+	tx->tx_resume_cbfn(tx->bna->bnad, tx);
 }
 
 static void
 bna_tx_sm_started(struct bna_tx *tx, enum bna_tx_event event)
 {
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
 	switch (event) {
 	case TX_E_STOP:
-		bfa_fsm_set_state(tx, bna_tx_sm_txq_stop_wait);
-		__bna_tx_stop(tx);
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
+		tx->tx_stall_cbfn(tx->bna->bnad, tx);
+		bna_tx_enet_stop(tx);
 		break;
 
 	case TX_E_FAIL:
-		list_for_each(qe, &tx->txq_q) {
-			txq = (struct bna_txq *)qe;
-			bna_ib_fail(txq->ib);
-			(tx->tx_stall_cbfn)(tx->bna->bnad, txq->tcb);
-		}
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		tx->tx_stall_cbfn(tx->bna->bnad, tx);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
 		break;
 
 	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
 		bfa_fsm_set_state(tx, bna_tx_sm_prio_stop_wait);
 		break;
 
@@ -3266,33 +3025,57 @@ bna_tx_sm_started(struct bna_tx *tx, enum bna_tx_event event)
 }
 
 static void
-bna_tx_sm_txq_stop_wait_entry(struct bna_tx *tx)
+bna_tx_sm_stop_wait_entry(struct bna_tx *tx)
 {
 }
 
 static void
-bna_tx_sm_txq_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
+bna_tx_sm_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
 {
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
 	switch (event) {
 	case TX_E_FAIL:
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+	case TX_E_STOPPED:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
 		break;
 
-	case TX_E_TXQ_STOPPED:
-		list_for_each(qe, &tx->txq_q) {
-			txq = (struct bna_txq *)qe;
-			bna_ib_stop(txq->ib);
-		}
-		bfa_fsm_set_state(tx, bna_tx_sm_stat_clr_wait);
+	case TX_E_STARTED:
+		/**
+		 * We are here due to start_wait -> stop_wait transition on
+		 * TX_E_STOP event
+		 */
+		bna_tx_enet_stop(tx);
 		break;
 
 	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	default:
+		bfa_sm_fault(event);
+	}
+}
+
+static void
+bna_tx_sm_cleanup_wait_entry(struct bna_tx *tx)
+{
+}
+
+static void
+bna_tx_sm_cleanup_wait(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_FAIL:
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
 		/* No-op */
 		break;
 
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
+
 	default:
 		bfa_sm_fault(event);
 	}
@@ -3301,36 +3084,30 @@ bna_tx_sm_txq_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
 static void
 bna_tx_sm_prio_stop_wait_entry(struct bna_tx *tx)
 {
-	__bna_tx_stop(tx);
+	tx->tx_stall_cbfn(tx->bna->bnad, tx);
+	bna_tx_enet_stop(tx);
 }
 
 static void
 bna_tx_sm_prio_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
 {
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
 	switch (event) {
 	case TX_E_STOP:
-		bfa_fsm_set_state(tx, bna_tx_sm_txq_stop_wait);
+		bfa_fsm_set_state(tx, bna_tx_sm_stop_wait);
 		break;
 
 	case TX_E_FAIL:
-		call_tx_prio_change_cbfn(tx, BNA_CB_FAIL);
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		call_tx_prio_change_cbfn(tx);
+		tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
 		break;
 
-	case TX_E_TXQ_STOPPED:
-		list_for_each(qe, &tx->txq_q) {
-			txq = (struct bna_txq *)qe;
-			bna_ib_stop(txq->ib);
-			(tx->tx_cleanup_cbfn)(tx->bna->bnad, txq->tcb);
-		}
-		call_tx_prio_change_cbfn(tx, BNA_CB_SUCCESS);
-		bfa_fsm_set_state(tx, bna_tx_sm_started);
+	case TX_E_STOPPED:
+		bfa_fsm_set_state(tx, bna_tx_sm_prio_cleanup_wait);
 		break;
 
 	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
 		/* No-op */
 		break;
 
@@ -3340,18 +3117,31 @@ bna_tx_sm_prio_stop_wait(struct bna_tx *tx, enum bna_tx_event event)
 }
 
 static void
-bna_tx_sm_stat_clr_wait_entry(struct bna_tx *tx)
+bna_tx_sm_prio_cleanup_wait_entry(struct bna_tx *tx)
 {
-	__bna_txf_stat_clr(tx);
+	call_tx_prio_change_cbfn(tx);
+	tx->tx_cleanup_cbfn(tx->bna->bnad, tx);
 }
 
 static void
-bna_tx_sm_stat_clr_wait(struct bna_tx *tx, enum bna_tx_event event)
+bna_tx_sm_prio_cleanup_wait(struct bna_tx *tx, enum bna_tx_event event)
 {
 	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
+
 	case TX_E_FAIL:
-	case TX_E_STAT_CLEARED:
-		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		break;
+
+	case TX_E_PRIO_CHANGE:
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
+
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
 		break;
 
 	default:
@@ -3360,236 +3150,144 @@ bna_tx_sm_stat_clr_wait(struct bna_tx *tx, enum bna_tx_event event)
 }
 
 static void
-__bna_txq_start(struct bna_tx *tx, struct bna_txq *txq)
-{
-	struct bna_rxtx_q_mem *q_mem;
-	struct bna_txq_mem txq_cfg;
-	struct bna_txq_mem *txq_mem;
-	struct bna_dma_addr cur_q_addr;
-	u32 pg_num;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	/* Fill out structure, to be subsequently written to hardware */
-	txq_cfg.pg_tbl_addr_lo = txq->qpt.hw_qpt_ptr.lsb;
-	txq_cfg.pg_tbl_addr_hi = txq->qpt.hw_qpt_ptr.msb;
-	cur_q_addr = *((struct bna_dma_addr *)(txq->qpt.kv_qpt_ptr));
-	txq_cfg.cur_q_entry_lo = cur_q_addr.lsb;
-	txq_cfg.cur_q_entry_hi = cur_q_addr.msb;
-
-	txq_cfg.pg_cnt_n_prd_ptr = (txq->qpt.page_count << 16) | 0x0;
-
-	txq_cfg.entry_n_pg_size = ((u32)(BFI_TXQ_WI_SIZE >> 2) << 16) |
-			(txq->qpt.page_size >> 2);
-	txq_cfg.int_blk_n_cns_ptr = ((((u32)txq->ib_seg_offset) << 24) |
-			((u32)(txq->ib->ib_id & 0xff) << 16) | 0x0);
-
-	txq_cfg.cns_ptr2_n_q_state = BNA_Q_IDLE_STATE;
-	txq_cfg.nxt_qid_n_fid_n_pri = (((tx->txf.txf_id & 0x3f) << 3) |
-			(txq->priority & 0x7));
-	txq_cfg.wvc_n_cquota_n_rquota =
-			((((u32)BFI_TX_MAX_WRR_QUOTA & 0xfff) << 12) |
-			(BFI_TX_MAX_WRR_QUOTA & 0xfff));
-
-	/* Setup the page and write to H/W */
-
-	pg_num = BNA_GET_PAGE_NUM(HQM0_BLK_PG_NUM + tx->bna->port_num,
-			HQM_RXTX_Q_RAM_BASE_OFFSET);
-	writel(pg_num, tx->bna->regs.page_addr);
-
-	base_addr = BNA_GET_MEM_BASE_ADDR(tx->bna->pcidev.pci_bar_kva,
-					HQM_RXTX_Q_RAM_BASE_OFFSET);
-	q_mem = (struct bna_rxtx_q_mem *)0;
-	txq_mem = &q_mem[txq->txq_id].txq;
-
-	/*
-	 * The following 4 lines, is a hack b'cos the H/W needs to read
-	 * these DMA addresses as little endian
-	 */
-
-	off = (unsigned long)&txq_mem->pg_tbl_addr_lo;
-	writel(htonl(txq_cfg.pg_tbl_addr_lo), base_addr + off);
-
-	off = (unsigned long)&txq_mem->pg_tbl_addr_hi;
-	writel(htonl(txq_cfg.pg_tbl_addr_hi), base_addr + off);
-
-	off = (unsigned long)&txq_mem->cur_q_entry_lo;
-	writel(htonl(txq_cfg.cur_q_entry_lo), base_addr + off);
-
-	off = (unsigned long)&txq_mem->cur_q_entry_hi;
-	writel(htonl(txq_cfg.cur_q_entry_hi), base_addr + off);
-
-	off = (unsigned long)&txq_mem->pg_cnt_n_prd_ptr;
-	writel(txq_cfg.pg_cnt_n_prd_ptr, base_addr + off);
-
-	off = (unsigned long)&txq_mem->entry_n_pg_size;
-	writel(txq_cfg.entry_n_pg_size, base_addr + off);
-
-	off = (unsigned long)&txq_mem->int_blk_n_cns_ptr;
-	writel(txq_cfg.int_blk_n_cns_ptr, base_addr + off);
+bna_tx_sm_failed_entry(struct bna_tx *tx)
+{
+}
 
-	off = (unsigned long)&txq_mem->cns_ptr2_n_q_state;
-	writel(txq_cfg.cns_ptr2_n_q_state, base_addr + off);
+static void
+bna_tx_sm_failed(struct bna_tx *tx, enum bna_tx_event event)
+{
+	switch (event) {
+	case TX_E_START:
+		bfa_fsm_set_state(tx, bna_tx_sm_quiesce_wait);
+		break;
 
-	off = (unsigned long)&txq_mem->nxt_qid_n_fid_n_pri;
-	writel(txq_cfg.nxt_qid_n_fid_n_pri, base_addr + off);
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
 
-	off = (unsigned long)&txq_mem->wvc_n_cquota_n_rquota;
-	writel(txq_cfg.wvc_n_cquota_n_rquota, base_addr + off);
+	case TX_E_FAIL:
+		/* No-op */
+		break;
 
-	txq->tcb->producer_index = 0;
-	txq->tcb->consumer_index = 0;
-	*(txq->tcb->hw_consumer_index) = 0;
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_stopped);
+		break;
 
+	default:
+		bfa_sm_fault(event);
+	}
 }
 
 static void
-__bna_txq_stop(struct bna_tx *tx, struct bna_txq *txq)
+bna_tx_sm_quiesce_wait_entry(struct bna_tx *tx)
 {
-	struct bfi_ll_q_stop_req ll_req;
-	u32 bit_mask[2] = {0, 0};
-	if (txq->txq_id < 32)
-		bit_mask[0] = (u32)1 << txq->txq_id;
-	else
-		bit_mask[1] = (u32)1 << (txq->txq_id - 32);
-
-	memset(&ll_req, 0, sizeof(ll_req));
-	ll_req.mh.msg_class = BFI_MC_LL;
-	ll_req.mh.msg_id = BFI_LL_H2I_TXQ_STOP_REQ;
-	ll_req.mh.mtag.h2i.lpu_id = 0;
-	ll_req.q_id_mask[0] = htonl(bit_mask[0]);
-	ll_req.q_id_mask[1] = htonl(bit_mask[1]);
-
-	bna_mbox_qe_fill(&tx->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_tx_cb_txq_stopped, tx);
-
-	bna_mbox_send(tx->bna, &tx->mbox_qe);
 }
 
 static void
-__bna_txf_start(struct bna_tx *tx)
+bna_tx_sm_quiesce_wait(struct bna_tx *tx, enum bna_tx_event event)
 {
-	struct bna_tx_fndb_ram *tx_fndb;
-	struct bna_txf *txf = &tx->txf;
-	void __iomem *base_addr;
-	unsigned long off;
-
-	writel(BNA_GET_PAGE_NUM(LUT0_MEM_BLK_BASE_PG_NUM +
-			(tx->bna->port_num * 2), TX_FNDB_RAM_BASE_OFFSET),
-			tx->bna->regs.page_addr);
+	switch (event) {
+	case TX_E_STOP:
+		bfa_fsm_set_state(tx, bna_tx_sm_cleanup_wait);
+		break;
 
-	base_addr = BNA_GET_MEM_BASE_ADDR(tx->bna->pcidev.pci_bar_kva,
-					TX_FNDB_RAM_BASE_OFFSET);
+	case TX_E_FAIL:
+		bfa_fsm_set_state(tx, bna_tx_sm_failed);
+		break;
 
-	tx_fndb = (struct bna_tx_fndb_ram *)0;
-	off = (unsigned long)&tx_fndb[txf->txf_id].vlan_n_ctrl_flags;
+	case TX_E_CLEANUP_DONE:
+		bfa_fsm_set_state(tx, bna_tx_sm_start_wait);
+		break;
 
-	writel(((u32)txf->vlan << 16) | txf->ctrl_flags,
-			base_addr + off);
+	case TX_E_BW_UPDATE:
+		/* No-op */
+		break;
 
-	if (tx->txf.txf_id < 32)
-		tx->bna->tx_mod.txf_bmap[0] |= ((u32)1 << tx->txf.txf_id);
-	else
-		tx->bna->tx_mod.txf_bmap[1] |= ((u32)
-						 1 << (tx->txf.txf_id - 32));
+	default:
+		bfa_sm_fault(event);
+	}
 }
 
 static void
-__bna_txf_stop(struct bna_tx *tx)
+bna_bfi_tx_enet_start(struct bna_tx *tx)
 {
-	struct bna_tx_fndb_ram *tx_fndb;
-	u32 page_num;
-	u32 ctl_flags;
-	struct bna_txf *txf = &tx->txf;
-	void __iomem *base_addr;
-	unsigned long off;
+	struct bfi_enet_tx_cfg_req *cfg_req = &tx->bfi_enet_cmd.cfg_req;
+	struct bna_txq *txq = NULL;
+	struct list_head *qe;
+	int i;
 
-	/* retrieve the running txf_flags & turn off enable bit */
-	page_num = BNA_GET_PAGE_NUM(LUT0_MEM_BLK_BASE_PG_NUM +
-			(tx->bna->port_num * 2), TX_FNDB_RAM_BASE_OFFSET);
-	writel(page_num, tx->bna->regs.page_addr);
+	bfi_msgq_mhdr_set(cfg_req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_TX_CFG_SET_REQ, 0, tx->rid);
+	cfg_req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_tx_cfg_req)));
 
-	base_addr = BNA_GET_MEM_BASE_ADDR(tx->bna->pcidev.pci_bar_kva,
-					TX_FNDB_RAM_BASE_OFFSET);
-	tx_fndb = (struct bna_tx_fndb_ram *)0;
-	off = (unsigned long)&tx_fndb[txf->txf_id].vlan_n_ctrl_flags;
+	cfg_req->num_queues = tx->num_txq;
+	for (i = 0, qe = bfa_q_first(&tx->txq_q);
+		i < tx->num_txq;
+		i++, qe = bfa_q_next(qe)) {
+		txq = (struct bna_txq *)qe;
 
-	ctl_flags = readl(base_addr + off);
-	ctl_flags &= ~BFI_TXF_CF_ENABLE;
+		bfi_enet_datapath_q_init(&cfg_req->q_cfg[i].q.q, &txq->qpt);
+		cfg_req->q_cfg[i].q.priority = txq->priority;
 
-	writel(ctl_flags, base_addr + off);
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_lo =
+			txq->ib.ib_seg_host_addr.lsb;
+		cfg_req->q_cfg[i].ib.index_addr.a32.addr_hi =
+			txq->ib.ib_seg_host_addr.msb;
+		cfg_req->q_cfg[i].ib.intr.msix_index =
+			htons((u16)txq->ib.intr_vector);
+	}
 
-	if (tx->txf.txf_id < 32)
-		tx->bna->tx_mod.txf_bmap[0] &= ~((u32)1 << tx->txf.txf_id);
-	else
-		tx->bna->tx_mod.txf_bmap[0] &= ~((u32)
-						 1 << (tx->txf.txf_id - 32));
-}
+	cfg_req->ib_cfg.int_pkt_dma = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_enabled = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.int_pkt_enabled = BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.continuous_coalescing = BNA_STATUS_T_ENABLED;
+	cfg_req->ib_cfg.msix = (txq->ib.intr_type == BNA_INTR_T_MSIX)
+				? BNA_STATUS_T_ENABLED : BNA_STATUS_T_DISABLED;
+	cfg_req->ib_cfg.coalescing_timeout =
+			htonl((u32)txq->ib.coalescing_timeo);
+	cfg_req->ib_cfg.inter_pkt_timeout =
+			htonl((u32)txq->ib.interpkt_timeo);
+	cfg_req->ib_cfg.inter_pkt_count = (u8)txq->ib.interpkt_count;
 
-static void
-__bna_txf_stat_clr(struct bna_tx *tx)
-{
-	struct bfi_ll_stats_req ll_req;
-	u32 txf_bmap[2] = {0, 0};
-	if (tx->txf.txf_id < 32)
-		txf_bmap[0] = ((u32)1 << tx->txf.txf_id);
-	else
-		txf_bmap[1] = ((u32)1 << (tx->txf.txf_id - 32));
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_STATS_CLEAR_REQ, 0);
-	ll_req.stats_mask = 0;
-	ll_req.rxf_id_mask[0] = 0;
-	ll_req.rxf_id_mask[1] =	0;
-	ll_req.txf_id_mask[0] =	htonl(txf_bmap[0]);
-	ll_req.txf_id_mask[1] =	htonl(txf_bmap[1]);
+	cfg_req->tx_cfg.vlan_mode = BFI_ENET_TX_VLAN_WI;
+	cfg_req->tx_cfg.vlan_id = htons((u16)tx->txf_vlan_id);
+	cfg_req->tx_cfg.admit_tagged_frame = BNA_STATUS_T_DISABLED;
+	cfg_req->tx_cfg.apply_vlan_filter = BNA_STATUS_T_DISABLED;
 
-	bna_mbox_qe_fill(&tx->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_tx_cb_stats_cleared, tx);
-	bna_mbox_send(tx->bna, &tx->mbox_qe);
+	bfa_msgq_cmd_set(&tx->msgq_cmd, NULL, NULL,
+		sizeof(struct bfi_enet_tx_cfg_req), &cfg_req->mh);
+	bfa_msgq_cmd_post(&tx->bna->msgq, &tx->msgq_cmd);
 }
 
 static void
-__bna_tx_start(struct bna_tx *tx)
+bna_bfi_tx_enet_stop(struct bna_tx *tx)
 {
-	struct bna_txq *txq;
-	struct list_head		 *qe;
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		bna_ib_start(txq->ib);
-		__bna_txq_start(tx, txq);
-	}
-
-	__bna_txf_start(tx);
+	struct bfi_enet_req *req = &tx->bfi_enet_cmd.req;
 
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		txq->tcb->priority = txq->priority;
-		(tx->tx_resume_cbfn)(tx->bna->bnad, txq->tcb);
-	}
+	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
+		BFI_ENET_H2I_TX_CFG_CLR_REQ, 0, tx->rid);
+	req->mh.num_entries = htons(
+		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_req)));
+	bfa_msgq_cmd_set(&tx->msgq_cmd, NULL, NULL, sizeof(struct bfi_enet_req),
+		&req->mh);
+	bfa_msgq_cmd_post(&tx->bna->msgq, &tx->msgq_cmd);
 }
 
 static void
-__bna_tx_stop(struct bna_tx *tx)
+bna_tx_enet_stop(struct bna_tx *tx)
 {
 	struct bna_txq *txq;
 	struct list_head		 *qe;
 
+	/* Stop IB */
 	list_for_each(qe, &tx->txq_q) {
 		txq = (struct bna_txq *)qe;
-		(tx->tx_stall_cbfn)(tx->bna->bnad, txq->tcb);
+		bna_ib_stop(tx->bna, &txq->ib);
 	}
 
-	__bna_txf_stop(tx);
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		bfa_wc_up(&tx->txq_stop_wc);
-	}
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		__bna_txq_stop(tx, txq);
-	}
+	bna_bfi_tx_enet_stop(tx);
 }
 
 static void
@@ -3615,8 +3313,27 @@ bna_txq_qpt_setup(struct bna_txq *txq, int page_count, int page_size,
 			page_mem[i].dma.lsb;
 		((struct bna_dma_addr *)txq->qpt.kv_qpt_ptr)[i].msb =
 			page_mem[i].dma.msb;
+	}
+}
+
+static struct bna_tx *
+bna_tx_get(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
+{
+	struct list_head	*qe = NULL;
+	struct bna_tx *tx = NULL;
 
+	if (list_empty(&tx_mod->tx_free_q))
+		return NULL;
+	if (type == BNA_TX_T_REGULAR) {
+		bfa_q_deq(&tx_mod->tx_free_q, &qe);
+	} else {
+		bfa_q_deq_tail(&tx_mod->tx_free_q, &qe);
 	}
+	tx = (struct bna_tx *)qe;
+	bfa_q_qe_init(&tx->qe);
+	tx->type = type;
+
+	return tx;
 }
 
 static void
@@ -3624,19 +3341,12 @@ bna_tx_free(struct bna_tx *tx)
 {
 	struct bna_tx_mod *tx_mod = &tx->bna->tx_mod;
 	struct bna_txq *txq;
-	struct bna_ib_mod *ib_mod = &tx->bna->ib_mod;
+	struct list_head *prev_qe;
 	struct list_head *qe;
 
 	while (!list_empty(&tx->txq_q)) {
 		bfa_q_deq(&tx->txq_q, &txq);
 		bfa_q_qe_init(&txq->qe);
-		if (txq->ib) {
-			if (txq->ib_seg_offset != -1)
-				bna_ib_release_idx(txq->ib,
-						txq->ib_seg_offset);
-			bna_ib_put(ib_mod, txq->ib);
-			txq->ib = NULL;
-		}
 		txq->tcb = NULL;
 		txq->tx = NULL;
 		list_add_tail(&txq->qe, &tx_mod->txq_free_q);
@@ -3652,40 +3362,35 @@ bna_tx_free(struct bna_tx *tx)
 
 	tx->bna = NULL;
 	tx->priv = NULL;
-	list_add_tail(&tx->qe, &tx_mod->tx_free_q);
-}
-
-static void
-bna_tx_cb_txq_stopped(void *arg, int status)
-{
-	struct bna_tx *tx = (struct bna_tx *)arg;
 
-	bfa_q_qe_init(&tx->mbox_qe.qe);
-	bfa_wc_down(&tx->txq_stop_wc);
-}
-
-static void
-bna_tx_cb_txq_stopped_all(void *arg)
-{
-	struct bna_tx *tx = (struct bna_tx *)arg;
-
-	bfa_fsm_send_event(tx, TX_E_TXQ_STOPPED);
-}
-
-static void
-bna_tx_cb_stats_cleared(void *arg, int status)
-{
-	struct bna_tx *tx = (struct bna_tx *)arg;
-
-	bfa_q_qe_init(&tx->mbox_qe.qe);
+	prev_qe = NULL;
+	list_for_each(qe, &tx_mod->tx_free_q) {
+		if (((struct bna_tx *)qe)->rid < tx->rid)
+			prev_qe = qe;
+		else {
+			break;
+		}
+	}
 
-	bfa_fsm_send_event(tx, TX_E_STAT_CLEARED);
+	if (prev_qe == NULL) {
+		/* This is the first entry */
+		bfa_q_enq_head(&tx_mod->tx_free_q, &tx->qe);
+	} else if (bfa_q_next(prev_qe) == &tx_mod->tx_free_q) {
+		/* This is the last entry */
+		list_add_tail(&tx->qe, &tx_mod->tx_free_q);
+	} else {
+		/* Somewhere in the middle */
+		bfa_q_next(&tx->qe) = bfa_q_next(prev_qe);
+		bfa_q_prev(&tx->qe) = prev_qe;
+		bfa_q_next(prev_qe) = &tx->qe;
+		bfa_q_prev(bfa_q_next(&tx->qe)) = &tx->qe;
+	}
 }
 
 static void
 bna_tx_start(struct bna_tx *tx)
 {
-	tx->flags |= BNA_TX_F_PORT_STARTED;
+	tx->flags |= BNA_TX_F_ENET_STARTED;
 	if (tx->flags & BNA_TX_F_ENABLED)
 		bfa_fsm_send_event(tx, TX_E_START);
 }
@@ -3696,57 +3401,93 @@ bna_tx_stop(struct bna_tx *tx)
 	tx->stop_cbfn = bna_tx_mod_cb_tx_stopped;
 	tx->stop_cbarg = &tx->bna->tx_mod;
 
-	tx->flags &= ~BNA_TX_F_PORT_STARTED;
+	tx->flags &= ~BNA_TX_F_ENET_STARTED;
 	bfa_fsm_send_event(tx, TX_E_STOP);
 }
 
 static void
 bna_tx_fail(struct bna_tx *tx)
 {
-	tx->flags &= ~BNA_TX_F_PORT_STARTED;
+	tx->flags &= ~BNA_TX_F_ENET_STARTED;
 	bfa_fsm_send_event(tx, TX_E_FAIL);
 }
 
 static void
-bna_tx_prio_changed(struct bna_tx *tx, int prio)
+bna_tx_prio_changed(struct bna_tx *tx)
 {
+	struct bna_tx_mod *tx_mod = &tx->bna->tx_mod;
 	struct bna_txq *txq;
 	struct list_head		 *qe;
 
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		txq->priority = prio;
-	}
+	/* No priority reconfiguration needed for loopback Tx */
+	if (tx->type != BNA_TX_T_REGULAR)
+		return;
 
-	bfa_fsm_send_event(tx, TX_E_PRIO_CHANGE);
+	/**
+	 * If there are exactly 8 TxQs, each one occupies one priority.
+	 * In that case there is nothing to reconfigure, since the
+	 * priorities remain the same.
+	 */
+	if (tx->num_txq != BFI_TX_MAX_PRIO) {
+		list_for_each(qe, &tx->txq_q) {
+			txq = (struct bna_txq *)qe;
+			txq->priority = (u8)tx_mod->default_prio;
+		}
+
+		bfa_fsm_send_event(tx, TX_E_PRIO_CHANGE);
+	}
 }
 
-static void
-bna_tx_cee_link_status(struct bna_tx *tx, int cee_link)
+void
+bna_bfi_tx_enet_start_rsp(struct bna_tx *tx, struct bfi_msgq_mhdr *msghdr)
 {
-	if (cee_link)
-		tx->flags |= BNA_TX_F_PRIO_LOCK;
-	else
-		tx->flags &= ~BNA_TX_F_PRIO_LOCK;
+	struct bfi_enet_tx_cfg_rsp *cfg_rsp = &tx->bfi_enet_cmd.cfg_rsp;
+	struct bna_txq *txq = NULL;
+	struct list_head *qe;
+	int i;
+
+	bfa_msgq_rsp_copy(&tx->bna->msgq, (u8 *)cfg_rsp,
+		sizeof(struct bfi_enet_tx_cfg_rsp));
+
+	tx->hw_id = cfg_rsp->hw_id;
+
+	for (i = 0, qe = bfa_q_first(&tx->txq_q);
+		i < tx->num_txq; i++, qe = bfa_q_next(qe)) {
+		txq = (struct bna_txq *)qe;
+
+		/* Setup doorbells */
+		txq->tcb->i_dbell->doorbell_addr =
+			tx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].i_dbell);
+		txq->tcb->q_dbell =
+			tx->bna->pcidev.pci_bar_kva
+			+ ntohl(cfg_rsp->q_handles[i].q_dbell);
+		txq->hw_id = cfg_rsp->q_handles[i].hw_qid;
+
+		/* Initialize producer/consumer indexes */
+		(*txq->tcb->hw_consumer_index) = 0;
+		txq->tcb->producer_index = txq->tcb->consumer_index = 0;
+	}
+
+	bfa_fsm_send_event(tx, TX_E_STARTED);
 }
 
-static void
-bna_tx_mod_cb_tx_stopped(void *arg, struct bna_tx *tx,
-			enum bna_cb_status status)
+void
+bna_bfi_tx_enet_stop_rsp(struct bna_tx *tx, struct bfi_msgq_mhdr *msghdr)
 {
-	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
-
-	bfa_wc_down(&tx_mod->tx_stop_wc);
+	bfa_fsm_send_event(tx, TX_E_STOPPED);
 }
 
-static void
-bna_tx_mod_cb_tx_stopped_all(void *arg)
+void
+bna_bfi_bw_update_aen(struct bna_tx_mod *tx_mod)
 {
-	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
+	struct bna_tx *tx;
+	struct list_head		*qe;
 
-	if (tx_mod->stop_cbfn)
-		tx_mod->stop_cbfn(&tx_mod->bna->port, BNA_CB_SUCCESS);
-	tx_mod->stop_cbfn = NULL;
+	list_for_each(qe, &tx_mod->tx_active_q) {
+		tx = (struct bna_tx *)qe;
+		bfa_fsm_send_event(tx, TX_E_BW_UPDATE);
+	}
 }
 
 void
@@ -3784,6 +3525,12 @@ bna_tx_res_req(int num_txq, int txq_depth, struct bna_res_info *res_info)
 	mem_info->len = PAGE_SIZE;
 	mem_info->num = num_txq * page_count;
 
+	res_info[BNA_TX_RES_MEM_T_IBIDX].res_type = BNA_RES_T_MEM;
+	mem_info = &res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info;
+	mem_info->mem_type = BNA_MEM_T_DMA;
+	mem_info->len = BFI_IBIDX_SIZE;
+	mem_info->num = num_txq;
+
 	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_type = BNA_RES_T_INTR;
 	res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info.intr_type =
 			BNA_INTR_T_MSIX;
@@ -3801,14 +3548,10 @@ bna_tx_create(struct bna *bna, struct bnad *bnad,
 	struct bna_tx *tx;
 	struct bna_txq *txq;
 	struct list_head *qe;
-	struct bna_ib_mod *ib_mod = &bna->ib_mod;
-	struct bna_doorbell_qset *qset;
-	struct bna_ib_config ib_config;
 	int page_count;
 	int page_size;
 	int page_idx;
 	int i;
-	unsigned long off;
 
 	intr_info = &res_info[BNA_TX_RES_INTR_T_TXCMPL].res_u.intr_info;
 	page_count = (res_info[BNA_TX_RES_MEM_T_PAGE].res_u.mem_info.num) /
@@ -3824,10 +3567,11 @@ bna_tx_create(struct bna *bna, struct bnad *bnad,
 
 	/* Tx */
 
-	if (list_empty(&tx_mod->tx_free_q))
+	tx = bna_tx_get(tx_mod, tx_cfg->tx_type);
+	if (!tx)
 		return NULL;
-	bfa_q_deq(&tx_mod->tx_free_q, &tx);
-	bfa_q_qe_init(&tx->qe);
+	tx->bna = bna;
+	tx->priv = priv;
 
 	/* TxQs */
 
@@ -3839,33 +3583,9 @@ bna_tx_create(struct bna *bna, struct bnad *bnad,
 		bfa_q_deq(&tx_mod->txq_free_q, &txq);
 		bfa_q_qe_init(&txq->qe);
 		list_add_tail(&txq->qe, &tx->txq_q);
-		txq->ib = NULL;
-		txq->ib_seg_offset = -1;
 		txq->tx = tx;
 	}
 
-	/* IBs */
-	i = 0;
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-
-		if (intr_info->num == 1)
-			txq->ib = bna_ib_get(ib_mod, intr_info->intr_type,
-						intr_info->idl[0].vector);
-		else
-			txq->ib = bna_ib_get(ib_mod, intr_info->intr_type,
-						intr_info->idl[i].vector);
-
-		if (txq->ib == NULL)
-			goto err_return;
-
-		txq->ib_seg_offset = bna_ib_reserve_idx(txq->ib);
-		if (txq->ib_seg_offset == -1)
-			goto err_return;
-
-		i++;
-	}
-
 	/*
 	 * Initialize
 	 */
@@ -3880,30 +3600,23 @@ bna_tx_create(struct bna *bna, struct bnad *bnad,
 	tx->tx_cleanup_cbfn = tx_cbfn->tx_cleanup_cbfn;
 
 	list_add_tail(&tx->qe, &tx_mod->tx_active_q);
-	tx->bna = bna;
-	tx->priv = priv;
-	tx->txq_stop_wc.wc_resume = bna_tx_cb_txq_stopped_all;
-	tx->txq_stop_wc.wc_cbarg = tx;
-	tx->txq_stop_wc.wc_count = 0;
 
-	tx->type = tx_cfg->tx_type;
+	tx->num_txq = tx_cfg->num_txq;
 
 	tx->flags = 0;
-	if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_PORT_STARTED) {
+	if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_ENET_STARTED) {
 		switch (tx->type) {
 		case BNA_TX_T_REGULAR:
 			if (!(tx->bna->tx_mod.flags &
-				BNA_TX_MOD_F_PORT_LOOPBACK))
-				tx->flags |= BNA_TX_F_PORT_STARTED;
+				BNA_TX_MOD_F_ENET_LOOPBACK))
+				tx->flags |= BNA_TX_F_ENET_STARTED;
 			break;
 		case BNA_TX_T_LOOPBACK:
-			if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_PORT_LOOPBACK)
-				tx->flags |= BNA_TX_F_PORT_STARTED;
+			if (tx->bna->tx_mod.flags & BNA_TX_MOD_F_ENET_LOOPBACK)
+				tx->flags |= BNA_TX_F_ENET_STARTED;
 			break;
 		}
 	}
-	if (tx->bna->tx_mod.cee_link)
-		tx->flags |= BNA_TX_F_PRIO_LOCK;
 
 	/* TxQ */
 
@@ -3911,42 +3624,38 @@ bna_tx_create(struct bna *bna, struct bnad *bnad,
 	page_idx = 0;
 	list_for_each(qe, &tx->txq_q) {
 		txq = (struct bna_txq *)qe;
-		txq->priority = tx_mod->priority;
 		txq->tcb = (struct bna_tcb *)
-		  res_info[BNA_TX_RES_MEM_T_TCB].res_u.mem_info.mdl[i].kva;
+		res_info[BNA_TX_RES_MEM_T_TCB].res_u.mem_info.mdl[i].kva;
 		txq->tx_packets = 0;
 		txq->tx_bytes = 0;
 
 		/* IB */
-
-		ib_config.coalescing_timeo = BFI_TX_COALESCING_TIMEO;
-		ib_config.interpkt_timeo = 0; /* Not used */
-		ib_config.interpkt_count = BFI_TX_INTERPKT_COUNT;
-		ib_config.ctrl_flags = (BFI_IB_CF_INTER_PKT_DMA |
-					BFI_IB_CF_INT_ENABLE |
-					BFI_IB_CF_COALESCING_MODE);
-		bna_ib_config(txq->ib, &ib_config);
+		txq->ib.ib_seg_host_addr.lsb =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.lsb;
+		txq->ib.ib_seg_host_addr.msb =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].dma.msb;
+		txq->ib.ib_seg_host_addr_kva =
+		res_info[BNA_TX_RES_MEM_T_IBIDX].res_u.mem_info.mdl[i].kva;
+		txq->ib.intr_type = intr_info->intr_type;
+		txq->ib.intr_vector = (intr_info->num == 1) ?
+					intr_info->idl[0].vector :
+					intr_info->idl[i].vector;
+		if (intr_info->intr_type == BNA_INTR_T_INTX)
+			txq->ib.intr_vector = (1 <<  txq->ib.intr_vector);
+		txq->ib.coalescing_timeo = tx_cfg->coalescing_timeo;
+		txq->ib.interpkt_timeo = 0; /* Not used */
+		txq->ib.interpkt_count = BFI_TX_INTERPKT_COUNT;
 
 		/* TCB */
 
-		txq->tcb->producer_index = 0;
-		txq->tcb->consumer_index = 0;
-		txq->tcb->hw_consumer_index = (volatile u32 *)
-			((volatile u8 *)txq->ib->ib_seg_host_addr_kva +
-			 (txq->ib_seg_offset * BFI_IBIDX_SIZE));
-		*(txq->tcb->hw_consumer_index) = 0;
 		txq->tcb->q_depth = tx_cfg->txq_depth;
 		txq->tcb->unmap_q = (void *)
 		res_info[BNA_TX_RES_MEM_T_UNMAPQ].res_u.mem_info.mdl[i].kva;
-		qset = (struct bna_doorbell_qset *)0;
-		off = (unsigned long)&qset[txq->txq_id].txq[0];
-		txq->tcb->q_dbell = off +
-			BNA_GET_DOORBELL_BASE_ADDR(bna->pcidev.pci_bar_kva);
-		txq->tcb->i_dbell = &txq->ib->door_bell;
-		txq->tcb->intr_type = intr_info->intr_type;
-		txq->tcb->intr_vector = (intr_info->num == 1) ?
-					intr_info->idl[0].vector :
-					intr_info->idl[i].vector;
+		txq->tcb->hw_consumer_index =
+			(u32 *)txq->ib.ib_seg_host_addr_kva;
+		txq->tcb->i_dbell = &txq->ib.door_bell;
+		txq->tcb->intr_type = txq->ib.intr_type;
+		txq->tcb->intr_vector = txq->ib.intr_vector;
 		txq->tcb->txq = txq;
 		txq->tcb->bnad = bnad;
 		txq->tcb->id = i;
@@ -3965,19 +3674,20 @@ bna_tx_create(struct bna *bna, struct bnad *bnad,
 		if (tx->tcb_setup_cbfn)
 			(tx->tcb_setup_cbfn)(bna->bnad, txq->tcb);
 
+		if (tx_cfg->num_txq == BFI_TX_MAX_PRIO)
+			txq->priority = txq->tcb->id;
+		else
+			txq->priority = tx_mod->default_prio;
+
 		i++;
 	}
 
-	/* TxF */
-
-	tx->txf.ctrl_flags = BFI_TXF_CF_ENABLE | BFI_TXF_CF_VLAN_WI_BASED;
-	tx->txf.vlan = 0;
-
-	/* Mbox element */
-	bfa_q_qe_init(&tx->mbox_qe.qe);
+	tx->txf_vlan_id = 0;
 
 	bfa_fsm_set_state(tx, bna_tx_sm_stopped);
 
+	tx_mod->rid_mask |= (1 << tx->rid);
+
 	return tx;
 
 err_return:
@@ -3988,17 +3698,16 @@ err_return:
 void
 bna_tx_destroy(struct bna_tx *tx)
 {
-	/* Callback to bnad for destroying TCB */
-	if (tx->tcb_destroy_cbfn) {
-		struct bna_txq *txq;
-		struct list_head *qe;
+	struct bna_txq *txq;
+	struct list_head *qe;
 
-		list_for_each(qe, &tx->txq_q) {
-			txq = (struct bna_txq *)qe;
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		if (tx->tcb_destroy_cbfn)
 			(tx->tcb_destroy_cbfn)(tx->bna->bnad, txq->tcb);
-		}
 	}
 
+	tx->bna->tx_mod.rid_mask &= ~(1 << tx->rid);
 	bna_tx_free(tx);
 }
 
@@ -4010,16 +3719,16 @@ bna_tx_enable(struct bna_tx *tx)
 
 	tx->flags |= BNA_TX_F_ENABLED;
 
-	if (tx->flags & BNA_TX_F_PORT_STARTED)
+	if (tx->flags & BNA_TX_F_ENET_STARTED)
 		bfa_fsm_send_event(tx, TX_E_START);
 }
 
 void
 bna_tx_disable(struct bna_tx *tx, enum bna_cleanup_type type,
-		void (*cbfn)(void *, struct bna_tx *, enum bna_cb_status))
+		void (*cbfn)(void *, struct bna_tx *))
 {
 	if (type == BNA_SOFT_CLEANUP) {
-		(*cbfn)(tx->bna->bnad, tx, BNA_CB_SUCCESS);
+		(*cbfn)(tx->bna->bnad, tx);
 		return;
 	}
 
@@ -4031,10 +3740,45 @@ bna_tx_disable(struct bna_tx *tx, enum bna_cleanup_type type,
 	bfa_fsm_send_event(tx, TX_E_STOP);
 }
 
-int
-bna_tx_state_get(struct bna_tx *tx)
+void
+bna_tx_cleanup_complete(struct bna_tx *tx)
 {
-	return bfa_sm_to_state(tx_sm_table, tx->fsm);
+	bfa_fsm_send_event(tx, TX_E_CLEANUP_DONE);
+}
+
+void
+bna_tx_prio_set(struct bna_tx *tx, int prio,
+		void (*cbfn)(struct bnad *, struct bna_tx *))
+{
+	struct bna_txq *txq;
+	struct list_head		 *qe;
+
+	tx->prio_change_cbfn = cbfn;
+
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		txq->priority = (u8)prio;
+	}
+
+	bfa_fsm_send_event(tx, TX_E_PRIO_CHANGE);
+}
+
+static void
+bna_tx_mod_cb_tx_stopped(void *arg, struct bna_tx *tx)
+{
+	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
+
+	bfa_wc_down(&tx_mod->tx_stop_wc);
+}
+
+static void
+bna_tx_mod_cb_tx_stopped_all(void *arg)
+{
+	struct bna_tx_mod *tx_mod = (struct bna_tx_mod *)arg;
+
+	if (tx_mod->stop_cbfn)
+		tx_mod->stop_cbfn(&tx_mod->bna->enet);
+	tx_mod->stop_cbfn = NULL;
 }
 
 void
@@ -4047,28 +3791,27 @@ bna_tx_mod_init(struct bna_tx_mod *tx_mod, struct bna *bna,
 	tx_mod->flags = 0;
 
 	tx_mod->tx = (struct bna_tx *)
-		res_info[BNA_RES_MEM_T_TX_ARRAY].res_u.mem_info.mdl[0].kva;
+		res_info[BNA_MOD_RES_MEM_T_TX_ARRAY].res_u.mem_info.mdl[0].kva;
 	tx_mod->txq = (struct bna_txq *)
-		res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mdl[0].kva;
+		res_info[BNA_MOD_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mdl[0].kva;
 
 	INIT_LIST_HEAD(&tx_mod->tx_free_q);
 	INIT_LIST_HEAD(&tx_mod->tx_active_q);
 
 	INIT_LIST_HEAD(&tx_mod->txq_free_q);
 
-	for (i = 0; i < BFI_MAX_TXQ; i++) {
-		tx_mod->tx[i].txf.txf_id = i;
+	for (i = 0; i < bna->ioceth.attr.num_txq; i++) {
+		tx_mod->tx[i].rid = i;
 		bfa_q_qe_init(&tx_mod->tx[i].qe);
 		list_add_tail(&tx_mod->tx[i].qe, &tx_mod->tx_free_q);
-
-		tx_mod->txq[i].txq_id = i;
 		bfa_q_qe_init(&tx_mod->txq[i].qe);
 		list_add_tail(&tx_mod->txq[i].qe, &tx_mod->txq_free_q);
 	}
 
-	tx_mod->tx_stop_wc.wc_resume = bna_tx_mod_cb_tx_stopped_all;
-	tx_mod->tx_stop_wc.wc_cbarg = tx_mod;
-	tx_mod->tx_stop_wc.wc_count = 0;
+	tx_mod->prio_map = BFI_TX_PRIO_MAP_ALL;
+	tx_mod->default_prio = 0;
+	tx_mod->iscsi_over_cee = BNA_STATUS_T_DISABLED;
+	tx_mod->iscsi_prio = -1;
 }
 
 void
@@ -4094,9 +3837,9 @@ bna_tx_mod_start(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
 	struct bna_tx *tx;
 	struct list_head		*qe;
 
-	tx_mod->flags |= BNA_TX_MOD_F_PORT_STARTED;
+	tx_mod->flags |= BNA_TX_MOD_F_ENET_STARTED;
 	if (type == BNA_TX_T_LOOPBACK)
-		tx_mod->flags |= BNA_TX_MOD_F_PORT_LOOPBACK;
+		tx_mod->flags |= BNA_TX_MOD_F_ENET_LOOPBACK;
 
 	list_for_each(qe, &tx_mod->tx_active_q) {
 		tx = (struct bna_tx *)qe;
@@ -4111,32 +3854,22 @@ bna_tx_mod_stop(struct bna_tx_mod *tx_mod, enum bna_tx_type type)
 	struct bna_tx *tx;
 	struct list_head		*qe;
 
-	tx_mod->flags &= ~BNA_TX_MOD_F_PORT_STARTED;
-	tx_mod->flags &= ~BNA_TX_MOD_F_PORT_LOOPBACK;
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_STARTED;
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_LOOPBACK;
 
-	tx_mod->stop_cbfn = bna_port_cb_tx_stopped;
+	tx_mod->stop_cbfn = bna_enet_cb_tx_stopped;
 
-	/**
-	 * Before calling bna_tx_stop(), increment tx_stop_wc as many times
-	 * as we are going to call bna_tx_stop
-	 */
-	list_for_each(qe, &tx_mod->tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		if (tx->type == type)
-			bfa_wc_up(&tx_mod->tx_stop_wc);
-	}
-
-	if (tx_mod->tx_stop_wc.wc_count == 0) {
-		tx_mod->stop_cbfn(&tx_mod->bna->port, BNA_CB_SUCCESS);
-		tx_mod->stop_cbfn = NULL;
-		return;
-	}
+	bfa_wc_init(&tx_mod->tx_stop_wc, bna_tx_mod_cb_tx_stopped_all, tx_mod);
 
 	list_for_each(qe, &tx_mod->tx_active_q) {
 		tx = (struct bna_tx *)qe;
-		if (tx->type == type)
+		if (tx->type == type) {
+			bfa_wc_up(&tx_mod->tx_stop_wc);
 			bna_tx_stop(tx);
+		}
 	}
+
+	bfa_wc_wait(&tx_mod->tx_stop_wc);
 }
 
 void
@@ -4145,8 +3878,8 @@ bna_tx_mod_fail(struct bna_tx_mod *tx_mod)
 	struct bna_tx *tx;
 	struct list_head		*qe;
 
-	tx_mod->flags &= ~BNA_TX_MOD_F_PORT_STARTED;
-	tx_mod->flags &= ~BNA_TX_MOD_F_PORT_LOOPBACK;
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_STARTED;
+	tx_mod->flags &= ~BNA_TX_MOD_F_ENET_LOOPBACK;
 
 	list_for_each(qe, &tx_mod->tx_active_q) {
 		tx = (struct bna_tx *)qe;
@@ -4155,31 +3888,79 @@ bna_tx_mod_fail(struct bna_tx_mod *tx_mod)
 }
 
 void
-bna_tx_mod_prio_changed(struct bna_tx_mod *tx_mod, int prio)
+bna_tx_mod_prio_reconfig(struct bna_tx_mod *tx_mod, int cee_linkup,
+			u8 prio_map, u8 iscsi_prio_map)
 {
 	struct bna_tx *tx;
 	struct list_head		*qe;
+	int need_txq_reconfig = 0;
+	int iscsi_prio = -1;
+	int default_prio = -1;
+	int i;
+
+	/* Select the priority map */
+	if (!cee_linkup) {
+		/* No CEE. Use all priorities */
+		prio_map = BFI_TX_PRIO_MAP_ALL;
+		iscsi_prio_map = 0;
+		default_prio = 0;
+	} else {
+		/* Select default priority */
+		for (i = 0; i < BFI_TX_MAX_PRIO; i++) {
+			if ((prio_map >> i) & 0x1) {
+				default_prio = i;
+				break;
+			}
+		}
 
-	if (prio != tx_mod->priority) {
-		tx_mod->priority = prio;
+		/* Derive iscsi priority */
+		if (iscsi_prio_map) {
+			for (i = 0; i < BFI_TX_MAX_PRIO; i++) {
+				if ((iscsi_prio_map >> i) & 0x1) {
+					iscsi_prio = i;
+					break;
+				}
+			}
+		}
+
+		/**
+		 * The network traffic priority map is a superset of iSCSI
+		 * and non-iSCSI traffic. We only give a 'lift' to iSCSI
+		 * traffic by redirecting it to the iSCSI priority; other
+		 * network traffic is still allowed to use that priority.
+		 */
+		prio_map |= iscsi_prio_map;
+	}
 
+	if ((prio_map != tx_mod->prio_map) ||
+	    (default_prio != tx_mod->default_prio) ||
+	    (iscsi_prio_map && (iscsi_prio != tx_mod->iscsi_prio)))
+		need_txq_reconfig = 1;
+
+	tx_mod->prio_map = prio_map;
+	tx_mod->default_prio = default_prio;
+	tx_mod->iscsi_over_cee = (iscsi_prio_map ? BNA_STATUS_T_ENABLED :
+				BNA_STATUS_T_DISABLED);
+	tx_mod->iscsi_prio = iscsi_prio;
+	tx_mod->prio_reconfigured = need_txq_reconfig;
+
+	/* Reconfigure the TxQs */
+	if (need_txq_reconfig) {
 		list_for_each(qe, &tx_mod->tx_active_q) {
 			tx = (struct bna_tx *)qe;
-			bna_tx_prio_changed(tx, prio);
+			bna_tx_prio_changed(tx);
 		}
 	}
 }
 
 void
-bna_tx_mod_cee_link_status(struct bna_tx_mod *tx_mod, int cee_link)
+bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo)
 {
-	struct bna_tx *tx;
-	struct list_head		*qe;
-
-	tx_mod->cee_link = cee_link;
+	struct bna_txq *txq;
+	struct list_head *qe;
 
-	list_for_each(qe, &tx_mod->tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		bna_tx_cee_link_status(tx, cee_link);
+	list_for_each(qe, &tx->txq_q) {
+		txq = (struct bna_txq *)qe;
+		bna_ib_coalescing_timeo_set(&txq->ib, coalescing_timeo);
 	}
 }
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 09/45] bna: ENET and Tx Rx Redesign Update
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (7 preceding siblings ...)
  2011-07-18  8:19 ` [PATCH 08/45] bna: Tx and Rx Redesign Rasesh Mody
@ 2011-07-18  8:19 ` Rasesh Mody
  2011-07-18  8:19 ` [PATCH 10/45] bna: Remove Obsolete Files Rasesh Mody
  2011-07-18  8:26 ` [PATCH 00/45] Update bna driver version to 3.0.2.0 David Miller
  10 siblings, 0 replies; 16+ messages in thread
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - This patch contains the structure and function definition changes to
   bna.h and bna_types.h as a result of the Ethport, Enet, IOCEth, Tx and
   Rx redesign. It removes all unused HW register definitions from
   bna_hw.h. It also renames the structures used for stats collection and
   removes get_regs support from bnad_ethtool.c.

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/Makefile       |    2 +-
 drivers/net/bna/bfa_ioc.c      |    2 +-
 drivers/net/bna/bfi.h          |    8 +
 drivers/net/bna/bfi_ll.h       |   13 -
 drivers/net/bna/bna.h          |  419 +++++++-----
 drivers/net/bna/bna_hw.h       | 1463 ++++++----------------------------------
 drivers/net/bna/bna_types.h    |  653 +++++++-----------
 drivers/net/bna/bnad.c         |  428 ++++++++-----
 drivers/net/bna/bnad.h         |   32 +-
 drivers/net/bna/bnad_ethtool.c |  384 +----------
 drivers/net/bna/cna.h          |   31 +-
 include/linux/pci_ids.h        |    1 +
 12 files changed, 1070 insertions(+), 2366 deletions(-)
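
Note: with this redesign, FW responses are routed back to the owning Tx/Rx
object by its resource id (rid) through the bna_tx_from_rid() /
bna_rx_from_rid() macros added to bna.h below. A minimal sketch of how a
response demux might use them; the handler name and the rid parameter are
hypothetical, not the driver's actual msgq dispatch code:

	/* Hypothetical demux: bna_tx_from_rid() walks tx_mod.tx_active_q
	 * and yields the Tx object whose rid matches, or NULL.
	 */
	static void demo_tx_rsp_dispatch(struct bna *bna,
					 struct bfi_msgq_mhdr *msghdr,
					 int rid)
	{
		struct bna_tx *tx;

		bna_tx_from_rid(bna, rid, tx);
		if (tx)
			bna_bfi_tx_enet_start_rsp(tx, msghdr);
	}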

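Note: the reworked call_rxf_*_cbfn() macros below also change the callback
convention. The status argument is dropped, and the stored callback is
snapshotted and cleared before it is invoked, so a callback may safely
re-register itself from within its own invocation. The pattern, reduced to
a self-contained sketch with hypothetical names:

	struct demo_obj {
		void (*stop_cbfn)(void *arg);
		void *stop_cbarg;
	};

	static void demo_call_stop_cbfn(struct demo_obj *obj)
	{
		if (obj->stop_cbfn) {
			void (*cbfn)(void *arg) = obj->stop_cbfn;
			void *cbarg = obj->stop_cbarg;

			/* Clear before calling so a re-armed callback
			 * is not wiped out.
			 */
			obj->stop_cbfn = NULL;
			obj->stop_cbarg = NULL;
			cbfn(cbarg);
		}
	}
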
diff --git a/drivers/net/bna/Makefile b/drivers/net/bna/Makefile
index d501f52..8becc00 100644
--- a/drivers/net/bna/Makefile
+++ b/drivers/net/bna/Makefile
@@ -5,7 +5,7 @@
 
 obj-$(CONFIG_BNA) += bna.o
 
-bna-objs := bnad.o bnad_ethtool.o bna_ctrl.o bna_txrx.o
+bna-objs := bnad.o bnad_ethtool.o bna_enet.o bna_txrx.o
 bna-objs += bfa_msgq.o bfa_ioc.o bfa_ioc_ct.o bfa_cee.o
 bna-objs += cna_fwimg.o
 
diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index bd833a1..ad98c86 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -19,7 +19,7 @@
 #include "bfa_ioc.h"
 #include "cna.h"
 #include "bfi.h"
-#include "bfi_ctreg.h"
+#include "bfi_reg.h"
 #include "bfa_defs.h"
 
 /**
diff --git a/drivers/net/bna/bfi.h b/drivers/net/bna/bfi.h
index 6640174..c5d46a6 100644
--- a/drivers/net/bna/bfi.h
+++ b/drivers/net/bna/bfi.h
@@ -149,6 +149,14 @@ struct bfi_mbmsg {
 };
 
 /**
+ * Supported PCI function class codes (personality)
+ */
+enum bfi_pcifn_class {
+	BFI_PCIFN_CLASS_FC	= 0x0c04,
+	BFI_PCIFN_CLASS_ETH	= 0x0200,
+};
+
+/**
  * Message Classes
  */
 enum bfi_mclass {
diff --git a/drivers/net/bna/bfi_ll.h b/drivers/net/bna/bfi_ll.h
index bee4d05..e3bdb87 100644
--- a/drivers/net/bna/bfi_ll.h
+++ b/drivers/net/bna/bfi_ll.h
@@ -233,19 +233,6 @@ struct bfi_ll_rsp {
 };
 
 /**
- * @brief bfi_ll_cee_aen is used by:
- *	BFI_LL_I2H_LINK_DOWN_AEN
- *	BFI_LL_I2H_LINK_UP_AEN
- */
-struct bfi_ll_aen {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u32	reason;
-	u8		cee_linkup;
-	u8		prio_map;    /*!< LL priority bit-map */
-	u8		rsvd[2];
-};
-
-/**
  * @brief
  * 	The following error codes can be returned
  *	by the mbox commands
diff --git a/drivers/net/bna/bna.h b/drivers/net/bna/bna.h
index 6b14c1d..b342bdb 100644
--- a/drivers/net/bna/bna.h
+++ b/drivers/net/bna/bna.h
@@ -16,10 +16,9 @@
 #include "bfa_wc.h"
 #include "bfa_ioc.h"
 #include "cna.h"
-#include "bfi_ll.h"
 #include "bna_types.h"
 
-extern const u32 bna_napi_dim_vector[][BNA_BIAS_T_MAX];
+extern u32 bna_napi_dim_vector[][BNA_BIAS_T_MAX];
 
 /**
  *
@@ -32,15 +31,7 @@ extern const u32 bna_napi_dim_vector[][BNA_BIAS_T_MAX];
 /* Log string size */
 #define BNA_MESSAGE_SIZE		256
 
-/* MBOX API for PORT, TX, RX */
-#define bna_mbox_qe_fill(_qe, _cmd, _cmd_len, _cbfn, _cbarg)		\
-do {									\
-	memcpy(&((_qe)->cmd.msg[0]), (_cmd), (_cmd_len));	\
-	(_qe)->cbfn = (_cbfn);						\
-	(_qe)->cbarg = (_cbarg);					\
-} while (0)
-
-#define bna_is_small_rxq(rcb) ((rcb)->id == 1)
+#define bna_is_small_rxq(_id) ((_id) & 0x1)
 
 #define BNA_MAC_IS_EQUAL(_mac1, _mac2)					\
 	(!memcmp((_mac1), (_mac2), sizeof(mac_t)))
@@ -177,32 +168,6 @@ do {								\
 #define BNA_Q_IN_USE_COUNT(_q_ptr)					\
 	(BNA_QE_IN_USE_CNT(&(_q_ptr)->q, (_q_ptr)->q.q_depth))
 
-/* These macros build the data portion of the TxQ/RxQ doorbell */
-#define BNA_DOORBELL_Q_PRD_IDX(_pi)	(0x80000000 | (_pi))
-#define BNA_DOORBELL_Q_STOP		(0x40000000)
-
-/* These macros build the data portion of the IB doorbell */
-#define BNA_DOORBELL_IB_INT_ACK(_timeout, _events) \
-	(0x80000000 | ((_timeout) << 16) | (_events))
-#define BNA_DOORBELL_IB_INT_DISABLE	(0x40000000)
-
-/* Set the coalescing timer for the given ib */
-#define bna_ib_coalescing_timer_set(_i_dbell, _cls_timer)		\
-	((_i_dbell)->doorbell_ack = BNA_DOORBELL_IB_INT_ACK((_cls_timer), 0));
-
-/* Acks 'events' # of events for a given ib */
-#define bna_ib_ack(_i_dbell, _events)					\
-	(writel(((_i_dbell)->doorbell_ack | (_events)), \
-		(_i_dbell)->doorbell_addr));
-
-#define bna_txq_prod_indx_doorbell(_tcb)				\
-	(writel(BNA_DOORBELL_Q_PRD_IDX((_tcb)->producer_index), \
-		(_tcb)->q_dbell));
-
-#define bna_rxq_prod_indx_doorbell(_rcb)				\
-	(writel(BNA_DOORBELL_Q_PRD_IDX((_rcb)->producer_index), \
-		(_rcb)->q_dbell));
-
 #define BNA_LARGE_PKT_SIZE		1000
 
 #define BNA_UPDATE_PKT_CNT(_pkt, _len)					\
@@ -214,38 +179,59 @@ do {									\
 	}								\
 } while (0)
 
-#define	call_rxf_stop_cbfn(rxf, status)					\
+#define	call_rxf_stop_cbfn(rxf)						\
+do {									\
 	if ((rxf)->stop_cbfn) {						\
-		(*(rxf)->stop_cbfn)((rxf)->stop_cbarg, (status));	\
+		void (*cbfn)(struct bna_rx *);			\
+		struct bna_rx *cbarg;					\
+		cbfn = (rxf)->stop_cbfn;				\
+		cbarg = (rxf)->stop_cbarg;				\
 		(rxf)->stop_cbfn = NULL;				\
 		(rxf)->stop_cbarg = NULL;				\
-	}
+		cbfn(cbarg);						\
+	}								\
+} while (0)
 
-#define	call_rxf_start_cbfn(rxf, status)				\
+#define	call_rxf_start_cbfn(rxf)					\
+do {									\
 	if ((rxf)->start_cbfn) {					\
-		(*(rxf)->start_cbfn)((rxf)->start_cbarg, (status));	\
+		void (*cbfn)(struct bna_rx *);			\
+		struct bna_rx *cbarg;					\
+		cbfn = (rxf)->start_cbfn;				\
+		cbarg = (rxf)->start_cbarg;				\
 		(rxf)->start_cbfn = NULL;				\
 		(rxf)->start_cbarg = NULL;				\
-	}
+		cbfn(cbarg);						\
+	}								\
+} while (0)
 
-#define	call_rxf_cam_fltr_cbfn(rxf, status)				\
+#define	call_rxf_cam_fltr_cbfn(rxf)					\
+do {									\
 	if ((rxf)->cam_fltr_cbfn) {					\
-		(*(rxf)->cam_fltr_cbfn)((rxf)->cam_fltr_cbarg, rxf->rx,	\
-					(status));			\
+		void (*cbfn)(struct bnad *, struct bna_rx *);	\
+		struct bnad *cbarg;					\
+		cbfn = (rxf)->cam_fltr_cbfn;				\
+		cbarg = (rxf)->cam_fltr_cbarg;				\
 		(rxf)->cam_fltr_cbfn = NULL;				\
 		(rxf)->cam_fltr_cbarg = NULL;				\
-	}
+		cbfn(cbarg, rxf->rx);					\
+	}								\
+} while (0)
 
-#define	call_rxf_pause_cbfn(rxf, status)				\
+#define	call_rxf_pause_cbfn(rxf)					\
+do {									\
 	if ((rxf)->oper_state_cbfn) {					\
-		(*(rxf)->oper_state_cbfn)((rxf)->oper_state_cbarg, rxf->rx,\
-					(status));			\
-		(rxf)->rxf_flags &= ~BNA_RXF_FL_OPERSTATE_CHANGED;	\
+		void (*cbfn)(struct bnad *, struct bna_rx *);	\
+		struct bnad *cbarg;					\
+		cbfn = (rxf)->oper_state_cbfn;				\
+		cbarg = (rxf)->oper_state_cbarg;			\
 		(rxf)->oper_state_cbfn = NULL;				\
 		(rxf)->oper_state_cbarg = NULL;				\
-	}
+		cbfn(cbarg, rxf->rx);					\
+	}								\
+} while (0)
 
-#define	call_rxf_resume_cbfn(rxf, status) call_rxf_pause_cbfn(rxf, status)
+#define	call_rxf_resume_cbfn(rxf) call_rxf_pause_cbfn(rxf)
 
 #define is_xxx_enable(mode, bitmask, xxx) ((bitmask & xxx) && (mode & xxx))
 
@@ -331,6 +317,59 @@ do {									\
 	}								\
 } while (0)
 
+#define bna_tx_rid_mask(_bna) ((_bna)->tx_mod.rid_mask)
+
+#define bna_rx_rid_mask(_bna) ((_bna)->rx_mod.rid_mask)
+
+#define bna_tx_from_rid(_bna, _rid, _tx)				\
+do {									\
+	struct bna_tx_mod *__tx_mod = &(_bna)->tx_mod;			\
+	struct bna_tx *__tx;						\
+	struct list_head *qe;						\
+	_tx = NULL;							\
+	list_for_each(qe, &__tx_mod->tx_active_q) {			\
+		__tx = (struct bna_tx *)qe;				\
+		if (__tx->rid == (_rid)) {				\
+			(_tx) = __tx;					\
+			break;						\
+		}							\
+	}								\
+} while (0)
+
+#define bna_rx_from_rid(_bna, _rid, _rx)				\
+do {									\
+	struct bna_rx_mod *__rx_mod = &(_bna)->rx_mod;			\
+	struct bna_rx *__rx;						\
+	struct list_head *qe;						\
+	_rx = NULL;							\
+	list_for_each(qe, &__rx_mod->rx_active_q) {			\
+		__rx = (struct bna_rx *)qe;				\
+		if (__rx->rid == (_rid)) {				\
+			(_rx) = __rx;					\
+			break;						\
+		}							\
+	}								\
+} while (0)
+
+/**
+ *
+ *  Inline functions
+ *
+ */
+
+static inline struct bna_mac *bna_mac_find(struct list_head *q, u8 *addr)
+{
+	struct bna_mac *mac = NULL;
+	struct list_head *qe;
+	list_for_each(qe, q) {
+		if (BNA_MAC_IS_EQUAL(((struct bna_mac *)qe)->addr, addr)) {
+			mac = (struct bna_mac *)qe;
+			break;
+		}
+	}
+	return mac;
+}
+
 /**
  *
  * Function prototypes
@@ -341,17 +380,23 @@ do {									\
  * BNA
  */
 
+/* FW response handlers */
+void bna_bfi_stats_get_rsp(struct bna *bna, struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_stats_clr_rsp(struct bna *bna, struct bfi_msgq_mhdr *msghdr);
+
 /* APIs for BNAD */
 void bna_res_req(struct bna_res_info *res_info);
+void bna_mod_res_req(struct bna *bna, struct bna_res_info *res_info);
 void bna_init(struct bna *bna, struct bnad *bnad,
 			struct bfa_pcidev *pcidev,
 			struct bna_res_info *res_info);
+void bna_mod_init(struct bna *bna, struct bna_res_info *res_info);
 void bna_uninit(struct bna *bna);
 void bna_stats_get(struct bna *bna);
 void bna_get_perm_mac(struct bna *bna, u8 *mac);
+void bna_hw_stats_get(struct bna *bna);
 
 /* APIs for Rx */
-int bna_rit_mod_can_satisfy(struct bna_rit_mod *rit_mod, int seg_size);
 
 /* APIs for RxF */
 struct bna_mac *bna_ucam_mod_mac_get(struct bna_ucam_mod *ucam_mod);
@@ -360,79 +405,79 @@ void bna_ucam_mod_mac_put(struct bna_ucam_mod *ucam_mod,
 struct bna_mac *bna_mcam_mod_mac_get(struct bna_mcam_mod *mcam_mod);
 void bna_mcam_mod_mac_put(struct bna_mcam_mod *mcam_mod,
 			  struct bna_mac *mac);
-struct bna_rit_segment *
-bna_rit_mod_seg_get(struct bna_rit_mod *rit_mod, int seg_size);
-void bna_rit_mod_seg_put(struct bna_rit_mod *rit_mod,
-			struct bna_rit_segment *seg);
-
-/**
- * DEVICE
- */
-
-/* APIs for BNAD */
-void bna_device_enable(struct bna_device *device);
-void bna_device_disable(struct bna_device *device,
-			enum bna_cleanup_type type);
+struct bna_mcam_handle *bna_mcam_mod_handle_get(struct bna_mcam_mod *mod);
+void bna_mcam_mod_handle_put(struct bna_mcam_mod *mcam_mod,
+			  struct bna_mcam_handle *handle);
 
 /**
  * MBOX
  */
 
-/* APIs for PORT, TX, RX */
+/* API for BNAD */
 void bna_mbox_handler(struct bna *bna, u32 intr_status);
-void bna_mbox_send(struct bna *bna, struct bna_mbox_qe *mbox_qe);
 
 /**
- * PORT
+ * ETHPORT
  */
 
-/* API for RX */
-int bna_port_mtu_get(struct bna_port *port);
-void bna_llport_rx_started(struct bna_llport *llport);
-void bna_llport_rx_stopped(struct bna_llport *llport);
+/* FW response handlers */
+void bna_bfi_ethport_admin_rsp(struct bna_ethport *ethport,
+			       struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_ethport_lpbk_rsp(struct bna_ethport *ethport,
+			      struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_ethport_linkup_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_ethport_linkdown_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_ethport_enable_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_ethport_disable_aen(struct bna_ethport *ethport,
+				struct bfi_msgq_mhdr *msghdr);
+
+/* Callbacks for RX */
+void bna_ethport_cb_rx_started(struct bna_ethport *ethport);
+void bna_ethport_cb_rx_stopped(struct bna_ethport *ethport);
+
+/* APIs for ENET */
+void bna_ethport_start(struct bna_ethport *ethport);
+void bna_ethport_stop(struct bna_ethport *ethport);
+void bna_ethport_fail(struct bna_ethport *ethport);
 
-/* API for BNAD */
-void bna_port_enable(struct bna_port *port);
-void bna_port_disable(struct bna_port *port, enum bna_cleanup_type type,
-		      void (*cbfn)(void *, enum bna_cb_status));
-void bna_port_pause_config(struct bna_port *port,
-			   struct bna_pause_config *pause_config,
-			   void (*cbfn)(struct bnad *, enum bna_cb_status));
-void bna_port_mtu_set(struct bna_port *port, int mtu,
-		      void (*cbfn)(struct bnad *, enum bna_cb_status));
-void bna_port_mac_get(struct bna_port *port, mac_t *mac);
+/* APIs for BNA */
+void bna_ethport_init(struct bna_ethport *ethport, struct bna *bna);
+void bna_ethport_uninit(struct bna_ethport *ethport);
 
-/* Callbacks for TX, RX */
-void bna_port_cb_tx_stopped(struct bna_port *port,
-			    enum bna_cb_status status);
-void bna_port_cb_rx_stopped(struct bna_port *port,
-			    enum bna_cb_status status);
+/* APIs for BNAD */
+void bna_ethport_admin_up(struct bna_ethport *ethport,
+			void (*cbfn)(struct bnad *, enum bna_cb_status));
+void bna_ethport_admin_down(struct bna_ethport *ethport);
+void bna_ethport_linkcbfn_set(struct bna_ethport *ethport,
+			   void (*linkcbfn)(struct bnad *,
+					    enum bna_link_status));
+int bna_ethport_is_disabled(struct bna_ethport *ethport);
 
 /**
- * IB
+ * TX MODULE AND TX
  */
 
-/* APIs for BNA */
-void bna_ib_mod_init(struct bna_ib_mod *ib_mod, struct bna *bna,
-		     struct bna_res_info *res_info);
-void bna_ib_mod_uninit(struct bna_ib_mod *ib_mod);
+/* FW response handlers */
+void bna_bfi_tx_enet_start_rsp(struct bna_tx *tx,
+			       struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_tx_enet_stop_rsp(struct bna_tx *tx,
+			      struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_bw_update_aen(struct bna_tx_mod *tx_mod);
 
-/**
- * TX MODULE AND TX
- */
+/* APIs for ENET */
+void bna_tx_mod_start(struct bna_tx_mod *tx_mod, enum bna_tx_type type);
+void bna_tx_mod_stop(struct bna_tx_mod *tx_mod, enum bna_tx_type type);
+void bna_tx_mod_fail(struct bna_tx_mod *tx_mod);
+void bna_tx_mod_prio_reconfig(struct bna_tx_mod *tx_mod, int cee_linkup,
+			      u8 prio_map, u8 iscsi_prio_map);
 
 /* APIs for BNA */
 void bna_tx_mod_init(struct bna_tx_mod *tx_mod, struct bna *bna,
 		     struct bna_res_info *res_info);
 void bna_tx_mod_uninit(struct bna_tx_mod *tx_mod);
-int bna_tx_state_get(struct bna_tx *tx);
-
-/* APIs for PORT */
-void bna_tx_mod_start(struct bna_tx_mod *tx_mod, enum bna_tx_type type);
-void bna_tx_mod_stop(struct bna_tx_mod *tx_mod, enum bna_tx_type type);
-void bna_tx_mod_fail(struct bna_tx_mod *tx_mod);
-void bna_tx_mod_prio_changed(struct bna_tx_mod *tx_mod, int prio);
-void bna_tx_mod_cee_link_status(struct bna_tx_mod *tx_mod, int cee_link);
 
 /* APIs for BNAD */
 void bna_tx_res_req(int num_txq, int txq_depth,
@@ -444,46 +489,34 @@ struct bna_tx *bna_tx_create(struct bna *bna, struct bnad *bnad,
 void bna_tx_destroy(struct bna_tx *tx);
 void bna_tx_enable(struct bna_tx *tx);
 void bna_tx_disable(struct bna_tx *tx, enum bna_cleanup_type type,
-		    void (*cbfn)(void *, struct bna_tx *,
-				 enum bna_cb_status));
+		    void (*cbfn)(void *, struct bna_tx *));
+void bna_tx_cleanup_complete(struct bna_tx *tx);
+void bna_tx_prio_set(struct bna_tx *tx, int prio,
+		     void (*cbfn)(struct bnad *, struct bna_tx *));
 void bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo);
 
 /**
  * RX MODULE, RX, RXF
  */
 
-/* Internal APIs */
-void rxf_cb_cam_fltr_mbox_cmd(void *arg, int status);
-void rxf_cam_mbox_cmd(struct bna_rxf *rxf, u8 cmd,
-		const struct bna_mac *mac_addr);
-void __rxf_vlan_filter_set(struct bna_rxf *rxf, enum bna_status status);
-void bna_rxf_adv_init(struct bna_rxf *rxf,
-		struct bna_rx *rx,
-		struct bna_rx_config *q_config);
-int rxf_process_packet_filter_ucast(struct bna_rxf *rxf);
-int rxf_process_packet_filter_promisc(struct bna_rxf *rxf);
-int rxf_process_packet_filter_default(struct bna_rxf *rxf);
-int rxf_process_packet_filter_allmulti(struct bna_rxf *rxf);
-int rxf_clear_packet_filter_ucast(struct bna_rxf *rxf);
-int rxf_clear_packet_filter_promisc(struct bna_rxf *rxf);
-int rxf_clear_packet_filter_default(struct bna_rxf *rxf);
-int rxf_clear_packet_filter_allmulti(struct bna_rxf *rxf);
-void rxf_reset_packet_filter_ucast(struct bna_rxf *rxf);
-void rxf_reset_packet_filter_promisc(struct bna_rxf *rxf);
-void rxf_reset_packet_filter_default(struct bna_rxf *rxf);
-void rxf_reset_packet_filter_allmulti(struct bna_rxf *rxf);
+/* FW response handlers */
+void bna_bfi_rx_enet_start_rsp(struct bna_rx *rx,
+			       struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_rx_enet_stop_rsp(struct bna_rx *rx,
+			      struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_rxf_cfg_rsp(struct bna_rxf *rxf, struct bfi_msgq_mhdr *msghdr);
+void bna_bfi_rxf_mcast_add_rsp(struct bna_rxf *rxf,
+			       struct bfi_msgq_mhdr *msghdr);
+
+/* APIs for ENET */
+void bna_rx_mod_start(struct bna_rx_mod *rx_mod, enum bna_rx_type type);
+void bna_rx_mod_stop(struct bna_rx_mod *rx_mod, enum bna_rx_type type);
+void bna_rx_mod_fail(struct bna_rx_mod *rx_mod);
 
 /* APIs for BNA */
 void bna_rx_mod_init(struct bna_rx_mod *rx_mod, struct bna *bna,
 		     struct bna_res_info *res_info);
 void bna_rx_mod_uninit(struct bna_rx_mod *rx_mod);
-int bna_rx_state_get(struct bna_rx *rx);
-int bna_rxf_state_get(struct bna_rxf *rxf);
-
-/* APIs for PORT */
-void bna_rx_mod_start(struct bna_rx_mod *rx_mod, enum bna_rx_type type);
-void bna_rx_mod_stop(struct bna_rx_mod *rx_mod, enum bna_rx_type type);
-void bna_rx_mod_fail(struct bna_rx_mod *rx_mod);
 
 /* APIs for BNAD */
 void bna_rx_res_req(struct bna_rx_config *rx_config,
@@ -495,54 +528,120 @@ struct bna_rx *bna_rx_create(struct bna *bna, struct bnad *bnad,
 void bna_rx_destroy(struct bna_rx *rx);
 void bna_rx_enable(struct bna_rx *rx);
 void bna_rx_disable(struct bna_rx *rx, enum bna_cleanup_type type,
-		    void (*cbfn)(void *, struct bna_rx *,
-				 enum bna_cb_status));
+		    void (*cbfn)(void *, struct bna_rx *));
+void bna_rx_cleanup_complete(struct bna_rx *rx);
 void bna_rx_coalescing_timeo_set(struct bna_rx *rx, int coalescing_timeo);
-void bna_rx_dim_reconfig(struct bna *bna, const u32 vector[][BNA_BIAS_T_MAX]);
+void bna_rx_dim_reconfig(struct bna *bna, u32 vector[][BNA_BIAS_T_MAX]);
 void bna_rx_dim_update(struct bna_ccb *ccb);
 enum bna_cb_status
 bna_rx_ucast_set(struct bna_rx *rx, u8 *ucmac,
-		 void (*cbfn)(struct bnad *, struct bna_rx *,
-			      enum bna_cb_status));
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
+enum bna_cb_status
+bna_rx_ucast_add(struct bna_rx *rx, u8 *ucmac,
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
+enum bna_cb_status
+bna_rx_ucast_del(struct bna_rx *rx, u8 *ucmac,
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
 enum bna_cb_status
 bna_rx_mcast_add(struct bna_rx *rx, u8 *mcmac,
-		 void (*cbfn)(struct bnad *, struct bna_rx *,
-			      enum bna_cb_status));
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
+enum bna_cb_status
+bna_rx_mcast_del(struct bna_rx *rx, u8 *mcmac,
+		 void (*cbfn)(struct bnad *, struct bna_rx *));
 enum bna_cb_status
 bna_rx_mcast_listset(struct bna_rx *rx, int count, u8 *mcmac,
-		     void (*cbfn)(struct bnad *, struct bna_rx *,
-				  enum bna_cb_status));
+		     void (*cbfn)(struct bnad *, struct bna_rx *));
+void bna_rx_mcast_delall(struct bna_rx *rx,
+			 void (*cbfn)(struct bnad *, struct bna_rx *));
 enum bna_cb_status
 bna_rx_mode_set(struct bna_rx *rx, enum bna_rxmode rxmode,
 		enum bna_rxmode bitmask,
-		void (*cbfn)(struct bnad *, struct bna_rx *,
-			     enum bna_cb_status));
+		void (*cbfn)(struct bnad *, struct bna_rx *));
 void bna_rx_vlan_add(struct bna_rx *rx, int vlan_id);
 void bna_rx_vlan_del(struct bna_rx *rx, int vlan_id);
 void bna_rx_vlanfilter_enable(struct bna_rx *rx);
-void bna_rx_hds_enable(struct bna_rx *rx, struct bna_rxf_hds *hds_config,
-		       void (*cbfn)(struct bnad *, struct bna_rx *,
-				    enum bna_cb_status));
+void bna_rx_hds_enable(struct bna_rx *rx, struct bna_hds_config *hds_config,
+		       void (*cbfn)(struct bnad *, struct bna_rx *));
 void bna_rx_hds_disable(struct bna_rx *rx,
-			void (*cbfn)(struct bnad *, struct bna_rx *,
-				     enum bna_cb_status));
+			void (*cbfn)(struct bnad *, struct bna_rx *));
+
+/**
+ * ENET
+ */
+
+/* FW response handlers */
+void bna_bfi_pause_set_rsp(struct bna_enet *enet,
+			   struct bfi_msgq_mhdr *msghdr);
+
+/* API for RX */
+int bna_enet_mtu_get(struct bna_enet *enet);
+
+/* Callbacks for ETHPORT, TX, RX */
+void bna_enet_cb_ethport_stopped(struct bna_enet *enet);
+void bna_enet_cb_tx_stopped(struct bna_enet *enet);
+void bna_enet_cb_rx_stopped(struct bna_enet *enet);
+
+/* APIs for IOCETH */
+void bna_enet_start(struct bna_enet *enet);
+void bna_enet_stop(struct bna_enet *enet);
+void bna_enet_fail(struct bna_enet *enet);
+
+/* APIs for BNA */
+void bna_enet_init(struct bna_enet *enet, struct bna *bna);
+void bna_enet_uninit(struct bna_enet *enet);
+
+/* API for BNAD */
+void bna_enet_enable(struct bna_enet *enet);
+void bna_enet_disable(struct bna_enet *enet, enum bna_cleanup_type type,
+		      void (*cbfn)(void *));
+void bna_enet_pause_config(struct bna_enet *enet,
+			   struct bna_pause_config *pause_config,
+			   void (*cbfn)(struct bnad *));
+void bna_enet_mtu_set(struct bna_enet *enet, int mtu,
+		      void (*cbfn)(struct bnad *));
+void bna_enet_perm_mac_get(struct bna_enet *enet, mac_t *mac);
+void bna_enet_type_set(struct bna_enet *enet, enum bna_enet_type type);
+enum bna_enet_type bna_enet_type_get(struct bna_enet *enet);
+
+/**
+ * IOCETH
+ */
+
+/* FW response handlers */
+void bna_bfi_attr_get_rsp(struct bna_ioceth *ioceth,
+			  struct bfi_msgq_mhdr *msghdr);
+
+/* APIs for ENET etc */
+void bna_ioceth_cb_enet_stopped(void *arg);
+
+/* APIs for BNA */
+void bna_ioceth_init(struct bna_ioceth *ioceth, struct bna *bna,
+		     struct bna_res_info *res_info);
+void bna_ioceth_uninit(struct bna_ioceth *ioceth);
+bool bna_ioceth_state_is_failed(struct bna_ioceth *ioceth);
+
+/* APIs for BNAD */
+void bna_ioceth_enable(struct bna_ioceth *ioceth);
+void bna_ioceth_disable(struct bna_ioceth *ioceth,
+			enum bna_cleanup_type type);
 
 /**
  * BNAD
  */
 
+/* Callbacks for ENET */
+void bnad_cb_ethport_link_status(struct bnad *bnad,
+			      enum bna_link_status status);
+
+/* Callbacks for IOCETH */
+void bnad_cb_ioceth_ready(struct bnad *bnad);
+void bnad_cb_ioceth_failed(struct bnad *bnad);
+void bnad_cb_ioceth_disabled(struct bnad *bnad);
+void bnad_cb_mbox_intr_enable(struct bnad *bnad);
+void bnad_cb_mbox_intr_disable(struct bnad *bnad);
+
 /* Callbacks for BNA */
 void bnad_cb_stats_get(struct bnad *bnad, enum bna_cb_status status,
 		       struct bna_stats *stats);
 
-/* Callbacks for DEVICE */
-void bnad_cb_device_enabled(struct bnad *bnad, enum bna_cb_status status);
-void bnad_cb_device_disabled(struct bnad *bnad, enum bna_cb_status status);
-void bnad_cb_device_enable_mbox_intr(struct bnad *bnad);
-void bnad_cb_device_disable_mbox_intr(struct bnad *bnad);
-
-/* Callbacks for port */
-void bnad_cb_port_link_status(struct bnad *bnad,
-			      enum bna_link_status status);
-
 #endif  /* __BNA_H__ */
diff --git a/drivers/net/bna/bna_hw.h b/drivers/net/bna/bna_hw.h
index cad233d..80b21ee 100644
--- a/drivers/net/bna/bna_hw.h
+++ b/drivers/net/bna/bna_hw.h
@@ -14,14 +14,16 @@
  * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
  * All rights reserved
  * www.brocade.com
- *
+ */
+
+/**
  * File for interrupt macros and functions
  */
 
 #ifndef __BNA_HW_H__
 #define __BNA_HW_H__
 
-#include "bfi_ctreg.h"
+#include "bfi_reg.h"
 
 /**
  *
@@ -29,98 +31,16 @@
  *
  */
 
-#ifndef BNA_BIOS_BUILD
-
-#define BFI_MAX_TXQ			64
-#define BFI_MAX_RXQ			64
-#define	BFI_MAX_RXF			64
-#define BFI_MAX_IB			128
-#define	BFI_MAX_RIT_SIZE		256
-#define	BFI_RSS_RIT_SIZE		64
-#define	BFI_NONRSS_RIT_SIZE		1
-#define BFI_MAX_UCMAC			256
-#define BFI_MAX_MCMAC			512
-#define BFI_IBIDX_SIZE			4
-#define BFI_MAX_VLAN			4095
+#define BFI_ENET_MAX_MCAM		256
 
-/**
- * There are 2 free IB index pools:
- *	pool1: 120 segments of 1 index each
- *	pool8: 1 segment of 8 indexes
- */
-#define BFI_IBIDX_POOL1_SIZE		116
-#define	BFI_IBIDX_POOL1_ENTRY_SIZE	1
-#define BFI_IBIDX_POOL2_SIZE		2
-#define	BFI_IBIDX_POOL2_ENTRY_SIZE	2
-#define	BFI_IBIDX_POOL8_SIZE		1
-#define	BFI_IBIDX_POOL8_ENTRY_SIZE	8
-#define	BFI_IBIDX_TOTAL_POOLS		3
-#define	BFI_IBIDX_TOTAL_SEGS		119 /* (POOL1 + POOL2 + POOL8)_SIZE */
-#define	BFI_IBIDX_MAX_SEGSIZE		8
-#define init_ibidx_pool(name)						\
-static struct bna_ibidx_pool name[BFI_IBIDX_TOTAL_POOLS] =		\
-{									\
-	{ BFI_IBIDX_POOL1_SIZE, BFI_IBIDX_POOL1_ENTRY_SIZE },		\
-	{ BFI_IBIDX_POOL2_SIZE, BFI_IBIDX_POOL2_ENTRY_SIZE },		\
-	{ BFI_IBIDX_POOL8_SIZE, BFI_IBIDX_POOL8_ENTRY_SIZE }		\
-}
+#define BFI_INVALID_RID			-1
 
-/**
- * There are 2 free RIT segment pools:
- *	Pool1: 192 segments of 1 RIT entry each
- *	Pool2: 1 segment of 64 RIT entry
- */
-#define BFI_RIT_SEG_POOL1_SIZE		192
-#define BFI_RIT_SEG_POOL1_ENTRY_SIZE	1
-#define BFI_RIT_SEG_POOLRSS_SIZE	1
-#define BFI_RIT_SEG_POOLRSS_ENTRY_SIZE	64
-#define BFI_RIT_SEG_TOTAL_POOLS		2
-#define BFI_RIT_TOTAL_SEGS		193 /* POOL1_SIZE + POOLRSS_SIZE */
-#define init_ritseg_pool(name)						\
-static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
-{									\
-	{ BFI_RIT_SEG_POOL1_SIZE, BFI_RIT_SEG_POOL1_ENTRY_SIZE },	\
-	{ BFI_RIT_SEG_POOLRSS_SIZE, BFI_RIT_SEG_POOLRSS_ENTRY_SIZE }	\
-}
-
-#else /* BNA_BIOS_BUILD */
-
-#define BFI_MAX_TXQ			1
-#define BFI_MAX_RXQ			1
-#define	BFI_MAX_RXF			1
-#define BFI_MAX_IB			2
-#define	BFI_MAX_RIT_SIZE		2
-#define	BFI_RSS_RIT_SIZE		64
-#define	BFI_NONRSS_RIT_SIZE		1
-#define BFI_MAX_UCMAC			1
-#define BFI_MAX_MCMAC			8
 #define BFI_IBIDX_SIZE			4
-#define BFI_MAX_VLAN			4095
-/* There is one free pool: 2 segments of 1 index each */
-#define BFI_IBIDX_POOL1_SIZE		2
-#define	BFI_IBIDX_POOL1_ENTRY_SIZE	1
-#define	BFI_IBIDX_TOTAL_POOLS		1
-#define	BFI_IBIDX_TOTAL_SEGS		2 /* POOL1_SIZE */
-#define	BFI_IBIDX_MAX_SEGSIZE		1
-#define init_ibidx_pool(name)						\
-static struct bna_ibidx_pool name[BFI_IBIDX_TOTAL_POOLS] =		\
-{									\
-	{ BFI_IBIDX_POOL1_SIZE, BFI_IBIDX_POOL1_ENTRY_SIZE }		\
-}
 
-#define BFI_RIT_SEG_POOL1_SIZE		1
-#define BFI_RIT_SEG_POOL1_ENTRY_SIZE	1
-#define BFI_RIT_SEG_TOTAL_POOLS		1
-#define BFI_RIT_TOTAL_SEGS		1 /* POOL1_SIZE */
-#define init_ritseg_pool(name)						\
-static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
-{									\
-	{ BFI_RIT_SEG_POOL1_SIZE, BFI_RIT_SEG_POOL1_ENTRY_SIZE }	\
-}
-
-#endif /* BNA_BIOS_BUILD */
-
-#define BFI_RSS_HASH_KEY_LEN		10
+#define BFI_VLAN_WORD_SHIFT		5	/* 32 bits */
+#define BFI_VLAN_WORD_MASK		0x1F
+#define BFI_VLAN_BLOCK_SHIFT		9	/* 512 bits */
+#define BFI_VLAN_BMASK_ALL		0xFF
 
 #define BFI_COALESCING_TIMER_UNIT	5	/* 5us */
 #define BFI_MAX_COALESCING_TIMEO	0xFF	/* in 5us units */
@@ -145,1018 +65,215 @@ static struct bna_ritseg_pool_cfg name[BFI_RIT_SEG_TOTAL_POOLS] =	\
 /* Small Q buffer size */
 #define BFI_SMALL_RXBUF_SIZE		128
 
-/* Defined separately since BFA_FLASH_DMA_BUF_SZ is in bfa_flash.c */
-#define BFI_FLASH_DMA_BUF_SZ		0x010000 /* 64K DMA */
-#define BFI_HW_STATS_SIZE		0x4000 /* 16K DMA */
-
-/**
- *
- * HW register offsets, macros
- *
- */
-
-/* DMA Block Register Host Window Start Address */
-#define DMA_BLK_REG_ADDR		0x00013000
-
-/* DMA Block Internal Registers */
-#define DMA_CTRL_REG0			(DMA_BLK_REG_ADDR + 0x000)
-#define DMA_CTRL_REG1			(DMA_BLK_REG_ADDR + 0x004)
-#define DMA_ERR_INT_STATUS		(DMA_BLK_REG_ADDR + 0x008)
-#define DMA_ERR_INT_ENABLE		(DMA_BLK_REG_ADDR + 0x00c)
-#define DMA_ERR_INT_STATUS_SET		(DMA_BLK_REG_ADDR + 0x010)
-
-/* APP Block Register Address Offset from BAR0 */
-#define APP_BLK_REG_ADDR		0x00014000
-
-/* Host Function Interrupt Mask Registers */
-#define HOSTFN0_INT_MASK		(APP_BLK_REG_ADDR + 0x004)
-#define HOSTFN1_INT_MASK		(APP_BLK_REG_ADDR + 0x104)
-#define HOSTFN2_INT_MASK		(APP_BLK_REG_ADDR + 0x304)
-#define HOSTFN3_INT_MASK		(APP_BLK_REG_ADDR + 0x404)
-
-/**
- * Host Function PCIe Error Registers
- * Duplicates "Correctable" & "Uncorrectable"
- * registers in PCIe Config space.
- */
-#define FN0_PCIE_ERR_REG		(APP_BLK_REG_ADDR + 0x014)
-#define FN1_PCIE_ERR_REG		(APP_BLK_REG_ADDR + 0x114)
-#define FN2_PCIE_ERR_REG		(APP_BLK_REG_ADDR + 0x314)
-#define FN3_PCIE_ERR_REG		(APP_BLK_REG_ADDR + 0x414)
-
-/* Host Function Error Type Status Registers */
-#define FN0_ERR_TYPE_STATUS_REG		(APP_BLK_REG_ADDR + 0x018)
-#define FN1_ERR_TYPE_STATUS_REG		(APP_BLK_REG_ADDR + 0x118)
-#define FN2_ERR_TYPE_STATUS_REG		(APP_BLK_REG_ADDR + 0x318)
-#define FN3_ERR_TYPE_STATUS_REG		(APP_BLK_REG_ADDR + 0x418)
-
-/* Host Function Error Type Mask Registers */
-#define FN0_ERR_TYPE_MSK_STATUS_REG	(APP_BLK_REG_ADDR + 0x01c)
-#define FN1_ERR_TYPE_MSK_STATUS_REG	(APP_BLK_REG_ADDR + 0x11c)
-#define FN2_ERR_TYPE_MSK_STATUS_REG	(APP_BLK_REG_ADDR + 0x31c)
-#define FN3_ERR_TYPE_MSK_STATUS_REG	(APP_BLK_REG_ADDR + 0x41c)
-
-/* Catapult Host Semaphore Status Registers (App block) */
-#define HOST_SEM_STS0_REG		(APP_BLK_REG_ADDR + 0x630)
-#define HOST_SEM_STS1_REG		(APP_BLK_REG_ADDR + 0x634)
-#define HOST_SEM_STS2_REG		(APP_BLK_REG_ADDR + 0x638)
-#define HOST_SEM_STS3_REG		(APP_BLK_REG_ADDR + 0x63c)
-#define HOST_SEM_STS4_REG		(APP_BLK_REG_ADDR + 0x640)
-#define HOST_SEM_STS5_REG		(APP_BLK_REG_ADDR + 0x644)
-#define HOST_SEM_STS6_REG		(APP_BLK_REG_ADDR + 0x648)
-#define HOST_SEM_STS7_REG		(APP_BLK_REG_ADDR + 0x64c)
-
-/* PCIe Misc Register */
-#define PCIE_MISC_REG			(APP_BLK_REG_ADDR + 0x200)
-
-/* Temp Sensor Control Registers */
-#define TEMPSENSE_CNTL_REG		(APP_BLK_REG_ADDR + 0x250)
-#define TEMPSENSE_STAT_REG		(APP_BLK_REG_ADDR + 0x254)
-
-/* APP Block local error registers */
-#define APP_LOCAL_ERR_STAT		(APP_BLK_REG_ADDR + 0x258)
-#define APP_LOCAL_ERR_MSK		(APP_BLK_REG_ADDR + 0x25c)
-
-/* PCIe Link Error registers */
-#define PCIE_LNK_ERR_STAT		(APP_BLK_REG_ADDR + 0x260)
-#define PCIE_LNK_ERR_MSK		(APP_BLK_REG_ADDR + 0x264)
-
-/**
- * FCoE/FIP Ethertype Register
- * 31:16 -- Chip wide value for FIP type
- * 15:0  -- Chip wide value for FCoE type
- */
-#define FCOE_FIP_ETH_TYPE		(APP_BLK_REG_ADDR + 0x280)
-
-/**
- * Reserved Ethertype Register
- * 31:16 -- Reserved
- * 15:0  -- Other ethertype
- */
-#define RESV_ETH_TYPE			(APP_BLK_REG_ADDR + 0x284)
-
-/**
- * Host Command Status Registers
- * Each set consists of 3 registers:
- * clear, set, cmd
- * 16 such register sets in all
- * See catapult_spec.pdf for detailed functionality
- * Put each type in a single macro accessed by _num?
- */
-#define HOST_CMDSTS0_CLR_REG		(APP_BLK_REG_ADDR + 0x500)
-#define HOST_CMDSTS0_SET_REG		(APP_BLK_REG_ADDR + 0x504)
-#define HOST_CMDSTS0_REG		(APP_BLK_REG_ADDR + 0x508)
-#define HOST_CMDSTS1_CLR_REG		(APP_BLK_REG_ADDR + 0x510)
-#define HOST_CMDSTS1_SET_REG		(APP_BLK_REG_ADDR + 0x514)
-#define HOST_CMDSTS1_REG		(APP_BLK_REG_ADDR + 0x518)
-#define HOST_CMDSTS2_CLR_REG		(APP_BLK_REG_ADDR + 0x520)
-#define HOST_CMDSTS2_SET_REG		(APP_BLK_REG_ADDR + 0x524)
-#define HOST_CMDSTS2_REG		(APP_BLK_REG_ADDR + 0x528)
-#define HOST_CMDSTS3_CLR_REG		(APP_BLK_REG_ADDR + 0x530)
-#define HOST_CMDSTS3_SET_REG		(APP_BLK_REG_ADDR + 0x534)
-#define HOST_CMDSTS3_REG		(APP_BLK_REG_ADDR + 0x538)
-#define HOST_CMDSTS4_CLR_REG		(APP_BLK_REG_ADDR + 0x540)
-#define HOST_CMDSTS4_SET_REG		(APP_BLK_REG_ADDR + 0x544)
-#define HOST_CMDSTS4_REG		(APP_BLK_REG_ADDR + 0x548)
-#define HOST_CMDSTS5_CLR_REG		(APP_BLK_REG_ADDR + 0x550)
-#define HOST_CMDSTS5_SET_REG		(APP_BLK_REG_ADDR + 0x554)
-#define HOST_CMDSTS5_REG		(APP_BLK_REG_ADDR + 0x558)
-#define HOST_CMDSTS6_CLR_REG		(APP_BLK_REG_ADDR + 0x560)
-#define HOST_CMDSTS6_SET_REG		(APP_BLK_REG_ADDR + 0x564)
-#define HOST_CMDSTS6_REG		(APP_BLK_REG_ADDR + 0x568)
-#define HOST_CMDSTS7_CLR_REG		(APP_BLK_REG_ADDR + 0x570)
-#define HOST_CMDSTS7_SET_REG		(APP_BLK_REG_ADDR + 0x574)
-#define HOST_CMDSTS7_REG		(APP_BLK_REG_ADDR + 0x578)
-#define HOST_CMDSTS8_CLR_REG		(APP_BLK_REG_ADDR + 0x580)
-#define HOST_CMDSTS8_SET_REG		(APP_BLK_REG_ADDR + 0x584)
-#define HOST_CMDSTS8_REG		(APP_BLK_REG_ADDR + 0x588)
-#define HOST_CMDSTS9_CLR_REG		(APP_BLK_REG_ADDR + 0x590)
-#define HOST_CMDSTS9_SET_REG		(APP_BLK_REG_ADDR + 0x594)
-#define HOST_CMDSTS9_REG		(APP_BLK_REG_ADDR + 0x598)
-#define HOST_CMDSTS10_CLR_REG		(APP_BLK_REG_ADDR + 0x5A0)
-#define HOST_CMDSTS10_SET_REG		(APP_BLK_REG_ADDR + 0x5A4)
-#define HOST_CMDSTS10_REG		(APP_BLK_REG_ADDR + 0x5A8)
-#define HOST_CMDSTS11_CLR_REG		(APP_BLK_REG_ADDR + 0x5B0)
-#define HOST_CMDSTS11_SET_REG		(APP_BLK_REG_ADDR + 0x5B4)
-#define HOST_CMDSTS11_REG		(APP_BLK_REG_ADDR + 0x5B8)
-#define HOST_CMDSTS12_CLR_REG		(APP_BLK_REG_ADDR + 0x5C0)
-#define HOST_CMDSTS12_SET_REG		(APP_BLK_REG_ADDR + 0x5C4)
-#define HOST_CMDSTS12_REG		(APP_BLK_REG_ADDR + 0x5C8)
-#define HOST_CMDSTS13_CLR_REG		(APP_BLK_REG_ADDR + 0x5D0)
-#define HOST_CMDSTS13_SET_REG		(APP_BLK_REG_ADDR + 0x5D4)
-#define HOST_CMDSTS13_REG		(APP_BLK_REG_ADDR + 0x5D8)
-#define HOST_CMDSTS14_CLR_REG		(APP_BLK_REG_ADDR + 0x5E0)
-#define HOST_CMDSTS14_SET_REG		(APP_BLK_REG_ADDR + 0x5E4)
-#define HOST_CMDSTS14_REG		(APP_BLK_REG_ADDR + 0x5E8)
-#define HOST_CMDSTS15_CLR_REG		(APP_BLK_REG_ADDR + 0x5F0)
-#define HOST_CMDSTS15_SET_REG		(APP_BLK_REG_ADDR + 0x5F4)
-#define HOST_CMDSTS15_REG		(APP_BLK_REG_ADDR + 0x5F8)
-
-/**
- * LPU0 Block Register Address Offset from BAR0
- * Range 0x18000 - 0x18033
- */
-#define LPU0_BLK_REG_ADDR		0x00018000
-
-/**
- * LPU0 Registers
- * Should these be used directly from the host,
- * except for diagnostics?
- * CTL_REG : Control register
- * CMD_REG : Triggers exec. of cmd. in
- *           Mailbox memory
- */
-#define LPU0_MBOX_CTL_REG		(LPU0_BLK_REG_ADDR + 0x000)
-#define LPU0_MBOX_CMD_REG		(LPU0_BLK_REG_ADDR + 0x004)
-#define LPU0_MBOX_LINK_0REG		(LPU0_BLK_REG_ADDR + 0x008)
-#define LPU1_MBOX_LINK_0REG		(LPU0_BLK_REG_ADDR + 0x00c)
-#define LPU0_MBOX_STATUS_0REG		(LPU0_BLK_REG_ADDR + 0x010)
-#define LPU1_MBOX_STATUS_0REG		(LPU0_BLK_REG_ADDR + 0x014)
-#define LPU0_ERR_STATUS_REG		(LPU0_BLK_REG_ADDR + 0x018)
-#define LPU0_ERR_SET_REG		(LPU0_BLK_REG_ADDR + 0x020)
-
-/**
- * LPU1 Block Register Address Offset from BAR0
- * Range 0x18400 - 0x18433
- */
-#define LPU1_BLK_REG_ADDR		0x00018400
-
-/**
- * LPU1 Registers
- * Same as LPU0 registers above
- */
-#define LPU1_MBOX_CTL_REG		(LPU1_BLK_REG_ADDR + 0x000)
-#define LPU1_MBOX_CMD_REG		(LPU1_BLK_REG_ADDR + 0x004)
-#define LPU0_MBOX_LINK_1REG		(LPU1_BLK_REG_ADDR + 0x008)
-#define LPU1_MBOX_LINK_1REG		(LPU1_BLK_REG_ADDR + 0x00c)
-#define LPU0_MBOX_STATUS_1REG		(LPU1_BLK_REG_ADDR + 0x010)
-#define LPU1_MBOX_STATUS_1REG		(LPU1_BLK_REG_ADDR + 0x014)
-#define LPU1_ERR_STATUS_REG		(LPU1_BLK_REG_ADDR + 0x018)
-#define LPU1_ERR_SET_REG		(LPU1_BLK_REG_ADDR + 0x020)
-
-/**
- * PSS Block Register Address Offset from BAR0
- * Range 0x18800 - 0x188DB
- */
-#define PSS_BLK_REG_ADDR		0x00018800
-
-/**
- * PSS Registers
- * For details, see catapult_spec.pdf
- * ERR_STATUS_REG : Indicates error in PSS module
- * RAM_ERR_STATUS_REG : Indicates RAM module that detected error
- */
-#define ERR_STATUS_SET			(PSS_BLK_REG_ADDR + 0x018)
-#define PSS_RAM_ERR_STATUS_REG		(PSS_BLK_REG_ADDR + 0x01C)
-
-/**
- * PSS Semaphore Lock Registers, total 16
- * The first read when unlocked returns 0
- * and atomically sets the register to 1.
- * Subsequent reads return 1.
- * To clear, set the value to 0.
- * Range : 0x20 to 0x5c
- */
-#define PSS_SEM_LOCK_REG(_num)		\
-	(PSS_BLK_REG_ADDR + 0x020 + ((_num) << 2))
-
-/**
- * PSS Semaphore Status Registers,
- * corresponding to the lock registers above
- */
-#define PSS_SEM_STATUS_REG(_num)		\
-	(PSS_BLK_REG_ADDR + 0x060 + ((_num) << 2))
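
The lock protocol described above amounts to a test-and-set read; a
minimal sketch (not from the driver -- the helper names and the BAR
pointer are assumptions):

	/* 'bar0' is an ioremap()ed BAR0; a read of 0 means the lock
	 * was free and is now held, a read of 1 means it is busy. */
	static bool pss_sem_trylock(void __iomem *bar0, int num)
	{
		return readl(bar0 + PSS_SEM_LOCK_REG(num)) == 0;
	}

	static void pss_sem_unlock(void __iomem *bar0, int num)
	{
		writel(0, bar0 + PSS_SEM_LOCK_REG(num));	/* release */
	}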
-
-/**
- * Catapult CPQ Registers
- * Defines for Mailbox Registers
- * Used to send mailbox commands to firmware from
- * host. The data part is written to the MBox
- * memory, registers are used to indicate that
- * a commnad is resident in memory.
- *
- * Note : LPU0<->LPU1 mailboxes are not listed here
- */
-#define CPQ_BLK_REG_ADDR		0x00019000
-
-#define HOSTFN0_LPU0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x130)
-#define HOSTFN0_LPU1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x134)
-#define LPU0_HOSTFN0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x138)
-#define LPU1_HOSTFN0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x13C)
-
-#define HOSTFN1_LPU0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x140)
-#define HOSTFN1_LPU1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x144)
-#define LPU0_HOSTFN1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x148)
-#define LPU1_HOSTFN1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x14C)
-
-#define HOSTFN2_LPU0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x170)
-#define HOSTFN2_LPU1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x174)
-#define LPU0_HOSTFN2_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x178)
-#define LPU1_HOSTFN2_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x17C)
-
-#define HOSTFN3_LPU0_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x180)
-#define HOSTFN3_LPU1_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x184)
-#define LPU0_HOSTFN3_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x188)
-#define LPU1_HOSTFN3_MBOX1_CMD_STAT	(CPQ_BLK_REG_ADDR + 0x18C)
-
-/* Host Function Force Parity Error Registers */
-#define HOSTFN0_LPU_FORCE_PERR		(CPQ_BLK_REG_ADDR + 0x120)
-#define HOSTFN1_LPU_FORCE_PERR		(CPQ_BLK_REG_ADDR + 0x124)
-#define HOSTFN2_LPU_FORCE_PERR		(CPQ_BLK_REG_ADDR + 0x128)
-#define HOSTFN3_LPU_FORCE_PERR		(CPQ_BLK_REG_ADDR + 0x12C)
-
-/* LL Port[0|1] Halt Mask Registers */
-#define LL_HALT_MSK_P0			(CPQ_BLK_REG_ADDR + 0x1A0)
-#define LL_HALT_MSK_P1			(CPQ_BLK_REG_ADDR + 0x1B0)
-
-/* LL Port[0|1] Error Mask Registers */
-#define LL_ERR_MSK_P0			(CPQ_BLK_REG_ADDR + 0x1D0)
-#define LL_ERR_MSK_P1			(CPQ_BLK_REG_ADDR + 0x1D4)
-
-/* EMC FLI (Flash Controller) Block Register Address Offset from BAR0 */
-#define FLI_BLK_REG_ADDR		0x0001D000
-
-/* EMC FLI Registers */
-#define FLI_CMD_REG			(FLI_BLK_REG_ADDR + 0x000)
-#define FLI_ADDR_REG			(FLI_BLK_REG_ADDR + 0x004)
-#define FLI_CTL_REG			(FLI_BLK_REG_ADDR + 0x008)
-#define FLI_WRDATA_REG			(FLI_BLK_REG_ADDR + 0x00C)
-#define FLI_RDDATA_REG			(FLI_BLK_REG_ADDR + 0x010)
-#define FLI_DEV_STATUS_REG		(FLI_BLK_REG_ADDR + 0x014)
-#define FLI_SIG_WD_REG			(FLI_BLK_REG_ADDR + 0x018)
-
-/**
- * RO register
- * 31:16 -- Vendor Id
- * 15:0  -- Device Id
- */
-#define FLI_DEV_VENDOR_REG		(FLI_BLK_REG_ADDR + 0x01C)
-#define FLI_ERR_STATUS_REG		(FLI_BLK_REG_ADDR + 0x020)
-
-/**
- * RAD (RxAdm) Block Register Address Offset from BAR0
- * RAD0 Range : 0x20000 - 0x203FF
- * RAD1 Range : 0x20400 - 0x207FF
- */
-#define RAD0_BLK_REG_ADDR		0x00020000
-#define RAD1_BLK_REG_ADDR		0x00020400
-
-/* RAD0 Registers */
-#define RAD0_CTL_REG			(RAD0_BLK_REG_ADDR + 0x000)
-#define RAD0_PE_PARM_REG		(RAD0_BLK_REG_ADDR + 0x004)
-#define RAD0_BCN_REG			(RAD0_BLK_REG_ADDR + 0x008)
-
-/* Default function ID register */
-#define RAD0_DEFAULT_REG		(RAD0_BLK_REG_ADDR + 0x00C)
-
-/* Default promiscuous ID register */
-#define RAD0_PROMISC_REG		(RAD0_BLK_REG_ADDR + 0x010)
-
-#define RAD0_BCNQ_REG			(RAD0_BLK_REG_ADDR + 0x014)
-
-/*
- * This register selects 1 of 8 PM Q's using
- * VLAN pri, for non-BCN packets without a VLAN tag
- */
-#define RAD0_DEFAULTQ_REG		(RAD0_BLK_REG_ADDR + 0x018)
-
-#define RAD0_ERR_STS			(RAD0_BLK_REG_ADDR + 0x01C)
-#define RAD0_SET_ERR_STS		(RAD0_BLK_REG_ADDR + 0x020)
-#define RAD0_ERR_INT_EN			(RAD0_BLK_REG_ADDR + 0x024)
-#define RAD0_FIRST_ERR			(RAD0_BLK_REG_ADDR + 0x028)
-#define RAD0_FORCE_ERR			(RAD0_BLK_REG_ADDR + 0x02C)
-
-#define RAD0_IF_RCVD			(RAD0_BLK_REG_ADDR + 0x030)
-#define RAD0_IF_RCVD_OCTETS_HIGH	(RAD0_BLK_REG_ADDR + 0x034)
-#define RAD0_IF_RCVD_OCTETS_LOW		(RAD0_BLK_REG_ADDR + 0x038)
-#define RAD0_IF_RCVD_VLAN		(RAD0_BLK_REG_ADDR + 0x03C)
-#define RAD0_IF_RCVD_UCAST		(RAD0_BLK_REG_ADDR + 0x040)
-#define RAD0_IF_RCVD_UCAST_OCTETS_HIGH	(RAD0_BLK_REG_ADDR + 0x044)
-#define RAD0_IF_RCVD_UCAST_OCTETS_LOW   (RAD0_BLK_REG_ADDR + 0x048)
-#define RAD0_IF_RCVD_UCAST_VLAN		(RAD0_BLK_REG_ADDR + 0x04C)
-#define RAD0_IF_RCVD_MCAST		(RAD0_BLK_REG_ADDR + 0x050)
-#define RAD0_IF_RCVD_MCAST_OCTETS_HIGH  (RAD0_BLK_REG_ADDR + 0x054)
-#define RAD0_IF_RCVD_MCAST_OCTETS_LOW   (RAD0_BLK_REG_ADDR + 0x058)
-#define RAD0_IF_RCVD_MCAST_VLAN		(RAD0_BLK_REG_ADDR + 0x05C)
-#define RAD0_IF_RCVD_BCAST		(RAD0_BLK_REG_ADDR + 0x060)
-#define RAD0_IF_RCVD_BCAST_OCTETS_HIGH  (RAD0_BLK_REG_ADDR + 0x064)
-#define RAD0_IF_RCVD_BCAST_OCTETS_LOW   (RAD0_BLK_REG_ADDR + 0x068)
-#define RAD0_IF_RCVD_BCAST_VLAN		(RAD0_BLK_REG_ADDR + 0x06C)
-#define RAD0_DROPPED_FRAMES		(RAD0_BLK_REG_ADDR + 0x070)
-
-#define RAD0_MAC_MAN_1H			(RAD0_BLK_REG_ADDR + 0x080)
-#define RAD0_MAC_MAN_1L			(RAD0_BLK_REG_ADDR + 0x084)
-#define RAD0_MAC_MAN_2H			(RAD0_BLK_REG_ADDR + 0x088)
-#define RAD0_MAC_MAN_2L			(RAD0_BLK_REG_ADDR + 0x08C)
-#define RAD0_MAC_MAN_3H			(RAD0_BLK_REG_ADDR + 0x090)
-#define RAD0_MAC_MAN_3L			(RAD0_BLK_REG_ADDR + 0x094)
-#define RAD0_MAC_MAN_4H			(RAD0_BLK_REG_ADDR + 0x098)
-#define RAD0_MAC_MAN_4L			(RAD0_BLK_REG_ADDR + 0x09C)
-
-#define RAD0_LAST4_IP			(RAD0_BLK_REG_ADDR + 0x100)
-
-/* RAD1 Registers */
-#define RAD1_CTL_REG			(RAD1_BLK_REG_ADDR + 0x000)
-#define RAD1_PE_PARM_REG		(RAD1_BLK_REG_ADDR + 0x004)
-#define RAD1_BCN_REG			(RAD1_BLK_REG_ADDR + 0x008)
-
-/* Default function ID register */
-#define RAD1_DEFAULT_REG		(RAD1_BLK_REG_ADDR + 0x00C)
-
-/* Promiscuous function ID register */
-#define RAD1_PROMISC_REG		(RAD1_BLK_REG_ADDR + 0x010)
-
-#define RAD1_BCNQ_REG			(RAD1_BLK_REG_ADDR + 0x014)
+#define BFI_TX_MAX_PRIO			8
+#define BFI_TX_PRIO_MAP_ALL		0xFF
 
 /*
- * This register selects 1 of 8 PM Q's using
- * VLAN pri, for non-BCN packets without a VLAN tag
- */
-#define RAD1_DEFAULTQ_REG		(RAD1_BLK_REG_ADDR + 0x018)
-
-#define RAD1_ERR_STS			(RAD1_BLK_REG_ADDR + 0x01C)
-#define RAD1_SET_ERR_STS		(RAD1_BLK_REG_ADDR + 0x020)
-#define RAD1_ERR_INT_EN			(RAD1_BLK_REG_ADDR + 0x024)
-
-/**
- * TXA Block Register Address Offset from BAR0
- * TXA0 Range : 0x21000 - 0x213FF
- * TXA1 Range : 0x21400 - 0x217FF
- */
-#define TXA0_BLK_REG_ADDR		0x00021000
-#define TXA1_BLK_REG_ADDR		0x00021400
-
-/* TXA Registers */
-#define TXA0_CTRL_REG			(TXA0_BLK_REG_ADDR + 0x000)
-#define TXA1_CTRL_REG			(TXA1_BLK_REG_ADDR + 0x000)
-
-/**
- * TSO Sequence # Registers (RO)
- * Total 8 (for 8 queues)
- * Holds the last seq.# for TSO frames
- * See catapult_spec.pdf for more details
- */
-#define TXA0_TSO_TCP_SEQ_REG(_num)		\
-	(TXA0_BLK_REG_ADDR + 0x020 + ((_num) << 2))
-
-#define TXA1_TSO_TCP_SEQ_REG(_num)		\
-	(TXA1_BLK_REG_ADDR + 0x020 + ((_num) << 2))
-
-/**
- * TSO IP ID # Registers (RO)
- * Total 8 (for 8 queues)
- * Holds the last IP ID for TSO frames
- * See catapult_spec.pdf for more details
- */
-#define TXA0_TSO_IP_INFO_REG(_num)		\
-	(TXA0_BLK_REG_ADDR + 0x040 + ((_num) << 2))
-
-#define TXA1_TSO_IP_INFO_REG(_num)		\
-	(TXA1_BLK_REG_ADDR + 0x040 + ((_num) << 2))
-
-/**
- * RXA Block Register Address Offset from BAR0
- * RXA0 Range : 0x21800 - 0x21BFF
- * RXA1 Range : 0x21C00 - 0x21FFF
- */
-#define RXA0_BLK_REG_ADDR		0x00021800
-#define RXA1_BLK_REG_ADDR		0x00021C00
-
-/* RXA Registers */
-#define RXA0_CTL_REG			(RXA0_BLK_REG_ADDR + 0x040)
-#define RXA1_CTL_REG			(RXA1_BLK_REG_ADDR + 0x040)
-
-/**
- * PPLB Block Register Address Offset from BAR0
- * PPLB0 Range : 0x22000 - 0x223FF
- * PPLB1 Range : 0x22400 - 0x227FF
- */
-#define PLB0_BLK_REG_ADDR		0x00022000
-#define PLB1_BLK_REG_ADDR		0x00022400
-
-/**
- * PLB Registers
- * Holds RL timer used time stamps in RLT tagged frames
- */
-#define PLB0_ECM_TIMER_REG		(PLB0_BLK_REG_ADDR + 0x05C)
-#define PLB1_ECM_TIMER_REG		(PLB1_BLK_REG_ADDR + 0x05C)
-
-/* Controls the rate-limiter on each of the priority class */
-#define PLB0_RL_CTL			(PLB0_BLK_REG_ADDR + 0x060)
-#define PLB1_RL_CTL			(PLB1_BLK_REG_ADDR + 0x060)
-
-/**
- * Max byte register, total 8, 0-7
- * see catapult_spec.pdf for details
- */
-#define PLB0_RL_MAX_BC(_num)			\
-	(PLB0_BLK_REG_ADDR + 0x064 + ((_num) << 2))
-#define PLB1_RL_MAX_BC(_num)			\
-	(PLB1_BLK_REG_ADDR + 0x064 + ((_num) << 2))
-
-/**
- * RL Time Unit Register for priority 0-7
- * 4 bits per priority
- * (2^rl_unit)*1us is the actual time period
- */
-#define PLB0_RL_TU_PRIO			(PLB0_BLK_REG_ADDR + 0x084)
-#define PLB1_RL_TU_PRIO			(PLB1_BLK_REG_ADDR + 0x084)
-
-/**
- * RL byte count register,
- * bytes transmitted in (rl_unit*1)us time period
- * 1 per priority, 8 in all, 0-7.
- */
-#define PLB0_RL_BYTE_CNT(_num)			\
-	(PLB0_BLK_REG_ADDR + 0x088 + ((_num) << 2))
-#define PLB1_RL_BYTE_CNT(_num)			\
-	(PLB1_BLK_REG_ADDR + 0x088 + ((_num) << 2))
-
-/**
- * RL Min factor register
- * 2 bits per priority,
- * 4 factors possible: 1, 0.5, 0.25, 0
- * 2'b00 - 0; 2'b01 - 0.25; 2'b10 - 0.5; 2'b11 - 1
- */
-#define PLB0_RL_MIN_REG			(PLB0_BLK_REG_ADDR + 0x0A8)
-#define PLB1_RL_MIN_REG			(PLB1_BLK_REG_ADDR + 0x0A8)
-
-/**
- * RL Max factor register
- * 2 bits per priority,
- * 4 factors possible: 1, 0.5, 0.25, 0
- * 2'b00 - 0; 2'b01 - 0.25; 2'b10 - 0.5; 2'b11 - 1
- */
-#define PLB0_RL_MAX_REG			(PLB0_BLK_REG_ADDR + 0x0AC)
-#define PLB1_RL_MAX_REG			(PLB1_BLK_REG_ADDR + 0x0AC)
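
Since min and max factors are packed two bits per priority, updating one
priority is a masked read-modify-write; a sketch under those assumptions
(the helper name is invented):

	/* Encode factor 2'b00=0, 2'b01=0.25, 2'b10=0.5, 2'b11=1 for
	 * 'prio' (0-7) into a PLBx_RL_MIN/MAX register value. */
	static u32 plb_rl_factor_set(u32 reg_val, int prio, u32 factor)
	{
		u32 shift = prio << 1;		/* 2 bits per priority */

		reg_val &= ~(0x3U << shift);	/* clear old factor */
		return reg_val | ((factor & 0x3U) << shift);
	}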
-
-/* MAC SERDES Address Paging register */
-#define PLB0_EMS_ADD_REG		(PLB0_BLK_REG_ADDR + 0xD0)
-#define PLB1_EMS_ADD_REG		(PLB1_BLK_REG_ADDR + 0xD0)
-
-/* LL EMS Registers */
-#define LL_EMS0_BLK_REG_ADDR		0x00026800
-#define LL_EMS1_BLK_REG_ADDR		0x00026C00
-
-/**
- * BPC Block Register Address Offset from BAR0
- * BPC0 Range : 0x23000 - 0x233FF
- * BPC1 Range : 0x23400 - 0x237FF
- */
-#define BPC0_BLK_REG_ADDR		0x00023000
-#define BPC1_BLK_REG_ADDR		0x00023400
-
-/**
- * PMM Block Register Address Offset from BAR0
- * PMM0 Range : 0x23800 - 0x23BFF
- * PMM1 Range : 0x23C00 - 0x23FFF
- */
-#define PMM0_BLK_REG_ADDR		0x00023800
-#define PMM1_BLK_REG_ADDR		0x00023C00
-
-/**
- * HQM Block Register Address Offset from BAR0
- * HQM0 Range : 0x24000 - 0x243FF
- * HQM1 Range : 0x24400 - 0x247FF
- */
-#define HQM0_BLK_REG_ADDR		0x00024000
-#define HQM1_BLK_REG_ADDR		0x00024400
-
-/**
- * HQM Control Register
- * Controls some aspects of IB
- * See catapult_spec.pdf for details
- */
-#define HQM0_CTL_REG			(HQM0_BLK_REG_ADDR + 0x000)
-#define HQM1_CTL_REG			(HQM1_BLK_REG_ADDR + 0x000)
-
-/**
- * HQM Stop Q Semaphore Registers.
- * Only one Queue resource can be stopped at
- * any given time. This register controls access
- * to the single stop Q resource.
- * See catapult_spec.pdf for details
- */
-#define HQM0_RXQ_STOP_SEM		(HQM0_BLK_REG_ADDR + 0x028)
-#define HQM0_TXQ_STOP_SEM		(HQM0_BLK_REG_ADDR + 0x02C)
-#define HQM1_RXQ_STOP_SEM		(HQM1_BLK_REG_ADDR + 0x028)
-#define HQM1_TXQ_STOP_SEM		(HQM1_BLK_REG_ADDR + 0x02C)
-
-/**
- * LUT Block Register Address Offset from BAR0
- * LUT0 Range : 0x25800 - 0x25BFF
- * LUT1 Range : 0x25C00 - 0x25FFF
- */
-#define LUT0_BLK_REG_ADDR		0x00025800
-#define LUT1_BLK_REG_ADDR		0x00025C00
-
-/**
- * LUT Registers
- * See catapult_spec.pdf for details
- */
-#define LUT0_ERR_STS			(LUT0_BLK_REG_ADDR + 0x000)
-#define LUT1_ERR_STS			(LUT1_BLK_REG_ADDR + 0x000)
-#define LUT0_SET_ERR_STS		(LUT0_BLK_REG_ADDR + 0x004)
-#define LUT1_SET_ERR_STS		(LUT1_BLK_REG_ADDR + 0x004)
-
-/**
- * TRC (Debug/Trace) Register Offset from BAR0
- * Range : 0x26000 -- 0x263FFF
- */
-#define TRC_BLK_REG_ADDR		0x00026000
-
-/**
- * TRC Registers
- * See catapult_spec.pdf for details of each
- */
-#define TRC_CTL_REG			(TRC_BLK_REG_ADDR + 0x000)
-#define TRC_MODS_REG			(TRC_BLK_REG_ADDR + 0x004)
-#define TRC_TRGC_REG			(TRC_BLK_REG_ADDR + 0x008)
-#define TRC_CNT1_REG			(TRC_BLK_REG_ADDR + 0x010)
-#define TRC_CNT2_REG			(TRC_BLK_REG_ADDR + 0x014)
-#define TRC_NXTS_REG			(TRC_BLK_REG_ADDR + 0x018)
-#define TRC_DIRR_REG			(TRC_BLK_REG_ADDR + 0x01C)
-
-/**
- * TRC Trigger match filters, total 10
- * Determines the trigger condition
- */
-#define TRC_TRGM_REG(_num)		\
-	(TRC_BLK_REG_ADDR + 0x040 + ((_num) << 2))
-
-/**
- * TRC Next State filters, total 10
- * Determines the next state conditions
- */
-#define TRC_NXTM_REG(_num)		\
-	(TRC_BLK_REG_ADDR + 0x080 + ((_num) << 2))
-
-/**
- * TRC Store Match filters, total 10
- * Determines the store conditions
- */
-#define TRC_STRM_REG(_num)		\
-	(TRC_BLK_REG_ADDR + 0x0C0 + ((_num) << 2))
-
-/* DOORBELLS ACCESS */
-
-/**
- * Catapult doorbells
- * Each doorbell-queue set has
- * 1 RxQ, 1 TxQ, 2 IBs in that order
- * Size of each entry is 32 bytes, even though only 1 word
- * is used. In the non-VM case each doorbell-q set is
- * separated by 128 bytes; in the VM case it is separated
- * by 4K bytes
- * Non VM case Range : 0x38000 - 0x39FFF
- * VM case Range     : 0x100000 - 0x11FFFF
- * The range applies to both HQMs
- */
-#define HQM_DOORBELL_BLK_BASE_ADDR	0x00038000
-#define HQM_DOORBELL_VM_BLK_BASE_ADDR	0x00100000
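
Given the 32-byte entries and the per-set strides above, the offset of a
doorbell-q set reduces to a shift; a sketch (the helper is invented, the
strides are taken from the comment):

	/* Each q-set holds 1 RxQ + 1 TxQ + 2 IB entries of 32 bytes;
	 * sets are 128 bytes apart (non-VM) or 4K apart (VM). */
	static u32 hqm_doorbell_qset_offset(int qset, bool vm)
	{
		return vm ? HQM_DOORBELL_VM_BLK_BASE_ADDR + (qset << 12) :
			    HQM_DOORBELL_BLK_BASE_ADDR + (qset << 7);
	}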
-
-/* MEMORY ACCESS */
-
-/**
- * Catapult H/W Block Memory Access Address
- * To the host a memory space of 32K (page) is visible
- * at a time. The address range is from 0x08000 to 0x0FFFF
- */
-#define HW_BLK_HOST_MEM_ADDR		0x08000
-
-/**
- * Catapult LUT Memory Access Page Numbers
- * Range : LUT0 0xa0-0xa1
- *         LUT1 0xa2-0xa3
- */
-#define LUT0_MEM_BLK_BASE_PG_NUM	0x000000A0
-#define LUT1_MEM_BLK_BASE_PG_NUM	0x000000A2
-
-/**
- * Catapult RxFn Database Memory Block Base Offset
  *
- * The Rx function database exists in LUT block.
- * In PCIe space this is accessible as a 256x32
- * bit block. Each entry in this database is 4
- * (4 byte) words. Max. entries is 64.
- * Address of an entry corresponding to a function
- * = base_addr + (function_no. * 16)
- */
-#define RX_FNDB_RAM_BASE_OFFSET		0x0000B400
-
-/**
- * Catapult TxFn Database Memory Block Base Offset Address
- *
- * The Tx function database exists in LUT block.
- * In PCIe space this is accessible as a 64x32
- * bit block. Each entry in this database is 1
- * (4 byte) word. Max. entries is 64.
- * Address of an entry corresponding to a function
- * = base_addr + (function_no. * 4)
- */
-#define TX_FNDB_RAM_BASE_OFFSET		0x0000B800
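
The two entry-address formulas above reduce to shifts; a sketch of the
indexing (the macro names are invented):

	/* Rx entries are 4 words (16 bytes), Tx entries 1 word (4 bytes). */
	#define RX_FNDB_ENTRY_OFFSET(_fn)	\
		(RX_FNDB_RAM_BASE_OFFSET + ((_fn) << 4))
	#define TX_FNDB_ENTRY_OFFSET(_fn)	\
		(TX_FNDB_RAM_BASE_OFFSET + ((_fn) << 2))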
-
-/**
- * Catapult Unicast CAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Shared by both the LL & FCoE driver.
- * Size is 256x48 bits; mapped to PCIe space
- * as 512x32 bit blocks. For each address, bits
- * are written in the order : [47:32] and then
- * [31:0].
- */
-#define UCAST_CAM_BASE_OFFSET		0x0000A800
-
-/**
- * Catapult Unicast RAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Shared by both the LL & FCoE driver.
- * Size is 256x9 bits.
- */
-#define UCAST_RAM_BASE_OFFSET		0x0000B000
-
-/**
- * Catapult Multicast CAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Shared by both the LL & FCoE driver.
- * Size is 256x48 bits; mapped to PCIe space
- * as 512x32 bit blocks. For each address, bits
- * are written in the order : [47:32] and then
- * [31:0].
- */
-#define MCAST_CAM_BASE_OFFSET		0x0000A000
-
-/**
- * Catapult VLAN RAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Size is 4096x66 bits; mapped to PCIe space as
- * 8192x32 bit blocks.
- * All the 4K entries are within the address range
- * 0x0000 to 0x8000, so in the first LUT page.
- */
-#define VLAN_RAM_BASE_OFFSET		0x00000000
-
-/**
- * Catapult Tx Stats RAM Base Offset Address
+ * Register definitions and macros
  *
- * Exists in LUT memory space.
- * Size is 1024x33 bits;
- * Each Tx function has 64 bytes of space
- */
-#define TX_STATS_RAM_BASE_OFFSET	0x00009000
-
-/**
- * Catapult Rx Stats RAM Base Offset Address
- *
- * Exists in LUT memory space.
- * Size is 1024x33 bits;
- * Each Rx function has 64 bytes of space
- */
-#define RX_STATS_RAM_BASE_OFFSET	0x00008000
-
-/* Catapult RXA Memory Access Page Numbers */
-#define RXA0_MEM_BLK_BASE_PG_NUM	0x0000008C
-#define RXA1_MEM_BLK_BASE_PG_NUM	0x0000008D
-
-/**
- * Catapult Multicast Vector Table Base Offset Address
- *
- * Exists in RxA memory space.
- * Organized as a 512x65 bit block.
- * However, each entry is allocated 16 bytes (a power of 2),
- * for a total of 512*16 bytes.
- * There are two logical divisions, 256 entries each:
- * a) Entries 0x00 to 0xff (256) -- Approx. MVT
- *    Offset 0x000 to 0xFFF
- * b) Entries 0x100 to 0x1ff (256) -- Exact MVT
- *    Offsets 0x1000 to 0x1FFF
- */
-#define MCAST_APPROX_MVT_BASE_OFFSET	0x00000000
-#define MCAST_EXACT_MVT_BASE_OFFSET	0x00001000
-
-/**
- * Catapult RxQ Translate Table (RIT) Base Offset Address
- *
- * Exists in RxA memory space
- * Total no. of entries 64
- * Each entry is 1 (4 byte) word.
- * 31:12 -- Reserved
- * 11:0  -- Two 6 bit RxQ Ids
- */
-#define FUNCTION_TO_RXQ_TRANSLATE	0x00002000
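
With two 6-bit RxQ ids in bits 11:0, an entry packs as below (a sketch;
which id occupies which 6-bit field is an assumption here):

	/* Pack two 6-bit RxQ ids into a RIT entry word; bits 31:12
	 * stay reserved (zero). */
	static u32 rit_entry_pack(u32 large_rxq_id, u32 small_rxq_id)
	{
		return ((large_rxq_id & 0x3f) << 6) | (small_rxq_id & 0x3f);
	}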
-
-/* Catapult RxAdm (RAD) Memory Access Page Numbers */
-#define RAD0_MEM_BLK_BASE_PG_NUM	0x00000086
-#define RAD1_MEM_BLK_BASE_PG_NUM	0x00000087
-
-/**
- * Catapult RSS Table Base Offset Address
- *
- * Exists in RAD memory space.
- * Each entry is 352 bits, but aligned on a
- * 64 byte (512 bit) boundary. Accessed as
- * 4 byte words, the whole entry can be
- * broken into 11 word accesses.
- */
-#define RSS_TABLE_BASE_OFFSET		0x00000800
-
-/**
- * Catapult CPQ Block Page Number
- * This value is written to the page number registers
- * to access the memory associated with the mailboxes.
- */
-#define CPQ_BLK_PG_NUM			0x00000005
-
-/**
- * Clarification: LL functions are 2 & 3; can the
- * HostFn0/HostFn1 <-> LPU0/LPU1 memories be used?
- */
-/**
- * Catapult HostFn0/HostFn1 to LPU0/LPU1 Mbox memory
- * Per catapult_spec.pdf, the offset of the mbox
- * memory is in the register space at an offset of 0x200
- */
-#define CPQ_BLK_REG_MBOX_ADDR		(CPQ_BLK_REG_ADDR + 0x200)
-
-#define HOSTFN_LPU_MBOX			(CPQ_BLK_REG_MBOX_ADDR + 0x000)
-
-/* Catapult LPU0/LPU1 to HostFn0/HostFn1 Mbox memory */
-#define LPU_HOSTFN_MBOX			(CPQ_BLK_REG_MBOX_ADDR + 0x080)
-
-/**
- * Catapult HQM Block Page Number
- * This is written to the page number register for
- * the appropriate function to access the memory
- * associated with HQM
- */
-#define HQM0_BLK_PG_NUM			0x00000096
-#define HQM1_BLK_PG_NUM			0x00000097
-
-/**
- * Note that TxQ and RxQ entries are interlaced in
- * the HQM memory, i.e. RXQ0, TXQ0, RXQ1, TXQ1, etc.
- */
-
-#define HQM_RXTX_Q_RAM_BASE_OFFSET	0x00004000
-
-/**
- * CQ Memory
- * Exists in HQM Memory space
- * Each entry is 16 (4 byte) words of which
- * only 12 words are used for configuration
- * Total 64 entries per HQM memory space
- */
-#define HQM_CQ_RAM_BASE_OFFSET		0x00006000
-
-/**
- * Interrupt Block (IB) Memory
- * Exists in HQM Memory space
- * Each entry is 8 (4 byte) words of which
- * only 5 words are used for configuration
- * Total 128 entries per HQM memory space
- */
-#define HQM_IB_RAM_BASE_OFFSET		0x00001000
-
-/**
- * Index Table (IT) Memory
- * Exists in HQM Memory space
- * Each entry is 1 (4 byte) word which
- * is used for configuration
- * Total 128 entries per HQM memory space
  */
-#define HQM_INDX_TBL_RAM_BASE_OFFSET	0x00002000
-
-/**
- * PSS Block Memory Page Number
- * This is written to the appropriate page number
- * register to access the CPU memory.
- * Also known as the PSS secondary memory (SMEM).
- * Range : 0x180 to 0x1CF
- * See catapult_spec.pdf for details
- */
-#define PSS_BLK_PG_NUM			0x00000180
-
-/**
- * Offsets of different instances of PSS SMEM
- * 2.5M of contiguous 1T memory space: 2 blocks
- * of 1M each (32 pages each, page=32KB) and 4 smaller
- * blocks of 128K each (4 pages each, page=32KB)
- * PSS_LMEM_INST0 is used for firmware download
- */
-#define PSS_LMEM_INST0			0x00000000
-#define PSS_LMEM_INST1			0x00100000
-#define PSS_LMEM_INST2			0x00200000
-#define PSS_LMEM_INST3			0x00220000
-#define PSS_LMEM_INST4			0x00240000
-#define PSS_LMEM_INST5			0x00260000
 
 #define BNA_PCI_REG_CT_ADDRSZ		(0x40000)
 
-#define BNA_GET_PAGE_NUM(_base_page, _offset)   \
-	((_base_page) + ((_offset) >> 15))
+#define ct_reg_addr_init(_bna, _pcidev)					\
+{									\
+	struct bna_reg_offset reg_offset[] =				\
+	{{HOSTFN0_INT_STATUS, HOSTFN0_INT_MSK},				\
+	 {HOSTFN1_INT_STATUS, HOSTFN1_INT_MSK},				\
+	 {HOSTFN2_INT_STATUS, HOSTFN2_INT_MSK},				\
+	 {HOSTFN3_INT_STATUS, HOSTFN3_INT_MSK} };			\
+									\
+	(_bna)->regs.fn_int_status = (_pcidev)->pci_bar_kva +		\
+				reg_offset[(_pcidev)->pci_func].fn_int_status;\
+	(_bna)->regs.fn_int_mask = (_pcidev)->pci_bar_kva +		\
+				reg_offset[(_pcidev)->pci_func].fn_int_mask;\
+}
+
+#define ct_bit_defn_init(_bna, _pcidev)					\
+{									\
+	(_bna)->bits.mbox_status_bits = (__HFN_INT_MBOX_LPU0 |		\
+					__HFN_INT_MBOX_LPU1);		\
+	(_bna)->bits.mbox_mask_bits = (__HFN_INT_MBOX_LPU0 |		\
+					__HFN_INT_MBOX_LPU1);		\
+	(_bna)->bits.error_status_bits = (__HFN_INT_ERR_MASK);		\
+	(_bna)->bits.error_mask_bits = (__HFN_INT_ERR_MASK);		\
+	(_bna)->bits.halt_status_bits = __HFN_INT_LL_HALT;		\
+}
 
-#define BNA_GET_PAGE_OFFSET(_offset)    \
-	((_offset) & 0x7fff)
+#define ct2_reg_addr_init(_bna, _pcidev)				\
+{									\
+	(_bna)->regs.fn_int_status = (_pcidev)->pci_bar_kva +		\
+				CT2_HOSTFN_INT_STATUS;			\
+	(_bna)->regs.fn_int_mask = (_pcidev)->pci_bar_kva +		\
+				CT2_HOSTFN_INTR_MASK;			\
+}
 
-#define BNA_GET_MEM_BASE_ADDR(_bar0, _base_offset)	\
-	((_bar0) + HW_BLK_HOST_MEM_ADDR		\
-	  + BNA_GET_PAGE_OFFSET((_base_offset)))
+#define ct2_bit_defn_init(_bna, _pcidev)				\
+{									\
+	(_bna)->bits.mbox_status_bits = (__HFN_INT_MBOX_LPU0_CT2 |	\
+					__HFN_INT_MBOX_LPU1_CT2);	\
+	(_bna)->bits.mbox_mask_bits = (__HFN_INT_MBOX_LPU0_CT2 |	\
+					__HFN_INT_MBOX_LPU1_CT2);	\
+	(_bna)->bits.error_status_bits = (__HFN_INT_ERR_MASK_CT2);	\
+	(_bna)->bits.error_mask_bits = (__HFN_INT_ERR_MASK_CT2);	\
+	(_bna)->bits.halt_status_bits = __HFN_INT_CPQ_HALT_CT2;		\
+	(_bna)->bits.halt_mask_bits = __HFN_INT_CPQ_HALT_CT2;		\
+}
 
-#define BNA_GET_VLAN_MEM_ENTRY_ADDR(_bar0, _fn_id, _vlan_id)\
-	(_bar0 + (HW_BLK_HOST_MEM_ADDR)  \
-	+ (BNA_GET_PAGE_OFFSET(VLAN_RAM_BASE_OFFSET))	\
-	+ (((_fn_id) & 0x3f) << 9)	  \
-	+ (((_vlan_id) & 0xfe0) >> 3))
+#define bna_reg_addr_init(_bna, _pcidev)				\
+{									\
+	switch ((_pcidev)->device_id) {					\
+	case PCI_DEVICE_ID_BROCADE_CT:					\
+		ct_reg_addr_init((_bna), (_pcidev));			\
+		ct_bit_defn_init((_bna), (_pcidev));			\
+		break;							\
+	case PCI_DEVICE_ID_BROCADE_CT2:					\
+		ct2_reg_addr_init((_bna), (_pcidev));			\
+		ct2_bit_defn_init((_bna), (_pcidev));			\
+		break;							\
+	}								\
+}
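
The switch above wires up both the register addresses and the interrupt
bit definitions per ASIC at probe time; a minimal usage sketch (the
pcidev field names are taken from the macro bodies, the call site is
assumed):

	/* After PCI enumeration fills in device_id, pci_func and
	 * pci_bar_kva, one call selects the CT or CT2 layout. */
	bna_reg_addr_init(bna, &bna->pcidev);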
 
+#define bna_port_id_get(_bna) ((_bna)->ioceth.ioc.port_id)
 /**
  *
  *  Interrupt related bits, flags and macros
  *
  */
 
-#define __LPU02HOST_MBOX0_STATUS_BITS 0x00100000
-#define __LPU12HOST_MBOX0_STATUS_BITS 0x00200000
-#define __LPU02HOST_MBOX1_STATUS_BITS 0x00400000
-#define __LPU12HOST_MBOX1_STATUS_BITS 0x00800000
-
-#define __LPU02HOST_MBOX0_MASK_BITS	0x00100000
-#define __LPU12HOST_MBOX0_MASK_BITS	0x00200000
-#define __LPU02HOST_MBOX1_MASK_BITS	0x00400000
-#define __LPU12HOST_MBOX1_MASK_BITS	0x00800000
-
-#define __LPU2HOST_MBOX_MASK_BITS			 \
-	(__LPU02HOST_MBOX0_MASK_BITS | __LPU02HOST_MBOX1_MASK_BITS |	\
-	  __LPU12HOST_MBOX0_MASK_BITS | __LPU12HOST_MBOX1_MASK_BITS)
-
-#define __LPU2HOST_IB_STATUS_BITS	0x0000ffff
-
-#define BNA_IS_LPU0_MBOX_INTR(_intr_status) \
-	((_intr_status) & (__LPU02HOST_MBOX0_STATUS_BITS | \
-			__LPU02HOST_MBOX1_STATUS_BITS))
+#define IB_STATUS_BITS		0x0000ffff
 
-#define BNA_IS_LPU1_MBOX_INTR(_intr_status) \
-	((_intr_status) & (__LPU12HOST_MBOX0_STATUS_BITS | \
-		__LPU12HOST_MBOX1_STATUS_BITS))
+#define BNA_IS_MBOX_INTR(_bna, _intr_status)				\
+	((_intr_status) & (_bna)->bits.mbox_status_bits)
 
-#define BNA_IS_MBOX_INTR(_intr_status)		\
-	((_intr_status) &			\
-	(__LPU02HOST_MBOX0_STATUS_BITS |	\
-	 __LPU02HOST_MBOX1_STATUS_BITS |	\
-	 __LPU12HOST_MBOX0_STATUS_BITS |	\
-	 __LPU12HOST_MBOX1_STATUS_BITS))
+#define BNA_IS_HALT_INTR(_bna, _intr_status)				\
+	((_intr_status) & (_bna)->bits.halt_status_bits)
 
-#define __EMC_ERROR_STATUS_BITS		0x00010000
-#define __LPU0_ERROR_STATUS_BITS	0x00020000
-#define __LPU1_ERROR_STATUS_BITS	0x00040000
-#define __PSS_ERROR_STATUS_BITS		0x00080000
+#define BNA_IS_ERR_INTR(_bna, _intr_status)	\
+	((_intr_status) & (_bna)->bits.error_status_bits)
 
-#define __HALT_STATUS_BITS		0x01000000
+#define BNA_IS_MBOX_ERR_INTR(_bna, _intr_status)	\
+	(BNA_IS_MBOX_INTR(_bna, _intr_status) |		\
+	BNA_IS_ERR_INTR(_bna, _intr_status))
 
-#define __EMC_ERROR_MASK_BITS		0x00010000
-#define __LPU0_ERROR_MASK_BITS		0x00020000
-#define __LPU1_ERROR_MASK_BITS		0x00040000
-#define __PSS_ERROR_MASK_BITS		0x00080000
+#define BNA_IS_INTX_DATA_INTR(_intr_status)		\
+		((_intr_status) & IB_STATUS_BITS)
 
-#define __HALT_MASK_BITS		0x01000000
-
-#define __ERROR_MASK_BITS		\
-	(__EMC_ERROR_MASK_BITS | __LPU0_ERROR_MASK_BITS | \
-	  __LPU1_ERROR_MASK_BITS | __PSS_ERROR_MASK_BITS | \
-	  __HALT_MASK_BITS)
-
-#define BNA_IS_ERR_INTR(_intr_status)	\
-	((_intr_status) &		\
-	(__EMC_ERROR_STATUS_BITS |	\
-	 __LPU0_ERROR_STATUS_BITS |	\
-	 __LPU1_ERROR_STATUS_BITS |	\
-	 __PSS_ERROR_STATUS_BITS  |	\
-	 __HALT_STATUS_BITS))
-
-#define BNA_IS_MBOX_ERR_INTR(_intr_status)	\
-	(BNA_IS_MBOX_INTR((_intr_status)) |	\
-	 BNA_IS_ERR_INTR((_intr_status)))
-
-#define BNA_IS_INTX_DATA_INTR(_intr_status)	\
-	((_intr_status) & __LPU2HOST_IB_STATUS_BITS)
-
-#define BNA_INTR_STATUS_MBOX_CLR(_intr_status)			\
-do {								\
-	(_intr_status) &= ~(__LPU02HOST_MBOX0_STATUS_BITS |	\
-			__LPU02HOST_MBOX1_STATUS_BITS |		\
-			__LPU12HOST_MBOX0_STATUS_BITS |		\
-			__LPU12HOST_MBOX1_STATUS_BITS);		\
+#define bna_halt_clear(_bna)						\
+do {									\
+	u32 init_halt;						\
+	init_halt = readl((_bna)->ioceth.ioc.ioc_regs.ll_halt);	\
+	init_halt &= ~__FW_INIT_HALT_P;					\
+	writel(init_halt, (_bna)->ioceth.ioc.ioc_regs.ll_halt);	\
+	init_halt = readl((_bna)->ioceth.ioc.ioc_regs.ll_halt);	\
 } while (0)
 
-#define BNA_INTR_STATUS_ERR_CLR(_intr_status)		\
-do {							\
-	(_intr_status) &= ~(__EMC_ERROR_STATUS_BITS |	\
-		__LPU0_ERROR_STATUS_BITS |		\
-		__LPU1_ERROR_STATUS_BITS |		\
-		__PSS_ERROR_STATUS_BITS  |		\
-		__HALT_STATUS_BITS);			\
-} while (0)
-
-#define bna_intx_disable(_bna, _cur_mask)		\
-{							\
-	(_cur_mask) = readl((_bna)->regs.fn_int_mask);\
-	writel(0xffffffff, (_bna)->regs.fn_int_mask);\
+#define bna_intx_disable(_bna, _cur_mask)				\
+{									\
+	(_cur_mask) = readl((_bna)->regs.fn_int_mask);		\
+	writel(0xffffffff, (_bna)->regs.fn_int_mask);		\
 }
 
-#define bna_intx_enable(bna, new_mask)			\
+#define bna_intx_enable(bna, new_mask)					\
 	writel((new_mask), (bna)->regs.fn_int_mask)
+#define bna_mbox_intr_disable(bna)					\
+do {									\
+	u32 mask;							\
+	mask = readl((bna)->regs.fn_int_mask);				\
+	writel((mask | (bna)->bits.mbox_mask_bits |			\
+		(bna)->bits.error_mask_bits), (bna)->regs.fn_int_mask); \
+	mask = readl((bna)->regs.fn_int_mask);				\
+} while (0)
 
-#define bna_mbox_intr_disable(bna)		\
-	writel((readl((bna)->regs.fn_int_mask) | \
-	     (__LPU2HOST_MBOX_MASK_BITS | __ERROR_MASK_BITS)), \
-	     (bna)->regs.fn_int_mask)
-
-#define bna_mbox_intr_enable(bna)		\
-	writel((readl((bna)->regs.fn_int_mask) & \
-	     ~(__LPU2HOST_MBOX_MASK_BITS | __ERROR_MASK_BITS)), \
-	     (bna)->regs.fn_int_mask)
+#define bna_mbox_intr_enable(bna)					\
+do {									\
+	u32 mask;							\
+	mask = readl((bna)->regs.fn_int_mask);				\
+	writel((mask & ~((bna)->bits.mbox_mask_bits |			\
+		(bna)->bits.error_mask_bits)), (bna)->regs.fn_int_mask);\
+	mask = readl((bna)->regs.fn_int_mask);				\
+} while (0)
 
 #define bna_intr_status_get(_bna, _status)				\
 {									\
-	(_status) = readl((_bna)->regs.fn_int_status);		\
-	if ((_status)) {						\
-		writel((_status) & ~(__LPU02HOST_MBOX0_STATUS_BITS |\
-					  __LPU02HOST_MBOX1_STATUS_BITS |\
-					  __LPU12HOST_MBOX0_STATUS_BITS |\
-					  __LPU12HOST_MBOX1_STATUS_BITS), \
-			      (_bna)->regs.fn_int_status);\
+	(_status) = readl((_bna)->regs.fn_int_status);			\
+	if (_status) {							\
+		writel(((_status) & ~(_bna)->bits.mbox_status_bits),	\
+			(_bna)->regs.fn_int_status);			\
 	}								\
 }
 
-#define bna_intr_status_get_no_clr(_bna, _status)		\
-	(_status) = readl((_bna)->regs.fn_int_status)
-
-#define bna_intr_mask_get(bna, mask)		\
-	(*mask) = readl((bna)->regs.fn_int_mask)
-
-#define bna_intr_ack(bna, intr_bmap)		\
-	writel((intr_bmap), (bna)->regs.fn_int_status)
+/*
+ * MAX ACK EVENTS: No. of acks that can be accumulated in the driver
+ * before acking to h/w. The doorbell register has 16 bits for this,
+ * but we keep it limited to 15 bits: near the 64K (16-bit) boundary,
+ * a single poll can push the accumulated ACK counter past 64K, and
+ * acking with a value greater than 64K causes problems. 15 bits (32K)
+ * is large enough to accumulate in any case, and the max. events
+ * acked to h/w is then (32K + max poll weight) (currently 64).
+ */
+#define	BNA_IB_MAX_ACK_EVENTS		(1 << 15)
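
The 15-bit cap above plays out in the poll loop roughly as follows (a
sketch, not the driver's actual NAPI path; variable names are invented):

	/* Accumulate completion events and ack them in one doorbell
	 * write, flushing before the counter nears the 16-bit limit. */
	unacked += pkts_this_poll;	/* poll weight is small (64) */
	if (unacked >= BNA_IB_MAX_ACK_EVENTS) {
		bna_ib_ack(&ib->door_bell, unacked);
		unacked = 0;
	}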
+
+/* These macros build the data portion of the TxQ/RxQ doorbell */
+#define BNA_DOORBELL_Q_PRD_IDX(_pi)	(0x80000000 | (_pi))
+#define BNA_DOORBELL_Q_STOP		(0x40000000)
+
+/* These macros build the data portion of the IB doorbell */
+#define BNA_DOORBELL_IB_INT_ACK(_timeout, _events)			\
+	(0x80000000 | ((_timeout) << 16) | (_events))
+#define BNA_DOORBELL_IB_INT_DISABLE	(0x40000000)
+
+/* Set the coalescing timer for the given ib */
+#define bna_ib_coalescing_timer_set(_i_dbell, _cls_timer)		\
+	((_i_dbell)->doorbell_ack = BNA_DOORBELL_IB_INT_ACK((_cls_timer), 0))
+
+/* Acks 'events' # of events for a given ib while disabling interrupts */
+#define bna_ib_ack_disable_irq(_i_dbell, _events)			\
+	(writel(BNA_DOORBELL_IB_INT_ACK(0, (_events)), \
+		(_i_dbell)->doorbell_addr))
+
+/* Acks 'events' # of events for a given ib */
+#define bna_ib_ack(_i_dbell, _events)					\
+	(writel(((_i_dbell)->doorbell_ack | (_events)), \
+		(_i_dbell)->doorbell_addr))
+
+#define bna_ib_start(_bna, _ib, _is_regular)				\
+{									\
+	u32 intx_mask;						\
+	struct bna_ib *ib = (_ib);					\
+	if (ib->intr_type == BNA_INTR_T_INTX) {				\
+		bna_intx_disable((_bna), intx_mask);			\
+		intx_mask &= ~(ib->intr_vector);			\
+		bna_intx_enable((_bna), intx_mask);			\
+	}								\
+	bna_ib_coalescing_timer_set(&ib->door_bell,			\
+			ib->coalescing_timeo);				\
+	if (_is_regular)						\
+		bna_ib_ack(&ib->door_bell, 0);				\
+}
 
-#define bna_ib_intx_disable(bna, ib_id)		\
-	writel(readl((bna)->regs.fn_int_mask) | \
-	    (1 << (ib_id)), \
-	    (bna)->regs.fn_int_mask)
+#define bna_ib_stop(_bna, _ib)						\
+{									\
+	u32 intx_mask;						\
+	struct bna_ib *ib = (_ib);					\
+	writel(BNA_DOORBELL_IB_INT_DISABLE,				\
+		ib->door_bell.doorbell_addr);				\
+	if (ib->intr_type == BNA_INTR_T_INTX) {				\
+		bna_intx_disable((_bna), intx_mask);			\
+		intx_mask |= ib->intr_vector;				\
+		bna_intx_enable((_bna), intx_mask);			\
+	}								\
+}
 
-#define bna_ib_intx_enable(bna, ib_id)		\
-	writel(readl((bna)->regs.fn_int_mask) & \
-	    ~(1 << (ib_id)), \
-	    (bna)->regs.fn_int_mask)
+#define bna_txq_prod_indx_doorbell(_tcb)				\
+	(writel(BNA_DOORBELL_Q_PRD_IDX((_tcb)->producer_index), \
+		(_tcb)->q_dbell))
 
-#define bna_mbox_msix_idx_set(_device) \
-do {\
-	writel(((_device)->vector & 0x000001FF), \
-		(_device)->bna->pcidev.pci_bar_kva + \
-		reg_offset[(_device)->bna->pcidev.pci_func].msix_idx);\
-} while (0)
+#define bna_rxq_prod_indx_doorbell(_rcb)				\
+	(writel(BNA_DOORBELL_Q_PRD_IDX((_rcb)->producer_index), \
+		(_rcb)->q_dbell))
 
 /**
  *
@@ -1164,20 +281,6 @@ do {\
  *
  */
 
-#define	BNA_Q_IDLE_STATE	0x00008001
-
-#define BNA_GET_DOORBELL_BASE_ADDR(_bar0)	\
-	((_bar0) + HQM_DOORBELL_BLK_BASE_ADDR)
-
-#define BNA_GET_DOORBELL_ENTRY_OFFSET(_entry)		\
-	((HQM_DOORBELL_BLK_BASE_ADDR)		\
-	+ (_entry << 7))
-
-#define BNA_DOORBELL_IB_INT_ACK(_timeout, _events) \
-		(0x80000000 | ((_timeout) << 16) | (_events))
-
-#define BNA_DOORBELL_IB_INT_DISABLE		(0x40000000)
-
 /* TxQ Entry Opcodes */
 #define BNA_TXQ_WI_SEND			(0x402)	/* Single Frame Transmission */
 #define BNA_TXQ_WI_SEND_LSO		(0x403)	/* Multi-Frame Transmission */
@@ -1232,191 +335,23 @@ do {\
  *
  */
 
-enum txf_flags {
-	BFI_TXF_CF_ENABLE		= 1 << 0,
-	BFI_TXF_CF_VLAN_FILTER		= 1 << 8,
-	BFI_TXF_CF_VLAN_ADMIT		= 1 << 9,
-	BFI_TXF_CF_VLAN_INSERT		= 1 << 10,
-	BFI_TXF_CF_RSVD1		= 1 << 11,
-	BFI_TXF_CF_MAC_SA_CHECK		= 1 << 12,
-	BFI_TXF_CF_VLAN_WI_BASED	= 1 << 13,
-	BFI_TXF_CF_VSWITCH_MCAST	= 1 << 14,
-	BFI_TXF_CF_VSWITCH_UCAST	= 1 << 15,
-	BFI_TXF_CF_RSVD2		= 0x7F << 1
-};
-
-enum ib_flags {
-	BFI_IB_CF_MASTER_ENABLE		= (1 << 0),
-	BFI_IB_CF_MSIX_MODE		= (1 << 1),
-	BFI_IB_CF_COALESCING_MODE	= (1 << 2),
-	BFI_IB_CF_INTER_PKT_ENABLE	= (1 << 3),
-	BFI_IB_CF_INT_ENABLE		= (1 << 4),
-	BFI_IB_CF_INTER_PKT_DMA		= (1 << 5),
-	BFI_IB_CF_ACK_PENDING		= (1 << 6),
-	BFI_IB_CF_RESERVED1		= (1 << 7)
-};
-
-enum rss_hash_type {
-	BFI_RSS_T_V4_TCP		= (1 << 11),
-	BFI_RSS_T_V4_IP			= (1 << 10),
-	BFI_RSS_T_V6_TCP		= (1 <<  9),
-	BFI_RSS_T_V6_IP			= (1 <<  8)
-};
-enum hds_header_type {
-	BNA_HDS_T_V4_TCP	= (1 << 11),
-	BNA_HDS_T_V4_UDP	= (1 << 10),
-	BNA_HDS_T_V6_TCP	= (1 << 9),
-	BNA_HDS_T_V6_UDP	= (1 << 8),
-	BNA_HDS_FORCED		= (1 << 7),
-};
-enum rxf_flags {
-	BNA_RXF_CF_SM_LG_RXQ			= (1 << 15),
-	BNA_RXF_CF_DEFAULT_VLAN			= (1 << 14),
-	BNA_RXF_CF_DEFAULT_FUNCTION_ENABLE	= (1 << 13),
-	BNA_RXF_CF_VLAN_STRIP			= (1 << 12),
-	BNA_RXF_CF_RSS_ENABLE			= (1 <<  8)
-};
-struct bna_chip_regs_offset {
-	u32 page_addr;
+struct bna_reg_offset {
 	u32 fn_int_status;
 	u32 fn_int_mask;
-	u32 msix_idx;
-};
-
-struct bna_chip_regs {
-	void __iomem *page_addr;
-	void __iomem *fn_int_status;
-	void __iomem *fn_int_mask;
-};
-
-struct bna_txq_mem {
-	u32 pg_tbl_addr_lo;
-	u32 pg_tbl_addr_hi;
-	u32 cur_q_entry_lo;
-	u32 cur_q_entry_hi;
-	u32 reserved1;
-	u32 reserved2;
-	u32 pg_cnt_n_prd_ptr;	/* 31:16->total page count */
-					/* 15:0 ->producer pointer (index?) */
-	u32 entry_n_pg_size;	/* 31:16->entry size */
-					/* 15:0 ->page size */
-	u32 int_blk_n_cns_ptr;	/* 31:24->Int Blk Id;  */
-					/* 23:16->Int Blk Offset */
-					/* 15:0 ->consumer pointer(index?) */
-	u32 cns_ptr2_n_q_state;	/* 31:16->cons. ptr 2; 15:0-> Q state */
-	u32 nxt_qid_n_fid_n_pri;	/* 17:10->next */
-					/* QId;9:3->FID;2:0->Priority */
-	u32 wvc_n_cquota_n_rquota; /* 31:24->WI Vector Count; */
-					/* 23:12->Cfg Quota; */
-					/* 11:0 ->Run Quota */
-	u32 reserved3[4];
-};
-
-struct bna_rxq_mem {
-	u32 pg_tbl_addr_lo;
-	u32 pg_tbl_addr_hi;
-	u32 cur_q_entry_lo;
-	u32 cur_q_entry_hi;
-	u32 reserved1;
-	u32 reserved2;
-	u32 pg_cnt_n_prd_ptr;	/* 31:16->total page count */
-					/* 15:0 ->producer pointer (index?) */
-	u32 entry_n_pg_size;	/* 31:16->entry size */
-					/* 15:0 ->page size */
-	u32 sg_n_cq_n_cns_ptr;	/* 31:28->reserved; 27:24->sg count */
-					/* 23:16->CQ; */
-					/* 15:0->consumer pointer(index?) */
-	u32 buf_sz_n_q_state;	/* 31:16->buffer size; 15:0-> Q state */
-	u32 next_qid;		/* 17:10->next QId */
-	u32 reserved3;
-	u32 reserved4[4];
-};
-
-struct bna_rxtx_q_mem {
-	struct bna_rxq_mem rxq;
-	struct bna_txq_mem txq;
-};
-
-struct bna_cq_mem {
-	u32 pg_tbl_addr_lo;
-	u32 pg_tbl_addr_hi;
-	u32 cur_q_entry_lo;
-	u32 cur_q_entry_hi;
-
-	u32 reserved1;
-	u32 reserved2;
-	u32 pg_cnt_n_prd_ptr;	/* 31:16->total page count */
-					/* 15:0 ->producer pointer (index?) */
-	u32 entry_n_pg_size;	/* 31:16->entry size */
-					/* 15:0 ->page size */
-	u32 int_blk_n_cns_ptr;	/* 31:24->Int Blk Id; */
-					/* 23:16->Int Blk Offset */
-					/* 15:0 ->consumer pointer(index?) */
-	u32 q_state;		/* 31:16->reserved; 15:0-> Q state */
-	u32 reserved3[2];
-	u32 reserved4[4];
-};
-
-struct bna_ib_blk_mem {
-	u32 host_addr_lo;
-	u32 host_addr_hi;
-	u32 clsc_n_ctrl_n_msix;	/* 31:24->coalescing; */
-					/* 23:16->coalescing cfg; */
-					/* 15:8 ->control; */
-					/* 7:0 ->msix; */
-	u32 ipkt_n_ent_n_idxof;
-	u32 ipkt_cnt_cfg_n_unacked;
-
-	u32 reserved[3];
-};
-
-struct bna_idx_tbl_mem {
-	u32 idx;	  /* !< 31:16->res;15:0->idx; */
-};
-
-struct bna_doorbell_qset {
-	u32 rxq[0x20 >> 2];
-	u32 txq[0x20 >> 2];
-	u32 ib0[0x20 >> 2];
-	u32 ib1[0x20 >> 2];
-};
-
-struct bna_rx_fndb_ram {
-	u32 rss_prop;
-	u32 size_routing_props;
-	u32 rit_hds_mcastq;
-	u32 control_flags;
 };
 
-struct bna_tx_fndb_ram {
-	u32 vlan_n_ctrl_flags;
+struct bna_bit_defn {
+	u32 mbox_status_bits;
+	u32 mbox_mask_bits;
+	u32 error_status_bits;
+	u32 error_mask_bits;
+	u32 halt_status_bits;
+	u32 halt_mask_bits;
 };
 
-/**
- * @brief
- *  Structure which maps to RxFn Indirection Table (RIT)
- *  Size : 1 word
- *  See catapult_spec.pdf, RxA for details
- */
-struct bna_rit_mem {
-	u32 rxq_ids;	/* !< 31:12->res;11:0->two 6 bit RxQ Ids */
-};
-
-/**
- * @brief
- *  Structure which maps to RSS Table entry
- *  Size : 16 words
- *  See catapult_spec.pdf, RAD for details
- */
-struct bna_rss_mem {
-	/*
-	 * 31:12-> res
-	 * 11:8 -> protocol type
-	 *  7:0 -> hash index
-	 */
-	u32 type_n_hash;
-	u32 hash_key[10];  /* !< 40 byte Toeplitz hash key */
-	u32 reserved[5];
+struct bna_reg {
+	void __iomem *fn_int_status;
+	void __iomem *fn_int_mask;
 };
 
 /* TxQ Vector (a.k.a. Tx-Buffer Descriptor) */
@@ -1431,10 +366,6 @@ struct bna_txq_wi_vector {
 	struct bna_dma_addr host_addr; /* Tx-Buf DMA addr */
 };
 
-typedef u16 bna_txq_wi_opcode_t;
-
-typedef u16 bna_txq_wi_ctrl_flag_t;
-
 /**
  *  TxQ Entry Structure
  *
@@ -1445,10 +376,10 @@ struct bna_txq_entry {
 		struct {
 			u8 reserved;
 			u8 num_vectors;	/* number of vectors present */
-			bna_txq_wi_opcode_t opcode; /* Either */
+			u16 opcode; /* Either */
 						    /* BNA_TXQ_WI_SEND or */
 						    /* BNA_TXQ_WI_SEND_LSO */
-			bna_txq_wi_ctrl_flag_t flags; /* OR of all the flags */
+			u16 flags; /* OR of all the flags */
 			u16 l4_hdr_size_n_offset;
 			u16 vlan_tag;
 			u16 lso_mss;	/* Only 14 LSB are valid */
@@ -1457,7 +388,7 @@ struct bna_txq_entry {
 
 		struct {
 			u16 reserved;
-			bna_txq_wi_opcode_t opcode; /* Must be */
+			u16 opcode; /* Must be */
 						    /* BNA_TXQ_WI_EXTENSION */
 			u32 reserved2[3];	/* Place holder for */
 						/* removed vector (12 bytes) */
@@ -1465,19 +396,15 @@ struct bna_txq_entry {
 	} hdr;
 	struct bna_txq_wi_vector vector[4];
 };
-#define wi_hdr		hdr.wi
-#define wi_ext_hdr  hdr.wi_ext
 
 /* RxQ Entry Structure */
 struct bna_rxq_entry {		/* Rx-Buffer */
 	struct bna_dma_addr host_addr; /* Rx-Buffer DMA address */
 };
 
-typedef u32 bna_cq_e_flag_t;
-
 /* CQ Entry Structure */
 struct bna_cq_entry {
-	bna_cq_e_flag_t flags;
+	u32 flags;
 	u16 vlan_tag;
 	u16 length;
 	u32 rss_hash;
diff --git a/drivers/net/bna/bna_types.h b/drivers/net/bna/bna_types.h
index 2f89cb2..a4f71c0 100644
--- a/drivers/net/bna/bna_types.h
+++ b/drivers/net/bna/bna_types.h
@@ -21,6 +21,8 @@
 #include "cna.h"
 #include "bna_hw.h"
 #include "bfa_cee.h"
+#include "bfi_enet.h"
+#include "bfa_msgq.h"
 
 /**
  *
@@ -28,13 +30,14 @@
  *
  */
 
+struct bna_mcam_handle;
 struct bna_txq;
 struct bna_tx;
 struct bna_rxq;
 struct bna_cq;
 struct bna_rx;
 struct bna_rxf;
-struct bna_port;
+struct bna_enet;
 struct bna;
 struct bnad;
 
@@ -86,31 +89,29 @@ enum bna_res_req_type {
 	BNA_RES_MEM_T_ATTR		= 1,
 	BNA_RES_MEM_T_FWTRC		= 2,
 	BNA_RES_MEM_T_STATS		= 3,
-	BNA_RES_MEM_T_SWSTATS		= 4,
-	BNA_RES_MEM_T_IBIDX		= 5,
-	BNA_RES_MEM_T_IB_ARRAY		= 6,
-	BNA_RES_MEM_T_INTR_ARRAY	= 7,
-	BNA_RES_MEM_T_IDXSEG_ARRAY	= 8,
-	BNA_RES_MEM_T_TX_ARRAY		= 9,
-	BNA_RES_MEM_T_TXQ_ARRAY		= 10,
-	BNA_RES_MEM_T_RX_ARRAY		= 11,
-	BNA_RES_MEM_T_RXP_ARRAY		= 12,
-	BNA_RES_MEM_T_RXQ_ARRAY		= 13,
-	BNA_RES_MEM_T_UCMAC_ARRAY	= 14,
-	BNA_RES_MEM_T_MCMAC_ARRAY	= 15,
-	BNA_RES_MEM_T_RIT_ENTRY		= 16,
-	BNA_RES_MEM_T_RIT_SEGMENT	= 17,
-	BNA_RES_INTR_T_MBOX		= 18,
 	BNA_RES_T_MAX
 };
 
+enum bna_mod_res_req_type {
+	BNA_MOD_RES_MEM_T_TX_ARRAY	= 0,
+	BNA_MOD_RES_MEM_T_TXQ_ARRAY	= 1,
+	BNA_MOD_RES_MEM_T_RX_ARRAY	= 2,
+	BNA_MOD_RES_MEM_T_RXP_ARRAY	= 3,
+	BNA_MOD_RES_MEM_T_RXQ_ARRAY	= 4,
+	BNA_MOD_RES_MEM_T_UCMAC_ARRAY	= 5,
+	BNA_MOD_RES_MEM_T_MCMAC_ARRAY	= 6,
+	BNA_MOD_RES_MEM_T_MCHANDLE_ARRAY = 7,
+	BNA_MOD_RES_T_MAX
+};
+
 enum bna_tx_res_req_type {
 	BNA_TX_RES_MEM_T_TCB	= 0,
 	BNA_TX_RES_MEM_T_UNMAPQ	= 1,
 	BNA_TX_RES_MEM_T_QPT	= 2,
 	BNA_TX_RES_MEM_T_SWQPT	= 3,
 	BNA_TX_RES_MEM_T_PAGE	= 4,
-	BNA_TX_RES_INTR_T_TXCMPL = 5,
+	BNA_TX_RES_MEM_T_IBIDX	= 5,
+	BNA_TX_RES_INTR_T_TXCMPL = 6,
 	BNA_TX_RES_T_MAX,
 };
 
@@ -127,13 +128,10 @@ enum bna_rx_mem_type {
 	BNA_RX_RES_MEM_T_DSWQPT		= 9,	/* RX s/w QPT */
 	BNA_RX_RES_MEM_T_DPAGE		= 10,	/* RX s/w QPT */
 	BNA_RX_RES_MEM_T_HPAGE		= 11,	/* RX s/w QPT */
-	BNA_RX_RES_T_INTR		= 12,	/* Rx interrupts */
-	BNA_RX_RES_T_MAX		= 13
-};
-
-enum bna_mbox_state {
-	BNA_MBOX_FREE		= 0,
-	BNA_MBOX_POSTED		= 1
+	BNA_RX_RES_MEM_T_IBIDX		= 12,
+	BNA_RX_RES_MEM_T_RIT		= 13,
+	BNA_RX_RES_T_INTR		= 14,	/* Rx interrupts */
+	BNA_RX_RES_T_MAX		= 15
 };
 
 enum bna_tx_type {
@@ -142,14 +140,15 @@ enum bna_tx_type {
 };
 
 enum bna_tx_flags {
-	BNA_TX_F_PORT_STARTED	= 1,
+	BNA_TX_F_ENET_STARTED	= 1,
 	BNA_TX_F_ENABLED	= 2,
-	BNA_TX_F_PRIO_LOCK	= 4,
+	BNA_TX_F_PRIO_CHANGED	= 4,
+	BNA_TX_F_BW_UPDATED	= 8,
 };
 
 enum bna_tx_mod_flags {
-	BNA_TX_MOD_F_PORT_STARTED	= 1,
-	BNA_TX_MOD_F_PORT_LOOPBACK	= 2,
+	BNA_TX_MOD_F_ENET_STARTED	= 1,
+	BNA_TX_MOD_F_ENET_LOOPBACK	= 2,
 };
 
 enum bna_rx_type {
@@ -165,80 +164,49 @@ enum bna_rxp_type {
 
 enum bna_rxmode {
 	BNA_RXMODE_PROMISC	= 1,
-	BNA_RXMODE_ALLMULTI	= 2
+	BNA_RXMODE_DEFAULT	= 2,
+	BNA_RXMODE_ALLMULTI	= 4
 };
 
 enum bna_rx_event {
 	RX_E_START			= 1,
 	RX_E_STOP			= 2,
 	RX_E_FAIL			= 3,
-	RX_E_RXF_STARTED		= 4,
-	RX_E_RXF_STOPPED		= 5,
-	RX_E_RXQ_STOPPED		= 6,
-};
-
-enum bna_rx_state {
-	BNA_RX_STOPPED			= 1,
-	BNA_RX_RXF_START_WAIT		= 2,
-	BNA_RX_STARTED			= 3,
-	BNA_RX_RXF_STOP_WAIT		= 4,
-	BNA_RX_RXQ_STOP_WAIT		= 5,
+	RX_E_STARTED			= 4,
+	RX_E_STOPPED			= 5,
+	RX_E_RXF_STARTED		= 6,
+	RX_E_RXF_STOPPED		= 7,
+	RX_E_CLEANUP_DONE		= 8,
 };
 
 enum bna_rx_flags {
-	BNA_RX_F_ENABLE		= 0x01,		/* bnad enabled rxf */
-	BNA_RX_F_PORT_ENABLED	= 0x02,		/* Port object is enabled */
-	BNA_RX_F_PORT_FAILED	= 0x04,		/* Port in failed state */
+	BNA_RX_F_ENET_STARTED	= 1,
+	BNA_RX_F_ENABLED	= 2,
 };
 
 enum bna_rx_mod_flags {
-	BNA_RX_MOD_F_PORT_STARTED	= 1,
-	BNA_RX_MOD_F_PORT_LOOPBACK	= 2,
-};
-
-enum bna_rxf_oper_state {
-	BNA_RXF_OPER_STATE_RUNNING	= 0x01, /* rxf operational */
-	BNA_RXF_OPER_STATE_PAUSED	= 0x02,	/* rxf in PAUSED state */
+	BNA_RX_MOD_F_ENET_STARTED	= 1,
+	BNA_RX_MOD_F_ENET_LOOPBACK	= 2,
 };
 
 enum bna_rxf_flags {
-	BNA_RXF_FL_STOP_PENDING		= 0x01,
-	BNA_RXF_FL_FAILED		= 0x02,
-	BNA_RXF_FL_RSS_CONFIG_PENDING	= 0x04,
-	BNA_RXF_FL_OPERSTATE_CHANGED	= 0x08,
-	BNA_RXF_FL_RXF_ENABLED		= 0x10,
-	BNA_RXF_FL_VLAN_CONFIG_PENDING	= 0x20,
+	BNA_RXF_F_PAUSED		= 1,
 };
 
 enum bna_rxf_event {
 	RXF_E_START			= 1,
 	RXF_E_STOP			= 2,
 	RXF_E_FAIL			= 3,
-	RXF_E_CAM_FLTR_MOD		= 4,
-	RXF_E_STARTED			= 5,
-	RXF_E_STOPPED			= 6,
-	RXF_E_CAM_FLTR_RESP		= 7,
-	RXF_E_PAUSE			= 8,
-	RXF_E_RESUME			= 9,
-	RXF_E_STAT_CLEARED		= 10,
-};
-
-enum bna_rxf_state {
-	BNA_RXF_STOPPED			= 1,
-	BNA_RXF_START_WAIT		= 2,
-	BNA_RXF_CAM_FLTR_MOD_WAIT	= 3,
-	BNA_RXF_STARTED			= 4,
-	BNA_RXF_CAM_FLTR_CLR_WAIT	= 5,
-	BNA_RXF_STOP_WAIT		= 6,
-	BNA_RXF_PAUSE_WAIT		= 7,
-	BNA_RXF_RESUME_WAIT		= 8,
-	BNA_RXF_STAT_CLR_WAIT		= 9,
+	RXF_E_CONFIG			= 4,
+	RXF_E_PAUSE			= 5,
+	RXF_E_RESUME			= 6,
+	RXF_E_FW_RESP			= 7,
 };
 
-enum bna_port_type {
-	BNA_PORT_T_REGULAR		= 0,
-	BNA_PORT_T_LOOPBACK_INTERNAL	= 1,
-	BNA_PORT_T_LOOPBACK_EXTERNAL	= 2,
+enum bna_enet_type {
+	BNA_ENET_T_REGULAR		= 0,
+	BNA_ENET_T_LOOPBACK_INTERNAL	= 1,
+	BNA_ENET_T_LOOPBACK_EXTERNAL	= 2,
 };
 
 enum bna_link_status {
@@ -247,17 +215,27 @@ enum bna_link_status {
 	BNA_CEE_UP		= 2
 };
 
-enum bna_llport_flags {
-	BNA_LLPORT_F_ADMIN_UP		= 1,
-	BNA_LLPORT_F_PORT_ENABLED	= 2,
-	BNA_LLPORT_F_RX_STARTED		= 4
+enum bna_ethport_flags {
+	BNA_ETHPORT_F_ADMIN_UP		= 1,
+	BNA_ETHPORT_F_PORT_ENABLED	= 2,
+	BNA_ETHPORT_F_RX_STARTED	= 4,
 };
 
-enum bna_port_flags {
-	BNA_PORT_F_DEVICE_READY	= 1,
-	BNA_PORT_F_ENABLED	= 2,
-	BNA_PORT_F_PAUSE_CHANGED = 4,
-	BNA_PORT_F_MTU_CHANGED	= 8
+enum bna_enet_flags {
+	BNA_ENET_F_IOCETH_READY		= 1,
+	BNA_ENET_F_ENABLED		= 2,
+	BNA_ENET_F_PAUSE_CHANGED	= 4,
+	BNA_ENET_F_MTU_CHANGED		= 8
+};
+
+enum bna_rss_flags {
+	BNA_RSS_F_RIT_PENDING		= 1,
+	BNA_RSS_F_CFG_PENDING		= 2,
+	BNA_RSS_F_STATUS_PENDING	= 4,
+};
+
+enum bna_mod_flags {
+	BNA_MOD_F_INIT_DONE		= 1,
 };
 
 enum bna_pkt_rates {
@@ -289,10 +267,17 @@ enum bna_dim_bias_types {
 	BNA_BIAS_T_MAX			= 2
 };
 
+#define BNA_MAX_NAME_SIZE	64
+struct bna_ident {
+	int			id;
+	char			name[BNA_MAX_NAME_SIZE];
+};
+
 struct bna_mac {
 	/* This should be the first one */
 	struct list_head			qe;
 	u8			addr[ETH_ALEN];
+	struct bna_mcam_handle *handle;
 };
 
 struct bna_mem_descr {
@@ -338,23 +323,29 @@ struct bna_qpt {
 	u32		page_size;
 };
 
+struct bna_attr {
+	int			num_txq;
+	int			num_rxp;
+	int			num_ucmac;
+	int			num_mcmac;
+	int			max_rit_size;
+};
+
 /**
  *
- * Device
+ * IOCEth
  *
  */
 
-struct bna_device {
+struct bna_ioceth {
 	bfa_fsm_t		fsm;
 	struct bfa_ioc ioc;
 
-	enum bna_intr_type intr_type;
-	int			vector;
+	struct bna_attr attr;
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	struct bfi_enet_attr_req attr_req;
 
-	void (*ready_cbfn)(struct bnad *bnad, enum bna_cb_status status);
-	struct bnad *ready_cbarg;
-
-	void (*stop_cbfn)(struct bnad *bnad, enum bna_cb_status status);
+	void (*stop_cbfn)(struct bnad *bnad);
 	struct bnad *stop_cbarg;
 
 	struct bna *bna;
@@ -362,32 +353,7 @@ struct bna_device {
 
 /**
  *
- * Mail box
- *
- */
-
-struct bna_mbox_qe {
-	/* This should be the first one */
-	struct list_head			qe;
-
-	struct bfa_mbox_cmd cmd;
-	u32		cmd_len;
-	/* Callback for port, tx, rx, rxf */
-	void (*cbfn)(void *arg, int status);
-	void			*cbarg;
-};
-
-struct bna_mbox_mod {
-	enum bna_mbox_state state;
-	struct list_head			posted_q;
-	u32		msg_pending;
-	u32		msg_ctr;
-	struct bna *bna;
-};
-
-/**
- *
- * Port
+ * Enet
  *
  */
 
@@ -397,50 +363,58 @@ struct bna_pause_config {
 	enum bna_status rx_pause;
 };
 
-struct bna_llport {
+struct bna_enet {
 	bfa_fsm_t		fsm;
-	enum bna_llport_flags flags;
+	enum bna_enet_flags flags;
 
-	enum bna_port_type type;
+	enum bna_enet_type type;
 
-	enum bna_link_status link_status;
+	struct bna_pause_config pause_config;
+	int			mtu;
 
-	int			rx_started_count;
+	/* Callback for bna_enet_disable(), enet_stop() */
+	void (*stop_cbfn)(void *);
+	void			*stop_cbarg;
+
+	/* Callback for bna_enet_pause_config() */
+	void (*pause_cbfn)(struct bnad *);
+
+	/* Callback for bna_enet_mtu_set() */
+	void (*mtu_cbfn)(struct bnad *);
 
-	void (*stop_cbfn)(struct bna_port *, enum bna_cb_status);
+	struct bfa_wc		chld_stop_wc;
 
-	struct bna_mbox_qe mbox_qe;
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	struct bfi_enet_set_pause_req pause_req;
 
 	struct bna *bna;
 };
 
-struct bna_port {
-	bfa_fsm_t		fsm;
-	enum bna_port_flags flags;
-
-	enum bna_port_type type;
+/**
+ *
+ * Ethport
+ *
+ */
 
-	struct bna_llport llport;
+struct bna_ethport {
+	bfa_fsm_t		fsm;
+	enum bna_ethport_flags flags;
 
-	struct bna_pause_config pause_config;
-	u8			priority;
-	int			mtu;
+	enum bna_link_status link_status;
 
-	/* Callback for bna_port_disable(), port_stop() */
-	void (*stop_cbfn)(void *, enum bna_cb_status);
-	void			*stop_cbarg;
+	int			rx_started_count;
 
-	/* Callback for bna_port_pause_config() */
-	void (*pause_cbfn)(struct bnad *, enum bna_cb_status);
+	void (*stop_cbfn)(struct bna_enet *);
 
-	/* Callback for bna_port_mtu_set() */
-	void (*mtu_cbfn)(struct bnad *, enum bna_cb_status);
+	void (*adminup_cbfn)(struct bnad *, enum bna_cb_status);
 
 	void (*link_cbfn)(struct bnad *, enum bna_link_status);
 
-	struct bfa_wc		chld_stop_wc;
-
-	struct bna_mbox_qe mbox_qe;
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	union {
+		struct bfi_enet_enable_req admin_req;
+		struct bfi_enet_diag_lb_req lpbk_req;
+	} bfi_enet_cmd;
 
 	struct bna *bna;
 };
@@ -451,82 +425,25 @@ struct bna_port {
  *
  */
 
-/* IB index segment structure */
-struct bna_ibidx_seg {
-	/* This should be the first one */
-	struct list_head			qe;
-
-	u8			ib_seg_size;
-	u8			ib_idx_tbl_offset;
-};
-
-/* Interrupt structure */
-struct bna_intr {
-	/* This should be the first one */
-	struct list_head			qe;
-	int			ref_count;
-
-	enum bna_intr_type intr_type;
-	int			vector;
-
-	struct bna_ib *ib;
-};
-
 /* Doorbell structure */
 struct bna_ib_dbell {
 	void *__iomem doorbell_addr;
 	u32		doorbell_ack;
 };
 
-/* Interrupt timer configuration */
-struct bna_ib_config {
-	u8		coalescing_timeo;    /* Unit is 5usec. */
-
-	int			interpkt_count;
-	int			interpkt_timeo;
-
-	enum ib_flags ctrl_flags;
-};
-
 /* IB structure */
 struct bna_ib {
-	/* This should be the first one */
-	struct list_head			qe;
-
-	int			ib_id;
-
-	int			ref_count;
-	int			start_count;
-
 	struct bna_dma_addr ib_seg_host_addr;
 	void		*ib_seg_host_addr_kva;
-	u32		idx_mask; /* Size >= BNA_IBIDX_MAX_SEGSIZE */
-
-	struct bna_ibidx_seg *idx_seg;
 
 	struct bna_ib_dbell door_bell;
 
-	struct bna_intr *intr;
-
-	struct bna_ib_config ib_config;
-
-	struct bna *bna;
-};
-
-/* IB module - keeps track of IBs and interrupts */
-struct bna_ib_mod {
-	struct bna_ib *ib;		/* BFI_MAX_IB entries */
-	struct bna_intr *intr;		/* BFI_MAX_IB entries */
-	struct bna_ibidx_seg *idx_seg;	/* BNA_IBIDX_TOTAL_SEGS */
-
-	struct list_head			ib_free_q;
-
-	struct list_head		ibidx_seg_pool[BFI_IBIDX_TOTAL_POOLS];
-
-	struct list_head			intr_free_q;
-	struct list_head			intr_active_q;
+	enum bna_intr_type	intr_type;
+	int			intr_vector;
 
-	struct bna *bna;
+	u8			coalescing_timeo;    /* Unit is 5usec. */
+	int			interpkt_count;
+	int			interpkt_timeo;
 };
 
 /**
@@ -552,6 +469,7 @@ struct bna_tcb {
 	/* Control path */
 	struct bna_txq *txq;
 	struct bnad *bnad;
+	void			*priv; /* BNAD's cookie */
 	enum bna_intr_type intr_type;
 	int			intr_vector;
 	u8			priority; /* Current priority */
@@ -565,68 +483,66 @@ struct bna_txq {
 	/* This should be the first one */
 	struct list_head			qe;
 
-	int			txq_id;
-
 	u8			priority;
 
 	struct bna_qpt qpt;
 	struct bna_tcb *tcb;
-	struct bna_ib *ib;
-	int			ib_seg_offset;
+	struct bna_ib ib;
 
 	struct bna_tx *tx;
 
+	int			hw_id;
+
 	u64		tx_packets;
 	u64		tx_bytes;
 };
 
-/* TxF structure (hardware Tx Function) */
-struct bna_txf {
-	int			txf_id;
-	enum txf_flags ctrl_flags;
-	u16		vlan;
-};
-
 /* Tx object */
 struct bna_tx {
 	/* This should be the first one */
 	struct list_head			qe;
+	int			rid;
+	int			hw_id;
 
 	bfa_fsm_t		fsm;
 	enum bna_tx_flags flags;
 
 	enum bna_tx_type type;
+	int			num_txq;
 
 	struct list_head			txq_q;
-	struct bna_txf txf;
+	u16			txf_vlan_id;
 
 	/* Tx event handlers */
 	void (*tcb_setup_cbfn)(struct bnad *, struct bna_tcb *);
 	void (*tcb_destroy_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_stall_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_resume_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_cleanup_cbfn)(struct bnad *, struct bna_tcb *);
+	void (*tx_stall_cbfn)(struct bnad *, struct bna_tx *);
+	void (*tx_resume_cbfn)(struct bnad *, struct bna_tx *);
+	void (*tx_cleanup_cbfn)(struct bnad *, struct bna_tx *);
 
 	/* callback for bna_tx_disable(), bna_tx_stop() */
-	void (*stop_cbfn)(void *arg, struct bna_tx *tx,
-				enum bna_cb_status status);
+	void (*stop_cbfn)(void *arg, struct bna_tx *tx);
 	void			*stop_cbarg;
 
 	/* callback for bna_tx_prio_set() */
-	void (*prio_change_cbfn)(struct bnad *bnad, struct bna_tx *tx,
-				enum bna_cb_status status);
-
-	struct bfa_wc		txq_stop_wc;
+	void (*prio_change_cbfn)(struct bnad *bnad, struct bna_tx *tx);
 
-	struct bna_mbox_qe mbox_qe;
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	union {
+		struct bfi_enet_tx_cfg_req	cfg_req;
+		struct bfi_enet_req		req;
+		struct bfi_enet_tx_cfg_rsp	cfg_rsp;
+	} bfi_enet_cmd;
 
 	struct bna *bna;
 	void			*priv;	/* bnad's cookie */
 };
 
+/* Tx object configuration used during creation */
 struct bna_tx_config {
 	int			num_txq;
 	int			txq_depth;
+	int			coalescing_timeo;
 	enum bna_tx_type tx_type;
 };
 
@@ -635,9 +551,9 @@ struct bna_tx_event_cbfn {
 	void (*tcb_setup_cbfn)(struct bnad *, struct bna_tcb *);
 	void (*tcb_destroy_cbfn)(struct bnad *, struct bna_tcb *);
 	/* Mandatory */
-	void (*tx_stall_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_resume_cbfn)(struct bnad *, struct bna_tcb *);
-	void (*tx_cleanup_cbfn)(struct bnad *, struct bna_tcb *);
+	void (*tx_stall_cbfn)(struct bnad *, struct bna_tx *);
+	void (*tx_resume_cbfn)(struct bnad *, struct bna_tx *);
+	void (*tx_cleanup_cbfn)(struct bnad *, struct bna_tx *);
 };
 
 /* Tx module - keeps track of free, active tx objects */
@@ -651,57 +567,25 @@ struct bna_tx_mod {
 	struct list_head			txq_free_q;
 
 	/* callback for bna_tx_mod_stop() */
-	void (*stop_cbfn)(struct bna_port *port,
-				enum bna_cb_status status);
+	void (*stop_cbfn)(struct bna_enet *enet);
 
 	struct bfa_wc		tx_stop_wc;
 
 	enum bna_tx_mod_flags flags;
 
-	int			priority;
-	int			cee_link;
+	u8			prio_map;
+	int			default_prio;
+	int			iscsi_over_cee;
+	int			iscsi_prio;
+	int			prio_reconfigured;
 
-	u32		txf_bmap[2];
+	u32			rid_mask;
 
 	struct bna *bna;
 };
 
 /**
  *
- * Receive Indirection Table
- *
- */
-
-/* One row of RIT table */
-struct bna_rit_entry {
-	u8 large_rxq_id;	/* used for either large or data buffers */
-	u8 small_rxq_id;	/* used for either small or header buffers */
-};
-
-/* RIT segment */
-struct bna_rit_segment {
-	struct list_head			qe;
-
-	u32		rit_offset;
-	u32		rit_size;
-	/**
-	 * max_rit_size: Varies per RIT segment depending on how RIT is
-	 * partitioned
-	 */
-	u32		max_rit_size;
-
-	struct bna_rit_entry *rit;
-};
-
-struct bna_rit_mod {
-	struct bna_rit_entry *rit;
-	struct bna_rit_segment *rit_segment;
-
-	struct list_head		rit_seg_pool[BFI_RIT_SEG_TOTAL_POOLS];
-};
-
-/**
- *
  * Rx object
  *
  */
@@ -719,8 +603,9 @@ struct bna_rcb {
 	int			page_count;
 	/* Control path */
 	struct bna_rxq *rxq;
-	struct bna_cq *cq;
+	struct bna_ccb *ccb;
 	struct bnad *bnad;
+	void			*priv; /* BNAD's cookie */
 	unsigned long		flags;
 	int			id;
 };
@@ -728,7 +613,6 @@ struct bna_rcb {
 /* RxQ structure - QPT, configuration */
 struct bna_rxq {
 	struct list_head			qe;
-	int			rxq_id;
 
 	int			buffer_size;
 	int			q_depth;
@@ -739,6 +623,8 @@ struct bna_rxq {
 	struct bna_rxp *rxp;
 	struct bna_rx *rx;
 
+	int			hw_id;
+
 	u64		rx_packets;
 	u64		rx_bytes;
 	u64		rx_packets_with_error;
@@ -784,6 +670,7 @@ struct bna_ccb {
 	/* Control path */
 	struct bna_cq *cq;
 	struct bnad *bnad;
+	void			*priv; /* BNAD's cookie */
 	enum bna_intr_type intr_type;
 	int			intr_vector;
 	u8			rx_coalescing_timeo; /* For NAPI */
@@ -793,46 +680,43 @@ struct bna_ccb {
 
 /* CQ QPT, configuration  */
 struct bna_cq {
-	int			cq_id;
-
 	struct bna_qpt qpt;
 	struct bna_ccb *ccb;
 
-	struct bna_ib *ib;
-	u8			ib_seg_offset;
+	struct bna_ib ib;
 
 	struct bna_rx *rx;
 };
 
 struct bna_rss_config {
-	enum rss_hash_type hash_type;
+	enum bfi_enet_rss_type	hash_type;
 	u8			hash_mask;
-	u32		toeplitz_hash_key[BFI_RSS_HASH_KEY_LEN];
+	u32		toeplitz_hash_key[BFI_ENET_RSS_KEY_LEN];
 };
 
 struct bna_hds_config {
-	enum hds_header_type hdr_type;
-	int			header_size;
+	enum bfi_enet_hds_type	hdr_type;
+	int			forced_offset;
 };
 
-/* This structure is used during RX creation */
+/* Rx object configuration used during creation */
 struct bna_rx_config {
 	enum bna_rx_type rx_type;
 	int			num_paths;
 	enum bna_rxp_type rxp_type;
 	int			paused;
 	int			q_depth;
+	int			coalescing_timeo;
 	/*
 	 * Small/Large (or Header/Data) buffer size to be configured
 	 * for SLR and HDS queue type. Large buffer size comes from
-	 * port->mtu.
+	 * enet->mtu.
 	 */
 	int			small_buff_size;
 
 	enum bna_status rss_status;
 	struct bna_rss_config rss_config;
 
-	enum bna_status hds_status;
 	struct bna_hds_config hds_config;
 
 	enum bna_status vlan_strip_status;
@@ -851,51 +735,35 @@ struct bna_rxp {
 
 	/* MSI-x vector number for configuring RSS */
 	int			vector;
-
-	struct bna_mbox_qe mbox_qe;
-};
-
-/* HDS configuration structure */
-struct bna_rxf_hds {
-	enum hds_header_type hdr_type;
-	int			header_size;
-};
-
-/* RSS configuration structure */
-struct bna_rxf_rss {
-	enum rss_hash_type hash_type;
-	u8			hash_mask;
-	u32		toeplitz_hash_key[BFI_RSS_HASH_KEY_LEN];
+	int			hw_id;
 };
 
 /* RxF structure (hardware Rx Function) */
 struct bna_rxf {
 	bfa_fsm_t		fsm;
-	int			rxf_id;
-	enum rxf_flags ctrl_flags;
-	u16		default_vlan_tag;
-	enum bna_rxf_oper_state rxf_oper_state;
-	enum bna_status hds_status;
-	struct bna_rxf_hds hds_cfg;
-	enum bna_status rss_status;
-	struct bna_rxf_rss rss_cfg;
-	struct bna_rit_segment *rit_segment;
-	struct bna_rx *rx;
-	u32		forced_offset;
-	struct bna_mbox_qe mbox_qe;
-	int			mcast_rxq_id;
+	enum bna_rxf_flags flags;
+
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	union {
+		struct bfi_enet_enable_req req;
+		struct bfi_enet_rss_cfg_req rss_req;
+		struct bfi_enet_rit_req rit_req;
+		struct bfi_enet_rx_vlan_req vlan_req;
+		struct bfi_enet_mcast_add_req mcast_add_req;
+		struct bfi_enet_mcast_del_req mcast_del_req;
+		struct bfi_enet_ucast_req ucast_req;
+	} bfi_enet_cmd;
 
 	/* callback for bna_rxf_start() */
-	void (*start_cbfn) (struct bna_rx *rx, enum bna_cb_status status);
+	void (*start_cbfn) (struct bna_rx *rx);
 	struct bna_rx *start_cbarg;
 
 	/* callback for bna_rxf_stop() */
-	void (*stop_cbfn) (struct bna_rx *rx, enum bna_cb_status status);
+	void (*stop_cbfn) (struct bna_rx *rx);
 	struct bna_rx *stop_cbarg;
 
-	/* callback for bna_rxf_receive_enable() / bna_rxf_receive_disable() */
-	void (*oper_state_cbfn) (struct bnad *bnad, struct bna_rx *rx,
-			enum bna_cb_status status);
+	/* callback for bna_rx_receive_pause() / bna_rx_receive_resume() */
+	void (*oper_state_cbfn) (struct bnad *bnad, struct bna_rx *rx);
 	struct bnad *oper_state_cbarg;
 
 	/**
@@ -905,25 +773,25 @@ struct bna_rxf {
 	 *	bna_rxf_{ucast/mcast}_del(),
 	 *	bna_rxf_mode_set()
 	 */
-	void (*cam_fltr_cbfn)(struct bnad *bnad, struct bna_rx *rx,
-				enum bna_cb_status status);
+	void (*cam_fltr_cbfn)(struct bnad *bnad, struct bna_rx *rx);
 	struct bnad *cam_fltr_cbarg;
 
-	enum bna_rxf_flags rxf_flags;
-
 	/* List of unicast addresses yet to be applied to h/w */
 	struct list_head			ucast_pending_add_q;
 	struct list_head			ucast_pending_del_q;
+	struct bna_mac *ucast_pending_mac;
 	int			ucast_pending_set;
 	/* ucast addresses applied to the h/w */
 	struct list_head			ucast_active_q;
-	struct bna_mac *ucast_active_mac;
+	struct bna_mac ucast_active_mac;
+	int			ucast_active_set;
 
 	/* List of multicast addresses yet to be applied to h/w */
 	struct list_head			mcast_pending_add_q;
 	struct list_head			mcast_pending_del_q;
 	/* multicast addresses applied to the h/w */
 	struct list_head			mcast_active_q;
+	struct list_head			mcast_handle_q;
 
 	/* Rx modes yet to be applied to h/w */
 	enum bna_rxmode rxmode_pending;
@@ -931,41 +799,58 @@ struct bna_rxf {
 	/* Rx modes applied to h/w */
 	enum bna_rxmode rxmode_active;
 
+	u8			vlan_pending_bitmask;
 	enum bna_status vlan_filter_status;
-	u32		vlan_filter_table[(BFI_MAX_VLAN + 1) / 32];
+	u32	vlan_filter_table[(BFI_ENET_VLAN_ID_MAX + 1) / 32];
+	bool			vlan_strip_pending;
+	enum bna_status		vlan_strip_status;
+
+	enum bna_rss_flags	rss_pending;
+	enum bna_status		rss_status;
+	struct bna_rss_config	rss_cfg;
+	u8			*rit;
+	int			rit_size;
+
+	struct bna_rx		*rx;
 };
 
 /* Rx object */
 struct bna_rx {
 	/* This should be the first one */
 	struct list_head			qe;
+	int			rid;
+	int			hw_id;
 
 	bfa_fsm_t		fsm;
 
 	enum bna_rx_type type;
 
-	/* list-head for RX path objects */
+	int			num_paths;
 	struct list_head			rxp_q;
 
+	struct bna_hds_config	hds_cfg;
+
 	struct bna_rxf rxf;
 
 	enum bna_rx_flags rx_flags;
 
-	struct bna_mbox_qe mbox_qe;
-
-	struct bfa_wc		rxq_stop_wc;
+	struct bfa_msgq_cmd_entry msgq_cmd;
+	union {
+		struct bfi_enet_rx_cfg_req	cfg_req;
+		struct bfi_enet_req		req;
+		struct bfi_enet_rx_cfg_rsp	cfg_rsp;
+	} bfi_enet_cmd;
 
 	/* Rx event handlers */
 	void (*rcb_setup_cbfn)(struct bnad *, struct bna_rcb *);
 	void (*rcb_destroy_cbfn)(struct bnad *, struct bna_rcb *);
 	void (*ccb_setup_cbfn)(struct bnad *, struct bna_ccb *);
 	void (*ccb_destroy_cbfn)(struct bnad *, struct bna_ccb *);
-	void (*rx_cleanup_cbfn)(struct bnad *, struct bna_ccb *);
-	void (*rx_post_cbfn)(struct bnad *, struct bna_rcb *);
+	void (*rx_cleanup_cbfn)(struct bnad *, struct bna_rx *);
+	void (*rx_post_cbfn)(struct bnad *, struct bna_rx *);
 
 	/* callback for bna_rx_disable(), bna_rx_stop() */
-	void (*stop_cbfn)(void *arg, struct bna_rx *rx,
-				enum bna_cb_status status);
+	void (*stop_cbfn)(void *arg, struct bna_rx *rx);
 	void			*stop_cbarg;
 
 	struct bna *bna;
@@ -979,8 +864,8 @@ struct bna_rx_event_cbfn {
 	void (*ccb_setup_cbfn)(struct bnad *, struct bna_ccb *);
 	void (*ccb_destroy_cbfn)(struct bnad *, struct bna_ccb *);
 	/* Mandatory */
-	void (*rx_cleanup_cbfn)(struct bnad *, struct bna_ccb *);
-	void (*rx_post_cbfn)(struct bnad *, struct bna_rcb *);
+	void (*rx_cleanup_cbfn)(struct bnad *, struct bna_rx *);
+	void (*rx_post_cbfn)(struct bnad *, struct bna_rx *);
 };
 
 /* Rx module - keeps track of free, active rx objects */
@@ -1003,12 +888,11 @@ struct bna_rx_mod {
 	enum bna_rx_mod_flags flags;
 
 	/* callback for bna_rx_mod_stop() */
-	void (*stop_cbfn)(struct bna_port *port,
-				enum bna_cb_status status);
+	void (*stop_cbfn)(struct bna_enet *enet);
 
 	struct bfa_wc		rx_stop_wc;
 	u32		dim_vector[BNA_LOAD_T_MAX][BNA_BIAS_T_MAX];
-	u32		rxf_bmap[2];
+	u32		rid_mask;
 };
 
 /**
@@ -1024,9 +908,18 @@ struct bna_ucam_mod {
 	struct bna *bna;
 };
 
+struct bna_mcam_handle {
+	/* This should be the first one */
+	struct list_head			qe;
+	int			handle;
+	int			refcnt;
+};
+
 struct bna_mcam_mod {
 	struct bna_mac *mcmac;		/* BFI_MAX_MCMAC entries */
+	struct bna_mcam_handle *mchandle;	/* BFI_MAX_MCMAC entries */
 	struct list_head			free_q;
+	struct list_head			free_handle_q;
 
 	struct bna *bna;
 };
@@ -1037,50 +930,20 @@ struct bna_mcam_mod {
  *
  */
 
-struct bna_tx_stats {
-	int			tx_state;
-	int			tx_flags;
-	int			num_txqs;
-	u32		txq_bmap[2];
-	int			txf_id;
-};
-
-struct bna_rx_stats {
-	int			rx_state;
-	int			rx_flags;
-	int			num_rxps;
-	int			num_rxqs;
-	u32		rxq_bmap[2];
-	u32		cq_bmap[2];
-	int			rxf_id;
-	int			rxf_state;
-	int			rxf_oper_state;
-	int			num_active_ucast;
-	int			num_active_mcast;
-	int			rxmode_active;
-	int			vlan_filter_status;
-	u32		vlan_filter_table[(BFI_MAX_VLAN + 1) / 32];
-	int			rss_status;
-	int			hds_status;
-};
-
-struct bna_sw_stats {
-	int			device_state;
-	int			port_state;
-	int			port_flags;
-	int			llport_state;
-	int			priority;
-	int			num_active_tx;
-	int			num_active_rx;
-	struct bna_tx_stats tx_stats[BFI_MAX_TXQ];
-	struct bna_rx_stats rx_stats[BFI_MAX_RXQ];
+struct bna_stats {
+	struct bna_dma_addr	hw_stats_dma;
+	struct bfi_enet_stats	*hw_stats_kva;
+	struct bfi_enet_stats	hw_stats;
 };
 
-struct bna_stats {
-	u32		txf_bmap[2];
-	u32		rxf_bmap[2];
-	struct bfi_ll_stats	*hw_stats;
-	struct bna_sw_stats *sw_stats;
+struct bna_stats_mod {
+	bool		ioc_ready;
+	bool		stats_get_busy;
+	bool		stats_clr_busy;
+	struct bfa_msgq_cmd_entry stats_get_cmd;
+	struct bfa_msgq_cmd_entry stats_clr_cmd;
+	struct bfi_enet_stats_req stats_get;
+	struct bfi_enet_stats_req stats_clr;
 };
 
 /**
@@ -1090,38 +953,32 @@ struct bna_stats {
  */
 
 struct bna {
+	struct bna_ident ident;
 	struct bfa_pcidev pcidev;
 
-	int			port_num;
-
-	struct bna_chip_regs regs;
+	struct bna_reg regs;
+	struct bna_bit_defn bits;
 
-	struct bna_dma_addr hw_stats_dma;
 	struct bna_stats stats;
 
-	struct bna_device device;
+	struct bna_ioceth ioceth;
 	struct bfa_cee cee;
+	struct bfa_msgq msgq;
 
-	struct bna_mbox_mod mbox_mod;
-
-	struct bna_port port;
+	struct bna_ethport ethport;
+	struct bna_enet enet;
+	struct bna_stats_mod stats_mod;
 
 	struct bna_tx_mod tx_mod;
-
 	struct bna_rx_mod rx_mod;
-
-	struct bna_ib_mod ib_mod;
-
 	struct bna_ucam_mod ucam_mod;
 	struct bna_mcam_mod mcam_mod;
 
-	struct bna_rit_mod rit_mod;
+	enum bna_mod_flags mod_flags;
 
-	int			rxf_promisc_id;
-
-	struct bna_mbox_qe mbox_qe;
+	int			default_mode_rid;
+	int			promisc_rid;
 
 	struct bnad *bnad;
 };
-
 #endif	/* __BNA_TYPES_H__ */
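
For reference, the embedded msgq_cmd/bfi_enet_cmd pairs above replace the
old per-object bna_mbox_qe plumbing: each Tx/Rx/RxF/enet object formats its
BFI request inside its own union and posts it on the shared message queue,
so no mailbox queue element has to be allocated separately. A minimal
sketch of the posting pattern, assuming the bfa_msgq_cmd_set()/
bfa_msgq_cmd_post() helpers and the bfi_msgq_mhdr_set() macro introduced
by the MSGQ and ENET patches earlier in this series (illustration only,
not part of this patch):

static void
bna_bfi_tx_enet_stop_sketch(struct bna_tx *tx)
{
	struct bfi_enet_req *req = &tx->bfi_enet_cmd.req;

	/* Format the request in the union embedded in the Tx object */
	bfi_msgq_mhdr_set(req->mh, BFI_MC_ENET,
		BFI_ENET_H2I_TX_CFG_CLR_REQ, 0, tx->rid);
	req->mh.num_entries = htons(
		bfi_msgq_num_cmd_entries(sizeof(struct bfi_enet_req)));

	/* msgq_cmd tracks the one in-flight command per object */
	bfa_msgq_cmd_set(&tx->msgq_cmd, NULL, NULL,
		sizeof(struct bfi_enet_req), &req->mh);
	bfa_msgq_cmd_post(&tx->bna->msgq, &tx->msgq_cmd);
}
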
diff --git a/drivers/net/bna/bnad.c b/drivers/net/bna/bnad.c
index abad4e3..50a6868 100644
--- a/drivers/net/bna/bnad.c
+++ b/drivers/net/bna/bnad.c
@@ -440,7 +440,6 @@ bnad_poll_cq(struct bnad *bnad, struct bna_ccb *ccb, int budget)
 	struct bnad_skb_unmap *unmap_array;
 	struct sk_buff *skb;
 	u32 flags, unmap_cons;
-	u32 qid0 = ccb->rcb[0]->rxq->rxq_id;
 	struct bna_pkt_rate *pkt_rt = &ccb->pkt_rate;
 
 	if (!test_bit(BNAD_RXQ_STARTED, &ccb->rcb[0]->flags))
@@ -454,10 +453,10 @@ bnad_poll_cq(struct bnad *bnad, struct bna_ccb *ccb, int budget)
 		packets++;
 		BNA_UPDATE_PKT_CNT(pkt_rt, ntohs(cmpl->length));
 
-		if (qid0 == cmpl->rxq_id)
-			rcb = ccb->rcb[0];
-		else
+		if (bna_is_small_rxq(cmpl->rxq_id))
 			rcb = ccb->rcb[1];
+		else
+			rcb = ccb->rcb[0];
 
 		unmap_q = rcb->unmap_q;
 		unmap_array = unmap_q->unmap_array;
@@ -618,7 +617,7 @@ bnad_msix_mbox_handler(int irq, void *data)
 
 	bna_intr_status_get(&bnad->bna, intr_status);
 
-	if (BNA_IS_MBOX_ERR_INTR(intr_status))
+	if (BNA_IS_MBOX_ERR_INTR(&bnad->bna, intr_status))
 		bna_mbox_handler(&bnad->bna, intr_status);
 
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
@@ -646,7 +645,7 @@ bnad_isr(int irq, void *data)
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 
-	if (BNA_IS_MBOX_ERR_INTR(intr_status))
+	if (BNA_IS_MBOX_ERR_INTR(&bnad->bna, intr_status))
 		bna_mbox_handler(&bnad->bna, intr_status);
 
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
@@ -713,43 +712,49 @@ bnad_set_netdev_perm_addr(struct bnad *bnad)
 
 /* Callbacks */
 void
-bnad_cb_device_enable_mbox_intr(struct bnad *bnad)
+bnad_cb_mbox_intr_enable(struct bnad *bnad)
 {
 	bnad_enable_mbox_irq(bnad);
 }
 
 void
-bnad_cb_device_disable_mbox_intr(struct bnad *bnad)
+bnad_cb_mbox_intr_disable(struct bnad *bnad)
 {
 	bnad_disable_mbox_irq(bnad);
 }
 
 void
-bnad_cb_device_enabled(struct bnad *bnad, enum bna_cb_status status)
+bnad_cb_ioceth_ready(struct bnad *bnad)
 {
+	bnad->bnad_completions.ioc_comp_status = BNA_CB_SUCCESS;
 	complete(&bnad->bnad_completions.ioc_comp);
-	bnad->bnad_completions.ioc_comp_status = status;
 }
 
 void
-bnad_cb_device_disabled(struct bnad *bnad, enum bna_cb_status status)
+bnad_cb_ioceth_failed(struct bnad *bnad)
 {
+	bnad->bnad_completions.ioc_comp_status = BNA_CB_FAIL;
+	complete(&bnad->bnad_completions.ioc_comp);
+}
+
+void
+bnad_cb_ioceth_disabled(struct bnad *bnad)
+{
+	bnad->bnad_completions.ioc_comp_status = BNA_CB_SUCCESS;
 	complete(&bnad->bnad_completions.ioc_comp);
-	bnad->bnad_completions.ioc_comp_status = status;
 }
 
 static void
-bnad_cb_port_disabled(void *arg, enum bna_cb_status status)
+bnad_cb_enet_disabled(void *arg)
 {
 	struct bnad *bnad = (struct bnad *)arg;
 
-	complete(&bnad->bnad_completions.port_comp);
-
 	netif_carrier_off(bnad->netdev);
+	complete(&bnad->bnad_completions.enet_comp);
 }
 
 void
-bnad_cb_port_link_status(struct bnad *bnad,
+bnad_cb_ethport_link_status(struct bnad *bnad,
 			enum bna_link_status link_status)
 {
 	bool link_up = 0;
@@ -757,34 +762,60 @@ bnad_cb_port_link_status(struct bnad *bnad,
 	link_up = (link_status == BNA_LINK_UP) || (link_status == BNA_CEE_UP);
 
 	if (link_status == BNA_CEE_UP) {
+		if (!test_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags))
+			BNAD_UPDATE_CTR(bnad, cee_toggle);
 		set_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags);
-		BNAD_UPDATE_CTR(bnad, cee_up);
-	} else
+	} else {
+		if (test_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags))
+			BNAD_UPDATE_CTR(bnad, cee_toggle);
 		clear_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags);
+	}
 
 	if (link_up) {
 		if (!netif_carrier_ok(bnad->netdev)) {
-			struct bna_tcb *tcb = bnad->tx_info[0].tcb[0];
-			if (!tcb)
-				return;
-			pr_warn("bna: %s link up\n",
+			uint tx_id, tcb_id;
+			printk(KERN_WARNING "bna: %s link up\n",
 				bnad->netdev->name);
 			netif_carrier_on(bnad->netdev);
 			BNAD_UPDATE_CTR(bnad, link_toggle);
-			if (test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags)) {
-				/* Force an immediate Transmit Schedule */
-				pr_info("bna: %s TX_STARTED\n",
-					bnad->netdev->name);
-				netif_wake_queue(bnad->netdev);
-				BNAD_UPDATE_CTR(bnad, netif_queue_wakeup);
-			} else {
-				netif_stop_queue(bnad->netdev);
-				BNAD_UPDATE_CTR(bnad, netif_queue_stop);
+			for (tx_id = 0; tx_id < bnad->num_tx; tx_id++) {
+				for (tcb_id = 0; tcb_id < bnad->num_txq_per_tx;
+				      tcb_id++) {
+					struct bna_tcb *tcb =
+					bnad->tx_info[tx_id].tcb[tcb_id];
+					u32 txq_id;
+					if (!tcb)
+						continue;
+
+					txq_id = tcb->id;
+
+					if (test_bit(BNAD_TXQ_TX_STARTED,
+						     &tcb->flags)) {
+						/*
+						 * Force an immediate
+						 * Transmit Schedule */
+						printk(KERN_INFO "bna: %s %d "
+						      "TXQ_STARTED\n",
+						       bnad->netdev->name,
+						       txq_id);
+						netif_wake_subqueue(
+								bnad->netdev,
+								txq_id);
+						BNAD_UPDATE_CTR(bnad,
+							netif_queue_wakeup);
+					} else {
+						netif_stop_subqueue(
+								bnad->netdev,
+								txq_id);
+						BNAD_UPDATE_CTR(bnad,
+							netif_queue_stop);
+					}
+				}
 			}
 		}
 	} else {
 		if (netif_carrier_ok(bnad->netdev)) {
-			pr_warn("bna: %s link down\n",
+			printk(KERN_WARNING "bna: %s link down\n",
 				bnad->netdev->name);
 			netif_carrier_off(bnad->netdev);
 			BNAD_UPDATE_CTR(bnad, link_toggle);
@@ -793,8 +824,7 @@ bnad_cb_port_link_status(struct bnad *bnad,
 }
 
 static void
-bnad_cb_tx_disabled(void *arg, struct bna_tx *tx,
-			enum bna_cb_status status)
+bnad_cb_tx_disabled(void *arg, struct bna_tx *tx)
 {
 	struct bnad *bnad = (struct bnad *)arg;
 
@@ -871,108 +901,166 @@ bnad_cb_ccb_destroy(struct bnad *bnad, struct bna_ccb *ccb)
 }
 
 static void
-bnad_cb_tx_stall(struct bnad *bnad, struct bna_tcb *tcb)
+bnad_cb_tx_stall(struct bnad *bnad, struct bna_tx *tx)
 {
 	struct bnad_tx_info *tx_info =
-			(struct bnad_tx_info *)tcb->txq->tx->priv;
-
-	if (tx_info != &bnad->tx_info[0])
-		return;
+			(struct bnad_tx_info *)tx->priv;
+	struct bna_tcb *tcb;
+	u32 txq_id;
+	int i;
 
-	clear_bit(BNAD_TXQ_TX_STARTED, &tcb->flags);
-	netif_stop_queue(bnad->netdev);
-	pr_info("bna: %s TX_STOPPED\n", bnad->netdev->name);
+	for (i = 0; i < BNAD_MAX_TXQ_PER_TX; i++) {
+		tcb = tx_info->tcb[i];
+		if (!tcb)
+			continue;
+		txq_id = tcb->id;
+		clear_bit(BNAD_TXQ_TX_STARTED, &tcb->flags);
+		netif_stop_subqueue(bnad->netdev, txq_id);
+		printk(KERN_INFO "bna: %s %d TXQ_STOPPED\n",
+			bnad->netdev->name, txq_id);
+	}
 }
 
 static void
-bnad_cb_tx_resume(struct bnad *bnad, struct bna_tcb *tcb)
+bnad_cb_tx_resume(struct bnad *bnad, struct bna_tx *tx)
 {
-	struct bnad_unmap_q *unmap_q = tcb->unmap_q;
+	struct bnad_tx_info *tx_info = (struct bnad_tx_info *)tx->priv;
+	struct bna_tcb *tcb;
+	struct bnad_unmap_q *unmap_q;
+	u32 txq_id;
+	int i;
 
-	if (test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags))
-		return;
+	for (i = 0; i < BNAD_MAX_TXQ_PER_TX; i++) {
+		tcb = tx_info->tcb[i];
+		if (!tcb)
+			continue;
+		txq_id = tcb->id;
 
-	clear_bit(BNAD_RF_TX_SHUTDOWN_DELAYED, &bnad->run_flags);
+		unmap_q = tcb->unmap_q;
 
-	while (test_and_set_bit(BNAD_TXQ_FREE_SENT, &tcb->flags))
-		cpu_relax();
+		if (test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags))
+			continue;
 
-	bnad_free_all_txbufs(bnad, tcb);
+		while (test_and_set_bit(BNAD_TXQ_FREE_SENT, &tcb->flags))
+			cpu_relax();
 
-	unmap_q->producer_index = 0;
-	unmap_q->consumer_index = 0;
+		bnad_free_all_txbufs(bnad, tcb);
 
-	smp_mb__before_clear_bit();
-	clear_bit(BNAD_TXQ_FREE_SENT, &tcb->flags);
+		unmap_q->producer_index = 0;
+		unmap_q->consumer_index = 0;
+
+		smp_mb__before_clear_bit();
+		clear_bit(BNAD_TXQ_FREE_SENT, &tcb->flags);
+
+		set_bit(BNAD_TXQ_TX_STARTED, &tcb->flags);
+
+		if (netif_carrier_ok(bnad->netdev)) {
+			printk(KERN_INFO "bna: %s %d TXQ_STARTED\n",
+				bnad->netdev->name, txq_id);
+			netif_wake_subqueue(bnad->netdev, txq_id);
+			BNAD_UPDATE_CTR(bnad, netif_queue_wakeup);
+		}
+	}
 
 	/*
-	 * Workaround for first device enable failure & we
+	 * Workaround for first ioceth enable failure & we
 	 * get a 0 MAC address. We try to get the MAC address
 	 * again here.
 	 */
 	if (is_zero_ether_addr(&bnad->perm_addr.mac[0])) {
-		bna_port_mac_get(&bnad->bna.port, &bnad->perm_addr);
+		bna_enet_perm_mac_get(&bnad->bna.enet, &bnad->perm_addr);
 		bnad_set_netdev_perm_addr(bnad);
 	}
-
-	set_bit(BNAD_TXQ_TX_STARTED, &tcb->flags);
-
-	if (netif_carrier_ok(bnad->netdev)) {
-		pr_info("bna: %s TX_STARTED\n", bnad->netdev->name);
-		netif_wake_queue(bnad->netdev);
-		BNAD_UPDATE_CTR(bnad, netif_queue_wakeup);
-	}
 }
 
 static void
-bnad_cb_tx_cleanup(struct bnad *bnad, struct bna_tcb *tcb)
+bnad_cb_tx_cleanup(struct bnad *bnad, struct bna_tx *tx)
 {
-	/* Delay only once for the whole Tx Path Shutdown */
-	if (!test_and_set_bit(BNAD_RF_TX_SHUTDOWN_DELAYED, &bnad->run_flags))
-		mdelay(BNAD_TXRX_SYNC_MDELAY);
+	struct bnad_tx_info *tx_info = (struct bnad_tx_info *)tx->priv;
+	struct bna_tcb *tcb;
+	int i;
+
+	for (i = 0; i < BNAD_MAX_TXQ_PER_TX; i++) {
+		tcb = tx_info->tcb[i];
+		if (!tcb)
+			continue;
+	}
+
+	mdelay(BNAD_TXRX_SYNC_MDELAY);
+	bna_tx_cleanup_complete(tx);
 }
 
 static void
-bnad_cb_rx_cleanup(struct bnad *bnad,
-			struct bna_ccb *ccb)
+bnad_cb_rx_cleanup(struct bnad *bnad, struct bna_rx *rx)
 {
-	clear_bit(BNAD_RXQ_STARTED, &ccb->rcb[0]->flags);
+	struct bnad_rx_info *rx_info = (struct bnad_rx_info *)rx->priv;
+	struct bna_ccb *ccb;
+	struct bnad_rx_ctrl *rx_ctrl;
+	int i;
+
+	mdelay(BNAD_TXRX_SYNC_MDELAY);
+
+	for (i = 0; i < BNAD_MAX_RXP_PER_RX; i++) {
+		rx_ctrl = &rx_info->rx_ctrl[i];
+		ccb = rx_ctrl->ccb;
+		if (!ccb)
+			continue;
+
+		clear_bit(BNAD_RXQ_STARTED, &ccb->rcb[0]->flags);
+
+		if (ccb->rcb[1])
+			clear_bit(BNAD_RXQ_STARTED, &ccb->rcb[1]->flags);
 
-	if (ccb->rcb[1])
-		clear_bit(BNAD_RXQ_STARTED, &ccb->rcb[1]->flags);
+		while (test_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags))
+			cpu_relax();
+	}
 
-	if (!test_and_set_bit(BNAD_RF_RX_SHUTDOWN_DELAYED, &bnad->run_flags))
-		mdelay(BNAD_TXRX_SYNC_MDELAY);
+	bna_rx_cleanup_complete(rx);
 }
 
 static void
-bnad_cb_rx_post(struct bnad *bnad, struct bna_rcb *rcb)
+bnad_cb_rx_post(struct bnad *bnad, struct bna_rx *rx)
 {
-	struct bnad_unmap_q *unmap_q = rcb->unmap_q;
-
-	clear_bit(BNAD_RF_RX_SHUTDOWN_DELAYED, &bnad->run_flags);
-
-	if (rcb == rcb->cq->ccb->rcb[0])
-		bnad_cq_cmpl_init(bnad, rcb->cq->ccb);
+	struct bnad_rx_info *rx_info = (struct bnad_rx_info *)rx->priv;
+	struct bna_ccb *ccb;
+	struct bna_rcb *rcb;
+	struct bnad_rx_ctrl *rx_ctrl;
+	struct bnad_unmap_q *unmap_q;
+	int i;
+	int j;
 
-	bnad_free_all_rxbufs(bnad, rcb);
+	for (i = 0; i < BNAD_MAX_RXP_PER_RX; i++) {
+		rx_ctrl = &rx_info->rx_ctrl[i];
+		ccb = rx_ctrl->ccb;
+		if (!ccb)
+			continue;
 
-	set_bit(BNAD_RXQ_STARTED, &rcb->flags);
+		bnad_cq_cmpl_init(bnad, ccb);
 
-	/* Now allocate & post buffers for this RCB */
-	/* !!Allocation in callback context */
-	if (!test_and_set_bit(BNAD_RXQ_REFILL, &rcb->flags)) {
-		if (BNA_QE_FREE_CNT(unmap_q, unmap_q->q_depth)
-			 >> BNAD_RXQ_REFILL_THRESHOLD_SHIFT)
-			bnad_alloc_n_post_rxbufs(bnad, rcb);
-		smp_mb__before_clear_bit();
-		clear_bit(BNAD_RXQ_REFILL, &rcb->flags);
+		for (j = 0; j < BNAD_MAX_RXQ_PER_RXP; j++) {
+			rcb = ccb->rcb[j];
+			if (!rcb)
+				continue;
+			bnad_free_all_rxbufs(bnad, rcb);
+
+			set_bit(BNAD_RXQ_STARTED, &rcb->flags);
+			unmap_q = rcb->unmap_q;
+
+			/* Now allocate & post buffers for this RCB */
+			/* !!Allocation in callback context */
+			if (!test_and_set_bit(BNAD_RXQ_REFILL, &rcb->flags)) {
+				if (BNA_QE_FREE_CNT(unmap_q, unmap_q->q_depth)
+					>> BNAD_RXQ_REFILL_THRESHOLD_SHIFT)
+					bnad_alloc_n_post_rxbufs(bnad, rcb);
+				smp_mb__before_clear_bit();
+				clear_bit(BNAD_RXQ_REFILL, &rcb->flags);
+			}
+		}
 	}
 }
 
 static void
-bnad_cb_rx_disabled(void *arg, struct bna_rx *rx,
-			enum bna_cb_status status)
+bnad_cb_rx_disabled(void *arg, struct bna_rx *rx)
 {
 	struct bnad *bnad = (struct bnad *)arg;
 
@@ -980,10 +1068,9 @@ bnad_cb_rx_disabled(void *arg, struct bna_rx *rx,
 }
 
 static void
-bnad_cb_rx_mcast_add(struct bnad *bnad, struct bna_rx *rx,
-				enum bna_cb_status status)
+bnad_cb_rx_mcast_add(struct bnad *bnad, struct bna_rx *rx)
 {
-	bnad->bnad_completions.mcast_comp_status = status;
+	bnad->bnad_completions.mcast_comp_status = BNA_CB_SUCCESS;
 	complete(&bnad->bnad_completions.mcast_comp);
 }
 
@@ -1002,6 +1089,13 @@ bnad_cb_stats_get(struct bnad *bnad, enum bna_cb_status status,
 		  jiffies + msecs_to_jiffies(BNAD_STATS_TIMER_FREQ));
 }
 
+static void
+bnad_cb_enet_mtu_set(struct bnad *bnad)
+{
+	bnad->bnad_completions.mtu_comp_status = BNA_CB_SUCCESS;
+	complete(&bnad->bnad_completions.mtu_comp);
+}
+
 /* Resource allocation, free functions */
 
 static void
@@ -1413,7 +1507,7 @@ bnad_ioc_timeout(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bfa_nw_ioc_timeout((void *) &bnad->bna.device.ioc);
+	bfa_nw_ioc_timeout((void *) &bnad->bna.ioceth.ioc);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1424,7 +1518,7 @@ bnad_ioc_hb_check(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bfa_nw_ioc_hb_check((void *) &bnad->bna.device.ioc);
+	bfa_nw_ioc_hb_check((void *) &bnad->bna.ioceth.ioc);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1435,7 +1529,7 @@ bnad_iocpf_timeout(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bfa_nw_iocpf_timeout((void *) &bnad->bna.device.ioc);
+	bfa_nw_iocpf_timeout((void *) &bnad->bna.ioceth.ioc);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1446,7 +1540,7 @@ bnad_iocpf_sem_timeout(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bfa_nw_iocpf_sem_timeout((void *) &bnad->bna.device.ioc);
+	bfa_nw_iocpf_sem_timeout((void *) &bnad->bna.ioceth.ioc);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1505,7 +1599,7 @@ bnad_stats_timeout(unsigned long data)
 		return;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_stats_get(&bnad->bna);
+	bna_hw_stats_get(&bnad->bna);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -1751,10 +1845,10 @@ bnad_init_rx_config(struct bnad *bnad, struct bna_rx_config *rx_config)
 	if (bnad->num_rxp_per_rx > 1) {
 		rx_config->rss_status = BNA_STATUS_T_ENABLED;
 		rx_config->rss_config.hash_type =
-				(BFI_RSS_T_V4_TCP |
-				 BFI_RSS_T_V6_TCP |
-				 BFI_RSS_T_V4_IP  |
-				 BFI_RSS_T_V6_IP);
+				(BFI_ENET_RSS_IPV6 |
+				 BFI_ENET_RSS_IPV6_TCP |
+				 BFI_ENET_RSS_IPV4 |
+				 BFI_ENET_RSS_IPV4_TCP);
 		rx_config->rss_config.hash_mask =
 				bnad->num_rxp_per_rx - 1;
 		get_random_bytes(rx_config->rss_config.toeplitz_hash_key,
@@ -1987,7 +2081,7 @@ bnad_restore_vlans(struct bnad *bnad, u32 rx_id)
 	if (!bnad->vlan_grp)
 		return;
 
-	BUG_ON(!(VLAN_N_VID == (BFI_MAX_VLAN + 1)));
+	BUG_ON(!(VLAN_N_VID == (BFI_ENET_VLAN_ID_MAX + 1)));
 
 	for (vlan_id = 0; vlan_id < VLAN_N_VID; vlan_id++) {
 		if (!vlan_group_get_device(bnad->vlan_grp, vlan_id))
@@ -2042,11 +2136,11 @@ bnad_netdev_qstats_fill(struct bnad *bnad, struct rtnl_link_stats64 *stats)
 void
 bnad_netdev_hwstats_fill(struct bnad *bnad, struct rtnl_link_stats64 *stats)
 {
-	struct bfi_ll_stats_mac *mac_stats;
+	struct bfi_enet_stats_mac *mac_stats;
 	u64 bmap;
 	int i;
 
-	mac_stats = &bnad->stats.bna_stats->hw_stats->mac_stats;
+	mac_stats = &bnad->stats.bna_stats->hw_stats.mac_stats;
 	stats->rx_errors =
 		mac_stats->rx_fcs_error + mac_stats->rx_alignment_error +
 		mac_stats->rx_frame_length_error + mac_stats->rx_code_error +
@@ -2065,13 +2159,12 @@ bnad_netdev_hwstats_fill(struct bnad *bnad, struct rtnl_link_stats64 *stats)
 	stats->rx_crc_errors = mac_stats->rx_fcs_error;
 	stats->rx_frame_errors = mac_stats->rx_alignment_error;
 	/* recv'r fifo overrun */
-	bmap = (u64)bnad->stats.bna_stats->rxf_bmap[0] |
-		((u64)bnad->stats.bna_stats->rxf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_RXF_ID_MAX); i++) {
+	bmap = bna_rx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1) {
 			stats->rx_fifo_errors +=
 				bnad->stats.bna_stats->
-					hw_stats->rxf_stats[i].frame_drops;
+					hw_stats.rxf_stats[i].frame_drops;
 			break;
 		}
 		bmap >>= 1;
@@ -2151,7 +2244,7 @@ bnad_q_num_init(struct bnad *bnad)
 	int rxps;
 
 	rxps = min((uint)num_online_cpus(),
-			(uint)(BNAD_MAX_RXS * BNAD_MAX_RXPS_PER_RX));
+			(uint)(BNAD_MAX_RX * BNAD_MAX_RXP_PER_RX));
 
 	if (!(bnad->cfg_flags & BNAD_CF_MSIX))
 		rxps = 1;	/* INTx */
@@ -2182,37 +2275,41 @@ bnad_q_num_adjust(struct bnad *bnad, int msix_vectors)
 		bnad->num_rxp_per_rx = 1;
 }
 
-/* Enable / disable device */
-static void
-bnad_device_disable(struct bnad *bnad)
+/* Enable / disable ioceth */
+static int
+bnad_ioceth_disable(struct bnad *bnad)
 {
 	unsigned long flags;
-
-	init_completion(&bnad->bnad_completions.ioc_comp);
+	int err = 0;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_device_disable(&bnad->bna.device, BNA_HARD_CLEANUP);
+	init_completion(&bnad->bnad_completions.ioc_comp);
+	bna_ioceth_disable(&bnad->bna.ioceth, BNA_HARD_CLEANUP);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
-	wait_for_completion(&bnad->bnad_completions.ioc_comp);
+	wait_for_completion_timeout(&bnad->bnad_completions.ioc_comp,
+		msecs_to_jiffies(BNAD_IOCETH_TIMEOUT));
+
+	err = bnad->bnad_completions.ioc_comp_status;
+	return err;
 }
 
 static int
-bnad_device_enable(struct bnad *bnad)
+bnad_ioceth_enable(struct bnad *bnad)
 {
 	int err = 0;
 	unsigned long flags;
 
-	init_completion(&bnad->bnad_completions.ioc_comp);
-
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_device_enable(&bnad->bna.device);
+	init_completion(&bnad->bnad_completions.ioc_comp);
+	bnad->bnad_completions.ioc_comp_status = BNA_CB_WAITING;
+	bna_ioceth_enable(&bnad->bna.ioceth);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
-	wait_for_completion(&bnad->bnad_completions.ioc_comp);
+	wait_for_completion_timeout(&bnad->bnad_completions.ioc_comp,
+		msecs_to_jiffies(BNAD_IOCETH_TIMEOUT));
 
-	if (bnad->bnad_completions.ioc_comp_status)
-		err = bnad->bnad_completions.ioc_comp_status;
+	err = bnad->bnad_completions.ioc_comp_status;
 
 	return err;
 }
@@ -2365,9 +2462,9 @@ bnad_open(struct net_device *netdev)
 	mtu = ETH_HLEN + bnad->netdev->mtu + ETH_FCS_LEN;
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_port_mtu_set(&bnad->bna.port, mtu, NULL);
-	bna_port_pause_config(&bnad->bna.port, &pause_config, NULL);
-	bna_port_enable(&bnad->bna.port);
+	bna_enet_mtu_set(&bnad->bna.enet, mtu, NULL);
+	bna_enet_pause_config(&bnad->bna.enet, &pause_config, NULL);
+	bna_enet_enable(&bnad->bna.enet);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 	/* Enable broadcast */
@@ -2407,14 +2504,14 @@ bnad_stop(struct net_device *netdev)
 	/* Stop the stats timer */
 	bnad_stats_timer_stop(bnad);
 
-	init_completion(&bnad->bnad_completions.port_comp);
+	init_completion(&bnad->bnad_completions.enet_comp);
 
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_port_disable(&bnad->bna.port, BNA_HARD_CLEANUP,
-			bnad_cb_port_disabled);
+	bna_enet_disable(&bnad->bna.enet, BNA_HARD_CLEANUP,
+			bnad_cb_enet_disabled);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
-	wait_for_completion(&bnad->bnad_completions.port_comp);
+	wait_for_completion(&bnad->bnad_completions.enet_comp);
 
 	bnad_cleanup_tx(bnad, 0);
 	bnad_cleanup_rx(bnad, 0);
@@ -2448,7 +2545,7 @@ bnad_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 	struct bnad_unmap_q *unmap_q;
 	dma_addr_t		dma_addr;
 	struct bna_txq_entry *txqent;
-	bna_txq_wi_ctrl_flag_t	flags;
+	u16	flags;
 
 	if (unlikely
 	    (skb->len <= ETH_HLEN || skb->len > BFI_TX_MAX_DATA_PER_PKT)) {
@@ -2771,11 +2868,25 @@ bnad_set_mac_address(struct net_device *netdev, void *mac_addr)
 }
 
 static int
-bnad_change_mtu(struct net_device *netdev, int new_mtu)
+bnad_mtu_set(struct bnad *bnad, int mtu)
 {
-	int mtu, err = 0;
 	unsigned long flags;
 
+	init_completion(&bnad->bnad_completions.mtu_comp);
+
+	spin_lock_irqsave(&bnad->bna_lock, flags);
+	bna_enet_mtu_set(&bnad->bna.enet, mtu, bnad_cb_enet_mtu_set);
+	spin_unlock_irqrestore(&bnad->bna_lock, flags);
+
+	wait_for_completion(&bnad->bnad_completions.mtu_comp);
+
+	return bnad->bnad_completions.mtu_comp_status;
+}
+
+static int
+bnad_change_mtu(struct net_device *netdev, int new_mtu)
+{
+	int err, mtu = netdev->mtu;
 	struct bnad *bnad = netdev_priv(netdev);
 
 	if (new_mtu + ETH_HLEN < ETH_ZLEN || new_mtu > BNAD_JUMBO_MTU)
@@ -2785,11 +2896,10 @@ bnad_change_mtu(struct net_device *netdev, int new_mtu)
 
 	netdev->mtu = new_mtu;
 
-	mtu = ETH_HLEN + new_mtu + ETH_FCS_LEN;
-
-	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_port_mtu_set(&bnad->bna.port, mtu, NULL);
-	spin_unlock_irqrestore(&bnad->bna_lock, flags);
+	mtu = ETH_HLEN + VLAN_HLEN + new_mtu + ETH_FCS_LEN;
+	err = bnad_mtu_set(bnad, mtu);
+	if (err)
+		err = -EBUSY;
 
 	mutex_unlock(&bnad->conf_mutex);
 	return err;
@@ -2989,7 +3099,7 @@ bnad_uninit(struct bnad *bnad)
 
 /*
  * Initialize locks
-	a) Per device mutes used for serializing configuration
+	a) Per ioceth mutex used for serializing configuration
 	   changes from OS interface
 	b) spin lock used to protect bna state machine
  */
@@ -3136,17 +3246,17 @@ bnad_pci_probe(struct pci_dev *pdev,
 	bnad->stats.bna_stats = &bna->stats;
 
 	/* Set up timers */
-	setup_timer(&bnad->bna.device.ioc.ioc_timer, bnad_ioc_timeout,
+	setup_timer(&bnad->bna.ioceth.ioc.ioc_timer, bnad_ioc_timeout,
 				((unsigned long)bnad));
-	setup_timer(&bnad->bna.device.ioc.hb_timer, bnad_ioc_hb_check,
+	setup_timer(&bnad->bna.ioceth.ioc.hb_timer, bnad_ioc_hb_check,
 				((unsigned long)bnad));
-	setup_timer(&bnad->bna.device.ioc.iocpf_timer, bnad_iocpf_timeout,
+	setup_timer(&bnad->bna.ioceth.ioc.iocpf_timer, bnad_iocpf_timeout,
 				((unsigned long)bnad));
-	setup_timer(&bnad->bna.device.ioc.sem_timer, bnad_iocpf_sem_timeout,
+	setup_timer(&bnad->bna.ioceth.ioc.sem_timer, bnad_iocpf_sem_timeout,
 				((unsigned long)bnad));
 
 	/* Now start the timer before calling IOC */
-	mod_timer(&bnad->bna.device.ioc.iocpf_timer,
+	mod_timer(&bnad->bna.ioceth.ioc.iocpf_timer,
 		  jiffies + msecs_to_jiffies(BNA_IOC_TIMER_FREQ));
 
 	/*
@@ -3154,11 +3264,11 @@ bnad_pci_probe(struct pci_dev *pdev,
 	 * Don't care even if err != 0, bna state machine will
 	 * deal with it
 	 */
-	err = bnad_device_enable(bnad);
+	err = bnad_ioceth_enable(bnad);
 
 	/* Get the burnt-in mac */
 	spin_lock_irqsave(&bnad->bna_lock, flags);
-	bna_port_mac_get(&bna->port, &bnad->perm_addr);
+	bna_enet_perm_mac_get(&bna->enet, &bnad->perm_addr);
 	bnad_set_netdev_perm_addr(bnad);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
@@ -3175,10 +3285,10 @@ bnad_pci_probe(struct pci_dev *pdev,
 
 disable_device:
 	mutex_lock(&bnad->conf_mutex);
-	bnad_device_disable(bnad);
-	del_timer_sync(&bnad->bna.device.ioc.ioc_timer);
-	del_timer_sync(&bnad->bna.device.ioc.sem_timer);
-	del_timer_sync(&bnad->bna.device.ioc.hb_timer);
+	bnad_ioceth_disable(bnad);
+	del_timer_sync(&bnad->bna.ioceth.ioc.ioc_timer);
+	del_timer_sync(&bnad->bna.ioceth.ioc.sem_timer);
+	del_timer_sync(&bnad->bna.ioceth.ioc.hb_timer);
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 	bna_uninit(bna);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
@@ -3213,10 +3323,10 @@ bnad_pci_remove(struct pci_dev *pdev)
 	unregister_netdev(netdev);
 
 	mutex_lock(&bnad->conf_mutex);
-	bnad_device_disable(bnad);
-	del_timer_sync(&bnad->bna.device.ioc.ioc_timer);
-	del_timer_sync(&bnad->bna.device.ioc.sem_timer);
-	del_timer_sync(&bnad->bna.device.ioc.hb_timer);
+	bnad_ioceth_disable(bnad);
+	del_timer_sync(&bnad->bna.ioceth.ioc.ioc_timer);
+	del_timer_sync(&bnad->bna.ioceth.ioc.sem_timer);
+	del_timer_sync(&bnad->bna.ioceth.ioc.hb_timer);
 	spin_lock_irqsave(&bnad->bna_lock, flags);
 	bna_uninit(bna);
 	spin_unlock_irqrestore(&bnad->bna_lock, flags);
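
The bnad_poll_cq() hunk at the top of this file now picks the RCB from the
completion's rxq_id alone instead of caching qid0. bna_is_small_rxq() is
not defined in this patch; assuming the new queue numbering gives each
RxP's small/header queue the odd hardware id and the large/data queue the
even one, it would reduce to a one-bit test along these lines:

/* Presumed helper (bna.h) -- not part of this patch */
#define bna_is_small_rxq(_id)	((_id) & 0x1)
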
diff --git a/drivers/net/bna/bnad.h b/drivers/net/bna/bnad.h
index acf04b6..0928e68 100644
--- a/drivers/net/bna/bnad.h
+++ b/drivers/net/bna/bnad.h
@@ -37,12 +37,13 @@
 #define BNAD_TXQ_DEPTH		2048
 #define BNAD_RXQ_DEPTH		2048
 
-#define BNAD_MAX_TXS		1
+#define BNAD_MAX_TX		1
 #define BNAD_MAX_TXQ_PER_TX	8	/* 8 priority queues */
 #define BNAD_TXQ_NUM		1
 
-#define BNAD_MAX_RXS		1
-#define BNAD_MAX_RXPS_PER_RX	16
+#define BNAD_MAX_RX		1
+#define BNAD_MAX_RXP_PER_RX	16
+#define BNAD_MAX_RXQ_PER_RXP    2
 
 /*
  * Control structure pointed to ccb->ctrl, which
@@ -72,6 +73,8 @@ struct bnad_rx_ctrl {
 #define BNAD_STATS_TIMER_FREQ		1000	/* in msecs */
 #define BNAD_DIM_TIMER_FREQ		1000	/* in msecs */
 
+#define BNAD_IOCETH_TIMEOUT	     10000
+
 #define BNAD_MAX_Q_DEPTH		0x10000
 #define BNAD_MIN_Q_DEPTH		0x200
 
@@ -111,7 +114,8 @@ struct bnad_completion {
 	struct completion	tx_comp;
 	struct completion	rx_comp;
 	struct completion	stats_comp;
-	struct completion	port_comp;
+	struct completion	enet_comp;
+	struct completion	mtu_comp;
 
 	u8			ioc_comp_status;
 	u8			ucast_comp_status;
@@ -120,6 +124,7 @@ struct bnad_completion {
 	u8			rx_comp_status;
 	u8			stats_comp_status;
 	u8			port_comp_status;
+	u8			mtu_comp_status;
 };
 
 /* Tx Rx Control Stats */
@@ -141,6 +146,7 @@ struct bnad_drv_stats {
 	u64		netif_rx_dropped;
 
 	u64		link_toggle;
+	u64		cee_toggle;
 	u64		cee_up;
 
 	u64		rxp_info_alloc_failed;
@@ -175,7 +181,7 @@ struct bnad_tx_info {
 struct bnad_rx_info {
 	struct bna_rx *rx; /* 1:1 between rx_info & rx */
 
-	struct bnad_rx_ctrl rx_ctrl[BNAD_MAX_RXPS_PER_RX];
+	struct bnad_rx_ctrl rx_ctrl[BNAD_MAX_RXP_PER_RX];
 } ____cacheline_aligned;
 
 /* Unmap queues for Tx / Rx cleanup */
@@ -209,12 +215,16 @@ struct bnad_unmap_q {
 #define BNAD_RF_TX_SHUTDOWN_DELAYED	6
 #define BNAD_RF_RX_SHUTDOWN_DELAYED	7
 
+/* Define for Fast Path flags */
+/* Defined as bit positions */
+#define BNAD_FP_IN_RX_PATH	      0
+
 struct bnad {
 	struct net_device	*netdev;
 
 	/* Data path */
-	struct bnad_tx_info tx_info[BNAD_MAX_TXS];
-	struct bnad_rx_info rx_info[BNAD_MAX_RXS];
+	struct bnad_tx_info tx_info[BNAD_MAX_TX];
+	struct bnad_rx_info rx_info[BNAD_MAX_RX];
 
 	struct vlan_group	*vlan_grp;
 	/*
@@ -234,8 +244,8 @@ struct bnad {
 	u8			tx_coalescing_timeo;
 	u8			rx_coalescing_timeo;
 
-	struct bna_rx_config rx_config[BNAD_MAX_RXS];
-	struct bna_tx_config tx_config[BNAD_MAX_TXS];
+	struct bna_rx_config rx_config[BNAD_MAX_RX];
+	struct bna_tx_config tx_config[BNAD_MAX_TX];
 
 	void __iomem		*bar0;	/* BAR0 address */
 
@@ -261,8 +271,8 @@ struct bnad {
 
 	/* Control path resources, memory & irq */
 	struct bna_res_info res_info[BNA_RES_T_MAX];
-	struct bnad_tx_res_info tx_res_info[BNAD_MAX_TXS];
-	struct bnad_rx_res_info rx_res_info[BNAD_MAX_RXS];
+	struct bnad_tx_res_info tx_res_info[BNAD_MAX_TX];
+	struct bnad_rx_res_info rx_res_info[BNAD_MAX_RX];
 
 	struct bnad_completion bnad_completions;
 
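
BNAD_FP_IN_RX_PATH pairs with the busy-wait added to bnad_cb_rx_cleanup()
in bnad.c above: cleanup must not tear down Rx resources while a fast-path
poll is still walking the CQ. The NAPI poll hunk is not in this excerpt;
presumably it brackets its work roughly like this (names and placement are
assumptions):

	/* In the Rx poll path, around completion processing */
	set_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags);
	rcvd = bnad_poll_cq(bnad, rx_ctrl->ccb, budget);
	clear_bit(BNAD_FP_IN_RX_PATH, &rx_ctrl->flags);
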
diff --git a/drivers/net/bna/bnad_ethtool.c b/drivers/net/bna/bnad_ethtool.c
index fea07f1..1c19dce 100644
--- a/drivers/net/bna/bnad_ethtool.c
+++ b/drivers/net/bna/bnad_ethtool.c
@@ -29,14 +29,14 @@
 
 #define BNAD_NUM_TXF_COUNTERS 12
 #define BNAD_NUM_RXF_COUNTERS 10
-#define BNAD_NUM_CQ_COUNTERS 3
+#define BNAD_NUM_CQ_COUNTERS (3 + 5)
 #define BNAD_NUM_RXQ_COUNTERS 6
 #define BNAD_NUM_TXQ_COUNTERS 5
 
 #define BNAD_ETHTOOL_STATS_NUM						\
 	(sizeof(struct rtnl_link_stats64) / sizeof(u64) +	\
 	sizeof(struct bnad_drv_stats) / sizeof(u64) +		\
-	offsetof(struct bfi_ll_stats, rxf_stats[0]) / sizeof(u64))
+	offsetof(struct bfi_enet_stats, rxf_stats[0]) / sizeof(u64))
 
 static char *bnad_net_stats_strings[BNAD_ETHTOOL_STATS_NUM] = {
 	"rx_packets",
@@ -277,7 +277,7 @@ bnad_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
 	ioc_attr = kzalloc(sizeof(*ioc_attr), GFP_KERNEL);
 	if (ioc_attr) {
 		spin_lock_irqsave(&bnad->bna_lock, flags);
-		bfa_nw_ioc_get_attr(&bnad->bna.device.ioc, ioc_attr);
+		bfa_nw_ioc_get_attr(&bnad->bna.ioceth.ioc, ioc_attr);
 		spin_unlock_irqrestore(&bnad->bna_lock, flags);
 
 		strncpy(drvinfo->fw_version, ioc_attr->adapter_attr.fw_ver,
@@ -288,323 +288,6 @@ bnad_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
 	strncpy(drvinfo->bus_info, pci_name(bnad->pcidev), ETHTOOL_BUSINFO_LEN);
 }
 
-static int
-get_regs(struct bnad *bnad, u32 * regs)
-{
-	int num = 0, i;
-	u32 reg_addr;
-	unsigned long flags;
-
-#define BNAD_GET_REG(addr)					\
-do {								\
-	if (regs)						\
-		regs[num++] = readl(bnad->bar0 + (addr));	\
-	else							\
-		num++;						\
-} while (0)
-
-	spin_lock_irqsave(&bnad->bna_lock, flags);
-
-	/* DMA Block Internal Registers */
-	BNAD_GET_REG(DMA_CTRL_REG0);
-	BNAD_GET_REG(DMA_CTRL_REG1);
-	BNAD_GET_REG(DMA_ERR_INT_STATUS);
-	BNAD_GET_REG(DMA_ERR_INT_ENABLE);
-	BNAD_GET_REG(DMA_ERR_INT_STATUS_SET);
-
-	/* APP Block Register Address Offset from BAR0 */
-	BNAD_GET_REG(HOSTFN0_INT_STATUS);
-	BNAD_GET_REG(HOSTFN0_INT_MASK);
-	BNAD_GET_REG(HOST_PAGE_NUM_FN0);
-	BNAD_GET_REG(HOST_MSIX_ERR_INDEX_FN0);
-	BNAD_GET_REG(FN0_PCIE_ERR_REG);
-	BNAD_GET_REG(FN0_ERR_TYPE_STATUS_REG);
-	BNAD_GET_REG(FN0_ERR_TYPE_MSK_STATUS_REG);
-
-	BNAD_GET_REG(HOSTFN1_INT_STATUS);
-	BNAD_GET_REG(HOSTFN1_INT_MASK);
-	BNAD_GET_REG(HOST_PAGE_NUM_FN1);
-	BNAD_GET_REG(HOST_MSIX_ERR_INDEX_FN1);
-	BNAD_GET_REG(FN1_PCIE_ERR_REG);
-	BNAD_GET_REG(FN1_ERR_TYPE_STATUS_REG);
-	BNAD_GET_REG(FN1_ERR_TYPE_MSK_STATUS_REG);
-
-	BNAD_GET_REG(PCIE_MISC_REG);
-
-	BNAD_GET_REG(HOST_SEM0_INFO_REG);
-	BNAD_GET_REG(HOST_SEM1_INFO_REG);
-	BNAD_GET_REG(HOST_SEM2_INFO_REG);
-	BNAD_GET_REG(HOST_SEM3_INFO_REG);
-
-	BNAD_GET_REG(TEMPSENSE_CNTL_REG);
-	BNAD_GET_REG(TEMPSENSE_STAT_REG);
-
-	BNAD_GET_REG(APP_LOCAL_ERR_STAT);
-	BNAD_GET_REG(APP_LOCAL_ERR_MSK);
-
-	BNAD_GET_REG(PCIE_LNK_ERR_STAT);
-	BNAD_GET_REG(PCIE_LNK_ERR_MSK);
-
-	BNAD_GET_REG(FCOE_FIP_ETH_TYPE);
-	BNAD_GET_REG(RESV_ETH_TYPE);
-
-	BNAD_GET_REG(HOSTFN2_INT_STATUS);
-	BNAD_GET_REG(HOSTFN2_INT_MASK);
-	BNAD_GET_REG(HOST_PAGE_NUM_FN2);
-	BNAD_GET_REG(HOST_MSIX_ERR_INDEX_FN2);
-	BNAD_GET_REG(FN2_PCIE_ERR_REG);
-	BNAD_GET_REG(FN2_ERR_TYPE_STATUS_REG);
-	BNAD_GET_REG(FN2_ERR_TYPE_MSK_STATUS_REG);
-
-	BNAD_GET_REG(HOSTFN3_INT_STATUS);
-	BNAD_GET_REG(HOSTFN3_INT_MASK);
-	BNAD_GET_REG(HOST_PAGE_NUM_FN3);
-	BNAD_GET_REG(HOST_MSIX_ERR_INDEX_FN3);
-	BNAD_GET_REG(FN3_PCIE_ERR_REG);
-	BNAD_GET_REG(FN3_ERR_TYPE_STATUS_REG);
-	BNAD_GET_REG(FN3_ERR_TYPE_MSK_STATUS_REG);
-
-	/* Host Command Status Registers */
-	reg_addr = HOST_CMDSTS0_CLR_REG;
-	for (i = 0; i < 16; i++) {
-		BNAD_GET_REG(reg_addr);
-		BNAD_GET_REG(reg_addr + 4);
-		BNAD_GET_REG(reg_addr + 8);
-		reg_addr += 0x10;
-	}
-
-	/* Function ID register */
-	BNAD_GET_REG(FNC_ID_REG);
-
-	/* Function personality register */
-	BNAD_GET_REG(FNC_PERS_REG);
-
-	/* Operation mode register */
-	BNAD_GET_REG(OP_MODE);
-
-	/* LPU0 Registers */
-	BNAD_GET_REG(LPU0_MBOX_CTL_REG);
-	BNAD_GET_REG(LPU0_MBOX_CMD_REG);
-	BNAD_GET_REG(LPU0_MBOX_LINK_0REG);
-	BNAD_GET_REG(LPU1_MBOX_LINK_0REG);
-	BNAD_GET_REG(LPU0_MBOX_STATUS_0REG);
-	BNAD_GET_REG(LPU1_MBOX_STATUS_0REG);
-	BNAD_GET_REG(LPU0_ERR_STATUS_REG);
-	BNAD_GET_REG(LPU0_ERR_SET_REG);
-
-	/* LPU1 Registers */
-	BNAD_GET_REG(LPU1_MBOX_CTL_REG);
-	BNAD_GET_REG(LPU1_MBOX_CMD_REG);
-	BNAD_GET_REG(LPU0_MBOX_LINK_1REG);
-	BNAD_GET_REG(LPU1_MBOX_LINK_1REG);
-	BNAD_GET_REG(LPU0_MBOX_STATUS_1REG);
-	BNAD_GET_REG(LPU1_MBOX_STATUS_1REG);
-	BNAD_GET_REG(LPU1_ERR_STATUS_REG);
-	BNAD_GET_REG(LPU1_ERR_SET_REG);
-
-	/* PSS Registers */
-	BNAD_GET_REG(PSS_CTL_REG);
-	BNAD_GET_REG(PSS_ERR_STATUS_REG);
-	BNAD_GET_REG(ERR_STATUS_SET);
-	BNAD_GET_REG(PSS_RAM_ERR_STATUS_REG);
-
-	/* Catapult CPQ Registers */
-	BNAD_GET_REG(HOSTFN0_LPU0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(HOSTFN0_LPU1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN0_MBOX0_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN0_LPU0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(HOSTFN0_LPU1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN0_MBOX1_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN1_LPU0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(HOSTFN1_LPU1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN1_MBOX0_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN1_LPU0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(HOSTFN1_LPU1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN1_MBOX1_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN2_LPU0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(HOSTFN2_LPU1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN2_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN2_MBOX0_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN2_LPU0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(HOSTFN2_LPU1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN2_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN2_MBOX1_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN3_LPU0_MBOX0_CMD_STAT);
-	BNAD_GET_REG(HOSTFN3_LPU1_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN3_MBOX0_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN3_MBOX0_CMD_STAT);
-
-	BNAD_GET_REG(HOSTFN3_LPU0_MBOX1_CMD_STAT);
-	BNAD_GET_REG(HOSTFN3_LPU1_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU0_HOSTFN3_MBOX1_CMD_STAT);
-	BNAD_GET_REG(LPU1_HOSTFN3_MBOX1_CMD_STAT);
-
-	/* Host Function Force Parity Error Registers */
-	BNAD_GET_REG(HOSTFN0_LPU_FORCE_PERR);
-	BNAD_GET_REG(HOSTFN1_LPU_FORCE_PERR);
-	BNAD_GET_REG(HOSTFN2_LPU_FORCE_PERR);
-	BNAD_GET_REG(HOSTFN3_LPU_FORCE_PERR);
-
-	/* LL Port[0|1] Halt Mask Registers */
-	BNAD_GET_REG(LL_HALT_MSK_P0);
-	BNAD_GET_REG(LL_HALT_MSK_P1);
-
-	/* LL Port[0|1] Error Mask Registers */
-	BNAD_GET_REG(LL_ERR_MSK_P0);
-	BNAD_GET_REG(LL_ERR_MSK_P1);
-
-	/* EMC FLI Registers */
-	BNAD_GET_REG(FLI_CMD_REG);
-	BNAD_GET_REG(FLI_ADDR_REG);
-	BNAD_GET_REG(FLI_CTL_REG);
-	BNAD_GET_REG(FLI_WRDATA_REG);
-	BNAD_GET_REG(FLI_RDDATA_REG);
-	BNAD_GET_REG(FLI_DEV_STATUS_REG);
-	BNAD_GET_REG(FLI_SIG_WD_REG);
-
-	BNAD_GET_REG(FLI_DEV_VENDOR_REG);
-	BNAD_GET_REG(FLI_ERR_STATUS_REG);
-
-	/* RxAdm 0 Registers */
-	BNAD_GET_REG(RAD0_CTL_REG);
-	BNAD_GET_REG(RAD0_PE_PARM_REG);
-	BNAD_GET_REG(RAD0_BCN_REG);
-	BNAD_GET_REG(RAD0_DEFAULT_REG);
-	BNAD_GET_REG(RAD0_PROMISC_REG);
-	BNAD_GET_REG(RAD0_BCNQ_REG);
-	BNAD_GET_REG(RAD0_DEFAULTQ_REG);
-
-	BNAD_GET_REG(RAD0_ERR_STS);
-	BNAD_GET_REG(RAD0_SET_ERR_STS);
-	BNAD_GET_REG(RAD0_ERR_INT_EN);
-	BNAD_GET_REG(RAD0_FIRST_ERR);
-	BNAD_GET_REG(RAD0_FORCE_ERR);
-
-	BNAD_GET_REG(RAD0_MAC_MAN_1H);
-	BNAD_GET_REG(RAD0_MAC_MAN_1L);
-	BNAD_GET_REG(RAD0_MAC_MAN_2H);
-	BNAD_GET_REG(RAD0_MAC_MAN_2L);
-	BNAD_GET_REG(RAD0_MAC_MAN_3H);
-	BNAD_GET_REG(RAD0_MAC_MAN_3L);
-	BNAD_GET_REG(RAD0_MAC_MAN_4H);
-	BNAD_GET_REG(RAD0_MAC_MAN_4L);
-
-	BNAD_GET_REG(RAD0_LAST4_IP);
-
-	/* RxAdm 1 Registers */
-	BNAD_GET_REG(RAD1_CTL_REG);
-	BNAD_GET_REG(RAD1_PE_PARM_REG);
-	BNAD_GET_REG(RAD1_BCN_REG);
-	BNAD_GET_REG(RAD1_DEFAULT_REG);
-	BNAD_GET_REG(RAD1_PROMISC_REG);
-	BNAD_GET_REG(RAD1_BCNQ_REG);
-	BNAD_GET_REG(RAD1_DEFAULTQ_REG);
-
-	BNAD_GET_REG(RAD1_ERR_STS);
-	BNAD_GET_REG(RAD1_SET_ERR_STS);
-	BNAD_GET_REG(RAD1_ERR_INT_EN);
-
-	/* TxA0 Registers */
-	BNAD_GET_REG(TXA0_CTRL_REG);
-	/* TxA0 TSO Sequence # Registers (RO) */
-	for (i = 0; i < 8; i++) {
-		BNAD_GET_REG(TXA0_TSO_TCP_SEQ_REG(i));
-		BNAD_GET_REG(TXA0_TSO_IP_INFO_REG(i));
-	}
-
-	/* TxA1 Registers */
-	BNAD_GET_REG(TXA1_CTRL_REG);
-	/* TxA1 TSO Sequence # Registers (RO) */
-	for (i = 0; i < 8; i++) {
-		BNAD_GET_REG(TXA1_TSO_TCP_SEQ_REG(i));
-		BNAD_GET_REG(TXA1_TSO_IP_INFO_REG(i));
-	}
-
-	/* RxA Registers */
-	BNAD_GET_REG(RXA0_CTL_REG);
-	BNAD_GET_REG(RXA1_CTL_REG);
-
-	/* PLB0 Registers */
-	BNAD_GET_REG(PLB0_ECM_TIMER_REG);
-	BNAD_GET_REG(PLB0_RL_CTL);
-	for (i = 0; i < 8; i++)
-		BNAD_GET_REG(PLB0_RL_MAX_BC(i));
-	BNAD_GET_REG(PLB0_RL_TU_PRIO);
-	for (i = 0; i < 8; i++)
-		BNAD_GET_REG(PLB0_RL_BYTE_CNT(i));
-	BNAD_GET_REG(PLB0_RL_MIN_REG);
-	BNAD_GET_REG(PLB0_RL_MAX_REG);
-	BNAD_GET_REG(PLB0_EMS_ADD_REG);
-
-	/* PLB1 Registers */
-	BNAD_GET_REG(PLB1_ECM_TIMER_REG);
-	BNAD_GET_REG(PLB1_RL_CTL);
-	for (i = 0; i < 8; i++)
-		BNAD_GET_REG(PLB1_RL_MAX_BC(i));
-	BNAD_GET_REG(PLB1_RL_TU_PRIO);
-	for (i = 0; i < 8; i++)
-		BNAD_GET_REG(PLB1_RL_BYTE_CNT(i));
-	BNAD_GET_REG(PLB1_RL_MIN_REG);
-	BNAD_GET_REG(PLB1_RL_MAX_REG);
-	BNAD_GET_REG(PLB1_EMS_ADD_REG);
-
-	/* HQM Control Register */
-	BNAD_GET_REG(HQM0_CTL_REG);
-	BNAD_GET_REG(HQM0_RXQ_STOP_SEM);
-	BNAD_GET_REG(HQM0_TXQ_STOP_SEM);
-	BNAD_GET_REG(HQM1_CTL_REG);
-	BNAD_GET_REG(HQM1_RXQ_STOP_SEM);
-	BNAD_GET_REG(HQM1_TXQ_STOP_SEM);
-
-	/* LUT Registers */
-	BNAD_GET_REG(LUT0_ERR_STS);
-	BNAD_GET_REG(LUT0_SET_ERR_STS);
-	BNAD_GET_REG(LUT1_ERR_STS);
-	BNAD_GET_REG(LUT1_SET_ERR_STS);
-
-	/* TRC Registers */
-	BNAD_GET_REG(TRC_CTL_REG);
-	BNAD_GET_REG(TRC_MODS_REG);
-	BNAD_GET_REG(TRC_TRGC_REG);
-	BNAD_GET_REG(TRC_CNT1_REG);
-	BNAD_GET_REG(TRC_CNT2_REG);
-	BNAD_GET_REG(TRC_NXTS_REG);
-	BNAD_GET_REG(TRC_DIRR_REG);
-	for (i = 0; i < 10; i++)
-		BNAD_GET_REG(TRC_TRGM_REG(i));
-	for (i = 0; i < 10; i++)
-		BNAD_GET_REG(TRC_NXTM_REG(i));
-	for (i = 0; i < 10; i++)
-		BNAD_GET_REG(TRC_STRM_REG(i));
-
-	spin_unlock_irqrestore(&bnad->bna_lock, flags);
-#undef BNAD_GET_REG
-	return num;
-}
-static int
-bnad_get_regs_len(struct net_device *netdev)
-{
-	int ret = get_regs(netdev_priv(netdev), NULL) * sizeof(u32);
-	return ret;
-}
-
-static void
-bnad_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *buf)
-{
-	memset(buf, 0, bnad_get_regs_len(netdev));
-	get_regs(netdev_priv(netdev), buf);
-}
-
 static void
 bnad_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wolinfo)
 {
@@ -779,8 +462,8 @@ bnad_get_pauseparam(struct net_device *netdev,
 	struct bnad *bnad = netdev_priv(netdev);
 
 	pauseparam->autoneg = 0;
-	pauseparam->rx_pause = bnad->bna.port.pause_config.rx_pause;
-	pauseparam->tx_pause = bnad->bna.port.pause_config.tx_pause;
+	pauseparam->rx_pause = bnad->bna.enet.pause_config.rx_pause;
+	pauseparam->tx_pause = bnad->bna.enet.pause_config.tx_pause;
 }
 
 static int
@@ -795,12 +478,12 @@ bnad_set_pauseparam(struct net_device *netdev,
 		return -EINVAL;
 
 	mutex_lock(&bnad->conf_mutex);
-	if (pauseparam->rx_pause != bnad->bna.port.pause_config.rx_pause ||
-	    pauseparam->tx_pause != bnad->bna.port.pause_config.tx_pause) {
+	if (pauseparam->rx_pause != bnad->bna.enet.pause_config.rx_pause ||
+	    pauseparam->tx_pause != bnad->bna.enet.pause_config.tx_pause) {
 		pause_config.rx_pause = pauseparam->rx_pause;
 		pause_config.tx_pause = pauseparam->tx_pause;
 		spin_lock_irqsave(&bnad->bna_lock, flags);
-		bna_port_pause_config(&bnad->bna.port, &pause_config, NULL);
+		bna_enet_pause_config(&bnad->bna.enet, &pause_config, NULL);
 		spin_unlock_irqrestore(&bnad->bna_lock, flags);
 	}
 	mutex_unlock(&bnad->conf_mutex);
@@ -812,7 +495,7 @@ bnad_get_strings(struct net_device *netdev, u32 stringset, u8 * string)
 {
 	struct bnad *bnad = netdev_priv(netdev);
 	int i, j, q_num;
-	u64 bmap;
+	u32 bmap;
 
 	mutex_lock(&bnad->conf_mutex);
 
@@ -825,9 +508,8 @@ bnad_get_strings(struct net_device *netdev, u32 stringset, u8 * string)
 			       ETH_GSTRING_LEN);
 			string += ETH_GSTRING_LEN;
 		}
-		bmap = (u64)bnad->bna.tx_mod.txf_bmap[0] |
-			((u64)bnad->bna.tx_mod.txf_bmap[1] << 32);
-		for (i = 0; bmap && (i < BFI_LL_TXF_ID_MAX); i++) {
+		bmap = bna_tx_rid_mask(&bnad->bna);
+		for (i = 0; bmap; i++) {
 			if (bmap & 1) {
 				sprintf(string, "txf%d_ucast_octets", i);
 				string += ETH_GSTRING_LEN;
@@ -857,9 +539,8 @@ bnad_get_strings(struct net_device *netdev, u32 stringset, u8 * string)
 			bmap >>= 1;
 		}
 
-		bmap = (u64)bnad->bna.rx_mod.rxf_bmap[0] |
-			((u64)bnad->bna.rx_mod.rxf_bmap[1] << 32);
-		for (i = 0; bmap && (i < BFI_LL_RXF_ID_MAX); i++) {
+		bmap = bna_rx_rid_mask(&bnad->bna);
+		for (i = 0; bmap; i++) {
 			if (bmap & 1) {
 				sprintf(string, "rxf%d_ucast_octets", i);
 				string += ETH_GSTRING_LEN;
@@ -980,18 +661,16 @@ bnad_get_stats_count_locked(struct net_device *netdev)
 {
 	struct bnad *bnad = netdev_priv(netdev);
 	int i, j, count, rxf_active_num = 0, txf_active_num = 0;
-	u64 bmap;
+	u32 bmap;
 
-	bmap = (u64)bnad->bna.tx_mod.txf_bmap[0] |
-			((u64)bnad->bna.tx_mod.txf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_TXF_ID_MAX); i++) {
+	bmap = bna_tx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1)
 			txf_active_num++;
 		bmap >>= 1;
 	}
-	bmap = (u64)bnad->bna.rx_mod.rxf_bmap[0] |
-			((u64)bnad->bna.rx_mod.rxf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_RXF_ID_MAX); i++) {
+	bmap = bna_rx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1)
 			rxf_active_num++;
 		bmap >>= 1;
@@ -1104,7 +783,7 @@ bnad_get_ethtool_stats(struct net_device *netdev, struct ethtool_stats *stats,
 	unsigned long flags;
 	struct rtnl_link_stats64 *net_stats64;
 	u64 *stats64;
-	u64 bmap;
+	u32 bmap;
 
 	mutex_lock(&bnad->conf_mutex);
 	if (bnad_get_stats_count_locked(netdev) != stats->n_stats) {
@@ -1135,20 +814,20 @@ bnad_get_ethtool_stats(struct net_device *netdev, struct ethtool_stats *stats,
 		buf[bi++] = stats64[i];
 
 	/* Fill hardware stats excluding the rxf/txf into ethtool bufs */
-	stats64 = (u64 *) bnad->stats.bna_stats->hw_stats;
+	stats64 = (u64 *) &bnad->stats.bna_stats->hw_stats;
 	for (i = 0;
-	     i < offsetof(struct bfi_ll_stats, rxf_stats[0]) / sizeof(u64);
+	     i < offsetof(struct bfi_enet_stats, rxf_stats[0]) /
+		sizeof(u64);
 	     i++)
 		buf[bi++] = stats64[i];
 
 	/* Fill txf stats into ethtool buffers */
-	bmap = (u64)bnad->bna.tx_mod.txf_bmap[0] |
-			((u64)bnad->bna.tx_mod.txf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_TXF_ID_MAX); i++) {
+	bmap = bna_tx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1) {
 			stats64 = (u64 *)&bnad->stats.bna_stats->
-						hw_stats->txf_stats[i];
-			for (j = 0; j < sizeof(struct bfi_ll_stats_txf) /
+						hw_stats.txf_stats[i];
+			for (j = 0; j < sizeof(struct bfi_enet_stats_txf) /
 					sizeof(u64); j++)
 				buf[bi++] = stats64[j];
 		}
@@ -1156,13 +835,12 @@ bnad_get_ethtool_stats(struct net_device *netdev, struct ethtool_stats *stats,
 	}
 
 	/*  Fill rxf stats into ethtool buffers */
-	bmap = (u64)bnad->bna.rx_mod.rxf_bmap[0] |
-			((u64)bnad->bna.rx_mod.rxf_bmap[1] << 32);
-	for (i = 0; bmap && (i < BFI_LL_RXF_ID_MAX); i++) {
+	bmap = bna_rx_rid_mask(&bnad->bna);
+	for (i = 0; bmap; i++) {
 		if (bmap & 1) {
 			stats64 = (u64 *)&bnad->stats.bna_stats->
-						hw_stats->rxf_stats[i];
-			for (j = 0; j < sizeof(struct bfi_ll_stats_rxf) /
+						hw_stats.rxf_stats[i];
+			for (j = 0; j < sizeof(struct bfi_enet_stats_rxf) /
 					sizeof(u64); j++)
 				buf[bi++] = stats64[j];
 		}
@@ -1192,8 +870,6 @@ static struct ethtool_ops bnad_ethtool_ops = {
 	.get_settings = bnad_get_settings,
 	.set_settings = bnad_set_settings,
 	.get_drvinfo = bnad_get_drvinfo,
-	.get_regs_len = bnad_get_regs_len,
-	.get_regs = bnad_get_regs,
 	.get_wol = bnad_get_wol,
 	.get_link = ethtool_op_get_link,
 	.get_coalesce = bnad_get_coalesce,
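
Throughout this file the two-word txf_bmap/rxf_bmap pairs give way to a
single 32-bit resource-id mask, which is why bmap shrinks from u64 to u32.
bna_tx_rid_mask()/bna_rx_rid_mask() are not defined in this patch; they
are presumably trivial accessors over the new tx_mod/rx_mod fields:

/* Presumed accessors (bna.h) -- not part of this patch */
#define bna_tx_rid_mask(_bna)	((_bna)->tx_mod.rid_mask)
#define bna_rx_rid_mask(_bna)	((_bna)->rx_mod.rid_mask)

The rewritten loops ("for (i = 0; bmap; i++)") stop once the mask runs out
of set bits, so the old BFI_LL_TXF_ID_MAX/BFI_LL_RXF_ID_MAX bounds are no
longer needed.
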
diff --git a/drivers/net/bna/cna.h b/drivers/net/bna/cna.h
index a679e03..50fce15 100644
--- a/drivers/net/bna/cna.h
+++ b/drivers/net/bna/cna.h
@@ -40,7 +40,7 @@
 
 extern char bfa_version[];
 
-#define	CNA_FW_FILE_CT	"ctfw_cna.bin"
+#define	CNA_FW_FILE_CT	"ctfw.bin"
 #define FC_SYMNAME_MAX	256	/*!< max name server symbolic name size */
 
 #pragma pack(1)
@@ -77,4 +77,33 @@ typedef struct mac { u8 mac[MAC_ADDRLEN]; } mac_t;
 	}								\
 }
 
+/*
+ * bfa_q_deq_tail - dequeue an element from tail of the queue
+ */
+#define bfa_q_deq_tail(_q, _qe) {					\
+	if (!list_empty(_q)) {						\
+		*((struct list_head **) (_qe)) = bfa_q_prev(_q);	\
+		bfa_q_next(bfa_q_prev(*((struct list_head **) _qe))) =  \
+						(struct list_head *) (_q); \
+		bfa_q_prev(_q) = bfa_q_prev(*(struct list_head **) _qe);\
+		bfa_q_qe_init(*((struct list_head **) _qe));		\
+	} else {							\
+		*((struct list_head **) (_qe)) = (struct list_head *) NULL; \
+	}								\
+}
+
+/*
+ * bfa_q_enq_head - enqueue an element at the head of the queue
+ */
+#define bfa_q_enq_head(_q, _qe) {					\
+	if (!((bfa_q_next(_qe) == NULL) && (bfa_q_prev(_qe) == NULL)))	\
+		pr_err("Assertion failure: %s:%d: %d",			\
+			__FILE__, __LINE__,				\
+		(bfa_q_next(_qe) == NULL) && (bfa_q_prev(_qe) == NULL));\
+	bfa_q_next(_qe) = bfa_q_next(_q);				\
+	bfa_q_prev(_qe) = (struct list_head *) (_q);			\
+	bfa_q_prev(bfa_q_next(_q)) = (struct list_head *) (_qe);	\
+	bfa_q_next(_q) = (struct list_head *) (_qe);			\
+}
+
 #endif /* __CNA_H__ */
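
Both helpers added above are open-coded circular doubly-linked-list
operations on struct list_head: bfa_q_deq_tail() unlinks the node at
q->prev and NULLs its link pointers via bfa_q_qe_init(), while
bfa_q_enq_head() splices a node in directly behind the head. A
stand-alone sketch of the tail dequeue, assuming only a minimal
list_head definition rather than the kernel headers:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void enq_tail(struct list_head *qe, struct list_head *q)
{
	qe->prev = q->prev;
	qe->next = q;
	q->prev->next = qe;
	q->prev = qe;
}

/* what bfa_q_deq_tail() does: take q->prev, splice it out, NULL its links */
static struct list_head *deq_tail(struct list_head *q)
{
	struct list_head *qe;

	if (q->next == q)		/* list_empty(q) */
		return NULL;
	qe = q->prev;
	qe->prev->next = q;		/* new tail points back at head */
	q->prev = qe->prev;
	qe->next = qe->prev = NULL;	/* bfa_q_qe_init(qe) */
	return qe;
}

int main(void)
{
	struct list_head q, a, b;

	list_init(&q);
	enq_tail(&a, &q);
	enq_tail(&b, &q);
	printf("%d %d %d\n", deq_tail(&q) == &b, deq_tail(&q) == &a,
	       deq_tail(&q) == NULL);
	return 0;
}

The pr_err() guard in bfa_q_enq_head() is the matching invariant in
assertion form: an element may only be enqueued while both of its link
pointers are still NULL.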
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 0570930..09afeab 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2223,6 +2223,7 @@
 #define PCI_DEVICE_ID_BROCADE_CT	0x0014
 #define PCI_DEVICE_ID_BROCADE_FC_8G1P	0x0017
 #define PCI_DEVICE_ID_BROCADE_CT_FC	0x0021
+#define PCI_DEVICE_ID_BROCADE_CT2	0x0022
 
 #define PCI_VENDOR_ID_SIBYTE		0x166d
 #define PCI_DEVICE_ID_BCM1250_PCI	0x0001
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 10/45] bna: Remove Obsolete Files
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (8 preceding siblings ...)
  2011-07-18  8:19 ` [PATCH 09/45] bna: ENET and Tx Rx Redesign Update Rasesh Mody
@ 2011-07-18  8:19 ` Rasesh Mody
  2011-07-18  8:26 ` [PATCH 00/45] Update bna driver version to 3.0.2.0 David Miller
  10 siblings, 0 replies; 16+ messages in thread
From: Rasesh Mody @ 2011-07-18  8:19 UTC (permalink / raw)
  To: davem, netdev; +Cc: adapter_linux_open_src_team, dradovan, Rasesh Mody

Change details:
 - Removed bfi_ctreg.h, bna_ctrl.c and bfi_ll.h, obsoleted by the ENET, MSGQ
   and TXRX changes for the new FW driver interface.

Signed-off-by: Rasesh Mody <rmody@brocade.com>
---
 drivers/net/bna/bfi_ctreg.h |  646 ---------
 drivers/net/bna/bfi_ll.h    |  425 ------
 drivers/net/bna/bna_ctrl.c  | 3079 -------------------------------------------
 3 files changed, 0 insertions(+), 4150 deletions(-)
 delete mode 100644 drivers/net/bna/bfi_ctreg.h
 delete mode 100644 drivers/net/bna/bfi_ll.h
 delete mode 100644 drivers/net/bna/bna_ctrl.c

diff --git a/drivers/net/bna/bfi_ctreg.h b/drivers/net/bna/bfi_ctreg.h
deleted file mode 100644
index 5130d79..0000000
--- a/drivers/net/bna/bfi_ctreg.h
+++ /dev/null
@@ -1,646 +0,0 @@
-/*
- * Linux network driver for Brocade Converged Network Adapter.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License (GPL) Version 2 as
- * published by the Free Software Foundation
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- */
-/*
- * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
- * All rights reserved
- * www.brocade.com
- */
-
-/*
- * bfi_ctreg.h catapult host block register definitions
- *
- * !!! Do not edit. Auto generated. !!!
- */
-
-#ifndef __BFI_CTREG_H__
-#define __BFI_CTREG_H__
-
-#define HOSTFN0_LPU_MBOX0_0		0x00019200
-#define HOSTFN1_LPU_MBOX0_8		0x00019260
-#define LPU_HOSTFN0_MBOX0_0		0x00019280
-#define LPU_HOSTFN1_MBOX0_8		0x000192e0
-#define HOSTFN2_LPU_MBOX0_0		0x00019400
-#define HOSTFN3_LPU_MBOX0_8		0x00019460
-#define LPU_HOSTFN2_MBOX0_0		0x00019480
-#define LPU_HOSTFN3_MBOX0_8		0x000194e0
-#define HOSTFN0_INT_STATUS		0x00014000
-#define __HOSTFN0_HALT_OCCURRED		0x01000000
-#define __HOSTFN0_INT_STATUS_LVL_MK	0x00f00000
-#define __HOSTFN0_INT_STATUS_LVL_SH	20
-#define __HOSTFN0_INT_STATUS_LVL(_v)	((_v) << __HOSTFN0_INT_STATUS_LVL_SH)
-#define __HOSTFN0_INT_STATUS_P_MK	0x000f0000
-#define __HOSTFN0_INT_STATUS_P_SH	16
-#define __HOSTFN0_INT_STATUS_P(_v)	((_v) << __HOSTFN0_INT_STATUS_P_SH)
-#define __HOSTFN0_INT_STATUS_F		0x0000ffff
-#define HOSTFN0_INT_MSK			0x00014004
-#define HOST_PAGE_NUM_FN0		0x00014008
-#define __HOST_PAGE_NUM_FN		0x000001ff
-#define HOST_MSIX_ERR_INDEX_FN0		0x0001400c
-#define __MSIX_ERR_INDEX_FN		0x000001ff
-#define HOSTFN1_INT_STATUS		0x00014100
-#define __HOSTFN1_HALT_OCCURRED		0x01000000
-#define __HOSTFN1_INT_STATUS_LVL_MK	0x00f00000
-#define __HOSTFN1_INT_STATUS_LVL_SH	20
-#define __HOSTFN1_INT_STATUS_LVL(_v)	((_v) << __HOSTFN1_INT_STATUS_LVL_SH)
-#define __HOSTFN1_INT_STATUS_P_MK	0x000f0000
-#define __HOSTFN1_INT_STATUS_P_SH	16
-#define __HOSTFN1_INT_STATUS_P(_v)	((_v) << __HOSTFN1_INT_STATUS_P_SH)
-#define __HOSTFN1_INT_STATUS_F		0x0000ffff
-#define HOSTFN1_INT_MSK			0x00014104
-#define HOST_PAGE_NUM_FN1		0x00014108
-#define HOST_MSIX_ERR_INDEX_FN1		0x0001410c
-#define APP_PLL_425_CTL_REG		0x00014204
-#define __P_425_PLL_LOCK		0x80000000
-#define __APP_PLL_425_SRAM_USE_100MHZ	0x00100000
-#define __APP_PLL_425_RESET_TIMER_MK	0x000e0000
-#define __APP_PLL_425_RESET_TIMER_SH	17
-#define __APP_PLL_425_RESET_TIMER(_v)	((_v) << __APP_PLL_425_RESET_TIMER_SH)
-#define __APP_PLL_425_LOGIC_SOFT_RESET	0x00010000
-#define __APP_PLL_425_CNTLMT0_1_MK	0x0000c000
-#define __APP_PLL_425_CNTLMT0_1_SH	14
-#define __APP_PLL_425_CNTLMT0_1(_v)	((_v) << __APP_PLL_425_CNTLMT0_1_SH)
-#define __APP_PLL_425_JITLMT0_1_MK	0x00003000
-#define __APP_PLL_425_JITLMT0_1_SH	12
-#define __APP_PLL_425_JITLMT0_1(_v)	((_v) << __APP_PLL_425_JITLMT0_1_SH)
-#define __APP_PLL_425_HREF		0x00000800
-#define __APP_PLL_425_HDIV		0x00000400
-#define __APP_PLL_425_P0_1_MK		0x00000300
-#define __APP_PLL_425_P0_1_SH		8
-#define __APP_PLL_425_P0_1(_v)		((_v) << __APP_PLL_425_P0_1_SH)
-#define __APP_PLL_425_Z0_2_MK		0x000000e0
-#define __APP_PLL_425_Z0_2_SH		5
-#define __APP_PLL_425_Z0_2(_v)		((_v) << __APP_PLL_425_Z0_2_SH)
-#define __APP_PLL_425_RSEL200500	0x00000010
-#define __APP_PLL_425_ENARST		0x00000008
-#define __APP_PLL_425_BYPASS		0x00000004
-#define __APP_PLL_425_LRESETN		0x00000002
-#define __APP_PLL_425_ENABLE		0x00000001
-#define APP_PLL_312_CTL_REG		0x00014208
-#define __P_312_PLL_LOCK		0x80000000
-#define __ENABLE_MAC_AHB_1		0x00800000
-#define __ENABLE_MAC_AHB_0		0x00400000
-#define __ENABLE_MAC_1			0x00200000
-#define __ENABLE_MAC_0			0x00100000
-#define __APP_PLL_312_RESET_TIMER_MK	0x000e0000
-#define __APP_PLL_312_RESET_TIMER_SH	17
-#define __APP_PLL_312_RESET_TIMER(_v)	((_v) << __APP_PLL_312_RESET_TIMER_SH)
-#define __APP_PLL_312_LOGIC_SOFT_RESET	0x00010000
-#define __APP_PLL_312_CNTLMT0_1_MK	0x0000c000
-#define __APP_PLL_312_CNTLMT0_1_SH	14
-#define __APP_PLL_312_CNTLMT0_1(_v)	((_v) << __APP_PLL_312_CNTLMT0_1_SH)
-#define __APP_PLL_312_JITLMT0_1_MK	0x00003000
-#define __APP_PLL_312_JITLMT0_1_SH	12
-#define __APP_PLL_312_JITLMT0_1(_v)	((_v) << __APP_PLL_312_JITLMT0_1_SH)
-#define __APP_PLL_312_HREF		0x00000800
-#define __APP_PLL_312_HDIV		0x00000400
-#define __APP_PLL_312_P0_1_MK		0x00000300
-#define __APP_PLL_312_P0_1_SH		8
-#define __APP_PLL_312_P0_1(_v)		((_v) << __APP_PLL_312_P0_1_SH)
-#define __APP_PLL_312_Z0_2_MK		0x000000e0
-#define __APP_PLL_312_Z0_2_SH		5
-#define __APP_PLL_312_Z0_2(_v)		((_v) << __APP_PLL_312_Z0_2_SH)
-#define __APP_PLL_312_RSEL200500	0x00000010
-#define __APP_PLL_312_ENARST		0x00000008
-#define __APP_PLL_312_BYPASS		0x00000004
-#define __APP_PLL_312_LRESETN		0x00000002
-#define __APP_PLL_312_ENABLE		0x00000001
-#define MBIST_CTL_REG			0x00014220
-#define __EDRAM_BISTR_START		0x00000004
-#define __MBIST_RESET			0x00000002
-#define __MBIST_START			0x00000001
-#define MBIST_STAT_REG			0x00014224
-#define __EDRAM_BISTR_STATUS		0x00000008
-#define __EDRAM_BISTR_DONE		0x00000004
-#define __MEM_BIT_STATUS		0x00000002
-#define __MBIST_DONE			0x00000001
-#define HOST_SEM0_REG			0x00014230
-#define __HOST_SEMAPHORE		0x00000001
-#define HOST_SEM1_REG			0x00014234
-#define HOST_SEM2_REG			0x00014238
-#define HOST_SEM3_REG			0x0001423c
-#define HOST_SEM0_INFO_REG		0x00014240
-#define HOST_SEM1_INFO_REG		0x00014244
-#define HOST_SEM2_INFO_REG		0x00014248
-#define HOST_SEM3_INFO_REG		0x0001424c
-#define ETH_MAC_SER_REG			0x00014288
-#define __APP_EMS_CKBUFAMPIN		0x00000020
-#define __APP_EMS_REFCLKSEL		0x00000010
-#define __APP_EMS_CMLCKSEL		0x00000008
-#define __APP_EMS_REFCKBUFEN2		0x00000004
-#define __APP_EMS_REFCKBUFEN1		0x00000002
-#define __APP_EMS_CHANNEL_SEL		0x00000001
-#define HOSTFN2_INT_STATUS		0x00014300
-#define __HOSTFN2_HALT_OCCURRED		0x01000000
-#define __HOSTFN2_INT_STATUS_LVL_MK	0x00f00000
-#define __HOSTFN2_INT_STATUS_LVL_SH	20
-#define __HOSTFN2_INT_STATUS_LVL(_v)	((_v) << __HOSTFN2_INT_STATUS_LVL_SH)
-#define __HOSTFN2_INT_STATUS_P_MK	0x000f0000
-#define __HOSTFN2_INT_STATUS_P_SH	16
-#define __HOSTFN2_INT_STATUS_P(_v)	((_v) << __HOSTFN2_INT_STATUS_P_SH)
-#define __HOSTFN2_INT_STATUS_F		0x0000ffff
-#define HOSTFN2_INT_MSK			0x00014304
-#define HOST_PAGE_NUM_FN2		0x00014308
-#define HOST_MSIX_ERR_INDEX_FN2		0x0001430c
-#define HOSTFN3_INT_STATUS		0x00014400
-#define __HALT_OCCURRED			0x01000000
-#define __HOSTFN3_INT_STATUS_LVL_MK	0x00f00000
-#define __HOSTFN3_INT_STATUS_LVL_SH	20
-#define __HOSTFN3_INT_STATUS_LVL(_v)	((_v) << __HOSTFN3_INT_STATUS_LVL_SH)
-#define __HOSTFN3_INT_STATUS_P_MK	0x000f0000
-#define __HOSTFN3_INT_STATUS_P_SH	16
-#define __HOSTFN3_INT_STATUS_P(_v)	((_v) << __HOSTFN3_INT_STATUS_P_SH)
-#define __HOSTFN3_INT_STATUS_F		0x0000ffff
-#define HOSTFN3_INT_MSK			0x00014404
-#define HOST_PAGE_NUM_FN3		0x00014408
-#define HOST_MSIX_ERR_INDEX_FN3		0x0001440c
-#define FNC_ID_REG			0x00014600
-#define __FUNCTION_NUMBER		0x00000007
-#define FNC_PERS_REG			0x00014604
-#define __F3_FUNCTION_ACTIVE		0x80000000
-#define __F3_FUNCTION_MODE		0x40000000
-#define __F3_PORT_MAP_MK		0x30000000
-#define __F3_PORT_MAP_SH		28
-#define __F3_PORT_MAP(_v)		((_v) << __F3_PORT_MAP_SH)
-#define __F3_VM_MODE			0x08000000
-#define __F3_INTX_STATUS_MK		0x07000000
-#define __F3_INTX_STATUS_SH		24
-#define __F3_INTX_STATUS(_v)		((_v) << __F3_INTX_STATUS_SH)
-#define __F2_FUNCTION_ACTIVE		0x00800000
-#define __F2_FUNCTION_MODE		0x00400000
-#define __F2_PORT_MAP_MK		0x00300000
-#define __F2_PORT_MAP_SH		20
-#define __F2_PORT_MAP(_v)		((_v) << __F2_PORT_MAP_SH)
-#define __F2_VM_MODE			0x00080000
-#define __F2_INTX_STATUS_MK		0x00070000
-#define __F2_INTX_STATUS_SH		16
-#define __F2_INTX_STATUS(_v)		((_v) << __F2_INTX_STATUS_SH)
-#define __F1_FUNCTION_ACTIVE		0x00008000
-#define __F1_FUNCTION_MODE		0x00004000
-#define __F1_PORT_MAP_MK		0x00003000
-#define __F1_PORT_MAP_SH		12
-#define __F1_PORT_MAP(_v)		((_v) << __F1_PORT_MAP_SH)
-#define __F1_VM_MODE			0x00000800
-#define __F1_INTX_STATUS_MK		0x00000700
-#define __F1_INTX_STATUS_SH		8
-#define __F1_INTX_STATUS(_v)		((_v) << __F1_INTX_STATUS_SH)
-#define __F0_FUNCTION_ACTIVE		0x00000080
-#define __F0_FUNCTION_MODE		0x00000040
-#define __F0_PORT_MAP_MK		0x00000030
-#define __F0_PORT_MAP_SH		4
-#define __F0_PORT_MAP(_v)		((_v) << __F0_PORT_MAP_SH)
-#define __F0_VM_MODE		0x00000008
-#define __F0_INTX_STATUS		0x00000007
-enum {
-	__F0_INTX_STATUS_MSIX		= 0x0,
-	__F0_INTX_STATUS_INTA		= 0x1,
-	__F0_INTX_STATUS_INTB		= 0x2,
-	__F0_INTX_STATUS_INTC		= 0x3,
-	__F0_INTX_STATUS_INTD		= 0x4,
-};
-#define OP_MODE				0x0001460c
-#define __APP_ETH_CLK_LOWSPEED		0x00000004
-#define __GLOBAL_CORECLK_HALFSPEED	0x00000002
-#define __GLOBAL_FCOE_MODE		0x00000001
-#define HOST_SEM4_REG			0x00014610
-#define HOST_SEM5_REG			0x00014614
-#define HOST_SEM6_REG			0x00014618
-#define HOST_SEM7_REG			0x0001461c
-#define HOST_SEM4_INFO_REG		0x00014620
-#define HOST_SEM5_INFO_REG		0x00014624
-#define HOST_SEM6_INFO_REG		0x00014628
-#define HOST_SEM7_INFO_REG		0x0001462c
-#define HOSTFN0_LPU0_MBOX0_CMD_STAT	0x00019000
-#define __HOSTFN0_LPU0_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN0_LPU0_MBOX0_INFO_SH	1
-#define __HOSTFN0_LPU0_MBOX0_INFO(_v)	((_v) << __HOSTFN0_LPU0_MBOX0_INFO_SH)
-#define __HOSTFN0_LPU0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN0_LPU1_MBOX0_CMD_STAT	0x00019004
-#define __HOSTFN0_LPU1_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN0_LPU1_MBOX0_INFO_SH	1
-#define __HOSTFN0_LPU1_MBOX0_INFO(_v)	((_v) << __HOSTFN0_LPU1_MBOX0_INFO_SH)
-#define __HOSTFN0_LPU1_MBOX0_CMD_STATUS 0x00000001
-#define LPU0_HOSTFN0_MBOX0_CMD_STAT	0x00019008
-#define __LPU0_HOSTFN0_MBOX0_INFO_MK	0xfffffffe
-#define __LPU0_HOSTFN0_MBOX0_INFO_SH	1
-#define __LPU0_HOSTFN0_MBOX0_INFO(_v)	((_v) << __LPU0_HOSTFN0_MBOX0_INFO_SH)
-#define __LPU0_HOSTFN0_MBOX0_CMD_STATUS 0x00000001
-#define LPU1_HOSTFN0_MBOX0_CMD_STAT	0x0001900c
-#define __LPU1_HOSTFN0_MBOX0_INFO_MK	0xfffffffe
-#define __LPU1_HOSTFN0_MBOX0_INFO_SH	1
-#define __LPU1_HOSTFN0_MBOX0_INFO(_v)	((_v) << __LPU1_HOSTFN0_MBOX0_INFO_SH)
-#define __LPU1_HOSTFN0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN1_LPU0_MBOX0_CMD_STAT	0x00019010
-#define __HOSTFN1_LPU0_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN1_LPU0_MBOX0_INFO_SH	1
-#define __HOSTFN1_LPU0_MBOX0_INFO(_v)	((_v) << __HOSTFN1_LPU0_MBOX0_INFO_SH)
-#define __HOSTFN1_LPU0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN1_LPU1_MBOX0_CMD_STAT	0x00019014
-#define __HOSTFN1_LPU1_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN1_LPU1_MBOX0_INFO_SH	1
-#define __HOSTFN1_LPU1_MBOX0_INFO(_v)	((_v) << __HOSTFN1_LPU1_MBOX0_INFO_SH)
-#define __HOSTFN1_LPU1_MBOX0_CMD_STATUS 0x00000001
-#define LPU0_HOSTFN1_MBOX0_CMD_STAT	0x00019018
-#define __LPU0_HOSTFN1_MBOX0_INFO_MK	0xfffffffe
-#define __LPU0_HOSTFN1_MBOX0_INFO_SH	1
-#define __LPU0_HOSTFN1_MBOX0_INFO(_v)	((_v) << __LPU0_HOSTFN1_MBOX0_INFO_SH)
-#define __LPU0_HOSTFN1_MBOX0_CMD_STATUS 0x00000001
-#define LPU1_HOSTFN1_MBOX0_CMD_STAT	0x0001901c
-#define __LPU1_HOSTFN1_MBOX0_INFO_MK	0xfffffffe
-#define __LPU1_HOSTFN1_MBOX0_INFO_SH	1
-#define __LPU1_HOSTFN1_MBOX0_INFO(_v)	((_v) << __LPU1_HOSTFN1_MBOX0_INFO_SH)
-#define __LPU1_HOSTFN1_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN2_LPU0_MBOX0_CMD_STAT	0x00019150
-#define __HOSTFN2_LPU0_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN2_LPU0_MBOX0_INFO_SH	1
-#define __HOSTFN2_LPU0_MBOX0_INFO(_v)	((_v) << __HOSTFN2_LPU0_MBOX0_INFO_SH)
-#define __HOSTFN2_LPU0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN2_LPU1_MBOX0_CMD_STAT	0x00019154
-#define __HOSTFN2_LPU1_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN2_LPU1_MBOX0_INFO_SH	1
-#define __HOSTFN2_LPU1_MBOX0_INFO(_v)	((_v) << __HOSTFN2_LPU1_MBOX0_INFO_SH)
-#define __HOSTFN2_LPU1_MBOX0BOX0_CMD_STATUS 0x00000001
-#define LPU0_HOSTFN2_MBOX0_CMD_STAT	0x00019158
-#define __LPU0_HOSTFN2_MBOX0_INFO_MK	0xfffffffe
-#define __LPU0_HOSTFN2_MBOX0_INFO_SH	1
-#define __LPU0_HOSTFN2_MBOX0_INFO(_v)	((_v) << __LPU0_HOSTFN2_MBOX0_INFO_SH)
-#define __LPU0_HOSTFN2_MBOX0_CMD_STATUS 0x00000001
-#define LPU1_HOSTFN2_MBOX0_CMD_STAT	0x0001915c
-#define __LPU1_HOSTFN2_MBOX0_INFO_MK	0xfffffffe
-#define __LPU1_HOSTFN2_MBOX0_INFO_SH	1
-#define __LPU1_HOSTFN2_MBOX0_INFO(_v)	((_v) << __LPU1_HOSTFN2_MBOX0_INFO_SH)
-#define __LPU1_HOSTFN2_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN3_LPU0_MBOX0_CMD_STAT	0x00019160
-#define __HOSTFN3_LPU0_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN3_LPU0_MBOX0_INFO_SH	1
-#define __HOSTFN3_LPU0_MBOX0_INFO(_v)	((_v) << __HOSTFN3_LPU0_MBOX0_INFO_SH)
-#define __HOSTFN3_LPU0_MBOX0_CMD_STATUS 0x00000001
-#define HOSTFN3_LPU1_MBOX0_CMD_STAT	0x00019164
-#define __HOSTFN3_LPU1_MBOX0_INFO_MK	0xfffffffe
-#define __HOSTFN3_LPU1_MBOX0_INFO_SH	1
-#define __HOSTFN3_LPU1_MBOX0_INFO(_v)	((_v) << __HOSTFN3_LPU1_MBOX0_INFO_SH)
-#define __HOSTFN3_LPU1_MBOX0_CMD_STATUS 0x00000001
-#define LPU0_HOSTFN3_MBOX0_CMD_STAT	0x00019168
-#define __LPU0_HOSTFN3_MBOX0_INFO_MK	0xfffffffe
-#define __LPU0_HOSTFN3_MBOX0_INFO_SH	1
-#define __LPU0_HOSTFN3_MBOX0_INFO(_v)	((_v) << __LPU0_HOSTFN3_MBOX0_INFO_SH)
-#define __LPU0_HOSTFN3_MBOX0_CMD_STATUS 0x00000001
-#define LPU1_HOSTFN3_MBOX0_CMD_STAT	0x0001916c
-#define __LPU1_HOSTFN3_MBOX0_INFO_MK	0xfffffffe
-#define __LPU1_HOSTFN3_MBOX0_INFO_SH	1
-#define __LPU1_HOSTFN3_MBOX0_INFO(_v)	((_v) << __LPU1_HOSTFN3_MBOX0_INFO_SH)
-#define __LPU1_HOSTFN3_MBOX0_CMD_STATUS	0x00000001
-#define FW_INIT_HALT_P0			0x000191ac
-#define __FW_INIT_HALT_P		0x00000001
-#define FW_INIT_HALT_P1			0x000191bc
-#define CPE_PI_PTR_Q0			0x00038000
-#define __CPE_PI_UNUSED_MK		0xffff0000
-#define __CPE_PI_UNUSED_SH		16
-#define __CPE_PI_UNUSED(_v)		((_v) << __CPE_PI_UNUSED_SH)
-#define __CPE_PI_PTR			0x0000ffff
-#define CPE_PI_PTR_Q1			0x00038040
-#define CPE_CI_PTR_Q0			0x00038004
-#define __CPE_CI_UNUSED_MK		0xffff0000
-#define __CPE_CI_UNUSED_SH		16
-#define __CPE_CI_UNUSED(_v)		((_v) << __CPE_CI_UNUSED_SH)
-#define __CPE_CI_PTR			0x0000ffff
-#define CPE_CI_PTR_Q1			0x00038044
-#define CPE_DEPTH_Q0			0x00038008
-#define __CPE_DEPTH_UNUSED_MK		0xf8000000
-#define __CPE_DEPTH_UNUSED_SH		27
-#define __CPE_DEPTH_UNUSED(_v)		((_v) << __CPE_DEPTH_UNUSED_SH)
-#define __CPE_MSIX_VEC_INDEX_MK		0x07ff0000
-#define __CPE_MSIX_VEC_INDEX_SH		16
-#define __CPE_MSIX_VEC_INDEX(_v)	((_v) << __CPE_MSIX_VEC_INDEX_SH)
-#define __CPE_DEPTH			0x0000ffff
-#define CPE_DEPTH_Q1			0x00038048
-#define CPE_QCTRL_Q0			0x0003800c
-#define __CPE_CTRL_UNUSED30_MK		0xfc000000
-#define __CPE_CTRL_UNUSED30_SH		26
-#define __CPE_CTRL_UNUSED30(_v)		((_v) << __CPE_CTRL_UNUSED30_SH)
-#define __CPE_FUNC_INT_CTRL_MK		0x03000000
-#define __CPE_FUNC_INT_CTRL_SH		24
-#define __CPE_FUNC_INT_CTRL(_v)		((_v) << __CPE_FUNC_INT_CTRL_SH)
-enum {
-	__CPE_FUNC_INT_CTRL_DISABLE		= 0x0,
-	__CPE_FUNC_INT_CTRL_F2NF		= 0x1,
-	__CPE_FUNC_INT_CTRL_3QUART		= 0x2,
-	__CPE_FUNC_INT_CTRL_HALF		= 0x3,
-};
-#define __CPE_CTRL_UNUSED20_MK		0x00f00000
-#define __CPE_CTRL_UNUSED20_SH		20
-#define __CPE_CTRL_UNUSED20(_v)		((_v) << __CPE_CTRL_UNUSED20_SH)
-#define __CPE_SCI_TH_MK			0x000f0000
-#define __CPE_SCI_TH_SH			16
-#define __CPE_SCI_TH(_v)		((_v) << __CPE_SCI_TH_SH)
-#define __CPE_CTRL_UNUSED10_MK		0x0000c000
-#define __CPE_CTRL_UNUSED10_SH		14
-#define __CPE_CTRL_UNUSED10(_v)		((_v) << __CPE_CTRL_UNUSED10_SH)
-#define __CPE_ACK_PENDING		0x00002000
-#define __CPE_CTRL_UNUSED40_MK		0x00001c00
-#define __CPE_CTRL_UNUSED40_SH		10
-#define __CPE_CTRL_UNUSED40(_v)		((_v) << __CPE_CTRL_UNUSED40_SH)
-#define __CPE_PCIEID_MK			0x00000300
-#define __CPE_PCIEID_SH			8
-#define __CPE_PCIEID(_v)		((_v) << __CPE_PCIEID_SH)
-#define __CPE_CTRL_UNUSED00_MK		0x000000fe
-#define __CPE_CTRL_UNUSED00_SH		1
-#define __CPE_CTRL_UNUSED00(_v)		((_v) << __CPE_CTRL_UNUSED00_SH)
-#define __CPE_ESIZE			0x00000001
-#define CPE_QCTRL_Q1			0x0003804c
-#define __CPE_CTRL_UNUSED31_MK		0xfc000000
-#define __CPE_CTRL_UNUSED31_SH		26
-#define __CPE_CTRL_UNUSED31(_v)		((_v) << __CPE_CTRL_UNUSED31_SH)
-#define __CPE_CTRL_UNUSED21_MK		0x00f00000
-#define __CPE_CTRL_UNUSED21_SH		20
-#define __CPE_CTRL_UNUSED21(_v)		((_v) << __CPE_CTRL_UNUSED21_SH)
-#define __CPE_CTRL_UNUSED11_MK		0x0000c000
-#define __CPE_CTRL_UNUSED11_SH		14
-#define __CPE_CTRL_UNUSED11(_v)		((_v) << __CPE_CTRL_UNUSED11_SH)
-#define __CPE_CTRL_UNUSED41_MK		0x00001c00
-#define __CPE_CTRL_UNUSED41_SH		10
-#define __CPE_CTRL_UNUSED41(_v)		((_v) << __CPE_CTRL_UNUSED41_SH)
-#define __CPE_CTRL_UNUSED01_MK		0x000000fe
-#define __CPE_CTRL_UNUSED01_SH		1
-#define __CPE_CTRL_UNUSED01(_v)		((_v) << __CPE_CTRL_UNUSED01_SH)
-#define RME_PI_PTR_Q0			0x00038020
-#define __LATENCY_TIME_STAMP_MK		0xffff0000
-#define __LATENCY_TIME_STAMP_SH		16
-#define __LATENCY_TIME_STAMP(_v)	((_v) << __LATENCY_TIME_STAMP_SH)
-#define __RME_PI_PTR			0x0000ffff
-#define RME_PI_PTR_Q1			0x00038060
-#define RME_CI_PTR_Q0			0x00038024
-#define __DELAY_TIME_STAMP_MK		0xffff0000
-#define __DELAY_TIME_STAMP_SH		16
-#define __DELAY_TIME_STAMP(_v)		((_v) << __DELAY_TIME_STAMP_SH)
-#define __RME_CI_PTR			0x0000ffff
-#define RME_CI_PTR_Q1			0x00038064
-#define RME_DEPTH_Q0			0x00038028
-#define __RME_DEPTH_UNUSED_MK		0xf8000000
-#define __RME_DEPTH_UNUSED_SH		27
-#define __RME_DEPTH_UNUSED(_v)		((_v) << __RME_DEPTH_UNUSED_SH)
-#define __RME_MSIX_VEC_INDEX_MK		0x07ff0000
-#define __RME_MSIX_VEC_INDEX_SH		16
-#define __RME_MSIX_VEC_INDEX(_v)	((_v) << __RME_MSIX_VEC_INDEX_SH)
-#define __RME_DEPTH			0x0000ffff
-#define RME_DEPTH_Q1			0x00038068
-#define RME_QCTRL_Q0			0x0003802c
-#define __RME_INT_LATENCY_TIMER_MK	0xff000000
-#define __RME_INT_LATENCY_TIMER_SH	24
-#define __RME_INT_LATENCY_TIMER(_v)	((_v) << __RME_INT_LATENCY_TIMER_SH)
-#define __RME_INT_DELAY_TIMER_MK	0x00ff0000
-#define __RME_INT_DELAY_TIMER_SH	16
-#define __RME_INT_DELAY_TIMER(_v)	((_v) << __RME_INT_DELAY_TIMER_SH)
-#define __RME_INT_DELAY_DISABLE		0x00008000
-#define __RME_DLY_DELAY_DISABLE		0x00004000
-#define __RME_ACK_PENDING		0x00002000
-#define __RME_FULL_INTERRUPT_DISABLE	0x00001000
-#define __RME_CTRL_UNUSED10_MK		0x00000c00
-#define __RME_CTRL_UNUSED10_SH		10
-#define __RME_CTRL_UNUSED10(_v)		((_v) << __RME_CTRL_UNUSED10_SH)
-#define __RME_PCIEID_MK			0x00000300
-#define __RME_PCIEID_SH			8
-#define __RME_PCIEID(_v)		((_v) << __RME_PCIEID_SH)
-#define __RME_CTRL_UNUSED00_MK		0x000000fe
-#define __RME_CTRL_UNUSED00_SH		1
-#define __RME_CTRL_UNUSED00(_v)		((_v) << __RME_CTRL_UNUSED00_SH)
-#define __RME_ESIZE			0x00000001
-#define RME_QCTRL_Q1			0x0003806c
-#define __RME_CTRL_UNUSED11_MK		0x00000c00
-#define __RME_CTRL_UNUSED11_SH		10
-#define __RME_CTRL_UNUSED11(_v)		((_v) << __RME_CTRL_UNUSED11_SH)
-#define __RME_CTRL_UNUSED01_MK		0x000000fe
-#define __RME_CTRL_UNUSED01_SH		1
-#define __RME_CTRL_UNUSED01(_v)		((_v) << __RME_CTRL_UNUSED01_SH)
-#define PSS_CTL_REG			0x00018800
-#define __PSS_I2C_CLK_DIV_MK		0x007f0000
-#define __PSS_I2C_CLK_DIV_SH		16
-#define __PSS_I2C_CLK_DIV(_v)		((_v) << __PSS_I2C_CLK_DIV_SH)
-#define __PSS_LMEM_INIT_DONE		0x00001000
-#define __PSS_LMEM_RESET		0x00000200
-#define __PSS_LMEM_INIT_EN		0x00000100
-#define __PSS_LPU1_RESET		0x00000002
-#define __PSS_LPU0_RESET		0x00000001
-#define PSS_ERR_STATUS_REG		0x00018810
-#define __PSS_LPU1_TCM_READ_ERR		0x00200000
-#define __PSS_LPU0_TCM_READ_ERR		0x00100000
-#define __PSS_LMEM5_CORR_ERR		0x00080000
-#define __PSS_LMEM4_CORR_ERR		0x00040000
-#define __PSS_LMEM3_CORR_ERR		0x00020000
-#define __PSS_LMEM2_CORR_ERR		0x00010000
-#define __PSS_LMEM1_CORR_ERR		0x00008000
-#define __PSS_LMEM0_CORR_ERR		0x00004000
-#define __PSS_LMEM5_UNCORR_ERR		0x00002000
-#define __PSS_LMEM4_UNCORR_ERR		0x00001000
-#define __PSS_LMEM3_UNCORR_ERR		0x00000800
-#define __PSS_LMEM2_UNCORR_ERR		0x00000400
-#define __PSS_LMEM1_UNCORR_ERR		0x00000200
-#define __PSS_LMEM0_UNCORR_ERR		0x00000100
-#define __PSS_BAL_PERR			0x00000080
-#define __PSS_DIP_IF_ERR		0x00000040
-#define __PSS_IOH_IF_ERR		0x00000020
-#define __PSS_TDS_IF_ERR		0x00000010
-#define __PSS_RDS_IF_ERR		0x00000008
-#define __PSS_SGM_IF_ERR		0x00000004
-#define __PSS_LPU1_RAM_ERR		0x00000002
-#define __PSS_LPU0_RAM_ERR		0x00000001
-#define ERR_SET_REG			0x00018818
-#define __PSS_ERR_STATUS_SET		0x003fffff
-#define PMM_1T_RESET_REG_P0		0x0002381c
-#define __PMM_1T_RESET_P		0x00000001
-#define PMM_1T_RESET_REG_P1		0x00023c1c
-#define HQM_QSET0_RXQ_DRBL_P0		0x00038000
-#define __RXQ0_ADD_VECTORS_P		0x80000000
-#define __RXQ0_STOP_P			0x40000000
-#define __RXQ0_PRD_PTR_P		0x0000ffff
-#define HQM_QSET1_RXQ_DRBL_P0		0x00038080
-#define __RXQ1_ADD_VECTORS_P		0x80000000
-#define __RXQ1_STOP_P			0x40000000
-#define __RXQ1_PRD_PTR_P		0x0000ffff
-#define HQM_QSET0_RXQ_DRBL_P1		0x0003c000
-#define HQM_QSET1_RXQ_DRBL_P1		0x0003c080
-#define HQM_QSET0_TXQ_DRBL_P0		0x00038020
-#define __TXQ0_ADD_VECTORS_P		0x80000000
-#define __TXQ0_STOP_P			0x40000000
-#define __TXQ0_PRD_PTR_P		0x0000ffff
-#define HQM_QSET1_TXQ_DRBL_P0		0x000380a0
-#define __TXQ1_ADD_VECTORS_P		0x80000000
-#define __TXQ1_STOP_P			0x40000000
-#define __TXQ1_PRD_PTR_P		0x0000ffff
-#define HQM_QSET0_TXQ_DRBL_P1		0x0003c020
-#define HQM_QSET1_TXQ_DRBL_P1		0x0003c0a0
-#define HQM_QSET0_IB_DRBL_1_P0		0x00038040
-#define __IB1_0_ACK_P			0x80000000
-#define __IB1_0_DISABLE_P		0x40000000
-#define __IB1_0_COALESCING_CFG_P_MK	0x00ff0000
-#define __IB1_0_COALESCING_CFG_P_SH	16
-#define __IB1_0_COALESCING_CFG_P(_v)	((_v) << __IB1_0_COALESCING_CFG_P_SH)
-#define __IB1_0_NUM_OF_ACKED_EVENTS_P	0x0000ffff
-#define HQM_QSET1_IB_DRBL_1_P0		0x000380c0
-#define __IB1_1_ACK_P			0x80000000
-#define __IB1_1_DISABLE_P		0x40000000
-#define __IB1_1_COALESCING_CFG_P_MK	0x00ff0000
-#define __IB1_1_COALESCING_CFG_P_SH	16
-#define __IB1_1_COALESCING_CFG_P(_v)	((_v) << __IB1_1_COALESCING_CFG_P_SH)
-#define __IB1_1_NUM_OF_ACKED_EVENTS_P	0x0000ffff
-#define HQM_QSET0_IB_DRBL_1_P1		0x0003c040
-#define HQM_QSET1_IB_DRBL_1_P1		0x0003c0c0
-#define HQM_QSET0_IB_DRBL_2_P0		0x00038060
-#define __IB2_0_ACK_P			0x80000000
-#define __IB2_0_DISABLE_P		0x40000000
-#define __IB2_0_COALESCING_CFG_P_MK	0x00ff0000
-#define __IB2_0_COALESCING_CFG_P_SH	16
-#define __IB2_0_COALESCING_CFG_P(_v)	((_v) << __IB2_0_COALESCING_CFG_P_SH)
-#define __IB2_0_NUM_OF_ACKED_EVENTS_P	0x0000ffff
-#define HQM_QSET1_IB_DRBL_2_P0		0x000380e0
-#define __IB2_1_ACK_P			0x80000000
-#define __IB2_1_DISABLE_P		0x40000000
-#define __IB2_1_COALESCING_CFG_P_MK	0x00ff0000
-#define __IB2_1_COALESCING_CFG_P_SH	16
-#define __IB2_1_COALESCING_CFG_P(_v)	((_v) << __IB2_1_COALESCING_CFG_P_SH)
-#define __IB2_1_NUM_OF_ACKED_EVENTS_P	0x0000ffff
-#define HQM_QSET0_IB_DRBL_2_P1		0x0003c060
-#define HQM_QSET1_IB_DRBL_2_P1		0x0003c0e0
-
-/*
- * These definitions are either in error/missing in spec. Its auto-generated
- * from hard coded values in regparse.pl.
- */
-#define __EMPHPOST_AT_4G_MK_FIX		0x0000001c
-#define __EMPHPOST_AT_4G_SH_FIX		0x00000002
-#define __EMPHPRE_AT_4G_FIX		0x00000003
-#define __SFP_TXRATE_EN_FIX		0x00000100
-#define __SFP_RXRATE_EN_FIX		0x00000080
-
-/*
- * These register definitions are auto-generated from hard coded values
- * in regparse.pl.
- */
-
-/*
- * These register mapping definitions are auto-generated from mapping tables
- * in regparse.pl.
- */
-#define BFA_IOC0_HBEAT_REG		HOST_SEM0_INFO_REG
-#define BFA_IOC0_STATE_REG		HOST_SEM1_INFO_REG
-#define BFA_IOC1_HBEAT_REG		HOST_SEM2_INFO_REG
-#define BFA_IOC1_STATE_REG		HOST_SEM3_INFO_REG
-#define BFA_FW_USE_COUNT		 HOST_SEM4_INFO_REG
-#define BFA_IOC_FAIL_SYNC		HOST_SEM5_INFO_REG
-
-#define CPE_DEPTH_Q(__n) \
-	(CPE_DEPTH_Q0 + (__n) * (CPE_DEPTH_Q1 - CPE_DEPTH_Q0))
-#define CPE_QCTRL_Q(__n) \
-	(CPE_QCTRL_Q0 + (__n) * (CPE_QCTRL_Q1 - CPE_QCTRL_Q0))
-#define CPE_PI_PTR_Q(__n) \
-	(CPE_PI_PTR_Q0 + (__n) * (CPE_PI_PTR_Q1 - CPE_PI_PTR_Q0))
-#define CPE_CI_PTR_Q(__n) \
-	(CPE_CI_PTR_Q0 + (__n) * (CPE_CI_PTR_Q1 - CPE_CI_PTR_Q0))
-#define RME_DEPTH_Q(__n) \
-	(RME_DEPTH_Q0 + (__n) * (RME_DEPTH_Q1 - RME_DEPTH_Q0))
-#define RME_QCTRL_Q(__n) \
-	(RME_QCTRL_Q0 + (__n) * (RME_QCTRL_Q1 - RME_QCTRL_Q0))
-#define RME_PI_PTR_Q(__n) \
-	(RME_PI_PTR_Q0 + (__n) * (RME_PI_PTR_Q1 - RME_PI_PTR_Q0))
-#define RME_CI_PTR_Q(__n) \
-	(RME_CI_PTR_Q0 + (__n) * (RME_CI_PTR_Q1 - RME_CI_PTR_Q0))
-#define HQM_QSET_RXQ_DRBL_P0(__n) \
-	(HQM_QSET0_RXQ_DRBL_P0 + (__n) * \
-		(HQM_QSET1_RXQ_DRBL_P0 - HQM_QSET0_RXQ_DRBL_P0))
-#define HQM_QSET_TXQ_DRBL_P0(__n) \
-	(HQM_QSET0_TXQ_DRBL_P0 + (__n) * \
-		(HQM_QSET1_TXQ_DRBL_P0 - HQM_QSET0_TXQ_DRBL_P0))
-#define HQM_QSET_IB_DRBL_1_P0(__n) \
-	(HQM_QSET0_IB_DRBL_1_P0 + (__n) * \
-		(HQM_QSET1_IB_DRBL_1_P0 - HQM_QSET0_IB_DRBL_1_P0))
-#define HQM_QSET_IB_DRBL_2_P0(__n) \
-	(HQM_QSET0_IB_DRBL_2_P0 + (__n) * \
-		(HQM_QSET1_IB_DRBL_2_P0 - HQM_QSET0_IB_DRBL_2_P0))
-#define HQM_QSET_RXQ_DRBL_P1(__n) \
-	(HQM_QSET0_RXQ_DRBL_P1 + (__n) * \
-		(HQM_QSET1_RXQ_DRBL_P1 - HQM_QSET0_RXQ_DRBL_P1))
-#define HQM_QSET_TXQ_DRBL_P1(__n) \
-	(HQM_QSET0_TXQ_DRBL_P1 + (__n) * \
-		(HQM_QSET1_TXQ_DRBL_P1 - HQM_QSET0_TXQ_DRBL_P1))
-#define HQM_QSET_IB_DRBL_1_P1(__n) \
-	(HQM_QSET0_IB_DRBL_1_P1 + (__n) * \
-		(HQM_QSET1_IB_DRBL_1_P1 - HQM_QSET0_IB_DRBL_1_P1))
-#define HQM_QSET_IB_DRBL_2_P1(__n) \
-	(HQM_QSET0_IB_DRBL_2_P1 + (__n) * \
-		(HQM_QSET1_IB_DRBL_2_P1 - HQM_QSET0_IB_DRBL_2_P1))
-
-#define CPE_Q_NUM(__fn, __q) (((__fn) << 2) + (__q))
-#define RME_Q_NUM(__fn, __q) (((__fn) << 2) + (__q))
-#define CPE_Q_MASK(__q) ((__q) & 0x3)
-#define RME_Q_MASK(__q) ((__q) & 0x3)
-
-/*
- * PCI MSI-X vector defines
- */
-enum {
-	BFA_MSIX_CPE_Q0 = 0,
-	BFA_MSIX_CPE_Q1 = 1,
-	BFA_MSIX_CPE_Q2 = 2,
-	BFA_MSIX_CPE_Q3 = 3,
-	BFA_MSIX_RME_Q0 = 4,
-	BFA_MSIX_RME_Q1 = 5,
-	BFA_MSIX_RME_Q2 = 6,
-	BFA_MSIX_RME_Q3 = 7,
-	BFA_MSIX_LPU_ERR = 8,
-	BFA_MSIX_CT_MAX = 9,
-};
-
-/*
- * And corresponding host interrupt status bit field defines
- */
-#define __HFN_INT_CPE_Q0		0x00000001U
-#define __HFN_INT_CPE_Q1		0x00000002U
-#define __HFN_INT_CPE_Q2		0x00000004U
-#define __HFN_INT_CPE_Q3		0x00000008U
-#define __HFN_INT_CPE_Q4		0x00000010U
-#define __HFN_INT_CPE_Q5		0x00000020U
-#define __HFN_INT_CPE_Q6		0x00000040U
-#define __HFN_INT_CPE_Q7		0x00000080U
-#define __HFN_INT_RME_Q0		0x00000100U
-#define __HFN_INT_RME_Q1		0x00000200U
-#define __HFN_INT_RME_Q2		0x00000400U
-#define __HFN_INT_RME_Q3		0x00000800U
-#define __HFN_INT_RME_Q4		0x00001000U
-#define __HFN_INT_RME_Q5		0x00002000U
-#define __HFN_INT_RME_Q6		0x00004000U
-#define __HFN_INT_RME_Q7		0x00008000U
-#define __HFN_INT_ERR_EMC		0x00010000U
-#define __HFN_INT_ERR_LPU0		0x00020000U
-#define __HFN_INT_ERR_LPU1		0x00040000U
-#define __HFN_INT_ERR_PSS		0x00080000U
-#define __HFN_INT_MBOX_LPU0		0x00100000U
-#define __HFN_INT_MBOX_LPU1		0x00200000U
-#define __HFN_INT_MBOX1_LPU0		0x00400000U
-#define __HFN_INT_MBOX1_LPU1		0x00800000U
-#define __HFN_INT_LL_HALT		0x01000000U
-#define __HFN_INT_CPE_MASK		0x000000ffU
-#define __HFN_INT_RME_MASK		0x0000ff00U
-
-/*
- * catapult memory map.
- */
-#define LL_PGN_HQM0		0x0096
-#define LL_PGN_HQM1		0x0097
-#define PSS_SMEM_PAGE_START	0x8000
-#define PSS_SMEM_PGNUM(_pg0, _ma)	((_pg0) + ((_ma) >> 15))
-#define PSS_SMEM_PGOFF(_ma)	((_ma) & 0x7fff)
-
-/*
- * End of catapult memory map
- */
-
-#endif /* __BFI_CTREG_H__ */
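
The per-queue accessors near the end of the deleted header
(CPE_DEPTH_Q(), RME_PI_PTR_Q(), the HQM_QSET_*() family) all compute a
register address the same way: queue-0 base plus queue index times the
queue-0/queue-1 stride. A stand-alone sketch of that addressing
arithmetic, reusing the CPE_DEPTH_Q0/Q1 addresses from the header:

#include <stdio.h>

#define CPE_DEPTH_Q0	0x00038008u	/* from the header above */
#define CPE_DEPTH_Q1	0x00038048u

/* same shape as the deleted macro: base + queue * stride */
#define CPE_DEPTH_Q(__n) \
	(CPE_DEPTH_Q0 + (__n) * (CPE_DEPTH_Q1 - CPE_DEPTH_Q0))

int main(void)
{
	unsigned int n;

	for (n = 0; n < 4; n++)
		printf("CPE_DEPTH_Q(%u) = 0x%08x\n", n, CPE_DEPTH_Q(n));
	return 0;
}

The HQM doorbell variants follow the same pattern, with the port-1
bases sitting 0x4000 above their port-0 counterparts (0x38000 vs
0x3c000 in the defines above).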
diff --git a/drivers/net/bna/bfi_ll.h b/drivers/net/bna/bfi_ll.h
deleted file mode 100644
index e3bdb87..0000000
--- a/drivers/net/bna/bfi_ll.h
+++ /dev/null
@@ -1,425 +0,0 @@
-/*
- * Linux network driver for Brocade Converged Network Adapter.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License (GPL) Version 2 as
- * published by the Free Software Foundation
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- */
-/*
- * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
- * All rights reserved
- * www.brocade.com
- */
-#ifndef __BFI_LL_H__
-#define __BFI_LL_H__
-
-#include "bfi.h"
-
-#pragma pack(1)
-
-/**
- * @brief
- *	"enums" for all LL mailbox messages other than IOC
- */
-enum {
-	BFI_LL_H2I_MAC_UCAST_SET_REQ = 1,
-	BFI_LL_H2I_MAC_UCAST_ADD_REQ = 2,
-	BFI_LL_H2I_MAC_UCAST_DEL_REQ = 3,
-
-	BFI_LL_H2I_MAC_MCAST_ADD_REQ = 4,
-	BFI_LL_H2I_MAC_MCAST_DEL_REQ = 5,
-	BFI_LL_H2I_MAC_MCAST_FILTER_REQ = 6,
-	BFI_LL_H2I_MAC_MCAST_DEL_ALL_REQ = 7,
-
-	BFI_LL_H2I_PORT_ADMIN_REQ = 8,
-	BFI_LL_H2I_STATS_GET_REQ = 9,
-	BFI_LL_H2I_STATS_CLEAR_REQ = 10,
-
-	BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ = 11,
-	BFI_LL_H2I_RXF_DEFAULT_SET_REQ = 12,
-
-	BFI_LL_H2I_TXQ_STOP_REQ = 13,
-	BFI_LL_H2I_RXQ_STOP_REQ = 14,
-
-	BFI_LL_H2I_DIAG_LOOPBACK_REQ = 15,
-
-	BFI_LL_H2I_SET_PAUSE_REQ = 16,
-	BFI_LL_H2I_MTU_INFO_REQ = 17,
-
-	BFI_LL_H2I_RX_REQ = 18,
-} ;
-
-enum {
-	BFI_LL_I2H_MAC_UCAST_SET_RSP = BFA_I2HM(1),
-	BFI_LL_I2H_MAC_UCAST_ADD_RSP = BFA_I2HM(2),
-	BFI_LL_I2H_MAC_UCAST_DEL_RSP = BFA_I2HM(3),
-
-	BFI_LL_I2H_MAC_MCAST_ADD_RSP = BFA_I2HM(4),
-	BFI_LL_I2H_MAC_MCAST_DEL_RSP = BFA_I2HM(5),
-	BFI_LL_I2H_MAC_MCAST_FILTER_RSP = BFA_I2HM(6),
-	BFI_LL_I2H_MAC_MCAST_DEL_ALL_RSP = BFA_I2HM(7),
-
-	BFI_LL_I2H_PORT_ADMIN_RSP = BFA_I2HM(8),
-	BFI_LL_I2H_STATS_GET_RSP = BFA_I2HM(9),
-	BFI_LL_I2H_STATS_CLEAR_RSP = BFA_I2HM(10),
-
-	BFI_LL_I2H_RXF_PROMISCUOUS_SET_RSP = BFA_I2HM(11),
-	BFI_LL_I2H_RXF_DEFAULT_SET_RSP = BFA_I2HM(12),
-
-	BFI_LL_I2H_TXQ_STOP_RSP = BFA_I2HM(13),
-	BFI_LL_I2H_RXQ_STOP_RSP = BFA_I2HM(14),
-
-	BFI_LL_I2H_DIAG_LOOPBACK_RSP = BFA_I2HM(15),
-
-	BFI_LL_I2H_SET_PAUSE_RSP = BFA_I2HM(16),
-
-	BFI_LL_I2H_MTU_INFO_RSP = BFA_I2HM(17),
-	BFI_LL_I2H_RX_RSP = BFA_I2HM(18),
-
-	BFI_LL_I2H_LINK_DOWN_AEN = BFA_I2HM(19),
-	BFI_LL_I2H_LINK_UP_AEN = BFA_I2HM(20),
-
-	BFI_LL_I2H_PORT_ENABLE_AEN = BFA_I2HM(21),
-	BFI_LL_I2H_PORT_DISABLE_AEN = BFA_I2HM(22),
-} ;
-
-/**
- * @brief bfi_ll_mac_addr_req is used by:
- *        BFI_LL_H2I_MAC_UCAST_SET_REQ
- *        BFI_LL_H2I_MAC_UCAST_ADD_REQ
- *        BFI_LL_H2I_MAC_UCAST_DEL_REQ
- *        BFI_LL_H2I_MAC_MCAST_ADD_REQ
- *        BFI_LL_H2I_MAC_MCAST_DEL_REQ
- */
-struct bfi_ll_mac_addr_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		rxf_id;
-	u8		rsvd1[3];
-	mac_t		mac_addr;
-	u8		rsvd2[2];
-};
-
-/**
- * @brief bfi_ll_mcast_filter_req is used by:
- *	  BFI_LL_H2I_MAC_MCAST_FILTER_REQ
- */
-struct bfi_ll_mcast_filter_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		rxf_id;
-	u8		enable;
-	u8		rsvd[2];
-};
-
-/**
- * @brief bfi_ll_mcast_del_all is used by:
- *	  BFI_LL_H2I_MAC_MCAST_DEL_ALL_REQ
- */
-struct bfi_ll_mcast_del_all_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		   rxf_id;
-	u8		   rsvd[3];
-};
-
-/**
- * @brief bfi_ll_q_stop_req is used by:
- *	BFI_LL_H2I_TXQ_STOP_REQ
- *	BFI_LL_H2I_RXQ_STOP_REQ
- */
-struct bfi_ll_q_stop_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u32	q_id_mask[2];	/* !< bit-mask for queue ids */
-};
-
-/**
- * @brief bfi_ll_stats_req is used by:
- *    BFI_LL_I2H_STATS_GET_REQ
- *    BFI_LL_I2H_STATS_CLEAR_REQ
- */
-struct bfi_ll_stats_req {
-	struct bfi_mhdr mh;	/*!< common msg header */
-	u16 stats_mask;	/* !< bit-mask for non-function statistics */
-	u8	rsvd[2];
-	u32 rxf_id_mask[2];	/* !< bit-mask for RxF Statistics */
-	u32 txf_id_mask[2];	/* !< bit-mask for TxF Statistics */
-	union bfi_addr_u  host_buffer;	/* !< where statistics are returned */
-};
-
-/**
- * @brief defines for "stats_mask" above.
- */
-#define BFI_LL_STATS_MAC	(1 << 0)	/* !< MAC Statistics */
-#define BFI_LL_STATS_BPC	(1 << 1)	/* !< Pause Stats from BPC */
-#define BFI_LL_STATS_RAD	(1 << 2)	/* !< Rx Admission Statistics */
-#define BFI_LL_STATS_RX_FC	(1 << 3)	/* !< Rx FC Stats from RxA */
-#define BFI_LL_STATS_TX_FC	(1 << 4)	/* !< Tx FC Stats from TxA */
-
-#define BFI_LL_STATS_ALL	0x1f
-
-/**
- * @brief bfi_ll_port_admin_req
- */
-struct bfi_ll_port_admin_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		 up;
-	u8		 rsvd[3];
-};
-
-/**
- * @brief bfi_ll_rxf_req is used by:
- *      BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ
- *      BFI_LL_H2I_RXF_DEFAULT_SET_REQ
- */
-struct bfi_ll_rxf_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		rxf_id;
-	u8		enable;
-	u8		rsvd[2];
-};
-
-/**
- * @brief bfi_ll_rxf_multi_req is used by:
- *	BFI_LL_H2I_RX_REQ
- */
-struct bfi_ll_rxf_multi_req {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u32	rxf_id_mask[2];
-	u8		enable;
-	u8		rsvd[3];
-};
-
-/**
- * @brief enum for Loopback opmodes
- */
-enum {
-	BFI_LL_DIAG_LB_OPMODE_EXT = 0,
-	BFI_LL_DIAG_LB_OPMODE_CBL = 1,
-};
-
-/**
- * @brief bfi_ll_set_pause_req is used by:
- *	BFI_LL_H2I_SET_PAUSE_REQ
- */
-struct bfi_ll_set_pause_req {
-	struct bfi_mhdr mh;
-	u8		tx_pause; /* 1 = enable, 0 =  disable */
-	u8		rx_pause; /* 1 = enable, 0 =  disable */
-	u8		rsvd[2];
-};
-
-/**
- * @brief bfi_ll_mtu_info_req is used by:
- *	BFI_LL_H2I_MTU_INFO_REQ
- */
-struct bfi_ll_mtu_info_req {
-	struct bfi_mhdr mh;
-	u16	mtu;
-	u8		rsvd[2];
-};
-
-/**
- * @brief
- *	  Response header format used by all responses
- *	  For both responses and asynchronous notifications
- */
-struct bfi_ll_rsp {
-	struct bfi_mhdr mh;		/*!< common msg header */
-	u8		error;
-	u8		rsvd[3];
-};
-
-/**
- * @brief
- * 	The following error codes can be returned
- *	by the mbox commands
- */
-enum {
-	BFI_LL_CMD_OK 		= 0,
-	BFI_LL_CMD_FAIL 	= 1,
-	BFI_LL_CMD_DUP_ENTRY	= 2,	/* !< Duplicate entry in CAM */
-	BFI_LL_CMD_CAM_FULL	= 3,	/* !< CAM is full */
-	BFI_LL_CMD_NOT_OWNER	= 4,   	/* !< Not permitted, b'cos not owner */
-	BFI_LL_CMD_NOT_EXEC	= 5,   	/* !< Was not sent to f/w at all */
-	BFI_LL_CMD_WAITING	= 6,	/* !< Waiting for completion (VMware) */
-	BFI_LL_CMD_PORT_DISABLED	= 7,	/* !< port in disabled state */
-} ;
-
-/* Statistics */
-#define BFI_LL_TXF_ID_MAX  	64
-#define BFI_LL_RXF_ID_MAX  	64
-
-/* TxF Frame Statistics */
-struct bfi_ll_stats_txf {
-	u64 ucast_octets;
-	u64 ucast;
-	u64 ucast_vlan;
-
-	u64 mcast_octets;
-	u64 mcast;
-	u64 mcast_vlan;
-
-	u64 bcast_octets;
-	u64 bcast;
-	u64 bcast_vlan;
-
-	u64 errors;
-	u64 filter_vlan;      /* frames filtered due to VLAN */
-	u64 filter_mac_sa;    /* frames filtered due to SA check */
-};
-
-/* RxF Frame Statistics */
-struct bfi_ll_stats_rxf {
-	u64 ucast_octets;
-	u64 ucast;
-	u64 ucast_vlan;
-
-	u64 mcast_octets;
-	u64 mcast;
-	u64 mcast_vlan;
-
-	u64 bcast_octets;
-	u64 bcast;
-	u64 bcast_vlan;
-	u64 frame_drops;
-};
-
-/* FC Tx Frame Statistics */
-struct bfi_ll_stats_fc_tx {
-	u64 txf_ucast_octets;
-	u64 txf_ucast;
-	u64 txf_ucast_vlan;
-
-	u64 txf_mcast_octets;
-	u64 txf_mcast;
-	u64 txf_mcast_vlan;
-
-	u64 txf_bcast_octets;
-	u64 txf_bcast;
-	u64 txf_bcast_vlan;
-
-	u64 txf_parity_errors;
-	u64 txf_timeout;
-	u64 txf_fid_parity_errors;
-};
-
-/* FC Rx Frame Statistics */
-struct bfi_ll_stats_fc_rx {
-	u64 rxf_ucast_octets;
-	u64 rxf_ucast;
-	u64 rxf_ucast_vlan;
-
-	u64 rxf_mcast_octets;
-	u64 rxf_mcast;
-	u64 rxf_mcast_vlan;
-
-	u64 rxf_bcast_octets;
-	u64 rxf_bcast;
-	u64 rxf_bcast_vlan;
-};
-
-/* RAD Frame Statistics */
-struct bfi_ll_stats_rad {
-	u64 rx_frames;
-	u64 rx_octets;
-	u64 rx_vlan_frames;
-
-	u64 rx_ucast;
-	u64 rx_ucast_octets;
-	u64 rx_ucast_vlan;
-
-	u64 rx_mcast;
-	u64 rx_mcast_octets;
-	u64 rx_mcast_vlan;
-
-	u64 rx_bcast;
-	u64 rx_bcast_octets;
-	u64 rx_bcast_vlan;
-
-	u64 rx_drops;
-};
-
-/* BPC Tx Registers */
-struct bfi_ll_stats_bpc {
-	/* transmit stats */
-	u64 tx_pause[8];
-	u64 tx_zero_pause[8];	/*!< Pause cancellation */
-	/*!<Pause initiation rather than retention */
-	u64 tx_first_pause[8];
-
-	/* receive stats */
-	u64 rx_pause[8];
-	u64 rx_zero_pause[8];	/*!< Pause cancellation */
-	/*!<Pause initiation rather than retention */
-	u64 rx_first_pause[8];
-};
-
-/* MAC Rx Statistics */
-struct bfi_ll_stats_mac {
-	u64 frame_64;		/* both rx and tx counter */
-	u64 frame_65_127;		/* both rx and tx counter */
-	u64 frame_128_255;		/* both rx and tx counter */
-	u64 frame_256_511;		/* both rx and tx counter */
-	u64 frame_512_1023;	/* both rx and tx counter */
-	u64 frame_1024_1518;	/* both rx and tx counter */
-	u64 frame_1519_1522;	/* both rx and tx counter */
-
-	/* receive stats */
-	u64 rx_bytes;
-	u64 rx_packets;
-	u64 rx_fcs_error;
-	u64 rx_multicast;
-	u64 rx_broadcast;
-	u64 rx_control_frames;
-	u64 rx_pause;
-	u64 rx_unknown_opcode;
-	u64 rx_alignment_error;
-	u64 rx_frame_length_error;
-	u64 rx_code_error;
-	u64 rx_carrier_sense_error;
-	u64 rx_undersize;
-	u64 rx_oversize;
-	u64 rx_fragments;
-	u64 rx_jabber;
-	u64 rx_drop;
-
-	/* transmit stats */
-	u64 tx_bytes;
-	u64 tx_packets;
-	u64 tx_multicast;
-	u64 tx_broadcast;
-	u64 tx_pause;
-	u64 tx_deferral;
-	u64 tx_excessive_deferral;
-	u64 tx_single_collision;
-	u64 tx_muliple_collision;
-	u64 tx_late_collision;
-	u64 tx_excessive_collision;
-	u64 tx_total_collision;
-	u64 tx_pause_honored;
-	u64 tx_drop;
-	u64 tx_jabber;
-	u64 tx_fcs_error;
-	u64 tx_control_frame;
-	u64 tx_oversize;
-	u64 tx_undersize;
-	u64 tx_fragments;
-};
-
-/* Complete statistics */
-struct bfi_ll_stats {
-	struct bfi_ll_stats_mac		mac_stats;
-	struct bfi_ll_stats_bpc		bpc_stats;
-	struct bfi_ll_stats_rad		rad_stats;
-	struct bfi_ll_stats_fc_rx	fc_rx_stats;
-	struct bfi_ll_stats_fc_tx	fc_tx_stats;
-	struct bfi_ll_stats_rxf	rxf_stats[BFI_LL_RXF_ID_MAX];
-	struct bfi_ll_stats_txf	txf_stats[BFI_LL_TXF_ID_MAX];
-};
-
-#pragma pack()
-
-#endif  /* __BFI_LL_H__ */
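
One convention from this file that every mailbox structure depends on,
and which presumably carries over to its bfi_enet.h replacement:
everything that crosses the host/firmware boundary sits inside
#pragma pack(1), with explicit rsvd[] members standing in for what
would otherwise be compiler-chosen padding. A small stand-alone
demonstration of what the pragma buys (illustrative struct names only):

#include <stdio.h>

struct loose { unsigned char rxf_id; unsigned int mask; };	/* compiler may pad */

#pragma pack(1)
struct wire { unsigned char rxf_id; unsigned int mask; };	/* exactly 5 bytes */
#pragma pack()

int main(void)
{
	printf("default layout: %zu bytes, packed wire layout: %zu bytes\n",
	       sizeof(struct loose), sizeof(struct wire));
	return 0;
}

With the pragma in force, the rsvd[] fields in structures such as
bfi_ll_mac_addr_req make every pad byte explicit, so the host and the
LPU firmware cannot disagree on field offsets.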
diff --git a/drivers/net/bna/bna_ctrl.c b/drivers/net/bna/bna_ctrl.c
deleted file mode 100644
index 7d6099f..0000000
--- a/drivers/net/bna/bna_ctrl.c
+++ /dev/null
@@ -1,3079 +0,0 @@
-/*
- * Linux network driver for Brocade Converged Network Adapter.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License (GPL) Version 2 as
- * published by the Free Software Foundation
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- */
-/*
- * Copyright (c) 2005-2010 Brocade Communications Systems, Inc.
- * All rights reserved
- * www.brocade.com
- */
-#include "bna.h"
-#include "bfa_sm.h"
-#include "bfa_wc.h"
-
-static void bna_device_cb_port_stopped(void *arg, enum bna_cb_status status);
-
-static void
-bna_port_cb_link_up(struct bna_port *port, struct bfi_ll_aen *aen,
-			int status)
-{
-	int i;
-	u8 prio_map;
-
-	port->llport.link_status = BNA_LINK_UP;
-	if (aen->cee_linkup)
-		port->llport.link_status = BNA_CEE_UP;
-
-	/* Compute the priority */
-	prio_map = aen->prio_map;
-	if (prio_map) {
-		for (i = 0; i < 8; i++) {
-			if ((prio_map >> i) & 0x1)
-				break;
-		}
-		port->priority = i;
-	} else
-		port->priority = 0;
-
-	/* Dispatch events */
-	bna_tx_mod_cee_link_status(&port->bna->tx_mod, aen->cee_linkup);
-	bna_tx_mod_prio_changed(&port->bna->tx_mod, port->priority);
-	port->link_cbfn(port->bna->bnad, port->llport.link_status);
-}
-
-static void
-bna_port_cb_link_down(struct bna_port *port, int status)
-{
-	port->llport.link_status = BNA_LINK_DOWN;
-
-	/* Dispatch events */
-	bna_tx_mod_cee_link_status(&port->bna->tx_mod, BNA_LINK_DOWN);
-	port->link_cbfn(port->bna->bnad, BNA_LINK_DOWN);
-}
-
-static inline int
-llport_can_be_up(struct bna_llport *llport)
-{
-	int ready = 0;
-	if (llport->type == BNA_PORT_T_REGULAR)
-		ready = ((llport->flags & BNA_LLPORT_F_ADMIN_UP) &&
-			 (llport->flags & BNA_LLPORT_F_RX_STARTED) &&
-			 (llport->flags & BNA_LLPORT_F_PORT_ENABLED));
-	else
-		ready = ((llport->flags & BNA_LLPORT_F_ADMIN_UP) &&
-			 (llport->flags & BNA_LLPORT_F_RX_STARTED) &&
-			 !(llport->flags & BNA_LLPORT_F_PORT_ENABLED));
-	return ready;
-}
-
-#define llport_is_up llport_can_be_up
-
-enum bna_llport_event {
-	LLPORT_E_START			= 1,
-	LLPORT_E_STOP			= 2,
-	LLPORT_E_FAIL			= 3,
-	LLPORT_E_UP			= 4,
-	LLPORT_E_DOWN			= 5,
-	LLPORT_E_FWRESP_UP_OK		= 6,
-	LLPORT_E_FWRESP_UP_FAIL		= 7,
-	LLPORT_E_FWRESP_DOWN		= 8
-};
-
-static void
-bna_llport_cb_port_enabled(struct bna_llport *llport)
-{
-	llport->flags |= BNA_LLPORT_F_PORT_ENABLED;
-
-	if (llport_can_be_up(llport))
-		bfa_fsm_send_event(llport, LLPORT_E_UP);
-}
-
-static void
-bna_llport_cb_port_disabled(struct bna_llport *llport)
-{
-	int llport_up = llport_is_up(llport);
-
-	llport->flags &= ~BNA_LLPORT_F_PORT_ENABLED;
-
-	if (llport_up)
-		bfa_fsm_send_event(llport, LLPORT_E_DOWN);
-}
-
-/**
- * MBOX
- */
-static int
-bna_is_aen(u8 msg_id)
-{
-	switch (msg_id) {
-	case BFI_LL_I2H_LINK_DOWN_AEN:
-	case BFI_LL_I2H_LINK_UP_AEN:
-	case BFI_LL_I2H_PORT_ENABLE_AEN:
-	case BFI_LL_I2H_PORT_DISABLE_AEN:
-		return 1;
-
-	default:
-		return 0;
-	}
-}
-
-static void
-bna_mbox_aen_callback(struct bna *bna, struct bfi_mbmsg *msg)
-{
-	struct bfi_ll_aen *aen = (struct bfi_ll_aen *)(msg);
-
-	switch (aen->mh.msg_id) {
-	case BFI_LL_I2H_LINK_UP_AEN:
-		bna_port_cb_link_up(&bna->port, aen, aen->reason);
-		break;
-	case BFI_LL_I2H_LINK_DOWN_AEN:
-		bna_port_cb_link_down(&bna->port, aen->reason);
-		break;
-	case BFI_LL_I2H_PORT_ENABLE_AEN:
-		bna_llport_cb_port_enabled(&bna->port.llport);
-		break;
-	case BFI_LL_I2H_PORT_DISABLE_AEN:
-		bna_llport_cb_port_disabled(&bna->port.llport);
-		break;
-	default:
-		break;
-	}
-}
-
-static void
-bna_ll_isr(void *llarg, struct bfi_mbmsg *msg)
-{
-	struct bna *bna = (struct bna *)(llarg);
-	struct bfi_ll_rsp *mb_rsp = (struct bfi_ll_rsp *)(msg);
-	struct bfi_mhdr *cmd_h, *rsp_h;
-	struct bna_mbox_qe *mb_qe = NULL;
-	int to_post = 0;
-	u8 aen = 0;
-	char message[BNA_MESSAGE_SIZE];
-
-	aen = bna_is_aen(mb_rsp->mh.msg_id);
-
-	if (!aen) {
-		mb_qe = bfa_q_first(&bna->mbox_mod.posted_q);
-		cmd_h = (struct bfi_mhdr *)(&mb_qe->cmd.msg[0]);
-		rsp_h = (struct bfi_mhdr *)(&mb_rsp->mh);
-
-		if ((BFA_I2HM(cmd_h->msg_id) == rsp_h->msg_id) &&
-		    (cmd_h->mtag.i2htok == rsp_h->mtag.i2htok)) {
-			/* Remove the request from posted_q, update state  */
-			list_del(&mb_qe->qe);
-			bna->mbox_mod.msg_pending--;
-			if (list_empty(&bna->mbox_mod.posted_q))
-				bna->mbox_mod.state = BNA_MBOX_FREE;
-			else
-				to_post = 1;
-
-			/* Dispatch the cbfn */
-			if (mb_qe->cbfn)
-				mb_qe->cbfn(mb_qe->cbarg, mb_rsp->error);
-
-			/* Post the next entry, if needed */
-			if (to_post) {
-				mb_qe = bfa_q_first(&bna->mbox_mod.posted_q);
-				bfa_nw_ioc_mbox_queue(&bna->device.ioc,
-							&mb_qe->cmd, NULL,
-							NULL);
-			}
-		} else {
-			snprintf(message, BNA_MESSAGE_SIZE,
-				       "No matching rsp for [%d:%d:%d]\n",
-				       mb_rsp->mh.msg_class, mb_rsp->mh.msg_id,
-				       mb_rsp->mh.mtag.i2htok);
-		pr_info("%s", message);
-		}
-
-	} else
-		bna_mbox_aen_callback(bna, msg);
-}
-
-static void
-bna_err_handler(struct bna *bna, u32 intr_status)
-{
-	u32 init_halt;
-
-	if (intr_status & __HALT_STATUS_BITS) {
-		init_halt = readl(bna->device.ioc.ioc_regs.ll_halt);
-		init_halt &= ~__FW_INIT_HALT_P;
-		writel(init_halt, bna->device.ioc.ioc_regs.ll_halt);
-	}
-
-	bfa_nw_ioc_error_isr(&bna->device.ioc);
-}
-
-void
-bna_mbox_handler(struct bna *bna, u32 intr_status)
-{
-	if (BNA_IS_ERR_INTR(intr_status)) {
-		bna_err_handler(bna, intr_status);
-		return;
-	}
-	if (BNA_IS_MBOX_INTR(intr_status))
-		bfa_nw_ioc_mbox_isr(&bna->device.ioc);
-}
-
-void
-bna_mbox_send(struct bna *bna, struct bna_mbox_qe *mbox_qe)
-{
-	struct bfi_mhdr *mh;
-
-	mh = (struct bfi_mhdr *)(&mbox_qe->cmd.msg[0]);
-
-	mh->mtag.i2htok = htons(bna->mbox_mod.msg_ctr);
-	bna->mbox_mod.msg_ctr++;
-	bna->mbox_mod.msg_pending++;
-	if (bna->mbox_mod.state == BNA_MBOX_FREE) {
-		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-		bfa_nw_ioc_mbox_queue(&bna->device.ioc, &mbox_qe->cmd,
-					NULL, NULL);
-		bna->mbox_mod.state = BNA_MBOX_POSTED;
-	} else {
-		list_add_tail(&mbox_qe->qe, &bna->mbox_mod.posted_q);
-	}
-}
-
-static void
-bna_mbox_flush_q(struct bna *bna, struct list_head *q)
-{
-	struct bna_mbox_qe *mb_qe = NULL;
-	struct list_head			*mb_q;
-	void 			(*cbfn)(void *arg, int status);
-	void 			*cbarg;
-
-	mb_q = &bna->mbox_mod.posted_q;
-
-	while (!list_empty(mb_q)) {
-		bfa_q_deq(mb_q, &mb_qe);
-		cbfn = mb_qe->cbfn;
-		cbarg = mb_qe->cbarg;
-		bfa_q_qe_init(mb_qe);
-		bna->mbox_mod.msg_pending--;
-
-		if (cbfn)
-			cbfn(cbarg, BNA_CB_NOT_EXEC);
-	}
-
-	bna->mbox_mod.state = BNA_MBOX_FREE;
-}
-
-static void
-bna_mbox_mod_start(struct bna_mbox_mod *mbox_mod)
-{
-}
-
-static void
-bna_mbox_mod_stop(struct bna_mbox_mod *mbox_mod)
-{
-	bna_mbox_flush_q(mbox_mod->bna, &mbox_mod->posted_q);
-}
-
-static void
-bna_mbox_mod_init(struct bna_mbox_mod *mbox_mod, struct bna *bna)
-{
-	bfa_nw_ioc_mbox_regisr(&bna->device.ioc, BFI_MC_LL, bna_ll_isr, bna);
-	mbox_mod->state = BNA_MBOX_FREE;
-	mbox_mod->msg_ctr = mbox_mod->msg_pending = 0;
-	INIT_LIST_HEAD(&mbox_mod->posted_q);
-	mbox_mod->bna = bna;
-}
-
-static void
-bna_mbox_mod_uninit(struct bna_mbox_mod *mbox_mod)
-{
-	mbox_mod->bna = NULL;
-}
-
-/**
- * LLPORT
- */
-#define call_llport_stop_cbfn(llport, status)\
-do {\
-	if ((llport)->stop_cbfn)\
-		(llport)->stop_cbfn(&(llport)->bna->port, status);\
-	(llport)->stop_cbfn = NULL;\
-} while (0)
-
-static void bna_fw_llport_up(struct bna_llport *llport);
-static void bna_fw_cb_llport_up(void *arg, int status);
-static void bna_fw_llport_down(struct bna_llport *llport);
-static void bna_fw_cb_llport_down(void *arg, int status);
-static void bna_llport_start(struct bna_llport *llport);
-static void bna_llport_stop(struct bna_llport *llport);
-static void bna_llport_fail(struct bna_llport *llport);
-
-enum bna_llport_state {
-	BNA_LLPORT_STOPPED		= 1,
-	BNA_LLPORT_DOWN			= 2,
-	BNA_LLPORT_UP_RESP_WAIT		= 3,
-	BNA_LLPORT_DOWN_RESP_WAIT	= 4,
-	BNA_LLPORT_UP			= 5,
-	BNA_LLPORT_LAST_RESP_WAIT 	= 6
-};
-
-bfa_fsm_state_decl(bna_llport, stopped, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, down, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, up_resp_wait, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, down_resp_wait, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, up, struct bna_llport,
-			enum bna_llport_event);
-bfa_fsm_state_decl(bna_llport, last_resp_wait, struct bna_llport,
-			enum bna_llport_event);
-
-static struct bfa_sm_table llport_sm_table[] = {
-	{BFA_SM(bna_llport_sm_stopped), BNA_LLPORT_STOPPED},
-	{BFA_SM(bna_llport_sm_down), BNA_LLPORT_DOWN},
-	{BFA_SM(bna_llport_sm_up_resp_wait), BNA_LLPORT_UP_RESP_WAIT},
-	{BFA_SM(bna_llport_sm_down_resp_wait), BNA_LLPORT_DOWN_RESP_WAIT},
-	{BFA_SM(bna_llport_sm_up), BNA_LLPORT_UP},
-	{BFA_SM(bna_llport_sm_last_resp_wait), BNA_LLPORT_LAST_RESP_WAIT}
-};
-
-static void
-bna_llport_sm_stopped_entry(struct bna_llport *llport)
-{
-	llport->bna->port.link_cbfn((llport)->bna->bnad, BNA_LINK_DOWN);
-	call_llport_stop_cbfn(llport, BNA_CB_SUCCESS);
-}
-
-static void
-bna_llport_sm_stopped(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_START:
-		bfa_fsm_set_state(llport, bna_llport_sm_down);
-		break;
-
-	case LLPORT_E_STOP:
-		call_llport_stop_cbfn(llport, BNA_CB_SUCCESS);
-		break;
-
-	case LLPORT_E_FAIL:
-		break;
-
-	case LLPORT_E_DOWN:
-		/* This event is received due to Rx objects failing */
-		/* No-op */
-		break;
-
-	case LLPORT_E_FWRESP_UP_OK:
-	case LLPORT_E_FWRESP_DOWN:
-		/**
-		 * These events are received due to flushing of mbox when
-		 * device fails
-		 */
-		/* No-op */
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_down_entry(struct bna_llport *llport)
-{
-	bnad_cb_port_link_status((llport)->bna->bnad, BNA_LINK_DOWN);
-}
-
-static void
-bna_llport_sm_down(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_STOP:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_UP:
-		bfa_fsm_set_state(llport, bna_llport_sm_up_resp_wait);
-		bna_fw_llport_up(llport);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_up_resp_wait_entry(struct bna_llport *llport)
-{
-	BUG_ON(!llport_can_be_up(llport));
-	/**
-	 * NOTE: Do not call bna_fw_llport_up() here. That will over step
-	 * mbox due to down_resp_wait -> up_resp_wait transition on event
-	 * LLPORT_E_UP
-	 */
-}
-
-static void
-bna_llport_sm_up_resp_wait(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_STOP:
-		bfa_fsm_set_state(llport, bna_llport_sm_last_resp_wait);
-		break;
-
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_DOWN:
-		bfa_fsm_set_state(llport, bna_llport_sm_down_resp_wait);
-		break;
-
-	case LLPORT_E_FWRESP_UP_OK:
-		bfa_fsm_set_state(llport, bna_llport_sm_up);
-		break;
-
-	case LLPORT_E_FWRESP_UP_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_down);
-		break;
-
-	case LLPORT_E_FWRESP_DOWN:
-		/* down_resp_wait -> up_resp_wait transition on LLPORT_E_UP */
-		bna_fw_llport_up(llport);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_down_resp_wait_entry(struct bna_llport *llport)
-{
-	/**
-	 * NOTE: Do not call bna_fw_llport_down() here. That will over step
-	 * mbox due to up_resp_wait -> down_resp_wait transition on event
-	 * LLPORT_E_DOWN
-	 */
-}
-
-static void
-bna_llport_sm_down_resp_wait(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_STOP:
-		bfa_fsm_set_state(llport, bna_llport_sm_last_resp_wait);
-		break;
-
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_UP:
-		bfa_fsm_set_state(llport, bna_llport_sm_up_resp_wait);
-		break;
-
-	case LLPORT_E_FWRESP_UP_OK:
-		/* up_resp_wait->down_resp_wait transition on LLPORT_E_DOWN */
-		bna_fw_llport_down(llport);
-		break;
-
-	case LLPORT_E_FWRESP_UP_FAIL:
-	case LLPORT_E_FWRESP_DOWN:
-		bfa_fsm_set_state(llport, bna_llport_sm_down);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_up_entry(struct bna_llport *llport)
-{
-}
-
-static void
-bna_llport_sm_up(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_STOP:
-		bfa_fsm_set_state(llport, bna_llport_sm_last_resp_wait);
-		bna_fw_llport_down(llport);
-		break;
-
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_DOWN:
-		bfa_fsm_set_state(llport, bna_llport_sm_down_resp_wait);
-		bna_fw_llport_down(llport);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_llport_sm_last_resp_wait_entry(struct bna_llport *llport)
-{
-}
-
-static void
-bna_llport_sm_last_resp_wait(struct bna_llport *llport,
-			enum bna_llport_event event)
-{
-	switch (event) {
-	case LLPORT_E_FAIL:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	case LLPORT_E_DOWN:
-		/**
-		 * This event is received due to Rx objects stopping in
-		 * parallel to llport
-		 */
-		/* No-op */
-		break;
-
-	case LLPORT_E_FWRESP_UP_OK:
-		/* up_resp_wait->last_resp_wait transition on LLPORT_T_STOP */
-		bna_fw_llport_down(llport);
-		break;
-
-	case LLPORT_E_FWRESP_UP_FAIL:
-	case LLPORT_E_FWRESP_DOWN:
-		bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_fw_llport_admin_up(struct bna_llport *llport)
-{
-	struct bfi_ll_port_admin_req ll_req;
-
-	memset(&ll_req, 0, sizeof(ll_req));
-	ll_req.mh.msg_class = BFI_MC_LL;
-	ll_req.mh.msg_id = BFI_LL_H2I_PORT_ADMIN_REQ;
-	ll_req.mh.mtag.h2i.lpu_id = 0;
-
-	ll_req.up = BNA_STATUS_T_ENABLED;
-
-	bna_mbox_qe_fill(&llport->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_fw_cb_llport_up, llport);
-
-	bna_mbox_send(llport->bna, &llport->mbox_qe);
-}
-
-static void
-bna_fw_llport_up(struct bna_llport *llport)
-{
-	if (llport->type == BNA_PORT_T_REGULAR)
-		bna_fw_llport_admin_up(llport);
-}
-
-static void
-bna_fw_cb_llport_up(void *arg, int status)
-{
-	struct bna_llport *llport = (struct bna_llport *)arg;
-
-	bfa_q_qe_init(&llport->mbox_qe.qe);
-	if (status == BFI_LL_CMD_FAIL) {
-		if (llport->type == BNA_PORT_T_REGULAR)
-			llport->flags &= ~BNA_LLPORT_F_PORT_ENABLED;
-		else
-			llport->flags &= ~BNA_LLPORT_F_ADMIN_UP;
-		bfa_fsm_send_event(llport, LLPORT_E_FWRESP_UP_FAIL);
-	} else
-		bfa_fsm_send_event(llport, LLPORT_E_FWRESP_UP_OK);
-}
-
-static void
-bna_fw_llport_admin_down(struct bna_llport *llport)
-{
-	struct bfi_ll_port_admin_req ll_req;
-
-	memset(&ll_req, 0, sizeof(ll_req));
-	ll_req.mh.msg_class = BFI_MC_LL;
-	ll_req.mh.msg_id = BFI_LL_H2I_PORT_ADMIN_REQ;
-	ll_req.mh.mtag.h2i.lpu_id = 0;
-
-	ll_req.up = BNA_STATUS_T_DISABLED;
-
-	bna_mbox_qe_fill(&llport->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_fw_cb_llport_down, llport);
-
-	bna_mbox_send(llport->bna, &llport->mbox_qe);
-}
-
-static void
-bna_fw_llport_down(struct bna_llport *llport)
-{
-	if (llport->type == BNA_PORT_T_REGULAR)
-		bna_fw_llport_admin_down(llport);
-}
-
-static void
-bna_fw_cb_llport_down(void *arg, int status)
-{
-	struct bna_llport *llport = (struct bna_llport *)arg;
-
-	bfa_q_qe_init(&llport->mbox_qe.qe);
-	bfa_fsm_send_event(llport, LLPORT_E_FWRESP_DOWN);
-}
-
-static void
-bna_port_cb_llport_stopped(struct bna_port *port,
-				enum bna_cb_status status)
-{
-	bfa_wc_down(&port->chld_stop_wc);
-}
-
-static void
-bna_llport_init(struct bna_llport *llport, struct bna *bna)
-{
-	llport->flags |= BNA_LLPORT_F_ADMIN_UP;
-	llport->flags |= BNA_LLPORT_F_PORT_ENABLED;
-	llport->type = BNA_PORT_T_REGULAR;
-	llport->bna = bna;
-
-	llport->link_status = BNA_LINK_DOWN;
-
-	llport->rx_started_count = 0;
-
-	llport->stop_cbfn = NULL;
-
-	bfa_q_qe_init(&llport->mbox_qe.qe);
-
-	bfa_fsm_set_state(llport, bna_llport_sm_stopped);
-}
-
-static void
-bna_llport_uninit(struct bna_llport *llport)
-{
-	llport->flags &= ~BNA_LLPORT_F_ADMIN_UP;
-	llport->flags &= ~BNA_LLPORT_F_PORT_ENABLED;
-
-	llport->bna = NULL;
-}
-
-static void
-bna_llport_start(struct bna_llport *llport)
-{
-	bfa_fsm_send_event(llport, LLPORT_E_START);
-}
-
-static void
-bna_llport_stop(struct bna_llport *llport)
-{
-	llport->stop_cbfn = bna_port_cb_llport_stopped;
-
-	bfa_fsm_send_event(llport, LLPORT_E_STOP);
-}
-
-static void
-bna_llport_fail(struct bna_llport *llport)
-{
-	/* Reset the physical port status to enabled */
-	llport->flags |= BNA_LLPORT_F_PORT_ENABLED;
-	bfa_fsm_send_event(llport, LLPORT_E_FAIL);
-}
-
-static int
-bna_llport_state_get(struct bna_llport *llport)
-{
-	return bfa_sm_to_state(llport_sm_table, llport->fsm);
-}
-
-void
-bna_llport_rx_started(struct bna_llport *llport)
-{
-	llport->rx_started_count++;
-
-	if (llport->rx_started_count == 1) {
-		llport->flags |= BNA_LLPORT_F_RX_STARTED;
-
-		if (llport_can_be_up(llport))
-			bfa_fsm_send_event(llport, LLPORT_E_UP);
-	}
-}
-
-void
-bna_llport_rx_stopped(struct bna_llport *llport)
-{
-	int llport_up = llport_is_up(llport);
-
-	llport->rx_started_count--;
-
-	if (llport->rx_started_count == 0) {
-		llport->flags &= ~BNA_LLPORT_F_RX_STARTED;
-
-		if (llport_up)
-			bfa_fsm_send_event(llport, LLPORT_E_DOWN);
-	}
-}
-
-/**
- * PORT
- */
-#define bna_port_chld_start(port)\
-do {\
-	enum bna_tx_type tx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;\
-	enum bna_rx_type rx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;\
-	bna_llport_start(&(port)->llport);\
-	bna_tx_mod_start(&(port)->bna->tx_mod, tx_type);\
-	bna_rx_mod_start(&(port)->bna->rx_mod, rx_type);\
-} while (0)
-
-#define bna_port_chld_stop(port)\
-do {\
-	enum bna_tx_type tx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_TX_T_REGULAR : BNA_TX_T_LOOPBACK;\
-	enum bna_rx_type rx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;\
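-	/* One wait-counter reference each for llport, tx_mod and rx_mod */\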
-	bfa_wc_up(&(port)->chld_stop_wc);\
-	bfa_wc_up(&(port)->chld_stop_wc);\
-	bfa_wc_up(&(port)->chld_stop_wc);\
-	bna_llport_stop(&(port)->llport);\
-	bna_tx_mod_stop(&(port)->bna->tx_mod, tx_type);\
-	bna_rx_mod_stop(&(port)->bna->rx_mod, rx_type);\
-} while (0)
-
-#define bna_port_chld_fail(port)\
-do {\
-	bna_llport_fail(&(port)->llport);\
-	bna_tx_mod_fail(&(port)->bna->tx_mod);\
-	bna_rx_mod_fail(&(port)->bna->rx_mod);\
-} while (0)
-
-#define bna_port_rx_start(port)\
-do {\
-	enum bna_rx_type rx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;\
-	bna_rx_mod_start(&(port)->bna->rx_mod, rx_type);\
-} while (0)
-
-#define bna_port_rx_stop(port)\
-do {\
-	enum bna_rx_type rx_type = ((port)->type == BNA_PORT_T_REGULAR) ?\
-					BNA_RX_T_REGULAR : BNA_RX_T_LOOPBACK;\
-	bfa_wc_up(&(port)->chld_stop_wc);\
-	bna_rx_mod_stop(&(port)->bna->rx_mod, rx_type);\
-} while (0)
-
-#define call_port_stop_cbfn(port, status)\
-do {\
-	if ((port)->stop_cbfn)\
-		(port)->stop_cbfn((port)->stop_cbarg, status);\
-	(port)->stop_cbfn = NULL;\
-	(port)->stop_cbarg = NULL;\
-} while (0)
-
-#define call_port_pause_cbfn(port, status)\
-do {\
-	if ((port)->pause_cbfn)\
-		(port)->pause_cbfn((port)->bna->bnad, status);\
-	(port)->pause_cbfn = NULL;\
-} while (0)
-
-#define call_port_mtu_cbfn(port, status)\
-do {\
-	if ((port)->mtu_cbfn)\
-		(port)->mtu_cbfn((port)->bna->bnad, status);\
-	(port)->mtu_cbfn = NULL;\
-} while (0)
-
-static void bna_fw_pause_set(struct bna_port *port);
-static void bna_fw_cb_pause_set(void *arg, int status);
-static void bna_fw_mtu_set(struct bna_port *port);
-static void bna_fw_cb_mtu_set(void *arg, int status);
-
-enum bna_port_event {
-	PORT_E_START			= 1,
-	PORT_E_STOP			= 2,
-	PORT_E_FAIL			= 3,
-	PORT_E_PAUSE_CFG		= 4,
-	PORT_E_MTU_CFG			= 5,
-	PORT_E_CHLD_STOPPED		= 6,
-	PORT_E_FWRESP_PAUSE		= 7,
-	PORT_E_FWRESP_MTU		= 8
-};
-
-enum bna_port_state {
-	BNA_PORT_STOPPED		= 1,
-	BNA_PORT_MTU_INIT_WAIT		= 2,
-	BNA_PORT_PAUSE_INIT_WAIT	= 3,
-	BNA_PORT_LAST_RESP_WAIT		= 4,
-	BNA_PORT_STARTED		= 5,
-	BNA_PORT_PAUSE_CFG_WAIT		= 6,
-	BNA_PORT_RX_STOP_WAIT		= 7,
-	BNA_PORT_MTU_CFG_WAIT		= 8,
-	BNA_PORT_CHLD_STOP_WAIT		= 9
-};
-
-bfa_fsm_state_decl(bna_port, stopped, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, mtu_init_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, pause_init_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, last_resp_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, started, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, pause_cfg_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, rx_stop_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, mtu_cfg_wait, struct bna_port,
-			enum bna_port_event);
-bfa_fsm_state_decl(bna_port, chld_stop_wait, struct bna_port,
-			enum bna_port_event);
-
-static struct bfa_sm_table port_sm_table[] = {
-	{BFA_SM(bna_port_sm_stopped), BNA_PORT_STOPPED},
-	{BFA_SM(bna_port_sm_mtu_init_wait), BNA_PORT_MTU_INIT_WAIT},
-	{BFA_SM(bna_port_sm_pause_init_wait), BNA_PORT_PAUSE_INIT_WAIT},
-	{BFA_SM(bna_port_sm_last_resp_wait), BNA_PORT_LAST_RESP_WAIT},
-	{BFA_SM(bna_port_sm_started), BNA_PORT_STARTED},
-	{BFA_SM(bna_port_sm_pause_cfg_wait), BNA_PORT_PAUSE_CFG_WAIT},
-	{BFA_SM(bna_port_sm_rx_stop_wait), BNA_PORT_RX_STOP_WAIT},
-	{BFA_SM(bna_port_sm_mtu_cfg_wait), BNA_PORT_MTU_CFG_WAIT},
-	{BFA_SM(bna_port_sm_chld_stop_wait), BNA_PORT_CHLD_STOP_WAIT}
-};
-
-static void
-bna_port_sm_stopped_entry(struct bna_port *port)
-{
-	call_port_pause_cbfn(port, BNA_CB_SUCCESS);
-	call_port_mtu_cbfn(port, BNA_CB_SUCCESS);
-	call_port_stop_cbfn(port, BNA_CB_SUCCESS);
-}
-
-static void
-bna_port_sm_stopped(struct bna_port *port, enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_START:
-		bfa_fsm_set_state(port, bna_port_sm_mtu_init_wait);
-		break;
-
-	case PORT_E_STOP:
-		call_port_stop_cbfn(port, BNA_CB_SUCCESS);
-		break;
-
-	case PORT_E_FAIL:
-		/* No-op */
-		break;
-
-	case PORT_E_PAUSE_CFG:
-		call_port_pause_cbfn(port, BNA_CB_SUCCESS);
-		break;
-
-	case PORT_E_MTU_CFG:
-		call_port_mtu_cbfn(port, BNA_CB_SUCCESS);
-		break;
-
-	case PORT_E_CHLD_STOPPED:
-		/**
-		 * This event is received due to LLPort, Tx and Rx objects
-		 * failing
-		 */
-		/* No-op */
-		break;
-
-	case PORT_E_FWRESP_PAUSE:
-	case PORT_E_FWRESP_MTU:
-		/**
-		 * These events are received due to flushing of mbox when
-		 * device fails
-		 */
-		/* No-op */
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_mtu_init_wait_entry(struct bna_port *port)
-{
-	bna_fw_mtu_set(port);
-}
-
-static void
-bna_port_sm_mtu_init_wait(struct bna_port *port, enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_STOP:
-		bfa_fsm_set_state(port, bna_port_sm_last_resp_wait);
-		break;
-
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		break;
-
-	case PORT_E_PAUSE_CFG:
-		/* No-op */
-		break;
-
-	case PORT_E_MTU_CFG:
-		port->flags |= BNA_PORT_F_MTU_CHANGED;
-		break;
-
-	case PORT_E_FWRESP_MTU:
-		if (port->flags & BNA_PORT_F_MTU_CHANGED) {
-			port->flags &= ~BNA_PORT_F_MTU_CHANGED;
-			bna_fw_mtu_set(port);
-		} else {
-			bfa_fsm_set_state(port, bna_port_sm_pause_init_wait);
-		}
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_pause_init_wait_entry(struct bna_port *port)
-{
-	bna_fw_pause_set(port);
-}
-
-static void
-bna_port_sm_pause_init_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_STOP:
-		bfa_fsm_set_state(port, bna_port_sm_last_resp_wait);
-		break;
-
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		break;
-
-	case PORT_E_PAUSE_CFG:
-		port->flags |= BNA_PORT_F_PAUSE_CHANGED;
-		break;
-
-	case PORT_E_MTU_CFG:
-		port->flags |= BNA_PORT_F_MTU_CHANGED;
-		break;
-
-	case PORT_E_FWRESP_PAUSE:
-		if (port->flags & BNA_PORT_F_PAUSE_CHANGED) {
-			port->flags &= ~BNA_PORT_F_PAUSE_CHANGED;
-			bna_fw_pause_set(port);
-		} else if (port->flags & BNA_PORT_F_MTU_CHANGED) {
-			port->flags &= ~BNA_PORT_F_MTU_CHANGED;
-			bfa_fsm_set_state(port, bna_port_sm_mtu_init_wait);
-		} else {
-			bfa_fsm_set_state(port, bna_port_sm_started);
-			bna_port_chld_start(port);
-		}
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_last_resp_wait_entry(struct bna_port *port)
-{
-}
-
-static void
-bna_port_sm_last_resp_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-	case PORT_E_FWRESP_PAUSE:
-	case PORT_E_FWRESP_MTU:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_started_entry(struct bna_port *port)
-{
-	/**
-	 * NOTE: Do not call bna_port_chld_start() here, since it will be
-	 * inadvertently called during pause_cfg_wait->started transition
-	 * as well
-	 */
-	call_port_pause_cbfn(port, BNA_CB_SUCCESS);
-	call_port_mtu_cbfn(port, BNA_CB_SUCCESS);
-}
-
-static void
-bna_port_sm_started(struct bna_port *port,
-			enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_STOP:
-		bfa_fsm_set_state(port, bna_port_sm_chld_stop_wait);
-		break;
-
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_PAUSE_CFG:
-		bfa_fsm_set_state(port, bna_port_sm_pause_cfg_wait);
-		break;
-
-	case PORT_E_MTU_CFG:
-		bfa_fsm_set_state(port, bna_port_sm_rx_stop_wait);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_pause_cfg_wait_entry(struct bna_port *port)
-{
-	bna_fw_pause_set(port);
-}
-
-static void
-bna_port_sm_pause_cfg_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_FWRESP_PAUSE:
-		bfa_fsm_set_state(port, bna_port_sm_started);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_rx_stop_wait_entry(struct bna_port *port)
-{
-	bna_port_rx_stop(port);
-}
-
-static void
-bna_port_sm_rx_stop_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_CHLD_STOPPED:
-		bfa_fsm_set_state(port, bna_port_sm_mtu_cfg_wait);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_mtu_cfg_wait_entry(struct bna_port *port)
-{
-	bna_fw_mtu_set(port);
-}
-
-static void
-bna_port_sm_mtu_cfg_wait(struct bna_port *port, enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_FWRESP_MTU:
-		bfa_fsm_set_state(port, bna_port_sm_started);
-		bna_port_rx_start(port);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_port_sm_chld_stop_wait_entry(struct bna_port *port)
-{
-	bna_port_chld_stop(port);
-}
-
-static void
-bna_port_sm_chld_stop_wait(struct bna_port *port,
-				enum bna_port_event event)
-{
-	switch (event) {
-	case PORT_E_FAIL:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		bna_port_chld_fail(port);
-		break;
-
-	case PORT_E_CHLD_STOPPED:
-		bfa_fsm_set_state(port, bna_port_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_fw_pause_set(struct bna_port *port)
-{
-	struct bfi_ll_set_pause_req ll_req;
-
-	memset(&ll_req, 0, sizeof(ll_req));
-	ll_req.mh.msg_class = BFI_MC_LL;
-	ll_req.mh.msg_id = BFI_LL_H2I_SET_PAUSE_REQ;
-	ll_req.mh.mtag.h2i.lpu_id = 0;
-
-	ll_req.tx_pause = port->pause_config.tx_pause;
-	ll_req.rx_pause = port->pause_config.rx_pause;
-
-	bna_mbox_qe_fill(&port->mbox_qe, &ll_req, sizeof(ll_req),
-			bna_fw_cb_pause_set, port);
-
-	bna_mbox_send(port->bna, &port->mbox_qe);
-}
-
-static void
-bna_fw_cb_pause_set(void *arg, int status)
-{
-	struct bna_port *port = (struct bna_port *)arg;
-
-	bfa_q_qe_init(&port->mbox_qe.qe);
-	bfa_fsm_send_event(port, PORT_E_FWRESP_PAUSE);
-}
-
-static void
-bna_fw_mtu_set(struct bna_port *port)
-{
-	struct bfi_ll_mtu_info_req ll_req;
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_MTU_INFO_REQ, 0);
-	ll_req.mtu = htons((u16)port->mtu);
-
-	bna_mbox_qe_fill(&port->mbox_qe, &ll_req, sizeof(ll_req),
-				bna_fw_cb_mtu_set, port);
-	bna_mbox_send(port->bna, &port->mbox_qe);
-}
-
-static void
-bna_fw_cb_mtu_set(void *arg, int status)
-{
-	struct bna_port *port = (struct bna_port *)arg;
-
-	bfa_q_qe_init(&port->mbox_qe.qe);
-	bfa_fsm_send_event(port, PORT_E_FWRESP_MTU);
-}
-
-static void
-bna_port_cb_chld_stopped(void *arg)
-{
-	struct bna_port *port = (struct bna_port *)arg;
-
-	bfa_fsm_send_event(port, PORT_E_CHLD_STOPPED);
-}
-
-static void
-bna_port_init(struct bna_port *port, struct bna *bna)
-{
-	port->bna = bna;
-	port->flags = 0;
-	port->mtu = 0;
-	port->type = BNA_PORT_T_REGULAR;
-
-	port->link_cbfn = bnad_cb_port_link_status;
-
-	port->chld_stop_wc.wc_resume = bna_port_cb_chld_stopped;
-	port->chld_stop_wc.wc_cbarg = port;
-	port->chld_stop_wc.wc_count = 0;
-
-	port->stop_cbfn = NULL;
-	port->stop_cbarg = NULL;
-
-	port->pause_cbfn = NULL;
-
-	port->mtu_cbfn = NULL;
-
-	bfa_q_qe_init(&port->mbox_qe.qe);
-
-	bfa_fsm_set_state(port, bna_port_sm_stopped);
-
-	bna_llport_init(&port->llport, bna);
-}
-
-static void
-bna_port_uninit(struct bna_port *port)
-{
-	bna_llport_uninit(&port->llport);
-
-	port->flags = 0;
-
-	port->bna = NULL;
-}
-
-static int
-bna_port_state_get(struct bna_port *port)
-{
-	return bfa_sm_to_state(port_sm_table, port->fsm);
-}
-
-static void
-bna_port_start(struct bna_port *port)
-{
-	port->flags |= BNA_PORT_F_DEVICE_READY;
-	if (port->flags & BNA_PORT_F_ENABLED)
-		bfa_fsm_send_event(port, PORT_E_START);
-}
-
-static void
-bna_port_stop(struct bna_port *port)
-{
-	port->stop_cbfn = bna_device_cb_port_stopped;
-	port->stop_cbarg = &port->bna->device;
-
-	port->flags &= ~BNA_PORT_F_DEVICE_READY;
-	bfa_fsm_send_event(port, PORT_E_STOP);
-}
-
-static void
-bna_port_fail(struct bna_port *port)
-{
-	port->flags &= ~BNA_PORT_F_DEVICE_READY;
-	bfa_fsm_send_event(port, PORT_E_FAIL);
-}
-
-void
-bna_port_cb_tx_stopped(struct bna_port *port, enum bna_cb_status status)
-{
-	bfa_wc_down(&port->chld_stop_wc);
-}
-
-void
-bna_port_cb_rx_stopped(struct bna_port *port, enum bna_cb_status status)
-{
-	bfa_wc_down(&port->chld_stop_wc);
-}
-
-int
-bna_port_mtu_get(struct bna_port *port)
-{
-	return port->mtu;
-}
-
-void
-bna_port_enable(struct bna_port *port)
-{
-	if (port->fsm != (bfa_sm_t)bna_port_sm_stopped)
-		return;
-
-	port->flags |= BNA_PORT_F_ENABLED;
-
-	if (port->flags & BNA_PORT_F_DEVICE_READY)
-		bfa_fsm_send_event(port, PORT_E_START);
-}
-
-void
-bna_port_disable(struct bna_port *port, enum bna_cleanup_type type,
-		 void (*cbfn)(void *, enum bna_cb_status))
-{
-	if (type == BNA_SOFT_CLEANUP) {
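-		/* Soft cleanup: report success without touching the hardware */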
-		(*cbfn)(port->bna->bnad, BNA_CB_SUCCESS);
-		return;
-	}
-
-	port->stop_cbfn = cbfn;
-	port->stop_cbarg = port->bna->bnad;
-
-	port->flags &= ~BNA_PORT_F_ENABLED;
-
-	bfa_fsm_send_event(port, PORT_E_STOP);
-}
-
-void
-bna_port_pause_config(struct bna_port *port,
-		      struct bna_pause_config *pause_config,
-		      void (*cbfn)(struct bnad *, enum bna_cb_status))
-{
-	port->pause_config = *pause_config;
-
-	port->pause_cbfn = cbfn;
-
-	bfa_fsm_send_event(port, PORT_E_PAUSE_CFG);
-}
-
-void
-bna_port_mtu_set(struct bna_port *port, int mtu,
-		 void (*cbfn)(struct bnad *, enum bna_cb_status))
-{
-	port->mtu = mtu;
-
-	port->mtu_cbfn = cbfn;
-
-	bfa_fsm_send_event(port, PORT_E_MTU_CFG);
-}
-
-void
-bna_port_mac_get(struct bna_port *port, mac_t *mac)
-{
-	*mac = bfa_nw_ioc_get_mac(&port->bna->device.ioc);
-}
-
-/**
- * DEVICE
- */
-#define enable_mbox_intr(_device)\
-do {\
-	u32 intr_status;\
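-	/* Sample the current interrupt status before unmasking the mbox intr */\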
-	bna_intr_status_get((_device)->bna, intr_status);\
-	bnad_cb_device_enable_mbox_intr((_device)->bna->bnad);\
-	bna_mbox_intr_enable((_device)->bna);\
-} while (0)
-
-#define disable_mbox_intr(_device)\
-do {\
-	bna_mbox_intr_disable((_device)->bna);\
-	bnad_cb_device_disable_mbox_intr((_device)->bna->bnad);\
-} while (0)
-
-static const struct bna_chip_regs_offset reg_offset[] =
-{{HOST_PAGE_NUM_FN0, HOSTFN0_INT_STATUS,
-	HOSTFN0_INT_MASK, HOST_MSIX_ERR_INDEX_FN0},
-{HOST_PAGE_NUM_FN1, HOSTFN1_INT_STATUS,
-	HOSTFN1_INT_MASK, HOST_MSIX_ERR_INDEX_FN1},
-{HOST_PAGE_NUM_FN2, HOSTFN2_INT_STATUS,
-	HOSTFN2_INT_MASK, HOST_MSIX_ERR_INDEX_FN2},
-{HOST_PAGE_NUM_FN3, HOSTFN3_INT_STATUS,
-	HOSTFN3_INT_MASK, HOST_MSIX_ERR_INDEX_FN3},
-};
-
-enum bna_device_event {
-	DEVICE_E_ENABLE			= 1,
-	DEVICE_E_DISABLE		= 2,
-	DEVICE_E_IOC_READY		= 3,
-	DEVICE_E_IOC_FAILED		= 4,
-	DEVICE_E_IOC_DISABLED		= 5,
-	DEVICE_E_IOC_RESET		= 6,
-	DEVICE_E_PORT_STOPPED		= 7,
-};
-
-enum bna_device_state {
-	BNA_DEVICE_STOPPED		= 1,
-	BNA_DEVICE_IOC_READY_WAIT	= 2,
-	BNA_DEVICE_READY		= 3,
-	BNA_DEVICE_PORT_STOP_WAIT	= 4,
-	BNA_DEVICE_IOC_DISABLE_WAIT	= 5,
-	BNA_DEVICE_FAILED		= 6
-};
-
-bfa_fsm_state_decl(bna_device, stopped, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, ioc_ready_wait, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, ready, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, port_stop_wait, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, ioc_disable_wait, struct bna_device,
-			enum bna_device_event);
-bfa_fsm_state_decl(bna_device, failed, struct bna_device,
-			enum bna_device_event);
-
-static struct bfa_sm_table device_sm_table[] = {
-	{BFA_SM(bna_device_sm_stopped), BNA_DEVICE_STOPPED},
-	{BFA_SM(bna_device_sm_ioc_ready_wait), BNA_DEVICE_IOC_READY_WAIT},
-	{BFA_SM(bna_device_sm_ready), BNA_DEVICE_READY},
-	{BFA_SM(bna_device_sm_port_stop_wait), BNA_DEVICE_PORT_STOP_WAIT},
-	{BFA_SM(bna_device_sm_ioc_disable_wait), BNA_DEVICE_IOC_DISABLE_WAIT},
-	{BFA_SM(bna_device_sm_failed), BNA_DEVICE_FAILED},
-};
-
-static void
-bna_device_sm_stopped_entry(struct bna_device *device)
-{
-	if (device->stop_cbfn)
-		device->stop_cbfn(device->stop_cbarg, BNA_CB_SUCCESS);
-
-	device->stop_cbfn = NULL;
-	device->stop_cbarg = NULL;
-}
-
-static void
-bna_device_sm_stopped(struct bna_device *device,
-			enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_ENABLE:
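-		/* Program the mailbox MSI-X index before enabling the IOC */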
-		if (device->intr_type == BNA_INTR_T_MSIX)
-			bna_mbox_msix_idx_set(device);
-		bfa_nw_ioc_enable(&device->ioc);
-		bfa_fsm_set_state(device, bna_device_sm_ioc_ready_wait);
-		break;
-
-	case DEVICE_E_DISABLE:
-		bfa_fsm_set_state(device, bna_device_sm_stopped);
-		break;
-
-	case DEVICE_E_IOC_RESET:
-		enable_mbox_intr(device);
-		break;
-
-	case DEVICE_E_IOC_FAILED:
-		bfa_fsm_set_state(device, bna_device_sm_failed);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_ioc_ready_wait_entry(struct bna_device *device)
-{
-	/**
-	 * Do not call bfa_ioc_enable() here. It is called in the
-	 * previous state instead, because this state is also entered
-	 * via the failed -> ioc_ready_wait transition.
-	 */
-}
-
-static void
-bna_device_sm_ioc_ready_wait(struct bna_device *device,
-				enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_DISABLE:
-		if (device->ready_cbfn)
-			device->ready_cbfn(device->ready_cbarg,
-						BNA_CB_INTERRUPT);
-		device->ready_cbfn = NULL;
-		device->ready_cbarg = NULL;
-		bfa_fsm_set_state(device, bna_device_sm_ioc_disable_wait);
-		break;
-
-	case DEVICE_E_IOC_READY:
-		bfa_fsm_set_state(device, bna_device_sm_ready);
-		break;
-
-	case DEVICE_E_IOC_FAILED:
-		bfa_fsm_set_state(device, bna_device_sm_failed);
-		break;
-
-	case DEVICE_E_IOC_RESET:
-		enable_mbox_intr(device);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_ready_entry(struct bna_device *device)
-{
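-	/* Start mailbox processing first so the port start can post FW requests */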
-	bna_mbox_mod_start(&device->bna->mbox_mod);
-	bna_port_start(&device->bna->port);
-
-	if (device->ready_cbfn)
-		device->ready_cbfn(device->ready_cbarg,
-					BNA_CB_SUCCESS);
-	device->ready_cbfn = NULL;
-	device->ready_cbarg = NULL;
-}
-
-static void
-bna_device_sm_ready(struct bna_device *device, enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_DISABLE:
-		bfa_fsm_set_state(device, bna_device_sm_port_stop_wait);
-		break;
-
-	case DEVICE_E_IOC_FAILED:
-		bfa_fsm_set_state(device, bna_device_sm_failed);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_port_stop_wait_entry(struct bna_device *device)
-{
-	bna_port_stop(&device->bna->port);
-}
-
-static void
-bna_device_sm_port_stop_wait(struct bna_device *device,
-				enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_PORT_STOPPED:
-		bna_mbox_mod_stop(&device->bna->mbox_mod);
-		bfa_fsm_set_state(device, bna_device_sm_ioc_disable_wait);
-		break;
-
-	case DEVICE_E_IOC_FAILED:
-		disable_mbox_intr(device);
-		bna_port_fail(&device->bna->port);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_ioc_disable_wait_entry(struct bna_device *device)
-{
-	bfa_nw_ioc_disable(&device->ioc);
-}
-
-static void
-bna_device_sm_ioc_disable_wait(struct bna_device *device,
-				enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_IOC_DISABLED:
-		disable_mbox_intr(device);
-		bfa_fsm_set_state(device, bna_device_sm_stopped);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-static void
-bna_device_sm_failed_entry(struct bna_device *device)
-{
-	disable_mbox_intr(device);
-	bna_port_fail(&device->bna->port);
-	bna_mbox_mod_stop(&device->bna->mbox_mod);
-
-	if (device->ready_cbfn)
-		device->ready_cbfn(device->ready_cbarg,
-					BNA_CB_FAIL);
-	device->ready_cbfn = NULL;
-	device->ready_cbarg = NULL;
-}
-
-static void
-bna_device_sm_failed(struct bna_device *device,
-			enum bna_device_event event)
-{
-	switch (event) {
-	case DEVICE_E_DISABLE:
-		bfa_fsm_set_state(device, bna_device_sm_ioc_disable_wait);
-		break;
-
-	case DEVICE_E_IOC_RESET:
-		enable_mbox_intr(device);
-		bfa_fsm_set_state(device, bna_device_sm_ioc_ready_wait);
-		break;
-
-	default:
-		bfa_sm_fault(event);
-	}
-}
-
-/* IOC callback functions */
-
-static void
-bna_device_cb_iocll_ready(void *dev, enum bfa_status error)
-{
-	struct bna_device *device = (struct bna_device *)dev;
-
-	if (error)
-		bfa_fsm_send_event(device, DEVICE_E_IOC_FAILED);
-	else
-		bfa_fsm_send_event(device, DEVICE_E_IOC_READY);
-}
-
-static void
-bna_device_cb_iocll_disabled(void *dev)
-{
-	struct bna_device *device = (struct bna_device *)dev;
-
-	bfa_fsm_send_event(device, DEVICE_E_IOC_DISABLED);
-}
-
-static void
-bna_device_cb_iocll_failed(void *dev)
-{
-	struct bna_device *device = (struct bna_device *)dev;
-
-	bfa_fsm_send_event(device, DEVICE_E_IOC_FAILED);
-}
-
-static void
-bna_device_cb_iocll_reset(void *dev)
-{
-	struct bna_device *device = (struct bna_device *)dev;
-
-	bfa_fsm_send_event(device, DEVICE_E_IOC_RESET);
-}
-
-static struct bfa_ioc_cbfn bfa_iocll_cbfn = {
-	bna_device_cb_iocll_ready,
-	bna_device_cb_iocll_disabled,
-	bna_device_cb_iocll_failed,
-	bna_device_cb_iocll_reset
-};
-
-/* device */
-static void
-bna_adv_device_init(struct bna_device *device, struct bna *bna,
-		struct bna_res_info *res_info)
-{
-	u8 *kva;
-	u64 dma;
-
-	device->bna = bna;
-
-	kva = res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mdl[0].kva;
-
-	/**
-	 * Attach common modules (Diag, SFP, CEE, Port) and claim respective
-	 * DMA memory.
-	 */
-	BNA_GET_DMA_ADDR(
-		&res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].dma, dma);
-	kva = res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mdl[0].kva;
-
-	bfa_nw_cee_attach(&bna->cee, &device->ioc, bna);
-	bfa_nw_cee_mem_claim(&bna->cee, kva, dma);
-	kva += bfa_nw_cee_meminfo();
-	dma += bfa_nw_cee_meminfo();
-}
-
-static void
-bna_device_init(struct bna_device *device, struct bna *bna,
-		struct bna_res_info *res_info)
-{
-	u64 dma;
-
-	device->bna = bna;
-
-	/**
-	 * Attach IOC and claim:
-	 *	1. DMA memory for IOC attributes
-	 *	2. Kernel memory for FW trace
-	 */
-	bfa_nw_ioc_attach(&device->ioc, device, &bfa_iocll_cbfn);
-	bfa_nw_ioc_pci_init(&device->ioc, &bna->pcidev, BFI_MC_LL);
-
-	BNA_GET_DMA_ADDR(
-		&res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].dma, dma);
-	bfa_nw_ioc_mem_claim(&device->ioc,
-		res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mdl[0].kva,
-			  dma);
-
-	bna_adv_device_init(device, bna, res_info);
-	/*
-	 * Initialize mbox_mod only after IOC, so that mbox handler
-	 * registration goes through
-	 */
-	device->intr_type =
-		res_info[BNA_RES_INTR_T_MBOX].res_u.intr_info.intr_type;
-	device->vector =
-		res_info[BNA_RES_INTR_T_MBOX].res_u.intr_info.idl[0].vector;
-	bna_mbox_mod_init(&bna->mbox_mod, bna);
-
-	device->ready_cbfn = device->stop_cbfn = NULL;
-	device->ready_cbarg = device->stop_cbarg = NULL;
-
-	bfa_fsm_set_state(device, bna_device_sm_stopped);
-}
-
-static void
-bna_device_uninit(struct bna_device *device)
-{
-	bna_mbox_mod_uninit(&device->bna->mbox_mod);
-
-	bfa_nw_ioc_detach(&device->ioc);
-
-	device->bna = NULL;
-}
-
-static void
-bna_device_cb_port_stopped(void *arg, enum bna_cb_status status)
-{
-	struct bna_device *device = (struct bna_device *)arg;
-
-	bfa_fsm_send_event(device, DEVICE_E_PORT_STOPPED);
-}
-
-static int
-bna_device_status_get(struct bna_device *device)
-{
-	return device->fsm == (bfa_fsm_t)bna_device_sm_ready;
-}
-
-void
-bna_device_enable(struct bna_device *device)
-{
-	if (device->fsm != (bfa_fsm_t)bna_device_sm_stopped) {
-		bnad_cb_device_enabled(device->bna->bnad, BNA_CB_BUSY);
-		return;
-	}
-
-	device->ready_cbfn = bnad_cb_device_enabled;
-	device->ready_cbarg = device->bna->bnad;
-
-	bfa_fsm_send_event(device, DEVICE_E_ENABLE);
-}
-
-void
-bna_device_disable(struct bna_device *device, enum bna_cleanup_type type)
-{
-	if (type == BNA_SOFT_CLEANUP) {
-		bnad_cb_device_disabled(device->bna->bnad, BNA_CB_SUCCESS);
-		return;
-	}
-
-	device->stop_cbfn = bnad_cb_device_disabled;
-	device->stop_cbarg = device->bna->bnad;
-
-	bfa_fsm_send_event(device, DEVICE_E_DISABLE);
-}
-
-static int
-bna_device_state_get(struct bna_device *device)
-{
-	return bfa_sm_to_state(device_sm_table, device->fsm);
-}
-
-const u32 bna_napi_dim_vector[BNA_LOAD_T_MAX][BNA_BIAS_T_MAX] = {
-	{12, 12},
-	{6, 10},
-	{5, 10},
-	{4, 8},
-	{3, 6},
-	{3, 6},
-	{2, 4},
-	{1, 2},
-};
-
-/* utils */
-
-static void
-bna_adv_res_req(struct bna_res_info *res_info)
-{
-	/* DMA memory for COMMON_MODULE */
-	res_info[BNA_RES_MEM_T_COM].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
-	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_COM].res_u.mem_info.len = ALIGN(
-				bfa_nw_cee_meminfo(), PAGE_SIZE);
-
-	/* Virtual memory for retrieving fw_trc */
-	res_info[BNA_RES_MEM_T_FWTRC].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.mem_type = BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.num = 0;
-	res_info[BNA_RES_MEM_T_FWTRC].res_u.mem_info.len = 0;
-
-	/* DMA memory for retrieving stats */
-	res_info[BNA_RES_MEM_T_STATS].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
-	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.len =
-				ALIGN(BFI_HW_STATS_SIZE, PAGE_SIZE);
-
-	/* Virtual memory for soft stats */
-	res_info[BNA_RES_MEM_T_SWSTATS].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_SWSTATS].res_u.mem_info.mem_type = BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_SWSTATS].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_SWSTATS].res_u.mem_info.len =
-				sizeof(struct bna_sw_stats);
-}
-
-static void
-bna_sw_stats_get(struct bna *bna, struct bna_sw_stats *sw_stats)
-{
-	struct bna_tx *tx;
-	struct bna_txq *txq;
-	struct bna_rx *rx;
-	struct bna_rxp *rxp;
-	struct list_head *qe;
-	struct list_head *txq_qe;
-	struct list_head *rxp_qe;
-	struct list_head *mac_qe;
-	int i;
-
-	sw_stats->device_state = bna_device_state_get(&bna->device);
-	sw_stats->port_state = bna_port_state_get(&bna->port);
-	sw_stats->port_flags = bna->port.flags;
-	sw_stats->llport_state = bna_llport_state_get(&bna->port.llport);
-	sw_stats->priority = bna->port.priority;
-
-	i = 0;
-	list_for_each(qe, &bna->tx_mod.tx_active_q) {
-		tx = (struct bna_tx *)qe;
-		sw_stats->tx_stats[i].tx_state = bna_tx_state_get(tx);
-		sw_stats->tx_stats[i].tx_flags = tx->flags;
-
-		sw_stats->tx_stats[i].num_txqs = 0;
-		sw_stats->tx_stats[i].txq_bmap[0] = 0;
-		sw_stats->tx_stats[i].txq_bmap[1] = 0;
-		list_for_each(txq_qe, &tx->txq_q) {
-			txq = (struct bna_txq *)txq_qe;
-			if (txq->txq_id < 32)
-				sw_stats->tx_stats[i].txq_bmap[0] |=
-						((u32)1 << txq->txq_id);
-			else
-				sw_stats->tx_stats[i].txq_bmap[1] |=
-						((u32)1 << (txq->txq_id - 32));
-			sw_stats->tx_stats[i].num_txqs++;
-		}
-
-		sw_stats->tx_stats[i].txf_id = tx->txf.txf_id;
-
-		i++;
-	}
-	sw_stats->num_active_tx = i;
-
-	i = 0;
-	list_for_each(qe, &bna->rx_mod.rx_active_q) {
-		rx = (struct bna_rx *)qe;
-		sw_stats->rx_stats[i].rx_state = bna_rx_state_get(rx);
-		sw_stats->rx_stats[i].rx_flags = rx->rx_flags;
-
-		sw_stats->rx_stats[i].num_rxps = 0;
-		sw_stats->rx_stats[i].num_rxqs = 0;
-		sw_stats->rx_stats[i].rxq_bmap[0] = 0;
-		sw_stats->rx_stats[i].rxq_bmap[1] = 0;
-		sw_stats->rx_stats[i].cq_bmap[0] = 0;
-		sw_stats->rx_stats[i].cq_bmap[1] = 0;
-		list_for_each(rxp_qe, &rx->rxp_q) {
-			rxp = (struct bna_rxp *)rxp_qe;
-
-			sw_stats->rx_stats[i].num_rxqs += 1;
-
-			if (rxp->type == BNA_RXP_SINGLE) {
-				if (rxp->rxq.single.only->rxq_id < 32) {
-					sw_stats->rx_stats[i].rxq_bmap[0] |=
-					((u32)1 <<
-					rxp->rxq.single.only->rxq_id);
-				} else {
-					sw_stats->rx_stats[i].rxq_bmap[1] |=
-					((u32)1 <<
-					(rxp->rxq.single.only->rxq_id - 32));
-				}
-			} else {
-				if (rxp->rxq.slr.large->rxq_id < 32) {
-					sw_stats->rx_stats[i].rxq_bmap[0] |=
-					((u32)1 <<
-					rxp->rxq.slr.large->rxq_id);
-				} else {
-					sw_stats->rx_stats[i].rxq_bmap[1] |=
-					((u32)1 <<
-					(rxp->rxq.slr.large->rxq_id - 32));
-				}
-
-				if (rxp->rxq.slr.small->rxq_id < 32) {
-					sw_stats->rx_stats[i].rxq_bmap[0] |=
-					((u32)1 <<
-					rxp->rxq.slr.small->rxq_id);
-				} else {
-					sw_stats->rx_stats[i].rxq_bmap[1] |=
-					((u32)1 <<
-					(rxp->rxq.slr.small->rxq_id - 32));
-				}
-				sw_stats->rx_stats[i].num_rxqs += 1;
-			}
-
-			if (rxp->cq.cq_id < 32)
-				sw_stats->rx_stats[i].cq_bmap[0] |=
-					((u32)1 << rxp->cq.cq_id);
-			else
-				sw_stats->rx_stats[i].cq_bmap[1] |=
-					((u32)1 << (rxp->cq.cq_id - 32));
-
-			sw_stats->rx_stats[i].num_rxps++;
-		}
-
-		sw_stats->rx_stats[i].rxf_id = rx->rxf.rxf_id;
-		sw_stats->rx_stats[i].rxf_state = bna_rxf_state_get(&rx->rxf);
-		sw_stats->rx_stats[i].rxf_oper_state = rx->rxf.rxf_oper_state;
-
-		sw_stats->rx_stats[i].num_active_ucast = 0;
-		if (rx->rxf.ucast_active_mac)
-			sw_stats->rx_stats[i].num_active_ucast++;
-		list_for_each(mac_qe, &rx->rxf.ucast_active_q)
-			sw_stats->rx_stats[i].num_active_ucast++;
-
-		sw_stats->rx_stats[i].num_active_mcast = 0;
-		list_for_each(mac_qe, &rx->rxf.mcast_active_q)
-			sw_stats->rx_stats[i].num_active_mcast++;
-
-		sw_stats->rx_stats[i].rxmode_active = rx->rxf.rxmode_active;
-		sw_stats->rx_stats[i].vlan_filter_status =
-						rx->rxf.vlan_filter_status;
-		memcpy(sw_stats->rx_stats[i].vlan_filter_table,
-				rx->rxf.vlan_filter_table,
-				sizeof(u32) * ((BFI_MAX_VLAN + 1) / 32));
-
-		sw_stats->rx_stats[i].rss_status = rx->rxf.rss_status;
-		sw_stats->rx_stats[i].hds_status = rx->rxf.hds_status;
-
-		i++;
-	}
-	sw_stats->num_active_rx = i;
-}
-
-static void
-bna_fw_cb_stats_get(void *arg, int status)
-{
-	struct bna *bna = (struct bna *)arg;
-	u64 *p_stats;
-	int i, count;
-	int rxf_count, txf_count;
-	u64 rxf_bmap, txf_bmap;
-
-	bfa_q_qe_init(&bna->mbox_qe.qe);
-
-	if (status == 0) {
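-		/* Stats arrive big-endian from FW; swap the whole block in place */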
-		p_stats = (u64 *)bna->stats.hw_stats;
-		count = sizeof(struct bfi_ll_stats) / sizeof(u64);
-		for (i = 0; i < count; i++)
-			p_stats[i] = cpu_to_be64(p_stats[i]);
-
-		rxf_count = 0;
-		rxf_bmap = (u64)bna->stats.rxf_bmap[0] |
-			((u64)bna->stats.rxf_bmap[1] << 32);
-		for (i = 0; i < BFI_LL_RXF_ID_MAX; i++)
-			if (rxf_bmap & ((u64)1 << i))
-				rxf_count++;
-
-		txf_count = 0;
-		txf_bmap = (u64)bna->stats.txf_bmap[0] |
-			((u64)bna->stats.txf_bmap[1] << 32);
-		for (i = 0; i < BFI_LL_TXF_ID_MAX; i++)
-			if (txf_bmap & ((u64)1 << i))
-				txf_count++;
-
-		p_stats = (u64 *)&bna->stats.hw_stats->rxf_stats[0] +
-				((rxf_count * sizeof(struct bfi_ll_stats_rxf) +
-				txf_count * sizeof(struct bfi_ll_stats_txf))/
-				sizeof(u64));
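-		/*
-		 * p_stats now points just past the packed RxF + TxF stats
-		 * DMAed by the FW; unpack backwards so the in-place copies
-		 * never overwrite data not yet read.
-		 */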
-
-		/* Populate the TXF stats from the firmware DMAed copy */
-		for (i = (BFI_LL_TXF_ID_MAX - 1); i >= 0; i--)
-			if (txf_bmap & ((u64)1 << i)) {
-				p_stats -= sizeof(struct bfi_ll_stats_txf)/
-						sizeof(u64);
-				memcpy(&bna->stats.hw_stats->txf_stats[i],
-					p_stats,
-					sizeof(struct bfi_ll_stats_txf));
-			}
-
-		/* Populate the RXF stats from the firmware DMAed copy */
-		for (i = (BFI_LL_RXF_ID_MAX - 1); i >= 0; i--)
-			if (rxf_bmap & ((u64)1 << i)) {
-				p_stats -= sizeof(struct bfi_ll_stats_rxf)/
-						sizeof(u64);
-				memcpy(&bna->stats.hw_stats->rxf_stats[i],
-					p_stats,
-					sizeof(struct bfi_ll_stats_rxf));
-			}
-
-		bna_sw_stats_get(bna, bna->stats.sw_stats);
-		bnad_cb_stats_get(bna->bnad, BNA_CB_SUCCESS, &bna->stats);
-	} else {
-		bnad_cb_stats_get(bna->bnad, BNA_CB_FAIL, &bna->stats);
-	}
-}
-
-static void
-bna_fw_stats_get(struct bna *bna)
-{
-	struct bfi_ll_stats_req ll_req;
-
-	bfi_h2i_set(ll_req.mh, BFI_MC_LL, BFI_LL_H2I_STATS_GET_REQ, 0);
-	ll_req.stats_mask = htons(BFI_LL_STATS_ALL);
-
-	ll_req.rxf_id_mask[0] = htonl(bna->rx_mod.rxf_bmap[0]);
-	ll_req.rxf_id_mask[1] = htonl(bna->rx_mod.rxf_bmap[1]);
-	ll_req.txf_id_mask[0] = htonl(bna->tx_mod.txf_bmap[0]);
-	ll_req.txf_id_mask[1] = htonl(bna->tx_mod.txf_bmap[1]);
-
-	ll_req.host_buffer.a32.addr_hi = bna->hw_stats_dma.msb;
-	ll_req.host_buffer.a32.addr_lo = bna->hw_stats_dma.lsb;
-
-	bna_mbox_qe_fill(&bna->mbox_qe, &ll_req, sizeof(ll_req),
-				bna_fw_cb_stats_get, bna);
-	bna_mbox_send(bna, &bna->mbox_qe);
-
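-	/* Save the bitmaps used in this request for the completion handler */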
-	bna->stats.rxf_bmap[0] = bna->rx_mod.rxf_bmap[0];
-	bna->stats.rxf_bmap[1] = bna->rx_mod.rxf_bmap[1];
-	bna->stats.txf_bmap[0] = bna->tx_mod.txf_bmap[0];
-	bna->stats.txf_bmap[1] = bna->tx_mod.txf_bmap[1];
-}
-
-void
-bna_stats_get(struct bna *bna)
-{
-	if (bna_device_status_get(&bna->device))
-		bna_fw_stats_get(bna);
-	else
-		bnad_cb_stats_get(bna->bnad, BNA_CB_FAIL, &bna->stats);
-}
-
-/* IB */
-static void
-bna_ib_coalescing_timeo_set(struct bna_ib *ib, u8 coalescing_timeo)
-{
-	ib->ib_config.coalescing_timeo = coalescing_timeo;
-
-	if (ib->start_count)
-		ib->door_bell.doorbell_ack = BNA_DOORBELL_IB_INT_ACK(
-				(u32)ib->ib_config.coalescing_timeo, 0);
-}
-
-/* RxF */
-void
-bna_rxf_adv_init(struct bna_rxf *rxf,
-		struct bna_rx *rx,
-		struct bna_rx_config *q_config)
-{
-	switch (q_config->rxp_type) {
-	case BNA_RXP_SINGLE:
-		/* No-op */
-		break;
-	case BNA_RXP_SLR:
-		rxf->ctrl_flags |= BNA_RXF_CF_SM_LG_RXQ;
-		break;
-	case BNA_RXP_HDS:
-		rxf->hds_cfg.hdr_type = q_config->hds_config.hdr_type;
-		rxf->hds_cfg.header_size =
-				q_config->hds_config.header_size;
-		rxf->forced_offset = 0;
-		break;
-	default:
-		break;
-	}
-
-	if (q_config->rss_status == BNA_STATUS_T_ENABLED) {
-		rxf->ctrl_flags |= BNA_RXF_CF_RSS_ENABLE;
-		rxf->rss_cfg.hash_type = q_config->rss_config.hash_type;
-		rxf->rss_cfg.hash_mask = q_config->rss_config.hash_mask;
-		memcpy(&rxf->rss_cfg.toeplitz_hash_key[0],
-			&q_config->rss_config.toeplitz_hash_key[0],
-			sizeof(rxf->rss_cfg.toeplitz_hash_key));
-	}
-}
-
-static void
-rxf_fltr_mbox_cmd(struct bna_rxf *rxf, u8 cmd, enum bna_status status)
-{
-	struct bfi_ll_rxf_req req;
-
-	bfi_h2i_set(req.mh, BFI_MC_LL, cmd, 0);
-
-	req.rxf_id = rxf->rxf_id;
-	req.enable = status;
-
-	bna_mbox_qe_fill(&rxf->mbox_qe, &req, sizeof(req),
-			rxf_cb_cam_fltr_mbox_cmd, rxf);
-
-	bna_mbox_send(rxf->rx->bna, &rxf->mbox_qe);
-}
-
-int
-rxf_process_packet_filter_ucast(struct bna_rxf *rxf)
-{
-	struct bna_mac *mac = NULL;
-	struct list_head *qe;
-
-	/* Add additional MAC entries */
-	if (!list_empty(&rxf->ucast_pending_add_q)) {
-		bfa_q_deq(&rxf->ucast_pending_add_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_ADD_REQ, mac);
-		list_add_tail(&mac->qe, &rxf->ucast_active_q);
-		return 1;
-	}
-
-	/* Delete MAC addresses previously added */
-	if (!list_empty(&rxf->ucast_pending_del_q)) {
-		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_DEL_REQ, mac);
-		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_process_packet_filter_promisc(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-
-	/* Enable/disable promiscuous mode */
-	if (is_promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move promisc configuration from pending -> active */
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active |= BNA_RXMODE_PROMISC;
-
-		/* Disable VLAN filter to allow all VLANs */
-		__rxf_vlan_filter_set(rxf, BNA_STATUS_T_DISABLED);
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ,
-				BNA_STATUS_T_ENABLED);
-		return 1;
-	} else if (is_promisc_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move promisc configuration from pending -> active */
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-
-		/* Revert VLAN filter */
-		__rxf_vlan_filter_set(rxf, rxf->vlan_filter_status);
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_process_packet_filter_allmulti(struct bna_rxf *rxf)
-{
-	/* Enable/disable allmulti mode */
-	if (is_allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move allmulti configuration from pending -> active */
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active |= BNA_RXMODE_ALLMULTI;
-
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_FILTER_REQ,
-				BNA_STATUS_T_ENABLED);
-		return 1;
-	} else if (is_allmulti_disable(rxf->rxmode_pending,
-					rxf->rxmode_pending_bitmask)) {
-		/* move allmulti configuration from pending -> active */
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_FILTER_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_clear_packet_filter_ucast(struct bna_rxf *rxf)
-{
-	struct bna_mac *mac = NULL;
-	struct list_head *qe;
-
-	/* 1. delete pending ucast entries */
-	if (!list_empty(&rxf->ucast_pending_del_q)) {
-		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_DEL_REQ, mac);
-		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
-		return 1;
-	}
-
-	/* 2. clear active ucast entries; move them to pending_add_q */
-	if (!list_empty(&rxf->ucast_active_q)) {
-		bfa_q_deq(&rxf->ucast_active_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		rxf_cam_mbox_cmd(rxf, BFI_LL_H2I_MAC_UCAST_DEL_REQ, mac);
-		list_add_tail(&mac->qe, &rxf->ucast_pending_add_q);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_clear_packet_filter_promisc(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-
-	/* 6. Execute pending promisc mode disable command */
-	if (is_promisc_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move promisc configuration from pending -> active */
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-
-		/* Revert VLAN filter */
-		__rxf_vlan_filter_set(rxf, rxf->vlan_filter_status);
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	/* 7. Clear active promisc mode; move it to pending enable */
-	if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
-		/* move promisc configuration from active -> pending */
-		promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-
-		/* Revert VLAN filter */
-		__rxf_vlan_filter_set(rxf, rxf->vlan_filter_status);
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_RXF_PROMISCUOUS_SET_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	return 0;
-}
-
-int
-rxf_clear_packet_filter_allmulti(struct bna_rxf *rxf)
-{
-	/* 10. Execute pending allmulti mode disable command */
-	if (is_allmulti_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* move allmulti configuration from pending -> active */
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_FILTER_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	/* 11. Clear active allmulti mode; move it to pending enable */
-	if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
-		/* move allmulti configuration from active -> pending */
-		allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-		rxf_fltr_mbox_cmd(rxf, BFI_LL_H2I_MAC_MCAST_FILTER_REQ,
-				BNA_STATUS_T_DISABLED);
-		return 1;
-	}
-
-	return 0;
-}
-
-void
-rxf_reset_packet_filter_ucast(struct bna_rxf *rxf)
-{
-	struct list_head *qe;
-	struct bna_mac *mac;
-
-	/* 1. Move active ucast entries to pending_add_q */
-	while (!list_empty(&rxf->ucast_active_q)) {
-		bfa_q_deq(&rxf->ucast_active_q, &qe);
-		bfa_q_qe_init(qe);
-		list_add_tail(qe, &rxf->ucast_pending_add_q);
-	}
-
-	/* 2. Throw away delete pending ucast entries */
-	while (!list_empty(&rxf->ucast_pending_del_q)) {
-		bfa_q_deq(&rxf->ucast_pending_del_q, &qe);
-		bfa_q_qe_init(qe);
-		mac = (struct bna_mac *)qe;
-		bna_ucam_mod_mac_put(&rxf->rx->bna->ucam_mod, mac);
-	}
-}
-
-void
-rxf_reset_packet_filter_promisc(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-
-	/* 6. Clear pending promisc mode disable */
-	if (is_promisc_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-	}
-
-	/* 7. Move promisc mode config from active -> pending */
-	if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
-		promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_PROMISC;
-	}
-}
-
-void
-rxf_reset_packet_filter_allmulti(struct bna_rxf *rxf)
-{
-	/* 10. Clear pending allmulti mode disable */
-	if (is_allmulti_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-	}
-
-	/* 11. Move allmulti mode config from active -> pending */
-	if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
-		allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		rxf->rxmode_active &= ~BNA_RXMODE_ALLMULTI;
-	}
-}
-
-/**
- * Should only be called by bna_rxf_mode_set.
- * Helps decide whether h/w configuration is needed.
- *  Returns:
- *	0 = no h/w change
- *	1 = need h/w change
- */
-static int
-rxf_promisc_enable(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-	int ret = 0;
-
-	/* There can not be any pending disable command */
-
-	/* Do nothing if pending enable or already enabled */
-	if (is_promisc_enable(rxf->rxmode_pending,
-			rxf->rxmode_pending_bitmask) ||
-			(rxf->rxmode_active & BNA_RXMODE_PROMISC)) {
-		/* Schedule enable */
-	} else {
-		/* Promisc mode should not be active in the system */
-		promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		bna->rxf_promisc_id = rxf->rxf_id;
-		ret = 1;
-	}
-
-	return ret;
-}
-
-/**
- * Should only be called by bna_rxf_mode_set.
- * Helps decide whether h/w configuration is needed.
- *  Returns:
- *	0 = no h/w change
- *	1 = need h/w change
- */
-static int
-rxf_promisc_disable(struct bna_rxf *rxf)
-{
-	struct bna *bna = rxf->rx->bna;
-	int ret = 0;
-
-	/* There can not be any pending disable */
-
-	/* Turn off pending enable command, if any */
-	if (is_promisc_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* Promisc mode should not be active */
-		/* system promisc state should be pending */
-		promisc_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		/* Remove the promisc state from the system */
-		bna->rxf_promisc_id = BFI_MAX_RXF;
-
-		/* Schedule disable */
-	} else if (rxf->rxmode_active & BNA_RXMODE_PROMISC) {
-		/* Promisc mode should be active in the system */
-		promisc_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		ret = 1;
-	}
-	/* Do nothing if already disabled */
-
-	return ret;
-}
-
-/**
- * Should only be called by bna_rxf_mode_set.
- * Helps decide whether h/w configuration is needed.
- *  Returns:
- *	0 = no h/w change
- *	1 = need h/w change
- */
-static int
-rxf_allmulti_enable(struct bna_rxf *rxf)
-{
-	int ret = 0;
-
-	/* There can not be any pending disable command */
-
-	/* Do nothing if pending enable or already enabled */
-	if (is_allmulti_enable(rxf->rxmode_pending,
-			rxf->rxmode_pending_bitmask) ||
-			(rxf->rxmode_active & BNA_RXMODE_ALLMULTI)) {
-		/* Schedule enable */
-	} else {
-		allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		ret = 1;
-	}
-
-	return ret;
-}
-
-/**
- * Should only be called by bna_rxf_mode_set.
- * Helps decide whether h/w configuration is needed.
- *  Returns:
- *	0 = no h/w change
- *	1 = need h/w change
- */
-static int
-rxf_allmulti_disable(struct bna_rxf *rxf)
-{
-	int ret = 0;
-
-	/* There can not be any pending disable */
-
-	/* Turn off pending enable command, if any */
-	if (is_allmulti_enable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask)) {
-		/* Allmulti mode should not be active */
-		allmulti_inactive(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-
-	/* Schedule disable */
-	} else if (rxf->rxmode_active & BNA_RXMODE_ALLMULTI) {
-		allmulti_disable(rxf->rxmode_pending,
-				rxf->rxmode_pending_bitmask);
-		ret = 1;
-	}
-
-	return ret;
-}
-
-/* RxF <- bnad */
-enum bna_cb_status
-bna_rx_mode_set(struct bna_rx *rx, enum bna_rxmode new_mode,
-		enum bna_rxmode bitmask,
-		void (*cbfn)(struct bnad *, struct bna_rx *,
-			     enum bna_cb_status))
-{
-	struct bna_rxf *rxf = &rx->rxf;
-	int need_hw_config = 0;
-
-	/* Process the commands */
-
-	if (is_promisc_enable(new_mode, bitmask)) {
-		/* If promisc mode is already enabled elsewhere in the system */
-		if ((rx->bna->rxf_promisc_id != BFI_MAX_RXF) &&
-			(rx->bna->rxf_promisc_id != rxf->rxf_id))
-			goto err_return;
-		if (rxf_promisc_enable(rxf))
-			need_hw_config = 1;
-	} else if (is_promisc_disable(new_mode, bitmask)) {
-		if (rxf_promisc_disable(rxf))
-			need_hw_config = 1;
-	}
-
-	if (is_allmulti_enable(new_mode, bitmask)) {
-		if (rxf_allmulti_enable(rxf))
-			need_hw_config = 1;
-	} else if (is_allmulti_disable(new_mode, bitmask)) {
-		if (rxf_allmulti_disable(rxf))
-			need_hw_config = 1;
-	}
-
-	/* Trigger h/w if needed */
-
-	if (need_hw_config) {
-		rxf->cam_fltr_cbfn = cbfn;
-		rxf->cam_fltr_cbarg = rx->bna->bnad;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-	} else if (cbfn) {
-		(*cbfn)(rx->bna->bnad, rx, BNA_CB_SUCCESS);
-	}
-
-	return BNA_CB_SUCCESS;
-
-err_return:
-	return BNA_CB_FAIL;
-}
-
-/* RxF <- bnad */
-void
-bna_rx_vlanfilter_enable(struct bna_rx *rx)
-{
-	struct bna_rxf *rxf = &rx->rxf;
-
-	if (rxf->vlan_filter_status == BNA_STATUS_T_DISABLED) {
-		rxf->rxf_flags |= BNA_RXF_FL_VLAN_CONFIG_PENDING;
-		rxf->vlan_filter_status = BNA_STATUS_T_ENABLED;
-		bfa_fsm_send_event(rxf, RXF_E_CAM_FLTR_MOD);
-	}
-}
-
-/* Rx */
-
-/* Rx <- bnad */
-void
-bna_rx_coalescing_timeo_set(struct bna_rx *rx, int coalescing_timeo)
-{
-	struct bna_rxp *rxp;
-	struct list_head *qe;
-
-	list_for_each(qe, &rx->rxp_q) {
-		rxp = (struct bna_rxp *)qe;
-		rxp->cq.ccb->rx_coalescing_timeo = coalescing_timeo;
-		bna_ib_coalescing_timeo_set(rxp->cq.ib, coalescing_timeo);
-	}
-}
-
-/* Rx <- bnad */
-void
-bna_rx_dim_reconfig(struct bna *bna, const u32 vector[][BNA_BIAS_T_MAX])
-{
-	int i, j;
-
-	for (i = 0; i < BNA_LOAD_T_MAX; i++)
-		for (j = 0; j < BNA_BIAS_T_MAX; j++)
-			bna->rx_mod.dim_vector[i][j] = vector[i][j];
-}
-
-/* Rx <- bnad */
-void
-bna_rx_dim_update(struct bna_ccb *ccb)
-{
-	struct bna *bna = ccb->cq->rx->bna;
-	u32 load, bias;
-	u32 pkt_rt, small_rt, large_rt;
-	u8 coalescing_timeo;
-
-	if ((ccb->pkt_rate.small_pkt_cnt == 0) &&
-		(ccb->pkt_rate.large_pkt_cnt == 0))
-		return;
-
-	/* Arrive at preconfigured coalescing timeo value based on pkt rate */
-
-	small_rt = ccb->pkt_rate.small_pkt_cnt;
-	large_rt = ccb->pkt_rate.large_pkt_cnt;
-
-	pkt_rt = small_rt + large_rt;
-
-	if (pkt_rt < BNA_PKT_RATE_10K)
-		load = BNA_LOAD_T_LOW_4;
-	else if (pkt_rt < BNA_PKT_RATE_20K)
-		load = BNA_LOAD_T_LOW_3;
-	else if (pkt_rt < BNA_PKT_RATE_30K)
-		load = BNA_LOAD_T_LOW_2;
-	else if (pkt_rt < BNA_PKT_RATE_40K)
-		load = BNA_LOAD_T_LOW_1;
-	else if (pkt_rt < BNA_PKT_RATE_50K)
-		load = BNA_LOAD_T_HIGH_1;
-	else if (pkt_rt < BNA_PKT_RATE_60K)
-		load = BNA_LOAD_T_HIGH_2;
-	else if (pkt_rt < BNA_PKT_RATE_80K)
-		load = BNA_LOAD_T_HIGH_3;
-	else
-		load = BNA_LOAD_T_HIGH_4;
-
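-	/* Bias 0 when small packets dominate (over 2x the large-packet rate) */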
-	if (small_rt > (large_rt << 1))
-		bias = 0;
-	else
-		bias = 1;
-
-	ccb->pkt_rate.small_pkt_cnt = 0;
-	ccb->pkt_rate.large_pkt_cnt = 0;
-
-	coalescing_timeo = bna->rx_mod.dim_vector[load][bias];
-	ccb->rx_coalescing_timeo = coalescing_timeo;
-
-	/* Set it to IB */
-	bna_ib_coalescing_timeo_set(ccb->cq->ib, coalescing_timeo);
-}
-
-/* Tx */
-/* TX <- bnad */
-void
-bna_tx_coalescing_timeo_set(struct bna_tx *tx, int coalescing_timeo)
-{
-	struct bna_txq *txq;
-	struct list_head *qe;
-
-	list_for_each(qe, &tx->txq_q) {
-		txq = (struct bna_txq *)qe;
-		bna_ib_coalescing_timeo_set(txq->ib, coalescing_timeo);
-	}
-}
-
-/*
- * Private data
- */
-
-struct bna_ritseg_pool_cfg {
-	u32	pool_size;
-	u32	pool_entry_size;
-};
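-/* Instantiates the ritseg_pool_cfg[] table used by bna_rit_mod_init() below */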
-init_ritseg_pool(ritseg_pool_cfg);
-
-/*
- * Private functions
- */
-static void
-bna_ucam_mod_init(struct bna_ucam_mod *ucam_mod, struct bna *bna,
-		  struct bna_res_info *res_info)
-{
-	int i;
-
-	ucam_mod->ucmac = (struct bna_mac *)
-		res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
-
-	INIT_LIST_HEAD(&ucam_mod->free_q);
-	for (i = 0; i < BFI_MAX_UCMAC; i++) {
-		bfa_q_qe_init(&ucam_mod->ucmac[i].qe);
-		list_add_tail(&ucam_mod->ucmac[i].qe, &ucam_mod->free_q);
-	}
-
-	ucam_mod->bna = bna;
-}
-
-static void
-bna_ucam_mod_uninit(struct bna_ucam_mod *ucam_mod)
-{
-	struct list_head *qe;
-	int i = 0;
-
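-	/* Count the entries left on the free list (all MACs should be back) */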
-	list_for_each(qe, &ucam_mod->free_q)
-		i++;
-
-	ucam_mod->bna = NULL;
-}
-
-static void
-bna_mcam_mod_init(struct bna_mcam_mod *mcam_mod, struct bna *bna,
-		  struct bna_res_info *res_info)
-{
-	int i;
-
-	mcam_mod->mcmac = (struct bna_mac *)
-		res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mdl[0].kva;
-
-	INIT_LIST_HEAD(&mcam_mod->free_q);
-	for (i = 0; i < BFI_MAX_MCMAC; i++) {
-		bfa_q_qe_init(&mcam_mod->mcmac[i].qe);
-		list_add_tail(&mcam_mod->mcmac[i].qe, &mcam_mod->free_q);
-	}
-
-	mcam_mod->bna = bna;
-}
-
-static void
-bna_mcam_mod_uninit(struct bna_mcam_mod *mcam_mod)
-{
-	struct list_head *qe;
-	int i = 0;
-
-	list_for_each(qe, &mcam_mod->free_q)
-		i++;
-
-	mcam_mod->bna = NULL;
-}
-
-static void
-bna_rit_mod_init(struct bna_rit_mod *rit_mod,
-		struct bna_res_info *res_info)
-{
-	int i;
-	int j;
-	int count;
-	int offset;
-
-	rit_mod->rit = (struct bna_rit_entry *)
-		res_info[BNA_RES_MEM_T_RIT_ENTRY].res_u.mem_info.mdl[0].kva;
-	rit_mod->rit_segment = (struct bna_rit_segment *)
-		res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_u.mem_info.mdl[0].kva;
-
-	count = 0;
-	offset = 0;
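-	/* Carve the flat RIT entry array into fixed-size segments per pool */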
-	for (i = 0; i < BFI_RIT_SEG_TOTAL_POOLS; i++) {
-		INIT_LIST_HEAD(&rit_mod->rit_seg_pool[i]);
-		for (j = 0; j < ritseg_pool_cfg[i].pool_size; j++) {
-			bfa_q_qe_init(&rit_mod->rit_segment[count].qe);
-			rit_mod->rit_segment[count].max_rit_size =
-					ritseg_pool_cfg[i].pool_entry_size;
-			rit_mod->rit_segment[count].rit_offset = offset;
-			rit_mod->rit_segment[count].rit =
-					&rit_mod->rit[offset];
-			list_add_tail(&rit_mod->rit_segment[count].qe,
-				&rit_mod->rit_seg_pool[i]);
-			count++;
-			offset += ritseg_pool_cfg[i].pool_entry_size;
-		}
-	}
-}
-
-/*
- * Public functions
- */
-
-/* Called during probe(), before calling bna_init() */
-void
-bna_res_req(struct bna_res_info *res_info)
-{
-	bna_adv_res_req(res_info);
-
-	/* DMA memory for retrieving IOC attributes */
-	res_info[BNA_RES_MEM_T_ATTR].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
-	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_ATTR].res_u.mem_info.len =
-				ALIGN(bfa_nw_ioc_meminfo(), PAGE_SIZE);
-
-	/* DMA memory for index segment of an IB */
-	res_info[BNA_RES_MEM_T_IBIDX].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.mem_type = BNA_MEM_T_DMA;
-	res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.len =
-				BFI_IBIDX_SIZE * BFI_IBIDX_MAX_SEGSIZE;
-	res_info[BNA_RES_MEM_T_IBIDX].res_u.mem_info.num = BFI_MAX_IB;
-
-	/* Virtual memory for IB objects - stored by IB module */
-	res_info[BNA_RES_MEM_T_IB_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_IB_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_IB_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_IB_ARRAY].res_u.mem_info.len =
-				BFI_MAX_IB * sizeof(struct bna_ib);
-
-	/* Virtual memory for intr objects - stored by IB module */
-	res_info[BNA_RES_MEM_T_INTR_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_INTR_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_INTR_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_INTR_ARRAY].res_u.mem_info.len =
-				BFI_MAX_IB * sizeof(struct bna_intr);
-
-	/* Virtual memory for idx_seg objects - stored by IB module */
-	res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_IDXSEG_ARRAY].res_u.mem_info.len =
-			BFI_IBIDX_TOTAL_SEGS * sizeof(struct bna_ibidx_seg);
-
-	/* Virtual memory for Tx objects - stored by Tx module */
-	res_info[BNA_RES_MEM_T_TX_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_TX_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_TX_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_TX_ARRAY].res_u.mem_info.len =
-			BFI_MAX_TXQ * sizeof(struct bna_tx);
-
-	/* Virtual memory for TxQ - stored by Tx module */
-	res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_TXQ_ARRAY].res_u.mem_info.len =
-			BFI_MAX_TXQ * sizeof(struct bna_txq);
-
-	/* Virtual memory for Rx objects - stored by Rx module */
-	res_info[BNA_RES_MEM_T_RX_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RX_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RX_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RX_ARRAY].res_u.mem_info.len =
-			BFI_MAX_RXQ * sizeof(struct bna_rx);
-
-	/* Virtual memory for RxPath - stored by Rx module */
-	res_info[BNA_RES_MEM_T_RXP_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RXP_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RXP_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RXP_ARRAY].res_u.mem_info.len =
-			BFI_MAX_RXQ * sizeof(struct bna_rxp);
-
-	/* Virtual memory for RxQ - stored by Rx module */
-	res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RXQ_ARRAY].res_u.mem_info.len =
-			BFI_MAX_RXQ * sizeof(struct bna_rxq);
-
-	/* Virtual memory for Unicast MAC address - stored by ucam module */
-	res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_UCMAC_ARRAY].res_u.mem_info.len =
-			BFI_MAX_UCMAC * sizeof(struct bna_mac);
-
-	/* Virtual memory for Multicast MAC address - stored by mcam module */
-	res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_MCMAC_ARRAY].res_u.mem_info.len =
-			BFI_MAX_MCMAC * sizeof(struct bna_mac);
-
-	/* Virtual memory for RIT entries */
-	res_info[BNA_RES_MEM_T_RIT_ENTRY].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RIT_ENTRY].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RIT_ENTRY].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RIT_ENTRY].res_u.mem_info.len =
-			BFI_MAX_RIT_SIZE * sizeof(struct bna_rit_entry);
-
-	/* Virtual memory for RIT segment table */
-	res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_type = BNA_RES_T_MEM;
-	res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_u.mem_info.mem_type =
-								BNA_MEM_T_KVA;
-	res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_u.mem_info.num = 1;
-	res_info[BNA_RES_MEM_T_RIT_SEGMENT].res_u.mem_info.len =
-			BFI_RIT_TOTAL_SEGS * sizeof(struct bna_rit_segment);
-
-	/* Interrupt resource for mailbox interrupt */
-	res_info[BNA_RES_INTR_T_MBOX].res_type = BNA_RES_T_INTR;
-	res_info[BNA_RES_INTR_T_MBOX].res_u.intr_info.intr_type =
-							BNA_INTR_T_MSIX;
-	res_info[BNA_RES_INTR_T_MBOX].res_u.intr_info.num = 1;
-}
-
-/* Called during probe() */
-void
-bna_init(struct bna *bna, struct bnad *bnad, struct bfa_pcidev *pcidev,
-		struct bna_res_info *res_info)
-{
-	bna->bnad = bnad;
-	bna->pcidev = *pcidev;
-
-	bna->stats.hw_stats = (struct bfi_ll_stats *)
-		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].kva;
-	bna->hw_stats_dma.msb =
-		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.msb;
-	bna->hw_stats_dma.lsb =
-		res_info[BNA_RES_MEM_T_STATS].res_u.mem_info.mdl[0].dma.lsb;
-	bna->stats.sw_stats = (struct bna_sw_stats *)
-		res_info[BNA_RES_MEM_T_SWSTATS].res_u.mem_info.mdl[0].kva;
-
-	bna->regs.page_addr = bna->pcidev.pci_bar_kva +
-				reg_offset[bna->pcidev.pci_func].page_addr;
-	bna->regs.fn_int_status = bna->pcidev.pci_bar_kva +
-				reg_offset[bna->pcidev.pci_func].fn_int_status;
-	bna->regs.fn_int_mask = bna->pcidev.pci_bar_kva +
-				reg_offset[bna->pcidev.pci_func].fn_int_mask;
-
-	if (bna->pcidev.pci_func < 3)
-		bna->port_num = 0;
-	else
-		bna->port_num = 1;
-
-	/* Also initializes diag, cee, sfp, phy_port and mbox_mod */
-	bna_device_init(&bna->device, bna, res_info);
-
-	bna_port_init(&bna->port, bna);
-
-	bna_tx_mod_init(&bna->tx_mod, bna, res_info);
-
-	bna_rx_mod_init(&bna->rx_mod, bna, res_info);
-
-	bna_ib_mod_init(&bna->ib_mod, bna, res_info);
-
-	bna_rit_mod_init(&bna->rit_mod, res_info);
-
-	bna_ucam_mod_init(&bna->ucam_mod, bna, res_info);
-
-	bna_mcam_mod_init(&bna->mcam_mod, bna, res_info);
-
-	bna->rxf_promisc_id = BFI_MAX_RXF;
-
-	/* Mbox q element for posting stat request to f/w */
-	bfa_q_qe_init(&bna->mbox_qe.qe);
-}
-
-void
-bna_uninit(struct bna *bna)
-{
-	bna_mcam_mod_uninit(&bna->mcam_mod);
-
-	bna_ucam_mod_uninit(&bna->ucam_mod);
-
-	bna_ib_mod_uninit(&bna->ib_mod);
-
-	bna_rx_mod_uninit(&bna->rx_mod);
-
-	bna_tx_mod_uninit(&bna->tx_mod);
-
-	bna_port_uninit(&bna->port);
-
-	bna_device_uninit(&bna->device);
-
-	bna->bnad = NULL;
-}
-
-struct bna_mac *
-bna_ucam_mod_mac_get(struct bna_ucam_mod *ucam_mod)
-{
-	struct list_head *qe;
-
-	if (list_empty(&ucam_mod->free_q))
-		return NULL;
-
-	bfa_q_deq(&ucam_mod->free_q, &qe);
-
-	return (struct bna_mac *)qe;
-}
-
-void
-bna_ucam_mod_mac_put(struct bna_ucam_mod *ucam_mod, struct bna_mac *mac)
-{
-	list_add_tail(&mac->qe, &ucam_mod->free_q);
-}
-
-struct bna_mac *
-bna_mcam_mod_mac_get(struct bna_mcam_mod *mcam_mod)
-{
-	struct list_head *qe;
-
-	if (list_empty(&mcam_mod->free_q))
-		return NULL;
-
-	bfa_q_deq(&mcam_mod->free_q, &qe);
-
-	return (struct bna_mac *)qe;
-}
-
-void
-bna_mcam_mod_mac_put(struct bna_mcam_mod *mcam_mod, struct bna_mac *mac)
-{
-	list_add_tail(&mac->qe, &mcam_mod->free_q);
-}
-
-/**
- * Note: This should be called in the same locking context as the call to
- * bna_rit_mod_seg_get()
- */
-int
-bna_rit_mod_can_satisfy(struct bna_rit_mod *rit_mod, int seg_size)
-{
-	int i;
-
-	/* Select the pool for seg_size */
-	for (i = 0; i < BFI_RIT_SEG_TOTAL_POOLS; i++) {
-		if (seg_size <= ritseg_pool_cfg[i].pool_entry_size)
-			break;
-	}
-
-	if (i == BFI_RIT_SEG_TOTAL_POOLS)
-		return 0;
-
-	if (list_empty(&rit_mod->rit_seg_pool[i]))
-		return 0;
-
-	return 1;
-}
-
-struct bna_rit_segment *
-bna_rit_mod_seg_get(struct bna_rit_mod *rit_mod, int seg_size)
-{
-	struct bna_rit_segment *seg;
-	struct list_head *qe;
-	int i;
-
-	/* Select the pool for seg_size */
-	for (i = 0; i < BFI_RIT_SEG_TOTAL_POOLS; i++) {
-		if (seg_size <= ritseg_pool_cfg[i].pool_entry_size)
-			break;
-	}
-
-	if (i == BFI_RIT_SEG_TOTAL_POOLS)
-		return NULL;
-
-	if (list_empty(&rit_mod->rit_seg_pool[i]))
-		return NULL;
-
-	bfa_q_deq(&rit_mod->rit_seg_pool[i], &qe);
-	seg = (struct bna_rit_segment *)qe;
-	bfa_q_qe_init(&seg->qe);
-	seg->rit_size = seg_size;
-
-	return seg;
-}
-
-void
-bna_rit_mod_seg_put(struct bna_rit_mod *rit_mod,
-			struct bna_rit_segment *seg)
-{
-	int i;
-
-	/* Select the pool for seg->max_rit_size */
-	for (i = 0; i < BFI_RIT_SEG_TOTAL_POOLS; i++) {
-		if (seg->max_rit_size == ritseg_pool_cfg[i].pool_entry_size)
-			break;
-	}
-
-	seg->rit_size = 0;
-	list_add_tail(&seg->qe, &rit_mod->rit_seg_pool[i]);
-}
-- 
1.7.1
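
[A side note on the removed resource-setup code above: each KVA memory resource is initialized with the same four assignments, differing only in the array index and the length. A helper macro could express that pattern once. This is only an illustrative sketch against the structures visible in the diff (struct bna_res_info, BNA_RES_T_MEM, BNA_MEM_T_KVA); the macro name is hypothetical, not part of the driver:

	/* Hypothetical helper: initialize one kernel-virtual memory
	 * resource entry, mirroring the four-line pattern repeated
	 * throughout the removed code.
	 */
	#define BNA_RES_MEM_KVA_SET(_res_info, _id, _len)		\
	do {								\
		(_res_info)[(_id)].res_type = BNA_RES_T_MEM;		\
		(_res_info)[(_id)].res_u.mem_info.mem_type =		\
							BNA_MEM_T_KVA;	\
		(_res_info)[(_id)].res_u.mem_info.num = 1;		\
		(_res_info)[(_id)].res_u.mem_info.len = (_len);		\
	} while (0)

	/* Equivalent to the open-coded Tx/TxQ blocks in the diff: */
	BNA_RES_MEM_KVA_SET(res_info, BNA_RES_MEM_T_TX_ARRAY,
			BFI_MAX_TXQ * sizeof(struct bna_tx));
	BNA_RES_MEM_KVA_SET(res_info, BNA_RES_MEM_T_TXQ_ARRAY,
			BFI_MAX_TXQ * sizeof(struct bna_txq));
]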



* Re: [PATCH 00/45] Update bna driver version to 3.0.2.0
  2011-07-18  8:19 [PATCH 00/45] Update bna driver version to 3.0.2.0 Rasesh Mody
                   ` (9 preceding siblings ...)
  2011-07-18  8:19 ` [PATCH 10/45] bna: Remove Obsolete Files Rasesh Mody
@ 2011-07-18  8:26 ` David Miller
  2011-07-19  0:48   ` Rasesh Mody
  10 siblings, 1 reply; 16+ messages in thread
From: David Miller @ 2011-07-18  8:26 UTC (permalink / raw)
  To: rmody; +Cc: netdev, adapter_linux_open_src_team, dradovan


This is too many patches at once.

You need to submit patches more often, and in significantly smaller
sets at a time.

I'm not going to look at these patches at all.


* RE: [PATCH 00/45] Update bna driver version to 3.0.2.0
  2011-07-18  8:26 ` [PATCH 00/45] Update bna driver version to 3.0.2.0 David Miller
@ 2011-07-19  0:48   ` Rasesh Mody
  2011-07-19  1:58     ` David Miller
  0 siblings, 1 reply; 16+ messages in thread
From: Rasesh Mody @ 2011-07-19  0:48 UTC (permalink / raw)
  To: David Miller; +Cc: netdev@vger.kernel.org, Adapter Linux Open SRC Team

>From: David Miller [mailto:davem@davemloft.net]
>Sent: Monday, July 18, 2011 1:27 AM
>
>This is too many patches at once.
>
>You need to submit patches more often, and in significantly smaller
>sets at a time.

David,

This update to the BNA driver (release 3.0) enables new hardware with various virtualization and other capabilities, e.g. the new SR-IOV capable 1860 Fabric Adapter (a new ASIC), vNIC (PF-based virtualization), an architectural framework for SR-IOV, support for multiple Tx priority queues, and iSCSI over DCB.

The driver had to go through significant changes to enable these capabilities, resulting in a large delta.

We explored options to submit smaller incremental patches over the last few months but quickly ran into the following issues:
   - The driver was not functional between patches because of large developmental changes.
   - There were iterative changes to many areas of the code, resulting in many unnecessary patches.

We wanted to avoid submitting in-development code that could break the upstream driver due to partial changes.
Hence we ended up submitting this driver update patch with consolidated changes after extensive QA testing.

We understand that it is still big and can think of the following options to address the concern:
1. Break this submission into 3 batches: 2.3 fixes, new features (still a big patch set, due to all the code reorganization), and bug fixes and cleanup.
OR
2. Submit the patches as a new driver; we would need your support in replacing the existing upstream BNA driver with this one.

Please suggest on the next steps.

Thanks,
Rasesh


* Re: [PATCH 00/45] Update bna driver version to 3.0.2.0
  2011-07-19  0:48   ` Rasesh Mody
@ 2011-07-19  1:58     ` David Miller
  2011-07-19 22:53       ` Rasesh Mody
  0 siblings, 1 reply; 16+ messages in thread
From: David Miller @ 2011-07-19  1:58 UTC (permalink / raw)
  To: rmody; +Cc: netdev, adapter_linux_open_src_team

From: Rasesh Mody <rmody@brocade.com>
Date: Mon, 18 Jul 2011 17:48:03 -0700

> We wanted to avoid submitting in-development code that could break
> the upstream driver due to partial changes.

So what you're saying is that if only some of the patches are applied
the driver will not work properly?

That's not allowed either: your set of changes must be fully
bisectable, meaning that at any point in the series the driver
must still compile and work properly.

You guys need to sort out how you submit your changes.

When you have 4 patches ready to go, post them immediately.

Because if there are fundamental problems with the first 4,
everything else that depends upon them will need to change.
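
[For illustration only, not from this thread: one common way to keep a large series bisectable is to introduce new support code first, in its own files, with nothing calling it yet, and only wire it into the existing paths in a later patch, so the tree compiles and works at every point in the series. A hypothetical sketch, with made-up function names:

	/* Patch N: add the new support code in a new file; nothing
	 * calls it yet, so the driver builds and behaves exactly as
	 * before this patch.
	 */
	int bna_new_asic_setup(struct bna *bna)
	{
		/* ... new-hardware initialization ... */
		return 0;
	}

	/* Patch N+k: only once all the support code from the earlier
	 * patches is in place, switch the existing init path over.
	 */
	err = bna_new_asic_setup(bna);
	if (err)
		return err;
]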


* RE: [PATCH 00/45] Update bna driver version to 3.0.2.0
  2011-07-19  1:58     ` David Miller
@ 2011-07-19 22:53       ` Rasesh Mody
  2011-07-19 23:24         ` David Miller
  0 siblings, 1 reply; 16+ messages in thread
From: Rasesh Mody @ 2011-07-19 22:53 UTC (permalink / raw)
  To: David Miller; +Cc: netdev@vger.kernel.org, Adapter Linux Open SRC Team

>From: David Miller [mailto:davem@davemloft.net]
>Sent: Monday, July 18, 2011 6:58 PM
>
>From: Rasesh Mody <rmody@brocade.com>
>Date: Mon, 18 Jul 2011 17:48:03 -0700
>
>> We wanted to avoid submitting developing code that could break
>> upstream driver due to partial changes.
>
>So what you're saying is that if only some of the patches are applied
>the driver will not work properly?
>
>That's not allowed either, your set of changes must be fully
>bisectable meaning that at any point in the series the driver
>must still compile and work properly.
>
>You guys need to sort out how you guys submit your changes.
>
>When you have 4 patches ready to go, post them immediately.
>
>Because if there are fundamental problem with the first 4,
>everything else that depends upon them will need to change.

David,

We can further split the submission into multiple smaller submissions (e.g. 4 patches at a time, as you suggested), but there will still be one big patch set for the new hardware and feature enablement, which cannot be split into small bisectable patches. Is this acceptable? What is the upstream guideline for submitting big changes such as a driver re-architecture or re-organization?

Thanks,
--Rasesh


* Re: [PATCH 00/45] Update bna driver version to 3.0.2.0
  2011-07-19 22:53       ` Rasesh Mody
@ 2011-07-19 23:24         ` David Miller
  0 siblings, 0 replies; 16+ messages in thread
From: David Miller @ 2011-07-19 23:24 UTC (permalink / raw)
  To: rmody; +Cc: netdev, adapter_linux_open_src_team

From: Rasesh Mody <rmody@brocade.com>
Date: Tue, 19 Jul 2011 15:53:06 -0700

> We can further split the submission into multiple smaller
> submissions (e.g. 4 patches a time as you suggested), but there
> still will be one big patch set for new hardware and feature
> enablement

That's fine, just don't add the patch that causes the driver
to recognize the new device IDs until all the necessary
support code is there.
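
[Concretely, this means the pci_device_id entry that matches the new ASIC goes into the very last patch of the series. A minimal sketch of what such a final patch adds — the CT/CT2 device-ID macros below are placeholder names, not necessarily the driver's real ones:

	static DEFINE_PCI_DEVICE_TABLE(bnad_pci_id_table) = {
		{ PCI_DEVICE(PCI_VENDOR_ID_BROCADE,
			     PCI_DEVICE_ID_BROCADE_CT) },
		/* Added only by the final patch, once all 1860 support
		 * code from the earlier patches is in place:
		 */
		{ PCI_DEVICE(PCI_VENDOR_ID_BROCADE,
			     PCI_DEVICE_ID_BROCADE_CT2) },
		{ 0, }
	};
	MODULE_DEVICE_TABLE(pci, bnad_pci_id_table);
]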

