* [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels.
@ 2016-01-03  4:44 Michael Chan
  2016-01-03  4:44 ` [PATCH net-next 1/7] bnxt_en: Refactor bnxt_dbg_dump_states() Michael Chan
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Michael Chan @ 2016-01-03  4:44 UTC (permalink / raw)
  To: davem; +Cc: netdev

The bnxt hardware uses a completion ring for rx and tx events.  The driver
has to process the completion ring entries sequentially for the events.
The current code only supports an rx/tx ring pair for each completion ring.
This patch series adds support for using a dedicated completion ring for
rx only or tx only as an option configurable using ethtool -L.

The benefits of using dedicated completion rings are:

1. A burst of rx packets can delay the processing of tx events if the
completion ring is shared.  If the tx queue is stopped by BQL, this can
delay re-starting the tx queue.

2. A completion ring is sized according to the rx or tx ring size it
serves, rounded up to the nearest power of 2.  When the completion ring is
shared, it is sized by adding the rx and tx ring sizes and then rounding up
to the next power of 2, often wasting a lot of space (see the sizing sketch
after this list).

3. With dedicated completion rings, the rx and tx coalescing parameters
can be adjusted independently.
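
To make point 2 concrete, here is a minimal standalone sketch (not driver
code; the 900/500 ring sizes are made-up examples) of how many completion
ring entries each scheme ends up with:

#include <stdio.h>

/* Round up to the next power of 2, as described in point 2 above. */
static unsigned int roundup_pow_of_two(unsigned int n)
{
	unsigned int p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

int main(void)
{
	unsigned int rx = 900, tx = 500;	/* example rx/tx ring sizes */

	/* Shared: size for rx + tx, then round up. */
	printf("shared cp ring:       %u\n", roundup_pow_of_two(rx + tx));	/* 2048 */
	/* Dedicated: each completion ring is rounded up separately. */
	printf("dedicated rx cp ring: %u\n", roundup_pow_of_two(rx));		/* 1024 */
	printf("dedicated tx cp ring: %u\n", roundup_pow_of_two(tx));		/*  512 */
	return 0;
}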


Michael Chan (7):
  bnxt_en: Refactor bnxt_dbg_dump_states().
  bnxt_en: Separate bnxt_{rx|tx}_ring_info structs from bnxt_napi
    struct.
  bnxt_en: Check for NULL rx or tx ring.
  bnxt_en: Re-structure ring indexing and mapping.
  bnxt_en: Modify bnxt_get_max_rings() to support shared or non shared
    rings.
  bnxt_en: Modify init sequence to support shared or non shared rings.
  bnxt_en: Modify ethtool -l|-L to support combined or rx/tx rings.

 drivers/net/ethernet/broadcom/bnxt/bnxt.c         | 439 +++++++++++++---------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h         |  15 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c |  59 ++-
 3 files changed, 328 insertions(+), 185 deletions(-)

-- 
1.8.3.1


* [PATCH net-next 1/7] bnxt_en: Refactor bnxt_dbg_dump_states().
  2016-01-03  4:44 [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels Michael Chan
@ 2016-01-03  4:44 ` Michael Chan
  2016-01-03  4:44 ` [PATCH net-next 2/7] bnxt_en: Separate bnxt_{rx|tx}_ring_info structs from bnxt_napi struct Michael Chan
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Michael Chan @ 2016-01-03  4:44 UTC (permalink / raw)
  To: davem; +Cc: netdev

Refactor it by adding 3 separate functions to dump the tx, rx, and
completion ring software states.

Signed-off-by: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 50 ++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 319653a..3f053de 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -5015,31 +5015,47 @@ static int bnxt_set_features(struct net_device *dev, netdev_features_t features)
 	return rc;
 }
 
+static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi)
+{
+	struct bnxt_tx_ring_info *txr = &bnapi->tx_ring;
+	int i = bnapi->index;
+
+	netdev_info(bnapi->bp->dev, "[%d]: tx{fw_ring: %d prod: %x cons: %x}\n",
+		    i, txr->tx_ring_struct.fw_ring_id, txr->tx_prod,
+		    txr->tx_cons);
+}
+
+static void bnxt_dump_rx_sw_state(struct bnxt_napi *bnapi)
+{
+	struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+	int i = bnapi->index;
+
+	netdev_info(bnapi->bp->dev, "[%d]: rx{fw_ring: %d prod: %x} rx_agg{fw_ring: %d agg_prod: %x sw_agg_prod: %x}\n",
+		    i, rxr->rx_ring_struct.fw_ring_id, rxr->rx_prod,
+		    rxr->rx_agg_ring_struct.fw_ring_id, rxr->rx_agg_prod,
+		    rxr->rx_sw_agg_prod);
+}
+
+static void bnxt_dump_cp_sw_state(struct bnxt_napi *bnapi)
+{
+	struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
+	int i = bnapi->index;
+
+	netdev_info(bnapi->bp->dev, "[%d]: cp{fw_ring: %d raw_cons: %x}\n",
+		    i, cpr->cp_ring_struct.fw_ring_id, cpr->cp_raw_cons);
+}
+
 static void bnxt_dbg_dump_states(struct bnxt *bp)
 {
 	int i;
 	struct bnxt_napi *bnapi;
-	struct bnxt_tx_ring_info *txr;
-	struct bnxt_rx_ring_info *rxr;
-	struct bnxt_cp_ring_info *cpr;
 
 	for (i = 0; i < bp->cp_nr_rings; i++) {
 		bnapi = bp->bnapi[i];
-		txr = &bnapi->tx_ring;
-		rxr = &bnapi->rx_ring;
-		cpr = &bnapi->cp_ring;
 		if (netif_msg_drv(bp)) {
-			netdev_info(bp->dev, "[%d]: tx{fw_ring: %d prod: %x cons: %x}\n",
-				    i, txr->tx_ring_struct.fw_ring_id,
-				    txr->tx_prod, txr->tx_cons);
-			netdev_info(bp->dev, "[%d]: rx{fw_ring: %d prod: %x} rx_agg{fw_ring: %d agg_prod: %x sw_agg_prod: %x}\n",
-				    i, rxr->rx_ring_struct.fw_ring_id,
-				    rxr->rx_prod,
-				    rxr->rx_agg_ring_struct.fw_ring_id,
-				    rxr->rx_agg_prod, rxr->rx_sw_agg_prod);
-			netdev_info(bp->dev, "[%d]: cp{fw_ring: %d raw_cons: %x}\n",
-				    i, cpr->cp_ring_struct.fw_ring_id,
-				    cpr->cp_raw_cons);
+			bnxt_dump_tx_sw_state(bnapi);
+			bnxt_dump_rx_sw_state(bnapi);
+			bnxt_dump_cp_sw_state(bnapi);
 		}
 	}
 }
-- 
1.8.3.1


* [PATCH net-next 2/7] bnxt_en: Separate bnxt_{rx|tx}_ring_info structs from bnxt_napi struct.
  2016-01-03  4:44 [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels Michael Chan
  2016-01-03  4:44 ` [PATCH net-next 1/7] bnxt_en: Refactor bnxt_dbg_dump_states() Michael Chan
@ 2016-01-03  4:44 ` Michael Chan
  2016-01-03  4:45 ` [PATCH net-next 3/7] bnxt_en: Check for NULL rx or tx ring Michael Chan
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Michael Chan @ 2016-01-03  4:44 UTC (permalink / raw)
  To: davem; +Cc: netdev

Currently, an rx and a tx ring are always paired with a completion ring.
We want to restructure it so that it is possible to have a dedicated
completion ring for tx or rx only.

The bnxt hardware uses a completion ring for rx and tx events.  The driver
has to process the completion ring entries sequentially for the rx and tx
events.  Using a dedicated completion ring for rx only or tx only has these
benefits:

1. A burst of rx packets can delay the processing of tx events if the
completion ring is shared.  If the tx queue is stopped by BQL, this can
delay re-starting the tx queue.

2. A completion ring is sized according to the rx or tx ring size it
serves, rounded up to the nearest power of 2.  When the completion ring is
shared, it is sized by adding the rx and tx ring sizes and then rounding up
to the next power of 2, often wasting a lot of space.

3. With dedicated completion rings, the rx and tx coalescing parameters
can be adjusted independently.

The first step is to separate the rx and tx ring structures from the
bnxt_napi struct.

In this patch, an rx ring and a tx ring still point to the same bnxt_napi
struct and therefore share the same completion ring.  There is no change in
ring assignment and mapping yet.
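
For orientation before the diff, a simplified sketch of the ownership
relationships after this patch (most fields omitted; the real definitions
are in the bnxt.h hunk below):

struct bnxt_napi;

struct bnxt_tx_ring_info {
	struct bnxt_napi	*bnapi;		/* back-pointer to the NAPI/completion ring */
};

struct bnxt_rx_ring_info {
	struct bnxt_napi	*bnapi;
};

struct bnxt_napi {
	struct bnxt_rx_ring_info	*rx_ring;	/* may later be NULL for tx-only */
	struct bnxt_tx_ring_info	*tx_ring;	/* may later be NULL for rx-only */
};

struct bnxt {
	struct bnxt_napi		**bnapi;
	struct bnxt_rx_ring_info	*rx_ring;	/* array of rx_nr_rings entries */
	struct bnxt_tx_ring_info	*tx_ring;	/* array of tx_nr_rings entries */
};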

Signed-off-by: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 147 +++++++++++++-----------------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |   9 +-
 2 files changed, 70 insertions(+), 86 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 3f053de..8635e03 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -179,7 +179,6 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u32 len, free_size, vlan_tag_flags, cfa_action, flags;
 	u16 prod, last_frag;
 	struct pci_dev *pdev = bp->pdev;
-	struct bnxt_napi *bnapi;
 	struct bnxt_tx_ring_info *txr;
 	struct bnxt_sw_tx_bd *tx_buf;
 
@@ -189,8 +188,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		return NETDEV_TX_OK;
 	}
 
-	bnapi = bp->bnapi[i];
-	txr = &bnapi->tx_ring;
+	txr = &bp->tx_ring[i];
 	txq = netdev_get_tx_queue(dev, i);
 	prod = txr->tx_prod;
 
@@ -423,7 +421,7 @@ tx_dma_error:
 
 static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
 {
-	struct bnxt_tx_ring_info *txr = &bnapi->tx_ring;
+	struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
 	int index = bnapi->index;
 	struct netdev_queue *txq = netdev_get_tx_queue(bp->dev, index);
 	u16 cons = txr->tx_cons;
@@ -602,7 +600,7 @@ static void bnxt_reuse_rx_agg_bufs(struct bnxt_napi *bnapi, u16 cp_cons,
 {
 	struct bnxt *bp = bnapi->bp;
 	struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
-	struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+	struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
 	u16 prod = rxr->rx_agg_prod;
 	u16 sw_prod = rxr->rx_sw_agg_prod;
 	u32 i;
@@ -681,7 +679,7 @@ static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, struct bnxt_napi *bnapi,
 {
 	struct pci_dev *pdev = bp->pdev;
 	struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
-	struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+	struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
 	u16 prod = rxr->rx_agg_prod;
 	u32 i;
 
@@ -940,7 +938,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
 					   bool *agg_event)
 {
 	struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
-	struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+	struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
 	u8 agg_id = TPA_END_AGG_ID(tpa_end);
 	u8 *data, agg_bufs;
 	u16 cp_cons = RING_CMP(*raw_cons);
@@ -1056,7 +1054,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_napi *bnapi, u32 *raw_cons,
 		       bool *agg_event)
 {
 	struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
-	struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+	struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
 	struct net_device *dev = bp->dev;
 	struct rx_cmp *rxcmp;
 	struct rx_cmp_ext *rxcmp1;
@@ -1390,7 +1388,7 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 		bnxt_tx_int(bp, bnapi, tx_pkts);
 
 	if (rx_event) {
-		struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+		struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
 
 		writel(DB_KEY_RX | rxr->rx_prod, rxr->rx_doorbell);
 		writel(DB_KEY_RX | rxr->rx_prod, rxr->rx_doorbell);
@@ -1459,19 +1457,14 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
 	int i, max_idx;
 	struct pci_dev *pdev = bp->pdev;
 
-	if (!bp->bnapi)
+	if (!bp->tx_ring)
 		return;
 
 	max_idx = bp->tx_nr_pages * TX_DESC_CNT;
 	for (i = 0; i < bp->tx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_tx_ring_info *txr;
+		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 		int j;
 
-		if (!bnapi)
-			continue;
-
-		txr = &bnapi->tx_ring;
 		for (j = 0; j < max_idx;) {
 			struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
 			struct sk_buff *skb = tx_buf->skb;
@@ -1517,21 +1510,15 @@ static void bnxt_free_rx_skbs(struct bnxt *bp)
 	int i, max_idx, max_agg_idx;
 	struct pci_dev *pdev = bp->pdev;
 
-	if (!bp->bnapi)
+	if (!bp->rx_ring)
 		return;
 
 	max_idx = bp->rx_nr_pages * RX_DESC_CNT;
 	max_agg_idx = bp->rx_agg_nr_pages * RX_DESC_CNT;
 	for (i = 0; i < bp->rx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_rx_ring_info *rxr;
+		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		int j;
 
-		if (!bnapi)
-			continue;
-
-		rxr = &bnapi->rx_ring;
-
 		if (rxr->rx_tpa) {
 			for (j = 0; j < MAX_TPA; j++) {
 				struct bnxt_tpa_info *tpa_info =
@@ -1659,19 +1646,13 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
 {
 	int i;
 
-	if (!bp->bnapi)
+	if (!bp->rx_ring)
 		return;
 
 	for (i = 0; i < bp->rx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_rx_ring_info *rxr;
+		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring;
 
-		if (!bnapi)
-			continue;
-
-		rxr = &bnapi->rx_ring;
-
 		kfree(rxr->rx_tpa);
 		rxr->rx_tpa = NULL;
 
@@ -1690,6 +1671,9 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
 {
 	int i, rc, agg_rings = 0, tpa_rings = 0;
 
+	if (!bp->rx_ring)
+		return -ENOMEM;
+
 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
 		agg_rings = 1;
 
@@ -1697,14 +1681,9 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
 		tpa_rings = 1;
 
 	for (i = 0; i < bp->rx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_rx_ring_info *rxr;
+		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring;
 
-		if (!bnapi)
-			continue;
-
-		rxr = &bnapi->rx_ring;
 		ring = &rxr->rx_ring_struct;
 
 		rc = bnxt_alloc_ring(bp, ring);
@@ -1742,19 +1721,13 @@ static void bnxt_free_tx_rings(struct bnxt *bp)
 	int i;
 	struct pci_dev *pdev = bp->pdev;
 
-	if (!bp->bnapi)
+	if (!bp->tx_ring)
 		return;
 
 	for (i = 0; i < bp->tx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_tx_ring_info *txr;
+		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 		struct bnxt_ring_struct *ring;
 
-		if (!bnapi)
-			continue;
-
-		txr = &bnapi->tx_ring;
-
 		if (txr->tx_push) {
 			dma_free_coherent(&pdev->dev, bp->tx_push_size,
 					  txr->tx_push, txr->tx_push_mapping);
@@ -1788,14 +1761,9 @@ static int bnxt_alloc_tx_rings(struct bnxt *bp)
 	}
 
 	for (i = 0, j = 0; i < bp->tx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_tx_ring_info *txr;
+		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 		struct bnxt_ring_struct *ring;
 
-		if (!bnapi)
-			continue;
-
-		txr = &bnapi->tx_ring;
 		ring = &txr->tx_ring_struct;
 
 		rc = bnxt_alloc_ring(bp, ring);
@@ -1898,7 +1866,7 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		ring->dma_arr = cpr->cp_desc_mapping;
 		ring->vmem_size = 0;
 
-		rxr = &bnapi->rx_ring;
+		rxr = bnapi->rx_ring;
 		ring = &rxr->rx_ring_struct;
 		ring->nr_pages = bp->rx_nr_pages;
 		ring->page_size = HW_RXBD_RING_SIZE;
@@ -1915,7 +1883,7 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		ring->vmem_size = SW_RXBD_AGG_RING_SIZE * bp->rx_agg_nr_pages;
 		ring->vmem = (void **)&rxr->rx_agg_ring;
 
-		txr = &bnapi->tx_ring;
+		txr = bnapi->tx_ring;
 		ring = &txr->tx_ring_struct;
 		ring->nr_pages = bp->tx_nr_pages;
 		ring->page_size = HW_RXBD_RING_SIZE;
@@ -1951,22 +1919,18 @@ static void bnxt_init_rxbd_pages(struct bnxt_ring_struct *ring, u32 type)
 static int bnxt_init_one_rx_ring(struct bnxt *bp, int ring_nr)
 {
 	struct net_device *dev = bp->dev;
-	struct bnxt_napi *bnapi = bp->bnapi[ring_nr];
 	struct bnxt_rx_ring_info *rxr;
 	struct bnxt_ring_struct *ring;
 	u32 prod, type;
 	int i;
 
-	if (!bnapi)
-		return -EINVAL;
-
 	type = (bp->rx_buf_use_size << RX_BD_LEN_SHIFT) |
 		RX_BD_TYPE_RX_PACKET_BD | RX_BD_FLAGS_EOP;
 
 	if (NET_IP_ALIGN == 2)
 		type |= RX_BD_FLAGS_SOP;
 
-	rxr = &bnapi->rx_ring;
+	rxr = &bp->rx_ring[ring_nr];
 	ring = &rxr->rx_ring_struct;
 	bnxt_init_rxbd_pages(ring, type);
 
@@ -2048,8 +2012,7 @@ static int bnxt_init_tx_rings(struct bnxt *bp)
 				   MAX_SKB_FRAGS + 1);
 
 	for (i = 0; i < bp->tx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_tx_ring_info *txr = &bnapi->tx_ring;
+		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 		struct bnxt_ring_struct *ring = &txr->tx_ring_struct;
 
 		ring->fw_ring_id = INVALID_HW_RING_ID;
@@ -2436,11 +2399,11 @@ static void bnxt_clear_ring_indices(struct bnxt *bp)
 		cpr = &bnapi->cp_ring;
 		cpr->cp_raw_cons = 0;
 
-		txr = &bnapi->tx_ring;
+		txr = bnapi->tx_ring;
 		txr->tx_prod = 0;
 		txr->tx_cons = 0;
 
-		rxr = &bnapi->rx_ring;
+		rxr = bnapi->rx_ring;
 		rxr->rx_prod = 0;
 		rxr->rx_agg_prod = 0;
 		rxr->rx_sw_agg_prod = 0;
@@ -2509,6 +2472,10 @@ static void bnxt_free_mem(struct bnxt *bp, bool irq_re_init)
 		bnxt_free_stats(bp);
 		bnxt_free_ring_grps(bp);
 		bnxt_free_vnics(bp);
+		kfree(bp->tx_ring);
+		bp->tx_ring = NULL;
+		kfree(bp->rx_ring);
+		bp->rx_ring = NULL;
 		kfree(bp->bnapi);
 		bp->bnapi = NULL;
 	} else {
@@ -2540,6 +2507,28 @@ static int bnxt_alloc_mem(struct bnxt *bp, bool irq_re_init)
 			bp->bnapi[i]->bp = bp;
 		}
 
+		bp->rx_ring = kcalloc(bp->rx_nr_rings,
+				      sizeof(struct bnxt_rx_ring_info),
+				      GFP_KERNEL);
+		if (!bp->rx_ring)
+			return -ENOMEM;
+
+		for (i = 0; i < bp->rx_nr_rings; i++) {
+			bp->rx_ring[i].bnapi = bp->bnapi[i];
+			bp->bnapi[i]->rx_ring = &bp->rx_ring[i];
+		}
+
+		bp->tx_ring = kcalloc(bp->tx_nr_rings,
+				      sizeof(struct bnxt_tx_ring_info),
+				      GFP_KERNEL);
+		if (!bp->tx_ring)
+			return -ENOMEM;
+
+		for (i = 0; i < bp->tx_nr_rings; i++) {
+			bp->tx_ring[i].bnapi = bp->bnapi[i];
+			bp->bnapi[i]->tx_ring = &bp->tx_ring[i];
+		}
+
 		rc = bnxt_alloc_stats(bp);
 		if (rc)
 			goto alloc_mem_err;
@@ -3332,8 +3321,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 	}
 
 	for (i = 0; i < bp->tx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_tx_ring_info *txr = &bnapi->tx_ring;
+		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 		struct bnxt_ring_struct *ring = &txr->tx_ring_struct;
 		u16 fw_stats_ctx = bp->grp_info[i].fw_stats_ctx;
 
@@ -3345,8 +3333,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 	}
 
 	for (i = 0; i < bp->rx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
 
 		rc = hwrm_ring_alloc_send_msg(bp, ring, HWRM_RING_ALLOC_RX, i,
@@ -3360,8 +3347,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 
 	if (bp->flags & BNXT_FLAG_AGG_RINGS) {
 		for (i = 0; i < bp->rx_nr_rings; i++) {
-			struct bnxt_napi *bnapi = bp->bnapi[i];
-			struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+			struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 			struct bnxt_ring_struct *ring =
 						&rxr->rx_agg_ring_struct;
 
@@ -3431,8 +3417,7 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 		return;
 
 	for (i = 0; i < bp->tx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_tx_ring_info *txr = &bnapi->tx_ring;
+		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 		struct bnxt_ring_struct *ring = &txr->tx_ring_struct;
 		u32 cmpl_ring_id = bp->grp_info[i].cp_fw_ring_id;
 
@@ -3446,8 +3431,7 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 	}
 
 	for (i = 0; i < bp->rx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
 		u32 cmpl_ring_id = bp->grp_info[i].cp_fw_ring_id;
 
@@ -3462,8 +3446,7 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 	}
 
 	for (i = 0; i < bp->rx_nr_rings; i++) {
-		struct bnxt_napi *bnapi = bp->bnapi[i];
-		struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
 		u32 cmpl_ring_id = bp->grp_info[i].cp_fw_ring_id;
 
@@ -4258,14 +4241,12 @@ static void bnxt_enable_napi(struct bnxt *bp)
 static void bnxt_tx_disable(struct bnxt *bp)
 {
 	int i;
-	struct bnxt_napi *bnapi;
 	struct bnxt_tx_ring_info *txr;
 	struct netdev_queue *txq;
 
-	if (bp->bnapi) {
+	if (bp->tx_ring) {
 		for (i = 0; i < bp->tx_nr_rings; i++) {
-			bnapi = bp->bnapi[i];
-			txr = &bnapi->tx_ring;
+			txr = &bp->tx_ring[i];
 			txq = netdev_get_tx_queue(bp->dev, i);
 			__netif_tx_lock(txq, smp_processor_id());
 			txr->dev_state = BNXT_DEV_STATE_CLOSING;
@@ -4280,13 +4261,11 @@ static void bnxt_tx_disable(struct bnxt *bp)
 static void bnxt_tx_enable(struct bnxt *bp)
 {
 	int i;
-	struct bnxt_napi *bnapi;
 	struct bnxt_tx_ring_info *txr;
 	struct netdev_queue *txq;
 
 	for (i = 0; i < bp->tx_nr_rings; i++) {
-		bnapi = bp->bnapi[i];
-		txr = &bnapi->tx_ring;
+		txr = &bp->tx_ring[i];
 		txq = netdev_get_tx_queue(bp->dev, i);
 		txr->dev_state = 0;
 	}
@@ -5017,7 +4996,7 @@ static int bnxt_set_features(struct net_device *dev, netdev_features_t features)
 
 static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi)
 {
-	struct bnxt_tx_ring_info *txr = &bnapi->tx_ring;
+	struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
 	int i = bnapi->index;
 
 	netdev_info(bnapi->bp->dev, "[%d]: tx{fw_ring: %d prod: %x cons: %x}\n",
@@ -5027,7 +5006,7 @@ static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi)
 
 static void bnxt_dump_rx_sw_state(struct bnxt_napi *bnapi)
 {
-	struct bnxt_rx_ring_info *rxr = &bnapi->rx_ring;
+	struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
 	int i = bnapi->index;
 
 	netdev_info(bnapi->bp->dev, "[%d]: rx{fw_ring: %d prod: %x} rx_agg{fw_ring: %d agg_prod: %x sw_agg_prod: %x}\n",
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 78d639d..ea2fd92 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -528,6 +528,7 @@ struct tx_push_bd {
 };
 
 struct bnxt_tx_ring_info {
+	struct bnxt_napi	*bnapi;
 	u16			tx_prod;
 	u16			tx_cons;
 	void __iomem		*tx_doorbell;
@@ -558,6 +559,7 @@ struct bnxt_tpa_info {
 };
 
 struct bnxt_rx_ring_info {
+	struct bnxt_napi	*bnapi;
 	u16			rx_prod;
 	u16			rx_agg_prod;
 	u16			rx_sw_agg_prod;
@@ -604,8 +606,8 @@ struct bnxt_napi {
 
 	int			index;
 	struct bnxt_cp_ring_info	cp_ring;
-	struct bnxt_rx_ring_info	rx_ring;
-	struct bnxt_tx_ring_info	tx_ring;
+	struct bnxt_rx_ring_info	*rx_ring;
+	struct bnxt_tx_ring_info	*tx_ring;
 
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	atomic_t		poll_state;
@@ -884,6 +886,9 @@ struct bnxt {
 
 	struct bnxt_napi	**bnapi;
 
+	struct bnxt_rx_ring_info	*rx_ring;
+	struct bnxt_tx_ring_info	*tx_ring;
+
 	u32			rx_buf_size;
 	u32			rx_buf_use_size;	/* useable size */
 	u32			rx_ring_size;
-- 
1.8.3.1


* [PATCH net-next 3/7] bnxt_en: Check for NULL rx or tx ring.
  2016-01-03  4:44 [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels Michael Chan
  2016-01-03  4:44 ` [PATCH net-next 1/7] bnxt_en: Refactor bnxt_dbg_dump_states() Michael Chan
  2016-01-03  4:44 ` [PATCH net-next 2/7] bnxt_en: Separate bnxt_{rx|tx}_ring_info structs from bnxt_napi struct Michael Chan
@ 2016-01-03  4:45 ` Michael Chan
  2016-01-03  4:45 ` [PATCH net-next 4/7] bnxt_en: Re-structure ring indexing and mapping Michael Chan
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Michael Chan @ 2016-01-03  4:45 UTC (permalink / raw)
  To: davem; +Cc: netdev

Each bnxt_napi structure may no longer have both an rx ring and
a tx ring.  Check for a valid ring before using it.

Signed-off-by: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 8635e03..11478c9 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -1867,6 +1867,9 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		ring->vmem_size = 0;
 
 		rxr = bnapi->rx_ring;
+		if (!rxr)
+			goto skip_rx;
+
 		ring = &rxr->rx_ring_struct;
 		ring->nr_pages = bp->rx_nr_pages;
 		ring->page_size = HW_RXBD_RING_SIZE;
@@ -1883,7 +1886,11 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		ring->vmem_size = SW_RXBD_AGG_RING_SIZE * bp->rx_agg_nr_pages;
 		ring->vmem = (void **)&rxr->rx_agg_ring;
 
+skip_rx:
 		txr = bnapi->tx_ring;
+		if (!txr)
+			continue;
+
 		ring = &txr->tx_ring_struct;
 		ring->nr_pages = bp->tx_nr_pages;
 		ring->page_size = HW_RXBD_RING_SIZE;
@@ -2400,13 +2407,17 @@ static void bnxt_clear_ring_indices(struct bnxt *bp)
 		cpr->cp_raw_cons = 0;
 
 		txr = bnapi->tx_ring;
-		txr->tx_prod = 0;
-		txr->tx_cons = 0;
+		if (txr) {
+			txr->tx_prod = 0;
+			txr->tx_cons = 0;
+		}
 
 		rxr = bnapi->rx_ring;
-		rxr->rx_prod = 0;
-		rxr->rx_agg_prod = 0;
-		rxr->rx_sw_agg_prod = 0;
+		if (rxr) {
+			rxr->rx_prod = 0;
+			rxr->rx_agg_prod = 0;
+			rxr->rx_sw_agg_prod = 0;
+		}
 	}
 }
 
@@ -4999,6 +5010,9 @@ static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi)
 	struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
 	int i = bnapi->index;
 
+	if (!txr)
+		return;
+
 	netdev_info(bnapi->bp->dev, "[%d]: tx{fw_ring: %d prod: %x cons: %x}\n",
 		    i, txr->tx_ring_struct.fw_ring_id, txr->tx_prod,
 		    txr->tx_cons);
@@ -5009,6 +5023,9 @@ static void bnxt_dump_rx_sw_state(struct bnxt_napi *bnapi)
 	struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
 	int i = bnapi->index;
 
+	if (!rxr)
+		return;
+
 	netdev_info(bnapi->bp->dev, "[%d]: rx{fw_ring: %d prod: %x} rx_agg{fw_ring: %d agg_prod: %x sw_agg_prod: %x}\n",
 		    i, rxr->rx_ring_struct.fw_ring_id, rxr->rx_prod,
 		    rxr->rx_agg_ring_struct.fw_ring_id, rxr->rx_agg_prod,
-- 
1.8.3.1


* [PATCH net-next 4/7] bnxt_en: Re-structure ring indexing and mapping.
  2016-01-03  4:44 [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels Michael Chan
                   ` (2 preceding siblings ...)
  2016-01-03  4:45 ` [PATCH net-next 3/7] bnxt_en: Check for NULL rx or tx ring Michael Chan
@ 2016-01-03  4:45 ` Michael Chan
  2016-01-03  4:45 ` [PATCH net-next 5/7] bnxt_en: Modify bnxt_get_max_rings() to support shared or non shared rings Michael Chan
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Michael Chan @ 2016-01-03  4:45 UTC (permalink / raw)
  To: davem; +Cc: netdev

In order to support dedicated or shared completion rings, the ring
indexing and mapping are re-structured as below:

1. bp->grp_info[] array index is 1:1 with bp->bnapi[] array index and
completion ring index.

2. rx rings 0 to n will be mapped to completion rings 0 to n.

3. If tx and rx rings share completion rings, then tx rings 0 to m will
be mapped to completion rings 0 to m.

4. If tx and rx rings use dedicated completion rings, then tx rings 0 to
m will be mapped to completion rings n + 1 to n + m.

5. Each tx or rx ring will use the corresponding completion ring index
for doorbell mapping and MSIX mapping (see the index-mapping sketch after
this list).
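
A minimal sketch of the index mapping described above (standalone
illustration only; these helper names are made up and are not driver
symbols):

#include <stdbool.h>
#include <stdio.h>

/* rx ring i always maps to completion ring i (points 1 and 2 above). */
static int rx_cp_index(int i)
{
	return i;
}

/*
 * tx ring i maps to completion ring i when rings are shared (point 3),
 * and to completion ring rx_nr_rings + i when tx has dedicated
 * completion rings (point 4).
 */
static int tx_cp_index(int i, int rx_nr_rings, bool shared)
{
	return shared ? i : rx_nr_rings + i;
}

int main(void)
{
	/* example: 4 rx rings, 4 tx rings */
	printf("rx 2             -> cp %d\n", rx_cp_index(2));			/* cp 2 */
	printf("tx 2 (shared)    -> cp %d\n", tx_cp_index(2, 4, true));		/* cp 2 */
	printf("tx 2 (dedicated) -> cp %d\n", tx_cp_index(2, 4, false));	/* cp 6 */
	return 0;
}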

Signed-off-by: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 91 ++++++++++++++++++-------------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  2 +
 2 files changed, 55 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 11478c9..8a54dab 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -422,7 +422,7 @@ tx_dma_error:
 static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
 {
 	struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
-	int index = bnapi->index;
+	int index = txr - &bp->tx_ring[0];
 	struct netdev_queue *txq = netdev_get_tx_queue(bp->dev, index);
 	u16 cons = txr->tx_cons;
 	struct pci_dev *pdev = bp->pdev;
@@ -3082,7 +3082,7 @@ static int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, u16 vnic_id)
 
 static int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id)
 {
-	int grp_idx = 0;
+	unsigned int ring = 0, grp_idx;
 	struct bnxt_vnic_info *vnic = &bp->vnic_info[vnic_id];
 	struct hwrm_vnic_cfg_input req = {0};
 
@@ -3093,10 +3093,11 @@ static int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id)
 	req.rss_rule = cpu_to_le16(vnic->fw_rss_cos_lb_ctx);
 	req.cos_rule = cpu_to_le16(0xffff);
 	if (vnic->flags & BNXT_VNIC_RSS_FLAG)
-		grp_idx = 0;
+		ring = 0;
 	else if (vnic->flags & BNXT_VNIC_RFS_FLAG)
-		grp_idx = vnic_id - 1;
+		ring = vnic_id - 1;
 
+	grp_idx = bp->rx_ring[ring].bnapi->index;
 	req.vnic_id = cpu_to_le16(vnic->fw_vnic_id);
 	req.dflt_ring_grp = cpu_to_le16(bp->grp_info[grp_idx].fw_grp_id);
 
@@ -3137,22 +3138,25 @@ static void bnxt_hwrm_vnic_free(struct bnxt *bp)
 		bnxt_hwrm_vnic_free_one(bp, i);
 }
 
-static int bnxt_hwrm_vnic_alloc(struct bnxt *bp, u16 vnic_id, u16 start_grp_id,
-				u16 end_grp_id)
+static int bnxt_hwrm_vnic_alloc(struct bnxt *bp, u16 vnic_id,
+				unsigned int start_rx_ring_idx,
+				unsigned int nr_rings)
 {
-	u32 rc = 0, i, j;
+	int rc = 0;
+	unsigned int i, j, grp_idx, end_idx = start_rx_ring_idx + nr_rings;
 	struct hwrm_vnic_alloc_input req = {0};
 	struct hwrm_vnic_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
 	/* map ring groups to this vnic */
-	for (i = start_grp_id, j = 0; i < end_grp_id; i++, j++) {
-		if (bp->grp_info[i].fw_grp_id == INVALID_HW_RING_ID) {
+	for (i = start_rx_ring_idx, j = 0; i < end_idx; i++, j++) {
+		grp_idx = bp->rx_ring[i].bnapi->index;
+		if (bp->grp_info[grp_idx].fw_grp_id == INVALID_HW_RING_ID) {
 			netdev_err(bp->dev, "Not enough ring groups avail:%x req:%x\n",
-				   j, (end_grp_id - start_grp_id));
+				   j, nr_rings);
 			break;
 		}
 		bp->vnic_info[vnic_id].fw_grp_ids[j] =
-					bp->grp_info[i].fw_grp_id;
+					bp->grp_info[grp_idx].fw_grp_id;
 	}
 
 	bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx = INVALID_HW_RING_ID;
@@ -3179,20 +3183,22 @@ static int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp)
 		struct hwrm_ring_grp_alloc_input req = {0};
 		struct hwrm_ring_grp_alloc_output *resp =
 					bp->hwrm_cmd_resp_addr;
+		unsigned int grp_idx = bp->rx_ring[i].bnapi->index;
 
 		bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_RING_GRP_ALLOC, -1, -1);
 
-		req.cr = cpu_to_le16(bp->grp_info[i].cp_fw_ring_id);
-		req.rr = cpu_to_le16(bp->grp_info[i].rx_fw_ring_id);
-		req.ar = cpu_to_le16(bp->grp_info[i].agg_fw_ring_id);
-		req.sc = cpu_to_le16(bp->grp_info[i].fw_stats_ctx);
+		req.cr = cpu_to_le16(bp->grp_info[grp_idx].cp_fw_ring_id);
+		req.rr = cpu_to_le16(bp->grp_info[grp_idx].rx_fw_ring_id);
+		req.ar = cpu_to_le16(bp->grp_info[grp_idx].agg_fw_ring_id);
+		req.sc = cpu_to_le16(bp->grp_info[grp_idx].fw_stats_ctx);
 
 		rc = _hwrm_send_message(bp, &req, sizeof(req),
 					HWRM_CMD_TIMEOUT);
 		if (rc)
 			break;
 
-		bp->grp_info[i].fw_grp_id = le32_to_cpu(resp->ring_group_id);
+		bp->grp_info[grp_idx].fw_grp_id =
+			le32_to_cpu(resp->ring_group_id);
 	}
 	mutex_unlock(&bp->hwrm_cmd_lock);
 	return rc;
@@ -3334,26 +3340,28 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 	for (i = 0; i < bp->tx_nr_rings; i++) {
 		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 		struct bnxt_ring_struct *ring = &txr->tx_ring_struct;
-		u16 fw_stats_ctx = bp->grp_info[i].fw_stats_ctx;
+		u32 map_idx = txr->bnapi->index;
+		u16 fw_stats_ctx = bp->grp_info[map_idx].fw_stats_ctx;
 
-		rc = hwrm_ring_alloc_send_msg(bp, ring, HWRM_RING_ALLOC_TX, i,
-					      fw_stats_ctx);
+		rc = hwrm_ring_alloc_send_msg(bp, ring, HWRM_RING_ALLOC_TX,
+					      map_idx, fw_stats_ctx);
 		if (rc)
 			goto err_out;
-		txr->tx_doorbell = bp->bar1 + i * 0x80;
+		txr->tx_doorbell = bp->bar1 + map_idx * 0x80;
 	}
 
 	for (i = 0; i < bp->rx_nr_rings; i++) {
 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
+		u32 map_idx = rxr->bnapi->index;
 
-		rc = hwrm_ring_alloc_send_msg(bp, ring, HWRM_RING_ALLOC_RX, i,
-					      INVALID_STATS_CTX_ID);
+		rc = hwrm_ring_alloc_send_msg(bp, ring, HWRM_RING_ALLOC_RX,
+					      map_idx, INVALID_STATS_CTX_ID);
 		if (rc)
 			goto err_out;
-		rxr->rx_doorbell = bp->bar1 + i * 0x80;
+		rxr->rx_doorbell = bp->bar1 + map_idx * 0x80;
 		writel(DB_KEY_RX | rxr->rx_prod, rxr->rx_doorbell);
-		bp->grp_info[i].rx_fw_ring_id = ring->fw_ring_id;
+		bp->grp_info[map_idx].rx_fw_ring_id = ring->fw_ring_id;
 	}
 
 	if (bp->flags & BNXT_FLAG_AGG_RINGS) {
@@ -3361,19 +3369,20 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 			struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 			struct bnxt_ring_struct *ring =
 						&rxr->rx_agg_ring_struct;
+			u32 grp_idx = rxr->bnapi->index;
+			u32 map_idx = grp_idx + bp->rx_nr_rings;
 
 			rc = hwrm_ring_alloc_send_msg(bp, ring,
 						      HWRM_RING_ALLOC_AGG,
-						      bp->rx_nr_rings + i,
+						      map_idx,
 						      INVALID_STATS_CTX_ID);
 			if (rc)
 				goto err_out;
 
-			rxr->rx_agg_doorbell =
-				bp->bar1 + (bp->rx_nr_rings + i) * 0x80;
+			rxr->rx_agg_doorbell = bp->bar1 + map_idx * 0x80;
 			writel(DB_KEY_RX | rxr->rx_agg_prod,
 			       rxr->rx_agg_doorbell);
-			bp->grp_info[i].agg_fw_ring_id = ring->fw_ring_id;
+			bp->grp_info[grp_idx].agg_fw_ring_id = ring->fw_ring_id;
 		}
 	}
 err_out:
@@ -3430,7 +3439,8 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 	for (i = 0; i < bp->tx_nr_rings; i++) {
 		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 		struct bnxt_ring_struct *ring = &txr->tx_ring_struct;
-		u32 cmpl_ring_id = bp->grp_info[i].cp_fw_ring_id;
+		u32 grp_idx = txr->bnapi->index;
+		u32 cmpl_ring_id = bp->grp_info[grp_idx].cp_fw_ring_id;
 
 		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
 			hwrm_ring_free_send_msg(bp, ring,
@@ -3444,7 +3454,8 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 	for (i = 0; i < bp->rx_nr_rings; i++) {
 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
-		u32 cmpl_ring_id = bp->grp_info[i].cp_fw_ring_id;
+		u32 grp_idx = rxr->bnapi->index;
+		u32 cmpl_ring_id = bp->grp_info[grp_idx].cp_fw_ring_id;
 
 		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
 			hwrm_ring_free_send_msg(bp, ring,
@@ -3452,14 +3463,16 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 						close_path ? cmpl_ring_id :
 						INVALID_HW_RING_ID);
 			ring->fw_ring_id = INVALID_HW_RING_ID;
-			bp->grp_info[i].rx_fw_ring_id = INVALID_HW_RING_ID;
+			bp->grp_info[grp_idx].rx_fw_ring_id =
+				INVALID_HW_RING_ID;
 		}
 	}
 
 	for (i = 0; i < bp->rx_nr_rings; i++) {
 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
-		u32 cmpl_ring_id = bp->grp_info[i].cp_fw_ring_id;
+		u32 grp_idx = rxr->bnapi->index;
+		u32 cmpl_ring_id = bp->grp_info[grp_idx].cp_fw_ring_id;
 
 		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
 			hwrm_ring_free_send_msg(bp, ring,
@@ -3467,7 +3480,8 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
 						close_path ? cmpl_ring_id :
 						INVALID_HW_RING_ID);
 			ring->fw_ring_id = INVALID_HW_RING_ID;
-			bp->grp_info[i].agg_fw_ring_id = INVALID_HW_RING_ID;
+			bp->grp_info[grp_idx].agg_fw_ring_id =
+				INVALID_HW_RING_ID;
 		}
 	}
 
@@ -3859,7 +3873,7 @@ static int bnxt_alloc_rfs_vnics(struct bnxt *bp)
 			break;
 
 		bp->vnic_info[vnic_id].flags |= BNXT_VNIC_RFS_FLAG;
-		rc = bnxt_hwrm_vnic_alloc(bp, vnic_id, ring_id, ring_id + 1);
+		rc = bnxt_hwrm_vnic_alloc(bp, vnic_id, ring_id, 1);
 		if (rc) {
 			netdev_err(bp->dev, "hwrm vnic %d alloc failure rc: %x\n",
 				   vnic_id, rc);
@@ -4165,7 +4179,7 @@ static void bnxt_free_irq(struct bnxt *bp)
 
 static int bnxt_request_irq(struct bnxt *bp)
 {
-	int i, rc = 0;
+	int i, j, rc = 0;
 	unsigned long flags = 0;
 #ifdef CONFIG_RFS_ACCEL
 	struct cpu_rmap *rmap = bp->dev->rx_cpu_rmap;
@@ -4174,14 +4188,15 @@ static int bnxt_request_irq(struct bnxt *bp)
 	if (!(bp->flags & BNXT_FLAG_USING_MSIX))
 		flags = IRQF_SHARED;
 
-	for (i = 0; i < bp->cp_nr_rings; i++) {
+	for (i = 0, j = 0; i < bp->cp_nr_rings; i++) {
 		struct bnxt_irq *irq = &bp->irq_tbl[i];
 #ifdef CONFIG_RFS_ACCEL
-		if (rmap && (i < bp->rx_nr_rings)) {
+		if (rmap && bp->bnapi[i]->rx_ring) {
 			rc = irq_cpu_rmap_add(rmap, irq->vector);
 			if (rc)
 				netdev_warn(bp->dev, "failed adding irq rmap for ring %d\n",
-					    i);
+					    j);
+			j++;
 		}
 #endif
 		rc = request_irq(irq->vector, irq->handler, flags, irq->name,
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index ea2fd92..ec3edcc 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -918,6 +918,8 @@ struct bnxt {
 	int			cp_nr_rings;
 
 	int			num_stat_ctxs;
+
+	/* grp_info indexed by completion ring index */
 	struct bnxt_ring_grp_info	*grp_info;
 	struct bnxt_vnic_info	*vnic_info;
 	int			nr_vnics;
-- 
1.8.3.1


* [PATCH net-next 5/7] bnxt_en: Modify bnxt_get_max_rings() to support shared or non shared rings.
  2016-01-03  4:44 [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels Michael Chan
                   ` (3 preceding siblings ...)
  2016-01-03  4:45 ` [PATCH net-next 4/7] bnxt_en: Re-structure ring indexing and mapping Michael Chan
@ 2016-01-03  4:45 ` Michael Chan
  2016-01-03  4:45 ` [PATCH net-next 6/7] bnxt_en: Modify init sequence " Michael Chan
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Michael Chan @ 2016-01-03  4:45 UTC (permalink / raw)
  To: davem; +Cc: netdev

Add logic to calculate how many shared or non shared rings can be
supported.  The default is to use shared rings.
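
For illustration, a standalone sketch of the trimming behaviour (it mirrors
the logic of the bnxt_trim_rings() added below so it can be run outside the
driver; the numbers are made-up examples):

#include <stdbool.h>
#include <stdio.h>

/* Same trimming idea as bnxt_trim_rings(); error handling reduced. */
static void trim_rings(int *rx, int *tx, int max, bool shared)
{
	int _rx = *rx, _tx = *tx;

	if (shared) {
		*rx = _rx < max ? _rx : max;
		*tx = _tx < max ? _tx : max;
		return;
	}
	if (max < 2)
		return;		/* the driver returns -ENOMEM here */
	while (_rx + _tx > max) {
		if (_rx > _tx && _rx > 1)
			_rx--;
		else if (_tx > 1)
			_tx--;
	}
	*rx = _rx;
	*tx = _tx;
}

int main(void)
{
	int rx = 8, tx = 8;

	trim_rings(&rx, &tx, 10, true);		/* shared: 10 completion rings */
	printf("shared:    rx %d tx %d\n", rx, tx);	/* rx 8 tx 8 */

	rx = 8; tx = 8;
	trim_rings(&rx, &tx, 10, false);	/* dedicated: rx + tx must fit in 10 */
	printf("dedicated: rx %d tx %d\n", rx, tx);	/* rx 5 tx 5 */
	return 0;
}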

Signed-off-by: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c         | 96 +++++++++++++++++------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h         |  4 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c |  4 +-
 3 files changed, 79 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 8a54dab..4f7b231 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -4038,6 +4038,30 @@ static int bnxt_set_real_num_queues(struct bnxt *bp)
 	return rc;
 }
 
+static int bnxt_trim_rings(struct bnxt *bp, int *rx, int *tx, int max,
+			   bool shared)
+{
+	int _rx = *rx, _tx = *tx;
+
+	if (shared) {
+		*rx = min_t(int, _rx, max);
+		*tx = min_t(int, _tx, max);
+	} else {
+		if (max < 2)
+			return -ENOMEM;
+
+		while (_rx + _tx > max) {
+			if (_rx > _tx && _rx > 1)
+				_rx--;
+			else if (_tx > 1)
+				_tx--;
+		}
+		*rx = _rx;
+		*tx = _tx;
+	}
+	return 0;
+}
+
 static int bnxt_setup_msix(struct bnxt *bp)
 {
 	struct msix_entry *msix_ent;
@@ -4068,8 +4092,11 @@ static int bnxt_setup_msix(struct bnxt *bp)
 		int tcs;
 
 		/* Trim rings based upon num of vectors allocated */
-		bp->rx_nr_rings = min_t(int, total_vecs, bp->rx_nr_rings);
-		bp->tx_nr_rings = min_t(int, total_vecs, bp->tx_nr_rings);
+		rc = bnxt_trim_rings(bp, &bp->rx_nr_rings, &bp->tx_nr_rings,
+				     total_vecs, true);
+		if (rc)
+			goto msix_setup_exit;
+
 		bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
 		tcs = netdev_get_num_tc(dev);
 		if (tcs > 1) {
@@ -5337,10 +5364,10 @@ static int bnxt_setup_tc(struct net_device *dev, u8 tc)
 		return 0;
 
 	if (tc) {
-		int max_rx_rings, max_tx_rings;
+		int max_rx_rings, max_tx_rings, rc;
 
-		bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings);
-		if (bp->tx_nr_rings_per_tc * tc > max_tx_rings)
+		rc = bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, true);
+		if (rc || bp->tx_nr_rings_per_tc * tc > max_tx_rings)
 			return -ENOMEM;
 	}
 
@@ -5654,31 +5681,62 @@ static int bnxt_get_max_irq(struct pci_dev *pdev)
 	return (ctrl & PCI_MSIX_FLAGS_QSIZE) + 1;
 }
 
-void bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx)
+static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx,
+				int *max_cp)
 {
-	int max_rings = 0, max_ring_grps = 0;
+	int max_ring_grps = 0;
 
 	if (BNXT_PF(bp)) {
 		*max_tx = bp->pf.max_tx_rings;
 		*max_rx = bp->pf.max_rx_rings;
-		max_rings = min_t(int, bp->pf.max_irqs, bp->pf.max_cp_rings);
-		max_rings = min_t(int, max_rings, bp->pf.max_stat_ctxs);
+		*max_cp = min_t(int, bp->pf.max_irqs, bp->pf.max_cp_rings);
+		*max_cp = min_t(int, *max_cp, bp->pf.max_stat_ctxs);
 		max_ring_grps = bp->pf.max_hw_ring_grps;
 	} else {
 #ifdef CONFIG_BNXT_SRIOV
 		*max_tx = bp->vf.max_tx_rings;
 		*max_rx = bp->vf.max_rx_rings;
-		max_rings = min_t(int, bp->vf.max_irqs, bp->vf.max_cp_rings);
-		max_rings = min_t(int, max_rings, bp->vf.max_stat_ctxs);
+		*max_cp = min_t(int, bp->vf.max_irqs, bp->vf.max_cp_rings);
+		*max_cp = min_t(int, *max_cp, bp->vf.max_stat_ctxs);
 		max_ring_grps = bp->vf.max_hw_ring_grps;
 #endif
 	}
 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
 		*max_rx >>= 1;
-
-	*max_rx = min_t(int, *max_rx, max_rings);
 	*max_rx = min_t(int, *max_rx, max_ring_grps);
-	*max_tx = min_t(int, *max_tx, max_rings);
+}
+
+int bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx, bool shared)
+{
+	int rx, tx, cp;
+
+	_bnxt_get_max_rings(bp, &rx, &tx, &cp);
+	if (!rx || !tx || !cp)
+		return -ENOMEM;
+
+	*max_rx = rx;
+	*max_tx = tx;
+	return bnxt_trim_rings(bp, max_rx, max_tx, cp, shared);
+}
+
+static int bnxt_set_dflt_rings(struct bnxt *bp)
+{
+	int dflt_rings, max_rx_rings, max_tx_rings, rc;
+	bool sh = true;
+
+	if (sh)
+		bp->flags |= BNXT_FLAG_SHARED_RINGS;
+	dflt_rings = netif_get_num_default_rss_queues();
+	rc = bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, sh);
+	if (rc)
+		return rc;
+	bp->rx_nr_rings = min_t(int, dflt_rings, max_rx_rings);
+	bp->tx_nr_rings_per_tc = min_t(int, dflt_rings, max_tx_rings);
+	bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
+	bp->cp_nr_rings = sh ? max_t(int, bp->tx_nr_rings, bp->rx_nr_rings) :
+			       bp->tx_nr_rings + bp->rx_nr_rings;
+	bp->num_stat_ctxs = bp->cp_nr_rings;
+	return rc;
 }
 
 static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
@@ -5686,7 +5744,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	static int version_printed;
 	struct net_device *dev;
 	struct bnxt *bp;
-	int rc, max_rx_rings, max_tx_rings, max_irqs, dflt_rings;
+	int rc, max_irqs;
 
 	if (version_printed++ == 0)
 		pr_info("%s", version);
@@ -5765,19 +5823,13 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	bnxt_set_tpa_flags(bp);
 	bnxt_set_ring_params(bp);
-	dflt_rings = netif_get_num_default_rss_queues();
 	if (BNXT_PF(bp))
 		bp->pf.max_irqs = max_irqs;
 #if defined(CONFIG_BNXT_SRIOV)
 	else
 		bp->vf.max_irqs = max_irqs;
 #endif
-	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings);
-	bp->rx_nr_rings = min_t(int, dflt_rings, max_rx_rings);
-	bp->tx_nr_rings_per_tc = min_t(int, dflt_rings, max_tx_rings);
-	bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
-	bp->cp_nr_rings = max_t(int, bp->rx_nr_rings, bp->tx_nr_rings);
-	bp->num_stat_ctxs = bp->cp_nr_rings;
+	bnxt_set_dflt_rings(bp);
 
 	if (BNXT_PF(bp)) {
 		dev->hw_features |= NETIF_F_NTUPLE;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index ec3edcc..8af3ca8 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -877,6 +877,8 @@ struct bnxt {
 	#define BNXT_FLAG_USING_MSIX	0x40
 	#define BNXT_FLAG_MSIX_CAP	0x80
 	#define BNXT_FLAG_RFS		0x100
+	#define BNXT_FLAG_SHARED_RINGS	0x200
+
 	#define BNXT_FLAG_ALL_CONFIG_FEATS (BNXT_FLAG_TPA |		\
 					    BNXT_FLAG_RFS |		\
 					    BNXT_FLAG_STRIP_VLAN)
@@ -1096,5 +1098,5 @@ int bnxt_hwrm_set_pause(struct bnxt *);
 int bnxt_hwrm_set_link_setting(struct bnxt *, bool);
 int bnxt_open_nic(struct bnxt *, bool, bool);
 int bnxt_close_nic(struct bnxt *, bool, bool);
-void bnxt_get_max_rings(struct bnxt *, int *, int *);
+int bnxt_get_max_rings(struct bnxt *, int *, int *, bool);
 #endif
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index a39511f..8ad1b6c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -211,7 +211,7 @@ static void bnxt_get_channels(struct net_device *dev,
 	struct bnxt *bp = netdev_priv(dev);
 	int max_rx_rings, max_tx_rings, tcs;
 
-	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings);
+	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, true);
 	tcs = netdev_get_num_tc(dev);
 	if (tcs > 1)
 		max_tx_rings /= tcs;
@@ -235,7 +235,7 @@ static int bnxt_set_channels(struct net_device *dev,
 	    !channel->rx_count || !channel->tx_count)
 		return -EINVAL;
 
-	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings);
+	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, true);
 	tcs = netdev_get_num_tc(dev);
 	if (tcs > 1)
 		max_tx_rings /= tcs;
-- 
1.8.3.1


* [PATCH net-next 6/7] bnxt_en: Modify init sequence to support shared or non shared rings.
  2016-01-03  4:44 [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels Michael Chan
                   ` (4 preceding siblings ...)
  2016-01-03  4:45 ` [PATCH net-next 5/7] bnxt_en: Modify bnxt_get_max_rings() to support shared or non shared rings Michael Chan
@ 2016-01-03  4:45 ` Michael Chan
  2016-01-03  4:45 ` [PATCH net-next 7/7] bnxt_en: Modify ethtool -l|-L to support combined or rx/tx rings Michael Chan
  2016-01-04 20:55 ` [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels David Miller
  7 siblings, 0 replies; 9+ messages in thread
From: Michael Chan @ 2016-01-03  4:45 UTC (permalink / raw)
  To: davem; +Cc: netdev

Modify ring memory allocation and MSIX setup to support shared or
non shared rings and do the proper mapping.  The default is still to
use shared rings.
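
For reference, a standalone sketch (the function name is made up, not a
driver symbol) of how many completion rings, and therefore MSI-X vectors,
each mode needs; this matches how cp_nr_rings is computed in the previous
patch:

#include <stdbool.h>

/*
 * Shared mode pairs each rx ring with a tx ring on one completion ring,
 * so max(rx, tx) completion rings are enough; dedicated mode needs one
 * completion ring per rx ring plus one per tx ring.
 */
static int cp_rings_needed(int rx_nr_rings, int tx_nr_rings, bool shared)
{
	if (shared)
		return rx_nr_rings > tx_nr_rings ? rx_nr_rings : tx_nr_rings;
	return rx_nr_rings + tx_nr_rings;
}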

Signed-off-by: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 42 +++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 4f7b231..f956949 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -2496,7 +2496,7 @@ static void bnxt_free_mem(struct bnxt *bp, bool irq_re_init)
 
 static int bnxt_alloc_mem(struct bnxt *bp, bool irq_re_init)
 {
-	int i, rc, size, arr_size;
+	int i, j, rc, size, arr_size;
 	void *bnapi;
 
 	if (irq_re_init) {
@@ -2535,9 +2535,14 @@ static int bnxt_alloc_mem(struct bnxt *bp, bool irq_re_init)
 		if (!bp->tx_ring)
 			return -ENOMEM;
 
-		for (i = 0; i < bp->tx_nr_rings; i++) {
-			bp->tx_ring[i].bnapi = bp->bnapi[i];
-			bp->bnapi[i]->tx_ring = &bp->tx_ring[i];
+		if (bp->flags & BNXT_FLAG_SHARED_RINGS)
+			j = 0;
+		else
+			j = bp->rx_nr_rings;
+
+		for (i = 0; i < bp->tx_nr_rings; i++, j++) {
+			bp->tx_ring[i].bnapi = bp->bnapi[j];
+			bp->bnapi[j]->tx_ring = &bp->tx_ring[i];
 		}
 
 		rc = bnxt_alloc_stats(bp);
@@ -4066,7 +4071,7 @@ static int bnxt_setup_msix(struct bnxt *bp)
 {
 	struct msix_entry *msix_ent;
 	struct net_device *dev = bp->dev;
-	int i, total_vecs, rc = 0;
+	int i, total_vecs, rc = 0, min = 1;
 	const int len = sizeof(bp->irq_tbl[0].name);
 
 	bp->flags &= ~BNXT_FLAG_USING_MSIX;
@@ -4081,7 +4086,10 @@ static int bnxt_setup_msix(struct bnxt *bp)
 		msix_ent[i].vector = 0;
 	}
 
-	total_vecs = pci_enable_msix_range(bp->pdev, msix_ent, 1, total_vecs);
+	if (!(bp->flags & BNXT_FLAG_SHARED_RINGS))
+		min = 2;
+
+	total_vecs = pci_enable_msix_range(bp->pdev, msix_ent, min, total_vecs);
 	if (total_vecs < 0) {
 		rc = -ENODEV;
 		goto msix_setup_exit;
@@ -4093,7 +4101,7 @@ static int bnxt_setup_msix(struct bnxt *bp)
 
 		/* Trim rings based upon num of vectors allocated */
 		rc = bnxt_trim_rings(bp, &bp->rx_nr_rings, &bp->tx_nr_rings,
-				     total_vecs, true);
+				     total_vecs, min == 1);
 		if (rc)
 			goto msix_setup_exit;
 
@@ -4115,12 +4123,21 @@ static int bnxt_setup_msix(struct bnxt *bp)
 				}
 			}
 		}
-		bp->cp_nr_rings = max_t(int, bp->rx_nr_rings, bp->tx_nr_rings);
+		bp->cp_nr_rings = total_vecs;
 
 		for (i = 0; i < bp->cp_nr_rings; i++) {
+			char *attr;
+
 			bp->irq_tbl[i].vector = msix_ent[i].vector;
+			if (bp->flags & BNXT_FLAG_SHARED_RINGS)
+				attr = "TxRx";
+			else if (i < bp->rx_nr_rings)
+				attr = "rx";
+			else
+				attr = "tx";
+
 			snprintf(bp->irq_tbl[i].name, len,
-				 "%s-%s-%d", dev->name, "TxRx", i);
+				 "%s-%s-%d", dev->name, attr, i);
 			bp->irq_tbl[i].handler = bnxt_msix;
 		}
 		rc = bnxt_set_real_num_queues(bp);
@@ -4158,6 +4175,7 @@ static int bnxt_setup_inta(struct bnxt *bp)
 	bp->tx_nr_rings = 1;
 	bp->cp_nr_rings = 1;
 	bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
+	bp->flags |= BNXT_FLAG_SHARED_RINGS;
 	bp->irq_tbl[0].vector = bp->pdev->irq;
 	snprintf(bp->irq_tbl[0].name, len,
 		 "%s-%s-%d", bp->dev->name, "TxRx", 0);
@@ -5365,8 +5383,12 @@ static int bnxt_setup_tc(struct net_device *dev, u8 tc)
 
 	if (tc) {
 		int max_rx_rings, max_tx_rings, rc;
+		bool sh = false;
+
+		if (bp->flags & BNXT_FLAG_SHARED_RINGS)
+			sh = true;
 
-		rc = bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, true);
+		rc = bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, sh);
 		if (rc || bp->tx_nr_rings_per_tc * tc > max_tx_rings)
 			return -ENOMEM;
 	}
-- 
1.8.3.1


* [PATCH net-next 7/7] bnxt_en: Modify ethtool -l|-L to support combined or rx/tx rings.
  2016-01-03  4:44 [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels Michael Chan
                   ` (5 preceding siblings ...)
  2016-01-03  4:45 ` [PATCH net-next 6/7] bnxt_en: Modify init sequence " Michael Chan
@ 2016-01-03  4:45 ` Michael Chan
  2016-01-04 20:55 ` [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels David Miller
  7 siblings, 0 replies; 9+ messages in thread
From: Michael Chan @ 2016-01-03  4:45 UTC (permalink / raw)
  To: davem; +Cc: netdev

The driver can support either all combined or all rx/tx rings.  The
default is combined, but the user can now select rx/tx rings.
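
For example (the interface name here is illustrative), "ethtool -L ethX
combined 4" keeps the default shared mode with 4 combined channels,
"ethtool -L ethX rx 4 tx 4" switches to dedicated rx and tx completion
rings, and "ethtool -l ethX" reports the current and maximum channel
counts.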

Signed-off-by: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 57 ++++++++++++++++++-----
 1 file changed, 45 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 8ad1b6c..4f62c9f 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -212,6 +212,9 @@ static void bnxt_get_channels(struct net_device *dev,
 	int max_rx_rings, max_tx_rings, tcs;
 
 	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, true);
+	channel->max_combined = max_rx_rings;
+
+	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, false);
 	tcs = netdev_get_num_tc(dev);
 	if (tcs > 1)
 		max_tx_rings /= tcs;
@@ -219,9 +222,12 @@ static void bnxt_get_channels(struct net_device *dev,
 	channel->max_rx = max_rx_rings;
 	channel->max_tx = max_tx_rings;
 	channel->max_other = 0;
-	channel->max_combined = 0;
-	channel->rx_count = bp->rx_nr_rings;
-	channel->tx_count = bp->tx_nr_rings_per_tc;
+	if (bp->flags & BNXT_FLAG_SHARED_RINGS) {
+		channel->combined_count = bp->rx_nr_rings;
+	} else {
+		channel->rx_count = bp->rx_nr_rings;
+		channel->tx_count = bp->tx_nr_rings_per_tc;
+	}
 }
 
 static int bnxt_set_channels(struct net_device *dev,
@@ -230,19 +236,35 @@ static int bnxt_set_channels(struct net_device *dev,
 	struct bnxt *bp = netdev_priv(dev);
 	int max_rx_rings, max_tx_rings, tcs;
 	u32 rc = 0;
+	bool sh = false;
 
-	if (channel->other_count || channel->combined_count ||
-	    !channel->rx_count || !channel->tx_count)
+	if (channel->other_count)
 		return -EINVAL;
 
-	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, true);
+	if (!channel->combined_count &&
+	    (!channel->rx_count || !channel->tx_count))
+		return -EINVAL;
+
+	if (channel->combined_count &&
+	    (channel->rx_count || channel->tx_count))
+		return -EINVAL;
+
+	if (channel->combined_count)
+		sh = true;
+
+	bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, sh);
+
 	tcs = netdev_get_num_tc(dev);
 	if (tcs > 1)
 		max_tx_rings /= tcs;
 
-	if (channel->rx_count > max_rx_rings ||
-	    channel->tx_count > max_tx_rings)
-		return -EINVAL;
+	if (sh && (channel->combined_count > max_rx_rings ||
+		   channel->combined_count > max_tx_rings))
+		return -ENOMEM;
+
+	if (!sh && (channel->rx_count > max_rx_rings ||
+		    channel->tx_count > max_tx_rings))
+		return -ENOMEM;
 
 	if (netif_running(dev)) {
 		if (BNXT_PF(bp)) {
@@ -258,12 +280,23 @@ static int bnxt_set_channels(struct net_device *dev,
 		}
 	}
 
-	bp->rx_nr_rings = channel->rx_count;
-	bp->tx_nr_rings_per_tc = channel->tx_count;
+	if (sh) {
+		bp->flags |= BNXT_FLAG_SHARED_RINGS;
+		bp->rx_nr_rings = channel->combined_count;
+		bp->tx_nr_rings_per_tc = channel->combined_count;
+	} else {
+		bp->flags &= ~BNXT_FLAG_SHARED_RINGS;
+		bp->rx_nr_rings = channel->rx_count;
+		bp->tx_nr_rings_per_tc = channel->tx_count;
+	}
+
 	bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
 	if (tcs > 1)
 		bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tcs;
-	bp->cp_nr_rings = max_t(int, bp->tx_nr_rings, bp->rx_nr_rings);
+
+	bp->cp_nr_rings = sh ? max_t(int, bp->tx_nr_rings, bp->rx_nr_rings) :
+			       bp->tx_nr_rings + bp->rx_nr_rings;
+
 	bp->num_stat_ctxs = bp->cp_nr_rings;
 
 	/* After changing number of rx channels, update NTUPLE feature. */
-- 
1.8.3.1


* Re: [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels.
  2016-01-03  4:44 [PATCH net-next 0/7] bnxt_en: Support combined and rx/tx channels Michael Chan
                   ` (6 preceding siblings ...)
  2016-01-03  4:45 ` [PATCH net-next 7/7] bnxt_en: Modify ethtool -l|-L to support combined or rx/tx rings Michael Chan
@ 2016-01-04 20:55 ` David Miller
  7 siblings, 0 replies; 9+ messages in thread
From: David Miller @ 2016-01-04 20:55 UTC (permalink / raw)
  To: mchan; +Cc: netdev

From: Michael Chan <mchan@broadcom.com>
Date: Sat, 2 Jan 2016 23:44:57 -0500

> The bnxt hardware uses a completion ring for rx and tx events.  The driver
> has to process the completion ring entries sequentially for the events.
> The current code only supports an rx/tx ring pair for each completion ring.
> This patch series adds support for using a dedicated completion ring for
> rx only or tx only as an option configurable using ethtool -L.
> 
> The benefits of using dedicated completion rings are:
> 
> 1. A burst of rx packets can delay the processing of tx events if the
> completion ring is shared.  If the tx queue is stopped by BQL, this can
> delay re-starting the tx queue.
> 
> 2. A completion ring is sized according to the rx or tx ring size it
> serves, rounded up to the nearest power of 2.  When the completion ring is
> shared, it is sized by adding the rx and tx ring sizes and then rounding up
> to the next power of 2, often wasting a lot of space.
> 
> 3. With dedicated completion rings, the rx and tx coalescing parameters
> can be adjusted independently.

Series applied, thanks Michael.

