* [net-next 00/13][pull request] Intel Wired LAN Driver Update
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: Jeff Kirsher, netdev, gospo
The following series contains updates for ixgbe. The updates are
mainly cleanups of the DCB code, along with the addition of Dell CEM
support and a couple of FCoE DDP patches.
The following are changes since commit 1057c42747ebf4d1cbaa2ab6125b92914b8ec622:
ixgbevf: Update the driver string
and are available in the git repository at:
master.kernel.org:/pub/scm/linux/kernel/git/jkirsher/net-next-2.6 master
Emil Tantilov (1):
ixgbe: add support for Dell CEM
John Fastabend (10):
ixgbe: dcbnl reduce duplicated code and indentation
ixgbe: consolidate packet buffer allocation
ixgbe: consolidate MRQC and MTQC handling
ixgbe: configure minimal packet buffers to support TC
ixgbe: DCB use existing TX and RX queues
ixgbe: DCB 82598 devices, tx_idx and rx_idx swapped
ixgbe: setup redirection table for multiple packet buffers
ixgbe: fix bit mask for DCB version
ixgbe: DCB and perfect filters can coexist
ixgbe: DCB, remove unneeded ixgbe_dcb_txq_to_tc() routine
Vasu Dev (2):
ixgbe: setup per CPU PCI pool for FCoE DDP
ixgbe: use per NUMA node lock for FCoE DDP
drivers/net/ixgbe/ixgbe.h | 2 -
drivers/net/ixgbe/ixgbe_82598.c | 43 ++++
drivers/net/ixgbe/ixgbe_82599.c | 40 +----
drivers/net/ixgbe/ixgbe_common.c | 240 +++++++++++++++++++++
drivers/net/ixgbe/ixgbe_common.h | 5 +
drivers/net/ixgbe/ixgbe_dcb.c | 10 +-
drivers/net/ixgbe/ixgbe_dcb.h | 7 -
drivers/net/ixgbe/ixgbe_dcb_82598.c | 43 +----
drivers/net/ixgbe/ixgbe_dcb_82598.h | 3 +-
drivers/net/ixgbe/ixgbe_dcb_82599.c | 119 +-----------
drivers/net/ixgbe/ixgbe_dcb_82599.h | 14 +--
drivers/net/ixgbe/ixgbe_dcb_nl.c | 52 ++---
drivers/net/ixgbe/ixgbe_fcoe.c | 151 +++++++++++---
drivers/net/ixgbe/ixgbe_fcoe.h | 14 +-
drivers/net/ixgbe/ixgbe_main.c | 397 +++++++++++++++++------------------
drivers/net/ixgbe/ixgbe_type.h | 71 +++++++
drivers/net/ixgbe/ixgbe_x540.c | 2 +
17 files changed, 716 insertions(+), 497 deletions(-)
--
1.7.5.2
* [net-next 01/13] ixgbe: dcbnl reduce duplicated code and indentation
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
Replace duplicated code in the if/else branches with a single check
and a single call to ixgbe_init_interrupt_scheme().
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe_dcb_nl.c | 51 ++++++++++++++++++--------------------
1 files changed, 24 insertions(+), 27 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ixgbe/ixgbe_dcb_nl.c
index 5e7ed22..293ff06 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
+++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
@@ -146,37 +146,34 @@ static u8 ixgbe_dcbnl_set_state(struct net_device *netdev, u8 state)
adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
if (!netdev_get_num_tc(netdev))
ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
-
- ixgbe_init_interrupt_scheme(adapter);
- if (netif_running(netdev))
- netdev->netdev_ops->ndo_open(netdev);
} else {
/* Turn off DCB */
- if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
- if (netif_running(netdev))
- netdev->netdev_ops->ndo_stop(netdev);
- ixgbe_clear_interrupt_scheme(adapter);
+ if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+ goto out;
- adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
- adapter->temp_dcb_cfg.pfc_mode_enable = false;
- adapter->dcb_cfg.pfc_mode_enable = false;
- adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
- switch (adapter->hw.mac.type) {
- case ixgbe_mac_82599EB:
- case ixgbe_mac_X540:
- adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
- break;
- default:
- break;
- }
-
- ixgbe_setup_tc(netdev, 0);
-
- ixgbe_init_interrupt_scheme(adapter);
- if (netif_running(netdev))
- netdev->netdev_ops->ndo_open(netdev);
+ if (netif_running(netdev))
+ netdev->netdev_ops->ndo_stop(netdev);
+ ixgbe_clear_interrupt_scheme(adapter);
+
+ adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
+ adapter->temp_dcb_cfg.pfc_mode_enable = false;
+ adapter->dcb_cfg.pfc_mode_enable = false;
+ adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
+ switch (adapter->hw.mac.type) {
+ case ixgbe_mac_82599EB:
+ case ixgbe_mac_X540:
+ adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
+ break;
+ default:
+ break;
}
+
+ ixgbe_setup_tc(netdev, 0);
}
+
+ ixgbe_init_interrupt_scheme(adapter);
+ if (netif_running(netdev))
+ netdev->netdev_ops->ndo_open(netdev);
out:
return err;
}
--
1.7.5.2
* [net-next 02/13] ixgbe: consolidate packet buffer allocation
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: John Fastabend, netdev, gospo, Alexander Duyck, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
Consolidate the packet buffer allocation currently being done in both
the DCB path and the main path, so that the feature set and packet
buffer requirements are worked out once.
This is prep work to allow DCB to coexist with other features,
namely flow director.
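As a worked illustration (not part of the patch), the sketch below
mirrors the PBA_STRATEGY_WEIGHTED arithmetic in
ixgbe_set_rxpba_generic(), assuming a 512KB Rx packet buffer and
eight packet buffers as on 82599; it reproduces the 80KB/48KB split
the old pba_80_48 code hard-coded:

	#include <stdio.h>

	/* Sketch only: the 5/8 weighting from ixgbe_set_rxpba_generic().
	 * Assumed inputs: rx_pb_size = 512 (KB), num_pb = 8, headroom = 0.
	 */
	int main(void)
	{
		int pbsize = 512, num_pb = 8, i = 0;
		/* front half gets 5/8 of the space: (512*5*2)/(8*8) = 80KB each */
		int rxpktsize = (pbsize * 5 * 2) / (num_pb * 8);

		pbsize -= rxpktsize * (num_pb / 2);	/* 512 - 4*80 = 192KB left */
		for (; i < num_pb / 2; i++)
			printf("PB%d: %dKB\n", i, rxpktsize);

		/* back half divides the remainder evenly: 192/4 = 48KB each */
		rxpktsize = pbsize / (num_pb - i);
		for (; i < num_pb; i++)
			printf("PB%d: %dKB\n", i, rxpktsize);

		return 0;
	}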
CC: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe_82598.c | 42 ++++++++++++++++++++++
drivers/net/ixgbe/ixgbe_82599.c | 39 +--------------------
drivers/net/ixgbe/ixgbe_common.c | 66 +++++++++++++++++++++++++++++++++++
drivers/net/ixgbe/ixgbe_common.h | 3 ++
drivers/net/ixgbe/ixgbe_dcb.c | 10 ++---
drivers/net/ixgbe/ixgbe_dcb.h | 7 ----
drivers/net/ixgbe/ixgbe_dcb_82598.c | 43 +----------------------
drivers/net/ixgbe/ixgbe_dcb_82598.h | 3 +-
drivers/net/ixgbe/ixgbe_dcb_82599.c | 62 +--------------------------------
drivers/net/ixgbe/ixgbe_dcb_82599.h | 14 +-------
drivers/net/ixgbe/ixgbe_main.c | 16 ++++++++-
drivers/net/ixgbe/ixgbe_type.h | 24 +++++++++++++
drivers/net/ixgbe/ixgbe_x540.c | 1 +
13 files changed, 160 insertions(+), 170 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_82598.c b/drivers/net/ixgbe/ixgbe_82598.c
index 8179e50..bb417d7 100644
--- a/drivers/net/ixgbe/ixgbe_82598.c
+++ b/drivers/net/ixgbe/ixgbe_82598.c
@@ -1242,6 +1242,47 @@ static void ixgbe_set_lan_id_multi_port_pcie_82598(struct ixgbe_hw *hw)
}
}
+/**
+ * ixgbe_set_rxpba_82598 - Configure packet buffers
+ * @hw: pointer to hardware structure
+ * @num_pb: number of packet buffers to allocate
+ * @headroom: reserve n KB of headroom
+ * @strategy: packet buffer allocation strategy
+ *
+ * Configure packet buffers.
+ */
+static void ixgbe_set_rxpba_82598(struct ixgbe_hw *hw, int num_pb, u32 headroom,
+ int strategy)
+{
+ u32 rxpktsize = IXGBE_RXPBSIZE_64KB;
+ u8 i = 0;
+
+ if (!num_pb)
+ return;
+
+ /* Setup Rx packet buffer sizes */
+ switch (strategy) {
+ case PBA_STRATEGY_WEIGHTED:
+ /* Setup the first four at 80KB */
+ rxpktsize = IXGBE_RXPBSIZE_80KB;
+ for (; i < 4; i++)
+ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
+ /* Setup the last four at 48KB...don't re-init i */
+ rxpktsize = IXGBE_RXPBSIZE_48KB;
+ /* Fall Through */
+ case PBA_STRATEGY_EQUAL:
+ default:
+ /* Divide the remaining Rx packet buffer evenly among the TCs */
+ for (; i < IXGBE_MAX_PACKET_BUFFERS; i++)
+ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
+ break;
+ }
+
+ /* Setup Tx packet buffer sizes */
+ for (i = 0; i < IXGBE_MAX_PACKET_BUFFERS; i++)
+ IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), IXGBE_TXPBSIZE_40KB);
+
+ return;
+}
+
static struct ixgbe_mac_operations mac_ops_82598 = {
.init_hw = &ixgbe_init_hw_generic,
.reset_hw = &ixgbe_reset_hw_82598,
@@ -1257,6 +1298,7 @@ static struct ixgbe_mac_operations mac_ops_82598 = {
.read_analog_reg8 = &ixgbe_read_analog_reg8_82598,
.write_analog_reg8 = &ixgbe_write_analog_reg8_82598,
.setup_link = &ixgbe_setup_mac_link_82598,
+ .set_rxpba = &ixgbe_set_rxpba_82598,
.check_link = &ixgbe_check_mac_link_82598,
.get_link_capabilities = &ixgbe_get_link_capabilities_82598,
.led_on = &ixgbe_led_on_generic,
diff --git a/drivers/net/ixgbe/ixgbe_82599.c b/drivers/net/ixgbe/ixgbe_82599.c
index 0d7bc91..324a505 100644
--- a/drivers/net/ixgbe/ixgbe_82599.c
+++ b/drivers/net/ixgbe/ixgbe_82599.c
@@ -1114,27 +1114,8 @@ s32 ixgbe_reinit_fdir_tables_82599(struct ixgbe_hw *hw)
s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 pballoc)
{
u32 fdirctrl = 0;
- u32 pbsize;
int i;
- /*
- * Before enabling Flow Director, the Rx Packet Buffer size
- * must be reduced. The new value is the current size minus
- * flow director memory usage size.
- */
- pbsize = (1 << (IXGBE_FDIR_PBALLOC_SIZE_SHIFT + pballoc));
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(0),
- (IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(0)) - pbsize));
-
- /*
- * The defaults in the HW for RX PB 1-7 are not zero and so should be
- * initialized to zero for non DCB mode otherwise actual total RX PB
- * would be bigger than programmed and filter space would run into
- * the PB 0 region.
- */
- for (i = 1; i < 8; i++)
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
-
/* Send interrupt when 64 filters are left */
fdirctrl |= 4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT;
@@ -1202,27 +1183,8 @@ s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 pballoc)
s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 pballoc)
{
u32 fdirctrl = 0;
- u32 pbsize;
int i;
- /*
- * Before enabling Flow Director, the Rx Packet Buffer size
- * must be reduced. The new value is the current size minus
- * flow director memory usage size.
- */
- pbsize = (1 << (IXGBE_FDIR_PBALLOC_SIZE_SHIFT + pballoc));
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(0),
- (IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(0)) - pbsize));
-
- /*
- * The defaults in the HW for RX PB 1-7 are not zero and so should be
- * initialized to zero for non DCB mode otherwise actual total RX PB
- * would be bigger than programmed and filter space would run into
- * the PB 0 region.
- */
- for (i = 1; i < 8; i++)
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
-
/* Send interrupt when 64 filters are left */
fdirctrl |= 4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT;
@@ -2146,6 +2108,7 @@ static struct ixgbe_mac_operations mac_ops_82599 = {
.read_analog_reg8 = &ixgbe_read_analog_reg8_82599,
.write_analog_reg8 = &ixgbe_write_analog_reg8_82599,
.setup_link = &ixgbe_setup_mac_link_82599,
+ .set_rxpba = &ixgbe_set_rxpba_generic,
.check_link = &ixgbe_check_mac_link_generic,
.get_link_capabilities = &ixgbe_get_link_capabilities_82599,
.led_on = &ixgbe_led_on_generic,
diff --git a/drivers/net/ixgbe/ixgbe_common.c b/drivers/net/ixgbe/ixgbe_common.c
index de65643..cc2a4a1 100644
--- a/drivers/net/ixgbe/ixgbe_common.c
+++ b/drivers/net/ixgbe/ixgbe_common.c
@@ -3267,3 +3267,69 @@ s32 ixgbe_get_device_caps_generic(struct ixgbe_hw *hw, u16 *device_caps)
return 0;
}
+
+/**
+ * ixgbe_set_rxpba_generic - Initialize RX packet buffer
+ * @hw: pointer to hardware structure
+ * @num_pb: number of packet buffers to allocate
+ * @headroom: reserve n KB of headroom
+ * @strategy: packet buffer allocation strategy
+ **/
+void ixgbe_set_rxpba_generic(struct ixgbe_hw *hw,
+ int num_pb,
+ u32 headroom,
+ int strategy)
+{
+ u32 pbsize = hw->mac.rx_pb_size;
+ int i = 0;
+ u32 rxpktsize, txpktsize, txpbthresh;
+
+ /* Reserve headroom */
+ pbsize -= headroom;
+
+ if (!num_pb)
+ num_pb = 1;
+
+ /* Divide remaining packet buffer space amongst the number
+ * of packet buffers requested using supplied strategy.
+ */
+ switch (strategy) {
+ case (PBA_STRATEGY_WEIGHTED):
+ /* pba_80_48 strategy weight first half of packet buffer with
+ * 5/8 of the packet buffer space.
+ */
+ rxpktsize = ((pbsize * 5 * 2) / (num_pb * 8));
+ pbsize -= rxpktsize * (num_pb / 2);
+ rxpktsize <<= IXGBE_RXPBSIZE_SHIFT;
+ for (; i < (num_pb / 2); i++)
+ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
+ /* Fall through to configure remaining packet buffers */
+ case (PBA_STRATEGY_EQUAL):
+ /* Divide the remaining Rx packet buffer evenly among the TCs */
+ rxpktsize = (pbsize / (num_pb - i)) << IXGBE_RXPBSIZE_SHIFT;
+ for (; i < num_pb; i++)
+ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
+ break;
+ default:
+ break;
+ }
+
+ /*
+ * Setup Tx packet buffer and threshold equally for all TCs
+ * TXPBTHRESH register is set in K so divide by 1024 and subtract
+ * 10 since the largest packet we support is just over 9K.
+ */
+ txpktsize = IXGBE_TXPBSIZE_MAX / num_pb;
+ txpbthresh = (txpktsize / 1024) - IXGBE_TXPKT_SIZE_MAX;
+ for (i = 0; i < num_pb; i++) {
+ IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), txpktsize);
+ IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
+ }
+
+ /* Clear unused TCs, if any, to zero buffer size */
+ for (; i < IXGBE_MAX_PB; i++) {
+ IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
+ IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
+ }
+}
diff --git a/drivers/net/ixgbe/ixgbe_common.h b/drivers/net/ixgbe/ixgbe_common.h
index 46be83c..32a454f 100644
--- a/drivers/net/ixgbe/ixgbe_common.h
+++ b/drivers/net/ixgbe/ixgbe_common.h
@@ -100,6 +100,9 @@ void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int pf);
void ixgbe_set_vlan_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf);
s32 ixgbe_get_device_caps_generic(struct ixgbe_hw *hw, u16 *device_caps);
+void ixgbe_set_rxpba_generic(struct ixgbe_hw *hw, int num_pb,
+ u32 headroom, int strategy);
+
#define IXGBE_WRITE_REG(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
#ifndef writeq
diff --git a/drivers/net/ixgbe/ixgbe_dcb.c b/drivers/net/ixgbe/ixgbe_dcb.c
index 686a17a..9d88c31 100644
--- a/drivers/net/ixgbe/ixgbe_dcb.c
+++ b/drivers/net/ixgbe/ixgbe_dcb.c
@@ -258,15 +258,13 @@ s32 ixgbe_dcb_hw_config(struct ixgbe_hw *hw,
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
- ret = ixgbe_dcb_hw_config_82598(hw, dcb_config->rx_pba_cfg,
- pfc_en, refill, max, bwgid,
- ptype);
+ ret = ixgbe_dcb_hw_config_82598(hw, pfc_en, refill, max,
+ bwgid, ptype);
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
- ret = ixgbe_dcb_hw_config_82599(hw, dcb_config->rx_pba_cfg,
- pfc_en, refill, max, bwgid,
- ptype, prio_tc);
+ ret = ixgbe_dcb_hw_config_82599(hw, pfc_en, refill, max,
+ bwgid, ptype, prio_tc);
break;
default:
break;
diff --git a/drivers/net/ixgbe/ixgbe_dcb.h b/drivers/net/ixgbe/ixgbe_dcb.h
index 944838f..e85826a 100644
--- a/drivers/net/ixgbe/ixgbe_dcb.h
+++ b/drivers/net/ixgbe/ixgbe_dcb.h
@@ -123,11 +123,6 @@ struct tc_configuration {
u8 tc; /* Traffic class (TC) */
};
-enum dcb_rx_pba_cfg {
- pba_equal, /* PBA[0-7] each use 64KB FIFO */
- pba_80_48 /* PBA[0-3] each use 80KB, PBA[4-7] each use 48KB */
-};
-
struct dcb_num_tcs {
u8 pg_tcs;
u8 pfc_tcs;
@@ -140,8 +135,6 @@ struct ixgbe_dcb_config {
u8 bw_percentage[2][MAX_BW_GROUP]; /* One each for Tx/Rx */
bool pfc_mode_enable;
- enum dcb_rx_pba_cfg rx_pba_cfg;
-
u32 dcb_cfg_version; /* Not used...OS-specific? */
u32 link_speed; /* For bandwidth allocation validation purpose */
};
diff --git a/drivers/net/ixgbe/ixgbe_dcb_82598.c b/drivers/net/ixgbe/ixgbe_dcb_82598.c
index 771d01a..2288c3c 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_82598.c
+++ b/drivers/net/ixgbe/ixgbe_dcb_82598.c
@@ -32,45 +32,6 @@
#include "ixgbe_dcb_82598.h"
/**
- * ixgbe_dcb_config_packet_buffers_82598 - Configure packet buffers
- * @hw: pointer to hardware structure
- * @dcb_config: pointer to ixgbe_dcb_config structure
- *
- * Configure packet buffers for DCB mode.
- */
-static s32 ixgbe_dcb_config_packet_buffers_82598(struct ixgbe_hw *hw, u8 rx_pba)
-{
- s32 ret_val = 0;
- u32 value = IXGBE_RXPBSIZE_64KB;
- u8 i = 0;
-
- /* Setup Rx packet buffer sizes */
- switch (rx_pba) {
- case pba_80_48:
- /* Setup the first four at 80KB */
- value = IXGBE_RXPBSIZE_80KB;
- for (; i < 4; i++)
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), value);
- /* Setup the last four at 48KB...don't re-init i */
- value = IXGBE_RXPBSIZE_48KB;
- /* Fall Through */
- case pba_equal:
- default:
- for (; i < IXGBE_MAX_PACKET_BUFFERS; i++)
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), value);
-
- /* Setup Tx packet buffer sizes */
- for (i = 0; i < IXGBE_MAX_PACKET_BUFFERS; i++) {
- IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i),
- IXGBE_TXPBSIZE_40KB);
- }
- break;
- }
-
- return ret_val;
-}
-
-/**
* ixgbe_dcb_config_rx_arbiter_82598 - Config Rx data arbiter
* @hw: pointer to hardware structure
* @dcb_config: pointer to ixgbe_dcb_config structure
@@ -321,11 +282,9 @@ static s32 ixgbe_dcb_config_tc_stats_82598(struct ixgbe_hw *hw)
*
* Configure dcb settings and enable dcb mode.
*/
-s32 ixgbe_dcb_hw_config_82598(struct ixgbe_hw *hw,
- u8 rx_pba, u8 pfc_en, u16 *refill,
+s32 ixgbe_dcb_hw_config_82598(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
u16 *max, u8 *bwg_id, u8 *prio_type)
{
- ixgbe_dcb_config_packet_buffers_82598(hw, rx_pba);
ixgbe_dcb_config_rx_arbiter_82598(hw, refill, max, prio_type);
ixgbe_dcb_config_tx_desc_arbiter_82598(hw, refill, max,
bwg_id, prio_type);
diff --git a/drivers/net/ixgbe/ixgbe_dcb_82598.h b/drivers/net/ixgbe/ixgbe_dcb_82598.h
index 1e9750c..2f31893 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_82598.h
+++ b/drivers/net/ixgbe/ixgbe_dcb_82598.h
@@ -91,8 +91,7 @@ s32 ixgbe_dcb_config_tx_data_arbiter_82598(struct ixgbe_hw *hw,
u8 *bwg_id,
u8 *prio_type);
-s32 ixgbe_dcb_hw_config_82598(struct ixgbe_hw *hw,
- u8 rx_pba, u8 pfc_en, u16 *refill,
+s32 ixgbe_dcb_hw_config_82598(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
u16 *max, u8 *bwg_id, u8 *prio_type);
#endif /* _DCB_82598_CONFIG_H */
diff --git a/drivers/net/ixgbe/ixgbe_dcb_82599.c b/drivers/net/ixgbe/ixgbe_dcb_82599.c
index d50cf78..befe8ad 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_82599.c
+++ b/drivers/net/ixgbe/ixgbe_dcb_82599.c
@@ -31,63 +31,6 @@
#include "ixgbe_dcb_82599.h"
/**
- * ixgbe_dcb_config_packet_buffers_82599 - Configure DCB packet buffers
- * @hw: pointer to hardware structure
- * @rx_pba: method to distribute packet buffer
- *
- * Configure packet buffers for DCB mode.
- */
-static s32 ixgbe_dcb_config_packet_buffers_82599(struct ixgbe_hw *hw, u8 rx_pba)
-{
- int num_tcs = IXGBE_MAX_PACKET_BUFFERS;
- u32 rx_pb_size = hw->mac.rx_pb_size << IXGBE_RXPBSIZE_SHIFT;
- u32 rxpktsize;
- u32 txpktsize;
- u32 txpbthresh;
- u8 i = 0;
-
- /*
- * This really means configure the first half of the TCs
- * (Traffic Classes) to use 5/8 of the Rx packet buffer
- * space. To determine the size of the buffer for each TC,
- * we are multiplying the average size by 5/4 and applying
- * it to half of the traffic classes.
- */
- if (rx_pba == pba_80_48) {
- rxpktsize = (rx_pb_size * 5) / (num_tcs * 4);
- rx_pb_size -= rxpktsize * (num_tcs / 2);
- for (; i < (num_tcs / 2); i++)
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
- }
-
- /* Divide the remaining Rx packet buffer evenly among the TCs */
- rxpktsize = rx_pb_size / (num_tcs - i);
- for (; i < num_tcs; i++)
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpktsize);
-
- /*
- * Setup Tx packet buffer and threshold equally for all TCs
- * TXPBTHRESH register is set in K so divide by 1024 and subtract
- * 10 since the largest packet we support is just over 9K.
- */
- txpktsize = IXGBE_TXPBSIZE_MAX / num_tcs;
- txpbthresh = (txpktsize / 1024) - IXGBE_TXPKT_SIZE_MAX;
- for (i = 0; i < num_tcs; i++) {
- IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), txpktsize);
- IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
- }
-
- /* Clear unused TCs, if any, to zero buffer size*/
- for (; i < MAX_TRAFFIC_CLASS; i++) {
- IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
- IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
- IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
- }
-
- return 0;
-}
-
-/**
* ixgbe_dcb_config_rx_arbiter_82599 - Config Rx Data arbiter
* @hw: pointer to hardware structure
* @refill: refill credits index by traffic class
@@ -434,7 +377,6 @@ static s32 ixgbe_dcb_config_82599(struct ixgbe_hw *hw)
/**
* ixgbe_dcb_hw_config_82599 - Configure and enable DCB
* @hw: pointer to hardware structure
- * @rx_pba: method to distribute packet buffer
* @refill: refill credits index by traffic class
* @max: max credits index by traffic class
* @bwg_id: bandwidth grouping indexed by traffic class
@@ -443,11 +385,9 @@ static s32 ixgbe_dcb_config_82599(struct ixgbe_hw *hw)
*
* Configure dcb settings and enable dcb mode.
*/
-s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw,
- u8 rx_pba, u8 pfc_en, u16 *refill,
+s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
u16 *max, u8 *bwg_id, u8 *prio_type, u8 *prio_tc)
{
- ixgbe_dcb_config_packet_buffers_82599(hw, rx_pba);
ixgbe_dcb_config_82599(hw);
ixgbe_dcb_config_rx_arbiter_82599(hw, refill, max, bwg_id,
prio_type, prio_tc);
diff --git a/drivers/net/ixgbe/ixgbe_dcb_82599.h b/drivers/net/ixgbe/ixgbe_dcb_82599.h
index 2de71a5..08d1749 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_82599.h
+++ b/drivers/net/ixgbe/ixgbe_dcb_82599.h
@@ -86,17 +86,6 @@
#define IXGBE_RTTPCS_ARBD_SHIFT 22
#define IXGBE_RTTPCS_ARBD_DCB 0x4 /* Arbitration delay in DCB mode */
-#define IXGBE_TXPBSIZE_20KB 0x00005000 /* 20KB Packet Buffer */
-#define IXGBE_TXPBSIZE_40KB 0x0000A000 /* 40KB Packet Buffer */
-#define IXGBE_RXPBSIZE_48KB 0x0000C000 /* 48KB Packet Buffer */
-#define IXGBE_RXPBSIZE_64KB 0x00010000 /* 64KB Packet Buffer */
-#define IXGBE_RXPBSIZE_80KB 0x00014000 /* 80KB Packet Buffer */
-#define IXGBE_RXPBSIZE_128KB 0x00020000 /* 128KB Packet Buffer */
-#define IXGBE_TXPBSIZE_MAX 0x00028000 /* 160KB Packet Buffer*/
-
-#define IXGBE_TXPBTHRESH_DCB 0xA /* THRESH value for DCB mode */
-#define IXGBE_TXPKT_SIZE_MAX 0xA /* Max Tx Packet size */
-
/* SECTXMINIFG DCB */
#define IXGBE_SECTX_DCB 0x00001F00 /* DCB TX Buffer IFG */
@@ -127,8 +116,7 @@ s32 ixgbe_dcb_config_tx_data_arbiter_82599(struct ixgbe_hw *hw,
u8 *prio_type,
u8 *prio_tc);
-s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw,
- u8 rx_pba, u8 pfc_en, u16 *refill,
+s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
u16 *max, u8 *bwg_id, u8 *prio_type,
u8 *prio_tc);
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index 06cfaf3..fba1e323 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -3780,12 +3780,27 @@ static void ixgbe_configure_dcb(struct ixgbe_adapter *adapter)
}
#endif
+
+static void ixgbe_configure_pb(struct ixgbe_adapter *adapter)
+{
+ int hdrm = 0;
+ int num_tc = netdev_get_num_tc(adapter->netdev);
+ struct ixgbe_hw *hw = &adapter->hw;
+
+ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE ||
+ adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)
+ hdrm = 64 << adapter->fdir_pballoc;
+
+ hw->mac.ops.set_rxpba(&adapter->hw, num_tc, hdrm, PBA_STRATEGY_EQUAL);
+}
+
static void ixgbe_configure(struct ixgbe_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
struct ixgbe_hw *hw = &adapter->hw;
int i;
+ ixgbe_configure_pb(adapter);
#ifdef CONFIG_IXGBE_DCB
ixgbe_configure_dcb(adapter);
#endif
@@ -5251,7 +5266,6 @@ static int __devinit ixgbe_sw_init(struct ixgbe_adapter *adapter)
}
adapter->dcb_cfg.bw_percentage[DCB_TX_CONFIG][0] = 100;
adapter->dcb_cfg.bw_percentage[DCB_RX_CONFIG][0] = 100;
- adapter->dcb_cfg.rx_pba_cfg = pba_equal;
adapter->dcb_cfg.pfc_mode_enable = false;
adapter->dcb_set_bitmap = 0x00;
adapter->dcbx_cap = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE;
diff --git a/drivers/net/ixgbe/ixgbe_type.h b/drivers/net/ixgbe/ixgbe_type.h
index fa43f25..c0849a6 100644
--- a/drivers/net/ixgbe/ixgbe_type.h
+++ b/drivers/net/ixgbe/ixgbe_type.h
@@ -1118,6 +1118,27 @@
#define IXGBE_GPIE_VTMODE_32 0x00008000 /* 32 VFs 4 queues per VF */
#define IXGBE_GPIE_VTMODE_64 0x0000C000 /* 64 VFs 2 queues per VF */
+/* Packet Buffer Initialization */
+#define IXGBE_TXPBSIZE_20KB 0x00005000 /* 20KB Packet Buffer */
+#define IXGBE_TXPBSIZE_40KB 0x0000A000 /* 40KB Packet Buffer */
+#define IXGBE_RXPBSIZE_48KB 0x0000C000 /* 48KB Packet Buffer */
+#define IXGBE_RXPBSIZE_64KB 0x00010000 /* 64KB Packet Buffer */
+#define IXGBE_RXPBSIZE_80KB 0x00014000 /* 80KB Packet Buffer */
+#define IXGBE_RXPBSIZE_128KB 0x00020000 /* 128KB Packet Buffer */
+#define IXGBE_RXPBSIZE_MAX 0x00080000 /* 512KB Packet Buffer*/
+#define IXGBE_TXPBSIZE_MAX 0x00028000 /* 160KB Packet Buffer*/
+
+#define IXGBE_TXPKT_SIZE_MAX 0xA /* Max Tx Packet size */
+#define IXGBE_MAX_PB 8
+
+/* Packet buffer allocation strategies */
+enum {
+ PBA_STRATEGY_EQUAL = 0, /* Distribute PB space equally */
+#define PBA_STRATEGY_EQUAL PBA_STRATEGY_EQUAL
+ PBA_STRATEGY_WEIGHTED = 1, /* Weight front half of TCs */
+#define PBA_STRATEGY_WEIGHTED PBA_STRATEGY_WEIGHTED
+};
+
/* Transmit Flow Control status */
#define IXGBE_TFCS_TXOFF 0x00000001
#define IXGBE_TFCS_TXOFF0 0x00000100
@@ -2615,6 +2636,9 @@ struct ixgbe_mac_operations {
s32 (*get_link_capabilities)(struct ixgbe_hw *, ixgbe_link_speed *,
bool *);
+ /* Packet Buffer Manipulation */
+ void (*set_rxpba)(struct ixgbe_hw *, int, u32, int);
+
/* LED */
s32 (*led_on)(struct ixgbe_hw *, u32);
s32 (*led_off)(struct ixgbe_hw *, u32);
diff --git a/drivers/net/ixgbe/ixgbe_x540.c b/drivers/net/ixgbe/ixgbe_x540.c
index 4ed687b..fa566ed 100644
--- a/drivers/net/ixgbe/ixgbe_x540.c
+++ b/drivers/net/ixgbe/ixgbe_x540.c
@@ -876,6 +876,7 @@ static struct ixgbe_mac_operations mac_ops_X540 = {
.read_analog_reg8 = NULL,
.write_analog_reg8 = NULL,
.setup_link = &ixgbe_setup_mac_link_X540,
+ .set_rxpba = &ixgbe_set_rxpba_generic,
.check_link = &ixgbe_check_mac_link_generic,
.get_link_capabilities = &ixgbe_get_copper_link_capabilities_generic,
.led_on = &ixgbe_led_on_generic,
--
1.7.5.2
* [net-next 03/13] ixgbe: consolidate MRQC and MTQC handling
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
The MRQC and MTQC registers are configured in the main
setup path but are also reconfigured in the DCB setup
path. The only DCB-specific fix-up is programming the
SECTXMINIFG gap, which is required for DCB pause
to operate correctly.
This patch removes the duplicated code and does all setup
in ixgbe_setup_mtqc() and ixgbe_setup_mrqc().
Additionally, this removes the IXGBE_QDE write. The write
never set the WRITE bit in the register, so it had no
effect. It was also meant to clear the register, but the
register is never set and already defaults to zero. If this
is needed for SR-IOV it should be added correctly in a
follow-up patch; since it has never worked, removing it
here is safe.
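For reference only: the sketch below shows what a QDE write that
actually takes effect might look like. The WRITE strobe and enable
bits are assumed names, not definitions present in the driver at this
point:

	/* Hypothetical sketch: a QDE write only latches when the WRITE
	 * strobe is asserted; the removed loop wrote just the queue index.
	 */
	#define IXGBE_QDE_WRITE		0x00010000	/* assumed strobe bit */
	#define IXGBE_QDE_ENABLE	0x00000001	/* assumed drop-enable bit */

	static void ixgbe_write_qde(struct ixgbe_hw *hw, u32 q, bool drop)
	{
		u32 val = IXGBE_QDE_WRITE | (q << IXGBE_QDE_IDX_SHIFT);

		if (drop)
			val |= IXGBE_QDE_ENABLE;
		IXGBE_WRITE_REG(hw, IXGBE_QDE, val);
	}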
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe_dcb_82599.c | 57 -----------------------------------
drivers/net/ixgbe/ixgbe_main.c | 7 ++++
2 files changed, 7 insertions(+), 57 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_dcb_82599.c b/drivers/net/ixgbe/ixgbe_dcb_82599.c
index befe8ad..ade9820 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_82599.c
+++ b/drivers/net/ixgbe/ixgbe_dcb_82599.c
@@ -319,62 +319,6 @@ static s32 ixgbe_dcb_config_tc_stats_82599(struct ixgbe_hw *hw)
}
/**
- * ixgbe_dcb_config_82599 - Configure general DCB parameters
- * @hw: pointer to hardware structure
- *
- * Configure general DCB parameters.
- */
-static s32 ixgbe_dcb_config_82599(struct ixgbe_hw *hw)
-{
- u32 reg;
- u32 q;
-
- /* Disable the Tx desc arbiter so that MTQC can be changed */
- reg = IXGBE_READ_REG(hw, IXGBE_RTTDCS);
- reg |= IXGBE_RTTDCS_ARBDIS;
- IXGBE_WRITE_REG(hw, IXGBE_RTTDCS, reg);
-
- /* Enable DCB for Rx with 8 TCs */
- reg = IXGBE_READ_REG(hw, IXGBE_MRQC);
- switch (reg & IXGBE_MRQC_MRQE_MASK) {
- case 0:
- case IXGBE_MRQC_RT4TCEN:
- /* RSS disabled cases */
- reg = (reg & ~IXGBE_MRQC_MRQE_MASK) | IXGBE_MRQC_RT8TCEN;
- break;
- case IXGBE_MRQC_RSSEN:
- case IXGBE_MRQC_RTRSS4TCEN:
- /* RSS enabled cases */
- reg = (reg & ~IXGBE_MRQC_MRQE_MASK) | IXGBE_MRQC_RTRSS8TCEN;
- break;
- default:
- /* Unsupported value, assume stale data, overwrite no RSS */
- reg = (reg & ~IXGBE_MRQC_MRQE_MASK) | IXGBE_MRQC_RT8TCEN;
- }
- IXGBE_WRITE_REG(hw, IXGBE_MRQC, reg);
-
- /* Enable DCB for Tx with 8 TCs */
- reg = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
- IXGBE_WRITE_REG(hw, IXGBE_MTQC, reg);
-
- /* Disable drop for all queues */
- for (q = 0; q < 128; q++)
- IXGBE_WRITE_REG(hw, IXGBE_QDE, q << IXGBE_QDE_IDX_SHIFT);
-
- /* Enable the Tx desc arbiter */
- reg = IXGBE_READ_REG(hw, IXGBE_RTTDCS);
- reg &= ~IXGBE_RTTDCS_ARBDIS;
- IXGBE_WRITE_REG(hw, IXGBE_RTTDCS, reg);
-
- /* Enable Security TX Buffer IFG for DCB */
- reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
- reg |= IXGBE_SECTX_DCB;
- IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
-
- return 0;
-}
-
-/**
* ixgbe_dcb_hw_config_82599 - Configure and enable DCB
* @hw: pointer to hardware structure
* @refill: refill credits index by traffic class
@@ -388,7 +332,6 @@ static s32 ixgbe_dcb_config_82599(struct ixgbe_hw *hw)
s32 ixgbe_dcb_hw_config_82599(struct ixgbe_hw *hw, u8 pfc_en, u16 *refill,
u16 *max, u8 *bwg_id, u8 *prio_type, u8 *prio_tc)
{
- ixgbe_dcb_config_82599(hw);
ixgbe_dcb_config_rx_arbiter_82599(hw, refill, max, bwg_id,
prio_type, prio_tc);
ixgbe_dcb_config_tx_desc_arbiter_82599(hw, refill, max,
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index fba1e323..20467da 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -2816,6 +2816,7 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
struct ixgbe_hw *hw = &adapter->hw;
u32 rttdcs;
u32 mask;
+ u32 reg;
if (hw->mac.type == ixgbe_mac_82598EB)
return;
@@ -2838,6 +2839,12 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
/* We enable 8 traffic classes, DCB only */
IXGBE_WRITE_REG(hw, IXGBE_MTQC,
(IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ));
+
+ /* Enable Security TX Buffer IFG for DCB */
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+ reg |= IXGBE_SECTX_DCB;
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+
break;
default:
--
1.7.5.2
* [net-next 04/13] ixgbe: configure minimal packet buffers to support TC
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
ixgbe devices support either 8 or 4 packet buffers. Here we
allocate only the minimal number of packet buffers required
to implement the net_device's number of traffic classes.
Fewer traffic classes allow for larger packet buffers in
hardware, and more Tx/Rx queues can be given to each
traffic class.
This patch is mostly about propagating the number of traffic
classes through the init path. Specifically, it adds the 4TC
cases to the MRQC and MTQC setup routines. Also, ixgbe_setup_tc()
was sanitized to handle other traffic class values.
Finally, changing the number of packet buffers in the hardware
requires the device to reinitialize. So this moves the reinit work
from DCB into the main ixgbe_setup_tc() routine to consolidate
the reset code. The dcbnl_xxx ops now call ixgbe_setup_tc() to
configure packet buffers if needed.
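As a worked sketch (illustrative values only), the partitioning
performed by the new ixgbe_setup_tc() gives each traffic class a
contiguous queue range; assuming 64 Tx queues, tc = 8, and 16 online
CPUs:

	/* Sketch of the even Tx queue partitioning in ixgbe_setup_tc(). */
	unsigned int i, q, offset = 0;
	unsigned int max_q = dev->num_tx_queues / tc;	/* 64 / 8 = 8 per TC */

	for (i = 0; i < tc; i++) {
		q = min_t(unsigned int, num_online_cpus(), max_q);	/* min(16, 8) = 8 */
		netdev_set_prio_tc_map(dev, i, i);	/* 802.1p prio i -> TC i */
		netdev_set_tc_queue(dev, i, q, offset);	/* TC i: queues [offset, offset+q) */
		offset += q;				/* TC boundaries: 0, 8, 16, ... */
	}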
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe_dcb_nl.c | 16 +---
drivers/net/ixgbe/ixgbe_main.c | 235 ++++++++++++++++++++++----------------
drivers/net/ixgbe/ixgbe_type.h | 1 +
3 files changed, 138 insertions(+), 114 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ixgbe/ixgbe_dcb_nl.c
index 293ff06..b229feb 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
+++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
@@ -125,9 +125,7 @@ static u8 ixgbe_dcbnl_set_state(struct net_device *netdev, u8 state)
goto out;
}
- if (netif_running(netdev))
- netdev->netdev_ops->ndo_stop(netdev);
- ixgbe_clear_interrupt_scheme(adapter);
+ adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
switch (adapter->hw.mac.type) {
case ixgbe_mac_82598EB:
@@ -143,18 +141,12 @@ static u8 ixgbe_dcbnl_set_state(struct net_device *netdev, u8 state)
break;
}
- adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
- if (!netdev_get_num_tc(netdev))
- ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
+ ixgbe_setup_tc(netdev, MAX_TRAFFIC_CLASS);
} else {
/* Turn off DCB */
if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
goto out;
- if (netif_running(netdev))
- netdev->netdev_ops->ndo_stop(netdev);
- ixgbe_clear_interrupt_scheme(adapter);
-
adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
adapter->temp_dcb_cfg.pfc_mode_enable = false;
adapter->dcb_cfg.pfc_mode_enable = false;
@@ -167,13 +159,9 @@ static u8 ixgbe_dcbnl_set_state(struct net_device *netdev, u8 state)
default:
break;
}
-
ixgbe_setup_tc(netdev, 0);
}
- ixgbe_init_interrupt_scheme(adapter);
- if (netif_running(netdev))
- netdev->netdev_ops->ndo_open(netdev);
out:
return err;
}
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index 20467da..7e3850a 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -2815,8 +2815,8 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
{
struct ixgbe_hw *hw = &adapter->hw;
u32 rttdcs;
- u32 mask;
u32 reg;
+ u8 tcs = netdev_get_num_tc(adapter->netdev);
if (hw->mac.type == ixgbe_mac_82598EB)
return;
@@ -2827,28 +2827,27 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
IXGBE_WRITE_REG(hw, IXGBE_RTTDCS, rttdcs);
/* set transmit pool layout */
- mask = (IXGBE_FLAG_SRIOV_ENABLED | IXGBE_FLAG_DCB_ENABLED);
- switch (adapter->flags & mask) {
-
+ switch (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) {
case (IXGBE_FLAG_SRIOV_ENABLED):
IXGBE_WRITE_REG(hw, IXGBE_MTQC,
(IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF));
break;
+ default:
+ if (!tcs)
+ reg = IXGBE_MTQC_64Q_1PB;
+ else if (tcs <= 4)
+ reg = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
+ else
+ reg = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
- case (IXGBE_FLAG_DCB_ENABLED):
- /* We enable 8 traffic classes, DCB only */
- IXGBE_WRITE_REG(hw, IXGBE_MTQC,
- (IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ));
-
- /* Enable Security TX Buffer IFG for DCB */
- reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
- reg |= IXGBE_SECTX_DCB;
- IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
-
- break;
+ IXGBE_WRITE_REG(hw, IXGBE_MTQC, reg);
- default:
- IXGBE_WRITE_REG(hw, IXGBE_MTQC, IXGBE_MTQC_64Q_1PB);
+ /* Enable Security TX Buffer IFG for multiple pb */
+ if (tcs) {
+ reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+ reg |= IXGBE_SECTX_DCB;
+ IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+ }
break;
}
@@ -2939,7 +2938,7 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
u32 mrqc = 0, reta = 0;
u32 rxcsum;
int i, j;
- int mask;
+ u8 tcs = netdev_get_num_tc(adapter->netdev);
/* Fill out hash function seeds */
for (i = 0; i < 10; i++)
@@ -2961,33 +2960,28 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
rxcsum |= IXGBE_RXCSUM_PCSD;
IXGBE_WRITE_REG(hw, IXGBE_RXCSUM, rxcsum);
- if (adapter->hw.mac.type == ixgbe_mac_82598EB)
- mask = adapter->flags & IXGBE_FLAG_RSS_ENABLED;
- else
- mask = adapter->flags & (IXGBE_FLAG_RSS_ENABLED
-#ifdef CONFIG_IXGBE_DCB
- | IXGBE_FLAG_DCB_ENABLED
-#endif
- | IXGBE_FLAG_SRIOV_ENABLED
- );
-
- switch (mask) {
-#ifdef CONFIG_IXGBE_DCB
- case (IXGBE_FLAG_DCB_ENABLED | IXGBE_FLAG_RSS_ENABLED):
- mrqc = IXGBE_MRQC_RTRSS8TCEN;
- break;
- case (IXGBE_FLAG_DCB_ENABLED):
- mrqc = IXGBE_MRQC_RT8TCEN;
- break;
-#endif /* CONFIG_IXGBE_DCB */
- case (IXGBE_FLAG_RSS_ENABLED):
+ if (adapter->hw.mac.type == ixgbe_mac_82598EB &&
+ (adapter->flags & IXGBE_FLAG_RSS_ENABLED)) {
mrqc = IXGBE_MRQC_RSSEN;
- break;
- case (IXGBE_FLAG_SRIOV_ENABLED):
- mrqc = IXGBE_MRQC_VMDQEN;
- break;
- default:
- break;
+ } else {
+ int mask = adapter->flags & (IXGBE_FLAG_RSS_ENABLED
+ | IXGBE_FLAG_SRIOV_ENABLED);
+
+ switch (mask) {
+ case (IXGBE_FLAG_RSS_ENABLED):
+ if (!tcs)
+ mrqc = IXGBE_MRQC_RSSEN;
+ else if (tcs <= 4)
+ mrqc = IXGBE_MRQC_RTRSS4TCEN;
+ else
+ mrqc = IXGBE_MRQC_RTRSS8TCEN;
+ break;
+ case (IXGBE_FLAG_SRIOV_ENABLED):
+ mrqc = IXGBE_MRQC_VMDQEN;
+ break;
+ default:
+ break;
+ }
}
/* Perform hash on these packet types */
@@ -4461,14 +4455,17 @@ static inline bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter)
{
bool ret = false;
struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_DCB];
- int i, q;
+ int tcs = netdev_get_num_tc(adapter->netdev);
+ int max_q, i, q;
- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+ if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED) || !tcs)
return ret;
+ max_q = adapter->netdev->num_tx_queues / tcs;
+
f->indices = 0;
- for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
- q = min((int)num_online_cpus(), MAX_TRAFFIC_CLASS);
+ for (i = 0; i < tcs; i++) {
+ q = min((int)num_online_cpus(), max_q);
f->indices += q;
}
@@ -4680,55 +4677,6 @@ static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc,
}
}
-#define IXGBE_MAX_Q_PER_TC (IXGBE_MAX_DCB_INDICES / MAX_TRAFFIC_CLASS)
-
-/* ixgbe_setup_tc - routine to configure net_device for multiple traffic
- * classes.
- *
- * @netdev: net device to configure
- * @tc: number of traffic classes to enable
- */
-int ixgbe_setup_tc(struct net_device *dev, u8 tc)
-{
- int i;
- unsigned int q, offset = 0;
-
- if (!tc) {
- netdev_reset_tc(dev);
- } else {
- struct ixgbe_adapter *adapter = netdev_priv(dev);
-
- /* Hardware supports up to 8 traffic classes */
- if (tc > MAX_TRAFFIC_CLASS || netdev_set_num_tc(dev, tc))
- return -EINVAL;
-
- /* Partition Tx queues evenly amongst traffic classes */
- for (i = 0; i < tc; i++) {
- q = min((int)num_online_cpus(), IXGBE_MAX_Q_PER_TC);
- netdev_set_prio_tc_map(dev, i, i);
- netdev_set_tc_queue(dev, i, q, offset);
- offset += q;
- }
-
- /* This enables multiple traffic class support in the hardware
- * which defaults to strict priority transmission by default.
- * If traffic classes are already enabled perhaps through DCB
- * code path then existing configuration will be used.
- */
- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED) &&
- dev->dcbnl_ops && dev->dcbnl_ops->setdcbx) {
- struct ieee_ets ets = {
- .prio_tc = {0, 1, 2, 3, 4, 5, 6, 7},
- };
- u8 mode = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE;
-
- dev->dcbnl_ops->setdcbx(dev, mode);
- dev->dcbnl_ops->ieee_setets(dev, &ets);
- }
- }
- return 0;
-}
-
/**
* ixgbe_cache_ring_dcb - Descriptor ring to register mapping for DCB
* @adapter: board private structure to initialize
@@ -4742,7 +4690,7 @@ static inline bool ixgbe_cache_ring_dcb(struct ixgbe_adapter *adapter)
int i, j, k;
u8 num_tcs = netdev_get_num_tc(dev);
- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+ if (!num_tcs)
return false;
for (i = 0, k = 0; i < num_tcs; i++) {
@@ -7220,6 +7168,95 @@ static struct rtnl_link_stats64 *ixgbe_get_stats64(struct net_device *netdev,
return stats;
}
+/* ixgbe_validate_rtr - verify 802.1Qp to Rx packet buffer mapping is valid.
+ * @adapter: pointer to ixgbe_adapter
+ * @tc: number of traffic classes currently enabled
+ *
+ * Configure a valid 802.1Qp to Rx packet buffer mapping, i.e. confirm
+ * 802.1Q priority maps to a packet buffer that exists.
+ */
+static void ixgbe_validate_rtr(struct ixgbe_adapter *adapter, u8 tc)
+{
+ struct ixgbe_hw *hw = &adapter->hw;
+ u32 reg, rsave;
+ int i;
+
+ /* 82598 devices have a static priority-to-TC mapping that cannot
+ * be changed, so no validation is needed.
+ */
+ if (hw->mac.type == ixgbe_mac_82598EB)
+ return;
+
+ reg = IXGBE_READ_REG(hw, IXGBE_RTRUP2TC);
+ rsave = reg;
+
+ for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+ u8 up2tc = reg >> (i * IXGBE_RTRUP2TC_UP_SHIFT);
+
+ /* If up2tc is out of bounds default to zero */
+ if (up2tc > tc)
+ reg &= ~(0x7 << IXGBE_RTRUP2TC_UP_SHIFT);
+ }
+
+ if (reg != rsave)
+ IXGBE_WRITE_REG(hw, IXGBE_RTRUP2TC, reg);
+
+ return;
+}
+
+
+/* ixgbe_setup_tc - routine to configure net_device for multiple traffic
+ * classes.
+ *
+ * @netdev: net device to configure
+ * @tc: number of traffic classes to enable
+ */
+int ixgbe_setup_tc(struct net_device *dev, u8 tc)
+{
+ unsigned int q, i, offset = 0;
+ struct ixgbe_adapter *adapter = netdev_priv(dev);
+ struct ixgbe_hw *hw = &adapter->hw;
+ int max_q = adapter->netdev->num_tx_queues / tc;
+
+ /* If DCB is enabled do not remove traffic classes, multiple
+ * traffic classes are required to implement DCB
+ */
+ if (!tc && (adapter->flags & IXGBE_FLAG_DCB_ENABLED))
+ return 0;
+
+ /* Hardware supports up to 8 traffic classes */
+ if (tc > MAX_TRAFFIC_CLASS ||
+ (hw->mac.type == ixgbe_mac_82598EB && tc < MAX_TRAFFIC_CLASS))
+ return -EINVAL;
+
+ /* Hardware has to reinitialize queues and interrupts to
+ * match packet buffer alignment. Unfortunately, the
+ * hardware is not flexible enough to do this dynamically.
+ */
+ if (netif_running(dev))
+ ixgbe_close(dev);
+ ixgbe_clear_interrupt_scheme(adapter);
+
+ if (tc)
+ netdev_set_num_tc(dev, tc);
+ else
+ netdev_reset_tc(dev);
+
+ /* Partition Tx queues evenly amongst traffic classes */
+ for (i = 0; i < tc; i++) {
+ q = min((int)num_online_cpus(), max_q);
+ netdev_set_prio_tc_map(dev, i, i);
+ netdev_set_tc_queue(dev, i, q, offset);
+ offset += q;
+ }
+
+ ixgbe_init_interrupt_scheme(adapter);
+ ixgbe_validate_rtr(adapter, tc);
+ if (netif_running(dev))
+ ixgbe_open(dev);
+
+ return 0;
+}
static const struct net_device_ops ixgbe_netdev_ops = {
.ndo_open = ixgbe_open,
@@ -7240,9 +7277,7 @@ static const struct net_device_ops ixgbe_netdev_ops = {
.ndo_set_vf_tx_rate = ixgbe_ndo_set_vf_bw,
.ndo_get_vf_config = ixgbe_ndo_get_vf_config,
.ndo_get_stats64 = ixgbe_get_stats64,
-#ifdef CONFIG_IXGBE_DCB
.ndo_setup_tc = ixgbe_setup_tc,
-#endif
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = ixgbe_netpoll,
#endif
diff --git a/drivers/net/ixgbe/ixgbe_type.h b/drivers/net/ixgbe/ixgbe_type.h
index c0849a6..5455064 100644
--- a/drivers/net/ixgbe/ixgbe_type.h
+++ b/drivers/net/ixgbe/ixgbe_type.h
@@ -1881,6 +1881,7 @@ enum {
#define IXGBE_MTQC_32VF 0x8 /* 4 TX Queues per pool w/32VF's */
#define IXGBE_MTQC_64VF 0x4 /* 2 TX Queues per pool w/64VF's */
#define IXGBE_MTQC_8TC_8TQ 0xC /* 8 TC if RT_ENA or 8 TQ if VT_ENA */
+#define IXGBE_MTQC_4TC_4TQ 0x8 /* 4 TC if RT_ENA or 4 TQ if VT_ENA */
/* Receive Descriptor bit definitions */
#define IXGBE_RXD_STAT_DD 0x01 /* Descriptor Done */
--
1.7.5.2
* [net-next 05/13] ixgbe: DCB use existing TX and RX queues
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
The number of TX and RX queues allocated depends on the device
type, the currently enabled feature set, the number of online
CPUs, and various compile flags.
To enable DCB with multiple queues and allow it to coexist with
all the features currently implemented, a valid queue count has
to be set up. This is done at init time using the FDIR and RSS
max queue counts and allowing each TC to allocate a queue per
CPU.
DCB will now use available queues up to (8 x TCs). This is a
somewhat arbitrary cap, but it allows DCB to use up to 64 queues;
it is easy to increase later if needed. A worked example follows
below.
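For example (arithmetic sketch only), with dev->num_tx_queues = 64,
eight TCs, and four online CPUs:

	int per_tc_q = min(64 / 8, DCB_QUEUE_CAP);	/* min(8, 8) = 8 */
	int q = min(4, per_tc_q);			/* CPUs limit this to 4 */
	/* num_tx_queues = num_rx_queues = q * tcs = 32 */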
This is prep work to enable flow director with DCB. After this,
DCB can easily coexist with existing features and no longer
needs its own DCB feature ring.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe.h | 2 -
drivers/net/ixgbe/ixgbe_main.c | 107 ++++++++++++++++++---------------------
2 files changed, 49 insertions(+), 60 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe.h b/drivers/net/ixgbe/ixgbe.h
index e467b20..d5674fc 100644
--- a/drivers/net/ixgbe/ixgbe.h
+++ b/drivers/net/ixgbe/ixgbe.h
@@ -244,7 +244,6 @@ struct ixgbe_ring {
enum ixgbe_ring_f_enum {
RING_F_NONE = 0,
- RING_F_DCB,
RING_F_VMDQ, /* SR-IOV uses the same ring feature */
RING_F_RSS,
RING_F_FDIR,
@@ -255,7 +254,6 @@ enum ixgbe_ring_f_enum {
RING_F_ARRAY_SIZE /* must be last in enum set */
};
-#define IXGBE_MAX_DCB_INDICES 64
#define IXGBE_MAX_RSS_INDICES 16
#define IXGBE_MAX_VMDQ_INDICES 64
#define IXGBE_MAX_FDIR_INDICES 64
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index 7e3850a..3a88fb6 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -4417,72 +4417,72 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
return false;
- if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
-#ifdef CONFIG_IXGBE_DCB
- int tc;
- struct net_device *dev = adapter->netdev;
+ f->indices = min((int)num_online_cpus(), f->indices);
- tc = netdev_get_prio_tc_map(dev, adapter->fcoe.up);
- f->indices = dev->tc_to_txq[tc].count;
- f->mask = dev->tc_to_txq[tc].offset;
-#endif
- } else {
- f->indices = min((int)num_online_cpus(), f->indices);
-
- adapter->num_rx_queues = 1;
- adapter->num_tx_queues = 1;
+ adapter->num_rx_queues = 1;
+ adapter->num_tx_queues = 1;
- if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
- e_info(probe, "FCoE enabled with RSS\n");
- if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
- (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
- ixgbe_set_fdir_queues(adapter);
- else
- ixgbe_set_rss_queues(adapter);
- }
- /* adding FCoE rx rings to the end */
- f->mask = adapter->num_rx_queues;
- adapter->num_rx_queues += f->indices;
- adapter->num_tx_queues += f->indices;
+ if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
+ e_info(probe, "FCoE enabled with RSS\n");
+ if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) ||
+ (adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+ ixgbe_set_fdir_queues(adapter);
+ else
+ ixgbe_set_rss_queues(adapter);
}
+ /* adding FCoE rx rings to the end */
+ f->mask = adapter->num_rx_queues;
+ adapter->num_rx_queues += f->indices;
+ adapter->num_tx_queues += f->indices;
return true;
}
#endif /* IXGBE_FCOE */
+/* Artificial max queue cap per traffic class in DCB mode */
+#define DCB_QUEUE_CAP 8
+
#ifdef CONFIG_IXGBE_DCB
static inline bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter)
{
- bool ret = false;
- struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_DCB];
- int tcs = netdev_get_num_tc(adapter->netdev);
- int max_q, i, q;
+ int per_tc_q, q, i, offset = 0;
+ struct net_device *dev = adapter->netdev;
+ int tcs = netdev_get_num_tc(dev);
- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED) || !tcs)
- return ret;
+ if (!tcs)
+ return false;
- max_q = adapter->netdev->num_tx_queues / tcs;
+ /* Map queue offset and counts onto allocated tx queues */
+ per_tc_q = min(dev->num_tx_queues / tcs, (unsigned int)DCB_QUEUE_CAP);
+ q = min((int)num_online_cpus(), per_tc_q);
- f->indices = 0;
for (i = 0; i < tcs; i++) {
- q = min((int)num_online_cpus(), max_q);
- f->indices += q;
+ netdev_set_prio_tc_map(dev, i, i);
+ netdev_set_tc_queue(dev, i, q, offset);
+ offset += q;
}
- f->mask = 0x7 << 3;
- adapter->num_rx_queues = f->indices;
- adapter->num_tx_queues = f->indices;
- ret = true;
+ adapter->num_tx_queues = q * tcs;
+ adapter->num_rx_queues = q * tcs;
#ifdef IXGBE_FCOE
- /* FCoE enabled queues require special configuration done through
- * configure_fcoe() and others. Here we map FCoE indices onto the
- * DCB queue pairs allowing FCoE to own configuration later.
+ /* FCoE enabled queues require special configuration indexed
+ * by feature specific indices and mask. Here we map FCoE
+ * indices onto the DCB queue pairs allowing FCoE to own
+ * configuration later.
*/
- ixgbe_set_fcoe_queues(adapter);
+ if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) {
+ int tc;
+ struct ixgbe_ring_feature *f =
+ &adapter->ring_feature[RING_F_FCOE];
+
+ tc = netdev_get_prio_tc_map(dev, adapter->fcoe.up);
+ f->indices = dev->tc_to_txq[tc].count;
+ f->mask = dev->tc_to_txq[tc].offset;
+ }
#endif
- return ret;
+ return true;
}
#endif
@@ -5172,7 +5172,6 @@ static int __devinit ixgbe_sw_init(struct ixgbe_adapter *adapter)
rss = min(IXGBE_MAX_RSS_INDICES, (int)num_online_cpus());
adapter->ring_feature[RING_F_RSS].indices = rss;
adapter->flags |= IXGBE_FLAG_RSS_ENABLED;
- adapter->ring_feature[RING_F_DCB].indices = IXGBE_MAX_DCB_INDICES;
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
if (hw->device_id == IXGBE_DEV_ID_82598AT)
@@ -7213,10 +7212,8 @@ static void ixgbe_validate_rtr(struct ixgbe_adapter *adapter, u8 tc)
*/
int ixgbe_setup_tc(struct net_device *dev, u8 tc)
{
- unsigned int q, i, offset = 0;
struct ixgbe_adapter *adapter = netdev_priv(dev);
struct ixgbe_hw *hw = &adapter->hw;
- int max_q = adapter->netdev->num_tx_queues / tc;
/* If DCB is anabled do not remove traffic classes, multiple
* traffic classes are required to implement DCB
@@ -7242,14 +7239,6 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
else
netdev_reset_tc(dev);
- /* Partition Tx queues evenly amongst traffic classes */
- for (i = 0; i < tc; i++) {
- q = min((int)num_online_cpus(), max_q);
- netdev_set_prio_tc_map(dev, i, i);
- netdev_set_tc_queue(dev, i, q, offset);
- offset += q;
- }
-
ixgbe_init_interrupt_scheme(adapter);
ixgbe_validate_rtr(adapter, tc);
if (netif_running(dev))
@@ -7436,14 +7425,16 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev,
pci_set_master(pdev);
pci_save_state(pdev);
+#ifdef CONFIG_IXGBE_DCB
+ indices *= MAX_TRAFFIC_CLASS;
+#endif
+
if (ii->mac == ixgbe_mac_82598EB)
indices = min_t(unsigned int, indices, IXGBE_MAX_RSS_INDICES);
else
indices = min_t(unsigned int, indices, IXGBE_MAX_FDIR_INDICES);
-#if defined(CONFIG_DCB)
- indices = max_t(unsigned int, indices, IXGBE_MAX_DCB_INDICES);
-#elif defined(IXGBE_FCOE)
+#ifdef IXGBE_FCOE
indices += min_t(unsigned int, num_possible_cpus(),
IXGBE_MAX_FCOE_INDICES);
#endif
--
1.7.5.2
* [net-next 06/13] ixgbe: DCB 82598 devices, tx_idx and rx_idx swapped
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
The tx_idx and rx_idx values are swapped on 82598 devices
with DCB enabled: 82598 maps four Tx queues but eight Rx queues
to each traffic class, so the Tx register index needs the shift
of 2 and the Rx register index the shift of 3.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe_main.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index 3a88fb6..f829d36 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -4636,8 +4636,8 @@ static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc,
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
- *tx = tc << 3;
- *rx = tc << 2;
+ *tx = tc << 2;
+ *rx = tc << 3;
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
--
1.7.5.2
* [net-next 07/13] ixgbe: setup redirection table for multiple packet buffers
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
Set up the RSS redirection table to be compatible with multiple
packet buffers. Currently this works on 82599 devices because
the RSS redirection index is masked by the number of queues per
packet buffer.
This sets the cap on the RSS table to maxq, as sketched below.
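To illustrate (sketch with assumed values): the 128-entry redirection
table is filled as a sliding window modulo maxq, so with 16 RSS
indices, 32 Tx queues, and 8 TCs the window shrinks to the per-TC
queue count:

	int maxq = 16;					/* RING_F_RSS indices */
	int tcs = 8, num_tx_queues = 32;

	if (tcs)
		maxq = min(maxq, num_tx_queues / tcs);	/* min(16, 4) = 4 */

	/* RETA entries then cycle 0,1,2,3,0,1,... so every hashed flow
	 * maps to a queue index that exists within each packet buffer. */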
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe_main.c | 6 +++++-
1 files changed, 5 insertions(+), 1 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index f829d36..e81dc85 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -2939,6 +2939,10 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
u32 rxcsum;
int i, j;
u8 tcs = netdev_get_num_tc(adapter->netdev);
+ int maxq = adapter->ring_feature[RING_F_RSS].indices;
+
+ if (tcs)
+ maxq = min(maxq, adapter->num_tx_queues / tcs);
/* Fill out hash function seeds */
for (i = 0; i < 10; i++)
@@ -2946,7 +2950,7 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
/* Fill out redirection table */
for (i = 0, j = 0; i < 128; i++, j++) {
- if (j == adapter->ring_feature[RING_F_RSS].indices)
+ if (j == maxq)
j = 0;
/* reta = 4-byte sliding window of
* 0x00..(indices-1)(indices-1)00..etc. */
--
1.7.5.2
* [net-next 08/13] ixgbe: fix bit mask for DCB version
From: Jeff Kirsher @ 2011-06-11 3:02 UTC
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
This bit mask is wrong: DCB_CAP_DCBX_HOST is always set, so the
OR'd check always passed. It was missed up until now because
lldpad reprograms the device on a link event. The check is still
wrong, however, and it is best not to leave the device
misconfigured for the window immediately following ixgbe_up().
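A minimal illustration of why the old test always passed (sketch
only):

	/* dcbx_cap always carries DCB_CAP_DCBX_HOST, so an OR'd mask test
	 * is non-zero even when only IEEE DCBX has been negotiated.
	 */
	u8 dcbx_cap = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE;

	if (dcbx_cap & (DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE))
		; /* old check: wrongly taken in IEEE mode */

	if (dcbx_cap & DCB_CAP_DCBX_VER_CEE)
		; /* new check: correctly skipped in IEEE mode */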
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe_main.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index e81dc85..5caf42a 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -3745,7 +3745,7 @@ static void ixgbe_configure_dcb(struct ixgbe_adapter *adapter)
hw->mac.ops.set_vfta(&adapter->hw, 0, 0, true);
/* reconfigure the hardware */
- if (adapter->dcbx_cap & (DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE)) {
+ if (adapter->dcbx_cap & DCB_CAP_DCBX_VER_CEE) {
#ifdef CONFIG_FCOE
if (adapter->netdev->features & NETIF_F_FCOE_MTU)
max_frame = max(max_frame, IXGBE_FCOE_JUMBO_FRAME_SIZE);
--
1.7.5.2
* [net-next 09/13] ixgbe: DCB and perfect filters can coexist
2011-06-11 3:02 [net-next 00/13][pull request] Intel Wired LAN Driver Update Jeff Kirsher
` (7 preceding siblings ...)
2011-06-11 3:02 ` [net-next 08/13] ixgbe: fix bit mask for DCB version Jeff Kirsher
@ 2011-06-11 3:02 ` Jeff Kirsher
2011-06-11 3:02 ` [net-next 10/13] ixgbe: DCB, remove unneeded ixgbe_dcb_txq_to_tc() routine Jeff Kirsher
` (3 subsequent siblings)
12 siblings, 0 replies; 19+ messages in thread
From: Jeff Kirsher @ 2011-06-11 3:02 UTC (permalink / raw)
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
Flow director's perfect filters feature can now coexist with DCB.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ixgbe/ixgbe_dcb_nl.c | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ixgbe/ixgbe_dcb_nl.c
index b229feb..08c7aeb 100644
--- a/drivers/net/ixgbe/ixgbe_dcb_nl.c
+++ b/drivers/net/ixgbe/ixgbe_dcb_nl.c
@@ -135,7 +135,6 @@ static u8 ixgbe_dcbnl_set_state(struct net_device *netdev, u8 state)
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
adapter->flags &= ~IXGBE_FLAG_FDIR_HASH_CAPABLE;
- adapter->flags &= ~IXGBE_FLAG_FDIR_PERFECT_CAPABLE;
break;
default:
break;
--
1.7.5.2
* [net-next 10/13] ixgbe: DCB, remove unneeded ixgbe_dcb_txq_to_tc() routine
2011-06-11 3:02 [net-next 00/13][pull request] Intel Wired LAN Driver Update Jeff Kirsher
` (8 preceding siblings ...)
2011-06-11 3:02 ` [net-next 09/13] ixgbe: DCB and perfect filters can coexist Jeff Kirsher
@ 2011-06-11 3:02 ` Jeff Kirsher
2011-06-11 3:02 ` [net-next 11/13] ixgbe: add support for Dell CEM Jeff Kirsher
` (2 subsequent siblings)
12 siblings, 0 replies; 19+ messages in thread
From: Jeff Kirsher @ 2011-06-11 3:02 UTC (permalink / raw)
To: davem; +Cc: John Fastabend, netdev, gospo, Jeff Kirsher
From: John Fastabend <john.r.fastabend@intel.com>
The ixgbe_dcb_txq_to_tc() routine was used to map TX rings to a DCB
traffic class. Now that a tx_ring has a DCB traffic class associated
with it, this routine is no longer needed.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
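A worked example of the arithmetic being removed, assuming the 82599
8-TC ring layout (32/32/16/16/8/8/8/8 rings per traffic class):
	/* reg_idx = 70, dcb_i = 8:
	 *   tc = ((70 & 0x1f) + 0x20) * 8  ->  (6 + 32) * 8 = 304
	 *   tc >>= 9 - (70 >> 5)           ->  304 >> 7     = 2
	 * ring 70 indeed falls in the TC2 range (rings 64..79) */
Caching the class in tx_ring->dcb_tc when the rings are configured makes
recomputing this in the xoff accounting path unnecessary.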
drivers/net/ixgbe/ixgbe_main.c | 58 +---------------------------------------
1 files changed, 1 insertions(+), 57 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index 5caf42a..295ab6d 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -665,62 +665,6 @@ void ixgbe_unmap_and_free_tx_resource(struct ixgbe_ring *tx_ring,
/* tx_buffer_info must be completely set up in the transmit path */
}
-/**
- * ixgbe_dcb_txq_to_tc - convert a reg index to a traffic class
- * @adapter: driver private struct
- * @index: reg idx of queue to query (0-127)
- *
- * Helper function to determine the traffic index for a particular
- * register index.
- *
- * Returns : a tc index for use in range 0-7, or 0-3
- */
-static u8 ixgbe_dcb_txq_to_tc(struct ixgbe_adapter *adapter, u8 reg_idx)
-{
- int tc = -1;
- int dcb_i = netdev_get_num_tc(adapter->netdev);
-
- /* if DCB is not enabled the queues have no TC */
- if (!(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
- return tc;
-
- /* check valid range */
- if (reg_idx >= adapter->hw.mac.max_tx_queues)
- return tc;
-
- switch (adapter->hw.mac.type) {
- case ixgbe_mac_82598EB:
- tc = reg_idx >> 2;
- break;
- default:
- if (dcb_i != 4 && dcb_i != 8)
- break;
-
- /* if VMDq is enabled the lowest order bits determine TC */
- if (adapter->flags & (IXGBE_FLAG_SRIOV_ENABLED |
- IXGBE_FLAG_VMDQ_ENABLED)) {
- tc = reg_idx & (dcb_i - 1);
- break;
- }
-
- /*
- * Convert the reg_idx into the correct TC. This bitmask
- * targets the last full 32 ring traffic class and assigns
- * it a value of 1. From there the rest of the rings are
- * based on shifting the mask further up to include the
- * reg_idx / 16 and then reg_idx / 8. It assumes dcB_i
- * will only ever be 8 or 4 and that reg_idx will never
- * be greater then 128. The code without the power of 2
- * optimizations would be:
- * (((reg_idx % 32) + 32) * dcb_i) >> (9 - reg_idx / 32)
- */
- tc = ((reg_idx & 0X1F) + 0x20) * dcb_i;
- tc >>= 9 - (reg_idx >> 5);
- }
-
- return tc;
-}
-
static void ixgbe_update_xoff_received(struct ixgbe_adapter *adapter)
{
struct ixgbe_hw *hw = &adapter->hw;
@@ -766,7 +710,7 @@ static void ixgbe_update_xoff_received(struct ixgbe_adapter *adapter)
/* disarm tx queues that have received xoff frames */
for (i = 0; i < adapter->num_tx_queues; i++) {
struct ixgbe_ring *tx_ring = adapter->tx_ring[i];
- u32 tc = ixgbe_dcb_txq_to_tc(adapter, tx_ring->reg_idx);
+ u8 tc = tx_ring->dcb_tc;
if (xoff[tc])
clear_bit(__IXGBE_HANG_CHECK_ARMED, &tx_ring->state);
--
1.7.5.2
* [net-next 11/13] ixgbe: add support for Dell CEM
2011-06-11 3:02 [net-next 00/13][pull request] Intel Wired LAN Driver Update Jeff Kirsher
` (9 preceding siblings ...)
2011-06-11 3:02 ` [net-next 10/13] ixgbe: DCB, remove unneeded ixgbe_dcb_txq_to_tc() routine Jeff Kirsher
@ 2011-06-11 3:02 ` Jeff Kirsher
2011-06-11 3:02 ` [net-next 12/13] ixgbe: setup per CPU PCI pool for FCoE DDP Jeff Kirsher
2011-06-11 3:02 ` [net-next 13/13] ixgbe: use per NUMA node lock " Jeff Kirsher
12 siblings, 0 replies; 19+ messages in thread
From: Jeff Kirsher @ 2011-06-11 3:02 UTC (permalink / raw)
To: davem; +Cc: Emil Tantilov, netdev, gospo, Jeff Kirsher
From: Emil Tantilov <emil.s.tantilov@intel.com>
This patch adds support for Dell CEM (Comprehensive Embedded Management).
This consists of informing the management firmware of the driver version
during probe on 82599 and X540 HW.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Evan Swanson <evan.swanson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
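The driver-info command is a small fixed-layout message whose one-byte
checksum makes the covered bytes sum to zero (mod 256). A standalone
user-space sketch of that property, mirroring the structures and
constants added below (field values are arbitrary examples):
	#include <assert.h>
	#include <stdint.h>
	#include <string.h>

	/* mirrors struct ixgbe_hic_hdr / ixgbe_hic_drv_info from this patch */
	struct hic_hdr  { uint8_t cmd, buf_len, cmd_resv, checksum; };
	struct drv_info {
		struct hic_hdr hdr;
		uint8_t port_num, ver_sub, ver_build, ver_min, ver_maj, pad;
		uint16_t pad2;
	};

	static uint8_t calc_checksum(const uint8_t *buf, uint32_t len)
	{
		uint8_t sum = 0;

		while (len--)
			sum += *buf++;
		return (uint8_t)(0 - sum);	/* two's complement of byte sum */
	}

	int main(void)
	{
		struct drv_info cmd;
		uint32_t i, covered = 4 + 0x5;	/* FW_CEM_HDR_LEN + INFO_LEN */
		uint8_t sum = 0;

		memset(&cmd, 0, sizeof(cmd));
		cmd.hdr.cmd = 0xDD;	/* FW_CEM_CMD_DRIVER_INFO */
		cmd.hdr.buf_len = 0x5;	/* FW_CEM_CMD_DRIVER_INFO_LEN */
		cmd.ver_maj = 3;	/* arbitrary example version */
		cmd.ver_min = 4;
		cmd.hdr.checksum = calc_checksum((uint8_t *)&cmd, covered);

		/* firmware-side validation: covered bytes now sum to zero */
		for (i = 0; i < covered; i++)
			sum += ((uint8_t *)&cmd)[i];
		assert(sum == 0);
		return 0;
	}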
drivers/net/ixgbe/ixgbe_82598.c | 1 +
drivers/net/ixgbe/ixgbe_82599.c | 1 +
drivers/net/ixgbe/ixgbe_common.c | 174 ++++++++++++++++++++++++++++++++++++++
drivers/net/ixgbe/ixgbe_common.h | 2 +
drivers/net/ixgbe/ixgbe_main.c | 4 +
drivers/net/ixgbe/ixgbe_type.h | 46 ++++++++++
drivers/net/ixgbe/ixgbe_x540.c | 1 +
7 files changed, 229 insertions(+), 0 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_82598.c b/drivers/net/ixgbe/ixgbe_82598.c
index bb417d7..0d4e382 100644
--- a/drivers/net/ixgbe/ixgbe_82598.c
+++ b/drivers/net/ixgbe/ixgbe_82598.c
@@ -1316,6 +1316,7 @@ static struct ixgbe_mac_operations mac_ops_82598 = {
.clear_vfta = &ixgbe_clear_vfta_82598,
.set_vfta = &ixgbe_set_vfta_82598,
.fc_enable = &ixgbe_fc_enable_82598,
+ .set_fw_drv_ver = NULL,
.acquire_swfw_sync = &ixgbe_acquire_swfw_sync,
.release_swfw_sync = &ixgbe_release_swfw_sync,
};
diff --git a/drivers/net/ixgbe/ixgbe_82599.c b/drivers/net/ixgbe/ixgbe_82599.c
index 324a505..4a6826b 100644
--- a/drivers/net/ixgbe/ixgbe_82599.c
+++ b/drivers/net/ixgbe/ixgbe_82599.c
@@ -2126,6 +2126,7 @@ static struct ixgbe_mac_operations mac_ops_82599 = {
.clear_vfta = &ixgbe_clear_vfta_generic,
.set_vfta = &ixgbe_set_vfta_generic,
.fc_enable = &ixgbe_fc_enable_generic,
+ .set_fw_drv_ver = &ixgbe_set_fw_drv_ver_generic,
.init_uta_tables = &ixgbe_init_uta_tables_generic,
.setup_sfp = &ixgbe_setup_sfp_modules_82599,
.set_mac_anti_spoofing = &ixgbe_set_mac_anti_spoofing,
diff --git a/drivers/net/ixgbe/ixgbe_common.c b/drivers/net/ixgbe/ixgbe_common.c
index cc2a4a1..777051f 100644
--- a/drivers/net/ixgbe/ixgbe_common.c
+++ b/drivers/net/ixgbe/ixgbe_common.c
@@ -3333,3 +3333,177 @@ void ixgbe_set_rxpba_generic(struct ixgbe_hw *hw,
IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
}
}
+
+/**
+ * ixgbe_calculate_checksum - Calculate checksum for buffer
+ * @buffer: pointer to EEPROM
+ * @length: size of EEPROM to calculate a checksum for
+ * Calculates the checksum for a buffer of the specified length. The
+ * calculated checksum is returned.
+ **/
+static u8 ixgbe_calculate_checksum(u8 *buffer, u32 length)
+{
+ u32 i;
+ u8 sum = 0;
+
+ if (!buffer)
+ return 0;
+
+ for (i = 0; i < length; i++)
+ sum += buffer[i];
+
+ return (u8) (0 - sum);
+}
+
+/**
+ * ixgbe_host_interface_command - Issue command to manageability block
+ * @hw: pointer to the HW structure
+ * @buffer: contains the command to write and where the return status will
+ * be placed
+ * @length: length of buffer, must be a multiple of 4 bytes
+ *
+ * Communicates with the manageability block. On success return 0
+ * else return IXGBE_ERR_HOST_INTERFACE_COMMAND.
+ **/
+static s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, u8 *buffer,
+ u32 length)
+{
+ u32 hicr, i;
+ u32 hdr_size = sizeof(struct ixgbe_hic_hdr);
+ u8 buf_len, dword_len;
+
+ s32 ret_val = 0;
+
+ if (length == 0 || length & 0x3 ||
+ length > IXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
+ hw_dbg(hw, "Buffer length failure.\n");
+ ret_val = IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto out;
+ }
+
+ /* Check that the host interface is enabled. */
+ hicr = IXGBE_READ_REG(hw, IXGBE_HICR);
+ if ((hicr & IXGBE_HICR_EN) == 0) {
+ hw_dbg(hw, "IXGBE_HOST_EN bit disabled.\n");
+ ret_val = IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto out;
+ }
+
+ /* Calculate length in DWORDs */
+ dword_len = length >> 2;
+
+ /*
+ * The device driver writes the relevant command block
+ * into the ram area.
+ */
+ for (i = 0; i < dword_len; i++)
+ IXGBE_WRITE_REG_ARRAY(hw, IXGBE_FLEX_MNG,
+ i, *((u32 *)buffer + i));
+
+ /* Setting this bit tells the ARC that a new command is pending. */
+ IXGBE_WRITE_REG(hw, IXGBE_HICR, hicr | IXGBE_HICR_C);
+
+ for (i = 0; i < IXGBE_HI_COMMAND_TIMEOUT; i++) {
+ hicr = IXGBE_READ_REG(hw, IXGBE_HICR);
+ if (!(hicr & IXGBE_HICR_C))
+ break;
+ usleep_range(1000, 2000);
+ }
+
+ /* Check command successful completion. */
+ if (i == IXGBE_HI_COMMAND_TIMEOUT ||
+ (!(IXGBE_READ_REG(hw, IXGBE_HICR) & IXGBE_HICR_SV))) {
+ hw_dbg(hw, "Command has failed with no status valid.\n");
+ ret_val = IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto out;
+ }
+
+ /* Calculate length in DWORDs */
+ dword_len = hdr_size >> 2;
+
+ /* first pull in the header so we know the buffer length */
+ for (i = 0; i < dword_len; i++)
+ *((u32 *)buffer + i) =
+ IXGBE_READ_REG_ARRAY(hw, IXGBE_FLEX_MNG, i);
+
+ /* If there is anything in the data position, pull it in */
+ buf_len = ((struct ixgbe_hic_hdr *)buffer)->buf_len;
+ if (buf_len == 0)
+ goto out;
+
+ if (length < (buf_len + hdr_size)) {
+ hw_dbg(hw, "Buffer not large enough for reply message.\n");
+ ret_val = IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto out;
+ }
+
+ /* Calculate length in DWORDs, rounding up for partial dwords */
+ dword_len = (buf_len + 3) >> 2;
+
+ /* Pull in the rest of the buffer (i is where we left off) */
+ for (; i <= dword_len; i++)
+ *((u32 *)buffer + i) =
+ IXGBE_READ_REG_ARRAY(hw, IXGBE_FLEX_MNG, i);
+
+out:
+ return ret_val;
+}
+
+/**
+ * ixgbe_set_fw_drv_ver_generic - Sends driver version to firmware
+ * @hw: pointer to the HW structure
+ * @maj: driver version major number
+ * @min: driver version minor number
+ * @build: driver version build number
+ * @sub: driver version sub build number
+ *
+ * Sends driver version number to firmware through the manageability
+ * block. On success return 0
+ * else returns IXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+ * semaphore or IXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
+ u8 build, u8 sub)
+{
+ struct ixgbe_hic_drv_info fw_cmd;
+ int i;
+ s32 ret_val = 0;
+
+ if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM) != 0) {
+ ret_val = IXGBE_ERR_SWFW_SYNC;
+ goto out;
+ }
+
+ fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO;
+ fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN;
+ fw_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ fw_cmd.port_num = (u8)hw->bus.func;
+ fw_cmd.ver_maj = maj;
+ fw_cmd.ver_min = min;
+ fw_cmd.ver_build = build;
+ fw_cmd.ver_sub = sub;
+ fw_cmd.hdr.checksum = 0;
+ fw_cmd.hdr.checksum = ixgbe_calculate_checksum((u8 *)&fw_cmd,
+ (FW_CEM_HDR_LEN + fw_cmd.hdr.buf_len));
+ fw_cmd.pad = 0;
+ fw_cmd.pad2 = 0;
+
+ for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+ ret_val = ixgbe_host_interface_command(hw, (u8 *)&fw_cmd,
+ sizeof(fw_cmd));
+ if (ret_val != 0)
+ continue;
+
+ if (fw_cmd.hdr.cmd_or_resp.ret_status ==
+ FW_CEM_RESP_STATUS_SUCCESS)
+ ret_val = 0;
+ else
+ ret_val = IXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+ break;
+ }
+
+ hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM);
+out:
+ return ret_val;
+}
diff --git a/drivers/net/ixgbe/ixgbe_common.h b/drivers/net/ixgbe/ixgbe_common.h
index 32a454f..f24fd64 100644
--- a/drivers/net/ixgbe/ixgbe_common.h
+++ b/drivers/net/ixgbe/ixgbe_common.h
@@ -99,6 +99,8 @@ s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index);
void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int pf);
void ixgbe_set_vlan_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf);
s32 ixgbe_get_device_caps_generic(struct ixgbe_hw *hw, u16 *device_caps);
+s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
+ u8 build, u8 sub);
void ixgbe_set_rxpba_generic(struct ixgbe_hw *hw, int num_pb,
u32 headroom, int strategy);
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index 295ab6d..4cd66ae 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -7674,6 +7674,10 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev,
ixgbe_vf_configuration(pdev, (i | 0x10000000));
}
+ /* Inform firmware of driver version */
+ if (hw->mac.ops.set_fw_drv_ver)
+ hw->mac.ops.set_fw_drv_ver(hw, MAJ, MIN, BUILD, KFIX);
+
/* add san mac addr to netdev */
ixgbe_add_sanmac_netdev(netdev);
diff --git a/drivers/net/ixgbe/ixgbe_type.h b/drivers/net/ixgbe/ixgbe_type.h
index 5455064..9a499a6 100644
--- a/drivers/net/ixgbe/ixgbe_type.h
+++ b/drivers/net/ixgbe/ixgbe_type.h
@@ -707,6 +707,13 @@
#define IXGBE_HFDR 0x15FE8
#define IXGBE_FLEX_MNG 0x15800 /* 0x15800 - 0x15EFC */
+#define IXGBE_HICR_EN 0x01 /* Enable bit - RO */
+/* Driver sets this bit when done to put command in RAM */
+#define IXGBE_HICR_C 0x02
+#define IXGBE_HICR_SV 0x04 /* Status Validity */
+#define IXGBE_HICR_FW_RESET_ENABLE 0x40
+#define IXGBE_HICR_FW_RESET 0x80
+
/* PCI-E registers */
#define IXGBE_GCR 0x11000
#define IXGBE_GTV 0x11004
@@ -2124,6 +2131,41 @@ enum ixgbe_fdir_pballoc_type {
#define IXGBE_FDIR_INIT_DONE_POLL 10
#define IXGBE_FDIRCMD_CMD_POLL 10
+/* Manageability Host Interface defines */
+#define IXGBE_HI_MAX_BLOCK_BYTE_LENGTH 1792 /* Num of bytes in range */
+#define IXGBE_HI_MAX_BLOCK_DWORD_LENGTH 448 /* Num of dwords in range */
+#define IXGBE_HI_COMMAND_TIMEOUT 500 /* Process HI command limit */
+
+/* CEM Support */
+#define FW_CEM_HDR_LEN 0x4
+#define FW_CEM_CMD_DRIVER_INFO 0xDD
+#define FW_CEM_CMD_DRIVER_INFO_LEN 0x5
+#define FW_CEM_CMD_RESERVED 0x0
+#define FW_CEM_MAX_RETRIES 3
+#define FW_CEM_RESP_STATUS_SUCCESS 0x1
+
+/* Host Interface Command Structures */
+struct ixgbe_hic_hdr {
+ u8 cmd;
+ u8 buf_len;
+ union {
+ u8 cmd_resv;
+ u8 ret_status;
+ } cmd_or_resp;
+ u8 checksum;
+};
+
+struct ixgbe_hic_drv_info {
+ struct ixgbe_hic_hdr hdr;
+ u8 port_num;
+ u8 ver_sub;
+ u8 ver_build;
+ u8 ver_min;
+ u8 ver_maj;
+ u8 pad; /* end spacing to ensure length is mult. of dword */
+ u16 pad2; /* end spacing to ensure length is mult. of dword */
+};
+
/* Transmit Descriptor - Advanced */
union ixgbe_adv_tx_desc {
struct {
@@ -2663,6 +2705,9 @@ struct ixgbe_mac_operations {
/* Flow Control */
s32 (*fc_enable)(struct ixgbe_hw *, s32);
+
+ /* Manageability interface */
+ s32 (*set_fw_drv_ver)(struct ixgbe_hw *, u8, u8, u8, u8);
};
struct ixgbe_phy_operations {
@@ -2832,6 +2877,7 @@ struct ixgbe_info {
#define IXGBE_ERR_SFP_SETUP_NOT_COMPLETE -30
#define IXGBE_ERR_PBA_SECTION -31
#define IXGBE_ERR_INVALID_ARGUMENT -32
+#define IXGBE_ERR_HOST_INTERFACE_COMMAND -33
#define IXGBE_NOT_IMPLEMENTED 0x7FFFFFFF
#endif /* _IXGBE_TYPE_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_x540.c b/drivers/net/ixgbe/ixgbe_x540.c
index fa566ed..bec30ed 100644
--- a/drivers/net/ixgbe/ixgbe_x540.c
+++ b/drivers/net/ixgbe/ixgbe_x540.c
@@ -894,6 +894,7 @@ static struct ixgbe_mac_operations mac_ops_X540 = {
.clear_vfta = &ixgbe_clear_vfta_generic,
.set_vfta = &ixgbe_set_vfta_generic,
.fc_enable = &ixgbe_fc_enable_generic,
+ .set_fw_drv_ver = &ixgbe_set_fw_drv_ver_generic,
.init_uta_tables = &ixgbe_init_uta_tables_generic,
.setup_sfp = NULL,
.set_mac_anti_spoofing = &ixgbe_set_mac_anti_spoofing,
--
1.7.5.2
* [net-next 12/13] ixgbe: setup per CPU PCI pool for FCoE DDP
2011-06-11 3:02 [net-next 00/13][pull request] Intel Wired LAN Driver Update Jeff Kirsher
` (10 preceding siblings ...)
2011-06-11 3:02 ` [net-next 11/13] ixgbe: add support for Dell CEM Jeff Kirsher
@ 2011-06-11 3:02 ` Jeff Kirsher
2011-06-11 3:02 ` [net-next 13/13] ixgbe: use per NUMA node lock " Jeff Kirsher
12 siblings, 0 replies; 19+ messages in thread
From: Jeff Kirsher @ 2011-06-11 3:02 UTC (permalink / raw)
To: davem; +Cc: Vasu Dev, netdev, gospo, Jeff Kirsher
From: Vasu Dev <vasu.dev@intel.com>
Currently a single PCI pool is used across all CPUs, and that doesn't
scale up as the number of CPUs increases, so this patch adds a per-CPU
PCI pool for setting up the udl. This aligns well with the FCoE stack,
which already has per-CPU exchange locking.
Adds per-CPU PCI pool allocation and free in ixgbe_fcoe_ddp_pools_alloc
and ixgbe_fcoe_ddp_pools_free, and uses the CPU-specific pool during DDP
setup.
Rearranged the ixgbe_fcoe struct with pahole to have fewer holes while
adding the pools pointer.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
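The hot path boils down to the standard percpu idiom: pin the current
CPU, take that CPU's pool, and unpin once the allocation outcome has
been recorded. Condensed from the hunks below (error handling elided):
	struct pci_pool *pool;

	pool = *per_cpu_ptr(fcoe->pool, get_cpu());	/* get_cpu() pins us */
	ddp->udl = pci_pool_alloc(pool, GFP_ATOMIC, &ddp->udp);
	if (ddp->udl)
		ddp->pool = pool;	/* the free may run on another CPU */
	put_cpu();
Recording ddp->pool is what lets ixgbe_fcoe_ddp_put() free the block
back to the pool it came from, even if it runs on a different CPU.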
drivers/net/ixgbe/ixgbe_fcoe.c | 105 ++++++++++++++++++++++++++++-----------
drivers/net/ixgbe/ixgbe_fcoe.h | 13 +++--
2 files changed, 82 insertions(+), 36 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_fcoe.c b/drivers/net/ixgbe/ixgbe_fcoe.c
index 0592072..f5f39ed 100644
--- a/drivers/net/ixgbe/ixgbe_fcoe.c
+++ b/drivers/net/ixgbe/ixgbe_fcoe.c
@@ -128,7 +128,11 @@ int ixgbe_fcoe_ddp_put(struct net_device *netdev, u16 xid)
if (ddp->sgl)
pci_unmap_sg(adapter->pdev, ddp->sgl, ddp->sgc,
DMA_FROM_DEVICE);
- pci_pool_free(fcoe->pool, ddp->udl, ddp->udp);
+ if (ddp->pool) {
+ pci_pool_free(ddp->pool, ddp->udl, ddp->udp);
+ ddp->pool = NULL;
+ }
+
ixgbe_fcoe_clear_ddp(ddp);
out_ddp_put:
@@ -163,6 +167,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
unsigned int thislen = 0;
u32 fcbuff, fcdmarw, fcfltrw, fcrxctl;
dma_addr_t addr = 0;
+ struct pci_pool *pool;
if (!netdev || !sgl)
return 0;
@@ -199,12 +204,14 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
return 0;
}
- /* alloc the udl from our ddp pool */
- ddp->udl = pci_pool_alloc(fcoe->pool, GFP_ATOMIC, &ddp->udp);
+ /* alloc the udl from per cpu ddp pool */
+ pool = *per_cpu_ptr(fcoe->pool, get_cpu());
+ ddp->udl = pci_pool_alloc(pool, GFP_ATOMIC, &ddp->udp);
if (!ddp->udl) {
e_err(drv, "failed allocated ddp context\n");
goto out_noddp_unmap;
}
+ ddp->pool = pool;
ddp->sgl = sgl;
ddp->sgc = sgc;
@@ -268,6 +275,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
j++;
lastsize = 1;
}
+ put_cpu();
fcbuff = (IXGBE_FCBUFF_4KB << IXGBE_FCBUFF_BUFFSIZE_SHIFT);
fcbuff |= ((j & 0xff) << IXGBE_FCBUFF_BUFFCNT_SHIFT);
@@ -311,11 +319,12 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
return 1;
out_noddp_free:
- pci_pool_free(fcoe->pool, ddp->udl, ddp->udp);
+ pci_pool_free(pool, ddp->udl, ddp->udp);
ixgbe_fcoe_clear_ddp(ddp);
out_noddp_unmap:
pci_unmap_sg(adapter->pdev, sgl, sgc, DMA_FROM_DEVICE);
+ put_cpu();
return 0;
}
@@ -585,6 +594,46 @@ int ixgbe_fso(struct ixgbe_adapter *adapter,
return skb_is_gso(skb);
}
+static void ixgbe_fcoe_ddp_pools_free(struct ixgbe_fcoe *fcoe)
+{
+ unsigned int cpu;
+ struct pci_pool **pool;
+
+ for_each_possible_cpu(cpu) {
+ pool = per_cpu_ptr(fcoe->pool, cpu);
+ if (*pool)
+ pci_pool_destroy(*pool);
+ }
+ free_percpu(fcoe->pool);
+ fcoe->pool = NULL;
+}
+
+static void ixgbe_fcoe_ddp_pools_alloc(struct ixgbe_adapter *adapter)
+{
+ struct ixgbe_fcoe *fcoe = &adapter->fcoe;
+ unsigned int cpu;
+ struct pci_pool **pool;
+ char pool_name[32];
+
+ fcoe->pool = alloc_percpu(struct pci_pool *);
+ if (!fcoe->pool)
+ return;
+
+ /* allocate pci pool for each cpu */
+ for_each_possible_cpu(cpu) {
+ snprintf(pool_name, 32, "ixgbe_fcoe_ddp_%d", cpu);
+ pool = per_cpu_ptr(fcoe->pool, cpu);
+ *pool = pci_pool_create(pool_name,
+ adapter->pdev, IXGBE_FCPTR_MAX,
+ IXGBE_FCPTR_ALIGN, PAGE_SIZE);
+ if (!*pool) {
+ e_err(drv, "failed to alloc DDP pool on cpu:%d\n", cpu);
+ ixgbe_fcoe_ddp_pools_free(fcoe);
+ return;
+ }
+ }
+}
+
/**
* ixgbe_configure_fcoe - configures registers for fcoe at start
* @adapter: ptr to ixgbe adapter
@@ -604,22 +653,20 @@ void ixgbe_configure_fcoe(struct ixgbe_adapter *adapter)
u32 up2tc;
#endif
- /* create the pool for ddp if not created yet */
if (!fcoe->pool) {
- /* allocate ddp pool */
- fcoe->pool = pci_pool_create("ixgbe_fcoe_ddp",
- adapter->pdev, IXGBE_FCPTR_MAX,
- IXGBE_FCPTR_ALIGN, PAGE_SIZE);
- if (!fcoe->pool)
- e_err(drv, "failed to allocated FCoE DDP pool\n");
-
spin_lock_init(&fcoe->lock);
+ ixgbe_fcoe_ddp_pools_alloc(adapter);
+ if (!fcoe->pool) {
+ e_err(drv, "failed to alloc percpu fcoe DDP pools\n");
+ return;
+ }
+
/* Extra buffer to be shared by all DDPs for HW work around */
fcoe->extra_ddp_buffer = kmalloc(IXGBE_FCBUFF_MIN, GFP_ATOMIC);
if (fcoe->extra_ddp_buffer == NULL) {
e_err(drv, "failed to allocated extra DDP buffer\n");
- goto out_extra_ddp_buffer_alloc;
+ goto out_ddp_pools;
}
fcoe->extra_ddp_buffer_dma =
@@ -630,7 +677,7 @@ void ixgbe_configure_fcoe(struct ixgbe_adapter *adapter)
if (dma_mapping_error(&adapter->pdev->dev,
fcoe->extra_ddp_buffer_dma)) {
e_err(drv, "failed to map extra DDP buffer\n");
- goto out_extra_ddp_buffer_dma;
+ goto out_extra_ddp_buffer;
}
}
@@ -684,11 +731,10 @@ void ixgbe_configure_fcoe(struct ixgbe_adapter *adapter)
return;
-out_extra_ddp_buffer_dma:
+out_extra_ddp_buffer:
kfree(fcoe->extra_ddp_buffer);
-out_extra_ddp_buffer_alloc:
- pci_pool_destroy(fcoe->pool);
- fcoe->pool = NULL;
+out_ddp_pools:
+ ixgbe_fcoe_ddp_pools_free(fcoe);
}
/**
@@ -704,18 +750,17 @@ void ixgbe_cleanup_fcoe(struct ixgbe_adapter *adapter)
int i;
struct ixgbe_fcoe *fcoe = &adapter->fcoe;
- /* release ddp resource */
- if (fcoe->pool) {
- for (i = 0; i < IXGBE_FCOE_DDP_MAX; i++)
- ixgbe_fcoe_ddp_put(adapter->netdev, i);
- dma_unmap_single(&adapter->pdev->dev,
- fcoe->extra_ddp_buffer_dma,
- IXGBE_FCBUFF_MIN,
- DMA_FROM_DEVICE);
- kfree(fcoe->extra_ddp_buffer);
- pci_pool_destroy(fcoe->pool);
- fcoe->pool = NULL;
- }
+ if (!fcoe->pool)
+ return;
+
+ for (i = 0; i < IXGBE_FCOE_DDP_MAX; i++)
+ ixgbe_fcoe_ddp_put(adapter->netdev, i);
+ dma_unmap_single(&adapter->pdev->dev,
+ fcoe->extra_ddp_buffer_dma,
+ IXGBE_FCBUFF_MIN,
+ DMA_FROM_DEVICE);
+ kfree(fcoe->extra_ddp_buffer);
+ ixgbe_fcoe_ddp_pools_free(fcoe);
}
/**
diff --git a/drivers/net/ixgbe/ixgbe_fcoe.h b/drivers/net/ixgbe/ixgbe_fcoe.h
index 5a650a4..d876e7a 100644
--- a/drivers/net/ixgbe/ixgbe_fcoe.h
+++ b/drivers/net/ixgbe/ixgbe_fcoe.h
@@ -62,20 +62,21 @@ struct ixgbe_fcoe_ddp {
struct scatterlist *sgl;
dma_addr_t udp;
u64 *udl;
+ struct pci_pool *pool;
};
struct ixgbe_fcoe {
-#ifdef CONFIG_IXGBE_DCB
- u8 tc;
- u8 up;
-#endif
- unsigned long mode;
+ struct pci_pool **pool;
atomic_t refcnt;
spinlock_t lock;
- struct pci_pool *pool;
struct ixgbe_fcoe_ddp ddp[IXGBE_FCOE_DDP_MAX];
unsigned char *extra_ddp_buffer;
dma_addr_t extra_ddp_buffer_dma;
+ unsigned long mode;
+#ifdef CONFIG_IXGBE_DCB
+ u8 tc;
+ u8 up;
+#endif
};
#endif /* _IXGBE_FCOE_H */
--
1.7.5.2
* [net-next 13/13] ixgbe: use per NUMA node lock for FCoE DDP
2011-06-11 3:02 [net-next 00/13][pull request] Intel Wired LAN Driver Update Jeff Kirsher
` (11 preceding siblings ...)
2011-06-11 3:02 ` [net-next 12/13] ixgbe: setup per CPU PCI pool for FCoE DDP Jeff Kirsher
@ 2011-06-11 3:02 ` Jeff Kirsher
2011-06-11 5:18 ` Eric Dumazet
12 siblings, 1 reply; 19+ messages in thread
From: Jeff Kirsher @ 2011-06-11 3:02 UTC (permalink / raw)
To: davem; +Cc: Vasu Dev, netdev, gospo, Jeff Kirsher
From: Vasu Dev <vasu.dev@intel.com>
Adds a per-NUMA-node lock that each CPU passes through before contending
for the global DDP fcoe->lock, to reduce lock contention across NUMA
nodes during DDP setup.
Allocates and initializes the per-node locks in the added
ixgbe_fcoe_lock_init, then has the current CPU acquire its
numa_node_id()-based node lock before taking the global fcoe->lock.
Each node lock is allocated from its own NUMA node using kzalloc_node.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
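The resulting acquisition order is strictly two-level and is released in
reverse; condensed from the hunks below:
	spin_lock(fcoe->node_lock[numa_node_id()]);	/* node-local gate */
	spin_lock_bh(&fcoe->lock);			/* global DDP lock */
	/* ... program the DDP filter/DMA registers ... */
	spin_unlock_bh(&fcoe->lock);
	spin_unlock(fcoe->node_lock[numa_node_id()]);
Note that numa_node_id() is evaluated separately at lock and unlock
time; this relies on the task not migrating to another node between the
two calls.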
drivers/net/ixgbe/ixgbe_fcoe.c | 50 ++++++++++++++++++++++++++++++++++++++-
drivers/net/ixgbe/ixgbe_fcoe.h | 1 +
2 files changed, 49 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_fcoe.c b/drivers/net/ixgbe/ixgbe_fcoe.c
index f5f39ed..aadff4f 100644
--- a/drivers/net/ixgbe/ixgbe_fcoe.c
+++ b/drivers/net/ixgbe/ixgbe_fcoe.c
@@ -109,6 +109,7 @@ int ixgbe_fcoe_ddp_put(struct net_device *netdev, u16 xid)
len = ddp->len;
/* if there an error, force to invalidate ddp context */
if (ddp->err) {
+ spin_lock(fcoe->node_lock[numa_node_id()]);
spin_lock_bh(&fcoe->lock);
IXGBE_WRITE_REG(&adapter->hw, IXGBE_FCFLT, 0);
IXGBE_WRITE_REG(&adapter->hw, IXGBE_FCFLTRW,
@@ -122,6 +123,7 @@ int ixgbe_fcoe_ddp_put(struct net_device *netdev, u16 xid)
(xid | IXGBE_FCDMARW_RE));
fcbuff = IXGBE_READ_REG(&adapter->hw, IXGBE_FCBUFF);
spin_unlock_bh(&fcoe->lock);
+ spin_unlock(fcoe->node_lock[numa_node_id()]);
if (fcbuff & IXGBE_FCBUFF_VALID)
udelay(100);
}
@@ -294,6 +296,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
/* program DMA context */
hw = &adapter->hw;
+ spin_lock(fcoe->node_lock[numa_node_id()]);
spin_lock_bh(&fcoe->lock);
/* turn on last frame indication for target mode as FCP_RSPtarget is
@@ -315,6 +318,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
IXGBE_WRITE_REG(hw, IXGBE_FCFLTRW, fcfltrw);
spin_unlock_bh(&fcoe->lock);
+ spin_unlock(fcoe->node_lock[numa_node_id()]);
return 1;
@@ -634,6 +638,42 @@ static void ixgbe_fcoe_ddp_pools_alloc(struct ixgbe_adapter *adapter)
}
}
+static void ixgbe_fcoe_locks_free(struct ixgbe_fcoe *fcoe)
+{
+ int node;
+
+ if (!fcoe->node_lock)
+ return;
+
+ for_each_node_with_cpus(node)
+ kfree(fcoe->node_lock[node]);
+
+ kfree(fcoe->node_lock);
+ fcoe->node_lock = NULL;
+}
+
+static void ixgbe_fcoe_lock_init(struct ixgbe_fcoe *fcoe)
+{
+ int node;
+ spinlock_t *node_lock;
+
+ fcoe->node_lock = kzalloc(sizeof(node_lock) * num_possible_nodes(),
+ GFP_KERNEL);
+ if (!fcoe->node_lock)
+ return;
+
+ for_each_node_with_cpus(node) {
+ node_lock = kzalloc_node(sizeof(*node_lock) , GFP_KERNEL, node);
+ if (!node_lock) {
+ ixgbe_fcoe_locks_free(fcoe);
+ return;
+ }
+ spin_lock_init(node_lock);
+ fcoe->node_lock[node] = node_lock;
+ }
+ spin_lock_init(&fcoe->lock);
+}
+
/**
* ixgbe_configure_fcoe - configures registers for fcoe at start
* @adapter: ptr to ixgbe adapter
@@ -654,12 +694,16 @@ void ixgbe_configure_fcoe(struct ixgbe_adapter *adapter)
#endif
if (!fcoe->pool) {
- spin_lock_init(&fcoe->lock);
+ ixgbe_fcoe_lock_init(fcoe);
+ if (!fcoe->node_lock) {
+ e_err(drv, "failed to initialize fcoe locks\n");
+ return;
+ }
ixgbe_fcoe_ddp_pools_alloc(adapter);
if (!fcoe->pool) {
e_err(drv, "failed to alloc percpu fcoe DDP pools\n");
- return;
+ goto out_locks_free;
}
/* Extra buffer to be shared by all DDPs for HW work around */
@@ -735,6 +779,8 @@ out_extra_ddp_buffer:
kfree(fcoe->extra_ddp_buffer);
out_ddp_pools:
ixgbe_fcoe_ddp_pools_free(fcoe);
+out_locks_free:
+ ixgbe_fcoe_locks_free(fcoe);
}
/**
diff --git a/drivers/net/ixgbe/ixgbe_fcoe.h b/drivers/net/ixgbe/ixgbe_fcoe.h
index d876e7a..8618892 100644
--- a/drivers/net/ixgbe/ixgbe_fcoe.h
+++ b/drivers/net/ixgbe/ixgbe_fcoe.h
@@ -69,6 +69,7 @@ struct ixgbe_fcoe {
struct pci_pool **pool;
atomic_t refcnt;
spinlock_t lock;
+ struct spinlock **node_lock;
struct ixgbe_fcoe_ddp ddp[IXGBE_FCOE_DDP_MAX];
unsigned char *extra_ddp_buffer;
dma_addr_t extra_ddp_buffer_dma;
--
1.7.5.2
* Re: [net-next 13/13] ixgbe: use per NUMA node lock for FCoE DDP
2011-06-11 3:02 ` [net-next 13/13] ixgbe: use per NUMA node lock " Jeff Kirsher
@ 2011-06-11 5:18 ` Eric Dumazet
2011-06-11 5:42 ` Eric Dumazet
2011-06-14 0:02 ` Vasu Dev
0 siblings, 2 replies; 19+ messages in thread
From: Eric Dumazet @ 2011-06-11 5:18 UTC (permalink / raw)
To: Jeff Kirsher; +Cc: davem, Vasu Dev, netdev, gospo
On Friday, June 10, 2011 at 20:02 -0700, Jeff Kirsher wrote:
> From: Vasu Dev <vasu.dev@intel.com>
>
> Adds a per-NUMA-node lock that each CPU passes through before contending
> for the global DDP fcoe->lock, to reduce lock contention across NUMA
> nodes during DDP setup.
>
> Allocates and initializes the per-node locks in the added
> ixgbe_fcoe_lock_init, then has the current CPU acquire its
> numa_node_id()-based node lock before taking the global fcoe->lock.
>
> Each node lock is allocated from its own NUMA node using kzalloc_node.
>
> Signed-off-by: Vasu Dev <vasu.dev@intel.com>
> Tested-by: Ross Brattain <ross.b.brattain@intel.com>
> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> ---
> drivers/net/ixgbe/ixgbe_fcoe.c | 50 ++++++++++++++++++++++++++++++++++++++-
> drivers/net/ixgbe/ixgbe_fcoe.h | 1 +
> 2 files changed, 49 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ixgbe/ixgbe_fcoe.c b/drivers/net/ixgbe/ixgbe_fcoe.c
> index f5f39ed..aadff4f 100644
> --- a/drivers/net/ixgbe/ixgbe_fcoe.c
> +++ b/drivers/net/ixgbe/ixgbe_fcoe.c
> @@ -109,6 +109,7 @@ int ixgbe_fcoe_ddp_put(struct net_device *netdev, u16 xid)
> len = ddp->len;
> /* if there an error, force to invalidate ddp context */
> if (ddp->err) {
> + spin_lock(fcoe->node_lock[numa_node_id()]);
> spin_lock_bh(&fcoe->lock);
> IXGBE_WRITE_REG(&adapter->hw, IXGBE_FCFLT, 0);
> IXGBE_WRITE_REG(&adapter->hw, IXGBE_FCFLTRW,
> @@ -122,6 +123,7 @@ int ixgbe_fcoe_ddp_put(struct net_device *netdev, u16 xid)
> (xid | IXGBE_FCDMARW_RE));
> fcbuff = IXGBE_READ_REG(&adapter->hw, IXGBE_FCBUFF);
> spin_unlock_bh(&fcoe->lock);
> + spin_unlock(fcoe->node_lock[numa_node_id()]);
> if (fcbuff & IXGBE_FCBUFF_VALID)
> udelay(100);
> }
> @@ -294,6 +296,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
>
> /* program DMA context */
> hw = &adapter->hw;
> + spin_lock(fcoe->node_lock[numa_node_id()]);
> spin_lock_bh(&fcoe->lock);
>
> /* turn on last frame indication for target mode as FCP_RSPtarget is
> @@ -315,6 +318,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
> IXGBE_WRITE_REG(hw, IXGBE_FCFLTRW, fcfltrw);
>
> spin_unlock_bh(&fcoe->lock);
> + spin_unlock(fcoe->node_lock[numa_node_id()]);
>
> return 1;
>
> @@ -634,6 +638,42 @@ static void ixgbe_fcoe_ddp_pools_alloc(struct ixgbe_adapter *adapter)
> }
> }
>
> +static void ixgbe_fcoe_locks_free(struct ixgbe_fcoe *fcoe)
> +{
> + int node;
> +
> + if (!fcoe->node_lock)
> + return;
> +
> + for_each_node_with_cpus(node)
> + kfree(fcoe->node_lock[node]);
> +
> + kfree(fcoe->node_lock);
> + fcoe->node_lock = NULL;
> +}
> +
> +static void ixgbe_fcoe_lock_init(struct ixgbe_fcoe *fcoe)
> +{
> + int node;
> + spinlock_t *node_lock;
> +
> + fcoe->node_lock = kzalloc(sizeof(node_lock) * num_possible_nodes(),
> + GFP_KERNEL);
Hmm...
1) Think of what happens if some machine has 3 possible nodes : 0, 2, 3
-> You should use nr_node_ids instead of num_possible_nodes()
2) Make sure this block can't have false sharing: allocate at least a
full cache line. On a typical 2-node machine, you currently allocate
16 bytes of memory, and this small block could share a contended cache
line.
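A sketch of an allocation that would satisfy both points (this variant
is illustrative, not code from the thread):
	/* nr_node_ids covers sparse node numbering such as 0, 2, 3 */
	fcoe->node_lock = kzalloc(sizeof(spinlock_t *) * nr_node_ids,
				  GFP_KERNEL);
	...
	for_each_node_with_cpus(node) {
		/* pad each lock to a full cache line so two nodes' locks
		 * never share (and bounce) the same line */
		node_lock = kzalloc_node(max_t(size_t, sizeof(*node_lock),
					       L1_CACHE_BYTES),
					 GFP_KERNEL, node);
		...
	}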
> + if (!fcoe->node_lock)
> + return;
> +
> + for_each_node_with_cpus(node) {
> + node_lock = kzalloc_node(sizeof(*node_lock) , GFP_KERNEL, node);
> + if (!node_lock) {
> + ixgbe_fcoe_locks_free(fcoe);
> + return;
> + }
> + spin_lock_init(node_lock);
> + fcoe->node_lock[node] = node_lock;
> + }
> + spin_lock_init(&fcoe->lock);
> +}
> +
...
>
> /**
> diff --git a/drivers/net/ixgbe/ixgbe_fcoe.h b/drivers/net/ixgbe/ixgbe_fcoe.h
> index d876e7a..8618892 100644
> --- a/drivers/net/ixgbe/ixgbe_fcoe.h
> +++ b/drivers/net/ixgbe/ixgbe_fcoe.h
> @@ -69,6 +69,7 @@ struct ixgbe_fcoe {
> struct pci_pool **pool;
> atomic_t refcnt;
> spinlock_t lock;
> + struct spinlock **node_lock;
Won't this read_mostly pointer sit in an often-modified cache line?
> struct ixgbe_fcoe_ddp ddp[IXGBE_FCOE_DDP_MAX];
> unsigned char *extra_ddp_buffer;
> dma_addr_t extra_ddp_buffer_dma;
* Re: [net-next 13/13] ixgbe: use per NUMA node lock for FCoE DDP
2011-06-11 5:18 ` Eric Dumazet
@ 2011-06-11 5:42 ` Eric Dumazet
2011-06-12 2:00 ` David Miller
2011-06-13 23:59 ` Vasu Dev
2011-06-14 0:02 ` Vasu Dev
1 sibling, 2 replies; 19+ messages in thread
From: Eric Dumazet @ 2011-06-11 5:42 UTC (permalink / raw)
To: Jeff Kirsher; +Cc: davem, Vasu Dev, netdev, gospo
On Saturday, June 11, 2011 at 07:18 +0200, Eric Dumazet wrote:
> > /**
> > diff --git a/drivers/net/ixgbe/ixgbe_fcoe.h b/drivers/net/ixgbe/ixgbe_fcoe.h
> > index d876e7a..8618892 100644
> > --- a/drivers/net/ixgbe/ixgbe_fcoe.h
> > +++ b/drivers/net/ixgbe/ixgbe_fcoe.h
> > @@ -69,6 +69,7 @@ struct ixgbe_fcoe {
> > struct pci_pool **pool;
> > atomic_t refcnt;
> > spinlock_t lock;
> > + struct spinlock **node_lock;
>
> Won't this read_mostly pointer sit in an often-modified cache line?
>
> > struct ixgbe_fcoe_ddp ddp[IXGBE_FCOE_DDP_MAX];
> > unsigned char *extra_ddp_buffer;
> > dma_addr_t extra_ddp_buffer_dma;
>
This patch seems overkill to me; have you tried the simpler way I
did in commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527
(net: add additional lock to qdisc to increase throughput )
(remember you must place ->busylock in a separate cache line, to not
slow down the two cpus that have access to ->lock)
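For reference, the gist of the busylock scheme in that commit: a second
lock that only already-contended senders queue on, so at most one of
them at a time competes with the owner for the real lock (condensed;
root_lock here is qdisc_lock(q)):
	bool contended = qdisc_is_running(q);

	if (unlikely(contended))
		spin_lock(&q->busylock);
	spin_lock(root_lock);
	/* ... enqueue skb / run the qdisc ... */
	spin_unlock(root_lock);
	if (unlikely(contended))
		spin_unlock(&q->busylock);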
struct ixgbe_fcoe could probably be more carefully reordered to lower
false sharing
I kindly ask you guys to provide actual perf numbers between
1) before any patch
2) After your multilevel per numanode locks
3) A more simple way (my suggestion of adding a single 'busylock')
Thanks
* Re: [net-next 13/13] ixgbe: use per NUMA node lock for FCoE DDP
2011-06-11 5:42 ` Eric Dumazet
@ 2011-06-12 2:00 ` David Miller
2011-06-13 23:59 ` Vasu Dev
1 sibling, 0 replies; 19+ messages in thread
From: David Miller @ 2011-06-12 2:00 UTC (permalink / raw)
To: eric.dumazet; +Cc: jeffrey.t.kirsher, vasu.dev, netdev, gospo
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Sat, 11 Jun 2011 07:42:11 +0200
> This patch seems overkill to me; have you tried the simpler way I
> did in commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527
> (net: add additional lock to qdisc to increase throughput )
>
> (remember you must place ->busylock in a separate cache line, to not
> slow down the two cpus that have access to ->lock)
>
> struct ixgbe_fcoe could probably be more carefully reordered to lower
> false sharing
>
> I kindly ask you guys to provide actual perf numbers between
>
> 1) before any patch
> 2) After your multilevel per numanode locks
> 3) A more simple way (my suggestion of adding a single 'busylock')
Jeff, please sort out these issues with Eric and resend your pull
request once things are resolved.
Thanks!
* Re: [net-next 13/13] ixgbe: use per NUMA node lock for FCoE DDP
2011-06-11 5:42 ` Eric Dumazet
2011-06-12 2:00 ` David Miller
@ 2011-06-13 23:59 ` Vasu Dev
1 sibling, 0 replies; 19+ messages in thread
From: Vasu Dev @ 2011-06-13 23:59 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Jeff Kirsher, davem, Vasu Dev, netdev, gospo
On Sat, 2011-06-11 at 07:42 +0200, Eric Dumazet wrote:
> On Saturday, June 11, 2011 at 07:18 +0200, Eric Dumazet wrote:
> > > /**
> > > diff --git a/drivers/net/ixgbe/ixgbe_fcoe.h b/drivers/net/ixgbe/ixgbe_fcoe.h
> > > index d876e7a..8618892 100644
> > > --- a/drivers/net/ixgbe/ixgbe_fcoe.h
> > > +++ b/drivers/net/ixgbe/ixgbe_fcoe.h
> > > @@ -69,6 +69,7 @@ struct ixgbe_fcoe {
> > > struct pci_pool **pool;
> > > atomic_t refcnt;
> > > spinlock_t lock;
> > > + struct spinlock **node_lock;
> >
> > Won't this read_mostly pointer sit in an often-modified cache line?
> >
> > > struct ixgbe_fcoe_ddp ddp[IXGBE_FCOE_DDP_MAX];
> > > unsigned char *extra_ddp_buffer;
> > > dma_addr_t extra_ddp_buffer_dma;
> >
>
> This patch seems overkill to me; have you tried the simpler way I
> did in commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527
> (net: add additional lock to qdisc to increase throughput )
>
Conceptually this patch does the same as the patch you referred to. In
your case the added Qdisc busylock worked fine with multiple TX
queues/Qdiscs, but for FCoE a single ixgbe_fcoe instance is used, so
this patch needed to set up per-node locks analogous to the Qdisc
busylock. That was the only additional code here; the rest is similar.
> (remember you must place ->busylock in a separate cache line, to not
> slow down the two cpus that have access to ->lock)
>
> struct ixgbe_fcoe could probably be more carefully reordered to lower
> false sharing
>
I did optimize ixgbe_fcoe with pahole when adding node_lock, so as not
to step on any frequently modified cache line, besides also reducing
holes; but moving it farther into the struct on a new cacheline boundary
is even better, and I'll do that in an updated patch.
> I kindly ask you guys to provide actual perf numbers between
>
> 1) before any patch
> 2) After your multilevel per numanode locks
> 3) A more simple way (my suggestion of adding a single 'busylock')
>
I'm retracting this patch for now; for an updated patch I'll get numbers
for just this change. Earlier I had numbers w/ and w/o the three
DDP-related improvement patches in this series, and those showed a net
26% IOPS increase for 512-byte reads, but that gain is mainly from the
other patch adding a per-CPU PCI pool. The two-socket systems had about
the same IOPS w/ or w/o just this patch, so maybe this will work better
with more sockets; I'll respin this patch only if I see a justifiable
gain from just this patch.
Thanks
Vasu
> Thanks
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: [net-next 13/13] ixgbe: use per NUMA node lock for FCoE DDP
2011-06-11 5:18 ` Eric Dumazet
2011-06-11 5:42 ` Eric Dumazet
@ 2011-06-14 0:02 ` Vasu Dev
1 sibling, 0 replies; 19+ messages in thread
From: Vasu Dev @ 2011-06-14 0:02 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Jeff Kirsher, davem, Vasu Dev, netdev, gospo
On Sat, 2011-06-11 at 07:18 +0200, Eric Dumazet wrote:
> > +static void ixgbe_fcoe_lock_init(struct ixgbe_fcoe *fcoe)
> > +{
> > + int node;
> > + spinlock_t *node_lock;
> > +
> > + fcoe->node_lock = kzalloc(sizeof(node_lock) *
> num_possible_nodes(),
> > + GFP_KERNEL);
>
> Hmm...
>
> 1) Think of what happens if some machine has 3 possible nodes : 0, 2,
> 3
>
> -> You should use nr_node_ids instead of num_possible_nodes()
>
Just curious, won't these both return the same values? In any case, use
of nr_node_ids looks better as it is already set.
> 2) Make sure this block can't have false sharing: allocate at least a
> full cache line. On a typical 2-node machine, you currently allocate
> 16 bytes of memory, and this small block could share a contended cache
> line.
Yeap good point. Thanks
>
>
> > + if (!fcoe->node_lock)