* [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07
@ 2014-11-07 8:57 Jeff Kirsher
2014-11-07 8:57 ` [net-next 1/9] i40e: poll firmware slower Jeff Kirsher
` (8 more replies)
0 siblings, 9 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, nhorman, sassmann, jogreene, john.ronciak
This series contains updates to i40e, i40evf and ixgbe.
Kamil updated the i40e and i40evf drivers to poll the firmware more slowly,
since we were polling faster than the firmware could respond.
Shannon updates i40e to add a check to keep the service_task from
running the periodic tasks more than once per second, while still
allowing quick action to service the events.
Jesse cleans up the throttle rate code by fixing the minimum interrupt
throttle rate and removing some unused defines.
Mitch makes the early init admin queue message receive code more robust
by handling messages in a loop and ignoring those that we are not
interested in. This also gets rid of some scary log messages that
really do not indicate a problem.
Don provides several ixgbe patches. The first fixes an X540 completion
timeout issue, where topologies that include a few levels of PCIe switching
can run into an unexpected completion error. He also cleans up the
functionality in ixgbe_ndo_set_vf_vlan() in preparation for future work,
and adds support for the new X550 MACs to the driver.
The following are changes since commit 4e84b496fd2a226883920e0e0de4ed3f94898adf:
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Don Skidmore (4):
ixgbe: fix X540 Completion timeout
ixgbe: cleanup ixgbe_ndo_set_vf_vlan
ixgbe: cleanup move setting PFQDE.HIDE_VLAN to support function.
ixgbe: Add new support for X550 MAC's
Jesse Brandeburg (1):
i40e: clean up throttle rate code
Kamil Krawczyk (1):
i40e: poll firmware slower
Mitch Williams (2):
i40evf: make early init processing more robust
i40evf: don't use more queues than CPUs
Shannon Nelson (1):
i40e: don't do link_status or stats collection on every ARQ
drivers/net/ethernet/intel/i40e/i40e.h | 3 +-
drivers/net/ethernet/intel/i40e/i40e_adminq.c | 5 +-
drivers/net/ethernet/intel/i40e/i40e_adminq.h | 2 +-
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 11 +-
drivers/net/ethernet/intel/i40e/i40e_main.c | 14 ++-
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 5 +-
drivers/net/ethernet/intel/i40evf/i40e_adminq.c | 5 +-
drivers/net/ethernet/intel/i40evf/i40e_adminq.h | 2 +-
drivers/net/ethernet/intel/i40evf/i40e_txrx.h | 5 +-
drivers/net/ethernet/intel/i40evf/i40evf.h | 1 +
drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c | 16 +--
drivers/net/ethernet/intel/i40evf/i40evf_main.c | 50 ++++----
.../net/ethernet/intel/i40evf/i40evf_virtchnl.c | 65 +++++-----
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 51 ++++++--
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c | 8 ++
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c | 37 +++++-
drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 2 +
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 90 ++++++++++++--
drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c | 4 +
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | 64 +++++-----
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 131 ++++++++++++++-------
drivers/net/ethernet/intel/ixgbe/ixgbe_type.h | 45 +++++--
23 files changed, 422 insertions(+), 195 deletions(-)
--
1.9.3
* [net-next 1/9] i40e: poll firmware slower
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
2014-11-07 9:40 ` David Laight
2014-11-07 13:29 ` Or Gerlitz
2014-11-07 8:57 ` [net-next 2/9] i40e: don't do link_status or stats collection on every ARQ Jeff Kirsher
` (7 subsequent siblings)
8 siblings, 2 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem; +Cc: Kamil Krawczyk, netdev, nhorman, sassmann, jogreene, Jeff Kirsher
From: Kamil Krawczyk <kamil.krawczyk@intel.com>
The code was polling the firmware tail register for completion every
10 microseconds, which is way faster than the firmware can respond.
This changes the poll interval to 1ms, which reduces polling CPU
utilization and the number of times we loop.
The maximum delay is still 100ms.
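As a rough sketch of the resulting wait (the hunks below are authoritative;
the wrapper function name here is made up purely for illustration):

    /* illustrative only -- mirrors the new wait loop in i40e_asq_send_command() */
    static void example_wait_for_asq_done(struct i40e_hw *hw)
    {
            u32 total_delay = 0;    /* now counted in milliseconds */

            do {
                    if (i40e_asq_done(hw))
                            break;
                    /* sleep ~1ms per pass instead of busy-waiting 10us */
                    usleep_range(1000, 2000);
                    total_delay++;
            } while (total_delay < hw->aq.asq_cmd_timeout); /* 100 msecs */
    }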
Change-ID: I4bbfa6b66d802890baf8b4154061e55942b90958
Signed-off-by: Kamil Krawczyk <kamil.krawczyk@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_adminq.c | 5 ++---
drivers/net/ethernet/intel/i40e/i40e_adminq.h | 2 +-
drivers/net/ethernet/intel/i40evf/i40e_adminq.c | 5 ++---
drivers/net/ethernet/intel/i40evf/i40e_adminq.h | 2 +-
4 files changed, 6 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
index 72f5d25..057b7bf 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
@@ -853,7 +853,6 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
*/
if (!details->async && !details->postpone) {
u32 total_delay = 0;
- u32 delay_len = 10;
do {
/* AQ designers suggest use of head for better
@@ -862,8 +861,8 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
if (i40e_asq_done(hw))
break;
/* ugh! delay while spin_lock */
- udelay(delay_len);
- total_delay += delay_len;
+ usleep_range(1000, 2000);
+ total_delay++;
} while (total_delay < hw->aq.asq_cmd_timeout);
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.h b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
index ba38a89..df0bd09 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
@@ -141,7 +141,7 @@ static inline int i40e_aq_rc_to_posix(u16 aq_rc)
/* general information */
#define I40E_AQ_LARGE_BUF 512
-#define I40E_ASQ_CMD_TIMEOUT 100000 /* usecs */
+#define I40E_ASQ_CMD_TIMEOUT 100 /* msecs */
void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
u16 opcode);
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq.c b/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
index f206be9..25c846b 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
@@ -801,7 +801,6 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
*/
if (!details->async && !details->postpone) {
u32 total_delay = 0;
- u32 delay_len = 10;
do {
/* AQ designers suggest use of head for better
@@ -810,8 +809,8 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
if (i40evf_asq_done(hw))
break;
/* ugh! delay while spin_lock */
- udelay(delay_len);
- total_delay += delay_len;
+ usleep_range(1000, 2000);
+ total_delay++;
} while (total_delay < hw->aq.asq_cmd_timeout);
}
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
index 91a5c5b..f40cfac 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
@@ -141,7 +141,7 @@ static inline int i40e_aq_rc_to_posix(u16 aq_rc)
/* general information */
#define I40E_AQ_LARGE_BUF 512
-#define I40E_ASQ_CMD_TIMEOUT 100000 /* usecs */
+#define I40E_ASQ_CMD_TIMEOUT 100 /* msecs */
void i40evf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
u16 opcode);
--
1.9.3
* [net-next 2/9] i40e: don't do link_status or stats collection on every ARQ
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
2014-11-07 8:57 ` [net-next 1/9] i40e: poll firmware slower Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
2014-11-07 8:57 ` [net-next 3/9] i40e: clean up throttle rate code Jeff Kirsher
` (6 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem
Cc: Shannon Nelson, netdev, nhorman, sassmann, jogreene, Patrick Lu,
Jeff Kirsher
From: Shannon Nelson <shannon.nelson@intel.com>
The ARQ events cause a service_task execution, and we do a link_status
check and full stats gathering for each service_task. However, when
there are a lot of ARQ events, such as when doing an NVM update, we end up
doing tens if not hundreds of these per second, thereby heavily abusing the
PCI bus and especially the firmware. This patch adds a check to keep the
service_task from running these periodic tasks more than once per second,
while still allowing quick action to service the events.
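As a sketch of the mechanism (the hunks below are authoritative; the body is
abbreviated here), the gate is just a jiffies comparison at the top of
i40e_watchdog_subtask():

    /* illustrative only */
    if (time_before(jiffies, pf->service_timer_previous +
                             pf->service_timer_period))
            return;         /* ran within the last service_timer_period */
    pf->service_timer_previous = jiffies;
    /* ... link check and stats gathering follow ... */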
Change-ID: Iec7670c37bfae9791c43fec26df48aea7f70b33e
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e.h | 3 ++-
drivers/net/ethernet/intel/i40e/i40e_main.c | 14 ++++++++++----
2 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index f1e33f8..b7a807b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -269,7 +269,8 @@ struct i40e_pf {
u16 msg_enable;
char misc_int_name[IFNAMSIZ + 9];
u16 adminq_work_limit; /* num of admin receive queue desc to process */
- int service_timer_period;
+ unsigned long service_timer_period;
+ unsigned long service_timer_previous;
struct timer_list service_timer;
struct work_struct service_task;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 1a98e23..de66463 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -5449,7 +5449,7 @@ static void i40e_check_hang_subtask(struct i40e_pf *pf)
}
/**
- * i40e_watchdog_subtask - Check and bring link up
+ * i40e_watchdog_subtask - periodic checks not using event driven response
* @pf: board private structure
**/
static void i40e_watchdog_subtask(struct i40e_pf *pf)
@@ -5461,6 +5461,15 @@ static void i40e_watchdog_subtask(struct i40e_pf *pf)
test_bit(__I40E_CONFIG_BUSY, &pf->state))
return;
+ /* make sure we don't do these things too often */
+ if (time_before(jiffies, (pf->service_timer_previous +
+ pf->service_timer_period)))
+ return;
+ pf->service_timer_previous = jiffies;
+
+ i40e_check_hang_subtask(pf);
+ i40e_link_event(pf);
+
/* Update the stats for active netdevs so the network stack
* can look at updated numbers whenever it cares to
*/
@@ -6325,15 +6334,12 @@ static void i40e_service_task(struct work_struct *work)
i40e_vc_process_vflr_event(pf);
i40e_watchdog_subtask(pf);
i40e_fdir_reinit_subtask(pf);
- i40e_check_hang_subtask(pf);
i40e_sync_filters_subtask(pf);
#ifdef CONFIG_I40E_VXLAN
i40e_sync_vxlan_filters_subtask(pf);
#endif
i40e_clean_adminq_subtask(pf);
- i40e_link_event(pf);
-
i40e_service_event_complete(pf);
/* If the tasks have taken longer than one timer cycle or there
--
1.9.3
* [net-next 3/9] i40e: clean up throttle rate code
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
2014-11-07 8:57 ` [net-next 1/9] i40e: poll firmware slower Jeff Kirsher
2014-11-07 8:57 ` [net-next 2/9] i40e: don't do link_status or stats collection on every ARQ Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
2014-11-07 8:57 ` [net-next 4/9] i40evf: make early init processing more robust Jeff Kirsher
` (5 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem
Cc: Jesse Brandeburg, netdev, nhorman, sassmann, jogreene, Patrick Lu,
Jeff Kirsher
From: Jesse Brandeburg <jesse.brandeburg@intel.com>
The interrupt throttle rate minimum is actually 2us, so fix that define
and, while we are there, remove some unused defines. Change some strings
in the function so they wrap less and express the correct limits.
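For reference (illustrative only, not part of the patch), the ITR register
counts in 2-usec units, which is why the new 2us minimum is register value
0x0001 and the 8160us maximum is 0x0FF0:

    /* hypothetical helper, just to show the 2-usec register resolution */
    static inline u16 example_usecs_to_itr_reg(u16 usecs)
    {
            return usecs / 2;       /* 8160 usecs -> 0x0FF0 (I40E_MAX_ITR) */
    }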
Change-ID: I96829bbc77935e0b57c6f0fc1439fb4152b2960a
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 11 ++++-------
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 5 +----
drivers/net/ethernet/intel/i40evf/i40e_txrx.h | 5 +----
3 files changed, 6 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index b6e745f..afad5aa 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -1575,11 +1575,9 @@ static int i40e_set_coalesce(struct net_device *netdev,
} else if (ec->rx_coalesce_usecs == 0) {
vsi->rx_itr_setting = ec->rx_coalesce_usecs;
if (ec->use_adaptive_rx_coalesce)
- netif_info(pf, drv, netdev,
- "Rx-secs=0, need to disable adaptive-Rx for a complete disable\n");
+ netif_info(pf, drv, netdev, "rx-usecs=0, need to disable adaptive-rx for a complete disable\n");
} else {
- netif_info(pf, drv, netdev,
- "Invalid value, Rx-usecs range is 0, 8-8160\n");
+ netif_info(pf, drv, netdev, "Invalid value, rx-usecs range is 0-8160\n");
return -EINVAL;
}
@@ -1589,11 +1587,10 @@ static int i40e_set_coalesce(struct net_device *netdev,
} else if (ec->tx_coalesce_usecs == 0) {
vsi->tx_itr_setting = ec->tx_coalesce_usecs;
if (ec->use_adaptive_tx_coalesce)
- netif_info(pf, drv, netdev,
- "Tx-secs=0, need to disable adaptive-Tx for a complete disable\n");
+ netif_info(pf, drv, netdev, "tx-usecs=0, need to disable adaptive-tx for a complete disable\n");
} else {
netif_info(pf, drv, netdev,
- "Invalid value, Tx-usecs range is 0, 8-8160\n");
+ "Invalid value, tx-usecs range is 0-8160\n");
return -EINVAL;
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index d7a625a..e60d3ac 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -30,10 +30,7 @@
/* Interrupt Throttling and Rate Limiting Goodies */
#define I40E_MAX_ITR 0x0FF0 /* reg uses 2 usec resolution */
-#define I40E_MIN_ITR 0x0004 /* reg uses 2 usec resolution */
-#define I40E_MAX_IRATE 0x03F
-#define I40E_MIN_IRATE 0x001
-#define I40E_IRATE_USEC_RESOLUTION 4
+#define I40E_MIN_ITR 0x0001 /* reg uses 2 usec resolution */
#define I40E_ITR_100K 0x0005
#define I40E_ITR_20K 0x0019
#define I40E_ITR_8K 0x003E
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
index f6dcf9d..c7f2962 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
@@ -30,10 +30,7 @@
/* Interrupt Throttling and Rate Limiting Goodies */
#define I40E_MAX_ITR 0x0FF0 /* reg uses 2 usec resolution */
-#define I40E_MIN_ITR 0x0004 /* reg uses 2 usec resolution */
-#define I40E_MAX_IRATE 0x03F
-#define I40E_MIN_IRATE 0x001
-#define I40E_IRATE_USEC_RESOLUTION 4
+#define I40E_MIN_ITR 0x0001 /* reg uses 2 usec resolution */
#define I40E_ITR_100K 0x0005
#define I40E_ITR_20K 0x0019
#define I40E_ITR_8K 0x003E
--
1.9.3
* [net-next 4/9] i40evf: make early init processing more robust
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
` (2 preceding siblings ...)
2014-11-07 8:57 ` [net-next 3/9] i40e: clean up throttle rate code Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
2014-11-07 8:57 ` [net-next 5/9] i40evf: don't use more queues than CPUs Jeff Kirsher
` (4 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem
Cc: Mitch Williams, netdev, nhorman, sassmann, jogreene, Patrick Lu,
Jeff Kirsher
From: Mitch Williams <mitch.a.williams@intel.com>
In early init, if we get an unexpected message from the PF (such as link
status), we just kick an error back to the init task, causing it to
restart its state machine and delay initialization.
Make the early init AQ message receive code more robust by handling
messages in a loop, and ignoring those that we aren't interested in.
This also gets rid of some scary log messages that really didn't
indicate a problem.
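A condensed sketch of the new receive loop (the full hunks follow; variable
declarations from the surrounding function are assumed), shown here for the
VERSION handshake:

    /* illustrative only */
    while (1) {
            err = i40evf_clean_arq_element(hw, &event, NULL);
            if (err)                /* AQ empty: bail out and let init retry */
                    goto out_alloc;
            op = (enum i40e_virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
            if (op == I40E_VIRTCHNL_OP_VERSION)     /* the reply we want */
                    break;
            /* anything else (e.g. link status) is silently ignored */
    }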
Change-ID: I620e8c72e49c49c665ef33eeab2425dd10e721cf
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
.../net/ethernet/intel/i40evf/i40evf_virtchnl.c | 59 ++++++++++++----------
1 file changed, 31 insertions(+), 28 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c b/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
index 66d12f5..ff86761 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
@@ -89,6 +89,7 @@ int i40evf_verify_api_ver(struct i40evf_adapter *adapter)
struct i40e_virtchnl_version_info *pf_vvi;
struct i40e_hw *hw = &adapter->hw;
struct i40e_arq_event_info event;
+ enum i40e_virtchnl_ops op;
i40e_status err;
event.msg_size = I40EVF_MAX_AQ_BUF_SIZE;
@@ -98,18 +99,27 @@ int i40evf_verify_api_ver(struct i40evf_adapter *adapter)
goto out;
}
- err = i40evf_clean_arq_element(hw, &event, NULL);
- if (err == I40E_ERR_ADMIN_QUEUE_NO_WORK)
- goto out_alloc;
+ while (1) {
+ err = i40evf_clean_arq_element(hw, &event, NULL);
+ /* When the AQ is empty, i40evf_clean_arq_element will return
+ * nonzero and this loop will terminate.
+ */
+ if (err)
+ goto out_alloc;
+ op =
+ (enum i40e_virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
+ if (op == I40E_VIRTCHNL_OP_VERSION)
+ break;
+ }
+
err = (i40e_status)le32_to_cpu(event.desc.cookie_low);
if (err)
goto out_alloc;
- if ((enum i40e_virtchnl_ops)le32_to_cpu(event.desc.cookie_high) !=
- I40E_VIRTCHNL_OP_VERSION) {
+ if (op != I40E_VIRTCHNL_OP_VERSION) {
dev_info(&adapter->pdev->dev, "Invalid reply type %d from PF\n",
- le32_to_cpu(event.desc.cookie_high));
+ op);
err = -EIO;
goto out_alloc;
}
@@ -153,8 +163,9 @@ int i40evf_get_vf_config(struct i40evf_adapter *adapter)
{
struct i40e_hw *hw = &adapter->hw;
struct i40e_arq_event_info event;
- u16 len;
+ enum i40e_virtchnl_ops op;
i40e_status err;
+ u16 len;
len = sizeof(struct i40e_virtchnl_vf_resource) +
I40E_MAX_VF_VSI * sizeof(struct i40e_virtchnl_vsi_resource);
@@ -165,29 +176,21 @@ int i40evf_get_vf_config(struct i40evf_adapter *adapter)
goto out;
}
- err = i40evf_clean_arq_element(hw, &event, NULL);
- if (err == I40E_ERR_ADMIN_QUEUE_NO_WORK)
- goto out_alloc;
-
- err = (i40e_status)le32_to_cpu(event.desc.cookie_low);
- if (err) {
- dev_err(&adapter->pdev->dev,
- "%s: Error returned from PF, %d, %d\n", __func__,
- le32_to_cpu(event.desc.cookie_high),
- le32_to_cpu(event.desc.cookie_low));
- err = -EIO;
- goto out_alloc;
+ while (1) {
+ event.msg_size = len;
+ /* When the AQ is empty, i40evf_clean_arq_element will return
+ * nonzero and this loop will terminate.
+ */
+ err = i40evf_clean_arq_element(hw, &event, NULL);
+ if (err)
+ goto out_alloc;
+ op =
+ (enum i40e_virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
+ if (op == I40E_VIRTCHNL_OP_GET_VF_RESOURCES)
+ break;
}
- if ((enum i40e_virtchnl_ops)le32_to_cpu(event.desc.cookie_high) !=
- I40E_VIRTCHNL_OP_GET_VF_RESOURCES) {
- dev_err(&adapter->pdev->dev,
- "%s: Invalid response from PF, %d, %d\n", __func__,
- le32_to_cpu(event.desc.cookie_high),
- le32_to_cpu(event.desc.cookie_low));
- err = -EIO;
- goto out_alloc;
- }
+ err = (i40e_status)le32_to_cpu(event.desc.cookie_low);
memcpy(adapter->vf_res, event.msg_buf, min(event.msg_size, len));
i40e_vf_parse_hw_config(hw, adapter->vf_res);
--
1.9.3
* [net-next 5/9] i40evf: don't use more queues than CPUs
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
` (3 preceding siblings ...)
2014-11-07 8:57 ` [net-next 4/9] i40evf: make early init processing more robust Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
2014-11-07 8:57 ` [net-next 6/9] ixgbe: fix X540 Completion timeout Jeff Kirsher
` (3 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem
Cc: Mitch Williams, netdev, nhorman, sassmann, jogreene, Patrick Lu,
Jeff Kirsher
From: Mitch Williams <mitch.a.williams@intel.com>
It's kind of silly to configure and attempt to use a bunch of queue
pairs when you're running on a single (virtual) CPU. Instead of
unconditionally configuring all of the queues that the PF gives us,
clamp the number of queue pairs to the number of CPUs.
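In other words (see the i40evf_init_task() hunk below), the clamp is a
single min_t():

    /* illustrative only */
    adapter->num_active_queues = min_t(int,
                                       adapter->vsi_res->num_queue_pairs,
                                       (int)(num_online_cpus()));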
Change-ID: I321714c9e15072ee76de8f95ab9a81f86ed347d1
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40evf/i40evf.h | 1 +
drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c | 16 +++----
drivers/net/ethernet/intel/i40evf/i40evf_main.c | 50 +++++++++++++---------
.../net/ethernet/intel/i40evf/i40evf_virtchnl.c | 6 +--
4 files changed, 41 insertions(+), 32 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf.h b/drivers/net/ethernet/intel/i40evf/i40evf.h
index 30ef519..1113f8a 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf.h
+++ b/drivers/net/ethernet/intel/i40evf/i40evf.h
@@ -191,6 +191,7 @@ struct i40evf_adapter {
struct i40e_q_vector *q_vector[MAX_MSIX_Q_VECTORS];
struct list_head vlan_filter_list;
char misc_vector_name[IFNAMSIZ + 9];
+ int num_active_queues;
/* TX */
struct i40e_ring *tx_rings[I40E_MAX_VSI_QP];
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c b/drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c
index efee6b2..876411c 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c
@@ -59,7 +59,7 @@ static const struct i40evf_stats i40evf_gstrings_stats[] = {
#define I40EVF_GLOBAL_STATS_LEN ARRAY_SIZE(i40evf_gstrings_stats)
#define I40EVF_QUEUE_STATS_LEN(_dev) \
(((struct i40evf_adapter *) \
- netdev_priv(_dev))->vsi_res->num_queue_pairs \
+ netdev_priv(_dev))->num_active_queues \
* 2 * (sizeof(struct i40e_queue_stats) / sizeof(u64)))
#define I40EVF_STATS_LEN(_dev) \
(I40EVF_GLOBAL_STATS_LEN + I40EVF_QUEUE_STATS_LEN(_dev))
@@ -121,11 +121,11 @@ static void i40evf_get_ethtool_stats(struct net_device *netdev,
p = (char *)adapter + i40evf_gstrings_stats[i].stat_offset;
data[i] = *(u64 *)p;
}
- for (j = 0; j < adapter->vsi_res->num_queue_pairs; j++) {
+ for (j = 0; j < adapter->num_active_queues; j++) {
data[i++] = adapter->tx_rings[j]->stats.packets;
data[i++] = adapter->tx_rings[j]->stats.bytes;
}
- for (j = 0; j < adapter->vsi_res->num_queue_pairs; j++) {
+ for (j = 0; j < adapter->num_active_queues; j++) {
data[i++] = adapter->rx_rings[j]->stats.packets;
data[i++] = adapter->rx_rings[j]->stats.bytes;
}
@@ -151,13 +151,13 @@ static void i40evf_get_strings(struct net_device *netdev, u32 sset, u8 *data)
ETH_GSTRING_LEN);
p += ETH_GSTRING_LEN;
}
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++) {
+ for (i = 0; i < adapter->num_active_queues; i++) {
snprintf(p, ETH_GSTRING_LEN, "tx-%u.packets", i);
p += ETH_GSTRING_LEN;
snprintf(p, ETH_GSTRING_LEN, "tx-%u.bytes", i);
p += ETH_GSTRING_LEN;
}
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++) {
+ for (i = 0; i < adapter->num_active_queues; i++) {
snprintf(p, ETH_GSTRING_LEN, "rx-%u.packets", i);
p += ETH_GSTRING_LEN;
snprintf(p, ETH_GSTRING_LEN, "rx-%u.bytes", i);
@@ -430,7 +430,7 @@ static int i40evf_get_rxnfc(struct net_device *netdev,
switch (cmd->cmd) {
case ETHTOOL_GRXRINGS:
- cmd->data = adapter->vsi_res->num_queue_pairs;
+ cmd->data = adapter->num_active_queues;
ret = 0;
break;
case ETHTOOL_GRXFH:
@@ -598,12 +598,12 @@ static void i40evf_get_channels(struct net_device *netdev,
struct i40evf_adapter *adapter = netdev_priv(netdev);
/* Report maximum channels */
- ch->max_combined = adapter->vsi_res->num_queue_pairs;
+ ch->max_combined = adapter->num_active_queues;
ch->max_other = NONQ_VECS;
ch->other_count = NONQ_VECS;
- ch->combined_count = adapter->vsi_res->num_queue_pairs;
+ ch->combined_count = adapter->num_active_queues;
}
/**
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_main.c b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
index b2f01eb..f0d07ad 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_main.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
@@ -397,8 +397,8 @@ static int i40evf_map_rings_to_vectors(struct i40evf_adapter *adapter)
int q_vectors;
int v_start = 0;
int rxr_idx = 0, txr_idx = 0;
- int rxr_remaining = adapter->vsi_res->num_queue_pairs;
- int txr_remaining = adapter->vsi_res->num_queue_pairs;
+ int rxr_remaining = adapter->num_active_queues;
+ int txr_remaining = adapter->num_active_queues;
int i, j;
int rqpv, tqpv;
int err = 0;
@@ -584,7 +584,7 @@ static void i40evf_configure_tx(struct i40evf_adapter *adapter)
{
struct i40e_hw *hw = &adapter->hw;
int i;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++)
+ for (i = 0; i < adapter->num_active_queues; i++)
adapter->tx_rings[i]->tail = hw->hw_addr + I40E_QTX_TAIL1(i);
}
@@ -629,7 +629,7 @@ static void i40evf_configure_rx(struct i40evf_adapter *adapter)
rx_buf_len = ALIGN(max_frame, 1024);
}
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++) {
+ for (i = 0; i < adapter->num_active_queues; i++) {
adapter->rx_rings[i]->tail = hw->hw_addr + I40E_QRX_TAIL1(i);
adapter->rx_rings[i]->rx_buf_len = rx_buf_len;
}
@@ -918,7 +918,7 @@ static void i40evf_configure(struct i40evf_adapter *adapter)
i40evf_configure_rx(adapter);
adapter->aq_required |= I40EVF_FLAG_AQ_CONFIGURE_QUEUES;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++) {
+ for (i = 0; i < adapter->num_active_queues; i++) {
struct i40e_ring *ring = adapter->rx_rings[i];
i40evf_alloc_rx_buffers(ring, ring->count);
ring->next_to_use = ring->count - 1;
@@ -950,7 +950,7 @@ static void i40evf_clean_all_rx_rings(struct i40evf_adapter *adapter)
{
int i;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++)
+ for (i = 0; i < adapter->num_active_queues; i++)
i40evf_clean_rx_ring(adapter->rx_rings[i]);
}
@@ -962,7 +962,7 @@ static void i40evf_clean_all_tx_rings(struct i40evf_adapter *adapter)
{
int i;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++)
+ for (i = 0; i < adapter->num_active_queues; i++)
i40evf_clean_tx_ring(adapter->tx_rings[i]);
}
@@ -1064,7 +1064,7 @@ static void i40evf_free_queues(struct i40evf_adapter *adapter)
if (!adapter->vsi_res)
return;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++) {
+ for (i = 0; i < adapter->num_active_queues; i++) {
if (adapter->tx_rings[i])
kfree_rcu(adapter->tx_rings[i], rcu);
adapter->tx_rings[i] = NULL;
@@ -1084,7 +1084,7 @@ static int i40evf_alloc_queues(struct i40evf_adapter *adapter)
{
int i;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++) {
+ for (i = 0; i < adapter->num_active_queues; i++) {
struct i40e_ring *tx_ring;
struct i40e_ring *rx_ring;
@@ -1130,7 +1130,7 @@ static int i40evf_set_interrupt_capability(struct i40evf_adapter *adapter)
err = -EIO;
goto out;
}
- pairs = adapter->vsi_res->num_queue_pairs;
+ pairs = adapter->num_active_queues;
/* It's easy to be greedy for MSI-X vectors, but it really
* doesn't do us much good if we have a lot more vectors
@@ -1210,7 +1210,7 @@ static void i40evf_free_q_vectors(struct i40evf_adapter *adapter)
int napi_vectors;
num_q_vectors = adapter->num_msix_vectors - NONQ_VECS;
- napi_vectors = adapter->vsi_res->num_queue_pairs;
+ napi_vectors = adapter->num_active_queues;
for (q_idx = 0; q_idx < num_q_vectors; q_idx++) {
struct i40e_q_vector *q_vector = adapter->q_vector[q_idx];
@@ -1265,8 +1265,8 @@ int i40evf_init_interrupt_scheme(struct i40evf_adapter *adapter)
}
dev_info(&adapter->pdev->dev, "Multiqueue %s: Queue pair count = %u",
- (adapter->vsi_res->num_queue_pairs > 1) ? "Enabled" :
- "Disabled", adapter->vsi_res->num_queue_pairs);
+ (adapter->num_active_queues > 1) ? "Enabled" :
+ "Disabled", adapter->num_active_queues);
return 0;
err_alloc_queues:
@@ -1425,7 +1425,7 @@ static int next_queue(struct i40evf_adapter *adapter, int j)
{
j += 1;
- return j >= adapter->vsi_res->num_queue_pairs ? 0 : j;
+ return j >= adapter->num_active_queues ? 0 : j;
}
/**
@@ -1446,9 +1446,14 @@ static void i40evf_configure_rss(struct i40evf_adapter *adapter)
0xc135cafa, 0x7a6f7e2d, 0xe7102d28, 0x163cd12e,
0x4954b126 };
- /* Hash type is configured by the PF - we just supply the key */
+ /* No RSS for single queue. */
+ if (adapter->num_active_queues == 1) {
+ wr32(hw, I40E_VFQF_HENA(0), 0);
+ wr32(hw, I40E_VFQF_HENA(1), 0);
+ return;
+ }
- /* Fill out hash function seed */
+ /* Hash type is configured by the PF - we just supply the key */
for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
wr32(hw, I40E_VFQF_HKEY(i), seed[i]);
@@ -1458,7 +1463,7 @@ static void i40evf_configure_rss(struct i40evf_adapter *adapter)
wr32(hw, I40E_VFQF_HENA(1), (u32)(hena >> 32));
/* Populate the LUT with max no. of queues in round robin fashion */
- j = adapter->vsi_res->num_queue_pairs;
+ j = adapter->num_active_queues;
for (i = 0; i <= I40E_VFQF_HLUT_MAX_INDEX; i++) {
j = next_queue(adapter, j);
lut = j;
@@ -1703,7 +1708,7 @@ static void i40evf_free_all_tx_resources(struct i40evf_adapter *adapter)
{
int i;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++)
+ for (i = 0; i < adapter->num_active_queues; i++)
if (adapter->tx_rings[i]->desc)
i40evf_free_tx_resources(adapter->tx_rings[i]);
@@ -1723,7 +1728,7 @@ static int i40evf_setup_all_tx_resources(struct i40evf_adapter *adapter)
{
int i, err = 0;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++) {
+ for (i = 0; i < adapter->num_active_queues; i++) {
adapter->tx_rings[i]->count = adapter->tx_desc_count;
err = i40evf_setup_tx_descriptors(adapter->tx_rings[i]);
if (!err)
@@ -1751,7 +1756,7 @@ static int i40evf_setup_all_rx_resources(struct i40evf_adapter *adapter)
{
int i, err = 0;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++) {
+ for (i = 0; i < adapter->num_active_queues; i++) {
adapter->rx_rings[i]->count = adapter->rx_desc_count;
err = i40evf_setup_rx_descriptors(adapter->rx_rings[i]);
if (!err)
@@ -1774,7 +1779,7 @@ static void i40evf_free_all_rx_resources(struct i40evf_adapter *adapter)
{
int i;
- for (i = 0; i < adapter->vsi_res->num_queue_pairs; i++)
+ for (i = 0; i < adapter->num_active_queues; i++)
if (adapter->rx_rings[i]->desc)
i40evf_free_rx_resources(adapter->rx_rings[i]);
}
@@ -2150,6 +2155,9 @@ static void i40evf_init_task(struct work_struct *work)
adapter->watchdog_timer.data = (unsigned long)adapter;
mod_timer(&adapter->watchdog_timer, jiffies + 1);
+ adapter->num_active_queues = min_t(int,
+ adapter->vsi_res->num_queue_pairs,
+ (int)(num_online_cpus()));
adapter->tx_desc_count = I40EVF_DEFAULT_TXD;
adapter->rx_desc_count = I40EVF_DEFAULT_RXD;
err = i40evf_init_interrupt_scheme(adapter);
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c b/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
index ff86761..49bfdb5 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
@@ -210,7 +210,7 @@ void i40evf_configure_queues(struct i40evf_adapter *adapter)
{
struct i40e_virtchnl_vsi_queue_config_info *vqci;
struct i40e_virtchnl_queue_pair_info *vqpi;
- int pairs = adapter->vsi_res->num_queue_pairs;
+ int pairs = adapter->num_active_queues;
int i, len;
if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
@@ -276,7 +276,7 @@ void i40evf_enable_queues(struct i40evf_adapter *adapter)
}
adapter->current_op = I40E_VIRTCHNL_OP_ENABLE_QUEUES;
vqs.vsi_id = adapter->vsi_res->vsi_id;
- vqs.tx_queues = (1 << adapter->vsi_res->num_queue_pairs) - 1;
+ vqs.tx_queues = (1 << adapter->num_active_queues) - 1;
vqs.rx_queues = vqs.tx_queues;
adapter->aq_pending |= I40EVF_FLAG_AQ_ENABLE_QUEUES;
adapter->aq_required &= ~I40EVF_FLAG_AQ_ENABLE_QUEUES;
@@ -302,7 +302,7 @@ void i40evf_disable_queues(struct i40evf_adapter *adapter)
}
adapter->current_op = I40E_VIRTCHNL_OP_DISABLE_QUEUES;
vqs.vsi_id = adapter->vsi_res->vsi_id;
- vqs.tx_queues = (1 << adapter->vsi_res->num_queue_pairs) - 1;
+ vqs.tx_queues = (1 << adapter->num_active_queues) - 1;
vqs.rx_queues = vqs.tx_queues;
adapter->aq_pending |= I40EVF_FLAG_AQ_DISABLE_QUEUES;
adapter->aq_required &= ~I40EVF_FLAG_AQ_DISABLE_QUEUES;
--
1.9.3
* [net-next 6/9] ixgbe: fix X540 Completion timeout
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
` (4 preceding siblings ...)
2014-11-07 8:57 ` [net-next 5/9] i40evf: don't use more queues than CPUs Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
2014-11-07 14:35 ` Sergei Shtylyov
2014-11-07 8:57 ` [net-next 7/9] ixgbe: cleanup ixgbe_ndo_set_vf_vlan Jeff Kirsher
` (2 subsequent siblings)
8 siblings, 1 reply; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem; +Cc: Don Skidmore, netdev, nhorman, sassmann, jogreene, Jeff Kirsher
From: Don Skidmore <donald.c.skidmore@intel.com>
On topologies that include a few levels of PCIe switching, the X540 can run
into an unexpected completion error. We get around this by waiting, after
enabling loopback, long enough for the last Tx Data Fetch to be sent. We
then poll the pending transaction bit to ensure we received the completion.
Only then do we go on to clear the buffers.
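Sketch of the added wait (the hunk below is authoritative; the early-exit
handling is simplified here to plain breaks):

    /* illustrative only -- the wait added to ixgbe_clear_tx_pending() */
    IXGBE_WRITE_FLUSH(hw);
    usleep_range(3000, 6000);       /* let the last Tx Data Fetch go out */

    poll = ixgbe_pcie_timeout_poll(hw);
    for (i = 0; i < poll; i++) {
            usleep_range(100, 200);
            value = ixgbe_read_pci_cfg_word(hw, IXGBE_PCI_DEVICE_STATUS);
            if (ixgbe_removed(hw->hw_addr))
                    break;
            if (!(value & IXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING))
                    break;          /* nothing pending: safe to clear buffers */
    }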
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
index b5f484b..e314b53 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
@@ -3583,7 +3583,8 @@ s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
**/
void ixgbe_clear_tx_pending(struct ixgbe_hw *hw)
{
- u32 gcr_ext, hlreg0;
+ u32 gcr_ext, hlreg0, i, poll;
+ u16 value;
/*
* If double reset is not requested then all transactions should
@@ -3600,6 +3601,24 @@ void ixgbe_clear_tx_pending(struct ixgbe_hw *hw)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0 | IXGBE_HLREG0_LPBK);
+ /* wait for a last completion before clearing buffers */
+ IXGBE_WRITE_FLUSH(hw);
+ usleep_range(3000, 6000);
+
+ /* Before proceeding, make sure that the PCIe block does not have
+ * transactions pending.
+ */
+ poll = ixgbe_pcie_timeout_poll(hw);
+ for (i = 0; i < poll; i++) {
+ usleep_range(100, 200);
+ value = ixgbe_read_pci_cfg_word(hw, IXGBE_PCI_DEVICE_STATUS);
+ if (ixgbe_removed(hw->hw_addr))
+ goto out;
+ if (!(value & IXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING))
+ goto out;
+ }
+
+out:
/* initiate cleaning flow for buffers in the PCIe transaction layer */
gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);
IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT,
--
1.9.3
* [net-next 7/9] ixgbe: cleanup ixgbe_ndo_set_vf_vlan
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
` (5 preceding siblings ...)
2014-11-07 8:57 ` [net-next 6/9] ixgbe: fix X540 Completion timeout Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
2014-11-07 8:57 ` [net-next 8/9] ixgbe: cleanup move setting PFQDE.HIDE_VLAN to support function Jeff Kirsher
2014-11-07 8:57 ` [net-next 9/9] ixgbe: Add new support for X550 MAC's Jeff Kirsher
8 siblings, 0 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem; +Cc: Don Skidmore, netdev, nhorman, sassmann, jogreene, Jeff Kirsher
From: Don Skidmore <donald.c.skidmore@intel.com>
Clean up functionality in ixgbe_ndo_set_vf_vlan that will simplify later
patches.
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 94 +++++++++++++++++---------
1 file changed, 61 insertions(+), 33 deletions(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 97c85b8..843e10d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -1079,52 +1079,80 @@ int ixgbe_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
return ixgbe_set_vf_mac(adapter, vf, mac);
}
+static int ixgbe_enable_port_vlan(struct ixgbe_adapter *adapter, int vf,
+ u16 vlan, u8 qos)
+{
+ struct ixgbe_hw *hw = &adapter->hw;
+ int err;
+
+ if (adapter->vfinfo[vf].pf_vlan)
+ err = ixgbe_set_vf_vlan(adapter, false,
+ adapter->vfinfo[vf].pf_vlan,
+ vf);
+ if (err)
+ goto out;
+ ixgbe_set_vmvir(adapter, vlan, qos, vf);
+ ixgbe_set_vmolr(hw, vf, false);
+ if (adapter->vfinfo[vf].spoofchk_enabled)
+ hw->mac.ops.set_vlan_anti_spoofing(hw, true, vf);
+ adapter->vfinfo[vf].vlan_count++;
+ adapter->vfinfo[vf].pf_vlan = vlan;
+ adapter->vfinfo[vf].pf_qos = qos;
+ dev_info(&adapter->pdev->dev,
+ "Setting VLAN %d, QOS 0x%x on VF %d\n", vlan, qos, vf);
+ if (test_bit(__IXGBE_DOWN, &adapter->state)) {
+ dev_warn(&adapter->pdev->dev,
+ "The VF VLAN has been set, but the PF device is not up.\n");
+ dev_warn(&adapter->pdev->dev,
+ "Bring the PF device up before attempting to use the VF device.\n");
+ }
+
+out:
+ return err;
+}
+
+static int ixgbe_disable_port_vlan(struct ixgbe_adapter *adapter, int vf)
+{
+ struct ixgbe_hw *hw = &adapter->hw;
+ int err;
+
+ err = ixgbe_set_vf_vlan(adapter, false,
+ adapter->vfinfo[vf].pf_vlan, vf);
+ ixgbe_clear_vmvir(adapter, vf);
+ ixgbe_set_vmolr(hw, vf, true);
+ hw->mac.ops.set_vlan_anti_spoofing(hw, false, vf);
+ if (adapter->vfinfo[vf].vlan_count)
+ adapter->vfinfo[vf].vlan_count--;
+ adapter->vfinfo[vf].pf_vlan = 0;
+ adapter->vfinfo[vf].pf_qos = 0;
+
+ return err;
+}
+
int ixgbe_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos)
{
int err = 0;
struct ixgbe_adapter *adapter = netdev_priv(netdev);
- struct ixgbe_hw *hw = &adapter->hw;
if ((vf >= adapter->num_vfs) || (vlan > 4095) || (qos > 7))
return -EINVAL;
if (vlan || qos) {
+ /* Check if there is already a port VLAN set, if so
+ * we have to delete the old one first before we
+ * can set the new one. The usage model had
+ * previously assumed the user would delete the
+ * old port VLAN before setting a new one but this
+ * is not necessarily the case.
+ */
if (adapter->vfinfo[vf].pf_vlan)
- err = ixgbe_set_vf_vlan(adapter, false,
- adapter->vfinfo[vf].pf_vlan,
- vf);
- if (err)
- goto out;
- err = ixgbe_set_vf_vlan(adapter, true, vlan, vf);
+ err = ixgbe_disable_port_vlan(adapter, vf);
if (err)
goto out;
- ixgbe_set_vmvir(adapter, vlan, qos, vf);
- ixgbe_set_vmolr(hw, vf, false);
- if (adapter->vfinfo[vf].spoofchk_enabled)
- hw->mac.ops.set_vlan_anti_spoofing(hw, true, vf);
- adapter->vfinfo[vf].vlan_count++;
- adapter->vfinfo[vf].pf_vlan = vlan;
- adapter->vfinfo[vf].pf_qos = qos;
- dev_info(&adapter->pdev->dev,
- "Setting VLAN %d, QOS 0x%x on VF %d\n", vlan, qos, vf);
- if (test_bit(__IXGBE_DOWN, &adapter->state)) {
- dev_warn(&adapter->pdev->dev,
- "The VF VLAN has been set,"
- " but the PF device is not up.\n");
- dev_warn(&adapter->pdev->dev,
- "Bring the PF device up before"
- " attempting to use the VF device.\n");
- }
+ err = ixgbe_enable_port_vlan(adapter, vf, vlan, qos);
} else {
- err = ixgbe_set_vf_vlan(adapter, false,
- adapter->vfinfo[vf].pf_vlan, vf);
- ixgbe_clear_vmvir(adapter, vf);
- ixgbe_set_vmolr(hw, vf, true);
- hw->mac.ops.set_vlan_anti_spoofing(hw, false, vf);
- if (adapter->vfinfo[vf].vlan_count)
- adapter->vfinfo[vf].vlan_count--;
- adapter->vfinfo[vf].pf_vlan = 0;
- adapter->vfinfo[vf].pf_qos = 0;
+ err = ixgbe_disable_port_vlan(adapter, vf);
}
+
out:
return err;
}
--
1.9.3
* [net-next 8/9] ixgbe: cleanup move setting PFQDE.HIDE_VLAN to support function.
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
` (6 preceding siblings ...)
2014-11-07 8:57 ` [net-next 7/9] ixgbe: cleanup ixgbe_ndo_set_vf_vlan Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
2014-11-07 8:57 ` [net-next 9/9] ixgbe: Add new support for X550 MAC's Jeff Kirsher
8 siblings, 0 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem; +Cc: Don Skidmore, netdev, nhorman, sassmann, jogreene, Jeff Kirsher
From: Don Skidmore <donald.c.skidmore@intel.com>
Move the setting of drop enable to a support function. This not only makes
the code more readable but is also prep for following patches that add
additional MAC support.
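Callers (illustrative only; see the hunks below) now just do:

    /* force drop enable for all of this VF's Rx queues */
    ixgbe_write_qde(adapter, vf, IXGBE_QDE_ENABLE);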
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 31 ++++++++++++++++++--------
1 file changed, 22 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 843e10d..daa37c2 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -618,6 +618,27 @@ int ixgbe_vf_configuration(struct pci_dev *pdev, unsigned int event_mask)
return 0;
}
+static inline void ixgbe_write_qde(struct ixgbe_adapter *adapter, u32 vf,
+ u32 qde)
+{
+ struct ixgbe_hw *hw = &adapter->hw;
+ struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ u32 q_per_pool = __ALIGN_MASK(1, ~vmdq->mask);
+ int i;
+
+ for (i = vf * q_per_pool; i < ((vf + 1) * q_per_pool); i++) {
+ u32 reg;
+
+ /* flush previous write */
+ IXGBE_WRITE_FLUSH(hw);
+
+ /* indicate to hardware that we want to set drop enable */
+ reg = IXGBE_QDE_WRITE | IXGBE_QDE_ENABLE;
+ reg |= i << IXGBE_QDE_IDX_SHIFT;
+ IXGBE_WRITE_REG(hw, IXGBE_QDE, reg);
+ }
+}
+
static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
{
struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
@@ -647,15 +668,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), reg);
/* force drop enable for all VF Rx queues */
- for (i = vf * q_per_pool; i < ((vf + 1) * q_per_pool); i++) {
- /* flush previous write */
- IXGBE_WRITE_FLUSH(hw);
-
- /* indicate to hardware that we want to set drop enable */
- reg = IXGBE_QDE_WRITE | IXGBE_QDE_ENABLE;
- reg |= i << IXGBE_QDE_IDX_SHIFT;
- IXGBE_WRITE_REG(hw, IXGBE_QDE, reg);
- }
+ ixgbe_write_qde(adapter, vf, IXGBE_QDE_ENABLE);
/* enable receive for vf */
reg = IXGBE_READ_REG(hw, IXGBE_VFRE(reg_offset));
--
1.9.3
* [net-next 9/9] ixgbe: Add new support for X550 MAC's
2014-11-07 8:57 [net-next 0/9][pull request] Intel Wired LAN Driver Updates 2014-11-07 Jeff Kirsher
` (7 preceding siblings ...)
2014-11-07 8:57 ` [net-next 8/9] ixgbe: cleanup move setting PFQDE.HIDE_VLAN to support function Jeff Kirsher
@ 2014-11-07 8:57 ` Jeff Kirsher
8 siblings, 0 replies; 16+ messages in thread
From: Jeff Kirsher @ 2014-11-07 8:57 UTC (permalink / raw)
To: davem
Cc: Don Skidmore, netdev, nhorman, sassmann, jogreene, john.ronciak,
Jeff Kirsher
From: Don Skidmore <donald.c.skidmore@intel.com>
This patch adds the new MAC defines and fits them into the switch cases
throughout the driver. New functionality and enablement support will be
added in following patches.
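The recurring pattern (illustrative only; see the hunks below) is simply to
let the new MACs take the existing 82599/X540 paths:

    switch (hw->mac.type) {
    case ixgbe_mac_82599EB:
    case ixgbe_mac_X540:
    case ixgbe_mac_X550:
    case ixgbe_mac_X550EM_x:
            /* existing 82599/X540 handling reused for the X550 family */
            break;
    default:
            break;
    }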
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 30 +++++---
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c | 8 +++
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c | 37 ++++++++--
drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 2 +
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 90 ++++++++++++++++++++----
drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c | 4 ++
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | 64 +++++++++--------
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 6 ++
drivers/net/ethernet/intel/ixgbe/ixgbe_type.h | 45 +++++++++---
10 files changed, 223 insertions(+), 64 deletions(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
index e314b53..9ce8a47 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
@@ -2799,6 +2799,8 @@ u16 ixgbe_get_pcie_msix_count_generic(struct ixgbe_hw *hw)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
pcie_offset = IXGBE_PCIE_MSIX_82599_CAPS;
max_msix_count = IXGBE_MAX_MSIX_VECTORS_82599;
break;
@@ -3192,17 +3194,27 @@ s32 ixgbe_check_mac_link_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
*link_up = false;
}
- if ((links_reg & IXGBE_LINKS_SPEED_82599) ==
- IXGBE_LINKS_SPEED_10G_82599)
- *speed = IXGBE_LINK_SPEED_10GB_FULL;
- else if ((links_reg & IXGBE_LINKS_SPEED_82599) ==
- IXGBE_LINKS_SPEED_1G_82599)
+ switch (links_reg & IXGBE_LINKS_SPEED_82599) {
+ case IXGBE_LINKS_SPEED_10G_82599:
+ if ((hw->mac.type >= ixgbe_mac_X550) &&
+ (links_reg & IXGBE_LINKS_SPEED_NON_STD))
+ *speed = IXGBE_LINK_SPEED_2_5GB_FULL;
+ else
+ *speed = IXGBE_LINK_SPEED_10GB_FULL;
+ break;
+ case IXGBE_LINKS_SPEED_1G_82599:
*speed = IXGBE_LINK_SPEED_1GB_FULL;
- else if ((links_reg & IXGBE_LINKS_SPEED_82599) ==
- IXGBE_LINKS_SPEED_100_82599)
- *speed = IXGBE_LINK_SPEED_100_FULL;
- else
+ break;
+ case IXGBE_LINKS_SPEED_100_82599:
+ if ((hw->mac.type >= ixgbe_mac_X550) &&
+ (links_reg & IXGBE_LINKS_SPEED_NON_STD))
+ *speed = IXGBE_LINK_SPEED_5GB_FULL;
+ else
+ *speed = IXGBE_LINK_SPEED_100_FULL;
+ break;
+ default:
*speed = IXGBE_LINK_SPEED_UNKNOWN;
+ }
return 0;
}
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c
index 48f35fc..a507a6f 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c
@@ -286,6 +286,8 @@ s32 ixgbe_dcb_hw_config(struct ixgbe_hw *hw,
bwgid, ptype);
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
return ixgbe_dcb_hw_config_82599(hw, pfc_en, refill, max,
bwgid, ptype, prio_tc);
default:
@@ -302,6 +304,8 @@ s32 ixgbe_dcb_hw_pfc_config(struct ixgbe_hw *hw, u8 pfc_en, u8 *prio_tc)
return ixgbe_dcb_config_pfc_82598(hw, pfc_en);
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
return ixgbe_dcb_config_pfc_82599(hw, pfc_en, prio_tc);
default:
break;
@@ -357,6 +361,8 @@ s32 ixgbe_dcb_hw_ets_config(struct ixgbe_hw *hw,
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
ixgbe_dcb_config_rx_arbiter_82599(hw, refill, max,
bwg_id, prio_type, prio_tc);
ixgbe_dcb_config_tx_desc_arbiter_82599(hw, refill, max,
@@ -385,6 +391,8 @@ void ixgbe_dcb_read_rtrup2tc(struct ixgbe_hw *hw, u8 *map)
switch (hw->mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
ixgbe_dcb_read_rtrup2tc_82599(hw, map);
break;
default:
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
index 58a7f53..2707bda 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
@@ -180,6 +180,7 @@ static void ixgbe_dcbnl_get_perm_hw_addr(struct net_device *netdev,
switch (adapter->hw.mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
for (j = 0; j < netdev->addr_len; j++, i++)
perm_addr[i] = adapter->hw.mac.san_addr[j];
break;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index 0ae038b..26fd85e 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -511,6 +511,8 @@ static void ixgbe_get_regs(struct net_device *netdev,
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
regs_buff[35 + i] = IXGBE_READ_REG(hw, IXGBE_FCRTL_82599(i));
regs_buff[43 + i] = IXGBE_READ_REG(hw, IXGBE_FCRTH_82599(i));
break;
@@ -622,6 +624,8 @@ static void ixgbe_get_regs(struct net_device *netdev,
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
regs_buff[830] = IXGBE_READ_REG(hw, IXGBE_RTTDCS);
regs_buff[832] = IXGBE_READ_REG(hw, IXGBE_RTRPCS);
for (i = 0; i < 8; i++)
@@ -1406,6 +1410,8 @@ static int ixgbe_reg_test(struct ixgbe_adapter *adapter, u64 *data)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
toggle = 0x7FFFF30F;
test = reg_test_82599;
break;
@@ -1644,6 +1650,8 @@ static void ixgbe_free_desc_rings(struct ixgbe_adapter *adapter)
switch (hw->mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
reg_ctl = IXGBE_READ_REG(hw, IXGBE_DMATXCTL);
reg_ctl &= ~IXGBE_DMATXCTL_TE;
IXGBE_WRITE_REG(hw, IXGBE_DMATXCTL, reg_ctl);
@@ -1680,6 +1688,8 @@ static int ixgbe_setup_desc_rings(struct ixgbe_adapter *adapter)
switch (adapter->hw.mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
reg_data = IXGBE_READ_REG(&adapter->hw, IXGBE_DMATXCTL);
reg_data |= IXGBE_DMATXCTL_TE;
IXGBE_WRITE_REG(&adapter->hw, IXGBE_DMATXCTL, reg_data);
@@ -1733,12 +1743,16 @@ static int ixgbe_setup_loopback_test(struct ixgbe_adapter *adapter)
reg_data |= IXGBE_FCTRL_BAM | IXGBE_FCTRL_SBP | IXGBE_FCTRL_MPE;
IXGBE_WRITE_REG(hw, IXGBE_FCTRL, reg_data);
- /* X540 needs to set the MACC.FLU bit to force link up */
- if (adapter->hw.mac.type == ixgbe_mac_X540) {
+ /* X540 and X550 needs to set the MACC.FLU bit to force link up */
+ switch (adapter->hw.mac.type) {
+ case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
reg_data = IXGBE_READ_REG(hw, IXGBE_MACC);
reg_data |= IXGBE_MACC_FLU;
IXGBE_WRITE_REG(hw, IXGBE_MACC, reg_data);
- } else {
+ break;
+ default:
if (hw->mac.orig_autoc) {
reg_data = hw->mac.orig_autoc | IXGBE_AUTOC_FLU;
IXGBE_WRITE_REG(hw, IXGBE_AUTOC, reg_data);
@@ -2776,7 +2790,14 @@ static int ixgbe_set_rss_hash_opt(struct ixgbe_adapter *adapter,
/* if we changed something we need to update flags */
if (flags2 != adapter->flags2) {
struct ixgbe_hw *hw = &adapter->hw;
- u32 mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
+ u32 mrqc;
+ unsigned int pf_pool = adapter->num_vfs;
+
+ if ((hw->mac.type >= ixgbe_mac_X550) &&
+ (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED))
+ mrqc = IXGBE_READ_REG(hw, IXGBE_PFVFMRQC(pf_pool));
+ else
+ mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
if ((flags2 & UDP_RSS_FLAGS) &&
!(adapter->flags2 & UDP_RSS_FLAGS))
@@ -2799,7 +2820,11 @@ static int ixgbe_set_rss_hash_opt(struct ixgbe_adapter *adapter,
if (flags2 & IXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
- IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
+ if ((hw->mac.type >= ixgbe_mac_X550) &&
+ (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED))
+ IXGBE_WRITE_REG(hw, IXGBE_PFVFMRQC(pf_pool), mrqc);
+ else
+ IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
}
return 0;
@@ -2833,6 +2858,8 @@ static int ixgbe_get_ts_info(struct net_device *dev,
struct ixgbe_adapter *adapter = netdev_priv(dev);
switch (adapter->hw.mac.type) {
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
case ixgbe_mac_X540:
case ixgbe_mac_82599EB:
info->so_timestamping =
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index ce40c77..68e1e75 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -126,6 +126,8 @@ static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc,
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
if (num_tcs > 4) {
/*
* TCs : TC0/1 TC2/3 TC4-7
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index d2df4e3..355d1f7 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -835,6 +835,8 @@ static void ixgbe_set_ivar(struct ixgbe_adapter *adapter, s8 direction,
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
if (direction == -1) {
/* other causes */
msix_vector |= IXGBE_IVAR_ALLOC_VAL;
@@ -871,6 +873,8 @@ static inline void ixgbe_irq_rearm_queues(struct ixgbe_adapter *adapter,
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
mask = (qmask & 0xFFFFFFFF);
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EICS_EX(0), mask);
mask = (qmask >> 32);
@@ -2155,6 +2159,8 @@ static void ixgbe_configure_msix(struct ixgbe_adapter *adapter)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
ixgbe_set_ivar(adapter, -1, 1, v_idx);
break;
default:
@@ -2264,6 +2270,8 @@ void ixgbe_write_eitr(struct ixgbe_q_vector *q_vector)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
/*
* set the WDIS bit to not clear the timer bits and cause an
* immediate assertion of the interrupt
@@ -2467,6 +2475,8 @@ static inline void ixgbe_irq_enable_queues(struct ixgbe_adapter *adapter,
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
mask = (qmask & 0xFFFFFFFF);
if (mask)
IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(0), mask);
@@ -2493,6 +2503,8 @@ static inline void ixgbe_irq_disable_queues(struct ixgbe_adapter *adapter,
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
mask = (qmask & 0xFFFFFFFF);
if (mask)
IXGBE_WRITE_REG(hw, IXGBE_EIMC_EX(0), mask);
@@ -2525,6 +2537,8 @@ static inline void ixgbe_irq_enable(struct ixgbe_adapter *adapter, bool queues,
mask |= IXGBE_EIMS_GPI_SDP0;
break;
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
mask |= IXGBE_EIMS_TS;
break;
default:
@@ -2536,7 +2550,10 @@ static inline void ixgbe_irq_enable(struct ixgbe_adapter *adapter, bool queues,
case ixgbe_mac_82599EB:
mask |= IXGBE_EIMS_GPI_SDP1;
mask |= IXGBE_EIMS_GPI_SDP2;
+ /* fall through */
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
mask |= IXGBE_EIMS_ECC;
mask |= IXGBE_EIMS_MAILBOX;
break;
@@ -2544,9 +2561,6 @@ static inline void ixgbe_irq_enable(struct ixgbe_adapter *adapter, bool queues,
break;
}
- if (adapter->hw.mac.type == ixgbe_mac_X540)
- mask |= IXGBE_EIMS_TIMESYNC;
-
if ((adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) &&
!(adapter->flags2 & IXGBE_FLAG2_FDIR_REQUIRES_REINIT))
mask |= IXGBE_EIMS_FLOW_DIR;
@@ -2592,6 +2606,8 @@ static irqreturn_t ixgbe_msix_other(int irq, void *data)
switch (hw->mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
if (eicr & IXGBE_EICR_ECC) {
e_info(link, "Received ECC Err, initiating reset\n");
adapter->flags2 |= IXGBE_FLAG2_RESET_REQUESTED;
@@ -2811,6 +2827,8 @@ static irqreturn_t ixgbe_intr(int irq, void *data)
ixgbe_check_sfp_event(adapter, eicr);
/* Fall through */
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
if (eicr & IXGBE_EICR_ECC) {
e_info(link, "Received ECC Err, initiating reset\n");
adapter->flags2 |= IXGBE_FLAG2_RESET_REQUESTED;
@@ -2905,6 +2923,8 @@ static inline void ixgbe_irq_disable(struct ixgbe_adapter *adapter)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC, 0xFFFF0000);
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC_EX(0), ~0);
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC_EX(1), ~0);
@@ -3534,6 +3554,8 @@ static void ixgbe_setup_rdrxctl(struct ixgbe_adapter *adapter)
u32 rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
switch (hw->mac.type) {
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
case ixgbe_mac_82598EB:
/*
* For VMDq support of different descriptor types or
@@ -3657,6 +3679,8 @@ static void ixgbe_vlan_strip_disable(struct ixgbe_adapter *adapter)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
for (i = 0; i < adapter->num_rx_queues; i++) {
struct ixgbe_ring *ring = adapter->rx_ring[i];
@@ -3691,6 +3715,8 @@ static void ixgbe_vlan_strip_enable(struct ixgbe_adapter *adapter)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
for (i = 0; i < adapter->num_rx_queues; i++) {
struct ixgbe_ring *ring = adapter->rx_ring[i];
@@ -4112,6 +4138,8 @@ static int ixgbe_hpbthresh(struct ixgbe_adapter *adapter, int pb)
/* Calculate delay value for device */
switch (hw->mac.type) {
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
dv_id = IXGBE_DV_X540(link, tc);
break;
default:
@@ -4170,6 +4198,8 @@ static int ixgbe_lpbthresh(struct ixgbe_adapter *adapter, int pb)
/* Calculate delay value for device */
switch (hw->mac.type) {
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
dv_id = IXGBE_LOW_DV_X540(tc);
break;
default:
@@ -4606,6 +4636,8 @@ static void ixgbe_setup_gpie(struct ixgbe_adapter *adapter)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
default:
IXGBE_WRITE_REG(hw, IXGBE_EIAM_EX(0), 0xFFFFFFFF);
IXGBE_WRITE_REG(hw, IXGBE_EIAM_EX(1), 0xFFFFFFFF);
@@ -4948,10 +4980,12 @@ void ixgbe_down(struct ixgbe_adapter *adapter)
IXGBE_WRITE_REG(hw, IXGBE_TXDCTL(reg_idx), IXGBE_TXDCTL_SWFLSH);
}
- /* Disable the Tx DMA engine on 82599 and X540 */
+ /* Disable the Tx DMA engine on 82599 and later MAC */
switch (hw->mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
IXGBE_WRITE_REG(hw, IXGBE_DMATXCTL,
(IXGBE_READ_REG(hw, IXGBE_DMATXCTL) &
~IXGBE_DMATXCTL_TE));
@@ -5071,6 +5105,12 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
if (fwsm & IXGBE_FWSM_TS_ENABLED)
adapter->flags2 |= IXGBE_FLAG2_TEMP_SENSOR_CAPABLE;
break;
+ case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_X550:
+#ifdef CONFIG_IXGBE_DCA
+ adapter->flags &= ~IXGBE_FLAG_DCA_CAPABLE;
+#endif
+ break;
default:
break;
}
@@ -5086,6 +5126,8 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
#ifdef CONFIG_IXGBE_DCB
switch (hw->mac.type) {
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
adapter->dcb_cfg.num_tcs.pg_tcs = X540_TRAFFIC_CLASS;
adapter->dcb_cfg.num_tcs.pfc_tcs = X540_TRAFFIC_CLASS;
break;
@@ -5675,6 +5717,8 @@ static int __ixgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
pci_wake_from_d3(pdev, !!wufc);
break;
default:
@@ -5806,6 +5850,8 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
break;
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
hwstats->pxonrxc[i] +=
IXGBE_READ_REG(hw, IXGBE_PXONRXCNT(i));
break;
@@ -5819,7 +5865,9 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
hwstats->qptc[i] += IXGBE_READ_REG(hw, IXGBE_QPTC(i));
hwstats->qprc[i] += IXGBE_READ_REG(hw, IXGBE_QPRC(i));
if ((hw->mac.type == ixgbe_mac_82599EB) ||
- (hw->mac.type == ixgbe_mac_X540)) {
+ (hw->mac.type == ixgbe_mac_X540) ||
+ (hw->mac.type == ixgbe_mac_X550) ||
+ (hw->mac.type == ixgbe_mac_X550EM_x)) {
hwstats->qbtc[i] += IXGBE_READ_REG(hw, IXGBE_QBTC_L(i));
IXGBE_READ_REG(hw, IXGBE_QBTC_H(i)); /* to clear */
hwstats->qbrc[i] += IXGBE_READ_REG(hw, IXGBE_QBRC_L(i));
@@ -5842,7 +5890,9 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
hwstats->tor += IXGBE_READ_REG(hw, IXGBE_TORH);
break;
case ixgbe_mac_X540:
- /* OS2BMC stats are X540 only*/
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
+ /* OS2BMC stats are X540 and later */
hwstats->o2bgptc += IXGBE_READ_REG(hw, IXGBE_O2BGPTC);
hwstats->o2bspc += IXGBE_READ_REG(hw, IXGBE_O2BSPC);
hwstats->b2ospc += IXGBE_READ_REG(hw, IXGBE_B2OSPC);
@@ -6110,6 +6160,8 @@ static void ixgbe_watchdog_link_is_up(struct ixgbe_adapter *adapter)
}
break;
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
case ixgbe_mac_82599EB: {
u32 mflcn = IXGBE_READ_REG(hw, IXGBE_MFLCN);
u32 fccfg = IXGBE_READ_REG(hw, IXGBE_FCCFG);
@@ -6221,6 +6273,10 @@ static bool ixgbe_vf_tx_pending(struct ixgbe_adapter *adapter)
if (!adapter->num_vfs)
return false;
+ /* resetting the PF is only needed for MAC before X550 */
+ if (hw->mac.type >= ixgbe_mac_X550)
+ return false;
+
for (i = 0; i < adapter->num_vfs; i++) {
for (j = 0; j < q_per_pool; j++) {
u32 h, t;
@@ -6430,11 +6486,11 @@ static void ixgbe_check_for_bad_vf(struct ixgbe_adapter *adapter)
ciaa = (vf << 16) | 0x80000000;
/* 32 bit read so align, we really want status at offset 6 */
ciaa |= PCI_COMMAND;
- IXGBE_WRITE_REG(hw, IXGBE_CIAA_82599, ciaa);
- ciad = IXGBE_READ_REG(hw, IXGBE_CIAD_82599);
+ IXGBE_WRITE_REG(hw, IXGBE_CIAA_BY_MAC(hw), ciaa);
+ ciad = IXGBE_READ_REG(hw, IXGBE_CIAD_BY_MAC(hw));
ciaa &= 0x7FFFFFFF;
/* disable debug mode asap after reading data */
- IXGBE_WRITE_REG(hw, IXGBE_CIAA_82599, ciaa);
+ IXGBE_WRITE_REG(hw, IXGBE_CIAA_BY_MAC(hw), ciaa);
/* Get the upper 16 bits which will be the PCI status reg */
ciad >>= 16;
if (ciad & PCI_STATUS_REC_MASTER_ABORT) {
@@ -6442,11 +6498,11 @@ static void ixgbe_check_for_bad_vf(struct ixgbe_adapter *adapter)
/* Issue VFLR */
ciaa = (vf << 16) | 0x80000000;
ciaa |= 0xA8;
- IXGBE_WRITE_REG(hw, IXGBE_CIAA_82599, ciaa);
+ IXGBE_WRITE_REG(hw, IXGBE_CIAA_BY_MAC(hw), ciaa);
ciad = 0x00008000; /* VFLR */
- IXGBE_WRITE_REG(hw, IXGBE_CIAD_82599, ciad);
+ IXGBE_WRITE_REG(hw, IXGBE_CIAD_BY_MAC(hw), ciad);
ciaa &= 0x7FFFFFFF;
- IXGBE_WRITE_REG(hw, IXGBE_CIAA_82599, ciaa);
+ IXGBE_WRITE_REG(hw, IXGBE_CIAA_BY_MAC(hw), ciaa);
}
}
}
@@ -8098,6 +8154,8 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
switch (adapter->hw.mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
IXGBE_WRITE_REG(&adapter->hw, IXGBE_WUS, ~0);
break;
default:
@@ -8161,6 +8219,8 @@ skip_sriov:
switch (adapter->hw.mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
netdev->features |= NETIF_F_SCTP_CSUM;
netdev->hw_features |= NETIF_F_SCTP_CSUM |
NETIF_F_NTUPLE;
@@ -8514,6 +8574,12 @@ static pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev,
case ixgbe_mac_X540:
device_id = IXGBE_X540_VF_DEVICE_ID;
break;
+ case ixgbe_mac_X550:
+ device_id = IXGBE_DEV_ID_X550_VF;
+ break;
+ case ixgbe_mac_X550EM_x:
+ device_id = IXGBE_DEV_ID_X550EM_X_VF;
+ break;
default:
device_id = 0;
break;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
index cc8f012..9993a47 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
@@ -305,6 +305,8 @@ static s32 ixgbe_check_for_rst_pf(struct ixgbe_hw *hw, u16 vf_number)
vflre = IXGBE_READ_REG(hw, IXGBE_VFLRE(reg_offset));
break;
case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ case ixgbe_mac_X550EM_x:
vflre = IXGBE_READ_REG(hw, IXGBE_VFLREC(reg_offset));
break;
default:
@@ -426,6 +428,8 @@ void ixgbe_init_mbx_params_pf(struct ixgbe_hw *hw)
struct ixgbe_mbx_info *mbx = &hw->mbx;
if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
hw->mac.type != ixgbe_mac_X540)
return;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
index d47b19f..dc97c03 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
@@ -43,7 +43,7 @@ static s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data);
static void ixgbe_raise_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl);
static void ixgbe_lower_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl);
static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data);
-static bool ixgbe_get_i2c_data(u32 *i2cctl);
+static bool ixgbe_get_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl);
static void ixgbe_i2c_bus_clear(struct ixgbe_hw *hw);
static enum ixgbe_phy_type ixgbe_get_phy_type_from_id(u32 phy_id);
static s32 ixgbe_get_phy_id(struct ixgbe_hw *hw);
@@ -576,6 +576,10 @@ s32 ixgbe_get_copper_link_capabilities_generic(struct ixgbe_hw *hw,
*speed |= IXGBE_LINK_SPEED_100_FULL;
}
+ /* Internal PHY does not support 100 Mbps */
+ if (hw->mac.type == ixgbe_mac_X550EM_x)
+ *speed &= ~IXGBE_LINK_SPEED_100_FULL;
+
return status;
}
@@ -632,10 +636,12 @@ s32 ixgbe_check_phy_link_tnx(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
* @hw: pointer to hardware structure
*
* Restart autonegotiation and PHY and waits for completion.
+ * This function always returns success. This is necessary since it is
+ * called via a function pointer that could call other functions that
+ * could return an error.
**/
s32 ixgbe_setup_phy_link_tnx(struct ixgbe_hw *hw)
{
- s32 status;
u16 autoneg_reg = IXGBE_MII_AUTONEG_REG;
bool autoneg = false;
ixgbe_link_speed speed;
@@ -701,7 +707,7 @@ s32 ixgbe_setup_phy_link_tnx(struct ixgbe_hw *hw)
hw->phy.ops.write_reg(hw, MDIO_CTRL1,
MDIO_MMD_AN, autoneg_reg);
- return status;
+ return 0;
}
/**
@@ -1612,7 +1618,7 @@ fail:
**/
static void ixgbe_i2c_start(struct ixgbe_hw *hw)
{
- u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
+ u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
/* Start condition must begin with data and clock high */
ixgbe_set_i2c_data(hw, &i2cctl, 1);
@@ -1641,7 +1647,7 @@ static void ixgbe_i2c_start(struct ixgbe_hw *hw)
**/
static void ixgbe_i2c_stop(struct ixgbe_hw *hw)
{
- u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
+ u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
/* Stop condition must begin with data low and clock high */
ixgbe_set_i2c_data(hw, &i2cctl, 0);
@@ -1699,9 +1705,9 @@ static s32 ixgbe_clock_out_i2c_byte(struct ixgbe_hw *hw, u8 data)
}
/* Release SDA line (set high) */
- i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
- i2cctl |= IXGBE_I2C_DATA_OUT;
- IXGBE_WRITE_REG(hw, IXGBE_I2CCTL, i2cctl);
+ i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
+ i2cctl |= IXGBE_I2C_DATA_OUT_BY_MAC(hw);
+ IXGBE_WRITE_REG(hw, IXGBE_I2CCTL_BY_MAC(hw), i2cctl);
IXGBE_WRITE_FLUSH(hw);
return status;
@@ -1717,7 +1723,7 @@ static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw)
{
s32 status = 0;
u32 i = 0;
- u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
+ u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
u32 timeout = 10;
bool ack = true;
@@ -1730,8 +1736,8 @@ static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw)
/* Poll for ACK. Note that ACK in I2C spec is
* transition from 1 to 0 */
for (i = 0; i < timeout; i++) {
- i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
- ack = ixgbe_get_i2c_data(&i2cctl);
+ i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
+ ack = ixgbe_get_i2c_data(hw, &i2cctl);
udelay(1);
if (ack == 0)
@@ -1760,15 +1766,15 @@ static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw)
**/
static s32 ixgbe_clock_in_i2c_bit(struct ixgbe_hw *hw, bool *data)
{
- u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
+ u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
ixgbe_raise_i2c_clk(hw, &i2cctl);
/* Minimum high period of clock is 4us */
udelay(IXGBE_I2C_T_HIGH);
- i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
- *data = ixgbe_get_i2c_data(&i2cctl);
+ i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
+ *data = ixgbe_get_i2c_data(hw, &i2cctl);
ixgbe_lower_i2c_clk(hw, &i2cctl);
@@ -1788,7 +1794,7 @@ static s32 ixgbe_clock_in_i2c_bit(struct ixgbe_hw *hw, bool *data)
static s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data)
{
s32 status;
- u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
+ u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
status = ixgbe_set_i2c_data(hw, &i2cctl, data);
if (status == 0) {
@@ -1824,14 +1830,14 @@ static void ixgbe_raise_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl)
u32 i2cctl_r = 0;
for (i = 0; i < timeout; i++) {
- *i2cctl |= IXGBE_I2C_CLK_OUT;
- IXGBE_WRITE_REG(hw, IXGBE_I2CCTL, *i2cctl);
+ *i2cctl |= IXGBE_I2C_CLK_OUT_BY_MAC(hw);
+ IXGBE_WRITE_REG(hw, IXGBE_I2CCTL_BY_MAC(hw), *i2cctl);
IXGBE_WRITE_FLUSH(hw);
/* SCL rise time (1000ns) */
udelay(IXGBE_I2C_T_RISE);
- i2cctl_r = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
- if (i2cctl_r & IXGBE_I2C_CLK_IN)
+ i2cctl_r = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
+ if (i2cctl_r & IXGBE_I2C_CLK_IN_BY_MAC(hw))
break;
}
}
@@ -1846,9 +1852,9 @@ static void ixgbe_raise_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl)
static void ixgbe_lower_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl)
{
- *i2cctl &= ~IXGBE_I2C_CLK_OUT;
+ *i2cctl &= ~IXGBE_I2C_CLK_OUT_BY_MAC(hw);
- IXGBE_WRITE_REG(hw, IXGBE_I2CCTL, *i2cctl);
+ IXGBE_WRITE_REG(hw, IXGBE_I2CCTL_BY_MAC(hw), *i2cctl);
IXGBE_WRITE_FLUSH(hw);
/* SCL fall time (300ns) */
@@ -1866,19 +1872,19 @@ static void ixgbe_lower_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl)
static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data)
{
if (data)
- *i2cctl |= IXGBE_I2C_DATA_OUT;
+ *i2cctl |= IXGBE_I2C_DATA_OUT_BY_MAC(hw);
else
- *i2cctl &= ~IXGBE_I2C_DATA_OUT;
+ *i2cctl &= ~IXGBE_I2C_DATA_OUT_BY_MAC(hw);
- IXGBE_WRITE_REG(hw, IXGBE_I2CCTL, *i2cctl);
+ IXGBE_WRITE_REG(hw, IXGBE_I2CCTL_BY_MAC(hw), *i2cctl);
IXGBE_WRITE_FLUSH(hw);
/* Data rise/fall (1000ns/300ns) and set-up time (250ns) */
udelay(IXGBE_I2C_T_RISE + IXGBE_I2C_T_FALL + IXGBE_I2C_T_SU_DATA);
/* Verify data was set correctly */
- *i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
- if (data != ixgbe_get_i2c_data(i2cctl)) {
+ *i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
+ if (data != ixgbe_get_i2c_data(hw, i2cctl)) {
hw_dbg(hw, "Error - I2C data was not set to %X.\n", data);
return IXGBE_ERR_I2C;
}
@@ -1893,9 +1899,9 @@ static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data)
*
* Returns the I2C data bit value
**/
-static bool ixgbe_get_i2c_data(u32 *i2cctl)
+static bool ixgbe_get_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl)
{
- if (*i2cctl & IXGBE_I2C_DATA_IN)
+ if (*i2cctl & IXGBE_I2C_DATA_IN_BY_MAC(hw))
return true;
return false;
}
@@ -1909,7 +1915,7 @@ static bool ixgbe_get_i2c_data(u32 *i2cctl)
**/
static void ixgbe_i2c_bus_clear(struct ixgbe_hw *hw)
{
- u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL);
+ u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL_BY_MAC(hw));
u32 i;
ixgbe_i2c_start(hw);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index daa37c2..566f995 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -1109,6 +1109,12 @@ static int ixgbe_enable_port_vlan(struct ixgbe_adapter *adapter, int vf,
if (adapter->vfinfo[vf].spoofchk_enabled)
hw->mac.ops.set_vlan_anti_spoofing(hw, true, vf);
adapter->vfinfo[vf].vlan_count++;
+
+ /* enable hide vlan on X550 */
+ if (hw->mac.type >= ixgbe_mac_X550)
+ ixgbe_write_qde(adapter, vf, IXGBE_QDE_ENABLE |
+ IXGBE_QDE_HIDE_VLAN);
+
adapter->vfinfo[vf].pf_vlan = vlan;
adapter->vfinfo[vf].pf_qos = qos;
dev_info(&adapter->pdev->dev,
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
index dfd55d8..64de20d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
@@ -77,6 +77,8 @@
/* VF Device IDs */
#define IXGBE_DEV_ID_82599_VF 0x10ED
#define IXGBE_DEV_ID_X540_VF 0x1515
+#define IXGBE_DEV_ID_X550_VF 0x1565
+#define IXGBE_DEV_ID_X550EM_X_VF 0x15A8
/* General Registers */
#define IXGBE_CTRL 0x00000
@@ -84,7 +86,8 @@
#define IXGBE_CTRL_EXT 0x00018
#define IXGBE_ESDP 0x00020
#define IXGBE_EODSDP 0x00028
-#define IXGBE_I2CCTL 0x00028
+#define IXGBE_I2CCTL_BY_MAC(_hw)((((_hw)->mac.type >= ixgbe_mac_X550) ? \
+ 0x15F5C : 0x00028))
#define IXGBE_LEDCTL 0x00200
#define IXGBE_FRTIMER 0x00048
#define IXGBE_TCPTIMER 0x0004C
@@ -112,10 +115,14 @@
#define IXGBE_VPDDIAG1 0x10208
/* I2CCTL Bit Masks */
-#define IXGBE_I2C_CLK_IN 0x00000001
-#define IXGBE_I2C_CLK_OUT 0x00000002
-#define IXGBE_I2C_DATA_IN 0x00000004
-#define IXGBE_I2C_DATA_OUT 0x00000008
+#define IXGBE_I2C_CLK_IN_BY_MAC(_hw)(((_hw)->mac.type) >= ixgbe_mac_X550 ? \
+ 0x00004000 : 0x00000001)
+#define IXGBE_I2C_CLK_OUT_BY_MAC(_hw)(((_hw)->mac.type) >= ixgbe_mac_X550 ? \
+ 0x00000200 : 0x00000002)
+#define IXGBE_I2C_DATA_IN_BY_MAC(_hw)(((_hw)->mac.type) >= ixgbe_mac_X550 ? \
+ 0x00001000 : 0x00000004)
+#define IXGBE_I2C_DATA_OUT_BY_MAC(_hw)(((_hw)->mac.type) >= ixgbe_mac_X550 ? \
+ 0x00000400 : 0x00000008)
#define IXGBE_I2C_CLOCK_STRETCHING_TIMEOUT 500
#define IXGBE_I2C_THERMAL_SENSOR_ADDR 0xF8
@@ -292,6 +299,14 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_RETA(_i) (0x05C00 + ((_i) * 4)) /* 32 of these (0-31) */
#define IXGBE_RSSRK(_i) (0x05C80 + ((_i) * 4)) /* 10 of these (0-9) */
+/* Registers for setting up RSS on X550 with SRIOV
+ * _p - pool number (0..63)
+ * _i - index (0..10 for PFVFRSSRK, 0..15 for PFVFRETA)
+ */
+#define IXGBE_PFVFMRQC(_p) (0x03400 + ((_p) * 4))
+#define IXGBE_PFVFRSSRK(_i, _p) (0x018000 + ((_i) * 4) + ((_p) * 0x40))
+#define IXGBE_PFVFRETA(_i, _p) (0x019000 + ((_i) * 4) + ((_p) * 0x40))
+
/* Flow Director registers */
#define IXGBE_FDIRCTRL 0x0EE00
#define IXGBE_FDIRHKEY 0x0EE68
@@ -798,6 +813,12 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_PBACLR_82599 0x11068
#define IXGBE_CIAA_82599 0x11088
#define IXGBE_CIAD_82599 0x1108C
+#define IXGBE_CIAA_X550 0x11508
+#define IXGBE_CIAD_X550 0x11510
+#define IXGBE_CIAA_BY_MAC(_hw) ((((_hw)->mac.type >= ixgbe_mac_X550) ? \
+ IXGBE_CIAA_X550 : IXGBE_CIAA_82599))
+#define IXGBE_CIAD_BY_MAC(_hw) ((((_hw)->mac.type >= ixgbe_mac_X550) ? \
+ IXGBE_CIAD_X550 : IXGBE_CIAD_82599))
#define IXGBE_PICAUSE 0x110B0
#define IXGBE_PIENA 0x110B8
#define IXGBE_CDQ_MBR_82599 0x110B4
@@ -1632,6 +1653,7 @@ enum {
#define IXGBE_LINKS_TL_FAULT 0x00001000
#define IXGBE_LINKS_SIGNAL 0x00000F00
+#define IXGBE_LINKS_SPEED_NON_STD 0x08000000
#define IXGBE_LINKS_SPEED_82599 0x30000000
#define IXGBE_LINKS_SPEED_10G_82599 0x30000000
#define IXGBE_LINKS_SPEED_1G_82599 0x20000000
@@ -2000,6 +2022,7 @@ enum {
/* Queue Drop Enable */
#define IXGBE_QDE_ENABLE 0x00000001
+#define IXGBE_QDE_HIDE_VLAN 0x00000002
#define IXGBE_QDE_IDX_MASK 0x00007F00
#define IXGBE_QDE_IDX_SHIFT 8
#define IXGBE_QDE_WRITE 0x00010000
@@ -2437,10 +2460,12 @@ struct ixgbe_adv_tx_context_desc {
typedef u32 ixgbe_autoneg_advertised;
/* Link speed */
typedef u32 ixgbe_link_speed;
-#define IXGBE_LINK_SPEED_UNKNOWN 0
-#define IXGBE_LINK_SPEED_100_FULL 0x0008
-#define IXGBE_LINK_SPEED_1GB_FULL 0x0020
-#define IXGBE_LINK_SPEED_10GB_FULL 0x0080
+#define IXGBE_LINK_SPEED_UNKNOWN 0
+#define IXGBE_LINK_SPEED_100_FULL 0x0008
+#define IXGBE_LINK_SPEED_1GB_FULL 0x0020
+#define IXGBE_LINK_SPEED_2_5GB_FULL 0x0400
+#define IXGBE_LINK_SPEED_5GB_FULL 0x0800
+#define IXGBE_LINK_SPEED_10GB_FULL 0x0080
#define IXGBE_LINK_SPEED_82598_AUTONEG (IXGBE_LINK_SPEED_1GB_FULL | \
IXGBE_LINK_SPEED_10GB_FULL)
#define IXGBE_LINK_SPEED_82599_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
@@ -2588,6 +2613,8 @@ enum ixgbe_mac_type {
ixgbe_mac_82598EB,
ixgbe_mac_82599EB,
ixgbe_mac_X540,
+ ixgbe_mac_X550,
+ ixgbe_mac_X550EM_x,
ixgbe_num_macs
};
--
1.9.3
* RE: [net-next 1/9] i40e: poll firmware slower
2014-11-07 8:57 ` [net-next 1/9] i40e: poll firmware slower Jeff Kirsher
@ 2014-11-07 9:40 ` David Laight
2014-11-07 22:58 ` Jesse Brandeburg
2014-11-07 13:29 ` Or Gerlitz
1 sibling, 1 reply; 16+ messages in thread
From: David Laight @ 2014-11-07 9:40 UTC (permalink / raw)
To: 'Jeff Kirsher', davem@davemloft.net
Cc: Kamil Krawczyk, netdev@vger.kernel.org, nhorman@redhat.com,
sassmann@redhat.com, jogreene@redhat.com
From: Jeff Kirsher
> From: Kamil Krawczyk <kamil.krawczyk@intel.com>
>
> The code was polling the firmware tail register for completion every
> 10 microseconds, which is way faster than the firmware can respond.
> This changes the poll interval to 1ms, which reduces polling CPU
> utilization, and the number of times we loop.
Are you sure the code path is allowed to sleep?
> The maximum delay is still 100ms.
Actually it is now likely to be up to 200ms or more.
You could convert the maximum delay check to one that
looks at jiffies - but maybe it doesn't matter.
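For illustration, a wall-clock bound on the same loop could look something
like this (only a sketch reusing the names already in the patch, not a
tested change):

	if (!details->async && !details->postpone) {
		/* give up after asq_cmd_timeout milliseconds of real time,
		 * however long each individual sleep actually takes
		 */
		unsigned long timeout = jiffies +
				msecs_to_jiffies(hw->aq.asq_cmd_timeout);

		do {
			if (i40e_asq_done(hw))
				break;
			usleep_range(1000, 2000);
		} while (time_before(jiffies, timeout));
	}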
> Change-ID: I4bbfa6b66d802890baf8b4154061e55942b90958
> Signed-off-by: Kamil Krawczyk <kamil.krawczyk@intel.com>
> Acked-by: Shannon Nelson <shannon.nelson@intel.com>
> Tested-by: Jim Young <jamesx.m.young@intel.com>
> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> ---
> drivers/net/ethernet/intel/i40e/i40e_adminq.c | 5 ++---
> drivers/net/ethernet/intel/i40e/i40e_adminq.h | 2 +-
> drivers/net/ethernet/intel/i40evf/i40e_adminq.c | 5 ++---
> drivers/net/ethernet/intel/i40evf/i40e_adminq.h | 2 +-
> 4 files changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
> b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
> index 72f5d25..057b7bf 100644
> --- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
> +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
> @@ -853,7 +853,6 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
> */
> if (!details->async && !details->postpone) {
> u32 total_delay = 0;
> - u32 delay_len = 10;
>
> do {
> /* AQ designers suggest use of head for better
> @@ -862,8 +861,8 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
> if (i40e_asq_done(hw))
> break;
> /* ugh! delay while spin_lock */
The comment is not right any more.
> - udelay(delay_len);
> - total_delay += delay_len;
> + usleep_range(1000, 2000);
> + total_delay++;
> } while (total_delay < hw->aq.asq_cmd_timeout);
> }
>
> diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.h
> b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
> index ba38a89..df0bd09 100644
> --- a/drivers/net/ethernet/intel/i40e/i40e_adminq.h
> +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
> @@ -141,7 +141,7 @@ static inline int i40e_aq_rc_to_posix(u16 aq_rc)
>
> /* general information */
> #define I40E_AQ_LARGE_BUF 512
> -#define I40E_ASQ_CMD_TIMEOUT 100000 /* usecs */
> +#define I40E_ASQ_CMD_TIMEOUT 100 /* msecs */
It looks like this value is written to asq_cmd_timeout, which makes
me wonder whether anything else can change it - otherwise the compile-time
constant would be used.
Changing the units has broken anything else that modifies the value.
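To illustrate the hazard (the assignment below is hypothetical, nothing
in the tree does this today):

	/* out-of-tree or future caller written against the old usec units */
	hw->aq.asq_cmd_timeout = 250000;	/* author meant "250 ms" */

	/* after this patch the loop sleeps ~1 ms per iteration and counts
	 * iterations, so the same value now means roughly 250 seconds or
	 * more before the command is declared timed out
	 */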
>
> void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
> u16 opcode);
> diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
> b/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
> index f206be9..25c846b 100644
> --- a/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
> +++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
> @@ -801,7 +801,6 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
> */
> if (!details->async && !details->postpone) {
> u32 total_delay = 0;
> - u32 delay_len = 10;
>
> do {
> /* AQ designers suggest use of head for better
> @@ -810,8 +809,8 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
> if (i40evf_asq_done(hw))
> break;
> /* ugh! delay while spin_lock */
> - udelay(delay_len);
> - total_delay += delay_len;
> + usleep_range(1000, 2000);
> + total_delay++;
> } while (total_delay < hw->aq.asq_cmd_timeout);
> }
>
> diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
> b/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
> index 91a5c5b..f40cfac 100644
> --- a/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
> +++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
> @@ -141,7 +141,7 @@ static inline int i40e_aq_rc_to_posix(u16 aq_rc)
>
> /* general information */
> #define I40E_AQ_LARGE_BUF 512
> -#define I40E_ASQ_CMD_TIMEOUT 100000 /* usecs */
> +#define I40E_ASQ_CMD_TIMEOUT 100 /* msecs */
>
> void i40evf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
> u16 opcode);
> --
> 1.9.3
Hmmm.... two drivers containing the same code....
David
* Re: [net-next 1/9] i40e: poll firmware slower
2014-11-07 8:57 ` [net-next 1/9] i40e: poll firmware slower Jeff Kirsher
2014-11-07 9:40 ` David Laight
@ 2014-11-07 13:29 ` Or Gerlitz
2014-11-07 23:02 ` Jesse Brandeburg
1 sibling, 1 reply; 16+ messages in thread
From: Or Gerlitz @ 2014-11-07 13:29 UTC (permalink / raw)
To: Jeff Kirsher
Cc: David Miller, Kamil Krawczyk, Linux Netdev List, nhorman,
sassmann, jogreene
On Fri, Nov 7, 2014 at 10:57 AM, Jeff Kirsher
<jeffrey.t.kirsher@intel.com> wrote:
> The code was polling the firmware tail register for completion
Any reason not to sleep while waiting for this completion? Can the
firmware generate an interrupt?
* Re: [net-next 6/9] ixgbe: fix X540 Completion timeout
2014-11-07 8:57 ` [net-next 6/9] ixgbe: fix X540 Completion timeout Jeff Kirsher
@ 2014-11-07 14:35 ` Sergei Shtylyov
2014-11-07 17:06 ` Skidmore, Donald C
0 siblings, 1 reply; 16+ messages in thread
From: Sergei Shtylyov @ 2014-11-07 14:35 UTC (permalink / raw)
To: Jeff Kirsher, davem; +Cc: Don Skidmore, netdev, nhorman, sassmann, jogreene
Hello.
On 11/7/2014 11:57 AM, Jeff Kirsher wrote:
> From: Don Skidmore <donald.c.skidmore@intel.com>
> On topologies including few levels of PCIe switching X540 can run into an
> unexpected completion error. We get around this by waiting after enabling
> loopback a sufficient amount of time until Tx Data Fetch is sent. We then
> poll the pending transaction bit to ensure we received the completion. Only
> then do we go on to clear the buffers.
> Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> ---
> drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 21 ++++++++++++++++++++-
> 1 file changed, 20 insertions(+), 1 deletion(-)
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
> index b5f484b..e314b53 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
[...]
> @@ -3600,6 +3601,24 @@ void ixgbe_clear_tx_pending(struct ixgbe_hw *hw)
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0 | IXGBE_HLREG0_LPBK);
>
> + /* wait for a last completion before clearing buffers */
> + IXGBE_WRITE_FLUSH(hw);
> + usleep_range(3000, 6000);
> +
> + /* Before proceeding, make sure that the PCIe block does not have
> + * transactions pending.
> + */
> + poll = ixgbe_pcie_timeout_poll(hw);
> + for (i = 0; i < poll; i++) {
> + usleep_range(100, 200);
> + value = ixgbe_read_pci_cfg_word(hw, IXGBE_PCI_DEVICE_STATUS);
> + if (ixgbe_removed(hw->hw_addr))
> + goto out;
Why not just *break*?
> + if (!(value & IXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING))
> + goto out;
Likewise.
> + }
> +
> +out:
> /* initiate cleaning flow for buffers in the PCIe transaction layer */
> gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);
> IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT,
WBR, Sergei
* RE: [net-next 6/9] ixgbe: fix X540 Completion timeout
2014-11-07 14:35 ` Sergei Shtylyov
@ 2014-11-07 17:06 ` Skidmore, Donald C
0 siblings, 0 replies; 16+ messages in thread
From: Skidmore, Donald C @ 2014-11-07 17:06 UTC (permalink / raw)
To: Sergei Shtylyov, Kirsher, Jeffrey T, davem@davemloft.net
Cc: netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
jogreene@redhat.com
> -----Original Message-----
> From: Sergei Shtylyov [mailto:sergei.shtylyov@cogentembedded.com]
> Sent: Friday, November 07, 2014 6:35 AM
> To: Kirsher, Jeffrey T; davem@davemloft.net
> Cc: Skidmore, Donald C; netdev@vger.kernel.org; nhorman@redhat.com;
> sassmann@redhat.com; jogreene@redhat.com
> Subject: Re: [net-next 6/9] ixgbe: fix X540 Completion timeout
>
> Hello.
>
> On 11/7/2014 11:57 AM, Jeff Kirsher wrote:
>
> > From: Don Skidmore <donald.c.skidmore@intel.com>
>
> > On topologies including few levels of PCIe switching X540 can run into
> > an unexpected completion error. We get around this by waiting after
> > enabling loopback a sufficient amount of time until Tx Data Fetch is
> > sent. We then poll the pending transaction bit to ensure we received
> > the completion. Only then do we go on to clear the buffers.
>
> > Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
> > Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
>
> > ---
> > drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 21
> ++++++++++++++++++++-
> > 1 file changed, 20 insertions(+), 1 deletion(-)
>
> > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
> > b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
> > index b5f484b..e314b53 100644
> > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
> > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
> [...]
> > @@ -3600,6 +3601,24 @@ void ixgbe_clear_tx_pending(struct ixgbe_hw
> *hw)
> > hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> > IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0 |
> IXGBE_HLREG0_LPBK);
> >
> > + /* wait for a last completion before clearing buffers */
> > + IXGBE_WRITE_FLUSH(hw);
> > + usleep_range(3000, 6000);
> > +
> > + /* Before proceeding, make sure that the PCIe block does not have
> > + * transactions pending.
> > + */
> > + poll = ixgbe_pcie_timeout_poll(hw);
> > + for (i = 0; i < poll; i++) {
> > + usleep_range(100, 200);
> > + value = ixgbe_read_pci_cfg_word(hw,
> IXGBE_PCI_DEVICE_STATUS);
> > + if (ixgbe_removed(hw->hw_addr))
> > + goto out;
>
> Why not just *break*?
>
> > + if (!(value &
> IXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING))
> > + goto out;
>
> Likewise.
>
> > + }
> > +
> > +out:
> > /* initiate cleaning flow for buffers in the PCIe transaction layer */
> > gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);
> > IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT,
>
> WBR, Sergei
You're right, the breaks would be much cleaner, thanks for pointing it out. :) I can resubmit the patch through Jeff or send a new one, whichever is preferred.
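Roughly what that would look like against the posted hunk (just a sketch,
untested):

	/* Before proceeding, make sure that the PCIe block does not have
	 * transactions pending.
	 */
	poll = ixgbe_pcie_timeout_poll(hw);
	for (i = 0; i < poll; i++) {
		usleep_range(100, 200);
		value = ixgbe_read_pci_cfg_word(hw, IXGBE_PCI_DEVICE_STATUS);
		if (ixgbe_removed(hw->hw_addr))
			break;
		if (!(value & IXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING))
			break;
	}

	/* initiate cleaning flow for buffers in the PCIe transaction layer */
	gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);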
- Don Skidmore <donald.c.skidmore@intel.com>
* Re: [net-next 1/9] i40e: poll firmware slower
2014-11-07 9:40 ` David Laight
@ 2014-11-07 22:58 ` Jesse Brandeburg
0 siblings, 0 replies; 16+ messages in thread
From: Jesse Brandeburg @ 2014-11-07 22:58 UTC (permalink / raw)
To: David Laight
Cc: 'Jeff Kirsher', davem@davemloft.net, Kamil Krawczyk,
netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
jogreene@redhat.com, jesse.brandeburg
Thanks for the review David, comments follow.
On Fri, 7 Nov 2014 09:40:08 +0000
David Laight <David.Laight@ACULAB.COM> wrote:
> From: Jeff Kirsher
> > From: Kamil Krawczyk <kamil.krawczyk@intel.com>
> >
> > The code was polling the firmware tail register for completion every
> > 10 microseconds, which is way faster than the firmware can respond.
> > This changes the poll interval to 1ms, which reduces polling CPU
> > utilization, and the number of times we loop.
>
> Are you sure the code path is allowed to sleep?
Yes, we are never (should never be) in interrupt context when calling
these routines.
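If we ever want the kernel to enforce that, a one-line annotation at the
top of the send path would do it (a sketch, not something in this series):

	/* at the start of i40e_asq_send_command(): */
	might_sleep();	/* warns (with CONFIG_DEBUG_ATOMIC_SLEEP) if we are
			 * ever called from a context that cannot sleep
			 */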
>
> > The maximum delay is still 100ms.
>
> Actually it is now likely to be up to 200ms or more.
> You could convert the maximum delay check to one that
> looks at jiffies - but maybe it doesn't matter.
That's okay, this is all init, reset, or shutdown level code. If the
delay goes up it won't hurt anything.
>
> > --- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
> > +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
> > @@ -853,7 +853,6 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
> > */
> > if (!details->async && !details->postpone) {
> > u32 total_delay = 0;
> > - u32 delay_len = 10;
> >
> > do {
> > /* AQ designers suggest use of head for better
> > @@ -862,8 +861,8 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
> > if (i40e_asq_done(hw))
> > break;
> > /* ugh! delay while spin_lock */
>
> The comment is not right any more.
Yes, it should have been removed.
>
> > - udelay(delay_len);
> > - total_delay += delay_len;
> > + usleep_range(1000, 2000);
> > + total_delay++;
> > } while (total_delay < hw->aq.asq_cmd_timeout);
> > }
> >
> > diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.h
> > b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
> > index ba38a89..df0bd09 100644
> > --- a/drivers/net/ethernet/intel/i40e/i40e_adminq.h
> > +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
> > @@ -141,7 +141,7 @@ static inline int i40e_aq_rc_to_posix(u16 aq_rc)
> >
> > /* general information */
> > #define I40E_AQ_LARGE_BUF 512
> > -#define I40E_ASQ_CMD_TIMEOUT 100000 /* usecs */
> > +#define I40E_ASQ_CMD_TIMEOUT 100 /* msecs */
>
> It looks like this value is written to asq_cmd_timeout, that makes
> be wonder whether anything else can change it - otherwise the compile
> time constant would be used.
> Changing the units has broken anything else that modifies the value.
I pretty much agree with you, but I can tell you why it's there.
Currently nothing in the code changes it. The code was designed such
that it can run on hardware requiring different timeouts.
* Re: [net-next 1/9] i40e: poll firmware slower
2014-11-07 13:29 ` Or Gerlitz
@ 2014-11-07 23:02 ` Jesse Brandeburg
0 siblings, 0 replies; 16+ messages in thread
From: Jesse Brandeburg @ 2014-11-07 23:02 UTC (permalink / raw)
To: Or Gerlitz
Cc: Jeff Kirsher, David Miller, Kamil Krawczyk, Linux Netdev List,
nhorman, sassmann, jogreene, jesse.brandeburg
On Fri, 7 Nov 2014 15:29:15 +0200
Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Fri, Nov 7, 2014 at 10:57 AM, Jeff Kirsher
> <jeffrey.t.kirsher@intel.com> wrote:
> > The code was polling the firmware tail register for completion
>
> any reason not to sleep while waiting for this completion? can the
> firmware generate an interrupt?
The code path in question is called from the drivers when they are in
a synchronous context and can sleep or be rescheduled without issue.
An async mode is possible, but not here, because the code complexity
required to change to async mode is unmaintainable (if even doable, how
do you handle an interrupt while you're in probe?). We use async when
it is practical.
Thanks for your comments,
Jesse