* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2011-09-17 8:04 Jeff Kirsher
0 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2011-09-17 8:04 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo
The following series contains updates to ixgb and igb. The ixgb patch
is a trivial fix. The remaining patches are for igb to do the following:
- cleanup/consolidate Tx descriptors
- trivial fix to remove _adv/_ADV since igb uses only advanced
descriptors
- performance enhancements to help with general and routing performance
This series of igb patches is the first of three from Alex to update and
clean up the igb driver, so additional patches/changes are coming to
complete this work.
The following are changes since commit 765cf9976e937f1cfe9159bf4534967c8bf8eb6d:
tcp: md5: remove one indirection level in tcp_md5sig_pool
and are available in the git repository at:
git://github.com/Jkirsher/net-next.git
Alexander Duyck (12):
igb: Update RXDCTL/TXDCTL configurations
igb: Update max_frame_size to account for an optional VLAN tag if
present
igb: drop support for single buffer mode
igb: streamline Rx buffer allocation and cleanup
igb: update ring and adapter structure to improve performance
igb: Refactor clean_rx_irq to reduce overhead and improve performance
igb: drop the "adv" off function names relating to descriptors
igb: Replace E1000_XX_DESC_ADV with IGB_XX_DESC
igb: Remove multi_tx_table and simplify igb_xmit_frame
igb: Make Tx budget for NAPI user adjustable
igb: split buffer_info into tx_buffer_info and rx_buffer_info
igb: Consolidate creation of Tx context descriptors into a single
function
Jesse Brandeburg (1):
ixgb: eliminate checkstack warnings
drivers/net/ethernet/intel/igb/igb.h | 160 +++--
drivers/net/ethernet/intel/igb/igb_ethtool.c | 36 +-
drivers/net/ethernet/intel/igb/igb_main.c | 972 +++++++++++++-------------
drivers/net/ethernet/intel/ixgb/ixgb_main.c | 10 +-
4 files changed, 586 insertions(+), 592 deletions(-)
--
1.7.6
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2011-10-07 7:18 Jeff Kirsher
2011-10-07 16:38 ` David Miller
0 siblings, 1 reply; 33+ messages in thread
From: Jeff Kirsher @ 2011-10-07 7:18 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
The following series contains updates to e1000, e1000e, igb and ixgbe. Here
is a quick summary:
- e1000: 3 conversions (timers->threads, mdelay->msleep, rtnl->private mutex); a sketch of the timer conversion pattern follows this list
- e1000e: fix jumbo frames on 82579
- igb: several cleanups to reduce stack space and improve performance
- ixgbe: bump driver ver
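As context for the first conversion, here is a rough, hypothetical sketch of
the usual kernel pattern (not the actual e1000 diff; the structure and
function names are made up): a timer_list callback runs in atomic context and
cannot sleep, so it is replaced by a delayed_work item running in process
context, which is also what allows mdelay() to become msleep():

	/*
	 * Hypothetical sketch only -- not the actual e1000 patch.  Shows the
	 * generic timer_list -> delayed_work conversion pattern.
	 */
	#include <linux/workqueue.h>
	#include <linux/delay.h>
	#include <linux/jiffies.h>

	struct example_adapter {
		struct delayed_work watchdog_task; /* was: struct timer_list watchdog_timer */
	};

	static void example_watchdog_task(struct work_struct *work)
	{
		struct example_adapter *adapter =
			container_of(to_delayed_work(work), struct example_adapter,
				     watchdog_task);

		/* process context: sleeping calls such as msleep() are legal here */
		msleep(1);

		/* re-arm, roughly what mod_timer(..., jiffies + 2 * HZ) used to do */
		schedule_delayed_work(&adapter->watchdog_task, 2 * HZ);
	}

	static void example_watchdog_start(struct example_adapter *adapter)
	{
		INIT_DELAYED_WORK(&adapter->watchdog_task, example_watchdog_task);
		schedule_delayed_work(&adapter->watchdog_task, 2 * HZ);
	}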
The following are changes since commit e878d78b9a7403fabc89ecc93c56928b74d14f01:
virtio-net: Verify page list size before fitting into skb
and are available in the git repository at
git://github.com/Jkirsher/net-next.git
Alexander Duyck (8):
igb: Make Tx budget for NAPI user adjustable
igb: split buffer_info into tx_buffer_info and rx_buffer_info
igb: Consolidate creation of Tx context descriptors into a single
function
igb: Make first and tx_buffer_info->next_to_watch into pointers
igb: Create separate functions for generating cmd_type and olinfo
igb: Cleanup protocol handling in transmit path
igb: Combine all flag info fields into a single tx_flags structure
igb: consolidate creation of Tx buffer info and data descriptor
Bruce Allan (1):
e1000e: bad short packets received when jumbos enabled on 82579
Don Skidmore (1):
ixgbe: bump version number
Jesse Brandeburg (3):
e1000: convert hardware management from timers to threads
e1000: convert mdelay to msleep
e1000: convert to private mutex from rtnl
drivers/net/ethernet/intel/e1000/e1000.h | 12 +-
drivers/net/ethernet/intel/e1000/e1000_hw.c | 22 +-
drivers/net/ethernet/intel/e1000/e1000_main.c | 169 +++---
drivers/net/ethernet/intel/e1000e/ich8lan.c | 2 +-
drivers/net/ethernet/intel/igb/e1000_82575.h | 2 +
drivers/net/ethernet/intel/igb/igb.h | 54 +-
drivers/net/ethernet/intel/igb/igb_ethtool.c | 16 +-
drivers/net/ethernet/intel/igb/igb_main.c | 840 ++++++++++++++-----------
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 4 +-
9 files changed, 600 insertions(+), 521 deletions(-)
--
1.7.6.4
* Re: [net-next 00/13][pull request] Intel Wired LAN Driver Updates
2011-10-07 7:18 Jeff Kirsher
@ 2011-10-07 16:38 ` David Miller
0 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2011-10-07 16:38 UTC (permalink / raw)
To: jeffrey.t.kirsher; +Cc: netdev, gospo, sassmann
From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Date: Fri, 7 Oct 2011 00:18:32 -0700
> The following series contains updates to e1000, e1000e, igb and ixgbe. Here
> is a quick summary:
> - e1000: 3 conversions (timers->threads, mdelay->msleep, rtnl->private mutex)
> - e1000e: fix jumbo frames on 82579
> - igb: several cleanups to reduce stack space and improve performance
> - ixgbe: bump driver ver
>
> The following are changes since commit e878d78b9a7403fabc89ecc93c56928b74d14f01:
> virtio-net: Verify page list size before fitting into skb
> and are available in the git repository at
> git://github.com/Jkirsher/net-next.git
Pulled, thanks Jeff.
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2012-07-21 23:08 Jeff Kirsher
2012-07-22 19:24 ` David Miller
0 siblings, 1 reply; 33+ messages in thread
From: Jeff Kirsher @ 2012-07-21 23:08 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to ixgbe and ixgbevf.
The following are changes since commit 186e868786f97c8026f0a81400b451ace306b3a4:
forcedeth: spin_unlock_irq in interrupt handler fix
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Akeem G. Abodunrin (1):
igb: reset PHY in the link_up process to recover PHY setting after
power down.
Alexander Duyck (8):
ixgbe: Drop probe_vf and merge functionality into ixgbe_enable_sriov
ixgbe: Change how we check for pre-existing and assigned VFs
ixgbevf: Add lock around mailbox ops to prevent simultaneous access
ixgbevf: Add support for PCI error handling
ixgbe: Fix handling of FDIR_HASH flag
ixgbe: Reduce Rx header size to what is actually used
ixgbe: Use num_tcs.pg_tcs as upper limit for TC when checking based
on UP
ixgbe: Use 1TC DCB instead of disabling DCB for MSI and legacy
interrupts
Don Skidmore (1):
ixgbe: add support for new 82599 device
Greg Rose (1):
ixgbevf: Fix namespace issue with ixgbe_write_eitr
John Fastabend (2):
ixgbe: fix RAR entry counting for generic and fdb_add()
ixgbe: remove extra unused queues in DCB + FCoE case
drivers/net/ethernet/intel/igb/igb_main.c | 3 +-
drivers/net/ethernet/intel/ixgbe/ixgbe.h | 16 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c | 12 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 41 ++++--
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 140 +++++++++---------
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 151 ++++++++-----------
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h | 1 -
drivers/net/ethernet/intel/ixgbe/ixgbe_type.h | 1 +
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h | 3 +-
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 164 +++++++++++++++++----
10 files changed, 323 insertions(+), 209 deletions(-)
--
1.7.10.4
* Re: [net-next 00/13][pull request] Intel Wired LAN Driver Updates
2012-07-21 23:08 Jeff Kirsher
@ 2012-07-22 19:24 ` David Miller
2012-07-22 19:37 ` David Miller
0 siblings, 1 reply; 33+ messages in thread
From: David Miller @ 2012-07-22 19:24 UTC (permalink / raw)
To: jeffrey.t.kirsher; +Cc: netdev, gospo, sassmann
From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Date: Sat, 21 Jul 2012 16:08:49 -0700
> This series contains updates to ixgbe and ixgbevf.
>
> The following are changes since commit 186e868786f97c8026f0a81400b451ace306b3a4:
> forcedeth: spin_unlock_irq in interrupt handler fix
> and are available in the git repository at:
> git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Pulled, thanks Jeff.
* Re: [net-next 00/13][pull request] Intel Wired LAN Driver Updates
2012-07-22 19:24 ` David Miller
@ 2012-07-22 19:37 ` David Miller
2012-07-22 21:39 ` Jeff Kirsher
0 siblings, 1 reply; 33+ messages in thread
From: David Miller @ 2012-07-22 19:37 UTC (permalink / raw)
To: jeffrey.t.kirsher; +Cc: netdev, gospo, sassmann
From: David Miller <davem@davemloft.net>
Date: Sun, 22 Jul 2012 12:24:05 -0700 (PDT)
> From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Date: Sat, 21 Jul 2012 16:08:49 -0700
>
>> This series contains updates to ixgbe and ixgbevf.
>>
>> The following are changes since commit 186e868786f97c8026f0a81400b451ace306b3a4:
>> forcedeth: spin_unlock_irq in interrupt handler fix
>> and are available in the git repository at:
>> git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
>
> Pulled, thanks Jeff.
Can you guys actually build test this stuff?
====================
[PATCH] ixgbe: Fix build with PCI_IOV enabled.
Signed-off-by: David S. Miller <davem@davemloft.net>
---
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 47b2ce7..4fea871 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -236,7 +236,7 @@ void ixgbe_disable_sriov(struct ixgbe_adapter *adapter)
if (ixgbe_vfs_are_assigned(adapter)) {
e_dev_warn("Unloading driver while VFs are assigned - VFs will not be deallocated\n");
return;
-
+ }
/* disable iov and allow time for transactions to clear */
pci_disable_sriov(adapter->pdev);
#endif
--
1.7.10.4
* Re: [net-next 00/13][pull request] Intel Wired LAN Driver Updates
2012-07-22 19:37 ` David Miller
@ 2012-07-22 21:39 ` Jeff Kirsher
2012-07-22 21:53 ` David Miller
0 siblings, 1 reply; 33+ messages in thread
From: Jeff Kirsher @ 2012-07-22 21:39 UTC (permalink / raw)
To: David Miller; +Cc: netdev, gospo, sassmann
On Sun, 2012-07-22 at 12:37 -0700, David Miller wrote:
> From: David Miller <davem@davemloft.net>
> Date: Sun, 22 Jul 2012 12:24:05 -0700 (PDT)
>
> > From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > Date: Sat, 21 Jul 2012 16:08:49 -0700
> >
> >> This series contains updates to ixgbe and ixgbevf.
> >>
> >> The following are changes since commit
> 186e868786f97c8026f0a81400b451ace306b3a4:
> >> forcedeth: spin_unlock_irq in interrupt handler fix
> >> and are available in the git repository at:
> >> git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
> master
> >
> > Pulled, thanks Jeff.
>
> Can you guys actually build test this stuff?
I did, but it appears I did not have PCI_IOV enabled. That was my bad,
sorry.
* Re: [net-next 00/13][pull request] Intel Wired LAN Driver Updates
2012-07-22 21:39 ` Jeff Kirsher
@ 2012-07-22 21:53 ` David Miller
0 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2012-07-22 21:53 UTC (permalink / raw)
To: jeffrey.t.kirsher; +Cc: netdev, gospo, sassmann
From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Date: Sun, 22 Jul 2012 14:39:53 -0700
> On Sun, 2012-07-22 at 12:37 -0700, David Miller wrote:
>> From: David Miller <davem@davemloft.net>
>> Date: Sun, 22 Jul 2012 12:24:05 -0700 (PDT)
>>
>> > From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
>> > Date: Sat, 21 Jul 2012 16:08:49 -0700
>> >
>> >> This series contains updates to ixgbe and ixgbevf.
>> >>
>> >> The following are changes since commit
>> 186e868786f97c8026f0a81400b451ace306b3a4:
>> >> forcedeth: spin_unlock_irq in interrupt handler fix
>> >> and are available in the git repository at:
>> >> git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
>> master
>> >
>> > Pulled, thanks Jeff.
>>
>> Can you guys actually build test this stuff?
>
> I did, but it appears I did not have PCI_IOV enabled. That was my bad,
> sorry.
If you're not doing "allmodconfig" builds, there are by definition
parts you are not testing. It's the first thing I do with any change
I apply.
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2012-08-23 9:56 Jeff Kirsher
0 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2012-08-23 9:56 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to e1000 and igb. The e1000 patches
consist of code cleanups, while the igb patches add loopback support
for the i210 as well as PTP fixes.
The following are changes since commit 0fa7fa98dbcc2789409ed24e885485e645803d7f:
packet: Protect packet sk list with mutex (v2)
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Bruce Allan (7):
e1000e: use correct type for read of 32-bit register
e1000e: cleanup strict checkpatch check
e1000e: cleanup - remove inapplicable comment
e1000e: cleanup checkpatch PREFER_PR_LEVEL warning
e1000e: cleanup - remove unnecessary variable
e1000e: update driver version number
e1000e: cleanup strict checkpatch MEMORY_BARRIER checks
Carolyn Wyborny (1):
igb: Add loopback test support for i210.
Eric Dumazet (1):
igb: reduce Rx header size
Matthew Vick (4):
igb: Tidy up wrapping for CONFIG_IGB_PTP.
igb: Update PTP function names/variables and locations.
igb: Correct PTP support query from ethtool.
igb: Store the MAC address in the name in the PTP struct.
drivers/net/ethernet/intel/e1000e/82571.c | 4 +-
drivers/net/ethernet/intel/e1000e/ethtool.c | 3 +-
drivers/net/ethernet/intel/e1000e/netdev.c | 28 +-
drivers/net/ethernet/intel/igb/igb.h | 29 +-
drivers/net/ethernet/intel/igb/igb_ethtool.c | 136 ++++----
drivers/net/ethernet/intel/igb/igb_main.c | 272 +--------------
drivers/net/ethernet/intel/igb/igb_ptp.c | 488 ++++++++++++++++++++-------
7 files changed, 490 insertions(+), 470 deletions(-)
--
1.7.11.4
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2012-10-20 6:25 Jeff Kirsher
0 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2012-10-20 6:25 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to ixgbe only.
The following are changes since commit 72ec301a27badb11635b08dc845101815abcf4a7:
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Alexander Duyck (7):
ixgbe: Add support for IPv6 and UDP to ixgbe_get_headlen
ixgbe: Add support for tracking the default user priority to SR-IOV
ixgbe: Add support for GET_QUEUES message to get DCB configuration
ixgbe: Enable support for VF API version 1.1 in the PF.
ixgbevf: Add VF DCB + SR-IOV support
ixgbe: Drop unnecessary addition from ixgbe_set_rx_buffer_len
ixgbe: Fix possible memory leak in ixgbe_set_ringparam
Don Skidmore (2):
ixgbe: Add function ixgbe_reset_pipeline_82599
ixgbe: Add support for pipeline reset
Emil Tantilov (1):
ixgbe: add WOL support for new subdevice id
Jacob Keller (1):
ixgbe: (PTP) refactor init, cyclecounter and reset
Tushar Dave (1):
ixgbe: Correcting small packet padding
Wei Yongjun (1):
ixgbe: using is_zero_ether_addr() to simplify the code
drivers/net/ethernet/intel/ixgbe/ixgbe.h | 7 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c | 149 ++++++++++++++++----
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 73 +++++++++-
drivers/net/ethernet/intel/ixgbe/ixgbe_common.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c | 102 +++++++-------
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 66 +++++++--
drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h | 10 ++
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c | 109 +++++++--------
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 124 +++++++++++++----
drivers/net/ethernet/intel/ixgbe/ixgbe_type.h | 1 +
drivers/net/ethernet/intel/ixgbevf/defines.h | 7 +-
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h | 4 +-
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 159 +++++++++++++++++++++-
drivers/net/ethernet/intel/ixgbevf/mbx.h | 10 ++
drivers/net/ethernet/intel/ixgbevf/vf.c | 58 ++++++++
drivers/net/ethernet/intel/ixgbevf/vf.h | 2 +
16 files changed, 688 insertions(+), 194 deletions(-)
--
1.7.11.7
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2012-10-30 7:04 Jeff Kirsher
2012-10-31 18:28 ` David Miller
0 siblings, 1 reply; 33+ messages in thread
From: Jeff Kirsher @ 2012-10-30 7:04 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to ixgbe, ixgbevf, igbvf, igb and the
networking core (bridge). Most notable is the addition of support
for link local multicast addresses in SR-IOV mode to the networking
core.
Also note that the ixgbe patches "ixgbe: Add support for pipeline reset" and
"ixgbe: Fix return value from macvlan filter function" have been revised based
on community feedback.
The following are changes since commit a932657f51eadb8280166e82dc7034dfbff3985a:
net: sierra: shut up sparse restricted type warnings
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Alexander Duyck (2):
ixgbe: Do not decrement budget in ixgbe_clean_rx_irq
igb: Fix sparse warning in igb_ptp_rx_pktstamp
Carolyn Wyborny (1):
igb: Update firmware version info for ethtool output.
Don Skidmore (1):
ixgbe: Add support for pipeline reset
Emil Tantilov (1):
ixgbe: clean up the condition for turning on/off the laser
Greg Rose (4):
ixgbe: Fix return value from macvlan filter function
ixgbe: Return success or failure on VF MAC filter set
ixgbevf: Do not forward LLDP type frames
igbvf: Check for error on dma_map_single call
Jiri Benc (1):
ixgbe: reduce PTP rx path overhead
John Fastabend (1):
net, ixgbe: handle link local multicast addresses in SR-IOV mode
Josh Hay (1):
ixgbe: add/update descriptor maps in comments
Matthew Vick (1):
igb: Enable auto-crossover during forced operation on 82580 and
above.
drivers/net/ethernet/intel/igb/e1000_defines.h | 14 +++
drivers/net/ethernet/intel/igb/e1000_mac.c | 4 +
drivers/net/ethernet/intel/igb/e1000_nvm.c | 70 +++++++++++++
drivers/net/ethernet/intel/igb/e1000_nvm.h | 16 +++
drivers/net/ethernet/intel/igb/e1000_phy.c | 29 +++---
drivers/net/ethernet/intel/igb/igb_main.c | 76 +++++----------
drivers/net/ethernet/intel/igb/igb_ptp.c | 2 +-
drivers/net/ethernet/intel/igbvf/netdev.c | 13 +++
drivers/net/ethernet/intel/ixgbe/ixgbe.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c | 114 ++++++++++++++++------
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 70 ++++++++++++-
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 103 ++++++++++++-------
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c | 6 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 5 +-
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 5 +
drivers/net/ethernet/intel/ixgbevf/vf.c | 3 +
include/linux/etherdevice.h | 19 ++++
net/bridge/br_device.c | 2 +-
net/bridge/br_input.c | 15 ---
net/bridge/br_private.h | 1 -
net/bridge/br_sysfs_br.c | 3 +-
21 files changed, 419 insertions(+), 152 deletions(-)
--
1.7.11.7
* Re: [net-next 00/13][pull request] Intel Wired LAN Driver Updates
2012-10-30 7:04 Jeff Kirsher
@ 2012-10-31 18:28 ` David Miller
0 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2012-10-31 18:28 UTC (permalink / raw)
To: jeffrey.t.kirsher; +Cc: netdev, gospo, sassmann
From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Date: Tue, 30 Oct 2012 00:04:17 -0700
> This series contains updates to ixgbe, ixgbevf, igbvf, igb and the
> networking core (bridge). Most notable is the addition of support
> for link local multicast addresses in SR-IOV mode to the networking
> core.
>
> Also note that the ixgbe patches "ixgbe: Add support for pipeline reset" and
> "ixgbe: Fix return value from macvlan filter function" have been revised based
> on community feedback.
Pulled, thanks Jeff.
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2013-04-04 11:37 Jeff Kirsher
0 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-04-04 11:37 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to ixgbe and igb.
For ixgbe (and igb), Alex fixes an issue where we were incorrectly checking
the entire frag_off field when we only wanted the fragment offset. Alex
also provides a patch to drop the check for PAGE_SIZE in transmit since the
default configuration is to allocate 32k for all buffers.
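For context on the frag_off fix, the IPv4 frag_off field packs the DF/MF flag
bits together with the 13-bit fragment offset, so testing the whole field also
matches unfragmented packets that merely have DF set. A minimal illustrative
sketch of the distinction (not the exact driver code):

	#include <linux/types.h>
	#include <linux/ip.h>
	#include <net/ip.h>

	/* Sketch only: test just the 13-bit fragment offset, not the whole field. */
	static bool example_has_frag_offset(const struct iphdr *iph)
	{
		/* wrong: ntohs(iph->frag_off) != 0 also fires when only IP_DF is set */
		return (ntohs(iph->frag_off) & IP_OFFSET) != 0;
	}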
Emil provides the third ixgbe patch, which is a simple change to make the
calculation of eerd consistent between the read and write functions
by using | instead of + for IXGBE_EEPROM_RW_REG_START.
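A rough sketch of the kind of change described (the IXGBE_EEPROM_RW_* names
are the existing ixgbe defines; the helper function itself is illustrative):

	#include <linux/types.h>
	#include "ixgbe_type.h"	/* IXGBE_EEPROM_RW_ADDR_SHIFT, IXGBE_EEPROM_RW_REG_START */

	/* Sketch: build the EERD command word the same way in the read and
	 * write helpers.  The old read path used '+' where write used '|';
	 * with a single-bit mask the two happen to be equivalent, but '|'
	 * makes the intent, and the two functions, consistent.
	 */
	static u32 example_build_eerd(u16 offset)
	{
		return ((u32)offset << IXGBE_EEPROM_RW_ADDR_SHIFT) |
		       IXGBE_EEPROM_RW_REG_START;
	}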
The remaining patches in the series are against igb, the largest being
my patch to clean up code comments and whitespace to align the igb
driver with the networking style of code comments. While cleaning up the
code comments, I also fixed several other whitespace/checkpatch.pl code
formatting issues.
Other notable igb patches add support for 100base-fx SFPs, add support
for reading and exporting SFP data over i2c, and, on EEE-capable
devices, query the PHY to determine what the link partner is
advertising.
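A minimal sketch of what querying the PHY for the link partner's EEE
advertisement involves (the MDIO register names and the
mmd_eee_adv_to_ethtool_adv_t() helper are standard kernel definitions; the
PHY read accessor here is hypothetical, not an igb function):

	#include <linux/mdio.h>
	#include <linux/ethtool.h>

	/* Hypothetical PHY accessor standing in for the driver's own MMD read helper. */
	int example_phy_read_mmd(int devad, u16 reg, u16 *val);

	static int example_get_eee_lp_advertisement(struct ethtool_eee *edata)
	{
		u16 lp;
		int err;

		/* link-partner EEE ability: MMD 7 (AN), register MDIO_AN_EEE_LPABLE */
		err = example_phy_read_mmd(MDIO_MMD_AN, MDIO_AN_EEE_LPABLE, &lp);
		if (err)
			return err;

		/* translate the MMD bit layout into ethtool advertisement bits */
		edata->lp_advertised = mmd_eee_adv_to_ethtool_adv_t(lp);
		return 0;
	}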
The following are changes since commit d66248326410ed0d3e813ebe974b3e6638df0717:
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Akeem G. Abodunrin (5):
igb: Support for 100base-fx SFP
igb: Support to read and export SFF-8472/8079 data
igb: Implement support to power sfp cage and turn on I2C
igb: random code and comments fix
igb: Fix sparse warnings on function pointers
Alexander Duyck (5):
ixgbe: Mask off check of frag_off as we only want fragment offset
ixgbe: Drop check for PAGE_SIZE from ixgbe_xmit_frame_ring
igb: Mask off check of frag_off as we only want fragment offset
igb: Pull adapter out of main path in igb_xmit_frame_ring
igb: Use rx/tx_itr_setting when setting up initial value of itr
Emil Tantilov (1):
ixgbe: don't do arithmetic operations on bitmasks
Jeff Kirsher (1):
igb: Fix code comments and whitespace
Matthew Vick (1):
igb: Enable EEE LP advertisement
drivers/net/ethernet/intel/igb/e1000_82575.c | 130 +--
drivers/net/ethernet/intel/igb/e1000_82575.h | 1 +
drivers/net/ethernet/intel/igb/e1000_defines.h | 34 +-
drivers/net/ethernet/intel/igb/e1000_hw.h | 53 +-
drivers/net/ethernet/intel/igb/e1000_i210.c | 93 +-
drivers/net/ethernet/intel/igb/e1000_i210.h | 4 +
drivers/net/ethernet/intel/igb/e1000_mac.c | 111 +--
drivers/net/ethernet/intel/igb/e1000_mac.h | 17 +-
drivers/net/ethernet/intel/igb/e1000_mbx.c | 11 +-
drivers/net/ethernet/intel/igb/e1000_mbx.h | 52 +-
drivers/net/ethernet/intel/igb/e1000_nvm.c | 25 +-
drivers/net/ethernet/intel/igb/e1000_phy.c | 258 ++---
drivers/net/ethernet/intel/igb/e1000_regs.h | 49 +-
drivers/net/ethernet/intel/igb/igb.h | 132 +--
drivers/net/ethernet/intel/igb/igb_ethtool.c | 299 ++++--
drivers/net/ethernet/intel/igb/igb_hwmon.c | 29 +-
drivers/net/ethernet/intel/igb/igb_main.c | 1187 ++++++++++++-----------
drivers/net/ethernet/intel/igb/igb_ptp.c | 57 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 2 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 9 +-
20 files changed, 1323 insertions(+), 1230 deletions(-)
--
1.7.11.7
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2013-10-10 6:40 Jeff Kirsher
2013-10-10 6:40 ` [net-next 01/13] i40e: Link code updates Jeff Kirsher
` (13 more replies)
0 siblings, 14 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:40 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to i40e only.
Alex provides the majority of the patches against i40e, where he cleans
up the Tx and Rx queues and aligns the code with the known good Tx/Rx
queue code in the ixgbe driver.
Anjali provides an i40e patch to update link events to not print to
the log until the device is administratively up.
Catherine provides a patch to update the driver version.
The following are changes since commit b68656b22fd7a3e03087c11b2b45c15c0b328609:
be2net: change the driver version number to 4.9.224.0
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Alexander Duyck (11):
i40e: Drop unused completed stat
i40e: Cleanup Tx buffer info layout
i40e: Do not directly increment Tx next_to_use
i40e: clean up Tx fast path
i40e: Drop dead code and flags from Tx hotpath
i40e: Add support for Tx byte queue limits
i40e: Split bytes and packets from Rx/Tx stats
i40e: Move q_vectors from pointer to array to array of pointers
i40e: Replace ring container array with linked list
i40e: Move rings from pointer to array to array of pointers
i40e: Add support for 64 bit netstats
Anjali Singhai (1):
i40e: Link code updates
Catherine Sullivan (1):
i40e: Bump version
drivers/net/ethernet/intel/i40e/i40e.h | 11 +-
drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 207 ++++++------
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 69 ++--
drivers/net/ethernet/intel/i40e/i40e_main.c | 442 ++++++++++++++++---------
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 358 ++++++++++----------
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 35 +-
6 files changed, 650 insertions(+), 472 deletions(-)
--
1.8.3.1
* [net-next 01/13] i40e: Link code updates
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
@ 2013-10-10 6:40 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 02/13] i40e: Drop unused completed stat Jeff Kirsher
` (12 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:40 UTC (permalink / raw)
To: davem
Cc: Anjali Singhai, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Anjali Singhai <anjali.singhai@intel.com>
Link events should not print to the log until the device is
administratively up.
Signed-off-by: Anjali Singhai <anjali.singhai@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_main.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 221aa47..657babe 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -3703,8 +3703,11 @@ static int i40e_up_complete(struct i40e_vsi *vsi)
if ((pf->hw.phy.link_info.link_info & I40E_AQ_LINK_UP) &&
(vsi->netdev)) {
+ netdev_info(vsi->netdev, "NIC Link is Up\n");
netif_tx_start_all_queues(vsi->netdev);
netif_carrier_on(vsi->netdev);
+ } else if (vsi->netdev) {
+ netdev_info(vsi->netdev, "NIC Link is Down\n");
}
i40e_service_event_schedule(pf);
@@ -4153,8 +4156,9 @@ static void i40e_link_event(struct i40e_pf *pf)
if (new_link == old_link)
return;
- netdev_info(pf->vsi[pf->lan_vsi]->netdev,
- "NIC Link is %s\n", (new_link ? "Up" : "Down"));
+ if (!test_bit(__I40E_DOWN, &pf->vsi[pf->lan_vsi]->state))
+ netdev_info(pf->vsi[pf->lan_vsi]->netdev,
+ "NIC Link is %s\n", (new_link ? "Up" : "Down"));
/* Notify the base of the switch tree connected to
* the link. Floating VEBs are not notified.
--
1.8.3.1
* [net-next 02/13] i40e: Drop unused completed stat
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
2013-10-10 6:40 ` [net-next 01/13] i40e: Link code updates Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 03/13] i40e: Cleanup Tx buffer info layout Jeff Kirsher
` (11 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
The Tx "completed" stat was part of the original rewrite for detecting
Tx hangs. However some time ago in ixgbe I determined that we could
just use the packets stat instead. Since then this stat was
removed from ixgbe and it serves no purpose in i40e so it can be
dropped.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 3 +--
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 3 +--
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 1 -
3 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index 8dbd91f..0b01c61 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -560,10 +560,9 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
vsi->tx_rings[i].tx_stats.bytes,
vsi->tx_rings[i].tx_stats.restart_queue);
dev_info(&pf->pdev->dev,
- " tx_rings[%i]: tx_stats: tx_busy = %lld, completed = %lld, tx_done_old = %lld\n",
+ " tx_rings[%i]: tx_stats: tx_busy = %lld, tx_done_old = %lld\n",
i,
vsi->tx_rings[i].tx_stats.tx_busy,
- vsi->tx_rings[i].tx_stats.completed,
vsi->tx_rings[i].tx_stats.tx_done_old);
dev_info(&pf->pdev->dev,
" tx_rings[%i]: size = %i, dma = 0x%08lx\n",
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 49d2cfa..32c9aeb 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -346,8 +346,7 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
cpu_to_le64(I40E_TX_DESC_DTYPE_DESC_DONE)))
break;
- /* count the packet as being completed */
- tx_ring->tx_stats.completed++;
+ /* clear next_to_watch to prevent false hangs */
tx_buf->next_to_watch = NULL;
tx_buf->time_stamp = 0;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index b1d7722..2fbacaf 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -134,7 +134,6 @@ struct i40e_tx_queue_stats {
u64 bytes;
u64 restart_queue;
u64 tx_busy;
- u64 completed;
u64 tx_done_old;
};
--
1.8.3.1
* [net-next 03/13] i40e: Cleanup Tx buffer info layout
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
2013-10-10 6:40 ` [net-next 01/13] i40e: Link code updates Jeff Kirsher
2013-10-10 6:41 ` [net-next 02/13] i40e: Drop unused completed stat Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 04/13] i40e: Do not directly increment Tx next_to_use Jeff Kirsher
` (10 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
- drop the mapped_as_page u8 from the Tx buffer info as it was unused
- use the DMA unmap accessors for Tx DMA
- replace checks of DMA with checks of the unmap length to verify if an
unmap is needed
- update the Tx buffer layout to make it consistent with igb, ixgbe
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 16 ++++++++--------
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 13 ++++++-------
2 files changed, 14 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 32c9aeb..3bc3efa 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -195,20 +195,20 @@ static void i40e_fd_handle_status(struct i40e_ring *rx_ring, u32 qw, u8 prog_id)
static inline void i40e_unmap_tx_resource(struct i40e_ring *ring,
struct i40e_tx_buffer *tx_buffer)
{
- if (tx_buffer->dma) {
+ if (dma_unmap_len(tx_buffer, len)) {
if (tx_buffer->tx_flags & I40E_TX_FLAGS_MAPPED_AS_PAGE)
dma_unmap_page(ring->dev,
- tx_buffer->dma,
- tx_buffer->length,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
DMA_TO_DEVICE);
else
dma_unmap_single(ring->dev,
- tx_buffer->dma,
- tx_buffer->length,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
DMA_TO_DEVICE);
}
- tx_buffer->dma = 0;
tx_buffer->time_stamp = 0;
+ dma_unmap_len_set(tx_buffer, len, 0);
}
/**
@@ -1554,9 +1554,9 @@ static void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
}
tx_bi = &tx_ring->tx_bi[i];
- tx_bi->length = buf_offset + size;
+ dma_unmap_len_set(tx_bi, len, buf_offset + size);
+ dma_unmap_addr_set(tx_bi, dma, dma);
tx_bi->tx_flags = tx_flags;
- tx_bi->dma = dma;
tx_desc->buffer_addr = cpu_to_le64(dma + buf_offset);
tx_desc->cmd_type_offset_bsz = build_ctob(td_cmd, td_offset,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 2fbacaf..711f549 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -110,15 +110,14 @@
#define I40E_TX_FLAGS_VLAN_SHIFT 16
struct i40e_tx_buffer {
- struct sk_buff *skb;
- dma_addr_t dma;
- unsigned long time_stamp;
- u16 length;
- u32 tx_flags;
struct i40e_tx_desc *next_to_watch;
+ unsigned long time_stamp;
+ struct sk_buff *skb;
unsigned int bytecount;
- u16 gso_segs;
- u8 mapped_as_page;
+ unsigned short gso_segs;
+ DEFINE_DMA_UNMAP_ADDR(dma);
+ DEFINE_DMA_UNMAP_LEN(len);
+ u32 tx_flags;
};
struct i40e_rx_buffer {
--
1.8.3.1
* [net-next 04/13] i40e: Do not directly increment Tx next_to_use
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (2 preceding siblings ...)
2013-10-10 6:41 ` [net-next 03/13] i40e: Cleanup Tx buffer info layout Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 05/13] i40e: clean up Tx fast path Jeff Kirsher
` (9 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
Avoid directly incrementing next_to_use, for multiple reasons. The main
reason is that if we directly increment it, it can attain a state where
it is equal to the ring count. Technically this is a state it should not
be able to reach, but the way this is written it now can.
This patch pulls the value into a local variable, increments it, and
writes back either the incremented value or 0, depending on whether the
value has reached the ring count.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 46 ++++++++++++++++-------------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 3bc3efa..e8db4dc 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -73,11 +73,12 @@ int i40e_program_fdir_filter(struct i40e_fdir_data *fdir_data,
goto dma_fail;
/* grab the next descriptor */
- fdir_desc = I40E_TX_FDIRDESC(tx_ring, tx_ring->next_to_use);
- tx_buf = &tx_ring->tx_bi[tx_ring->next_to_use];
- tx_ring->next_to_use++;
- if (tx_ring->next_to_use == tx_ring->count)
- tx_ring->next_to_use = 0;
+ i = tx_ring->next_to_use;
+ fdir_desc = I40E_TX_FDIRDESC(tx_ring, i);
+ tx_buf = &tx_ring->tx_bi[i];
+
+ i++;
+ tx_ring->next_to_use = (i < tx_ring->count) ? i : 0;
fdir_desc->qindex_flex_ptype_vsi = cpu_to_le32((fdir_data->q_index
<< I40E_TXD_FLTR_QW0_QINDEX_SHIFT)
@@ -134,11 +135,11 @@ int i40e_program_fdir_filter(struct i40e_fdir_data *fdir_data,
fdir_desc->fd_id = cpu_to_le32(fdir_data->fd_id);
/* Now program a dummy descriptor */
- tx_desc = I40E_TX_DESC(tx_ring, tx_ring->next_to_use);
- tx_buf = &tx_ring->tx_bi[tx_ring->next_to_use];
- tx_ring->next_to_use++;
- if (tx_ring->next_to_use == tx_ring->count)
- tx_ring->next_to_use = 0;
+ i = tx_ring->next_to_use;
+ tx_desc = I40E_TX_DESC(tx_ring, i);
+
+ i++;
+ tx_ring->next_to_use = (i < tx_ring->count) ? i : 0;
tx_desc->buffer_addr = cpu_to_le64(dma);
td_cmd = I40E_TX_DESC_CMD_EOP |
@@ -148,9 +149,6 @@ int i40e_program_fdir_filter(struct i40e_fdir_data *fdir_data,
tx_desc->cmd_type_offset_bsz =
build_ctob(td_cmd, 0, I40E_FDIR_MAX_RAW_PACKET_LOOKUP, 0);
- /* Mark the data descriptor to be watched */
- tx_buf->next_to_watch = tx_desc;
-
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
* applicable for weak-ordered memory model archs,
@@ -158,6 +156,9 @@ int i40e_program_fdir_filter(struct i40e_fdir_data *fdir_data,
*/
wmb();
+ /* Mark the data descriptor to be watched */
+ tx_buf->next_to_watch = tx_desc;
+
writel(tx_ring->next_to_use, tx_ring->tail);
return 0;
@@ -1143,6 +1144,7 @@ static void i40e_atr(struct i40e_ring *tx_ring, struct sk_buff *skb,
struct tcphdr *th;
unsigned int hlen;
u32 flex_ptype, dtype_cmd;
+ u16 i;
/* make sure ATR is enabled */
if (!(pf->flags & I40E_FLAG_FDIR_ATR_ENABLED))
@@ -1182,10 +1184,11 @@ static void i40e_atr(struct i40e_ring *tx_ring, struct sk_buff *skb,
tx_ring->atr_count = 0;
/* grab the next descriptor */
- fdir_desc = I40E_TX_FDIRDESC(tx_ring, tx_ring->next_to_use);
- tx_ring->next_to_use++;
- if (tx_ring->next_to_use == tx_ring->count)
- tx_ring->next_to_use = 0;
+ i = tx_ring->next_to_use;
+ fdir_desc = I40E_TX_FDIRDESC(tx_ring, i);
+
+ i++;
+ tx_ring->next_to_use = (i < tx_ring->count) ? i : 0;
flex_ptype = (tx_ring->queue_index << I40E_TXD_FLTR_QW0_QINDEX_SHIFT) &
I40E_TXD_FLTR_QW0_QINDEX_MASK;
@@ -1481,15 +1484,16 @@ static void i40e_create_tx_ctx(struct i40e_ring *tx_ring,
const u32 cd_tunneling, const u32 cd_l2tag2)
{
struct i40e_tx_context_desc *context_desc;
+ int i = tx_ring->next_to_use;
if (!cd_type_cmd_tso_mss && !cd_tunneling && !cd_l2tag2)
return;
/* grab the next descriptor */
- context_desc = I40E_TX_CTXTDESC(tx_ring, tx_ring->next_to_use);
- tx_ring->next_to_use++;
- if (tx_ring->next_to_use == tx_ring->count)
- tx_ring->next_to_use = 0;
+ context_desc = I40E_TX_CTXTDESC(tx_ring, i);
+
+ i++;
+ tx_ring->next_to_use = (i < tx_ring->count) ? i : 0;
/* cpu_to_le32 and assign to struct fields */
context_desc->tunneling_params = cpu_to_le32(cd_tunneling);
--
1.8.3.1
* [net-next 05/13] i40e: clean up Tx fast path
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (3 preceding siblings ...)
2013-10-10 6:41 ` [net-next 04/13] i40e: Do not directly increment Tx next_to_use Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 06/13] i40e: Drop dead code and flags from Tx hotpath Jeff Kirsher
` (8 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
Sync the fast path for i40e_tx_map and i40e_clean_tx_irq so that they
are similar to igb and ixgbe.
- Only update the Tx descriptor ring in tx_map
- Make skb mapping always on the first buffer in the chain
- Drop the use of MAPPED_AS_PAGE Tx flag
- Only store flags on the first buffer_info structure
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 220 +++++++++++++++-------------
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 1 -
2 files changed, 122 insertions(+), 99 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index e8db4dc..e96de75 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -189,27 +189,30 @@ static void i40e_fd_handle_status(struct i40e_ring *rx_ring, u32 qw, u8 prog_id)
}
/**
- * i40e_unmap_tx_resource - Release a Tx buffer
+ * i40e_unmap_and_free_tx_resource - Release a Tx buffer
* @ring: the ring that owns the buffer
* @tx_buffer: the buffer to free
**/
-static inline void i40e_unmap_tx_resource(struct i40e_ring *ring,
- struct i40e_tx_buffer *tx_buffer)
+static void i40e_unmap_and_free_tx_resource(struct i40e_ring *ring,
+ struct i40e_tx_buffer *tx_buffer)
{
- if (dma_unmap_len(tx_buffer, len)) {
- if (tx_buffer->tx_flags & I40E_TX_FLAGS_MAPPED_AS_PAGE)
- dma_unmap_page(ring->dev,
- dma_unmap_addr(tx_buffer, dma),
- dma_unmap_len(tx_buffer, len),
- DMA_TO_DEVICE);
- else
+ if (tx_buffer->skb) {
+ dev_kfree_skb_any(tx_buffer->skb);
+ if (dma_unmap_len(tx_buffer, len))
dma_unmap_single(ring->dev,
dma_unmap_addr(tx_buffer, dma),
dma_unmap_len(tx_buffer, len),
DMA_TO_DEVICE);
+ } else if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
}
- tx_buffer->time_stamp = 0;
+ tx_buffer->next_to_watch = NULL;
+ tx_buffer->skb = NULL;
dma_unmap_len_set(tx_buffer, len, 0);
+ /* tx_buffer must be completely set up in the transmit path */
}
/**
@@ -218,7 +221,6 @@ static inline void i40e_unmap_tx_resource(struct i40e_ring *ring,
**/
void i40e_clean_tx_ring(struct i40e_ring *tx_ring)
{
- struct i40e_tx_buffer *tx_buffer;
unsigned long bi_size;
u16 i;
@@ -227,13 +229,8 @@ void i40e_clean_tx_ring(struct i40e_ring *tx_ring)
return;
/* Free all the Tx ring sk_buffs */
- for (i = 0; i < tx_ring->count; i++) {
- tx_buffer = &tx_ring->tx_bi[i];
- i40e_unmap_tx_resource(tx_ring, tx_buffer);
- if (tx_buffer->skb)
- dev_kfree_skb_any(tx_buffer->skb);
- tx_buffer->skb = NULL;
- }
+ for (i = 0; i < tx_ring->count; i++)
+ i40e_unmap_and_free_tx_resource(tx_ring, &tx_ring->tx_bi[i]);
bi_size = sizeof(struct i40e_tx_buffer) * tx_ring->count;
memset(tx_ring->tx_bi, 0, bi_size);
@@ -332,16 +329,18 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
tx_buf = &tx_ring->tx_bi[i];
tx_desc = I40E_TX_DESC(tx_ring, i);
+ i -= tx_ring->count;
- for (; budget; budget--) {
- struct i40e_tx_desc *eop_desc;
-
- eop_desc = tx_buf->next_to_watch;
+ do {
+ struct i40e_tx_desc *eop_desc = tx_buf->next_to_watch;
/* if next_to_watch is not set then there is no work pending */
if (!eop_desc)
break;
+ /* prevent any other reads prior to eop_desc */
+ read_barrier_depends();
+
/* if the descriptor isn't done, no work yet to do */
if (!(eop_desc->cmd_type_offset_bsz &
cpu_to_le64(I40E_TX_DESC_DTYPE_DESC_DONE)))
@@ -349,44 +348,67 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
/* clear next_to_watch to prevent false hangs */
tx_buf->next_to_watch = NULL;
- tx_buf->time_stamp = 0;
- /* set memory barrier before eop_desc is verified */
- rmb();
+ /* update the statistics for this packet */
+ total_bytes += tx_buf->bytecount;
+ total_packets += tx_buf->gso_segs;
- do {
- i40e_unmap_tx_resource(tx_ring, tx_buf);
+ /* free the skb */
+ dev_kfree_skb_any(tx_buf->skb);
- /* clear dtype status */
- tx_desc->cmd_type_offset_bsz &=
- ~cpu_to_le64(I40E_TXD_QW1_DTYPE_MASK);
+ /* unmap skb header data */
+ dma_unmap_single(tx_ring->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
- if (likely(tx_desc == eop_desc)) {
- eop_desc = NULL;
+ /* clear tx_buffer data */
+ tx_buf->skb = NULL;
+ dma_unmap_len_set(tx_buf, len, 0);
- dev_kfree_skb_any(tx_buf->skb);
- tx_buf->skb = NULL;
-
- total_bytes += tx_buf->bytecount;
- total_packets += tx_buf->gso_segs;
- }
+ /* unmap remaining buffers */
+ while (tx_desc != eop_desc) {
tx_buf++;
tx_desc++;
i++;
- if (unlikely(i == tx_ring->count)) {
- i = 0;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
tx_buf = tx_ring->tx_bi;
tx_desc = I40E_TX_DESC(tx_ring, 0);
}
- } while (eop_desc);
- }
+ /* unmap any remaining paged data */
+ if (dma_unmap_len(tx_buf, len)) {
+ dma_unmap_page(tx_ring->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buf, len, 0);
+ }
+ }
+
+ /* move us one more past the eop_desc for start of next pkt */
+ tx_buf++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buf = tx_ring->tx_bi;
+ tx_desc = I40E_TX_DESC(tx_ring, 0);
+ }
+
+ /* update budget accounting */
+ budget--;
+ } while (likely(budget));
+
+ i += tx_ring->count;
tx_ring->next_to_clean = i;
tx_ring->tx_stats.bytes += total_bytes;
tx_ring->tx_stats.packets += total_packets;
tx_ring->q_vector->tx.total_bytes += total_bytes;
tx_ring->q_vector->tx.total_packets += total_packets;
+
if (check_for_tx_hang(tx_ring) && i40e_check_tx_hang(tx_ring)) {
/* schedule immediate reset if we believe we hung */
dev_info(tx_ring->dev, "Detected Tx Unit Hang\n"
@@ -1515,68 +1537,71 @@ static void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
struct i40e_tx_buffer *first, u32 tx_flags,
const u8 hdr_len, u32 td_cmd, u32 td_offset)
{
- struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
unsigned int data_len = skb->data_len;
unsigned int size = skb_headlen(skb);
- struct device *dev = tx_ring->dev;
- u32 paylen = skb->len - hdr_len;
- u16 i = tx_ring->next_to_use;
+ struct skb_frag_struct *frag;
struct i40e_tx_buffer *tx_bi;
struct i40e_tx_desc *tx_desc;
- u32 buf_offset = 0;
+ u16 i = tx_ring->next_to_use;
u32 td_tag = 0;
dma_addr_t dma;
u16 gso_segs;
- dma = dma_map_single(dev, skb->data, size, DMA_TO_DEVICE);
- if (dma_mapping_error(dev, dma))
- goto dma_error;
-
if (tx_flags & I40E_TX_FLAGS_HW_VLAN) {
td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
td_tag = (tx_flags & I40E_TX_FLAGS_VLAN_MASK) >>
I40E_TX_FLAGS_VLAN_SHIFT;
}
+ if (tx_flags & (I40E_TX_FLAGS_TSO | I40E_TX_FLAGS_FSO))
+ gso_segs = skb_shinfo(skb)->gso_segs;
+ else
+ gso_segs = 1;
+
+ /* multiply data chunks by size of headers */
+ first->bytecount = skb->len - hdr_len + (gso_segs * hdr_len);
+ first->gso_segs = gso_segs;
+ first->skb = skb;
+ first->tx_flags = tx_flags;
+
+ dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE);
+
tx_desc = I40E_TX_DESC(tx_ring, i);
- for (;;) {
- while (size > I40E_MAX_DATA_PER_TXD) {
- tx_desc->buffer_addr = cpu_to_le64(dma + buf_offset);
+ tx_bi = first;
+
+ for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ if (dma_mapping_error(tx_ring->dev, dma))
+ goto dma_error;
+
+ /* record length, and DMA address */
+ dma_unmap_len_set(tx_bi, len, size);
+ dma_unmap_addr_set(tx_bi, dma, dma);
+
+ tx_desc->buffer_addr = cpu_to_le64(dma);
+
+ while (unlikely(size > I40E_MAX_DATA_PER_TXD)) {
tx_desc->cmd_type_offset_bsz =
build_ctob(td_cmd, td_offset,
I40E_MAX_DATA_PER_TXD, td_tag);
- buf_offset += I40E_MAX_DATA_PER_TXD;
- size -= I40E_MAX_DATA_PER_TXD;
-
tx_desc++;
i++;
if (i == tx_ring->count) {
tx_desc = I40E_TX_DESC(tx_ring, 0);
i = 0;
}
- }
- tx_bi = &tx_ring->tx_bi[i];
- dma_unmap_len_set(tx_bi, len, buf_offset + size);
- dma_unmap_addr_set(tx_bi, dma, dma);
- tx_bi->tx_flags = tx_flags;
+ dma += I40E_MAX_DATA_PER_TXD;
+ size -= I40E_MAX_DATA_PER_TXD;
- tx_desc->buffer_addr = cpu_to_le64(dma + buf_offset);
- tx_desc->cmd_type_offset_bsz = build_ctob(td_cmd, td_offset,
- size, td_tag);
+ tx_desc->buffer_addr = cpu_to_le64(dma);
+ }
if (likely(!data_len))
break;
- size = skb_frag_size(frag);
- data_len -= size;
- buf_offset = 0;
- tx_flags |= I40E_TX_FLAGS_MAPPED_AS_PAGE;
-
- dma = skb_frag_dma_map(dev, frag, 0, size, DMA_TO_DEVICE);
- if (dma_mapping_error(dev, dma))
- goto dma_error;
+ tx_desc->cmd_type_offset_bsz = build_ctob(td_cmd, td_offset,
+ size, td_tag);
tx_desc++;
i++;
@@ -1585,31 +1610,21 @@ static void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
i = 0;
}
- frag++;
- }
-
- tx_desc->cmd_type_offset_bsz |=
- cpu_to_le64((u64)I40E_TXD_CMD << I40E_TXD_QW1_CMD_SHIFT);
-
- i++;
- if (i == tx_ring->count)
- i = 0;
+ size = skb_frag_size(frag);
+ data_len -= size;
- tx_ring->next_to_use = i;
+ dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size,
+ DMA_TO_DEVICE);
- if (tx_flags & (I40E_TX_FLAGS_TSO | I40E_TX_FLAGS_FSO))
- gso_segs = skb_shinfo(skb)->gso_segs;
- else
- gso_segs = 1;
+ tx_bi = &tx_ring->tx_bi[i];
+ }
- /* multiply data chunks by size of headers */
- tx_bi->bytecount = paylen + (gso_segs * hdr_len);
- tx_bi->gso_segs = gso_segs;
- tx_bi->skb = skb;
+ tx_desc->cmd_type_offset_bsz =
+ build_ctob(td_cmd, td_offset, size, td_tag) |
+ cpu_to_le64((u64)I40E_TXD_CMD << I40E_TXD_QW1_CMD_SHIFT);
- /* set the timestamp and next to watch values */
+ /* set the timestamp */
first->time_stamp = jiffies;
- first->next_to_watch = tx_desc;
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
@@ -1618,16 +1633,27 @@ static void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
*/
wmb();
+ /* set next_to_watch value indicating a packet is present */
+ first->next_to_watch = tx_desc;
+
+ i++;
+ if (i == tx_ring->count)
+ i = 0;
+
+ tx_ring->next_to_use = i;
+
+ /* notify HW of packet */
writel(i, tx_ring->tail);
+
return;
dma_error:
- dev_info(dev, "TX DMA map failed\n");
+ dev_info(tx_ring->dev, "TX DMA map failed\n");
/* clear dma mappings for failed tx_bi map */
for (;;) {
tx_bi = &tx_ring->tx_bi[i];
- i40e_unmap_tx_resource(tx_ring, tx_bi);
+ i40e_unmap_and_free_tx_resource(tx_ring, tx_bi);
if (tx_bi == first)
break;
if (i == 0)
@@ -1635,8 +1661,6 @@ dma_error:
i--;
}
- dev_kfree_skb_any(skb);
-
tx_ring->next_to_use = i;
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 711f549..2cb1338 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -103,7 +103,6 @@
#define I40E_TX_FLAGS_FCCRC (u32)(1 << 6)
#define I40E_TX_FLAGS_FSO (u32)(1 << 7)
#define I40E_TX_FLAGS_TXSW (u32)(1 << 8)
-#define I40E_TX_FLAGS_MAPPED_AS_PAGE (u32)(1 << 9)
#define I40E_TX_FLAGS_VLAN_MASK 0xffff0000
#define I40E_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000
#define I40E_TX_FLAGS_VLAN_PRIO_SHIFT 29
--
1.8.3.1
* [net-next 06/13] i40e: Drop dead code and flags from Tx hotpath
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (4 preceding siblings ...)
2013-10-10 6:41 ` [net-next 05/13] i40e: clean up Tx fast path Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 07/13] i40e: Add support for Tx byte queue limits Jeff Kirsher
` (7 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
Drop the TXSW Tx flag, which is tested but never set.
As a result of this change we can drop a complicated check that always
resulted in i40e_tx_csum returning whether skb->ip_summed equals
CHECKSUM_PARTIAL. As such, we can replace the entire function call
with just a check for skb->ip_summed == CHECKSUM_PARTIAL.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 31 +++++------------------------
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 1 -
2 files changed, 5 insertions(+), 27 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index e96de75..52dfac2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1300,27 +1300,6 @@ static int i40e_tx_prepare_vlan_flags(struct sk_buff *skb,
}
/**
- * i40e_tx_csum - is checksum offload requested
- * @tx_ring: ptr to the ring to send
- * @skb: ptr to the skb we're sending
- * @tx_flags: the collected send information
- * @protocol: the send protocol
- *
- * Returns true if checksum offload is requested
- **/
-static bool i40e_tx_csum(struct i40e_ring *tx_ring, struct sk_buff *skb,
- u32 tx_flags, __be16 protocol)
-{
- if ((skb->ip_summed != CHECKSUM_PARTIAL) &&
- !(tx_flags & I40E_TX_FLAGS_TXSW)) {
- if (!(tx_flags & I40E_TX_FLAGS_HW_VLAN))
- return false;
- }
-
- return skb->ip_summed == CHECKSUM_PARTIAL;
-}
-
-/**
* i40e_tso - set up the tso context descriptor
* @tx_ring: ptr to the ring to send
* @skb: ptr to the skb we're sending
@@ -1785,16 +1764,16 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,
skb_tx_timestamp(skb);
+ /* always enable CRC insertion offload */
+ td_cmd |= I40E_TX_DESC_CMD_ICRC;
+
/* Always offload the checksum, since it's in the data descriptor */
- if (i40e_tx_csum(tx_ring, skb, tx_flags, protocol))
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
tx_flags |= I40E_TX_FLAGS_CSUM;
- /* always enable offload insertion */
- td_cmd |= I40E_TX_DESC_CMD_ICRC;
-
- if (tx_flags & I40E_TX_FLAGS_CSUM)
i40e_tx_enable_csum(skb, tx_flags, &td_cmd, &td_offset,
tx_ring, &cd_tunneling);
+ }
i40e_create_tx_ctx(tx_ring, cd_type_cmd_tso_mss,
cd_tunneling, cd_l2tag2);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 2cb1338..e514247 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -102,7 +102,6 @@
#define I40E_TX_FLAGS_IPV6 (u32)(1 << 5)
#define I40E_TX_FLAGS_FCCRC (u32)(1 << 6)
#define I40E_TX_FLAGS_FSO (u32)(1 << 7)
-#define I40E_TX_FLAGS_TXSW (u32)(1 << 8)
#define I40E_TX_FLAGS_VLAN_MASK 0xffff0000
#define I40E_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000
#define I40E_TX_FLAGS_VLAN_PRIO_SHIFT 29
--
1.8.3.1
* [net-next 07/13] i40e: Add support for Tx byte queue limits
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (5 preceding siblings ...)
2013-10-10 6:41 ` [net-next 06/13] i40e: Drop dead code and flags from Tx hotpath Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 08/13] i40e: Split bytes and packets from Rx/Tx stats Jeff Kirsher
` (6 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
Implement BQL (byte queue limit) support in i40e.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 52dfac2..ad2818f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -240,6 +240,13 @@ void i40e_clean_tx_ring(struct i40e_ring *tx_ring)
tx_ring->next_to_use = 0;
tx_ring->next_to_clean = 0;
+
+ if (!tx_ring->netdev)
+ return;
+
+ /* cleanup Tx queue statistics */
+ netdev_tx_reset_queue(netdev_get_tx_queue(tx_ring->netdev,
+ tx_ring->queue_index));
}
/**
@@ -436,6 +443,10 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
return true;
}
+ netdev_tx_completed_queue(netdev_get_tx_queue(tx_ring->netdev,
+ tx_ring->queue_index),
+ total_packets, total_bytes);
+
#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
(I40E_DESC_UNUSED(tx_ring) >= TX_WAKE_THRESHOLD))) {
@@ -1602,6 +1613,10 @@ static void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
build_ctob(td_cmd, td_offset, size, td_tag) |
cpu_to_le64((u64)I40E_TXD_CMD << I40E_TXD_QW1_CMD_SHIFT);
+ netdev_tx_sent_queue(netdev_get_tx_queue(tx_ring->netdev,
+ tx_ring->queue_index),
+ first->bytecount);
+
/* set the timestamp */
first->time_stamp = jiffies;
--
1.8.3.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [net-next 08/13] i40e: Split bytes and packets from Rx/Tx stats
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (6 preceding siblings ...)
2013-10-10 6:41 ` [net-next 07/13] i40e: Add support for Tx byte queue limits Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 09/13] i40e: Move q_vectors from pointer to array to array of pointers Jeff Kirsher
` (5 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
This makes it so that the Tx and Rx byte and packet counts are
separated from the rest of the per-queue statistics, which makes it
easier to isolate them when they are moved to 64 bit netstats later in
this series.
Simplify things by re-ordering how the stats are displayed in ethtool.
Instead of displaying all of the Tx queues as a block, followed by all
the Rx queues, the new order is Tx[0], Rx[0], Tx[1], Rx[1], ..., Tx[n],
Rx[n]. This reduces the loops and cleans up the display for testing
purposes: since each Tx/Rx queue pair is shown together, it is easy to
verify that flow director is doing the right thing.
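With the single interleaved loop, the per-queue block of "ethtool -S" output
comes out in the order below (queue numbers shown only as an illustration of
the format strings in the ethtool hunk):

        tx-0.tx_packets
        tx-0.tx_bytes
        rx-0.rx_packets
        rx-0.rx_bytes
        tx-1.tx_packets
        tx-1.tx_bytes
        rx-1.rx_packets
        rx-1.rx_bytes
        ...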
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 8 ++++----
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 14 +++++---------
drivers/net/ethernet/intel/i40e/i40e_main.c | 12 ++++++++----
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 12 ++++++------
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 8 +++++---
5 files changed, 28 insertions(+), 26 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index 0b01c61..2b6655b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -512,8 +512,8 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
vsi->rx_rings[i].ring_active);
dev_info(&pf->pdev->dev,
" rx_rings[%i]: rx_stats: packets = %lld, bytes = %lld, non_eop_descs = %lld\n",
- i, vsi->rx_rings[i].rx_stats.packets,
- vsi->rx_rings[i].rx_stats.bytes,
+ i, vsi->rx_rings[i].stats.packets,
+ vsi->rx_rings[i].stats.bytes,
vsi->rx_rings[i].rx_stats.non_eop_descs);
dev_info(&pf->pdev->dev,
" rx_rings[%i]: rx_stats: alloc_rx_page_failed = %lld, alloc_rx_buff_failed = %lld\n",
@@ -556,8 +556,8 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
vsi->tx_rings[i].ring_active);
dev_info(&pf->pdev->dev,
" tx_rings[%i]: tx_stats: packets = %lld, bytes = %lld, restart_queue = %lld\n",
- i, vsi->tx_rings[i].tx_stats.packets,
- vsi->tx_rings[i].tx_stats.bytes,
+ i, vsi->tx_rings[i].stats.packets,
+ vsi->tx_rings[i].stats.bytes,
vsi->tx_rings[i].tx_stats.restart_queue);
dev_info(&pf->pdev->dev,
" tx_rings[%i]: tx_stats: tx_busy = %lld, tx_done_old = %lld\n",
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 9a76b8c..8754c6fa 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -587,13 +587,11 @@ static void i40e_get_ethtool_stats(struct net_device *netdev,
data[i++] = (i40e_gstrings_net_stats[j].sizeof_stat ==
sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
}
- for (j = 0; j < vsi->num_queue_pairs; j++) {
- data[i++] = vsi->tx_rings[j].tx_stats.packets;
- data[i++] = vsi->tx_rings[j].tx_stats.bytes;
- }
- for (j = 0; j < vsi->num_queue_pairs; j++) {
- data[i++] = vsi->rx_rings[j].rx_stats.packets;
- data[i++] = vsi->rx_rings[j].rx_stats.bytes;
+ for (j = 0; j < vsi->num_queue_pairs; j++, i += 4) {
+ data[i] = vsi->tx_rings[j].stats.packets;
+ data[i + 1] = vsi->tx_rings[j].stats.bytes;
+ data[i + 2] = vsi->rx_rings[j].stats.packets;
+ data[i + 3] = vsi->rx_rings[j].stats.bytes;
}
if (vsi == pf->vsi[pf->lan_vsi]) {
for (j = 0; j < I40E_GLOBAL_STATS_LEN; j++) {
@@ -641,8 +639,6 @@ static void i40e_get_strings(struct net_device *netdev, u32 stringset,
p += ETH_GSTRING_LEN;
snprintf(p, ETH_GSTRING_LEN, "tx-%u.tx_bytes", i);
p += ETH_GSTRING_LEN;
- }
- for (i = 0; i < vsi->num_queue_pairs; i++) {
snprintf(p, ETH_GSTRING_LEN, "rx-%u.rx_packets", i);
p += ETH_GSTRING_LEN;
snprintf(p, ETH_GSTRING_LEN, "rx-%u.rx_bytes", i);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 657babe..d1b5bae 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -376,8 +376,12 @@ void i40e_vsi_reset_stats(struct i40e_vsi *vsi)
memset(&vsi->eth_stats_offsets, 0, sizeof(vsi->eth_stats_offsets));
if (vsi->rx_rings)
for (i = 0; i < vsi->num_queue_pairs; i++) {
+ memset(&vsi->rx_rings[i].stats, 0 ,
+ sizeof(vsi->rx_rings[i].stats));
memset(&vsi->rx_rings[i].rx_stats, 0 ,
sizeof(vsi->rx_rings[i].rx_stats));
+ memset(&vsi->tx_rings[i].stats, 0 ,
+ sizeof(vsi->tx_rings[i].stats));
memset(&vsi->tx_rings[i].tx_stats, 0,
sizeof(vsi->tx_rings[i].tx_stats));
}
@@ -708,14 +712,14 @@ void i40e_update_stats(struct i40e_vsi *vsi)
struct i40e_ring *p;
p = &vsi->rx_rings[q];
- rx_b += p->rx_stats.bytes;
- rx_p += p->rx_stats.packets;
+ rx_b += p->stats.bytes;
+ rx_p += p->stats.packets;
rx_buf += p->rx_stats.alloc_rx_buff_failed;
rx_page += p->rx_stats.alloc_rx_page_failed;
p = &vsi->tx_rings[q];
- tx_b += p->tx_stats.bytes;
- tx_p += p->tx_stats.packets;
+ tx_b += p->stats.bytes;
+ tx_p += p->stats.packets;
tx_restart += p->tx_stats.restart_queue;
tx_busy += p->tx_stats.tx_busy;
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index ad2818f..3e73bc0 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -305,14 +305,14 @@ static bool i40e_check_tx_hang(struct i40e_ring *tx_ring)
* run the check_tx_hang logic with a transmit completion
* pending but without time to complete it yet.
*/
- if ((tx_ring->tx_stats.tx_done_old == tx_ring->tx_stats.packets) &&
+ if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) &&
tx_pending) {
/* make sure it is true for two checks in a row */
ret = test_and_set_bit(__I40E_HANG_CHECK_ARMED,
&tx_ring->state);
} else {
/* update completed stats and disarm the hang check */
- tx_ring->tx_stats.tx_done_old = tx_ring->tx_stats.packets;
+ tx_ring->tx_stats.tx_done_old = tx_ring->stats.packets;
clear_bit(__I40E_HANG_CHECK_ARMED, &tx_ring->state);
}
@@ -411,8 +411,8 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
i += tx_ring->count;
tx_ring->next_to_clean = i;
- tx_ring->tx_stats.bytes += total_bytes;
- tx_ring->tx_stats.packets += total_packets;
+ tx_ring->stats.bytes += total_bytes;
+ tx_ring->stats.packets += total_packets;
tx_ring->q_vector->tx.total_bytes += total_bytes;
tx_ring->q_vector->tx.total_packets += total_packets;
@@ -1075,8 +1075,8 @@ next_desc:
}
rx_ring->next_to_clean = i;
- rx_ring->rx_stats.packets += total_rx_packets;
- rx_ring->rx_stats.bytes += total_rx_bytes;
+ rx_ring->stats.packets += total_rx_packets;
+ rx_ring->stats.bytes += total_rx_bytes;
rx_ring->q_vector->rx.total_packets += total_rx_packets;
rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index e514247..7f3f7e3 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -126,17 +126,18 @@ struct i40e_rx_buffer {
unsigned int page_offset;
};
-struct i40e_tx_queue_stats {
+struct i40e_queue_stats {
u64 packets;
u64 bytes;
+};
+
+struct i40e_tx_queue_stats {
u64 restart_queue;
u64 tx_busy;
u64 tx_done_old;
};
struct i40e_rx_queue_stats {
- u64 packets;
- u64 bytes;
u64 non_eop_descs;
u64 alloc_rx_page_failed;
u64 alloc_rx_buff_failed;
@@ -215,6 +216,7 @@ struct i40e_ring {
bool ring_active; /* is ring online or not */
/* stats structs */
+ struct i40e_queue_stats stats;
union {
struct i40e_tx_queue_stats tx_stats;
struct i40e_rx_queue_stats rx_stats;
--
1.8.3.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [net-next 09/13] i40e: Move q_vectors from pointer to array to array of pointers
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (7 preceding siblings ...)
2013-10-10 6:41 ` [net-next 08/13] i40e: Split bytes and packets from Rx/Tx stats Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 10/13] i40e: Replace ring container array with linked list Jeff Kirsher
` (4 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
Allocate the q_vectors individually. The advantage is that allocation
and freeing become simpler, and it leaves room for node specific
allocations at some point in the future.
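A condensed sketch of the per-vector allocation scheme the patch moves to;
the field names (vsi->q_vectors, q_vector->rcu) match the hunks below, while
the example_* function names and the trimmed bodies are only illustrative.

#include <linux/slab.h>
#include <linux/rcupdate.h>

static int example_alloc_q_vector(struct i40e_vsi *vsi, int v_idx)
{
        struct i40e_q_vector *q_vector;

        /* one kzalloc() per vector instead of one big array */
        q_vector = kzalloc(sizeof(*q_vector), GFP_KERNEL);
        if (!q_vector)
                return -ENOMEM;

        q_vector->vsi = vsi;
        q_vector->v_idx = v_idx;
        vsi->q_vectors[v_idx] = q_vector; /* q_vectors is now an array of pointers */
        return 0;
}

static void example_free_q_vector(struct i40e_vsi *vsi, int v_idx)
{
        struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];

        if (!q_vector)
                return;

        vsi->q_vectors[v_idx] = NULL;
        /* RCU-deferred free so a concurrent stats reader never sees freed memory */
        kfree_rcu(q_vector, rcu);
}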
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e.h | 5 +-
drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 11 +-
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 4 +-
drivers/net/ethernet/intel/i40e/i40e_main.c | 152 +++++++++++++++++--------
4 files changed, 112 insertions(+), 60 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index b5252eb..789304e 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -366,7 +366,7 @@ struct i40e_vsi {
u8 dtype;
/* List of q_vectors allocated to this VSI */
- struct i40e_q_vector *q_vectors;
+ struct i40e_q_vector **q_vectors;
int num_q_vectors;
int base_vector;
@@ -422,8 +422,9 @@ struct i40e_q_vector {
u8 num_ringpairs; /* total number of ring pairs in vector */
- char name[IFNAMSIZ + 9];
cpumask_t affinity_mask;
+ struct rcu_head rcu; /* to avoid race with update stats on free */
+ char name[IFNAMSIZ + 9];
} ____cacheline_internodealigned_in_smp;
/* lan device */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index 2b6655b..44e3fa4 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -586,15 +586,6 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
dev_info(&pf->pdev->dev,
" max_frame = %d, rx_hdr_len = %d, rx_buf_len = %d dtype = %d\n",
vsi->max_frame, vsi->rx_hdr_len, vsi->rx_buf_len, vsi->dtype);
- if (vsi->q_vectors) {
- for (i = 0; i < vsi->num_q_vectors; i++) {
- dev_info(&pf->pdev->dev,
- " q_vectors[%i]: base index = %ld\n",
- i, ((long int)*vsi->q_vectors[i].rx.ring-
- (long int)*vsi->q_vectors[0].rx.ring)/
- sizeof(struct i40e_ring));
- }
- }
dev_info(&pf->pdev->dev,
" num_q_vectors = %i, base_vector = %i\n",
vsi->num_q_vectors, vsi->base_vector);
@@ -1995,7 +1986,7 @@ static ssize_t i40e_dbg_netdev_ops_write(struct file *filp,
goto netdev_ops_write_done;
}
for (i = 0; i < vsi->num_q_vectors; i++)
- napi_schedule(&vsi->q_vectors[i].napi);
+ napi_schedule(&vsi->q_vectors[i]->napi);
dev_info(&pf->pdev->dev, "napi called\n");
} else {
dev_info(&pf->pdev->dev, "unknown command '%s'\n",
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 8754c6fa..bf607da 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -906,8 +906,8 @@ static int i40e_set_coalesce(struct net_device *netdev,
}
vector = vsi->base_vector;
- q_vector = vsi->q_vectors;
- for (i = 0; i < vsi->num_q_vectors; i++, vector++, q_vector++) {
+ for (i = 0; i < vsi->num_q_vectors; i++, vector++) {
+ q_vector = vsi->q_vectors[i];
q_vector->rx.itr = ITR_TO_REG(vsi->rx_itr_setting);
wr32(hw, I40E_PFINT_ITRN(0, vector - 1), q_vector->rx.itr);
q_vector->tx.itr = ITR_TO_REG(vsi->tx_itr_setting);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index d1b5bae..a090815 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -2358,8 +2358,8 @@ static void i40e_vsi_configure_msix(struct i40e_vsi *vsi)
*/
qp = vsi->base_queue;
vector = vsi->base_vector;
- q_vector = vsi->q_vectors;
- for (i = 0; i < vsi->num_q_vectors; i++, q_vector++, vector++) {
+ for (i = 0; i < vsi->num_q_vectors; i++, vector++) {
+ q_vector = vsi->q_vectors[i];
q_vector->rx.itr = ITR_TO_REG(vsi->rx_itr_setting);
q_vector->rx.latency_range = I40E_LOW_LATENCY;
wr32(hw, I40E_PFINT_ITRN(I40E_RX_ITR, vector - 1),
@@ -2439,7 +2439,7 @@ static void i40e_enable_misc_int_causes(struct i40e_hw *hw)
**/
static void i40e_configure_msi_and_legacy(struct i40e_vsi *vsi)
{
- struct i40e_q_vector *q_vector = vsi->q_vectors;
+ struct i40e_q_vector *q_vector = vsi->q_vectors[0];
struct i40e_pf *pf = vsi->back;
struct i40e_hw *hw = &pf->hw;
u32 val;
@@ -2558,7 +2558,7 @@ static int i40e_vsi_request_irq_msix(struct i40e_vsi *vsi, char *basename)
int vector, err;
for (vector = 0; vector < q_vectors; vector++) {
- struct i40e_q_vector *q_vector = &(vsi->q_vectors[vector]);
+ struct i40e_q_vector *q_vector = vsi->q_vectors[vector];
if (q_vector->tx.ring[0] && q_vector->rx.ring[0]) {
snprintf(q_vector->name, sizeof(q_vector->name) - 1,
@@ -2709,7 +2709,7 @@ static irqreturn_t i40e_intr(int irq, void *data)
i40e_flush(hw);
if (!test_bit(__I40E_DOWN, &pf->state))
- napi_schedule(&pf->vsi[pf->lan_vsi]->q_vectors[0].napi);
+ napi_schedule(&pf->vsi[pf->lan_vsi]->q_vectors[0]->napi);
}
if (icr0 & I40E_PFINT_ICR0_ADMINQ_MASK) {
@@ -2785,7 +2785,7 @@ static irqreturn_t i40e_intr(int irq, void *data)
**/
static void map_vector_to_rxq(struct i40e_vsi *vsi, int v_idx, int r_idx)
{
- struct i40e_q_vector *q_vector = &(vsi->q_vectors[v_idx]);
+ struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];
struct i40e_ring *rx_ring = &(vsi->rx_rings[r_idx]);
rx_ring->q_vector = q_vector;
@@ -2803,7 +2803,7 @@ static void map_vector_to_rxq(struct i40e_vsi *vsi, int v_idx, int r_idx)
**/
static void map_vector_to_txq(struct i40e_vsi *vsi, int v_idx, int t_idx)
{
- struct i40e_q_vector *q_vector = &(vsi->q_vectors[v_idx]);
+ struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];
struct i40e_ring *tx_ring = &(vsi->tx_rings[t_idx]);
tx_ring->q_vector = q_vector;
@@ -2891,7 +2891,7 @@ static void i40e_netpoll(struct net_device *netdev)
pf->flags |= I40E_FLAG_IN_NETPOLL;
if (pf->flags & I40E_FLAG_MSIX_ENABLED) {
for (i = 0; i < vsi->num_q_vectors; i++)
- i40e_msix_clean_rings(0, &vsi->q_vectors[i]);
+ i40e_msix_clean_rings(0, vsi->q_vectors[i]);
} else {
i40e_intr(pf->pdev->irq, netdev);
}
@@ -3077,14 +3077,14 @@ static void i40e_vsi_free_irq(struct i40e_vsi *vsi)
u16 vector = i + base;
/* free only the irqs that were actually requested */
- if (vsi->q_vectors[i].num_ringpairs == 0)
+ if (vsi->q_vectors[i]->num_ringpairs == 0)
continue;
/* clear the affinity_mask in the IRQ descriptor */
irq_set_affinity_hint(pf->msix_entries[vector].vector,
NULL);
free_irq(pf->msix_entries[vector].vector,
- &vsi->q_vectors[i]);
+ vsi->q_vectors[i]);
/* Tear down the interrupt queue link list
*
@@ -3168,6 +3168,38 @@ static void i40e_vsi_free_irq(struct i40e_vsi *vsi)
}
/**
+ * i40e_free_q_vector - Free memory allocated for specific interrupt vector
+ * @vsi: the VSI being configured
+ * @v_idx: Index of vector to be freed
+ *
+ * This function frees the memory allocated to the q_vector. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void i40e_free_q_vector(struct i40e_vsi *vsi, int v_idx)
+{
+ struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];
+ int r_idx;
+
+ if (!q_vector)
+ return;
+
+ /* disassociate q_vector from rings */
+ for (r_idx = 0; r_idx < q_vector->tx.count; r_idx++)
+ q_vector->tx.ring[r_idx]->q_vector = NULL;
+ for (r_idx = 0; r_idx < q_vector->rx.count; r_idx++)
+ q_vector->rx.ring[r_idx]->q_vector = NULL;
+
+ /* only VSI w/ an associated netdev is set up w/ NAPI */
+ if (vsi->netdev)
+ netif_napi_del(&q_vector->napi);
+
+ vsi->q_vectors[v_idx] = NULL;
+
+ kfree_rcu(q_vector, rcu);
+}
+
+/**
* i40e_vsi_free_q_vectors - Free memory allocated for interrupt vectors
* @vsi: the VSI being un-configured
*
@@ -3178,24 +3210,8 @@ static void i40e_vsi_free_q_vectors(struct i40e_vsi *vsi)
{
int v_idx;
- for (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++) {
- struct i40e_q_vector *q_vector = &vsi->q_vectors[v_idx];
- int r_idx;
-
- if (!q_vector)
- continue;
-
- /* disassociate q_vector from rings */
- for (r_idx = 0; r_idx < q_vector->tx.count; r_idx++)
- q_vector->tx.ring[r_idx]->q_vector = NULL;
- for (r_idx = 0; r_idx < q_vector->rx.count; r_idx++)
- q_vector->rx.ring[r_idx]->q_vector = NULL;
-
- /* only VSI w/ an associated netdev is set up w/ NAPI */
- if (vsi->netdev)
- netif_napi_del(&q_vector->napi);
- }
- kfree(vsi->q_vectors);
+ for (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++)
+ i40e_free_q_vector(vsi, v_idx);
}
/**
@@ -3245,7 +3261,7 @@ static void i40e_napi_enable_all(struct i40e_vsi *vsi)
return;
for (q_idx = 0; q_idx < vsi->num_q_vectors; q_idx++)
- napi_enable(&vsi->q_vectors[q_idx].napi);
+ napi_enable(&vsi->q_vectors[q_idx]->napi);
}
/**
@@ -3260,7 +3276,7 @@ static void i40e_napi_disable_all(struct i40e_vsi *vsi)
return;
for (q_idx = 0; q_idx < vsi->num_q_vectors; q_idx++)
- napi_disable(&vsi->q_vectors[q_idx].napi);
+ napi_disable(&vsi->q_vectors[q_idx]->napi);
}
/**
@@ -4945,6 +4961,7 @@ static int i40e_vsi_mem_alloc(struct i40e_pf *pf, enum i40e_vsi_type type)
{
int ret = -ENODEV;
struct i40e_vsi *vsi;
+ int sz_vectors;
int vsi_idx;
int i;
@@ -4970,14 +4987,14 @@ static int i40e_vsi_mem_alloc(struct i40e_pf *pf, enum i40e_vsi_type type)
vsi_idx = i; /* Found one! */
} else {
ret = -ENODEV;
- goto err_alloc_vsi; /* out of VSI slots! */
+ goto unlock_pf; /* out of VSI slots! */
}
pf->next_vsi = ++i;
vsi = kzalloc(sizeof(*vsi), GFP_KERNEL);
if (!vsi) {
ret = -ENOMEM;
- goto err_alloc_vsi;
+ goto unlock_pf;
}
vsi->type = type;
vsi->back = pf;
@@ -4992,12 +5009,25 @@ static int i40e_vsi_mem_alloc(struct i40e_pf *pf, enum i40e_vsi_type type)
i40e_set_num_rings_in_vsi(vsi);
+ /* allocate memory for q_vector pointers */
+ sz_vectors = sizeof(struct i40e_q_vectors *) * vsi->num_q_vectors;
+ vsi->q_vectors = kzalloc(sz_vectors, GFP_KERNEL);
+ if (!vsi->q_vectors) {
+ ret = -ENOMEM;
+ goto err_vectors;
+ }
+
/* Setup default MSIX irq handler for VSI */
i40e_vsi_setup_irqhandler(vsi, i40e_msix_clean_rings);
pf->vsi[vsi_idx] = vsi;
ret = vsi_idx;
-err_alloc_vsi:
+ goto unlock_pf;
+
+err_vectors:
+ pf->next_vsi = i - 1;
+ kfree(vsi);
+unlock_pf:
mutex_unlock(&pf->switch_mutex);
return ret;
}
@@ -5038,6 +5068,9 @@ static int i40e_vsi_clear(struct i40e_vsi *vsi)
i40e_put_lump(pf->qp_pile, vsi->base_queue, vsi->idx);
i40e_put_lump(pf->irq_pile, vsi->base_vector, vsi->idx);
+ /* free the ring and vector containers */
+ kfree(vsi->q_vectors);
+
pf->vsi[vsi->idx] = NULL;
if (vsi->idx < pf->next_vsi)
pf->next_vsi = vsi->idx;
@@ -5257,6 +5290,35 @@ static int i40e_init_msix(struct i40e_pf *pf)
}
/**
+ * i40e_alloc_q_vector - Allocate memory for a single interrupt vector
+ * @vsi: the VSI being configured
+ * @v_idx: index of the vector in the vsi struct
+ *
+ * We allocate one q_vector. If allocation fails we return -ENOMEM.
+ **/
+static int i40e_alloc_q_vector(struct i40e_vsi *vsi, int v_idx)
+{
+ struct i40e_q_vector *q_vector;
+
+ /* allocate q_vector */
+ q_vector = kzalloc(sizeof(struct i40e_q_vector), GFP_KERNEL);
+ if (!q_vector)
+ return -ENOMEM;
+
+ q_vector->vsi = vsi;
+ q_vector->v_idx = v_idx;
+ cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
+ if (vsi->netdev)
+ netif_napi_add(vsi->netdev, &q_vector->napi,
+ i40e_napi_poll, vsi->work_limit);
+
+ /* tie q_vector and vsi together */
+ vsi->q_vectors[v_idx] = q_vector;
+
+ return 0;
+}
+
+/**
* i40e_alloc_q_vectors - Allocate memory for interrupt vectors
* @vsi: the VSI being configured
*
@@ -5267,6 +5329,7 @@ static int i40e_alloc_q_vectors(struct i40e_vsi *vsi)
{
struct i40e_pf *pf = vsi->back;
int v_idx, num_q_vectors;
+ int err;
/* if not MSIX, give the one vector only to the LAN VSI */
if (pf->flags & I40E_FLAG_MSIX_ENABLED)
@@ -5276,22 +5339,19 @@ static int i40e_alloc_q_vectors(struct i40e_vsi *vsi)
else
return -EINVAL;
- vsi->q_vectors = kcalloc(num_q_vectors,
- sizeof(struct i40e_q_vector),
- GFP_KERNEL);
- if (!vsi->q_vectors)
- return -ENOMEM;
-
for (v_idx = 0; v_idx < num_q_vectors; v_idx++) {
- vsi->q_vectors[v_idx].vsi = vsi;
- vsi->q_vectors[v_idx].v_idx = v_idx;
- cpumask_set_cpu(v_idx, &vsi->q_vectors[v_idx].affinity_mask);
- if (vsi->netdev)
- netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx].napi,
- i40e_napi_poll, vsi->work_limit);
+ err = i40e_alloc_q_vector(vsi, v_idx);
+ if (err)
+ goto err_out;
}
return 0;
+
+err_out:
+ while (v_idx--)
+ i40e_free_q_vector(vsi, v_idx);
+
+ return err;
}
/**
@@ -5958,7 +6018,7 @@ static int i40e_vsi_setup_vectors(struct i40e_vsi *vsi)
int ret = -ENOENT;
struct i40e_pf *pf = vsi->back;
- if (vsi->q_vectors) {
+ if (vsi->q_vectors[0]) {
dev_info(&pf->pdev->dev, "VSI %d has existing q_vectors\n",
vsi->seid);
return -EEXIST;
--
1.8.3.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [net-next 10/13] i40e: Replace ring container array with linked list
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (8 preceding siblings ...)
2013-10-10 6:41 ` [net-next 09/13] i40e: Move q_vectors from pointer to array to array of pointers Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 11/13] i40e: Move rings from pointer to array to array of pointers Jeff Kirsher
` (3 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
This replaces the fixed-size ring array in the ring container with a
linked list. The idea is to make the logic much easier to deal with,
since the q_vector code can now walk all of its rings with a simple
iterator (i40e_for_each_ring), as sketched below.
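Condensed, the list handling works as follows; i40e_for_each_ring() is the
macro added to i40e_txrx.h in this patch, while example_link_ring() and
example_detach_rings() are only illustrative stand-ins for the mapping and
free paths in the hunks below.

static void example_link_ring(struct i40e_ring_container *rc,
                              struct i40e_ring *ring)
{
        /* push the ring onto the head of the container's singly linked list */
        ring->next = rc->ring;
        rc->ring = ring;
        rc->count++;
}

static void example_detach_rings(struct i40e_q_vector *q_vector)
{
        struct i40e_ring *ring;

        /* every consumer simply walks the list, however many rings it holds */
        i40e_for_each_ring(ring, q_vector->tx)
                ring->q_vector = NULL;

        i40e_for_each_ring(ring, q_vector->rx)
                ring->q_vector = NULL;
}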
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_main.c | 84 ++++++++++++++---------------
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 19 +++----
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 8 ++-
3 files changed, 58 insertions(+), 53 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index a090815..c74ac58 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -2516,7 +2516,7 @@ static irqreturn_t i40e_msix_clean_rings(int irq, void *data)
{
struct i40e_q_vector *q_vector = data;
- if (!q_vector->tx.ring[0] && !q_vector->rx.ring[0])
+ if (!q_vector->tx.ring && !q_vector->rx.ring)
return IRQ_HANDLED;
napi_schedule(&q_vector->napi);
@@ -2533,7 +2533,7 @@ static irqreturn_t i40e_fdir_clean_rings(int irq, void *data)
{
struct i40e_q_vector *q_vector = data;
- if (!q_vector->tx.ring[0] && !q_vector->rx.ring[0])
+ if (!q_vector->tx.ring && !q_vector->rx.ring)
return IRQ_HANDLED;
pr_info("fdir ring cleaning needed\n");
@@ -2560,14 +2560,14 @@ static int i40e_vsi_request_irq_msix(struct i40e_vsi *vsi, char *basename)
for (vector = 0; vector < q_vectors; vector++) {
struct i40e_q_vector *q_vector = vsi->q_vectors[vector];
- if (q_vector->tx.ring[0] && q_vector->rx.ring[0]) {
+ if (q_vector->tx.ring && q_vector->rx.ring) {
snprintf(q_vector->name, sizeof(q_vector->name) - 1,
"%s-%s-%d", basename, "TxRx", rx_int_idx++);
tx_int_idx++;
- } else if (q_vector->rx.ring[0]) {
+ } else if (q_vector->rx.ring) {
snprintf(q_vector->name, sizeof(q_vector->name) - 1,
"%s-%s-%d", basename, "rx", rx_int_idx++);
- } else if (q_vector->tx.ring[0]) {
+ } else if (q_vector->tx.ring) {
snprintf(q_vector->name, sizeof(q_vector->name) - 1,
"%s-%s-%d", basename, "tx", tx_int_idx++);
} else {
@@ -2778,40 +2778,26 @@ static irqreturn_t i40e_intr(int irq, void *data)
}
/**
- * i40e_map_vector_to_rxq - Assigns the Rx queue to the vector
+ * i40e_map_vector_to_qp - Assigns the queue pair to the vector
* @vsi: the VSI being configured
* @v_idx: vector index
- * @r_idx: rx queue index
+ * @qp_idx: queue pair index
**/
-static void map_vector_to_rxq(struct i40e_vsi *vsi, int v_idx, int r_idx)
+static void map_vector_to_qp(struct i40e_vsi *vsi, int v_idx, int qp_idx)
{
struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];
- struct i40e_ring *rx_ring = &(vsi->rx_rings[r_idx]);
-
- rx_ring->q_vector = q_vector;
- q_vector->rx.ring[q_vector->rx.count] = rx_ring;
- q_vector->rx.count++;
- q_vector->rx.latency_range = I40E_LOW_LATENCY;
- q_vector->vsi = vsi;
-}
-
-/**
- * i40e_map_vector_to_txq - Assigns the Tx queue to the vector
- * @vsi: the VSI being configured
- * @v_idx: vector index
- * @t_idx: tx queue index
- **/
-static void map_vector_to_txq(struct i40e_vsi *vsi, int v_idx, int t_idx)
-{
- struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];
- struct i40e_ring *tx_ring = &(vsi->tx_rings[t_idx]);
+ struct i40e_ring *tx_ring = &(vsi->tx_rings[qp_idx]);
+ struct i40e_ring *rx_ring = &(vsi->rx_rings[qp_idx]);
tx_ring->q_vector = q_vector;
- q_vector->tx.ring[q_vector->tx.count] = tx_ring;
+ tx_ring->next = q_vector->tx.ring;
+ q_vector->tx.ring = tx_ring;
q_vector->tx.count++;
- q_vector->tx.latency_range = I40E_LOW_LATENCY;
- q_vector->num_ringpairs++;
- q_vector->vsi = vsi;
+
+ rx_ring->q_vector = q_vector;
+ rx_ring->next = q_vector->rx.ring;
+ q_vector->rx.ring = rx_ring;
+ q_vector->rx.count++;
}
/**
@@ -2827,7 +2813,7 @@ static void i40e_vsi_map_rings_to_vectors(struct i40e_vsi *vsi)
{
int qp_remaining = vsi->num_queue_pairs;
int q_vectors = vsi->num_q_vectors;
- int qp_per_vector;
+ int num_ringpairs;
int v_start = 0;
int qp_idx = 0;
@@ -2835,11 +2821,21 @@ static void i40e_vsi_map_rings_to_vectors(struct i40e_vsi *vsi)
* group them so there are multiple queues per vector.
*/
for (; v_start < q_vectors && qp_remaining; v_start++) {
- qp_per_vector = DIV_ROUND_UP(qp_remaining, q_vectors - v_start);
- for (; qp_per_vector;
- qp_per_vector--, qp_idx++, qp_remaining--) {
- map_vector_to_rxq(vsi, v_start, qp_idx);
- map_vector_to_txq(vsi, v_start, qp_idx);
+ struct i40e_q_vector *q_vector = vsi->q_vectors[v_start];
+
+ num_ringpairs = DIV_ROUND_UP(qp_remaining, q_vectors - v_start);
+
+ q_vector->num_ringpairs = num_ringpairs;
+
+ q_vector->rx.count = 0;
+ q_vector->tx.count = 0;
+ q_vector->rx.ring = NULL;
+ q_vector->tx.ring = NULL;
+
+ while (num_ringpairs--) {
+ map_vector_to_qp(vsi, v_start, qp_idx);
+ qp_idx++;
+ qp_remaining--;
}
}
}
@@ -3179,16 +3175,17 @@ static void i40e_vsi_free_irq(struct i40e_vsi *vsi)
static void i40e_free_q_vector(struct i40e_vsi *vsi, int v_idx)
{
struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];
- int r_idx;
+ struct i40e_ring *ring;
if (!q_vector)
return;
/* disassociate q_vector from rings */
- for (r_idx = 0; r_idx < q_vector->tx.count; r_idx++)
- q_vector->tx.ring[r_idx]->q_vector = NULL;
- for (r_idx = 0; r_idx < q_vector->rx.count; r_idx++)
- q_vector->rx.ring[r_idx]->q_vector = NULL;
+ i40e_for_each_ring(ring, q_vector->tx)
+ ring->q_vector = NULL;
+
+ i40e_for_each_ring(ring, q_vector->rx)
+ ring->q_vector = NULL;
/* only VSI w/ an associated netdev is set up w/ NAPI */
if (vsi->netdev)
@@ -5312,6 +5309,9 @@ static int i40e_alloc_q_vector(struct i40e_vsi *vsi, int v_idx)
netif_napi_add(vsi->netdev, &q_vector->napi,
i40e_napi_poll, vsi->work_limit);
+ q_vector->rx.latency_range = I40E_LOW_LATENCY;
+ q_vector->tx.latency_range = I40E_LOW_LATENCY;
+
/* tie q_vector and vsi together */
vsi->q_vectors[v_idx] = q_vector;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 3e73bc0..f153f37 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1100,27 +1100,28 @@ int i40e_napi_poll(struct napi_struct *napi, int budget)
struct i40e_q_vector *q_vector =
container_of(napi, struct i40e_q_vector, napi);
struct i40e_vsi *vsi = q_vector->vsi;
+ struct i40e_ring *ring;
bool clean_complete = true;
int budget_per_ring;
- int i;
if (test_bit(__I40E_DOWN, &vsi->state)) {
napi_complete(napi);
return 0;
}
+ /* Since the actual Tx work is minimal, we can give the Tx a larger
+ * budget and be more aggressive about cleaning up the Tx descriptors.
+ */
+ i40e_for_each_ring(ring, q_vector->tx)
+ clean_complete &= i40e_clean_tx_irq(ring, vsi->work_limit);
+
/* We attempt to distribute budget to each Rx queue fairly, but don't
* allow the budget to go below 1 because that would exit polling early.
- * Since the actual Tx work is minimal, we can give the Tx a larger
- * budget and be more aggressive about cleaning up the Tx descriptors.
*/
budget_per_ring = max(budget/q_vector->num_ringpairs, 1);
- for (i = 0; i < q_vector->num_ringpairs; i++) {
- clean_complete &= i40e_clean_tx_irq(q_vector->tx.ring[i],
- vsi->work_limit);
- clean_complete &= i40e_clean_rx_irq(q_vector->rx.ring[i],
- budget_per_ring);
- }
+
+ i40e_for_each_ring(ring, q_vector->rx)
+ clean_complete &= i40e_clean_rx_irq(ring, budget_per_ring);
/* If work not completed, return budget and polling will return */
if (!clean_complete)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 7f3f7e3..c2a6746 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -180,6 +180,7 @@ enum i40e_ring_state_t {
/* struct that defines a descriptor ring, associated with a VSI */
struct i40e_ring {
+ struct i40e_ring *next; /* pointer to next ring in q_vector */
void *desc; /* Descriptor ring memory */
struct device *dev; /* Used for DMA mapping */
struct net_device *netdev; /* netdev ring maps to */
@@ -236,9 +237,8 @@ enum i40e_latency_range {
};
struct i40e_ring_container {
-#define I40E_MAX_RINGPAIR_PER_VECTOR 8
/* array of pointers to rings */
- struct i40e_ring *ring[I40E_MAX_RINGPAIR_PER_VECTOR];
+ struct i40e_ring *ring;
unsigned int total_bytes; /* total bytes processed this int */
unsigned int total_packets; /* total packets processed this int */
u16 count;
@@ -246,6 +246,10 @@ struct i40e_ring_container {
u16 itr;
};
+/* iterator for handling rings in ring container */
+#define i40e_for_each_ring(pos, head) \
+ for (pos = (head).ring; pos != NULL; pos = pos->next)
+
void i40e_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
netdev_tx_t i40e_lan_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void i40e_clean_tx_ring(struct i40e_ring *tx_ring);
--
1.8.3.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [net-next 11/13] i40e: Move rings from pointer to array to array of pointers
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (9 preceding siblings ...)
2013-10-10 6:41 ` [net-next 10/13] i40e: Replace ring container array with linked list Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 12/13] i40e: Add support for 64 bit netstats Jeff Kirsher
` (2 subsequent siblings)
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
Allocate the queue pairs individually instead of as a single block.
This makes queue management much easier, since queues can be resized
dynamically without freeing and reallocating the entire block.
Statistics collection is also eased by treating each Tx/Rx queue pair
as a single unit: the pair is allocated together, starting with the Tx
queue and ending with the Rx queue, so the Rx ring can always be found
at a fixed offset from a pointer to its Tx ring.
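A minimal sketch of the paired allocation described above; the ring and VSI
fields are the driver's own, and example_alloc_ring_pair() is just a
condensed stand-in for the i40e_alloc_rings() loop in the hunks below.

#include <linux/slab.h>

static int example_alloc_ring_pair(struct i40e_vsi *vsi, int i)
{
        struct i40e_ring *tx_ring, *rx_ring;

        /* one allocation covers the pair: Tx ring first, Rx ring right behind */
        tx_ring = kzalloc(sizeof(struct i40e_ring) * 2, GFP_KERNEL);
        if (!tx_ring)
                return -ENOMEM;

        rx_ring = &tx_ring[1];          /* rx_ring == tx_ring + 1, by construction */

        vsi->tx_rings[i] = tx_ring;     /* both VSI arrays now hold pointers only */
        vsi->rx_rings[i] = rx_ring;
        return 0;
}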
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e.h | 6 +-
drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 195 +++++++++++++------------
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 40 ++---
drivers/net/ethernet/intel/i40e/i40e_main.c | 142 +++++++++---------
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 4 +-
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 2 +
6 files changed, 204 insertions(+), 185 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 789304e..c06a76c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -347,9 +347,9 @@ struct i40e_vsi {
u32 rx_buf_failed;
u32 rx_page_failed;
- /* These are arrays of rings, allocated at run-time */
- struct i40e_ring *rx_rings;
- struct i40e_ring *tx_rings;
+ /* These are containers of ring pointers, allocated at run-time */
+ struct i40e_ring **rx_rings;
+ struct i40e_ring **tx_rings;
u16 work_limit;
/* high bit set means dynamic, use accessor routines to read/write.
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index 44e3fa4..19e248f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -258,12 +258,12 @@ static ssize_t i40e_dbg_dump_write(struct file *filp,
for (i = 0; i < vsi->num_queue_pairs; i++) {
len = sizeof(struct i40e_tx_buffer);
- memcpy(p, vsi->tx_rings[i].tx_bi, len);
+ memcpy(p, vsi->tx_rings[i]->tx_bi, len);
p += len;
}
for (i = 0; i < vsi->num_queue_pairs; i++) {
len = sizeof(struct i40e_rx_buffer);
- memcpy(p, vsi->rx_rings[i].rx_bi, len);
+ memcpy(p, vsi->rx_rings[i]->rx_bi, len);
p += len;
}
@@ -484,99 +484,104 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
" tx_restart = %d, tx_busy = %d, rx_buf_failed = %d, rx_page_failed = %d\n",
vsi->tx_restart, vsi->tx_busy,
vsi->rx_buf_failed, vsi->rx_page_failed);
- if (vsi->rx_rings) {
- for (i = 0; i < vsi->num_queue_pairs; i++) {
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: desc = %p\n",
- i, vsi->rx_rings[i].desc);
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: dev = %p, netdev = %p, rx_bi = %p\n",
- i, vsi->rx_rings[i].dev,
- vsi->rx_rings[i].netdev,
- vsi->rx_rings[i].rx_bi);
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: state = %li, queue_index = %d, reg_idx = %d\n",
- i, vsi->rx_rings[i].state,
- vsi->rx_rings[i].queue_index,
- vsi->rx_rings[i].reg_idx);
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: rx_hdr_len = %d, rx_buf_len = %d, dtype = %d\n",
- i, vsi->rx_rings[i].rx_hdr_len,
- vsi->rx_rings[i].rx_buf_len,
- vsi->rx_rings[i].dtype);
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: hsplit = %d, next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
- i, vsi->rx_rings[i].hsplit,
- vsi->rx_rings[i].next_to_use,
- vsi->rx_rings[i].next_to_clean,
- vsi->rx_rings[i].ring_active);
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: rx_stats: packets = %lld, bytes = %lld, non_eop_descs = %lld\n",
- i, vsi->rx_rings[i].stats.packets,
- vsi->rx_rings[i].stats.bytes,
- vsi->rx_rings[i].rx_stats.non_eop_descs);
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: rx_stats: alloc_rx_page_failed = %lld, alloc_rx_buff_failed = %lld\n",
- i,
- vsi->rx_rings[i].rx_stats.alloc_rx_page_failed,
- vsi->rx_rings[i].rx_stats.alloc_rx_buff_failed);
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: size = %i, dma = 0x%08lx\n",
- i, vsi->rx_rings[i].size,
- (long unsigned int)vsi->rx_rings[i].dma);
- dev_info(&pf->pdev->dev,
- " rx_rings[%i]: vsi = %p, q_vector = %p\n",
- i, vsi->rx_rings[i].vsi,
- vsi->rx_rings[i].q_vector);
- }
+ rcu_read_lock();
+ for (i = 0; i < vsi->num_queue_pairs; i++) {
+ struct i40e_ring *rx_ring = ACCESS_ONCE(vsi->rx_rings[i]);
+ if (!rx_ring)
+ continue;
+
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: desc = %p\n",
+ i, rx_ring->desc);
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: dev = %p, netdev = %p, rx_bi = %p\n",
+ i, rx_ring->dev,
+ rx_ring->netdev,
+ rx_ring->rx_bi);
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: state = %li, queue_index = %d, reg_idx = %d\n",
+ i, rx_ring->state,
+ rx_ring->queue_index,
+ rx_ring->reg_idx);
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: rx_hdr_len = %d, rx_buf_len = %d, dtype = %d\n",
+ i, rx_ring->rx_hdr_len,
+ rx_ring->rx_buf_len,
+ rx_ring->dtype);
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: hsplit = %d, next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
+ i, rx_ring->hsplit,
+ rx_ring->next_to_use,
+ rx_ring->next_to_clean,
+ rx_ring->ring_active);
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: rx_stats: packets = %lld, bytes = %lld, non_eop_descs = %lld\n",
+ i, rx_ring->stats.packets,
+ rx_ring->stats.bytes,
+ rx_ring->rx_stats.non_eop_descs);
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: rx_stats: alloc_rx_page_failed = %lld, alloc_rx_buff_failed = %lld\n",
+ i,
+ rx_ring->rx_stats.alloc_rx_page_failed,
+ rx_ring->rx_stats.alloc_rx_buff_failed);
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: size = %i, dma = 0x%08lx\n",
+ i, rx_ring->size,
+ (long unsigned int)rx_ring->dma);
+ dev_info(&pf->pdev->dev,
+ " rx_rings[%i]: vsi = %p, q_vector = %p\n",
+ i, rx_ring->vsi,
+ rx_ring->q_vector);
}
- if (vsi->tx_rings) {
- for (i = 0; i < vsi->num_queue_pairs; i++) {
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: desc = %p\n",
- i, vsi->tx_rings[i].desc);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: dev = %p, netdev = %p, tx_bi = %p\n",
- i, vsi->tx_rings[i].dev,
- vsi->tx_rings[i].netdev,
- vsi->tx_rings[i].tx_bi);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: state = %li, queue_index = %d, reg_idx = %d\n",
- i, vsi->tx_rings[i].state,
- vsi->tx_rings[i].queue_index,
- vsi->tx_rings[i].reg_idx);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: dtype = %d\n",
- i, vsi->tx_rings[i].dtype);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: hsplit = %d, next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
- i, vsi->tx_rings[i].hsplit,
- vsi->tx_rings[i].next_to_use,
- vsi->tx_rings[i].next_to_clean,
- vsi->tx_rings[i].ring_active);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: tx_stats: packets = %lld, bytes = %lld, restart_queue = %lld\n",
- i, vsi->tx_rings[i].stats.packets,
- vsi->tx_rings[i].stats.bytes,
- vsi->tx_rings[i].tx_stats.restart_queue);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: tx_stats: tx_busy = %lld, tx_done_old = %lld\n",
- i,
- vsi->tx_rings[i].tx_stats.tx_busy,
- vsi->tx_rings[i].tx_stats.tx_done_old);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: size = %i, dma = 0x%08lx\n",
- i, vsi->tx_rings[i].size,
- (long unsigned int)vsi->tx_rings[i].dma);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: vsi = %p, q_vector = %p\n",
- i, vsi->tx_rings[i].vsi,
- vsi->tx_rings[i].q_vector);
- dev_info(&pf->pdev->dev,
- " tx_rings[%i]: DCB tc = %d\n",
- i, vsi->tx_rings[i].dcb_tc);
- }
+ for (i = 0; i < vsi->num_queue_pairs; i++) {
+ struct i40e_ring *tx_ring = ACCESS_ONCE(vsi->tx_rings[i]);
+ if (!tx_ring)
+ continue;
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: desc = %p\n",
+ i, tx_ring->desc);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: dev = %p, netdev = %p, tx_bi = %p\n",
+ i, tx_ring->dev,
+ tx_ring->netdev,
+ tx_ring->tx_bi);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: state = %li, queue_index = %d, reg_idx = %d\n",
+ i, tx_ring->state,
+ tx_ring->queue_index,
+ tx_ring->reg_idx);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: dtype = %d\n",
+ i, tx_ring->dtype);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: hsplit = %d, next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
+ i, tx_ring->hsplit,
+ tx_ring->next_to_use,
+ tx_ring->next_to_clean,
+ tx_ring->ring_active);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: tx_stats: packets = %lld, bytes = %lld, restart_queue = %lld\n",
+ i, tx_ring->stats.packets,
+ tx_ring->stats.bytes,
+ tx_ring->tx_stats.restart_queue);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: tx_stats: tx_busy = %lld, tx_done_old = %lld\n",
+ i,
+ tx_ring->tx_stats.tx_busy,
+ tx_ring->tx_stats.tx_done_old);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: size = %i, dma = 0x%08lx\n",
+ i, tx_ring->size,
+ (long unsigned int)tx_ring->dma);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: vsi = %p, q_vector = %p\n",
+ i, tx_ring->vsi,
+ tx_ring->q_vector);
+ dev_info(&pf->pdev->dev,
+ " tx_rings[%i]: DCB tc = %d\n",
+ i, tx_ring->dcb_tc);
}
+ rcu_read_unlock();
dev_info(&pf->pdev->dev,
" work_limit = %d, rx_itr_setting = %d (%s), tx_itr_setting = %d (%s)\n",
vsi->work_limit, vsi->rx_itr_setting,
@@ -782,9 +787,9 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
return;
}
if (is_rx_ring)
- ring = vsi->rx_rings[ring_id];
+ ring = *vsi->rx_rings[ring_id];
else
- ring = vsi->tx_rings[ring_id];
+ ring = *vsi->tx_rings[ring_id];
if (cnt == 2) {
dev_info(&pf->pdev->dev, "vsi = %02i %s ring = %02i\n",
vsi_seid, is_rx_ring ? "rx" : "tx", ring_id);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index bf607da..50153ea 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -399,8 +399,8 @@ static void i40e_get_ringparam(struct net_device *netdev,
ring->tx_max_pending = I40E_MAX_NUM_DESCRIPTORS;
ring->rx_mini_max_pending = 0;
ring->rx_jumbo_max_pending = 0;
- ring->rx_pending = vsi->rx_rings[0].count;
- ring->tx_pending = vsi->tx_rings[0].count;
+ ring->rx_pending = vsi->rx_rings[0]->count;
+ ring->tx_pending = vsi->tx_rings[0]->count;
ring->rx_mini_pending = 0;
ring->rx_jumbo_pending = 0;
}
@@ -429,8 +429,8 @@ static int i40e_set_ringparam(struct net_device *netdev,
new_rx_count = ALIGN(new_rx_count, I40E_REQ_DESCRIPTOR_MULTIPLE);
/* if nothing to do return success */
- if ((new_tx_count == vsi->tx_rings[0].count) &&
- (new_rx_count == vsi->rx_rings[0].count))
+ if ((new_tx_count == vsi->tx_rings[0]->count) &&
+ (new_rx_count == vsi->rx_rings[0]->count))
return 0;
while (test_and_set_bit(__I40E_CONFIG_BUSY, &pf->state))
@@ -439,8 +439,8 @@ static int i40e_set_ringparam(struct net_device *netdev,
if (!netif_running(vsi->netdev)) {
/* simple case - set for the next time the netdev is started */
for (i = 0; i < vsi->num_queue_pairs; i++) {
- vsi->tx_rings[i].count = new_tx_count;
- vsi->rx_rings[i].count = new_rx_count;
+ vsi->tx_rings[i]->count = new_tx_count;
+ vsi->rx_rings[i]->count = new_rx_count;
}
goto done;
}
@@ -451,10 +451,10 @@ static int i40e_set_ringparam(struct net_device *netdev,
*/
/* alloc updated Tx resources */
- if (new_tx_count != vsi->tx_rings[0].count) {
+ if (new_tx_count != vsi->tx_rings[0]->count) {
netdev_info(netdev,
"Changing Tx descriptor count from %d to %d.\n",
- vsi->tx_rings[0].count, new_tx_count);
+ vsi->tx_rings[0]->count, new_tx_count);
tx_rings = kcalloc(vsi->alloc_queue_pairs,
sizeof(struct i40e_ring), GFP_KERNEL);
if (!tx_rings) {
@@ -464,7 +464,7 @@ static int i40e_set_ringparam(struct net_device *netdev,
for (i = 0; i < vsi->num_queue_pairs; i++) {
/* clone ring and setup updated count */
- tx_rings[i] = vsi->tx_rings[i];
+ tx_rings[i] = *vsi->tx_rings[i];
tx_rings[i].count = new_tx_count;
err = i40e_setup_tx_descriptors(&tx_rings[i]);
if (err) {
@@ -481,10 +481,10 @@ static int i40e_set_ringparam(struct net_device *netdev,
}
/* alloc updated Rx resources */
- if (new_rx_count != vsi->rx_rings[0].count) {
+ if (new_rx_count != vsi->rx_rings[0]->count) {
netdev_info(netdev,
"Changing Rx descriptor count from %d to %d\n",
- vsi->rx_rings[0].count, new_rx_count);
+ vsi->rx_rings[0]->count, new_rx_count);
rx_rings = kcalloc(vsi->alloc_queue_pairs,
sizeof(struct i40e_ring), GFP_KERNEL);
if (!rx_rings) {
@@ -494,7 +494,7 @@ static int i40e_set_ringparam(struct net_device *netdev,
for (i = 0; i < vsi->num_queue_pairs; i++) {
/* clone ring and setup updated count */
- rx_rings[i] = vsi->rx_rings[i];
+ rx_rings[i] = *vsi->rx_rings[i];
rx_rings[i].count = new_rx_count;
err = i40e_setup_rx_descriptors(&rx_rings[i]);
if (err) {
@@ -517,8 +517,8 @@ static int i40e_set_ringparam(struct net_device *netdev,
if (tx_rings) {
for (i = 0; i < vsi->num_queue_pairs; i++) {
- i40e_free_tx_resources(&vsi->tx_rings[i]);
- vsi->tx_rings[i] = tx_rings[i];
+ i40e_free_tx_resources(vsi->tx_rings[i]);
+ *vsi->tx_rings[i] = tx_rings[i];
}
kfree(tx_rings);
tx_rings = NULL;
@@ -526,8 +526,8 @@ static int i40e_set_ringparam(struct net_device *netdev,
if (rx_rings) {
for (i = 0; i < vsi->num_queue_pairs; i++) {
- i40e_free_rx_resources(&vsi->rx_rings[i]);
- vsi->rx_rings[i] = rx_rings[i];
+ i40e_free_rx_resources(vsi->rx_rings[i]);
+ *vsi->rx_rings[i] = rx_rings[i];
}
kfree(rx_rings);
rx_rings = NULL;
@@ -588,10 +588,10 @@ static void i40e_get_ethtool_stats(struct net_device *netdev,
sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
}
for (j = 0; j < vsi->num_queue_pairs; j++, i += 4) {
- data[i] = vsi->tx_rings[j].stats.packets;
- data[i + 1] = vsi->tx_rings[j].stats.bytes;
- data[i + 2] = vsi->rx_rings[j].stats.packets;
- data[i + 3] = vsi->rx_rings[j].stats.bytes;
+ data[i] = vsi->tx_rings[j]->stats.packets;
+ data[i + 1] = vsi->tx_rings[j]->stats.bytes;
+ data[i + 2] = vsi->rx_rings[j]->stats.packets;
+ data[i + 3] = vsi->rx_rings[j]->stats.bytes;
}
if (vsi == pf->vsi[pf->lan_vsi]) {
for (j = 0; j < I40E_GLOBAL_STATS_LEN; j++) {
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index c74ac58..3cf23f5 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -376,14 +376,14 @@ void i40e_vsi_reset_stats(struct i40e_vsi *vsi)
memset(&vsi->eth_stats_offsets, 0, sizeof(vsi->eth_stats_offsets));
if (vsi->rx_rings)
for (i = 0; i < vsi->num_queue_pairs; i++) {
- memset(&vsi->rx_rings[i].stats, 0 ,
- sizeof(vsi->rx_rings[i].stats));
- memset(&vsi->rx_rings[i].rx_stats, 0 ,
- sizeof(vsi->rx_rings[i].rx_stats));
- memset(&vsi->tx_rings[i].stats, 0 ,
- sizeof(vsi->tx_rings[i].stats));
- memset(&vsi->tx_rings[i].tx_stats, 0,
- sizeof(vsi->tx_rings[i].tx_stats));
+ memset(&vsi->rx_rings[i]->stats, 0 ,
+ sizeof(vsi->rx_rings[i]->stats));
+ memset(&vsi->rx_rings[i]->rx_stats, 0 ,
+ sizeof(vsi->rx_rings[i]->rx_stats));
+ memset(&vsi->tx_rings[i]->stats, 0 ,
+ sizeof(vsi->tx_rings[i]->stats));
+ memset(&vsi->tx_rings[i]->tx_stats, 0,
+ sizeof(vsi->tx_rings[i]->tx_stats));
}
vsi->stat_offsets_loaded = false;
}
@@ -602,7 +602,7 @@ static void i40e_update_link_xoff_rx(struct i40e_pf *pf)
continue;
for (i = 0; i < vsi->num_queue_pairs; i++) {
- struct i40e_ring *ring = &vsi->tx_rings[i];
+ struct i40e_ring *ring = vsi->tx_rings[i];
clear_bit(__I40E_HANG_CHECK_ARMED, &ring->state);
}
}
@@ -656,7 +656,7 @@ static void i40e_update_prio_xoff_rx(struct i40e_pf *pf)
continue;
for (i = 0; i < vsi->num_queue_pairs; i++) {
- struct i40e_ring *ring = &vsi->tx_rings[i];
+ struct i40e_ring *ring = vsi->tx_rings[i];
tc = ring->dcb_tc;
if (xoff[tc])
@@ -711,13 +711,13 @@ void i40e_update_stats(struct i40e_vsi *vsi)
for (q = 0; q < vsi->num_queue_pairs; q++) {
struct i40e_ring *p;
- p = &vsi->rx_rings[q];
+ p = vsi->rx_rings[q];
rx_b += p->stats.bytes;
rx_p += p->stats.packets;
rx_buf += p->rx_stats.alloc_rx_buff_failed;
rx_page += p->rx_stats.alloc_rx_page_failed;
- p = &vsi->tx_rings[q];
+ p = vsi->tx_rings[q];
tx_b += p->stats.bytes;
tx_p += p->stats.packets;
tx_restart += p->tx_stats.restart_queue;
@@ -1992,7 +1992,7 @@ static int i40e_vsi_setup_tx_resources(struct i40e_vsi *vsi)
int i, err = 0;
for (i = 0; i < vsi->num_queue_pairs && !err; i++)
- err = i40e_setup_tx_descriptors(&vsi->tx_rings[i]);
+ err = i40e_setup_tx_descriptors(vsi->tx_rings[i]);
return err;
}
@@ -2008,8 +2008,8 @@ static void i40e_vsi_free_tx_resources(struct i40e_vsi *vsi)
int i;
for (i = 0; i < vsi->num_queue_pairs; i++)
- if (vsi->tx_rings[i].desc)
- i40e_free_tx_resources(&vsi->tx_rings[i]);
+ if (vsi->tx_rings[i]->desc)
+ i40e_free_tx_resources(vsi->tx_rings[i]);
}
/**
@@ -2027,7 +2027,7 @@ static int i40e_vsi_setup_rx_resources(struct i40e_vsi *vsi)
int i, err = 0;
for (i = 0; i < vsi->num_queue_pairs && !err; i++)
- err = i40e_setup_rx_descriptors(&vsi->rx_rings[i]);
+ err = i40e_setup_rx_descriptors(vsi->rx_rings[i]);
return err;
}
@@ -2042,8 +2042,8 @@ static void i40e_vsi_free_rx_resources(struct i40e_vsi *vsi)
int i;
for (i = 0; i < vsi->num_queue_pairs; i++)
- if (vsi->rx_rings[i].desc)
- i40e_free_rx_resources(&vsi->rx_rings[i]);
+ if (vsi->rx_rings[i]->desc)
+ i40e_free_rx_resources(vsi->rx_rings[i]);
}
/**
@@ -2227,8 +2227,8 @@ static int i40e_vsi_configure_tx(struct i40e_vsi *vsi)
int err = 0;
u16 i;
- for (i = 0; (i < vsi->num_queue_pairs) && (!err); i++)
- err = i40e_configure_tx_ring(&vsi->tx_rings[i]);
+ for (i = 0; (i < vsi->num_queue_pairs) && !err; i++)
+ err = i40e_configure_tx_ring(vsi->tx_rings[i]);
return err;
}
@@ -2278,7 +2278,7 @@ static int i40e_vsi_configure_rx(struct i40e_vsi *vsi)
/* set up individual rings */
for (i = 0; i < vsi->num_queue_pairs && !err; i++)
- err = i40e_configure_rx_ring(&vsi->rx_rings[i]);
+ err = i40e_configure_rx_ring(vsi->rx_rings[i]);
return err;
}
@@ -2302,8 +2302,8 @@ static void i40e_vsi_config_dcb_rings(struct i40e_vsi *vsi)
qoffset = vsi->tc_config.tc_info[n].qoffset;
qcount = vsi->tc_config.tc_info[n].qcount;
for (i = qoffset; i < (qoffset + qcount); i++) {
- struct i40e_ring *rx_ring = &vsi->rx_rings[i];
- struct i40e_ring *tx_ring = &vsi->tx_rings[i];
+ struct i40e_ring *rx_ring = vsi->rx_rings[i];
+ struct i40e_ring *tx_ring = vsi->tx_rings[i];
rx_ring->dcb_tc = n;
tx_ring->dcb_tc = n;
}
@@ -2615,8 +2615,8 @@ static void i40e_vsi_disable_irq(struct i40e_vsi *vsi)
int i;
for (i = 0; i < vsi->num_queue_pairs; i++) {
- wr32(hw, I40E_QINT_TQCTL(vsi->tx_rings[i].reg_idx), 0);
- wr32(hw, I40E_QINT_RQCTL(vsi->rx_rings[i].reg_idx), 0);
+ wr32(hw, I40E_QINT_TQCTL(vsi->tx_rings[i]->reg_idx), 0);
+ wr32(hw, I40E_QINT_RQCTL(vsi->rx_rings[i]->reg_idx), 0);
}
if (pf->flags & I40E_FLAG_MSIX_ENABLED) {
@@ -2786,8 +2786,8 @@ static irqreturn_t i40e_intr(int irq, void *data)
static void map_vector_to_qp(struct i40e_vsi *vsi, int v_idx, int qp_idx)
{
struct i40e_q_vector *q_vector = vsi->q_vectors[v_idx];
- struct i40e_ring *tx_ring = &(vsi->tx_rings[qp_idx]);
- struct i40e_ring *rx_ring = &(vsi->rx_rings[qp_idx]);
+ struct i40e_ring *tx_ring = vsi->tx_rings[qp_idx];
+ struct i40e_ring *rx_ring = vsi->rx_rings[qp_idx];
tx_ring->q_vector = q_vector;
tx_ring->next = q_vector->tx.ring;
@@ -3792,8 +3792,8 @@ void i40e_down(struct i40e_vsi *vsi)
i40e_napi_disable_all(vsi);
for (i = 0; i < vsi->num_queue_pairs; i++) {
- i40e_clean_tx_ring(&vsi->tx_rings[i]);
- i40e_clean_rx_ring(&vsi->rx_rings[i]);
+ i40e_clean_tx_ring(vsi->tx_rings[i]);
+ i40e_clean_rx_ring(vsi->rx_rings[i]);
}
}
@@ -4220,9 +4220,9 @@ static void i40e_check_hang_subtask(struct i40e_pf *pf)
continue;
for (i = 0; i < vsi->num_queue_pairs; i++) {
- set_check_for_tx_hang(&vsi->tx_rings[i]);
+ set_check_for_tx_hang(vsi->tx_rings[i]);
if (test_bit(__I40E_HANG_CHECK_ARMED,
- &vsi->tx_rings[i].state))
+ &vsi->tx_rings[i]->state))
armed++;
}
@@ -4959,6 +4959,7 @@ static int i40e_vsi_mem_alloc(struct i40e_pf *pf, enum i40e_vsi_type type)
int ret = -ENODEV;
struct i40e_vsi *vsi;
int sz_vectors;
+ int sz_rings;
int vsi_idx;
int i;
@@ -5004,7 +5005,18 @@ static int i40e_vsi_mem_alloc(struct i40e_pf *pf, enum i40e_vsi_type type)
vsi->work_limit = I40E_DEFAULT_IRQ_WORK;
INIT_LIST_HEAD(&vsi->mac_filter_list);
- i40e_set_num_rings_in_vsi(vsi);
+ ret = i40e_set_num_rings_in_vsi(vsi);
+ if (ret)
+ goto err_rings;
+
+ /* allocate memory for ring pointers */
+ sz_rings = sizeof(struct i40e_ring *) * vsi->alloc_queue_pairs * 2;
+ vsi->tx_rings = kzalloc(sz_rings, GFP_KERNEL);
+ if (!vsi->tx_rings) {
+ ret = -ENOMEM;
+ goto err_rings;
+ }
+ vsi->rx_rings = &vsi->tx_rings[vsi->alloc_queue_pairs];
/* allocate memory for q_vector pointers */
sz_vectors = sizeof(struct i40e_q_vectors *) * vsi->num_q_vectors;
@@ -5022,6 +5034,8 @@ static int i40e_vsi_mem_alloc(struct i40e_pf *pf, enum i40e_vsi_type type)
goto unlock_pf;
err_vectors:
+ kfree(vsi->tx_rings);
+err_rings:
pf->next_vsi = i - 1;
kfree(vsi);
unlock_pf:
@@ -5067,6 +5081,7 @@ static int i40e_vsi_clear(struct i40e_vsi *vsi)
/* free the ring and vector containers */
kfree(vsi->q_vectors);
+ kfree(vsi->tx_rings);
pf->vsi[vsi->idx] = NULL;
if (vsi->idx < pf->next_vsi)
@@ -5081,34 +5096,39 @@ free_vsi:
}
/**
+ * i40e_vsi_clear_rings - Deallocates the Rx and Tx rings for the provided VSI
+ * @vsi: the VSI being cleaned
+ **/
+static s32 i40e_vsi_clear_rings(struct i40e_vsi *vsi)
+{
+ int i;
+
+ for (i = 0; i < vsi->alloc_queue_pairs; i++) {
+ kfree_rcu(vsi->tx_rings[i], rcu);
+ vsi->tx_rings[i] = NULL;
+ vsi->rx_rings[i] = NULL;
+ }
+
+ return 0;
+}
+
+/**
* i40e_alloc_rings - Allocates the Rx and Tx rings for the provided VSI
* @vsi: the VSI being configured
**/
static int i40e_alloc_rings(struct i40e_vsi *vsi)
{
struct i40e_pf *pf = vsi->back;
- int ret = 0;
int i;
- vsi->rx_rings = kcalloc(vsi->alloc_queue_pairs,
- sizeof(struct i40e_ring), GFP_KERNEL);
- if (!vsi->rx_rings) {
- ret = -ENOMEM;
- goto err_alloc_rings;
- }
-
- vsi->tx_rings = kcalloc(vsi->alloc_queue_pairs,
- sizeof(struct i40e_ring), GFP_KERNEL);
- if (!vsi->tx_rings) {
- ret = -ENOMEM;
- kfree(vsi->rx_rings);
- goto err_alloc_rings;
- }
-
/* Set basic values in the rings to be used later during open() */
for (i = 0; i < vsi->alloc_queue_pairs; i++) {
- struct i40e_ring *rx_ring = &vsi->rx_rings[i];
- struct i40e_ring *tx_ring = &vsi->tx_rings[i];
+ struct i40e_ring *tx_ring;
+ struct i40e_ring *rx_ring;
+
+ tx_ring = kzalloc(sizeof(struct i40e_ring) * 2, GFP_KERNEL);
+ if (!tx_ring)
+ goto err_out;
tx_ring->queue_index = i;
tx_ring->reg_idx = vsi->base_queue + i;
@@ -5119,7 +5139,9 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
tx_ring->count = vsi->num_desc;
tx_ring->size = 0;
tx_ring->dcb_tc = 0;
+ vsi->tx_rings[i] = tx_ring;
+ rx_ring = &tx_ring[1];
rx_ring->queue_index = i;
rx_ring->reg_idx = vsi->base_queue + i;
rx_ring->ring_active = false;
@@ -5133,24 +5155,14 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
set_ring_16byte_desc_enabled(rx_ring);
else
clear_ring_16byte_desc_enabled(rx_ring);
- }
-
-err_alloc_rings:
- return ret;
-}
-
-/**
- * i40e_vsi_clear_rings - Deallocates the Rx and Tx rings for the provided VSI
- * @vsi: the VSI being cleaned
- **/
-static int i40e_vsi_clear_rings(struct i40e_vsi *vsi)
-{
- if (vsi) {
- kfree(vsi->rx_rings);
- kfree(vsi->tx_rings);
+ vsi->rx_rings[i] = rx_ring;
}
return 0;
+
+err_out:
+ i40e_vsi_clear_rings(vsi);
+ return -ENOMEM;
}
/**
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index f153f37..9eee551 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -64,7 +64,7 @@ int i40e_program_fdir_filter(struct i40e_fdir_data *fdir_data,
if (!vsi)
return -ENOENT;
- tx_ring = &vsi->tx_rings[0];
+ tx_ring = vsi->tx_rings[0];
dev = tx_ring->dev;
dma = dma_map_single(dev, fdir_data->raw_packet,
@@ -1823,7 +1823,7 @@ netdev_tx_t i40e_lan_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
- struct i40e_ring *tx_ring = &vsi->tx_rings[skb->queue_mapping];
+ struct i40e_ring *tx_ring = vsi->tx_rings[skb->queue_mapping];
/* hardware can't handle really short frames, hardware padding works
* beyond this point
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index c2a6746..5db36c3 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -228,6 +228,8 @@ struct i40e_ring {
struct i40e_vsi *vsi; /* Backreference to associated VSI */
struct i40e_q_vector *q_vector; /* Backreference to associated vector */
+
+ struct rcu_head rcu; /* to avoid race on free */
} ____cacheline_internodealigned_in_smp;
enum i40e_latency_range {
--
1.8.3.1
* [net-next 12/13] i40e: Add support for 64 bit netstats
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (10 preceding siblings ...)
2013-10-10 6:41 ` [net-next 11/13] i40e: Move rings from pointer to array to array of pointers Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 6:41 ` [net-next 13/13] i40e: Bump version Jeff Kirsher
2013-10-10 19:30 ` [net-next 00/13][pull request] Intel Wired LAN Driver Updates David Miller
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Alexander Duyck, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Alexander Duyck <alexander.h.duyck@intel.com>
This change brings support for 64 bit netstats to the driver. Previously
the stats were 64 bit but highly racy due to the fact that 64 bit
transactions are not atomic on 32 bit systems. This change makes it so
that the 64 bit byte and packet stats are reliable on all architectures.
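For readers unfamiliar with the mechanism, the following minimal sketch shows
the u64_stats_sync writer/reader pattern the patch introduces; the structure
and function names are generic placeholders, only the u64_stats_* helpers are
the real kernel API.

#include <linux/u64_stats_sync.h>

/* Illustrative only: 'demo_ring' stands in for a driver ring structure. */
struct demo_ring {
	u64 packets;
	u64 bytes;
	struct u64_stats_sync syncp;
};

/* Writer side, e.g. the ring clean-up routine (single writer per ring). */
static void demo_ring_account(struct demo_ring *ring, u64 pkts, u64 bytes)
{
	u64_stats_update_begin(&ring->syncp);
	ring->packets += pkts;
	ring->bytes += bytes;
	u64_stats_update_end(&ring->syncp);
}

/* Reader side: retry until a consistent snapshot is seen, which matters
 * on 32 bit systems where a 64 bit update takes two stores.
 */
static void demo_ring_snapshot(struct demo_ring *ring, u64 *pkts, u64 *bytes)
{
	unsigned int start;

	do {
		start = u64_stats_fetch_begin_bh(&ring->syncp);
		*pkts = ring->packets;
		*bytes = ring->bytes;
	} while (u64_stats_fetch_retry_bh(&ring->syncp, start));
}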
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 27 +++++++--
drivers/net/ethernet/intel/i40e/i40e_main.c | 78 ++++++++++++++++++++++----
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 4 ++
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 1 +
4 files changed, 95 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 50153ea..1b86138 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -579,6 +579,7 @@ static void i40e_get_ethtool_stats(struct net_device *netdev,
char *p;
int j;
struct rtnl_link_stats64 *net_stats = i40e_get_vsi_stats_struct(vsi);
+ unsigned int start;
i40e_update_stats(vsi);
@@ -587,12 +588,30 @@ static void i40e_get_ethtool_stats(struct net_device *netdev,
data[i++] = (i40e_gstrings_net_stats[j].sizeof_stat ==
sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
}
+ rcu_read_lock();
for (j = 0; j < vsi->num_queue_pairs; j++, i += 4) {
- data[i] = vsi->tx_rings[j]->stats.packets;
- data[i + 1] = vsi->tx_rings[j]->stats.bytes;
- data[i + 2] = vsi->rx_rings[j]->stats.packets;
- data[i + 3] = vsi->rx_rings[j]->stats.bytes;
+ struct i40e_ring *tx_ring = ACCESS_ONCE(vsi->tx_rings[j]);
+ struct i40e_ring *rx_ring;
+
+ if (!tx_ring)
+ continue;
+
+ /* process Tx ring statistics */
+ do {
+ start = u64_stats_fetch_begin_bh(&tx_ring->syncp);
+ data[i] = tx_ring->stats.packets;
+ data[i + 1] = tx_ring->stats.bytes;
+ } while (u64_stats_fetch_retry_bh(&tx_ring->syncp, start));
+
+ /* Rx ring is the 2nd half of the queue pair */
+ rx_ring = &tx_ring[1];
+ do {
+ start = u64_stats_fetch_begin_bh(&rx_ring->syncp);
+ data[i + 2] = rx_ring->stats.packets;
+ data[i + 3] = rx_ring->stats.bytes;
+ } while (u64_stats_fetch_retry_bh(&rx_ring->syncp, start));
}
+ rcu_read_unlock();
if (vsi == pf->vsi[pf->lan_vsi]) {
for (j = 0; j < I40E_GLOBAL_STATS_LEN; j++) {
p = (char *)pf + i40e_gstrings_stats[j].stat_offset;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 3cf23f5..24ee5d4 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -347,14 +347,53 @@ struct rtnl_link_stats64 *i40e_get_vsi_stats_struct(struct i40e_vsi *vsi)
**/
static struct rtnl_link_stats64 *i40e_get_netdev_stats_struct(
struct net_device *netdev,
- struct rtnl_link_stats64 *storage)
+ struct rtnl_link_stats64 *stats)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
+ struct rtnl_link_stats64 *vsi_stats = i40e_get_vsi_stats_struct(vsi);
+ int i;
+
+ rcu_read_lock();
+ for (i = 0; i < vsi->num_queue_pairs; i++) {
+ struct i40e_ring *tx_ring, *rx_ring;
+ u64 bytes, packets;
+ unsigned int start;
+
+ tx_ring = ACCESS_ONCE(vsi->tx_rings[i]);
+ if (!tx_ring)
+ continue;
- *storage = *i40e_get_vsi_stats_struct(vsi);
+ do {
+ start = u64_stats_fetch_begin_bh(&tx_ring->syncp);
+ packets = tx_ring->stats.packets;
+ bytes = tx_ring->stats.bytes;
+ } while (u64_stats_fetch_retry_bh(&tx_ring->syncp, start));
+
+ stats->tx_packets += packets;
+ stats->tx_bytes += bytes;
+ rx_ring = &tx_ring[1];
+
+ do {
+ start = u64_stats_fetch_begin_bh(&rx_ring->syncp);
+ packets = rx_ring->stats.packets;
+ bytes = rx_ring->stats.bytes;
+ } while (u64_stats_fetch_retry_bh(&rx_ring->syncp, start));
- return storage;
+ stats->rx_packets += packets;
+ stats->rx_bytes += bytes;
+ }
+ rcu_read_unlock();
+
+ /* following stats updated by the watchdog task */
+ stats->multicast = vsi_stats->multicast;
+ stats->tx_errors = vsi_stats->tx_errors;
+ stats->tx_dropped = vsi_stats->tx_dropped;
+ stats->rx_errors = vsi_stats->rx_errors;
+ stats->rx_crc_errors = vsi_stats->rx_crc_errors;
+ stats->rx_length_errors = vsi_stats->rx_length_errors;
+
+ return stats;
}
/**
@@ -708,21 +747,38 @@ void i40e_update_stats(struct i40e_vsi *vsi)
tx_restart = tx_busy = 0;
rx_page = 0;
rx_buf = 0;
+ rcu_read_lock();
for (q = 0; q < vsi->num_queue_pairs; q++) {
struct i40e_ring *p;
+ u64 bytes, packets;
+ unsigned int start;
- p = vsi->rx_rings[q];
- rx_b += p->stats.bytes;
- rx_p += p->stats.packets;
- rx_buf += p->rx_stats.alloc_rx_buff_failed;
- rx_page += p->rx_stats.alloc_rx_page_failed;
+ /* locate Tx ring */
+ p = ACCESS_ONCE(vsi->tx_rings[q]);
- p = vsi->tx_rings[q];
- tx_b += p->stats.bytes;
- tx_p += p->stats.packets;
+ do {
+ start = u64_stats_fetch_begin_bh(&p->syncp);
+ packets = p->stats.packets;
+ bytes = p->stats.bytes;
+ } while (u64_stats_fetch_retry_bh(&p->syncp, start));
+ tx_b += bytes;
+ tx_p += packets;
tx_restart += p->tx_stats.restart_queue;
tx_busy += p->tx_stats.tx_busy;
+
+ /* Rx queue is part of the same block as Tx queue */
+ p = &p[1];
+ do {
+ start = u64_stats_fetch_begin_bh(&p->syncp);
+ packets = p->stats.packets;
+ bytes = p->stats.bytes;
+ } while (u64_stats_fetch_retry_bh(&p->syncp, start));
+ rx_b += bytes;
+ rx_p += packets;
+ rx_buf += p->rx_stats.alloc_rx_buff_failed;
+ rx_page += p->rx_stats.alloc_rx_page_failed;
}
+ rcu_read_unlock();
vsi->tx_restart = tx_restart;
vsi->tx_busy = tx_busy;
vsi->rx_page_failed = rx_page;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 9eee551..dc89e72 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -411,8 +411,10 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
i += tx_ring->count;
tx_ring->next_to_clean = i;
+ u64_stats_update_begin(&tx_ring->syncp);
tx_ring->stats.bytes += total_bytes;
tx_ring->stats.packets += total_packets;
+ u64_stats_update_end(&tx_ring->syncp);
tx_ring->q_vector->tx.total_bytes += total_bytes;
tx_ring->q_vector->tx.total_packets += total_packets;
@@ -1075,8 +1077,10 @@ next_desc:
}
rx_ring->next_to_clean = i;
+ u64_stats_update_begin(&rx_ring->syncp);
rx_ring->stats.packets += total_rx_packets;
rx_ring->stats.bytes += total_rx_bytes;
+ u64_stats_update_end(&rx_ring->syncp);
rx_ring->q_vector->rx.total_packets += total_rx_packets;
rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 5db36c3..db55d99 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -218,6 +218,7 @@ struct i40e_ring {
/* stats structs */
struct i40e_queue_stats stats;
+ struct u64_stats_sync syncp;
union {
struct i40e_tx_queue_stats tx_stats;
struct i40e_rx_queue_stats rx_stats;
--
1.8.3.1
* [net-next 13/13] i40e: Bump version
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (11 preceding siblings ...)
2013-10-10 6:41 ` [net-next 12/13] i40e: Add support for 64 bit netstats Jeff Kirsher
@ 2013-10-10 6:41 ` Jeff Kirsher
2013-10-10 19:30 ` [net-next 00/13][pull request] Intel Wired LAN Driver Updates David Miller
13 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2013-10-10 6:41 UTC (permalink / raw)
To: davem
Cc: Catherine Sullivan, netdev, gospo, sassmann, Jesse Brandeburg,
Jeff Kirsher
From: Catherine Sullivan <catherine.sullivan@intel.com>
Update the version number of the driver.
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 24ee5d4..fbe7fe2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -36,7 +36,7 @@ static const char i40e_driver_string[] =
#define DRV_VERSION_MAJOR 0
#define DRV_VERSION_MINOR 3
-#define DRV_VERSION_BUILD 9
+#define DRV_VERSION_BUILD 10
#define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \
__stringify(DRV_VERSION_MINOR) "." \
__stringify(DRV_VERSION_BUILD) DRV_KERN
--
1.8.3.1
* Re: [net-next 00/13][pull request] Intel Wired LAN Driver Updates
2013-10-10 6:40 [net-next 00/13][pull request] Intel Wired LAN Driver Updates Jeff Kirsher
` (12 preceding siblings ...)
2013-10-10 6:41 ` [net-next 13/13] i40e: Bump version Jeff Kirsher
@ 2013-10-10 19:30 ` David Miller
13 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2013-10-10 19:30 UTC (permalink / raw)
To: jeffrey.t.kirsher; +Cc: netdev, gospo, sassmann
From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Date: Wed, 9 Oct 2013 23:40:58 -0700
> This series contains updates to i40e only.
>
> Alex provides the majority of the patches against i40e, where he cleans
> up the Tx and Rx queues and aligns the code with the known good Tx/Rx
> queue code in the ixgbe driver.
>
> Anjali provides an i40e patch to update link events to not print to
> the log until the device is administratively up.
>
> Catherine provides a patch to update the driver version.
Pulled, thanks a lot Jeff.
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2014-03-06 4:21 Jeff Kirsher
0 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2014-03-06 4:21 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to i40e and i40evf.
Most notable are:
Joe Perches provides a change to convert ether_addr_equal() to
ether_addr_equal_64bits(), which is a bit more efficient.
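As a rough illustration (a sketch with made-up structure names, not code from
the series): ether_addr_equal_64bits() compares MAC addresses using wider
loads, but it requires at least two accessible bytes after each 6-byte
address, so it is only a drop-in replacement where the surrounding structure
guarantees that padding.

#include <linux/etherdevice.h>

/* Hypothetical filter entry; the u16 after 'addr' provides the two bytes
 * of padding that ether_addr_equal_64bits() is allowed to read past the
 * 6-byte address.
 */
struct demo_mac_filter {
	u8 addr[ETH_ALEN];
	u16 state;
};

static bool demo_filter_matches(const struct demo_mac_filter *a,
				const struct demo_mac_filter *b)
{
	/* one wide load per address on 64 bit builds instead of a
	 * byte-by-byte comparison
	 */
	return ether_addr_equal_64bits(a->addr, b->addr);
}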
Joseph completes the implementation of the ethtool ntuple rule
management interface by adding the get, update and delete interfaces.
Akeem provides a fix to prevent a possible overflow due to the
multiplication of number and size in kzalloc by switching to kcalloc.
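For context, a minimal sketch of the pattern such a fix follows (illustrative
names only): kcalloc() checks the count * size multiplication and returns
NULL on overflow, whereas an open-coded kzalloc(count * size, ...) can
silently wrap.

#include <linux/slab.h>

struct demo_entry {
	u32 id;
};

static struct demo_entry *demo_alloc_table(size_t count)
{
	/* kzalloc(count * sizeof(...), GFP_KERNEL) could overflow the
	 * multiplication; kcalloc() performs the overflow check itself
	 */
	return kcalloc(count, sizeof(struct demo_entry), GFP_KERNEL);
}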
Jesse provides an i40e implementation for skb_set_hash() and reports the
L4 hash type when we know the hardware hash is an L4 hash. He also adds
a Tx timeout counter to the statistics to help users. Lastly, he
provides a change to stay away from the cache line where the done bit
may be getting written back for the transmit ring, since the hardware
may write back the whole cache line for a partial update.
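A minimal sketch of the receive-path pattern being described (the helper and
its arguments are placeholders; skb_set_hash() and the PKT_HASH_TYPE_* values
are the real API): the hardware RSS hash is handed to the stack together with
its type, so an L4 hash is only advertised as such when the descriptor says
so.

#include <linux/skbuff.h>

/* 'is_l4' would be derived from the Rx descriptor's packet-type field. */
static void demo_rx_set_hash(struct sk_buff *skb, u32 rss_hash, bool is_l4)
{
	skb_set_hash(skb, rss_hash,
		     is_l4 ? PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3);
}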
Shannon cleans up code comments.
Anjali removes a firmware workaround for newer firmware since the number
of MSIx vectors is now being reported correctly.
The following are changes since commit e50287be7c007a10e6e2e3332e52466faf4b6a04:
be2net: dma_sync each RX frag before passing it to the stack
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Akeem G Abodunrin (1):
i40e: Prevent overflow due to kzalloc
Anjali Singhai Jain (2):
i40e: Remove a FW workaround for Number of MSIX vectors
i40e: Remove a redundant filter addition
Catherine Sullivan (1):
i40e/i40evf: Bump pf&vf build versions
Greg Rose (1):
i40evf: Enable the ndo_set_features netdev op
Jesse Brandeburg (4):
i40e/i40evf: i40e implementation for skb_set_hash
i40e: count timeout events
i40e: fix nvm version and remove firmware report
i40e/i40evf: carefully fill tx ring
Joe Perches (1):
i40e: use ether_addr_equal_64bits
Joseph Gasparakis (1):
i40e: Flow Director sideband accounting
Neerav Parikh (1):
i40e: Fix static checker warning
Shannon Nelson (1):
i40e: clean up comment style
drivers/net/ethernet/intel/i40e/i40e.h | 37 +-
drivers/net/ethernet/intel/i40e/i40e_common.c | 366 ++++++++++++++++++
drivers/net/ethernet/intel/i40e/i40e_dcb.c | 9 +-
drivers/net/ethernet/intel/i40e/i40e_debugfs.c | 27 +-
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 429 +++++++++------------
drivers/net/ethernet/intel/i40e/i40e_main.c | 88 ++++-
drivers/net/ethernet/intel/i40e/i40e_nvm.c | 117 +++---
drivers/net/ethernet/intel/i40e/i40e_prototype.h | 7 +
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 310 ++++++++++++++-
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c | 2 +-
drivers/net/ethernet/intel/i40evf/i40e_common.c | 366 ++++++++++++++++++
drivers/net/ethernet/intel/i40evf/i40e_prototype.h | 7 +
drivers/net/ethernet/intel/i40evf/i40e_txrx.c | 33 +-
drivers/net/ethernet/intel/i40evf/i40evf_main.c | 7 +-
14 files changed, 1443 insertions(+), 362 deletions(-)
--
1.8.3.1
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2014-03-12 5:53 Jeff Kirsher
0 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2014-03-12 5:53 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to igb, e1000e, ixgbe and ixgbevf.
Tom Herbert provides changes to e1000e, igb and ixgbe to call skb_set_hash()
to set the hash and its type in an skbuff.
Carolyn provides a fix for igb where using ethtool for EEE settings
was not working correctly. She also provides patches to add debugfs
support for igb.
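For readers unfamiliar with the debugfs plumbing such patches add, here is a
minimal generic sketch (all names are placeholders, not the actual igb entry
points): a per-driver directory is created at init, files are hung off it,
and everything is removed at exit.

#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/seq_file.h>

static struct dentry *demo_dbg_root;

static int demo_dbg_show(struct seq_file *s, void *unused)
{
	seq_puts(s, "a register or ring dump would go here\n");
	return 0;
}

static int demo_dbg_open(struct inode *inode, struct file *file)
{
	return single_open(file, demo_dbg_show, inode->i_private);
}

static const struct file_operations demo_dbg_fops = {
	.owner	 = THIS_MODULE,
	.open	 = demo_dbg_open,
	.read	 = seq_read,
	.llseek	 = seq_lseek,
	.release = single_release,
};

static void demo_dbg_init(void)
{
	demo_dbg_root = debugfs_create_dir("demo_driver", NULL);
	debugfs_create_file("dump", 0400, demo_dbg_root, NULL,
			    &demo_dbg_fops);
}

static void demo_dbg_exit(void)
{
	debugfs_remove_recursive(demo_dbg_root);
}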
Jacob provides some trivial cleanups and fixes for ixgbe which mainly
deal with the file headers.
Julia Lawall provides a fix for ixgbevf where the driver did not need
to adjust the power state on suspend, so the call to pci_set_power_state()
in the resume function was a no-op.
The following are changes since commit a19a7ec8fc8eb32113efeaff2a1ceca273726e9b:
bonding: force cast of IP address in options
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Carolyn Wyborny (4):
igb: Fix for devices using ethtool for EEE settings
igb: Add debugfs skeleton
igb: Add support for debugfs
igb: Add debugfs command/register read and write functionality
Jacob Keller (4):
ixgbe: move setting rx_pb_size into get_invariants
ixgbe: add Linux NICS mailing list to contact info
ixgbe: fixup header for ixgbe_set_rxpba_82598
ixgbe: fix some multiline hw_dbg prints
Julia Lawall (1):
ixgbevf: delete unneeded call to pci_set_power_state
Masanari Iida (1):
ixgbe: Fix format string in ixgbe_fcoe.c
Tom Herbert (3):
net: e1000e calls skb_set_hash
net: igb calls skb_set_hash
net: ixgbe calls skb_set_hash
drivers/net/ethernet/intel/e1000e/netdev.c | 2 +-
drivers/net/ethernet/intel/igb/Makefile | 2 +-
drivers/net/ethernet/intel/igb/e1000_82575.h | 13 +
drivers/net/ethernet/intel/igb/e1000_defines.h | 4 +-
drivers/net/ethernet/intel/igb/e1000_regs.h | 2 +
drivers/net/ethernet/intel/igb/igb.h | 33 +
drivers/net/ethernet/intel/igb/igb_debugfs.c | 707 +++++++++++++++++++++
drivers/net/ethernet/intel/igb/igb_ethtool.c | 45 +-
drivers/net/ethernet/intel/igb/igb_main.c | 126 +++-
drivers/net/ethernet/intel/ixgbe/ixgbe.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c | 18 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c | 11 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_common.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c | 3 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 5 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_sysfs.c | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_type.h | 1 +
drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c | 3 +-
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 1 -
33 files changed, 940 insertions(+), 53 deletions(-)
create mode 100644 drivers/net/ethernet/intel/igb/igb_debugfs.c
--
1.8.3.1
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2014-03-31 23:34 Jeff Kirsher
2014-04-01 1:19 ` David Miller
0 siblings, 1 reply; 33+ messages in thread
From: Jeff Kirsher @ 2014-03-31 23:34 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains fixes to e1000e, igb, ixgbe, ixgebvf, i40e and
i40evf.
David provides a fix for e1000e to resolve an issue where the device is
capable of transmitting packets but is unable to receive packets until
a previously introduced workaround is called.
Jakub Kicinski provides PTP fixes for ixgbe, which include removing a
redundant if clause and making sure we do not generate both a software
and a hardware timestamp. He also fixes a race condition and skb leaks
when multiple transmit rings try to claim time stamping.
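To illustrate the double-timestamp problem in general terms (a sketch under
assumptions, not the actual ixgbe change): when a hardware timestamp has been
requested for an skb, the transmit path should skip the software timestamp,
otherwise the socket can receive both.

#include <linux/skbuff.h>

static void demo_tx_timestamp(struct sk_buff *skb)
{
	if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
		/* mark the skb so the stack knows a timestamp is pending
		 * and hand it to the driver's PTP machinery instead
		 */
		skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
		return;
	}

	/* software timestamp only when no hardware timestamp was requested */
	skb_tx_timestamp(skb);
}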
Jean Sacren fixes a function declaration in ixgbe which was introduced
in commit c97506ab0e22 ("ixgbe: Add check for FW veto bit"). He also
fixes a function header comment in i40e and fixes the error checking
by binding the check to the pertinent DMA bit mask.
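As a generic illustration of the DMA-mask pattern being corrected (a sketch,
not the i40e code): the 32 bit fallback and the error path should be driven
by the mask that was actually requested.

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int demo_set_dma_mask(struct pci_dev *pdev)
{
	int err;

	/* prefer 64 bit DMA, fall back to 32 bit, and fail only if the
	 * 32 bit attempt also fails
	 */
	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (err) {
		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
		if (err)
			dev_err(&pdev->dev, "DMA configuration failed: %d\n", err);
	}

	return err;
}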
Mark provides several fixes for ixgbe and ixgbevf. Most notable are
fixes to resolve namespace issues and to fix the RCU warnings induced
by LER for ixgbe and ixgbevf.
Joe Perches fixes up unnecessary casts in i40e and i40evf.
Peter Senna Tschudin fixes igb to use pci_iounmap when the virtual mapping
was done with pci_iomap.
The following are changes since commit 0b70195e0c3206103be991e196c26fcf168d0334:
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
David Ertman (1):
e1000e: Fix no connectivity when driver loaded with cable out
Jakub Kicinski (3):
ixgbe: remove redundant if clause from PTP work
ixgbe: never generate both software and hardware timestamps
ixgbe: fix race conditions on queuing skb for HW time stamp
Jean Sacren (3):
ixgbe: fix ixgbe_check_reset_blocked() declaration
i40e: fix function kernel doc description
i40e/i40evf: fix error checking path
Joe Perches (2):
i40e/i40evf: Remove addressof casts to same type
i40e: Remove casts of pointer to same type
Mark Rustad (3):
ixgbevf: Change ixgbe_read_reg to ixgbevf_read_reg
ixgbe: Fix rcu warnings induced by LER
ixgbevf: Fix rcu warnings induced by LER
Peter Senna Tschudin (1):
INTEL-IGB: Convert iounmap to pci_iounmap
drivers/net/ethernet/intel/e1000e/netdev.c | 20 +++++++++++----
drivers/net/ethernet/intel/i40e/i40e_common.c | 4 +--
drivers/net/ethernet/intel/i40e/i40e_ethtool.c | 4 +--
drivers/net/ethernet/intel/i40e/i40e_main.c | 11 ++++----
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 2 +-
drivers/net/ethernet/intel/i40evf/i40e_common.c | 3 +--
drivers/net/ethernet/intel/i40evf/i40evf_main.c | 11 ++++----
drivers/net/ethernet/intel/igb/igb_main.c | 4 +--
drivers/net/ethernet/intel/ixgbe/ixgbe.h | 2 ++
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 31 +++++++++++++++++------
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c | 2 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h | 2 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c | 7 +++--
drivers/net/ethernet/intel/ixgbevf/ethtool.c | 8 +++---
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h | 1 +
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 26 ++++++++++++++-----
drivers/net/ethernet/intel/ixgbevf/vf.h | 6 ++---
17 files changed, 92 insertions(+), 52 deletions(-)
--
1.9.0
* Re: [net-next 00/13][pull request] Intel Wired LAN Driver Updates
2014-03-31 23:34 Jeff Kirsher
@ 2014-04-01 1:19 ` David Miller
0 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2014-04-01 1:19 UTC (permalink / raw)
To: jeffrey.t.kirsher; +Cc: netdev, gospo, sassmann
From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Date: Mon, 31 Mar 2014 16:34:46 -0700
> This series contains fixes to e1000e, igb, ixgbe, ixgebvf, i40e and
> i40evf.
Pulled, thanks Jeff.
* [net-next 00/13][pull request] Intel Wired LAN Driver Updates
@ 2014-04-24 10:30 Jeff Kirsher
0 siblings, 0 replies; 33+ messages in thread
From: Jeff Kirsher @ 2014-04-24 10:30 UTC (permalink / raw)
To: davem; +Cc: Jeff Kirsher, netdev, gospo, sassmann
This series contains updates to igb only.
Carolyn provides a number of cleanups to fix checkpatch warnings/errors
and two minor issues found by coccicheck.
The following are changes since commit 573be693ce72753c71cb957c2d38fd9cc6d9f568:
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next master
Carolyn Wyborny (13):
igb: Cleanups to fix pointer location error
igb: Cleanups to fix for trailing statement
igb: Cleanups to change comment style on license headers
igb: Cleanups to fix assignment in if error
igb: Cleanups to fix missing break in switch statements
igb: Cleanups to remove return parentheses
igb: Cleanups to fix line length warnings
igb: Cleanups to fix msleep warnings
igb: Cleanups to fix static initialization
igb: Cleanups to replace deprecated DEFINE_PCI_DEVICE_TABLE
igb: Cleanups to remove unneeded extern declaration
igb: Replace 1/0 return values with true/false
igb: Change memcpy to struct assignment
drivers/net/ethernet/intel/igb/e1000_82575.c | 69 ++++++++++----------
drivers/net/ethernet/intel/igb/e1000_82575.h | 50 +++++++--------
drivers/net/ethernet/intel/igb/e1000_defines.h | 71 +++++++++------------
drivers/net/ethernet/intel/igb/e1000_hw.h | 46 ++++++--------
drivers/net/ethernet/intel/igb/e1000_i210.c | 48 +++++++-------
drivers/net/ethernet/intel/igb/e1000_i210.h | 47 +++++++-------
drivers/net/ethernet/intel/igb/e1000_mac.c | 49 +++++++-------
drivers/net/ethernet/intel/igb/e1000_mac.h | 47 +++++++-------
drivers/net/ethernet/intel/igb/e1000_mbx.c | 47 +++++++-------
drivers/net/ethernet/intel/igb/e1000_mbx.h | 47 +++++++-------
drivers/net/ethernet/intel/igb/e1000_nvm.c | 46 ++++++--------
drivers/net/ethernet/intel/igb/e1000_nvm.h | 47 +++++++-------
drivers/net/ethernet/intel/igb/e1000_phy.c | 49 +++++++-------
drivers/net/ethernet/intel/igb/e1000_phy.h | 47 +++++++-------
drivers/net/ethernet/intel/igb/e1000_regs.h | 47 +++++++-------
drivers/net/ethernet/intel/igb/igb.h | 48 +++++++-------
drivers/net/ethernet/intel/igb/igb_ethtool.c | 88 ++++++++++++++------------
drivers/net/ethernet/intel/igb/igb_hwmon.c | 47 +++++++-------
drivers/net/ethernet/intel/igb/igb_main.c | 87 +++++++++++++------------
19 files changed, 486 insertions(+), 541 deletions(-)
--
1.9.0