netdev.vger.kernel.org archive mirror
* [RFC net-next 00/14] default maximal number of RSS queues in mq drivers
@ 2012-06-19 15:13 Yuval Mintz
  2012-06-19 15:13 ` [RFC net-next 01/14] Add Default Yuval Mintz
                   ` (15 more replies)
  0 siblings, 16 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:13 UTC (permalink / raw)
  To: netdev, davem
  Cc: eilong, Yuval Mintz, Divy Le Ray, Or Gerlitz, Jon Mason,
	Anirban Chakraborty, Jitendra Kalsaria, Ron Mercer, Jeff Kirsher,
	Jon Mason, Andrew Gallatin, Sathya Perla, Subbu Seetharaman,
	Ajit Khaparde, Matt Carlson, Michael Chan

Different vendors support different numbers of RSS queues by default. Today,
there exists an ethtool API through which users can change the number of
channels their driver supports; this enables us to pursue the goal of using
a default number of RSS queues in various multi-queue drivers.

This RFC intends to achieve the above default by upper-limiting the number
of interrupts multi-queue drivers request (by default, not via the new API)
in correlation with the number of CPUs on the machine.

After examining multi-queue drivers that call alloc_etherdev_mq[s],
it became evident that most drivers allocate their devices using hard-coded
values. Changing those defaults directly will most likely cause a regression. 

However, most multi-queue drivers look at the number of online CPUs when
requesting interrupts. We assume that the number of interrupts a driver
manages to request is propagated across the driver, and that the number
of RSS queues it configures is based upon it.

This RFC modifies said logic - if the number of CPUs is large enough, a
smaller default value is used instead. This serves 2 main purposes:
 1. It is a step toward uniformity in the number of RSS queues across drivers.
 2. It prevents wasteful interrupt requests on machines with many CPUs.

Notice that no testing was done on this RFC (other than on the bnx2x driver)
except for a compilation test.

Drivers identified as multi-queue, handled in this RFC:

* mellanox mlx4
* neterion vxge
* qlogic   qlge
* intel    igb, ixgbe, ixgbevf
* chelsio  cxgb3, cxgb4
* myricom  myri10ge
* emulex   benet
* broadcom tg3, bnx2, bnx2x

Drivers identified as multi-queue, where no reference to the number of online
CPUs was found, and thus unhandled in this RFC:

* neterion  s2io
* marvell   mv643xx
* freescale gianfar
* ibm       ehea
* ti        cpmac
* sun       niu
* sfc       efx
* chelsio   cxgb4vf

Cheers,
Yuval Mintz

Cc: Divy Le Ray <divy@chelsio.com>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Cc: Jon Mason <jdmason@kudzu.us>
Cc: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
Cc: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
Cc: Ron Mercer <ron.mercer@qlogic.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jon Mason <mason@myri.com>
Cc: Andrew Gallatin <gallatin@myri.com>
Cc: Sathya Perla <sathya.perla@emulex.com>
Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
Cc: Matt Carlson <mcarlson@broadcom.com>
Cc: Michael Chan <mchan@broadcom.com>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* [RFC net-next 01/14] Add Default
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
@ 2012-06-19 15:13 ` Yuval Mintz
  2012-06-19 16:37   ` Alexander Duyck
  2012-06-19 15:13 ` [RFC net-next 02/14] Fix mellanox/mlx4 Yuval Mintz
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:13 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 include/linux/etherdevice.h |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
index 3d406e0..bb1ecaf 100644
--- a/include/linux/etherdevice.h
+++ b/include/linux/etherdevice.h
@@ -44,7 +44,10 @@ extern int eth_mac_addr(struct net_device *dev, void *p);
 extern int eth_change_mtu(struct net_device *dev, int new_mtu);
 extern int eth_validate_addr(struct net_device *dev);
 
-
+/* The maximal number of RSS queues a driver should have unless configured
+ * so explicitly.
+ */
+#define DEFAULT_MAX_NUM_RSS_QUEUES (8)
 
 extern struct net_device *alloc_etherdev_mqs(int sizeof_priv, unsigned int txqs,
 					    unsigned int rxqs);
-- 
1.7.9.rc2

* [RFC net-next 02/14] Fix mellanox/mlx4
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
  2012-06-19 15:13 ` [RFC net-next 01/14] Add Default Yuval Mintz
@ 2012-06-19 15:13 ` Yuval Mintz
  2012-06-19 15:13 ` [RFC net-next 03/14] fix neterion/vxge Yuval Mintz
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:13 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Or Gerlitz

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Or Gerlitz <ogerlitz@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx4/main.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index ee6f4fe..57840ad 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -41,6 +41,7 @@
 #include <linux/slab.h>
 #include <linux/io-mapping.h>
 #include <linux/delay.h>
+#include <linux/etherdevice.h>
 
 #include <linux/mlx4/device.h>
 #include <linux/mlx4/doorbell.h>
@@ -1538,8 +1539,9 @@ static void mlx4_enable_msi_x(struct mlx4_dev *dev)
 {
 	struct mlx4_priv *priv = mlx4_priv(dev);
 	struct msix_entry *entries;
+	int ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
 	int nreq = min_t(int, dev->caps.num_ports *
-			 min_t(int, num_online_cpus() + 1, MAX_MSIX_P_PORT)
+			 min_t(int, ncpu + 1, MAX_MSIX_P_PORT)
 				+ MSIX_LEGACY_SZ, MAX_MSIX);
 	int err;
 	int i;
-- 
1.7.9.rc2

* [RFC net-next 03/14] fix neterion/vxge
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
  2012-06-19 15:13 ` [RFC net-next 01/14] Add Default Yuval Mintz
  2012-06-19 15:13 ` [RFC net-next 02/14] Fix mellanox/mlx4 Yuval Mintz
@ 2012-06-19 15:13 ` Yuval Mintz
  2012-06-19 15:13 ` [RFC net-next 04/14] Fix qlogic/qlge Yuval Mintz
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:13 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Jon Mason

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jon Mason <jdmason@kudzu.us>
---
 drivers/net/ethernet/neterion/vxge/vxge-main.c |    9 +++++++--
 1 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c
index 2578eb1..a2c6e3f 100644
--- a/drivers/net/ethernet/neterion/vxge/vxge-main.c
+++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c
@@ -3686,8 +3686,13 @@ static int __devinit vxge_config_vpaths(
 		if (driver_config->g_no_cpus == -1)
 			return 0;
 
-		if (!driver_config->g_no_cpus)
-			driver_config->g_no_cpus = num_online_cpus();
+		if (!driver_config->g_no_cpus) {
+			int ncpu = min_t(int, num_online_cpus(),
+					 DEFAULT_MAX_NUM_RSS_QUEUES);
+			driver_config->g_no_cpus = ncpu;
+		}
+
+
 
 		driver_config->vpath_per_dev = driver_config->g_no_cpus >> 1;
 		if (!driver_config->vpath_per_dev)
-- 
1.7.9.rc2

* [RFC net-next 04/14] Fix qlogic/qlge
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (2 preceding siblings ...)
  2012-06-19 15:13 ` [RFC net-next 03/14] fix neterion/vxge Yuval Mintz
@ 2012-06-19 15:13 ` Yuval Mintz
  2012-06-19 15:13 ` [RFC net-next 05/14] Fix intel/ixgbevf Yuval Mintz
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:13 UTC (permalink / raw)
  To: netdev, davem
  Cc: eilong, Yuval Mintz, Anirban Chakraborty, Jitendra Kalsaria,
	Ron Mercer

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
Cc: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
Cc: Ron Mercer <ron.mercer@qlogic.com>
---
 drivers/net/ethernet/qlogic/qlge/qlge_main.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
index 09d8d33..c6c91de 100644
--- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
@@ -4646,10 +4646,12 @@ static int __devinit qlge_probe(struct pci_dev *pdev,
 	struct net_device *ndev = NULL;
 	struct ql_adapter *qdev = NULL;
 	static int cards_found = 0;
-	int err = 0;
+	int err = 0, ncpu;
+
+	ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
 
 	ndev = alloc_etherdev_mq(sizeof(struct ql_adapter),
-			min(MAX_CPUS, (int)num_online_cpus()));
+				 min(MAX_CPUS, ncpu));
 	if (!ndev)
 		return -ENOMEM;
 
-- 
1.7.9.rc2

* [RFC net-next 05/14] Fix intel/ixgbevf
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (3 preceding siblings ...)
  2012-06-19 15:13 ` [RFC net-next 04/14] Fix qlogic/qlge Yuval Mintz
@ 2012-06-19 15:13 ` Yuval Mintz
  2012-06-19 15:39   ` Alexander Duyck
  2012-06-19 15:14 ` [RFC net-next 06/14] Fix intel/igb Yuval Mintz
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:13 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Jeff Kirsher

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index f69ec42..3ad46c2 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -2014,7 +2014,7 @@ err_tx_ring_allocation:
 static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
 {
 	int err = 0;
-	int vector, v_budget;
+	int vector, v_budget, ncpu;
 
 	/*
 	 * It's easy to be greedy for MSI-X vectors, but it really
@@ -2022,8 +2022,9 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
 	 * than CPU's.  So let's be conservative and only ask for
 	 * (roughly) twice the number of vectors as there are CPU's.
 	 */
+	ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
 	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
-		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
+		       ncpu * 2) + NON_Q_VECTORS;
 
 	/* A failure in MSI-X entry allocation isn't fatal, but it does
 	 * mean we disable MSI-X capabilities of the adapter. */
-- 
1.7.9.rc2

* [RFC net-next 06/14] Fix intel/igb
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (4 preceding siblings ...)
  2012-06-19 15:13 ` [RFC net-next 05/14] Fix intel/ixgbevf Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 15:42   ` Alexander Duyck
  2012-06-19 15:14 ` [RFC net-next 07/14] Fix intel/ixgbe Yuval Mintz
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Jeff Kirsher

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/igb/igb_main.c |   14 ++++++++------
 1 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index dd3bfe8..8e7ade5 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2380,18 +2380,20 @@ static int __devinit igb_sw_init(struct igb_adapter *adapter)
 #endif /* CONFIG_PCI_IOV */
 	switch (hw->mac.type) {
 	case e1000_i210:
-		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES_I210,
-			num_online_cpus());
+		adapter->rss_queues = IGB_MAX_RX_QUEUES_I210;
 		break;
 	case e1000_i211:
-		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES_I211,
-			num_online_cpus());
+		adapter->rss_queues = IGB_MAX_RX_QUEUES_I211;
 		break;
 	default:
-		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES,
-		num_online_cpus());
+		adapter->rss_queues = IGB_MAX_RX_QUEUES;
 		break;
 	}
+
+	adapter->rss_queues = min_t(u32, adapter->rss_queues,
+				    min_t(u32, num_online_cpus(),
+					  DEFAULT_MAX_NUM_RSS_QUEUES));
+
 	/* i350 cannot do RSS and SR-IOV at the same time */
 	if (hw->mac.type == e1000_i350 && adapter->vfs_allocated_count)
 		adapter->rss_queues = 1;
-- 
1.7.9.rc2

* [RFC net-next 07/14] Fix intel/ixgbe
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (5 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 06/14] Fix intel/igb Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 15:54   ` Alexander Duyck
  2012-06-19 15:14 ` [RFC net-next 08/14] Fix chelsio/cxgb3 Yuval Mintz
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Jeff Kirsher

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index af1a531..21e4513 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -802,7 +802,8 @@ static int ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
 	 * The default is to use pairs of vectors.
 	 */
 	v_budget = max(adapter->num_rx_queues, adapter->num_tx_queues);
-	v_budget = min_t(int, v_budget, num_online_cpus());
+	v_budget = min_t(int, v_budget, min_t(int, num_online_cpus(),
+					      DEFAULT_MAX_NUM_RSS_QUEUES));
 	v_budget += NON_Q_VECTORS;
 
 	/*
-- 
1.7.9.rc2

* [RFC net-next 08/14] Fix chelsio/cxgb3
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (6 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 07/14] Fix intel/ixgbe Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 15:14 ` [RFC net-next 09/14] Fix chelsio/cxgb4 Yuval Mintz
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Divy Le Ray

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Divy Le Ray <divy@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
index abb6ce7..17d7829 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
@@ -3050,7 +3050,8 @@ static struct pci_error_handlers t3_err_handler = {
 static void set_nqsets(struct adapter *adap)
 {
 	int i, j = 0;
-	int num_cpus = num_online_cpus();
+	int num_cpus = min_t(int, num_online_cpus(),
+			     DEFAULT_MAX_NUM_RSS_QUEUES);
 	int hwports = adap->params.nports;
 	int nqsets = adap->msix_nvectors - 1;
 
-- 
1.7.9.rc2

* [RFC net-next 09/14] Fix chelsio/cxgb4
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (7 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 08/14] Fix chelsio/cxgb3 Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 15:14 ` [RFC net-next 10/14] Fix myricom/myri10ge Yuval Mintz
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Divy Le Ray

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Divy Le Ray <divy@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index e1f96fb..4a70867 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -3482,7 +3482,7 @@ static inline void init_rspq(struct sge_rspq *q, u8 timer_idx, u8 pkt_cnt_idx,
 static void __devinit cfg_queues(struct adapter *adap)
 {
 	struct sge *s = &adap->sge;
-	int i, q10g = 0, n10g = 0, qidx = 0;
+	int i, q10g = 0, n10g = 0, qidx = 0, ncpu;
 
 	for_each_port(adap, i)
 		n10g += is_10g_port(&adap2pinfo(adap, i)->link_cfg);
@@ -3491,10 +3491,11 @@ static void __devinit cfg_queues(struct adapter *adap)
 	 * We default to 1 queue per non-10G port and up to # of cores queues
 	 * per 10G port.
 	 */
+	ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
 	if (n10g)
 		q10g = (MAX_ETH_QSETS - (adap->params.nports - n10g)) / n10g;
-	if (q10g > num_online_cpus())
-		q10g = num_online_cpus();
+	if (q10g > ncpu)
+		q10g = ncpu;
 
 	for_each_port(adap, i) {
 		struct port_info *pi = adap2pinfo(adap, i);
-- 
1.7.9.rc2

* [RFC net-next 10/14] Fix myricom/myri10ge
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (8 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 09/14] Fix chelsio/cxgb4 Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 15:14 ` [RFC net-next 11/14] Fix emulex/benet Yuval Mintz
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Jon Mason, Andrew Gallatin

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jon Mason <mason@myri.com>
Cc: Andrew Gallatin <gallatin@myri.com>
---
 drivers/net/ethernet/myricom/myri10ge/myri10ge.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
index 90153fc..70c132d 100644
--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
@@ -3775,7 +3775,7 @@ static void myri10ge_probe_slices(struct myri10ge_priv *mgp)
 
 	mgp->num_slices = 1;
 	msix_cap = pci_find_capability(pdev, PCI_CAP_ID_MSIX);
-	ncpus = num_online_cpus();
+	ncpus = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
 
 	if (myri10ge_max_slices == 1 || msix_cap == 0 ||
 	    (myri10ge_max_slices == -1 && ncpus < 2))
-- 
1.7.9.rc2

* [RFC net-next 11/14] Fix emulex/benet
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (9 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 10/14] Fix myricom/myri10ge Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 22:55   ` Ajit.Khaparde
  2012-06-21  6:42   ` Sathya.Perla
  2012-06-19 15:14 ` [RFC net-next 12/14] Fix broadcom/tg3 Yuval Mintz
                   ` (4 subsequent siblings)
  15 siblings, 2 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem
  Cc: eilong, Yuval Mintz, Sathya Perla, Subbu Seetharaman,
	Ajit Khaparde

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Sathya Perla <sathya.perla@emulex.com>
Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
---
 drivers/net/ethernet/emulex/benet/be_main.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 5a34503..e42597d 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -2153,13 +2153,15 @@ static uint be_num_rss_want(struct be_adapter *adapter)
 static void be_msix_enable(struct be_adapter *adapter)
 {
 #define BE_MIN_MSIX_VECTORS		1
-	int i, status, num_vec, num_roce_vec = 0;
+	int i, status, num_vec, num_roce_vec = 0, ncpu;
+
+	ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
 
 	/* If RSS queues are not used, need a vec for default RX Q */
-	num_vec = min(be_num_rss_want(adapter), num_online_cpus());
+	num_vec = min(be_num_rss_want(adapter), ncpu);
 	if (be_roce_supported(adapter)) {
 		num_roce_vec = min_t(u32, MAX_ROCE_MSIX_VECTORS,
-					(num_online_cpus() + 1));
+				     (u32)(ncpu + 1));
 		num_roce_vec = min(num_roce_vec, MAX_ROCE_EQS);
 		num_vec += num_roce_vec;
 		num_vec = min(num_vec, MAX_MSIX_VECTORS);
-- 
1.7.9.rc2

* [RFC net-next 12/14] Fix broadcom/tg3
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (10 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 11/14] Fix emulex/benet Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 15:14 ` [RFC net-next 13/14] Fix broadcom/bnx2 Yuval Mintz
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Matt Carlson

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Matt Carlson <mcarlson@broadcom.com>
---
 drivers/net/ethernet/broadcom/tg3.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index e47ff8b..d431f17 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -9908,7 +9908,8 @@ static bool tg3_enable_msix(struct tg3 *tp)
 	int i, rc;
 	struct msix_entry msix_ent[tp->irq_max];
 
-	tp->irq_cnt = num_online_cpus();
+	tp->irq_cnt = min_t(unsigned, num_online_cpus(),
+			    DEFAULT_MAX_NUM_RSS_QUEUES);
 	if (tp->irq_cnt > 1) {
 		/* We want as many rx rings enabled as there are cpus.
 		 * In multiqueue MSI-X mode, the first MSI-X vector
@@ -10967,7 +10968,8 @@ static int tg3_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info,
 		if (netif_running(tp->dev))
 			info->data = tp->irq_cnt;
 		else {
-			info->data = num_online_cpus();
+			info->data = min_t(u32, num_online_cpus(),
+					   DEFAULT_MAX_NUM_RSS_QUEUES);
 			if (info->data > TG3_IRQ_MAX_VECS_RSS)
 				info->data = TG3_IRQ_MAX_VECS_RSS;
 		}
-- 
1.7.9.rc2

* [RFC net-next 13/14] Fix broadcom/bnx2
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (11 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 12/14] Fix broadcom/tg3 Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 15:14 ` [RFC net-next 14/14] Fix broadcom/bnx2x Yuval Mintz
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz, Michael Chan

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
index 9b69a62..09e1f76 100644
--- a/drivers/net/ethernet/broadcom/bnx2.c
+++ b/drivers/net/ethernet/broadcom/bnx2.c
@@ -6246,7 +6246,7 @@ bnx2_enable_msix(struct bnx2 *bp, int msix_vecs)
 static int
 bnx2_setup_int_mode(struct bnx2 *bp, int dis_msi)
 {
-	int cpus = num_online_cpus();
+	int cpus = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
 	int msix_vecs;
 
 	if (!bp->num_req_rx_rings)
-- 
1.7.9.rc2

* [RFC net-next 14/14] Fix broadcom/bnx2x
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (12 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 13/14] Fix broadcom/bnx2 Yuval Mintz
@ 2012-06-19 15:14 ` Yuval Mintz
  2012-06-19 16:17 ` [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Eilon Greenstein
  2012-06-20 20:43 ` Ben Hutchings
  15 siblings, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-19 15:14 UTC (permalink / raw)
  To: netdev, davem; +Cc: eilong, Yuval Mintz

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index 7cd99b7..99daadb 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -807,9 +807,10 @@ static inline void bnx2x_disable_msi(struct bnx2x *bp)
 
 static inline int bnx2x_calc_num_queues(struct bnx2x *bp)
 {
-	return  num_queues ?
-		 min_t(int, num_queues, BNX2X_MAX_QUEUES(bp)) :
-		 min_t(int, num_online_cpus(), BNX2X_MAX_QUEUES(bp));
+	int nqs = num_queues ? (int) num_queues :
+		  min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
+
+	return min_t(int, nqs, BNX2X_MAX_QUEUES(bp));
 }
 
 static inline void bnx2x_clear_sge_mask_next_elems(struct bnx2x_fastpath *fp)
-- 
1.7.9.rc2

* Re: [RFC net-next 05/14] Fix intel/ixgbevf
  2012-06-19 15:13 ` [RFC net-next 05/14] Fix intel/ixgbevf Yuval Mintz
@ 2012-06-19 15:39   ` Alexander Duyck
  2012-06-19 16:06     ` Eilon Greenstein
  0 siblings, 1 reply; 39+ messages in thread
From: Alexander Duyck @ 2012-06-19 15:39 UTC (permalink / raw)
  To: Yuval Mintz; +Cc: netdev, davem, eilong, Jeff Kirsher

On 06/19/2012 08:13 AM, Yuval Mintz wrote:
> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
>
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> index f69ec42..3ad46c2 100644
> --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> @@ -2014,7 +2014,7 @@ err_tx_ring_allocation:
>  static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
>  {
>  	int err = 0;
> -	int vector, v_budget;
> +	int vector, v_budget, ncpu;
>  
>  	/*
>  	 * It's easy to be greedy for MSI-X vectors, but it really
> @@ -2022,8 +2022,9 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
>  	 * than CPU's.  So let's be conservative and only ask for
>  	 * (roughly) twice the number of vectors as there are CPU's.
>  	 */
> +	ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
>  	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
> -		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
> +		       ncpu * 2) + NON_Q_VECTORS;
>  
>  	/* A failure in MSI-X entry allocation isn't fatal, but it does
>  	 * mean we disable MSI-X capabilities of the adapter. */
This change is pointless on the ixgbevf driver.  The VF hardware can
support at most 4 RSS queues.  As such num_rx_queues + num_tx_queues
will never exceed 8, so you are essentially adding an unnecessary min(x, 8).

Thanks,

Alex

* Re: [RFC net-next 06/14] Fix intel/igb
  2012-06-19 15:14 ` [RFC net-next 06/14] Fix intel/igb Yuval Mintz
@ 2012-06-19 15:42   ` Alexander Duyck
  2012-06-19 16:07     ` Eilon Greenstein
  0 siblings, 1 reply; 39+ messages in thread
From: Alexander Duyck @ 2012-06-19 15:42 UTC (permalink / raw)
  To: Yuval Mintz; +Cc: netdev, davem, eilong, Jeff Kirsher, Wyborny, Carolyn

On 06/19/2012 08:14 AM, Yuval Mintz wrote:
> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
>
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> ---
>  drivers/net/ethernet/intel/igb/igb_main.c |   14 ++++++++------
>  1 files changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
> index dd3bfe8..8e7ade5 100644
> --- a/drivers/net/ethernet/intel/igb/igb_main.c
> +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> @@ -2380,18 +2380,20 @@ static int __devinit igb_sw_init(struct igb_adapter *adapter)
>  #endif /* CONFIG_PCI_IOV */
>  	switch (hw->mac.type) {
>  	case e1000_i210:
> -		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES_I210,
> -			num_online_cpus());
> +		adapter->rss_queues = IGB_MAX_RX_QUEUES_I210;
>  		break;
>  	case e1000_i211:
> -		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES_I211,
> -			num_online_cpus());
> +		adapter->rss_queues = IGB_MAX_RX_QUEUES_I211;
>  		break;
>  	default:
> -		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES,
> -		num_online_cpus());
> +		adapter->rss_queues = IGB_MAX_RX_QUEUES;
>  		break;
>  	}
> +
> +	adapter->rss_queues = min_t(u32, adapter->rss_queues,
> +				    min_t(u32, num_online_cpus(),
> +					  DEFAULT_MAX_NUM_RSS_QUEUES));
> +
>  	/* i350 cannot do RSS and SR-IOV at the same time */
>  	if (hw->mac.type == e1000_i350 && adapter->vfs_allocated_count)
>  		adapter->rss_queues = 1;
Same issue here as ixgbevf, only we support a max of 8 Rx queues in the
hardware.  So now you are once again adding another unnecessary limit on
something that is already limited to 8 or less.

Thanks,

Alex

* Re: [RFC net-next 07/14] Fix intel/ixgbe
  2012-06-19 15:14 ` [RFC net-next 07/14] Fix intel/ixgbe Yuval Mintz
@ 2012-06-19 15:54   ` Alexander Duyck
  2012-06-19 16:11     ` Eilon Greenstein
  2012-06-20  8:55     ` Eric Dumazet
  0 siblings, 2 replies; 39+ messages in thread
From: Alexander Duyck @ 2012-06-19 15:54 UTC (permalink / raw)
  To: Yuval Mintz; +Cc: netdev, davem, eilong, Jeff Kirsher

On 06/19/2012 08:14 AM, Yuval Mintz wrote:
> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
>
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> index af1a531..21e4513 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> @@ -802,7 +802,8 @@ static int ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
>  	 * The default is to use pairs of vectors.
>  	 */
>  	v_budget = max(adapter->num_rx_queues, adapter->num_tx_queues);
> -	v_budget = min_t(int, v_budget, num_online_cpus());
> +	v_budget = min_t(int, v_budget, min_t(int, num_online_cpus(),
> +					      DEFAULT_MAX_NUM_RSS_QUEUES));
>  	v_budget += NON_Q_VECTORS;
>  
>  	/*
This patch doesn't limit the number of queues.  It is limiting the
number of interrupts.  The two are not directly related as we can
support multiple queues per interrupt.

Also this change assumes we are only using receive side scaling.  We
have other features such as DCB, FCoE, and Flow Director which require
additional queues.

Thanks,

Alex

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 05/14] Fix intel/ixgbevf
  2012-06-19 15:39   ` Alexander Duyck
@ 2012-06-19 16:06     ` Eilon Greenstein
  2012-06-19 18:07       ` Greg Rose
  0 siblings, 1 reply; 39+ messages in thread
From: Eilon Greenstein @ 2012-06-19 16:06 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, netdev, davem, Jeff Kirsher

On Tue, 2012-06-19 at 08:39 -0700, Alexander Duyck wrote:
> On 06/19/2012 08:13 AM, Yuval Mintz wrote:
> > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> >
> > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > ---
> >  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    5 +++--
> >  1 files changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > index f69ec42..3ad46c2 100644
> > --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > @@ -2014,7 +2014,7 @@ err_tx_ring_allocation:
> >  static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
> >  {
> >  	int err = 0;
> > -	int vector, v_budget;
> > +	int vector, v_budget, ncpu;
> >  
> >  	/*
> >  	 * It's easy to be greedy for MSI-X vectors, but it really
> > @@ -2022,8 +2022,9 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
> >  	 * than CPU's.  So let's be conservative and only ask for
> >  	 * (roughly) twice the number of vectors as there are CPU's.
> >  	 */
> > +	ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
> >  	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
> > -		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
> > +		       ncpu * 2) + NON_Q_VECTORS;
> >  
> >  	/* A failure in MSI-X entry allocation isn't fatal, but it does
> >  	 * mean we disable MSI-X capabilities of the adapter. */
> This change is pointless on the ixgbevf driver.  The VF hardware can
> support at most 4 RSS queues.  As such num_rx_queues + num_tx_queues
> will never exceed 8, so you are essentially adding an unnecessary min(x,8).

It is pointless with the current value, but if someone edits the
kernel source code and replaces the 8 with a 2, it becomes
meaningful. The compiler will optimize this part away, and I think that for
completeness it is best to keep this reference, so a future change to the
default number will not be missed.

Eilon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 06/14] Fix intel/igb
  2012-06-19 15:42   ` Alexander Duyck
@ 2012-06-19 16:07     ` Eilon Greenstein
  0 siblings, 0 replies; 39+ messages in thread
From: Eilon Greenstein @ 2012-06-19 16:07 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: Yuval Mintz, netdev, davem, Jeff Kirsher, Wyborny, Carolyn

On Tue, 2012-06-19 at 08:42 -0700, Alexander Duyck wrote:
> On 06/19/2012 08:14 AM, Yuval Mintz wrote:
> > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> >
> > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > ---
> >  drivers/net/ethernet/intel/igb/igb_main.c |   14 ++++++++------
> >  1 files changed, 8 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
> > index dd3bfe8..8e7ade5 100644
> > --- a/drivers/net/ethernet/intel/igb/igb_main.c
> > +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> > @@ -2380,18 +2380,20 @@ static int __devinit igb_sw_init(struct igb_adapter *adapter)
> >  #endif /* CONFIG_PCI_IOV */
> >  	switch (hw->mac.type) {
> >  	case e1000_i210:
> > -		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES_I210,
> > -			num_online_cpus());
> > +		adapter->rss_queues = IGB_MAX_RX_QUEUES_I210;
> >  		break;
> >  	case e1000_i211:
> > -		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES_I211,
> > -			num_online_cpus());
> > +		adapter->rss_queues = IGB_MAX_RX_QUEUES_I211;
> >  		break;
> >  	default:
> > -		adapter->rss_queues = min_t(u32, IGB_MAX_RX_QUEUES,
> > -		num_online_cpus());
> > +		adapter->rss_queues = IGB_MAX_RX_QUEUES;
> >  		break;
> >  	}
> > +
> > +	adapter->rss_queues = min_t(u32, adapter->rss_queues,
> > +				    min_t(u32, num_online_cpus(),
> > +					  DEFAULT_MAX_NUM_RSS_QUEUES));
> > +
> >  	/* i350 cannot do RSS and SR-IOV at the same time */
> >  	if (hw->mac.type == e1000_i350 && adapter->vfs_allocated_count)
> >  		adapter->rss_queues = 1;
> Same issue here as ixgbevf, only we support a max of 8 Rx queues in the
> hardware.  So now you are once again adding another unnecessary limit on
> something that is already limited to 8 or less.
> 

Same issue and same reply :)

It is here to support a change of the DEFAULT_MAX_NUM_RSS_QUEUES macro.

Eilon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 07/14] Fix intel/ixgbe
  2012-06-19 15:54   ` Alexander Duyck
@ 2012-06-19 16:11     ` Eilon Greenstein
  2012-06-20  8:55     ` Eric Dumazet
  1 sibling, 0 replies; 39+ messages in thread
From: Eilon Greenstein @ 2012-06-19 16:11 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, netdev, davem, Jeff Kirsher

On Tue, 2012-06-19 at 08:54 -0700, Alexander Duyck wrote:
> On 06/19/2012 08:14 AM, Yuval Mintz wrote:
> > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> >
> > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > ---
> >  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c |    3 ++-
> >  1 files changed, 2 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > index af1a531..21e4513 100644
> > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > @@ -802,7 +802,8 @@ static int ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
> >  	 * The default is to use pairs of vectors.
> >  	 */
> >  	v_budget = max(adapter->num_rx_queues, adapter->num_tx_queues);
> > -	v_budget = min_t(int, v_budget, num_online_cpus());
> > +	v_budget = min_t(int, v_budget, min_t(int, num_online_cpus(),
> > +					      DEFAULT_MAX_NUM_RSS_QUEUES));
> >  	v_budget += NON_Q_VECTORS;
> >  
> >  	/*
> This patch doesn't limit the number of queues.  It is limiting the
> number of interrupts.  The two are not directly related as we can
> support multiple queues per interrupt.
> 
> Also this change assumes we are only using receive side scaling.  We
> have other features such as DCB, FCoE, and Flow Director which require
> additional queues.

You are right - but DEFAULT_MAX_NUM_RSS_QUEUES is there to limit RSS
and nothing else. It is harder to determine what else should be set to a
lower value, and the two goals were to limit the memory waste in
proportion to the number of CPUs and to have some unification between
the drivers - both goals apply mostly to RSS and not so much to DCB,
FCoE and similar features.

Thanks,
Eilon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 00/14] default maximal number of RSS queues in mq drivers
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (13 preceding siblings ...)
  2012-06-19 15:14 ` [RFC net-next 14/14] Fix broadcom/bnx2x Yuval Mintz
@ 2012-06-19 16:17 ` Eilon Greenstein
  2012-06-20 20:43 ` Ben Hutchings
  15 siblings, 0 replies; 39+ messages in thread
From: Eilon Greenstein @ 2012-06-19 16:17 UTC (permalink / raw)
  To: Yuval Mintz, netdev, davem, Or Gerlitz, Divy Le Ray,
	Jitendra Kalsaria, Ron Mercer, Anirban Chakraborty, Jeff Kirsher,
	Jon Mason, Andrew Gallatin, Jon Mason, Subbu Seetharaman,
	Sathya Perla, Matt Carlson, Ajit Khaparde, Michael Chan

On Tue, 2012-06-19 at 18:13 +0300, Yuval Mintz wrote:
> Different vendors support different number of RSS queues by default. Today,
> there exists an ethtool API through which users can change the number of
> channels their driver supports; This enables us to pursue the goal of using
> a default number of RSS queues in various multi-queue drivers.
> 
> This RFC intends to achieve the above default by upper-limiting the number
> of interrupts multi-queue drivers request (by default, not via the new API) 
> with correlation to the number of cpus on the machine.
> 
> After examining multi-queue drivers that call alloc_etherdev_mq[s],
> it became evident that most drivers allocate their devices using hard-coded
> values. Changing those defaults directly will most likely cause a regression. 
> 
> However, most multi-queue drivers look at the number of online cpus when
> requesting interrupts. We assume that the number of interrupts the
> driver manages to request is propagated across the driver, and the number
> of RSS queues it configures is based upon it. 
> 
> This RFC modifies said logic - if the number of cpus is large enough, use
> a smaller default value instead. This serves 2 main purposes: 
>  1. A step toward unity in the number of RSS queues of various drivers.
>  2. It prevents wasteful requests for interrupts on machines with many cpus.
> 
> Notice no testing was made on this RFC (other than on the bnx2x driver)
> except for compilation test.
> 
> Drivers identified as multi-queue, handled in this RFC:
> 
> * mellanox mlx4
> * neterion vxge
> * qlogic   qlge
> * intel    igb, igbxe, igbxevf
> * chelsio  cxgb3, cxgb4
> * myricom  myri10ge
> * emulex   benet
> * broadcom tg3, bnx2, bnx2x
> 
> Driver identified as multi-queue, no reference to number of online cpus found,
> and thus unhandled in this RFC:
> 
> * neterion  s2io
> * marvell   mv643xx
> * freescale gianfar
> * ibm       ehea
> * ti        cpmac
> * sun       niu
> * sfc       efx
> * chelsio   cxgb4vf
> 
> Cheers,
> Yuval Mintz
> 
> Cc: Divy Le Ray <divy@chelsio.com>
> Cc: Or Gerlitz <ogerlitz@mellanox.com>
> Cc: Jon Mason <jdmason@kudzu.us>
> Cc: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
> Cc: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
> Cc: Ron Mercer <ron.mercer@qlogic.com>
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Cc: Jon Mason <mason@myri.com>
> Cc: Andrew Gallatin <gallatin@myri.com>
> Cc: Sathya Perla <sathya.perla@emulex.com>
> Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
> Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
> Cc: Matt Carlson <mcarlson@broadcom.com>
> Cc: Michael Chan <mchan@broadcom.com>

Obviously we need to make the subject line more self-explanatory and
start with the component name followed by a colon. We will fix this in the
next version of the patch, but please comment on the content.

Thanks,
Eilon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 01/14] Add Default
  2012-06-19 15:13 ` [RFC net-next 01/14] Add Default Yuval Mintz
@ 2012-06-19 16:37   ` Alexander Duyck
  2012-06-19 17:41     ` Eilon Greenstein
  2012-06-19 21:22     ` David Miller
  0 siblings, 2 replies; 39+ messages in thread
From: Alexander Duyck @ 2012-06-19 16:37 UTC (permalink / raw)
  To: Yuval Mintz; +Cc: netdev, davem, eilong

On 06/19/2012 08:13 AM, Yuval Mintz wrote:
> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> ---
>  include/linux/etherdevice.h |    5 ++++-
>  1 files changed, 4 insertions(+), 1 deletions(-)
>
> diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
> index 3d406e0..bb1ecaf 100644
> --- a/include/linux/etherdevice.h
> +++ b/include/linux/etherdevice.h
> @@ -44,7 +44,10 @@ extern int eth_mac_addr(struct net_device *dev, void *p);
>  extern int eth_change_mtu(struct net_device *dev, int new_mtu);
>  extern int eth_validate_addr(struct net_device *dev);
>  
> -
> +/* The maximal number of RSS queues a driver should have unless configured
> + * so explicitly.
> + */
> +#define DEFAULT_MAX_NUM_RSS_QUEUES (8)
>  
>  extern struct net_device *alloc_etherdev_mqs(int sizeof_priv, unsigned int txqs,
>  					    unsigned int rxqs);
I'm not a big fan of just having this as a fixed define in the code.  It
seems like it would make much more sense to have this in the Kconfig
somewhere as a range value if you plan on making this changeable in the
future.
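For illustration only, a Kconfig entry along the lines Alex suggests might look like the following (hypothetical symbol name, range, and help text; this is not part of the posted series):

```kconfig
config DEFAULT_MAX_NUM_RSS_QUEUES
	int "Default maximal number of RSS queues"
	range 1 64
	default 8
	help
	  Upper bound on the number of RSS queues a multi-queue driver
	  configures by default.  Can still be overridden at runtime via
	  the ethtool set_channels interface.
```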

Thanks,

Alex

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 01/14] Add Default
  2012-06-19 16:37   ` Alexander Duyck
@ 2012-06-19 17:41     ` Eilon Greenstein
  2012-06-19 19:01       ` Alexander Duyck
  2012-06-19 21:22     ` David Miller
  1 sibling, 1 reply; 39+ messages in thread
From: Eilon Greenstein @ 2012-06-19 17:41 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, netdev, davem

On Tue, 2012-06-19 at 09:37 -0700, Alexander Duyck wrote:
> On 06/19/2012 08:13 AM, Yuval Mintz wrote:
> > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> > ---
> >  include/linux/etherdevice.h |    5 ++++-
> >  1 files changed, 4 insertions(+), 1 deletions(-)
> >
> > diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
> > index 3d406e0..bb1ecaf 100644
> > --- a/include/linux/etherdevice.h
> > +++ b/include/linux/etherdevice.h
> > @@ -44,7 +44,10 @@ extern int eth_mac_addr(struct net_device *dev, void *p);
> >  extern int eth_change_mtu(struct net_device *dev, int new_mtu);
> >  extern int eth_validate_addr(struct net_device *dev);
> >  
> > -
> > +/* The maximal number of RSS queues a driver should have unless configured
> > + * so explicitly.
> > + */
> > +#define DEFAULT_MAX_NUM_RSS_QUEUES (8)
> >  
> >  extern struct net_device *alloc_etherdev_mqs(int sizeof_priv, unsigned int txqs,
> >  					    unsigned int rxqs);
> I'm not a big fan of just having this as a fixed define in the code.  It
> seems like it would make much more sense to have this in the Kconfig
> somewhere as a range value if you plan on making this changeable in the
> future.

My original suggestion was a kernel command line parameter, but Dave was
less than enthusiastic. If you follow the original thread, you can
probably understand why I decided to adopt Dave's constant approach
without suggesting Kconfig:
http://marc.info/?l=linux-netdev&m=133992386010982&w=2

However, 8 is not a holy number - I'm open to suggestions.

Thanks,
Eilon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 05/14] Fix intel/ixgbevf
  2012-06-19 16:06     ` Eilon Greenstein
@ 2012-06-19 18:07       ` Greg Rose
  2012-06-19 18:24         ` Eilon Greenstein
  2012-06-25 11:23         ` Yuval Mintz
  0 siblings, 2 replies; 39+ messages in thread
From: Greg Rose @ 2012-06-19 18:07 UTC (permalink / raw)
  To: eilong; +Cc: Alexander Duyck, Yuval Mintz, netdev, davem, Jeff Kirsher

On Tue, 19 Jun 2012 19:06:53 +0300
Eilon Greenstein <eilong@broadcom.com> wrote:

> On Tue, 2012-06-19 at 08:39 -0700, Alexander Duyck wrote:
> > On 06/19/2012 08:13 AM, Yuval Mintz wrote:
> > > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> > >
> > > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > > ---
> > >  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    5 +++--
> > >  1 files changed, 3 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > > b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index
> > > f69ec42..3ad46c2 100644 ---
> > > a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++
> > > b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -2014,7
> > > +2014,7 @@ err_tx_ring_allocation: static int
> > > ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
> > > { int err = 0;
> > > -	int vector, v_budget;
> > > +	int vector, v_budget, ncpu;
> > >  
> > >  	/*
> > >  	 * It's easy to be greedy for MSI-X vectors, but it
> > > really @@ -2022,8 +2022,9 @@ static int
> > > ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
> > >  	 * than CPU's.  So let's be conservative and only ask for
> > >  	 * (roughly) twice the number of vectors as there are
> > > CPU's. */
> > > +	ncpu = min_t(int, num_online_cpus(),
> > > DEFAULT_MAX_NUM_RSS_QUEUES); v_budget =
> > > min(adapter->num_rx_queues + adapter->num_tx_queues,
> > > -		       (int)(num_online_cpus() * 2)) +
> > > NON_Q_VECTORS;
> > > +		       ncpu * 2) + NON_Q_VECTORS;
> > >  
> > >  	/* A failure in MSI-X entry allocation isn't fatal, but
> > > it does
> > >  	 * mean we disable MSI-X capabilities of the adapter. */
> > This change is pointless on the ixgbevf driver.  The VF hardware can
> > support at most 4 RSS queues.  As such num_rx_queues + num_tx_queues
> > will never exceed 8, so you are essentially adding an unnecessary
> > min(x,8).
> 
> It is pointless with the current value, but if someone will edit the
> kernel source code and replace the 8 with a 2, it will become
> meaningful. The compiler will optimize this part, and I think that for
> completeness, it is best to keep this reference so a future default
> number change will not be missed.
> 
> Eilon

I don't feel there is any real point to making this change to the
ixgbevf driver.  82599 virtual functions have 3 MSI-X vectors, one of
which is for the mailbox and the other two can be shared with tx/rx
queue pairs or assigned separately to tx or rx queues.  So this code is
pointless no matter what value is set for DEFAULT_MAX_NUM_RSS_QUEUES.
Perhaps the patches to the other drivers in your RFC will have some
effect but this one looks like a no-op for the ixgbevf driver so there
is no reason for it.

- Greg


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 05/14] Fix intel/ixgbevf
  2012-06-19 18:07       ` Greg Rose
@ 2012-06-19 18:24         ` Eilon Greenstein
  2012-06-25 11:23         ` Yuval Mintz
  1 sibling, 0 replies; 39+ messages in thread
From: Eilon Greenstein @ 2012-06-19 18:24 UTC (permalink / raw)
  To: Greg Rose; +Cc: Alexander Duyck, Yuval Mintz, netdev, davem, Jeff Kirsher

On Tue, 2012-06-19 at 11:07 -0700, Greg Rose wrote:
> On Tue, 19 Jun 2012 19:06:53 +0300
> Eilon Greenstein <eilong@broadcom.com> wrote:
> 
> > On Tue, 2012-06-19 at 08:39 -0700, Alexander Duyck wrote:
> > > On 06/19/2012 08:13 AM, Yuval Mintz wrote:
> > > > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > > > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> > > >
> > > > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > > > ---
> > > >  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    5 +++--
> > > >  1 files changed, 3 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > > > b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index
> > > > f69ec42..3ad46c2 100644 ---
> > > > a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++
> > > > b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -2014,7
> > > > +2014,7 @@ err_tx_ring_allocation: static int
> > > > ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
> > > > { int err = 0;
> > > > -	int vector, v_budget;
> > > > +	int vector, v_budget, ncpu;
> > > >  
> > > >  	/*
> > > >  	 * It's easy to be greedy for MSI-X vectors, but it
> > > > really @@ -2022,8 +2022,9 @@ static int
> > > > ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
> > > >  	 * than CPU's.  So let's be conservative and only ask for
> > > >  	 * (roughly) twice the number of vectors as there are
> > > > CPU's. */
> > > > +	ncpu = min_t(int, num_online_cpus(),
> > > > DEFAULT_MAX_NUM_RSS_QUEUES); v_budget =
> > > > min(adapter->num_rx_queues + adapter->num_tx_queues,
> > > > -		       (int)(num_online_cpus() * 2)) +
> > > > NON_Q_VECTORS;
> > > > +		       ncpu * 2) + NON_Q_VECTORS;
> > > >  
> > > >  	/* A failure in MSI-X entry allocation isn't fatal, but
> > > > it does
> > > >  	 * mean we disable MSI-X capabilities of the adapter. */
> > > This change is pointless on the ixgbevf driver.  The VF hardware can
> > > support at most 4 RSS queues.  As such num_rx_queues + num_tx_queues
> > > will never exceed 8, so you are essentially adding an unnecessary
> > > min(x,8).
> > 
> > It is pointless with the current value, but if someone will edit the
> > kernel source code and replace the 8 with a 2, it will become
> > meaningful. The compiler will optimize this part, and I think that for
> > completeness, it is best to keep this reference so a future default
> > number change will not be missed.
> > 
> > Eilon
> 
> I don't feel there is any real point to making this change to the
> ixgbevf driver.  82599 virtual functions have 3 MSI-X vectors, one of
> which is for the mailbox and the other two can be shared with tx/rx
> queue pairs or assigned separately to tx or rx queues.  So this code is
> pointless no matter what value is set for DEFAULT_MAX_NUM_RSS_QUEUES.
> Perhaps the patches to the other drivers in your RFC will have some
> effect but this one looks like a no-op for the ixgbevf driver so there
> is no reason for it.


OK - I guess we can just add a comment at this location, with that
explanation, saying that DEFAULT_MAX_NUM_RSS_QUEUES is meaningless for
ixgbevf, so it will not look as if it was simply overlooked.

Thanks,
Eilon

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 01/14] Add Default
  2012-06-19 17:41     ` Eilon Greenstein
@ 2012-06-19 19:01       ` Alexander Duyck
  2012-06-19 19:53         ` Eilon Greenstein
  2012-06-19 21:26         ` David Miller
  0 siblings, 2 replies; 39+ messages in thread
From: Alexander Duyck @ 2012-06-19 19:01 UTC (permalink / raw)
  To: eilong; +Cc: Yuval Mintz, netdev, davem

On 06/19/2012 10:41 AM, Eilon Greenstein wrote:
> On Tue, 2012-06-19 at 09:37 -0700, Alexander Duyck wrote:
>> On 06/19/2012 08:13 AM, Yuval Mintz wrote:
>>> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
>>> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
>>> ---
>>>  include/linux/etherdevice.h |    5 ++++-
>>>  1 files changed, 4 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
>>> index 3d406e0..bb1ecaf 100644
>>> --- a/include/linux/etherdevice.h
>>> +++ b/include/linux/etherdevice.h
>>> @@ -44,7 +44,10 @@ extern int eth_mac_addr(struct net_device *dev, void *p);
>>>  extern int eth_change_mtu(struct net_device *dev, int new_mtu);
>>>  extern int eth_validate_addr(struct net_device *dev);
>>>  
>>> -
>>> +/* The maximal number of RSS queues a driver should have unless configured
>>> + * so explicitly.
>>> + */
>>> +#define DEFAULT_MAX_NUM_RSS_QUEUES (8)
>>>  
>>>  extern struct net_device *alloc_etherdev_mqs(int sizeof_priv, unsigned int txqs,
>>>  					    unsigned int rxqs);
>> I'm not a big fan of just having this as a fixed define in the code.  It
>> seems like it would make much more sense to have this in the Kconfig
>> somewhere as a range value if you plan on making this changeable in the
>> future.
> My original suggestion was a kernel command line parameter, but Dave was
> less than enthusiastic. If you will follow the original thread, you can
> probably understand why I decided to adopt Dave's constant approach
> without suggesting Kconfig:
> http://marc.info/?l=linux-netdev&m=133992386010982&w=2
There is a huge difference between a kernel parameter and a kconfig
value.  The main idea behind the kconfig value is that you are going to
have different preferences depending on architecture and such, so it
would make much more sense to have the default as a config option.
> However, 8 is not a holy number - I'm open for suggestions.
>
> Thanks,
> Eilon
I'm not sure why you couldn't just limit it to 16.  From what I can tell
that is the largest number that gets used for RSS queues on almost all
the different hardware out there.

As far as the rest of the patches for the Intel drivers go you might be
better off if you understood how we allocate queues on the ixgbe/ixgbevf
drivers.  Usually we have the number of queues determined before we set
the number of vectors so your patches that limited the number of vectors
aren't going to have the effect you desire.  So for example RSS
configuration is currently handled in either ixgbe_set_rss_queues or
ixgbe_set_dcb_queues depending on the mode the driver is in.  You would
be much better off looking there for how to limit the RSS queueing on
the ixgbe adapter.

Thanks,

Alex

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 01/14] Add Default
  2012-06-19 19:01       ` Alexander Duyck
@ 2012-06-19 19:53         ` Eilon Greenstein
  2012-06-19 21:26         ` David Miller
  1 sibling, 0 replies; 39+ messages in thread
From: Eilon Greenstein @ 2012-06-19 19:53 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, netdev, davem

On Tue, 2012-06-19 at 12:01 -0700, Alexander Duyck wrote:
> On 06/19/2012 10:41 AM, Eilon Greenstein wrote:
> > On Tue, 2012-06-19 at 09:37 -0700, Alexander Duyck wrote:
> >> On 06/19/2012 08:13 AM, Yuval Mintz wrote:
> >>> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> >>> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> >>> ---
> >>>  include/linux/etherdevice.h |    5 ++++-
> >>>  1 files changed, 4 insertions(+), 1 deletions(-)
> >>>
> >>> diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
> >>> index 3d406e0..bb1ecaf 100644
> >>> --- a/include/linux/etherdevice.h
> >>> +++ b/include/linux/etherdevice.h
> >>> @@ -44,7 +44,10 @@ extern int eth_mac_addr(struct net_device *dev, void *p);
> >>>  extern int eth_change_mtu(struct net_device *dev, int new_mtu);
> >>>  extern int eth_validate_addr(struct net_device *dev);
> >>>  
> >>> -
> >>> +/* The maximal number of RSS queues a driver should have unless configured
> >>> + * so explicitly.
> >>> + */
> >>> +#define DEFAULT_MAX_NUM_RSS_QUEUES (8)
> >>>  
> >>>  extern struct net_device *alloc_etherdev_mqs(int sizeof_priv, unsigned int txqs,
> >>>  					    unsigned int rxqs);
> >> I'm not a big fan of just having this as a fixed define in the code.  It
> >> seems like it would make much more sense to have this in the Kconfig
> >> somewhere as a range value if you plan on making this changeable in the
> >> future.
> > My original suggestion was a kernel command line parameter, but Dave was
> > less than enthusiastic. If you will follow the original thread, you can
> > probably understand why I decided to adopt Dave's constant approach
> > without suggesting Kconfig:
> > http://marc.info/?l=linux-netdev&m=133992386010982&w=2
> There is a huge difference between a kernel parameter an a kconfig
> value.  The main idea behind the kconfig value is that you are going to
> have different preferences depending on architectures and such so it
> would make much more sense to have the default as a config option.

Yes, I'm aware of that. Coming from the angle of CPU count and memory
constraints, the kernel parameter came to mind first. After receiving the
reply about simply using a good default, I considered the kconfig
alternative but decided not to make further suggestions and just go with
a good default.

> I'm not sure why you couldn't just limit it to 16.  From what I can tell
> that is the largest number that gets used for RSS queues on almost all
> the different hardware out there.

cxgb4 32, myri10ge 32, efx 32, niu 24.
The point is that a customer requested that I support more queues, but
simply enabling that many more MSI-X vectors in the FW would cause the
driver to consume too much memory, which is probably not desired by most
users. The set_channels API is a good solution for having a default value
that differs from the maximal value, and that brings us to where we are
now - finding a default value for all multi-queue drivers.


> As far as the rest of the patches for the Intel drivers go you might be
> better off if you understood how we allocate queues on the ixgbe/ixgbevf
> drivers.  Usually we have the number of queues determined before we set
> the number of vectors so your patches that limited the number of vectors
> aren't going to have the effect you desire.  So for example RSS
> configuration is currently handled in either ixgbe_set_rss_queues or
> ixgbe_set_dcb_queues depending on the mode the driver is in.  You would
> be much better off looking there for how to limit the RSS queueing on
> the ixgbe adapter.

OK, we will move the logic to those functions.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 01/14] Add Default
  2012-06-19 16:37   ` Alexander Duyck
  2012-06-19 17:41     ` Eilon Greenstein
@ 2012-06-19 21:22     ` David Miller
  1 sibling, 0 replies; 39+ messages in thread
From: David Miller @ 2012-06-19 21:22 UTC (permalink / raw)
  To: alexander.h.duyck; +Cc: yuvalmin, netdev, eilong

From: Alexander Duyck <alexander.h.duyck@intel.com>
Date: Tue, 19 Jun 2012 09:37:18 -0700

> I'm not a big fan of just having this as a fixed define in the code.  It
> seems like it would make much more sense to have this in the Kconfig
> somewhere as a range value if you plan on making this changeable in the
> future.

Please not Kconfig :-/

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 01/14] Add Default
  2012-06-19 19:01       ` Alexander Duyck
  2012-06-19 19:53         ` Eilon Greenstein
@ 2012-06-19 21:26         ` David Miller
  1 sibling, 0 replies; 39+ messages in thread
From: David Miller @ 2012-06-19 21:26 UTC (permalink / raw)
  To: alexander.h.duyck; +Cc: eilong, yuvalmin, netdev

From: Alexander Duyck <alexander.h.duyck@intel.com>
Date: Tue, 19 Jun 2012 12:01:54 -0700

> There is a huge difference between a kernel parameter and a kconfig
> value.  The main idea behind the kconfig value is that you are going to
> have different preferences depending on architectures and such so it
> would make much more sense to have the default as a config option.

I don't think the issue of how many queues to use in a network driver
when you have 1024 cpus is architecture specific.

Please drop this idea, thanks.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* RE: [RFC net-next 11/14] Fix emulex/benet
  2012-06-19 15:14 ` [RFC net-next 11/14] Fix emulex/benet Yuval Mintz
@ 2012-06-19 22:55   ` Ajit.Khaparde
  2012-06-21  6:42   ` Sathya.Perla
  1 sibling, 0 replies; 39+ messages in thread
From: Ajit.Khaparde @ 2012-06-19 22:55 UTC (permalink / raw)
  To: yuvalmin, netdev, davem; +Cc: eilong, Sathya.Perla, subbu.seetharaman

> From: Yuval Mintz [yuvalmin@broadcom.com]
> Sent: Tuesday, June 19, 2012 10:14 AM
> To: netdev@vger.kernel.org; davem@davemloft.net
> Cc: eilong@broadcom.com; Yuval Mintz; Perla, Sathya; Seetharaman, Subramanian; Khaparde, Ajit
> Subject: [RFC net-next 11/14] Fix emulex/benet

> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

> Cc: Sathya Perla <sathya.perla@emulex.com>
> Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
> Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
> ---
>  drivers/net/ethernet/emulex/benet/be_main.c |    8 +++++---
>  1 files changed, 5 insertions(+), 3 deletions(-)


We will get back to you on this later in the day.

Thanks
-Ajit

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 07/14] Fix intel/ixgbe
  2012-06-19 15:54   ` Alexander Duyck
  2012-06-19 16:11     ` Eilon Greenstein
@ 2012-06-20  8:55     ` Eric Dumazet
  2012-06-20 15:30       ` John Fastabend
  1 sibling, 1 reply; 39+ messages in thread
From: Eric Dumazet @ 2012-06-20  8:55 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, netdev, davem, eilong, Jeff Kirsher

On Tue, 2012-06-19 at 08:54 -0700, Alexander Duyck wrote:

> This patch doesn't limit the number of queues.  It is limiting the
> number of interrupts.  The two are not directly related as we can
> support multiple queues per interrupt.
> 
> Also this change assumes we are only using receive side scaling.  We
> have other features such as DCB, FCoE, and Flow Director which require
> additional queues.
> 

Yet, it would be good if ixgbe didn't allocate 36 queues on a 4-cpu
machine.

"tc -s class show dev eth0" output is full of unused classes.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 07/14] Fix intel/ixgbe
  2012-06-20  8:55     ` Eric Dumazet
@ 2012-06-20 15:30       ` John Fastabend
  0 siblings, 0 replies; 39+ messages in thread
From: John Fastabend @ 2012-06-20 15:30 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Alexander Duyck, Yuval Mintz, netdev, davem, eilong, Jeff Kirsher

On 6/20/2012 1:55 AM, Eric Dumazet wrote:
> On Tue, 2012-06-19 at 08:54 -0700, Alexander Duyck wrote:
>
>> This patch doesn't limit the number of queues.  It is limiting the
>> number of interrupts.  The two are not directly related as we can
>> support multiple queues per interrupt.
>>
>> Also this change assumes we are only using receive side scaling.  We
>> have other features such as DCB, FCoE, and Flow Director which require
>> additional queues.
>>
>
> Yet, it would be good if ixgbe doesnt allocate 36 queues on a 4 cpu
> machine.
>
> "tc -s class show dev eth0" output is full of not used classes.
>
>
>

We do this for the DCB/FCoE/RSS/Flow Director case where we want to
use multiple queues per traffic class (802.1Qaz). As it is now we
have to set the max queues at alloc_etherdev_mq() time so we use a
max of
	(num_cpu * max traffic classes) + num_cpu

The last num_cpu is in error and I have a patch in JeffK's tree to
remove this. In many cases it seems excessive but sometimes it is
helpful.

.John

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 00/14] default maximal number of RSS queues in mq drivers
  2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (14 preceding siblings ...)
  2012-06-19 16:17 ` [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Eilon Greenstein
@ 2012-06-20 20:43 ` Ben Hutchings
  2012-06-20 20:48   ` Ben Hutchings
  15 siblings, 1 reply; 39+ messages in thread
From: Ben Hutchings @ 2012-06-20 20:43 UTC (permalink / raw)
  To: Yuval Mintz
  Cc: netdev, davem, eilong, Divy Le Ray, Or Gerlitz, Jon Mason,
	Anirban Chakraborty, Jitendra Kalsaria, Ron Mercer, Jeff Kirsher,
	Jon Mason, Andrew Gallatin, Sathya Perla, Subbu Seetharaman,
	Ajit Khaparde, Matt Carlson, Michael Chan

On Tue, 2012-06-19 at 18:13 +0300, Yuval Mintz wrote:
> Different vendors support different number of RSS queues by default. Today,
> there exists an ethtool API through which users can change the number of
> channels their driver supports; This enables us to pursue the goal of using
> a default number of RSS queues in various multi-queue drivers.
> 
> This RFC intends to achieve the above default by upper-limiting the number
> of interrupts multi-queue drivers request (by default, not via the new API)
> in correlation to the number of cpus on the machine.
> 
> After examining multi-queue drivers that call alloc_etherdev_mq[s],
> it became evident that most drivers allocate their devices using hard-coded
> values. Changing those defaults directly will most likely cause a regression. 
> 
> However, most multi-queue drivers look at the number of online cpus when
> requesting interrupts. We assume that the number of interrupts the
> driver manages to request is propagated across the driver, and that the
> number of RSS queues it configures is based upon it.
> 
> This RFC modifies said logic - if the number of cpus is large enough, use
> a smaller default value instead. This serves 2 main purposes:
>  1. A step toward uniformity in the number of RSS queues of various drivers.
>  2. It prevents wasteful requests for interrupts on machines with many cpus.
[...]
> Driver identified as multi-queue, no reference to number of online cpus found,
> and thus unhandled in this RFC:
[...]
> * sfc       efx
[...]

In sfc we currently look at the CPU topology to count cores instead of
threads.  The result is the same unless the system has hyperthreading
(or other SMT) enabled.

I've seen many diagnostic reports from customer support tickets where
there were 32 queue-sets and MSI-X vectors in use (the maximum currently
supported by the driver), but very few had a problem with that.

I would be interested in a scheme to use fewer queues for RSS but more
for flow steering (accelerated RFS, XPS and ethtool NFC).  We had some
discussion of this at last year's netconf but sadly I've not yet found
time to work on it.

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 00/14] default maximal number of RSS queues in mq drivers
  2012-06-20 20:43 ` Ben Hutchings
@ 2012-06-20 20:48   ` Ben Hutchings
  2012-06-21  5:15     ` Yuval Mintz
  0 siblings, 1 reply; 39+ messages in thread
From: Ben Hutchings @ 2012-06-20 20:48 UTC (permalink / raw)
  To: Yuval Mintz
  Cc: netdev, davem, eilong, Divy Le Ray, Or Gerlitz, Jon Mason,
	Anirban Chakraborty, Jitendra Kalsaria, Ron Mercer, Jeff Kirsher,
	Jon Mason, Andrew Gallatin, Sathya Perla, Subbu Seetharaman,
	Ajit Khaparde, Matt Carlson, Michael Chan

Also, I would recommend encapsulating the calculation of the default number
of RSS queues in a function, rather than repeating it in every driver.
That will make it easier to replace with something more sophisticated
and configurable later on.

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 00/14] default maximal number of RSS queues in mq drivers
  2012-06-20 20:48   ` Ben Hutchings
@ 2012-06-21  5:15     ` Yuval Mintz
  2012-06-21  5:49       ` David Miller
  0 siblings, 1 reply; 39+ messages in thread
From: Yuval Mintz @ 2012-06-21  5:15 UTC (permalink / raw)
  To: Ben Hutchings
  Cc: netdev, davem, eilong, Divy Le Ray, Or Gerlitz, Jon Mason,
	Anirban Chakraborty, Jitendra Kalsaria, Ron Mercer, Jeff Kirsher,
	Jon Mason, Andrew Gallatin, Sathya Perla, Subbu Seetharaman,
	Ajit Khaparde, Matt Carlson, Michael Chan

On 06/20/2012 11:48 PM, Ben Hutchings wrote:

> Also, I would recommend encapsulating the calculation of default number
> of RSS queues in a function, rather than repeating it in every driver.
> That will make it easier to replace with something more sophisticated
> and configurable later on.
> 
> Ben.
> 

Do you also have a notion of where the correct place for such a
function would be?

Thanks,
Yuval

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 00/14] default maximal number of RSS queues in mq drivers
  2012-06-21  5:15     ` Yuval Mintz
@ 2012-06-21  5:49       ` David Miller
  0 siblings, 0 replies; 39+ messages in thread
From: David Miller @ 2012-06-21  5:49 UTC (permalink / raw)
  To: yuvalmin
  Cc: bhutchings, netdev, eilong, divy, ogerlitz, jdmason,
	anirban.chakraborty, jitendra.kalsaria, ron.mercer,
	jeffrey.t.kirsher, mason, gallatin, sathya.perla,
	subbu.seetharaman, ajit.khaparde, mcarlson, mchan

From: "Yuval Mintz" <yuvalmin@broadcom.com>
Date: Thu, 21 Jun 2012 08:15:36 +0300

> On 06/20/2012 11:48 PM, Ben Hutchings wrote:
> 
>> Also, I would recommend encapsulating the calculation of default number
>> of RSS queues in a function, rather than repeating it in every driver.
>> That will make it easier to replace with something more sophisticated
>> and configurable later on.
>> 
>> Ben.
>> 
> 
> Do you also have a notion of where the correct place for such a
> function would be?

I would put the extern declaration into linux/netdevice.h and the
implementation in net/core/dev.c.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* RE: [RFC net-next 11/14] Fix emulex/benet
  2012-06-19 15:14 ` [RFC net-next 11/14] Fix emulex/benet Yuval Mintz
  2012-06-19 22:55   ` Ajit.Khaparde
@ 2012-06-21  6:42   ` Sathya.Perla
  1 sibling, 0 replies; 39+ messages in thread
From: Sathya.Perla @ 2012-06-21  6:42 UTC (permalink / raw)
  To: yuvalmin, netdev, davem; +Cc: eilong

Yuval, for be2net, the best place to cap the number of queues to a global default
value would be be_num_rss_want(). The change would look like:

diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/
index fa2a01e..5265b42 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -2186,12 +2186,15 @@ static void be_msix_disable(struct be_adapter *adapter)
 
 static uint be_num_rss_want(struct be_adapter *adapter)
 {
+       u32 num = 0;
+
        if ((adapter->function_caps & BE_FUNCTION_CAPS_RSS) &&
             !sriov_want(adapter) && be_physfn(adapter) &&
-            !be_is_mc(adapter))
-               return (adapter->be3_native) ? BE3_MAX_RSS_QS : BE2_MAX_RSS_QS;
-       else
-               return 0;
+            !be_is_mc(adapter)) {
+               num = (adapter->be3_native) ? BE3_MAX_RSS_QS : BE2_MAX_RSS_QS;
+               num = min_t(u32, num, DEFAULT_MAX_NUM_RSS_QUEUES);
+       }
+       return num;
 }
 
 static void be_msix_enable(struct be_adapter *adapter)

thanks,
-Sathya

________________________________________
From: Yuval Mintz [yuvalmin@broadcom.com]

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Sathya Perla <sathya.perla@emulex.com>
Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
---
 drivers/net/ethernet/emulex/benet/be_main.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 5a34503..e42597d 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -2153,13 +2153,15 @@ static uint be_num_rss_want(struct be_adapter *adapter)
 static void be_msix_enable(struct be_adapter *adapter)
 {
 #define BE_MIN_MSIX_VECTORS            1
-       int i, status, num_vec, num_roce_vec = 0;
+       int i, status, num_vec, num_roce_vec = 0, ncpu;
+
+       ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);

        /* If RSS queues are not used, need a vec for default RX Q */
-       num_vec = min(be_num_rss_want(adapter), num_online_cpus());
+       num_vec = min(be_num_rss_want(adapter), ncpu);
        if (be_roce_supported(adapter)) {
                num_roce_vec = min_t(u32, MAX_ROCE_MSIX_VECTORS,
-                                       (num_online_cpus() + 1));
+                                    (u32)(ncpu + 1));
                num_roce_vec = min(num_roce_vec, MAX_ROCE_EQS);
                num_vec += num_roce_vec;
                num_vec = min(num_vec, MAX_MSIX_VECTORS);
--
1.7.9.rc2

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [RFC net-next 05/14] Fix intel/ixgbevf
  2012-06-19 18:07       ` Greg Rose
  2012-06-19 18:24         ` Eilon Greenstein
@ 2012-06-25 11:23         ` Yuval Mintz
  1 sibling, 0 replies; 39+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:23 UTC (permalink / raw)
  To: Greg Rose; +Cc: eilong, Alexander Duyck, netdev, davem, Jeff Kirsher

>>>>  	 * It's easy to be greedy for MSI-X vectors, but it really
>>>> @@ -2022,8 +2022,9 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
>>>>  	 * than CPU's.  So let's be conservative and only ask for
>>>>  	 * (roughly) twice the number of vectors as there are CPU's. */
>>>> +	ncpu = min_t(int, num_online_cpus(), DEFAULT_MAX_NUM_RSS_QUEUES);
>>>>  	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
>>>> -		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
>>>> +		       ncpu * 2) + NON_Q_VECTORS;
>>>> 
>>>>  	/* A failure in MSI-X entry allocation isn't fatal, but it does
>>>>  	 * mean we disable MSI-X capabilities of the adapter. */
>>> This change is pointless on the ixgbevf driver.  The VF hardware can
>>> support at most 4 RSS queues.  As such num_rx_queues + num_tx_queues
>>> will never exceed 8, so you are essentially adding an unnecessary
>>> min(x,8).
>>
>> It is pointless with the current value, but if someone edits the
>> kernel source code and replaces the 8 with a 2, it will become
>> meaningful. The compiler will optimize this part, and I think that for
>> completeness, it is best to keep this reference so a future change to
>> the default number will not be missed.
>>
>> Eilon
> 
> I don't feel there is any real point to making this change to the
> ixgbevf driver.  82599 virtual functions have 3 MSI-X vectors, one of
> which is for the mailbox and the other two can be shared with tx/rx
> queue pairs or assigned separately to tx or rx queues.  So this code is
> pointless no matter what value is set for DEFAULT_MAX_NUM_RSS_QUEUES.
> Perhaps the patches to the other drivers in your RFC will have some
> effect but this one looks like a no-op for the ixgbevf driver so there
> is no reason for it.
> 
> - Greg
> 

Hi Greg,

Since we're changing the RFC to use a new wrapper function which should
replace num_online_cpus() (for this purpose), the next RFC version will still
change this driver (for uniformity, if nothing else).

Of course, if you still have reservations about this change, please send them.

Thanks,
Yuval

^ permalink raw reply	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2012-06-25 11:25 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-06-19 15:13 [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
2012-06-19 15:13 ` [RFC net-next 01/14] Add Default Yuval Mintz
2012-06-19 16:37   ` Alexander Duyck
2012-06-19 17:41     ` Eilon Greenstein
2012-06-19 19:01       ` Alexander Duyck
2012-06-19 19:53         ` Eilon Greenstein
2012-06-19 21:26         ` David Miller
2012-06-19 21:22     ` David Miller
2012-06-19 15:13 ` [RFC net-next 02/14] Fix mellanox/mlx4 Yuval Mintz
2012-06-19 15:13 ` [RFC net-next 03/14] fix neterion/vxge Yuval Mintz
2012-06-19 15:13 ` [RFC net-next 04/14] Fix qlogic/qlge Yuval Mintz
2012-06-19 15:13 ` [RFC net-next 05/14] Fix intel/ixgbevf Yuval Mintz
2012-06-19 15:39   ` Alexander Duyck
2012-06-19 16:06     ` Eilon Greenstein
2012-06-19 18:07       ` Greg Rose
2012-06-19 18:24         ` Eilon Greenstein
2012-06-25 11:23         ` Yuval Mintz
2012-06-19 15:14 ` [RFC net-next 06/14] Fix intel/igb Yuval Mintz
2012-06-19 15:42   ` Alexander Duyck
2012-06-19 16:07     ` Eilon Greenstein
2012-06-19 15:14 ` [RFC net-next 07/14] Fix intel/ixgbe Yuval Mintz
2012-06-19 15:54   ` Alexander Duyck
2012-06-19 16:11     ` Eilon Greenstein
2012-06-20  8:55     ` Eric Dumazet
2012-06-20 15:30       ` John Fastabend
2012-06-19 15:14 ` [RFC net-next 08/14] Fix chelsio/cxgb3 Yuval Mintz
2012-06-19 15:14 ` [RFC net-next 09/14] Fix chelsio/cxgb4 Yuval Mintz
2012-06-19 15:14 ` [RFC net-next 10/14] Fix myricom/myri10ge Yuval Mintz
2012-06-19 15:14 ` [RFC net-next 11/14] Fix emulex/benet Yuval Mintz
2012-06-19 22:55   ` Ajit.Khaparde
2012-06-21  6:42   ` Sathya.Perla
2012-06-19 15:14 ` [RFC net-next 12/14] Fix broadcom/tg3 Yuval Mintz
2012-06-19 15:14 ` [RFC net-next 13/14] Fix broadcom/bnx2 Yuval Mintz
2012-06-19 15:14 ` [RFC net-next 14/14] Fix broadcom/bnx2x Yuval Mintz
2012-06-19 16:17 ` [RFC net-next 00/14] default maximal number of RSS queues in mq drivers Eilon Greenstein
2012-06-20 20:43 ` Ben Hutchings
2012-06-20 20:48   ` Ben Hutchings
2012-06-21  5:15     ` Yuval Mintz
2012-06-21  5:49       ` David Miller

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).