* [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers
@ 2012-06-25 11:45 Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 01/14] net-next: Add netif_get_num_default_rss_queues Yuval Mintz
                   ` (13 more replies)
  0 siblings, 14 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev
  Cc: eilong, Yuval Mintz, Divy Le Ray, Or Gerlitz, Jon Mason,
	Anirban Chakraborty, Jitendra Kalsaria, Ron Mercer, Jeff Kirsher,
	Jon Mason, Andrew Gallatin, Sathya Perla, Subbu Seetharaman,
	Ajit Khaparde, Matt Carlson, Michael Chan, Eric Dumazet,
	Ben Hutchings

Different vendors support different numbers of RSS queues by default. Today,
there exists an ethtool API through which users can change the number of
channels their driver supports; this enables us to pursue the goal of using
a common default number of RSS queues across the various multi-queue drivers.

This RFC intends to achieve the above default by upper-limiting the number
of interrupts multi-queue drivers request (by default, i.e. not via the new
API), in correlation with the number of cpus on the machine.

After examining multi-queue drivers that call alloc_etherdev_mq[s],
it became evident that most drivers allocate their devices using hard-coded
values. Changing those defaults directly will most likely cause a regression. 

However, most multi-queue drivers look at the number of online cpus when
requesting interrupts. We assume that the number of interrupts the driver
manages to request is propagated across the driver, and that the number of
RSS queues it configures is based upon it.

This RFC modifies said logic - if the number of cpus is large enough, a
smaller default value is used instead (as sketched below). This serves 2
main purposes:
 1. It is a step toward unifying the default number of RSS queues across drivers.
 2. It prevents wasteful interrupt requests on machines with many cpus.
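
As an illustration, the driver-side pattern this series converges on is
roughly the following (a sketch only; "hw_max_queues" stands in for whatever
hardware limit each driver already enforces):

	/* Cap the default ring/vector count by both the hardware limit
	 * and the new common default, i.e. min(8, num_online_cpus()).
	 */
	num_queues = min_t(int, hw_max_queues,
			   netif_get_num_default_rss_queues());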

Note that, apart from the bnx2x driver, this RFC has only been compile-tested.

Drivers identified as multi-queue, handled in this RFC:

* mellanox mlx4
* neterion vxge
* qlogic   qlge
* intel    igb, ixgbe, ixgbevf
* chelsio  cxgb3, cxgb4
* myricom  myri10ge
* emulex   benet
* broadcom tg3, bnx2, bnx2x

Drivers identified as multi-queue in which no reference to the number of
online cpus was found, and which are thus unhandled in this RFC:

* neterion  s2io
* marvell   mv643xx
* freescale gianfar
* ibm       ehea
* ti        cpmac
* sun       niu
* sfc       efx
* chelsio   cxgb4vf

Cheers,
Yuval Mintz

Cc: Divy Le Ray <divy@chelsio.com>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Cc: Jon Mason <jdmason@kudzu.us>
Cc: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
Cc: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
Cc: Ron Mercer <ron.mercer@qlogic.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jon Mason <mason@myri.com>
Cc: Andrew Gallatin <gallatin@myri.com>
Cc: Sathya Perla <sathya.perla@emulex.com>
Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
Cc: Matt Carlson <mcarlson@broadcom.com>
Cc: Michael Chan <mchan@broadcom.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>

* [RFC net-next (v2) 01/14] net-next: Add netif_get_num_default_rss_queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 02/14] mlx4: set maximal number of default RSS queues Yuval Mintz
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz

Most multi-queue networking drivers consider the number of online cpus when
configuring RSS queues.
This patch adds a wrapper around that number, setting an upper limit on the
number of cpus a driver should consider (by default) when allocating resources
for its queues.
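
For example, with DEFAULT_MAX_NUM_RSS_QUEUES set to 8, the wrapper returns 4
on a 4-cpu machine but 8 (rather than 32) on a 32-cpu machine.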

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 include/linux/netdevice.h |    3 +++
 net/core/dev.c            |   11 +++++++++++
 2 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 2c2ecea..ab0251d 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2119,6 +2119,9 @@ static inline int netif_copy_real_num_queues(struct net_device *to_dev,
 #endif
 }
 
+#define DEFAULT_MAX_NUM_RSS_QUEUES	(8)
+extern int netif_get_num_default_rss_queues(void);
+
 /* Use this variant when it is known for sure that it
  * is executing from hardware interrupt context or with hardware interrupts
  * disabled.
diff --git a/net/core/dev.c b/net/core/dev.c
index 57c4f9b..caae49c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1793,6 +1793,17 @@ int netif_set_real_num_rx_queues(struct net_device *dev, unsigned int rxq)
 EXPORT_SYMBOL(netif_set_real_num_rx_queues);
 #endif
 
+/* netif_get_num_default_rss_queues - default number of RSS queues
+ *
+ * This routine should set an upper limit on the number of RSS queues
+ * used by default by multiqueue devices.
+ */
+int netif_get_num_default_rss_queues(void)
+{
+	return min_t(int, DEFAULT_MAX_NUM_RSS_QUEUES, num_online_cpus());
+}
+EXPORT_SYMBOL(netif_get_num_default_rss_queues);
+
 static inline void __netif_reschedule(struct Qdisc *q)
 {
 	struct softnet_data *sd;
-- 
1.7.9.rc2

* [RFC net-next (v2) 02/14] mlx4: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 01/14] net-next: Add netif_get_num_default_rss_queues Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 03/14] vxge: " Yuval Mintz
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz, Or Gerlitz

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Or Gerlitz <ogerlitz@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx4/main.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index ee6f4fe..8f990a3 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -41,6 +41,7 @@
 #include <linux/slab.h>
 #include <linux/io-mapping.h>
 #include <linux/delay.h>
+#include <linux/netdevice.h>
 
 #include <linux/mlx4/device.h>
 #include <linux/mlx4/doorbell.h>
@@ -1539,8 +1540,8 @@ static void mlx4_enable_msi_x(struct mlx4_dev *dev)
 	struct mlx4_priv *priv = mlx4_priv(dev);
 	struct msix_entry *entries;
 	int nreq = min_t(int, dev->caps.num_ports *
-			 min_t(int, num_online_cpus() + 1, MAX_MSIX_P_PORT)
-				+ MSIX_LEGACY_SZ, MAX_MSIX);
+			 min_t(int, netif_get_num_default_rss_queues() + 1,
+			       MAX_MSIX_P_PORT) + MSIX_LEGACY_SZ, MAX_MSIX);
 	int err;
 	int i;
 
-- 
1.7.9.rc2

* [RFC net-next (v2) 03/14] vxge: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 01/14] net-next: Add netif_get_num_default_rss_queues Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 02/14] mlx4: set maximal number of default RSS queues Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 04/14] qlge: " Yuval Mintz
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz, Jon Mason

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jon Mason <jdmason@kudzu.us>
---
 drivers/net/ethernet/neterion/vxge/vxge-main.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c
index 2578eb1..2fd1edb 100644
--- a/drivers/net/ethernet/neterion/vxge/vxge-main.c
+++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c
@@ -3687,7 +3687,8 @@ static int __devinit vxge_config_vpaths(
 			return 0;
 
 		if (!driver_config->g_no_cpus)
-			driver_config->g_no_cpus = num_online_cpus();
+			driver_config->g_no_cpus =
+				netif_get_num_default_rss_queues();
 
 		driver_config->vpath_per_dev = driver_config->g_no_cpus >> 1;
 		if (!driver_config->vpath_per_dev)
-- 
1.7.9.rc2

* [RFC net-next (v2) 04/14] qlge: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (2 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 03/14] vxge: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 05/14] cxgb3: " Yuval Mintz
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev
  Cc: eilong, Yuval Mintz, Anirban Chakraborty, Jitendra Kalsaria,
	Ron Mercer

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
Cc: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
Cc: Ron Mercer <ron.mercer@qlogic.com>
---
 drivers/net/ethernet/qlogic/qlge/qlge_main.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
index 09d8d33..3c3499d 100644
--- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
@@ -4649,7 +4649,7 @@ static int __devinit qlge_probe(struct pci_dev *pdev,
 	int err = 0;
 
 	ndev = alloc_etherdev_mq(sizeof(struct ql_adapter),
-			min(MAX_CPUS, (int)num_online_cpus()));
+			min(MAX_CPUS, netif_get_num_default_rss_queues()));
 	if (!ndev)
 		return -ENOMEM;
 
-- 
1.7.9.rc2

* [RFC net-next (v2) 05/14] cxgb3: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (3 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 04/14] qlge: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 06/14] cxgb4: " Yuval Mintz
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz, Divy Le Ray

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Divy Le Ray <divy@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
index abb6ce7..9b08749 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
@@ -3050,7 +3050,7 @@ static struct pci_error_handlers t3_err_handler = {
 static void set_nqsets(struct adapter *adap)
 {
 	int i, j = 0;
-	int num_cpus = num_online_cpus();
+	int num_cpus = netif_get_num_default_rss_queues();
 	int hwports = adap->params.nports;
 	int nqsets = adap->msix_nvectors - 1;
 
-- 
1.7.9.rc2

* [RFC net-next (v2) 06/14] cxgb4: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (4 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 05/14] cxgb3: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 07/14] myri10ge: " Yuval Mintz
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz, Divy Le Ray

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Divy Le Ray <divy@chelsio.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index e1f96fb..5ed49af 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -3493,8 +3493,8 @@ static void __devinit cfg_queues(struct adapter *adap)
 	 */
 	if (n10g)
 		q10g = (MAX_ETH_QSETS - (adap->params.nports - n10g)) / n10g;
-	if (q10g > num_online_cpus())
-		q10g = num_online_cpus();
+	if (q10g > netif_get_num_default_rss_queues())
+		q10g = netif_get_num_default_rss_queues();
 
 	for_each_port(adap, i) {
 		struct port_info *pi = adap2pinfo(adap, i);
-- 
1.7.9.rc2

* [RFC net-next (v2) 07/14] myri10ge: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (5 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 06/14] cxgb4: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 08/14] tg3: " Yuval Mintz
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz, Jon Mason

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jon Mason <mason@myri.com>
---
 drivers/net/ethernet/myricom/myri10ge/myri10ge.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
index 90153fc..fa85cf1 100644
--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
@@ -3775,7 +3775,7 @@ static void myri10ge_probe_slices(struct myri10ge_priv *mgp)
 
 	mgp->num_slices = 1;
 	msix_cap = pci_find_capability(pdev, PCI_CAP_ID_MSIX);
-	ncpus = num_online_cpus();
+	ncpus = netif_get_num_default_rss_queues();
 
 	if (myri10ge_max_slices == 1 || msix_cap == 0 ||
 	    (myri10ge_max_slices == -1 && ncpus < 2))
-- 
1.7.9.rc2

* [RFC net-next (v2) 08/14] tg3: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (6 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 07/14] myri10ge: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 09/14] bnx2: " Yuval Mintz
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz, Matt Carlson

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Matt Carlson <mcarlson@broadcom.com>
---
 drivers/net/ethernet/broadcom/tg3.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index e47ff8b..6cbab03 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -9908,7 +9908,7 @@ static bool tg3_enable_msix(struct tg3 *tp)
 	int i, rc;
 	struct msix_entry msix_ent[tp->irq_max];
 
-	tp->irq_cnt = num_online_cpus();
+	tp->irq_cnt = netif_get_num_default_rss_queues();
 	if (tp->irq_cnt > 1) {
 		/* We want as many rx rings enabled as there are cpus.
 		 * In multiqueue MSI-X mode, the first MSI-X vector
-- 
1.7.9.rc2

* [RFC net-next (v2) 09/14] bnx2: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (7 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 08/14] tg3: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 10/14] bnx2x: " Yuval Mintz
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz, Michael Chan

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Michael Chan <mchan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
index 9b69a62..3f49285 100644
--- a/drivers/net/ethernet/broadcom/bnx2.c
+++ b/drivers/net/ethernet/broadcom/bnx2.c
@@ -6246,7 +6246,7 @@ bnx2_enable_msix(struct bnx2 *bp, int msix_vecs)
 static int
 bnx2_setup_int_mode(struct bnx2 *bp, int dis_msi)
 {
-	int cpus = num_online_cpus();
+	int cpus = netif_get_num_default_rss_queues();
 	int msix_vecs;
 
 	if (!bp->num_req_rx_rings)
-- 
1.7.9.rc2

* [RFC net-next (v2) 10/14] bnx2x: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (8 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 09/14] bnx2: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 11/14] igb: " Yuval Mintz
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index daa894b..53659f3 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -822,7 +822,8 @@ static inline int bnx2x_calc_num_queues(struct bnx2x *bp)
 {
 	return  num_queues ?
 		 min_t(int, num_queues, BNX2X_MAX_QUEUES(bp)) :
-		 min_t(int, num_online_cpus(), BNX2X_MAX_QUEUES(bp));
+		 min_t(int, netif_get_num_default_rss_queues(),
+		       BNX2X_MAX_QUEUES(bp));
 }
 
 static inline void bnx2x_clear_sge_mask_next_elems(struct bnx2x_fastpath *fp)
-- 
1.7.9.rc2

* [RFC net-next (v2) 11/14] igb: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (9 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 10/14] bnx2x: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 17:31   ` Vick, Matthew
  2012-06-25 11:45 ` [RFC net-next (v2) 12/14] ixgbe: " Yuval Mintz
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev; +Cc: eilong, Yuval Mintz, Jeff Kirsher

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/igb/igb_main.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 01ced68..b11ee60 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2465,7 +2465,8 @@ static int __devinit igb_sw_init(struct igb_adapter *adapter)
 		break;
 	}
 
-	adapter->rss_queues = min_t(u32, max_rss_queues, num_online_cpus());
+	adapter->rss_queues = min_t(u32, max_rss_queues,
+				    netif_get_num_default_rss_queues());
 
 	/* Determine if we need to pair queues. */
 	switch (hw->mac.type) {
-- 
1.7.9.rc2

* [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (10 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 11/14] igb: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 15:44   ` Alexander Duyck
  2012-06-25 11:45 ` [RFC net-next (v2) 13/14] be2net: " Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 14/14] ixgbevf: " Yuval Mintz
  13 siblings, 1 reply; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev
  Cc: eilong, Yuval Mintz, Jeff Kirsher, John Fastabend,
	Alexander Duyck

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: John Fastabend <john.r.fastabend@intel.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index af1a531..b352ea8 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -802,7 +802,7 @@ static int ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
 	 * The default is to use pairs of vectors.
 	 */
 	v_budget = max(adapter->num_rx_queues, adapter->num_tx_queues);
-	v_budget = min_t(int, v_budget, num_online_cpus());
+	v_budget = min_t(int, v_budget, netif_get_num_default_rss_queues());
 	v_budget += NON_Q_VECTORS;
 
 	/*
-- 
1.7.9.rc2

* [RFC net-next (v2) 13/14] be2net: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (11 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 12/14] ixgbe: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 11:45 ` [RFC net-next (v2) 14/14] ixgbevf: " Yuval Mintz
  13 siblings, 0 replies; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev
  Cc: eilong, Yuval Mintz, Sathya Perla, Subbu Seetharaman,
	Ajit Khaparde

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Sathya Perla <sathya.perla@emulex.com>
Cc: Subbu Seetharaman <subbu.seetharaman@emulex.com>
Cc: Ajit Khaparde <ajit.khaparde@emulex.com>
---
 drivers/net/ethernet/emulex/benet/be_main.c |   10 ++++++----
 1 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 5a34503..a8564d0 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -2142,12 +2142,14 @@ static void be_msix_disable(struct be_adapter *adapter)
 
 static uint be_num_rss_want(struct be_adapter *adapter)
 {
+	u32 num = 0;
 	if ((adapter->function_caps & BE_FUNCTION_CAPS_RSS) &&
 	     !sriov_want(adapter) && be_physfn(adapter) &&
-	     !be_is_mc(adapter))
-		return (adapter->be3_native) ? BE3_MAX_RSS_QS : BE2_MAX_RSS_QS;
-	else
-		return 0;
+	     !be_is_mc(adapter)) {
+		num = (adapter->be3_native) ? BE3_MAX_RSS_QS : BE2_MAX_RSS_QS;
+		num = min_t(u32, num, (u32)netif_get_num_default_rss_queues());
+	}
+	return num;
 }
 
 static void be_msix_enable(struct be_adapter *adapter)
-- 
1.7.9.rc2

* [RFC net-next (v2) 14/14] ixgbevf: set maximal number of default RSS queues
  2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
                   ` (12 preceding siblings ...)
  2012-06-25 11:45 ` [RFC net-next (v2) 13/14] be2net: " Yuval Mintz
@ 2012-06-25 11:45 ` Yuval Mintz
  2012-06-25 15:45   ` Alexander Duyck
  13 siblings, 1 reply; 30+ messages in thread
From: Yuval Mintz @ 2012-06-25 11:45 UTC (permalink / raw)
  To: davem, netdev
  Cc: eilong, Yuval Mintz, Jeff Kirsher, Alexander Duyck, Greg Rose

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Greg Rose <gregory.v.rose@intel.com>
---
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index f69ec42..7889644 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -2023,7 +2023,7 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
 	 * (roughly) twice the number of vectors as there are CPU's.
 	 */
 	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
-		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
+		       netif_get_num_default_rss_queues() * 2) + NON_Q_VECTORS;
 
 	/* A failure in MSI-X entry allocation isn't fatal, but it does
 	 * mean we disable MSI-X capabilities of the adapter. */
-- 
1.7.9.rc2

* Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-25 11:45 ` [RFC net-next (v2) 12/14] ixgbe: " Yuval Mintz
@ 2012-06-25 15:44   ` Alexander Duyck
  2012-06-25 17:53     ` Eilon Greenstein
  0 siblings, 1 reply; 30+ messages in thread
From: Alexander Duyck @ 2012-06-25 15:44 UTC (permalink / raw)
  To: Yuval Mintz; +Cc: davem, netdev, eilong, Jeff Kirsher, John Fastabend

On 06/25/2012 04:45 AM, Yuval Mintz wrote:
> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
>
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Cc: John Fastabend <john.r.fastabend@intel.com>
> Cc: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> index af1a531..b352ea8 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> @@ -802,7 +802,7 @@ static int ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
>  	 * The default is to use pairs of vectors.
>  	 */
>  	v_budget = max(adapter->num_rx_queues, adapter->num_tx_queues);
> -	v_budget = min_t(int, v_budget, num_online_cpus());
> +	v_budget = min_t(int, v_budget, netif_get_num_default_rss_queues());
>  	v_budget += NON_Q_VECTORS;
>  
>  	/*
This doesn't limit queues, only interrupt vectors.  As I told you before
you should look at the ixgbe_set_rss_queues function if you actually
want to limit the number of RSS queues.

Thanks,

Alex

* Re: [RFC net-next (v2) 14/14] ixgbevf: set maximal number of default RSS queues
  2012-06-25 11:45 ` [RFC net-next (v2) 14/14] ixgbevf: " Yuval Mintz
@ 2012-06-25 15:45   ` Alexander Duyck
  2012-06-25 15:53     ` Eilon Greenstein
  0 siblings, 1 reply; 30+ messages in thread
From: Alexander Duyck @ 2012-06-25 15:45 UTC (permalink / raw)
  To: Yuval Mintz; +Cc: davem, netdev, eilong, Jeff Kirsher, Greg Rose

On 06/25/2012 04:45 AM, Yuval Mintz wrote:
> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
>
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Cc: Alexander Duyck <alexander.h.duyck@intel.com>
> Cc: Greg Rose <gregory.v.rose@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> index f69ec42..7889644 100644
> --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> @@ -2023,7 +2023,7 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
>  	 * (roughly) twice the number of vectors as there are CPU's.
>  	 */
>  	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
> -		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
> +		       netif_get_num_default_rss_queues() * 2) + NON_Q_VECTORS;
>  
>  	/* A failure in MSI-X entry allocation isn't fatal, but it does
>  	 * mean we disable MSI-X capabilities of the adapter. */
I'm not even sure why you are bothering to alter this driver since it is
currently single queue only.

Thanks,

Alex

* Re: [RFC net-next (v2) 14/14] ixgbevf: set maximal number of default RSS queues
  2012-06-25 15:45   ` Alexander Duyck
@ 2012-06-25 15:53     ` Eilon Greenstein
  2012-06-25 16:33       ` Alexander Duyck
  0 siblings, 1 reply; 30+ messages in thread
From: Eilon Greenstein @ 2012-06-25 15:53 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, davem, netdev, Jeff Kirsher, Greg Rose

On Mon, 2012-06-25 at 08:45 -0700, Alexander Duyck wrote:
> On 06/25/2012 04:45 AM, Yuval Mintz wrote:
> > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> >
> > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > Cc: Alexander Duyck <alexander.h.duyck@intel.com>
> > Cc: Greg Rose <gregory.v.rose@intel.com>
> > ---
> >  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > index f69ec42..7889644 100644
> > --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
> > @@ -2023,7 +2023,7 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
> >  	 * (roughly) twice the number of vectors as there are CPU's.
> >  	 */
> >  	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
> > -		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
> > +		       netif_get_num_default_rss_queues() * 2) + NON_Q_VECTORS;
> >  
> >  	/* A failure in MSI-X entry allocation isn't fatal, but it does
> >  	 * mean we disable MSI-X capabilities of the adapter. */
> I'm not even sure why you are bothering to alter this driver since it is
> currently single queue only.

Alex,

Can you please explain the logic behind these lines in the current
implementation?
	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
It looks as if you were trying to be generic and support a future in
which this driver will support more than a single queue - to be future
ready, this RFC is replacing the usage of num_online_cpus with
netif_get_num_default_rss_queues. Are you suggesting to remove these
lines all together? Keeping this minimum function with only the
num_online_cpus does not seem right without considering also the new
default value.

Thanks,
Eilon

* Re: [RFC net-next (v2) 14/14] ixgbevf: set maximal number of default RSS queues
  2012-06-25 15:53     ` Eilon Greenstein
@ 2012-06-25 16:33       ` Alexander Duyck
  2012-06-25 18:28         ` Eilon Greenstein
  0 siblings, 1 reply; 30+ messages in thread
From: Alexander Duyck @ 2012-06-25 16:33 UTC (permalink / raw)
  To: eilong; +Cc: Yuval Mintz, davem, netdev, Jeff Kirsher, Greg Rose

On 06/25/2012 08:53 AM, Eilon Greenstein wrote:
> On Mon, 2012-06-25 at 08:45 -0700, Alexander Duyck wrote:
>> On 06/25/2012 04:45 AM, Yuval Mintz wrote:
>>> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
>>> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
>>>
>>> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
>>> Cc: Alexander Duyck <alexander.h.duyck@intel.com>
>>> Cc: Greg Rose <gregory.v.rose@intel.com>
>>> ---
>>>  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c |    2 +-
>>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
>>> index f69ec42..7889644 100644
>>> --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
>>> +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
>>> @@ -2023,7 +2023,7 @@ static int ixgbevf_set_interrupt_capability(struct ixgbevf_adapter *adapter)
>>>  	 * (roughly) twice the number of vectors as there are CPU's.
>>>  	 */
>>>  	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
>>> -		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
>>> +		       netif_get_num_default_rss_queues() * 2) + NON_Q_VECTORS;
>>>  
>>>  	/* A failure in MSI-X entry allocation isn't fatal, but it does
>>>  	 * mean we disable MSI-X capabilities of the adapter. */
>> I'm not even sure why you are bothering to alter this driver since it is
>> currently single queue only.
> Alex,
>
> Can you please explain the logic behind these lines in the current
> implementation?
> 	v_budget = min(adapter->num_rx_queues + adapter->num_tx_queues,
> 		       (int)(num_online_cpus() * 2)) + NON_Q_VECTORS;
> It looks as if you were trying to be generic and support a future in
> which this driver will support more than a single queue - to be future
> ready, this RFC is replacing the usage of num_online_cpus with
> netif_get_num_default_rss_queues. Are you suggesting to remove these
> lines all together? Keeping this minimum function with only the
> num_online_cpus does not seem right without considering also the new
> default value.
>
> Thanks,
> Eilon
Eilon,

It seems like you didn't listen to anything I told you for the last
round of patches.

You are confusing interrupt vectors with queues as you did with ixgbe. 
The VF driver supports up to 3 interrupts, but is only single queue.  As
such we can split things out so that we either support one vector with
both Tx and Rx or separate Tx and Rx vectors.  How many vectors you have
doesn't really matter anyway since this patch doesn't limit queues.  All
it does is limit the number of interrupt vectors.

I suggest you locate where we set "adapter->num_rx_queues" and attempt
to address the issue there.

Thanks,

Alex

* RE: [RFC net-next (v2) 11/14] igb: set maximal number of default RSS queues
  2012-06-25 11:45 ` [RFC net-next (v2) 11/14] igb: " Yuval Mintz
@ 2012-06-25 17:31   ` Vick, Matthew
  2012-06-25 18:20     ` Eilon Greenstein
  2012-06-25 18:38     ` Ben Hutchings
  0 siblings, 2 replies; 30+ messages in thread
From: Vick, Matthew @ 2012-06-25 17:31 UTC (permalink / raw)
  To: Yuval Mintz, davem@davemloft.net, netdev@vger.kernel.org
  Cc: eilong@broadcom.com, Kirsher, Jeffrey T

> -----Original Message-----
> From: netdev-owner@vger.kernel.org [mailto:netdev-
> owner@vger.kernel.org] On Behalf Of Yuval Mintz
> Sent: Monday, June 25, 2012 4:46 AM
> To: davem@davemloft.net; netdev@vger.kernel.org
> Cc: eilong@broadcom.com; Yuval Mintz; Kirsher, Jeffrey T
> Subject: [RFC net-next (v2) 11/14] igb: set maximal number of default
> RSS queues
> 
> Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> 
> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> ---
>  drivers/net/ethernet/intel/igb/igb_main.c |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/igb/igb_main.c
> b/drivers/net/ethernet/intel/igb/igb_main.c
> index 01ced68..b11ee60 100644
> --- a/drivers/net/ethernet/intel/igb/igb_main.c
> +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> @@ -2465,7 +2465,8 @@ static int __devinit igb_sw_init(struct
> igb_adapter *adapter)
>  		break;
>  	}
> 
> -	adapter->rss_queues = min_t(u32, max_rss_queues,
> num_online_cpus());
> +	adapter->rss_queues = min_t(u32, max_rss_queues,
> +				    netif_get_num_default_rss_queues());
> 
>  	/* Determine if we need to pair queues. */
>  	switch (hw->mac.type) {
> --
> 1.7.9.rc2

Since igb supports a maximum of 8 RSS queues anyway, it's generally unaffected by this change. That being said, I'm confused about what you're trying to accomplish--is it to standardize the number of RSS queues or limit the number of interrupts by default? Or is it the former with a hope that it will trickle down to the latter?

For example, if you take an igb device that supports 8 RSS queues (say, I350) and you change the default to 4 RSS queues, you will end up with the same number of interrupt vectors being requested as we will disable queue pairing and request separate vectors for Rx/Tx queues. I haven't looked at the other drivers affected by this RFC, but it's possible other drivers behave the same way.

Matthew

Matthew Vick
Linux Development
LAN Access Division
Intel Corporation

* Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-25 15:44   ` Alexander Duyck
@ 2012-06-25 17:53     ` Eilon Greenstein
  2012-06-25 18:38       ` Alexander Duyck
  0 siblings, 1 reply; 30+ messages in thread
From: Eilon Greenstein @ 2012-06-25 17:53 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, davem, netdev, Jeff Kirsher, John Fastabend

On Mon, 2012-06-25 at 08:44 -0700, Alexander Duyck wrote:
> This doesn't limit queues, only interrupt vectors.  As I told you before
> you should look at the ixgbe_set_rss_queues function if you actually
> want to limit the number of RSS queues.

How about this:
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index af1a531..23a8609 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -277,6 +277,8 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
 	bool ret = false;
 	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
 
+	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
+
 	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
 		f->mask = 0xF;
 		adapter->num_rx_queues = f->indices;
@@ -302,7 +304,9 @@ static inline bool ixgbe_set_fdir_queues(struct ixgbe_adapter *adapter)
 	bool ret = false;
 	struct ixgbe_ring_feature *f_fdir = &adapter->ring_feature[RING_F_FDIR];
 
-	f_fdir->indices = min_t(int, num_online_cpus(), f_fdir->indices);
+	f_fdir->indices = min_t(int, netif_get_num_default_rss_queues(),
+				f_fdir->indices);
+
 	f_fdir->mask = 0;
 
 	/*
@@ -339,8 +343,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
 	if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
 		return false;
 
-	f->indices = min_t(int, num_online_cpus(), f->indices);
-
+	f->indices = min_t(int, f->indices, netif_get_num_default_rss_queues());
 	adapter->num_rx_queues = 1;
 	adapter->num_tx_queues = 1;
 

* RE: [RFC net-next (v2) 11/14] igb: set maximal number of default RSS queues
  2012-06-25 17:31   ` Vick, Matthew
@ 2012-06-25 18:20     ` Eilon Greenstein
  2012-06-25 18:38     ` Ben Hutchings
  1 sibling, 0 replies; 30+ messages in thread
From: Eilon Greenstein @ 2012-06-25 18:20 UTC (permalink / raw)
  To: Vick, Matthew
  Cc: Yuval Mintz, davem@davemloft.net, netdev@vger.kernel.org,
	Kirsher, Jeffrey T

On Mon, 2012-06-25 at 17:31 +0000, Vick, Matthew wrote:
> > -----Original Message-----
> > From: netdev-owner@vger.kernel.org [mailto:netdev-
> > owner@vger.kernel.org] On Behalf Of Yuval Mintz
> > Sent: Monday, June 25, 2012 4:46 AM
> > To: davem@davemloft.net; netdev@vger.kernel.org
> > Cc: eilong@broadcom.com; Yuval Mintz; Kirsher, Jeffrey T
> > Subject: [RFC net-next (v2) 11/14] igb: set maximal number of default
> > RSS queues
> > 
> > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> > 
> > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > ---
> >  drivers/net/ethernet/intel/igb/igb_main.c |    3 ++-
> >  1 files changed, 2 insertions(+), 1 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/igb/igb_main.c
> > b/drivers/net/ethernet/intel/igb/igb_main.c
> > index 01ced68..b11ee60 100644
> > --- a/drivers/net/ethernet/intel/igb/igb_main.c
> > +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> > @@ -2465,7 +2465,8 @@ static int __devinit igb_sw_init(struct
> > igb_adapter *adapter)
> >  		break;
> >  	}
> > 
> > -	adapter->rss_queues = min_t(u32, max_rss_queues,
> > num_online_cpus());
> > +	adapter->rss_queues = min_t(u32, max_rss_queues,
> > +				    netif_get_num_default_rss_queues());
> > 
> >  	/* Determine if we need to pair queues. */
> >  	switch (hw->mac.type) {
> > --
> > 1.7.9.rc2
> 
> Since igb supports a maximum of 8 RSS queues anyway, it's generally unaffected by this change. That being said, I'm confused about what you're trying to accomplish--is it to standardize the number of RSS queues or limit the number of interrupts by default? Or is it the former with a hope that it will trickle down to the latter?
> 
> For example, if you take an igb device that supports 8 RSS queues (say, I350) and you change the default to 4 RSS queues, you will end up with the same number of interrupt vectors being requested as we will disable queue pairing and request separate vectors for Rx/Tx queues. I haven't looked at the other drivers affected by this RFC, but it's possible other drivers behave the same way.

The main motivation is the Rx rings, which consume a significant amount of
memory. The MSI-X vectors are a nice byproduct, but the logic you describe
still sounds good for such low numbers.

* Re: [RFC net-next (v2) 14/14] ixgbevf: set maximal number of default RSS queues
  2012-06-25 16:33       ` Alexander Duyck
@ 2012-06-25 18:28         ` Eilon Greenstein
  0 siblings, 0 replies; 30+ messages in thread
From: Eilon Greenstein @ 2012-06-25 18:28 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, davem, netdev, Jeff Kirsher, Greg Rose

On Mon, 2012-06-25 at 09:33 -0700, Alexander Duyck wrote:
> I suggest you locate where we set "adapter->num_rx_queues" and attempt
> to address the issue there.

I see what you mean. On one hand, you use the num_rx_queues as if it can
get higher numbers and the code seems to be ready for that with all the
for loops in place, but on the other hand, it is hard coded to 1 at
ixgbevf_set_num_queues and it seems weird to add min function with
explicit '1'. Do you suggest to leave this driver alone hoping that if
in the future the '1' will change the author will remember to use the
default value?

Thanks,
Eilon

* Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-25 17:53     ` Eilon Greenstein
@ 2012-06-25 18:38       ` Alexander Duyck
  2012-06-25 18:44         ` Ben Hutchings
  2012-06-26 11:08         ` Yuval Mintz
  0 siblings, 2 replies; 30+ messages in thread
From: Alexander Duyck @ 2012-06-25 18:38 UTC (permalink / raw)
  To: eilong; +Cc: Yuval Mintz, davem, netdev, Jeff Kirsher, John Fastabend

On 06/25/2012 10:53 AM, Eilon Greenstein wrote:
> On Mon, 2012-06-25 at 08:44 -0700, Alexander Duyck wrote:
>> This doesn't limit queues, only interrupt vectors.  As I told you before
>> you should look at the ixgbe_set_rss_queues function if you actually
>> want to limit the number of RSS queues.
> How about this:
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> index af1a531..23a8609 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> @@ -277,6 +277,8 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
>  	bool ret = false;
>  	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
>  
> +	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
> +
>  	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
>  		f->mask = 0xF;
>  		adapter->num_rx_queues = f->indices;
> @@ -302,7 +304,9 @@ static inline bool ixgbe_set_fdir_queues(struct ixgbe_adapter *adapter)
>  	bool ret = false;
>  	struct ixgbe_ring_feature *f_fdir = &adapter->ring_feature[RING_F_FDIR];
>  
> -	f_fdir->indices = min_t(int, num_online_cpus(), f_fdir->indices);
> +	f_fdir->indices = min_t(int, netif_get_num_default_rss_queues(),
> +				f_fdir->indices);
> +
>  	f_fdir->mask = 0;
>  
>  	/*
> @@ -339,8 +343,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
>  	if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
>  		return false;
>  
> -	f->indices = min_t(int, num_online_cpus(), f->indices);
> -
> +	f->indices = min_t(int, f->indices, netif_get_num_default_rss_queues());
>  	adapter->num_rx_queues = 1;
>  	adapter->num_tx_queues = 1;
>
This makes much more sense, but still needs a few minor changes.  The
first change I would recommend is to not alter ixgbe_set_fdir_queues
since that controls flow director queues, not RSS queues.  The second
would be to update ixgbe_set_dcb_queues since that does RSS per DCB
traffic class and will also need to be updated.

Thanks,

Alex

* RE: [RFC net-next (v2) 11/14] igb: set maximal number of default RSS queues
  2012-06-25 17:31   ` Vick, Matthew
  2012-06-25 18:20     ` Eilon Greenstein
@ 2012-06-25 18:38     ` Ben Hutchings
  1 sibling, 0 replies; 30+ messages in thread
From: Ben Hutchings @ 2012-06-25 18:38 UTC (permalink / raw)
  To: Vick, Matthew
  Cc: Yuval Mintz, davem@davemloft.net, netdev@vger.kernel.org,
	eilong@broadcom.com, Kirsher, Jeffrey T

On Mon, 2012-06-25 at 17:31 +0000, Vick, Matthew wrote:
> > -----Original Message-----
> > From: netdev-owner@vger.kernel.org [mailto:netdev-
> > owner@vger.kernel.org] On Behalf Of Yuval Mintz
> > Sent: Monday, June 25, 2012 4:46 AM
> > To: davem@davemloft.net; netdev@vger.kernel.org
> > Cc: eilong@broadcom.com; Yuval Mintz; Kirsher, Jeffrey T
> > Subject: [RFC net-next (v2) 11/14] igb: set maximal number of default
> > RSS queues
> > 
> > Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
> > Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
> > 
> > Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > ---
> >  drivers/net/ethernet/intel/igb/igb_main.c |    3 ++-
> >  1 files changed, 2 insertions(+), 1 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/igb/igb_main.c
> > b/drivers/net/ethernet/intel/igb/igb_main.c
> > index 01ced68..b11ee60 100644
> > --- a/drivers/net/ethernet/intel/igb/igb_main.c
> > +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> > @@ -2465,7 +2465,8 @@ static int __devinit igb_sw_init(struct
> > igb_adapter *adapter)
> >  		break;
> >  	}
> > 
> > -	adapter->rss_queues = min_t(u32, max_rss_queues,
> > num_online_cpus());
> > +	adapter->rss_queues = min_t(u32, max_rss_queues,
> > +				    netif_get_num_default_rss_queues());
> > 
> >  	/* Determine if we need to pair queues. */
> >  	switch (hw->mac.type) {
> > --
> > 1.7.9.rc2
> 
> Since igb supports a maximum of 8 RSS queues anyway, it's generally
> unaffected by this change.

I'm interested in providing a global configuration mechanism for this,
which is why I asked Yuval to introduce this function.

> That being said, I'm confused about what you're trying to
> accomplish--is it to standardize the number of RSS queues or limit the
> number of interrupts by default? Or is it the former with a hope that
> it will trickle down to the latter?
>
> For example, if you take an igb device that supports 8 RSS queues
> (say, I350) and you change the default to 4 RSS queues, you will end
> up with the same number of interrupt vectors being requested as we
> will disable queue pairing and request separate vectors for Rx/Tx
> queues. I haven't looked at the other drivers affected by this RFC,
> but it's possible other drivers behave the same way.

sfc also supports those two modes, but under control of a module
parameter.  If it's a common enough feature then perhaps it should also
be subject to global configuration (though that might be confusing if
most drivers continue to support only one mode).

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

* Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-25 18:38       ` Alexander Duyck
@ 2012-06-25 18:44         ` Ben Hutchings
  2012-06-25 21:03           ` Alexander Duyck
  2012-06-26 11:08         ` Yuval Mintz
  1 sibling, 1 reply; 30+ messages in thread
From: Ben Hutchings @ 2012-06-25 18:44 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: eilong, Yuval Mintz, davem, netdev, Jeff Kirsher, John Fastabend

On Mon, 2012-06-25 at 11:38 -0700, Alexander Duyck wrote:
> On 06/25/2012 10:53 AM, Eilon Greenstein wrote:
> > On Mon, 2012-06-25 at 08:44 -0700, Alexander Duyck wrote:
> >> This doesn't limit queues, only interrupt vectors.  As I told you before
> >> you should look at the ixgbe_set_rss_queues function if you actually
> >> want to limit the number of RSS queues.
> > How about this:
> > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > index af1a531..23a8609 100644
> > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > @@ -277,6 +277,8 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
> >  	bool ret = false;
> >  	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
> >  
> > +	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
> > +
> >  	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
> >  		f->mask = 0xF;
> >  		adapter->num_rx_queues = f->indices;
> > @@ -302,7 +304,9 @@ static inline bool ixgbe_set_fdir_queues(struct ixgbe_adapter *adapter)
> >  	bool ret = false;
> >  	struct ixgbe_ring_feature *f_fdir = &adapter->ring_feature[RING_F_FDIR];
> >  
> > -	f_fdir->indices = min_t(int, num_online_cpus(), f_fdir->indices);
> > +	f_fdir->indices = min_t(int, netif_get_num_default_rss_queues(),
> > +				f_fdir->indices);
> > +
> >  	f_fdir->mask = 0;
> >  
> >  	/*
> > @@ -339,8 +343,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
> >  	if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
> >  		return false;
> >  
> > -	f->indices = min_t(int, num_online_cpus(), f->indices);
> > -
> > +	f->indices = min_t(int, f->indices, netif_get_num_default_rss_queues());
> >  	adapter->num_rx_queues = 1;
> >  	adapter->num_tx_queues = 1;
> >
> This makes much more sense, but still needs a few minor changes.  The
> first change I would recommend is to not alter ixgbe_set_fdir_queues
> since that controls flow director queues, not RSS queues.  The second
> would be to update ixgbe_set_dcb_queues since that does RSS per DCB
> traffic class and will also need to be updated.

There is a difference between the stated aim (reduce memory allocated
for RX buffers) and this implementation (limit number of RSS queues
only).  If we agree on that aim then we should be limiting the total
number of RX queues.

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

* Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-25 18:44         ` Ben Hutchings
@ 2012-06-25 21:03           ` Alexander Duyck
  0 siblings, 0 replies; 30+ messages in thread
From: Alexander Duyck @ 2012-06-25 21:03 UTC (permalink / raw)
  To: Ben Hutchings
  Cc: eilong, Yuval Mintz, davem, netdev, Jeff Kirsher, John Fastabend

On 06/25/2012 11:44 AM, Ben Hutchings wrote:
> On Mon, 2012-06-25 at 11:38 -0700, Alexander Duyck wrote:
>> On 06/25/2012 10:53 AM, Eilon Greenstein wrote:
>>> On Mon, 2012-06-25 at 08:44 -0700, Alexander Duyck wrote:
>>>> This doesn't limit queues, only interrupt vectors.  As I told you before
>>>> you should look at the ixgbe_set_rss_queues function if you actually
>>>> want to limit the number of RSS queues.
>>> How about this:
>>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> index af1a531..23a8609 100644
>>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> @@ -277,6 +277,8 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
>>>  	bool ret = false;
>>>  	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
>>>  
>>> +	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
>>> +
>>>  	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
>>>  		f->mask = 0xF;
>>>  		adapter->num_rx_queues = f->indices;
>>> @@ -302,7 +304,9 @@ static inline bool ixgbe_set_fdir_queues(struct ixgbe_adapter *adapter)
>>>  	bool ret = false;
>>>  	struct ixgbe_ring_feature *f_fdir = &adapter->ring_feature[RING_F_FDIR];
>>>  
>>> -	f_fdir->indices = min_t(int, num_online_cpus(), f_fdir->indices);
>>> +	f_fdir->indices = min_t(int, netif_get_num_default_rss_queues(),
>>> +				f_fdir->indices);
>>> +
>>>  	f_fdir->mask = 0;
>>>  
>>>  	/*
>>> @@ -339,8 +343,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
>>>  	if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
>>>  		return false;
>>>  
>>> -	f->indices = min_t(int, num_online_cpus(), f->indices);
>>> -
>>> +	f->indices = min_t(int, f->indices, netif_get_num_default_rss_queues());
>>>  	adapter->num_rx_queues = 1;
>>>  	adapter->num_tx_queues = 1;
>>>
>> This makes much more sense, but still needs a few minor changes.  The
>> first change I would recommend is to not alter ixgbe_set_fdir_queues
>> since that controls flow director queues, not RSS queues.  The second
>> would be to update ixgbe_set_dcb_queues since that does RSS per DCB
>> traffic class and will also need to be updated.
> There is a difference between the stated aim (reduce memory allocated
> for RX buffers) and this implementation (limit number of RSS queues
> only).  If we agree on that aim then we should be limiting the total
> number of RX queues.
>
> Ben
The problem is there are certain features that require a certain number
of Tx/Rx queues.  In addition, certain features will behave differently
from RSS in terms of how well they scale based on the number of queues.

For example if we enable DCB we require at least one queue per traffic
class.  In the case of ATR we should have 1 queue per CPU since the Tx
queue selection is based on the number of CPUs.  If we don't have that
1:1 mapping we should be disabling ATR since the feature will not
function correctly in that case.

Thanks,

Alex

* Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-25 18:38       ` Alexander Duyck
  2012-06-25 18:44         ` Ben Hutchings
@ 2012-06-26 11:08         ` Yuval Mintz
  2012-06-26 15:55           ` Alexander Duyck
  1 sibling, 1 reply; 30+ messages in thread
From: Yuval Mintz @ 2012-06-26 11:08 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: eilong, davem, netdev, Jeff Kirsher, John Fastabend


>> How about this:
>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> index af1a531..23a8609 100644
>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> @@ -277,6 +277,8 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
>>  	bool ret = false;
>>  	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
>>  
>> +	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
>> +
>>  	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
>>  		f->mask = 0xF;
>>  		adapter->num_rx_queues = f->indices;
>> @@ -302,7 +304,9 @@ static inline bool ixgbe_set_fdir_queues(struct ixgbe_adapter *adapter)
>>  	bool ret = false;
>>  	struct ixgbe_ring_feature *f_fdir = &adapter->ring_feature[RING_F_FDIR];
>>  
>> -	f_fdir->indices = min_t(int, num_online_cpus(), f_fdir->indices);
>> +	f_fdir->indices = min_t(int, netif_get_num_default_rss_queues(),
>> +				f_fdir->indices);
>> +
>>  	f_fdir->mask = 0;
>>  
>>  	/*
>> @@ -339,8 +343,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
>>  	if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
>>  		return false;
>>  
>> -	f->indices = min_t(int, num_online_cpus(), f->indices);
>> -
>> +	f->indices = min_t(int, f->indices, netif_get_num_default_rss_queues());
>>  	adapter->num_rx_queues = 1;
>>  	adapter->num_tx_queues = 1;
>>
> This makes much more sense, but still needs a few minor changes.



Well, what about this one:

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index af1a531..0dd1e51 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -277,6 +277,7 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
 	bool ret = false;
 	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];

+	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
 	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
 		f->mask = 0xF;
 		adapter->num_rx_queues = f->indices;
@@ -376,7 +377,7 @@ static inline bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter)

 	/* Map queue offset and counts onto allocated tx queues */
 	per_tc_q = min_t(unsigned int, dev->num_tx_queues / tcs, DCB_QUEUE_CAP);
-	q = min_t(int, num_online_cpus(), per_tc_q);
+	q = min_t(int, netif_get_num_default_rss_queues(), per_tc_q);

 	for (i = 0; i < tcs; i++) {
 		netdev_set_tc_queue(dev, i, q, offset);
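
(For completeness: the netif_get_num_default_rss_queues() helper used
above is the one patch 01/14 adds.  Conceptually it just caps
num_online_cpus() at a small default, along the lines of the sketch
below, so each min_t() change above can only reduce, never raise, the
queue count relative to the current code.  The constant name and value
are assumptions here:

	int netif_get_num_default_rss_queues(void)
	{
		/* cap the per-driver default at a small number, say 8 */
		return min_t(int, DEFAULT_MAX_NUM_RSS_QUEUES,
			     num_online_cpus());
	}

with DEFAULT_MAX_NUM_RSS_QUEUES being a small compile-time constant.)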

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-26 11:08         ` Yuval Mintz
@ 2012-06-26 15:55           ` Alexander Duyck
  2012-06-26 18:23             ` Eilon Greenstein
  0 siblings, 1 reply; 30+ messages in thread
From: Alexander Duyck @ 2012-06-26 15:55 UTC (permalink / raw)
  To: Yuval Mintz; +Cc: eilong, davem, netdev, Jeff Kirsher, John Fastabend

On 06/26/2012 04:08 AM, Yuval Mintz wrote:
>>> How about this:
>>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> index af1a531..23a8609 100644
>>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>>> @@ -277,6 +277,8 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
>>>  	bool ret = false;
>>>  	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
>>>  
>>> +	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
>>> +
>>>  	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
>>>  		f->mask = 0xF;
>>>  		adapter->num_rx_queues = f->indices;
>>> @@ -302,7 +304,9 @@ static inline bool ixgbe_set_fdir_queues(struct ixgbe_adapter *adapter)
>>>  	bool ret = false;
>>>  	struct ixgbe_ring_feature *f_fdir = &adapter->ring_feature[RING_F_FDIR];
>>>  
>>> -	f_fdir->indices = min_t(int, num_online_cpus(), f_fdir->indices);
>>> +	f_fdir->indices = min_t(int, netif_get_num_default_rss_queues(),
>>> +				f_fdir->indices);
>>> +
>>>  	f_fdir->mask = 0;
>>>  
>>>  	/*
>>> @@ -339,8 +343,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
>>>  	if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
>>>  		return false;
>>>  
>>> -	f->indices = min_t(int, num_online_cpus(), f->indices);
>>> -
>>> +	f->indices = min_t(int, f->indices, netif_get_num_default_rss_queues());
>>>  	adapter->num_rx_queues = 1;
>>>  	adapter->num_tx_queues = 1;
>>>
>> This makes much more sense, but still needs a few minor changes.
>
>
> Well, what about this one:
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> index af1a531..0dd1e51 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> @@ -277,6 +277,7 @@ static inline bool ixgbe_set_rss_queues(struct ixgbe_adapter *adapter)
>  	bool ret = false;
>  	struct ixgbe_ring_feature *f = &adapter->ring_feature[RING_F_RSS];
>
> +	f->indices = min_t(int, netif_get_num_default_rss_queues(), f->indices);
>  	if (adapter->flags & IXGBE_FLAG_RSS_ENABLED) {
>  		f->mask = 0xF;
>  		adapter->num_rx_queues = f->indices;
> @@ -376,7 +377,7 @@ static inline bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter)
>
>  	/* Map queue offset and counts onto allocated tx queues */
>  	per_tc_q = min_t(unsigned int, dev->num_tx_queues / tcs, DCB_QUEUE_CAP);
> -	q = min_t(int, num_online_cpus(), per_tc_q);
> +	q = min_t(int, netif_get_num_default_rss_queues(), per_tc_q);
>
>  	for (i = 0; i < tcs; i++) {
>  		netdev_set_tc_queue(dev, i, q, offset);
>
Add back the bit for ixgbe_set_fcoe_queues and you should just about
have it in terms of limiting the number of RSS queues.  That bit is
valid since the FCoE queues are based directly on the RSS configuration.
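
For reference, that is the ixgbe_set_fcoe_queues hunk from the earlier
diff in this thread:

@@ -339,8 +343,7 @@ static inline bool ixgbe_set_fcoe_queues(struct ixgbe_adapter *adapter)
 	if (!(adapter->flags & IXGBE_FLAG_FCOE_ENABLED))
 		return false;
 
-	f->indices = min_t(int, num_online_cpus(), f->indices);
-
+	f->indices = min_t(int, f->indices, netif_get_num_default_rss_queues());
 	adapter->num_rx_queues = 1;
 	adapter->num_tx_queues = 1;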

One thing that just occurred to me, though, is that this is going to lock
in the upper limit for us, and we won't be able to override it if we
implement the set_channels code.  I believe the same thing goes for the
igb driver as well.

Is there any chance you could just bypass the ixgbe and igb drivers for
now and give us time to come up with a more complete solution that would
allow us to add the set_channels calls?  One issue is that I have a number
of ixgbe patches that are going to completely rewrite this code anyway,
so I could probably just add set_channels support and support for your
function once it is included in net-next.
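
For context, the set_channels interface referred to here is the ethtool
get_channels/set_channels callback pair.  A driver wiring it up would
report and accept queue counts roughly as in the sketch below; the names
example_priv, example_get_channels, example_set_channels and
EXAMPLE_MAX_QUEUES are made up for illustration and are not ixgbe code:

	#define EXAMPLE_MAX_QUEUES 16	/* made-up hardware limit */

	struct example_priv {
		unsigned int num_queues;	/* current queue count */
	};

	static void example_get_channels(struct net_device *dev,
					 struct ethtool_channels *ch)
	{
		struct example_priv *priv = netdev_priv(dev);

		ch->max_combined = EXAMPLE_MAX_QUEUES;	/* hardware limit */
		ch->combined_count = priv->num_queues;	/* current setting */
	}

	static int example_set_channels(struct net_device *dev,
					struct ethtool_channels *ch)
	{
		struct example_priv *priv = netdev_priv(dev);

		if (!ch->combined_count ||
		    ch->combined_count > EXAMPLE_MAX_QUEUES)
			return -EINVAL;

		priv->num_queues = ch->combined_count;
		/* a real driver would reconfigure its rings here */
		return 0;
	}

These would then be hooked into the driver's struct ethtool_ops via the
.get_channels and .set_channels members, letting "ethtool -L" override
whatever default the series picks.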

Thanks,

Alex

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC net-next (v2) 12/14] ixgbe: set maximal number of default RSS queues
  2012-06-26 15:55           ` Alexander Duyck
@ 2012-06-26 18:23             ` Eilon Greenstein
  0 siblings, 0 replies; 30+ messages in thread
From: Eilon Greenstein @ 2012-06-26 18:23 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: Yuval Mintz, davem, netdev, Jeff Kirsher, John Fastabend

On Tue, 2012-06-26 at 08:55 -0700, Alexander Duyck wrote:
> One thing that just occurred to me, though, is that this is going to lock
> in the upper limit for us, and we won't be able to override it if we
> implement the set_channels code.  I believe the same thing goes for the
> igb driver as well.
> 
> Is there any chance you could just bypass the ixgbe and igb drivers for
> now and give us time to come up with a more complete solution that would
> allow us to add the set_channels calls?  One issue is that I have a number
> of ixgbe patches that are going to completely rewrite this code anyway,
> so I could probably just add set_channels support and support for your
> function once it is included in net-next.

Alex,

I share your concern. I actually brought it up originally when
discussing this RFC:
http://marc.info/?l=linux-netdev&m=133991625809201&w=2

From my end, if you would like to add the Intel-side support, that would
be great.  Unless anyone objects, we will leave ixgbe and igb out of
this patch series.

Thanks,
Eilon

^ permalink raw reply	[flat|nested] 30+ messages in thread

Thread overview: 30+ messages
2012-06-25 11:45 [RFC net-next (v2) 00/14] default maximal number of RSS queues in mq drivers Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 01/14] net-next: Add netif_get_num_default_rss_queues Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 02/14] mlx4: set maximal number of default RSS queues Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 03/14] vxge: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 04/14] qlge: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 05/14] cxgb3: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 06/14] cxgb4: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 07/14] myri10ge: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 08/14] tg3: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 09/14] bnx2: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 10/14] bnx2x: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 11/14] igb: " Yuval Mintz
2012-06-25 17:31   ` Vick, Matthew
2012-06-25 18:20     ` Eilon Greenstein
2012-06-25 18:38     ` Ben Hutchings
2012-06-25 11:45 ` [RFC net-next (v2) 12/14] ixgbe: " Yuval Mintz
2012-06-25 15:44   ` Alexander Duyck
2012-06-25 17:53     ` Eilon Greenstein
2012-06-25 18:38       ` Alexander Duyck
2012-06-25 18:44         ` Ben Hutchings
2012-06-25 21:03           ` Alexander Duyck
2012-06-26 11:08         ` Yuval Mintz
2012-06-26 15:55           ` Alexander Duyck
2012-06-26 18:23             ` Eilon Greenstein
2012-06-25 11:45 ` [RFC net-next (v2) 13/14] be2net: " Yuval Mintz
2012-06-25 11:45 ` [RFC net-next (v2) 14/14] ixgbevf: " Yuval Mintz
2012-06-25 15:45   ` Alexander Duyck
2012-06-25 15:53     ` Eilon Greenstein
2012-06-25 16:33       ` Alexander Duyck
2012-06-25 18:28         ` Eilon Greenstein
