From: Sathya Perla <sathya.perla@emulex.com>
To: <netdev@vger.kernel.org>
Subject: [PATCH next 3/6] be2net: Create single TXQ on BE3-R 1G ports
Date: Tue, 1 Oct 2013 15:59:58 +0530
Message-ID: <1380623401-15630-4-git-send-email-sathya.perla@emulex.com>
In-Reply-To: <1380623401-15630-1-git-send-email-sathya.perla@emulex.com>

From: Vasundhara Volam <vasundhara.volam@emulex.com>

On BE3-R 1G ports (identified by port numbers 2 and 3) the FW cannot properly
support multiple TXQs, so create only a single TXQ on such ports. This also
makes the number of RX and TX queues symmetric, as only a single RXQ is
available on 1G ports.
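
A minimal sketch of the resulting policy (illustration only, not part of
the patch; the helper name below is hypothetical, and the 0-based port
numbering under which the 1G ports satisfy port_num > 1 is inferred from
the check in the diff):

	/* Hypothetical helper, not in the driver: true on a BE3-R 1G port.
	 * Assumes 0-based numbering, i.e. the 1G ports are port_num 2 and 3,
	 * matching the adapter->port_num > 1 test added below.
	 */
	static inline bool be3r_1g_port(const struct be_adapter *adapter)
	{
		return adapter->port_num > 1;	/* ports 2 and 3 */
	}

With such a predicate, the TXQ sizing below reduces to capping
res->max_tx_qs at 1 whenever the function sits on a 1G port.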

Signed-off-by: Vasundhara Volam <vasundhara.volam@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
---
 drivers/net/ethernet/emulex/benet/be_main.c |   11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 4cb2ac7..be12987 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -2967,8 +2967,9 @@ static void BEx_get_resources(struct be_adapter *adapter,
 		res->max_vlans = BE_NUM_VLANS_SUPPORTED;
 	res->max_mcast_mac = BE_MAX_MC;
 
+	/* For BE3 1Gb ports, F/W does not properly support multiple TXQs */
 	if (BE2_chip(adapter) || use_sriov || be_is_mc(adapter) ||
-	    !be_physfn(adapter))
+	    !be_physfn(adapter) || (adapter->port_num > 1))
 		res->max_tx_qs = 1;
 	else
 		res->max_tx_qs = BE3_MAX_TX_QS;
@@ -3010,14 +3011,6 @@ static int be_get_resources(struct be_adapter *adapter)
 		adapter->res = res;
 	}
 
-	/* For BE3 only check if FW suggests a different max-txqs value */
-	if (BE3_chip(adapter)) {
-		status = be_cmd_get_profile_config(adapter, &res, 0);
-		if (!status && res.max_tx_qs)
-			adapter->res.max_tx_qs =
-				min(adapter->res.max_tx_qs, res.max_tx_qs);
-	}
-
 	/* For Lancer, SH etc read per-function resource limits from FW.
 	 * GET_FUNC_CONFIG returns per function guaranteed limits.
 	 * GET_PROFILE_CONFIG returns PCI-E related limits PF-pool limits
-- 
1.7.1
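
As a side note, the effective TX queue count can be sanity-checked on a
running system by counting the per-queue tx-* directories the networking
core exposes in sysfs (generic kernel behaviour on kernels of this era,
not be2net-specific; the eth0 default below is a placeholder):

	/* count_txqs.c: count tx-* queue directories for an interface */
	#include <dirent.h>
	#include <stdio.h>
	#include <string.h>

	int main(int argc, char **argv)
	{
		const char *ifname = argc > 1 ? argv[1] : "eth0"; /* placeholder */
		char path[256];
		struct dirent *de;
		int n = 0;
		DIR *dir;

		snprintf(path, sizeof(path), "/sys/class/net/%s/queues", ifname);
		dir = opendir(path);
		if (!dir) {
			perror(path);
			return 1;
		}
		while ((de = readdir(dir)))
			if (!strncmp(de->d_name, "tx-", 3))
				n++;	/* one tx-<i> directory per TX queue */
		closedir(dir);
		printf("%s: %d TX queue(s)\n", ifname, n);
		return 0;
	}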
