From: Vernon Mauery <vernux@us.ibm.com>
To: Eilon Greenstein <eilong@broadcom.com>
Cc: Andi Kleen <andi@firstfloor.org>, netdev <netdev@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
rt-users <linux-rt-users@vger.kernel.org>
Subject: Re: High contention on the sk_buff_head.lock
Date: Wed, 18 Mar 2009 14:51:16 -0700
Message-ID: <49C16CD4.3010708@us.ibm.com>
In-Reply-To: <1237412732.29116.2.camel@lb-tlvb-eliezer>
Eilon Greenstein wrote:
> On Wed, 2009-03-18 at 14:07 -0700, Vernon Mauery wrote:
>>> The real "fix" would probably be to use a multi-queue-capable NIC
>>> and a NIC driver that sets up multiple queues for TX (normally they
>>> only do so for RX). Then each core, or set of cores (often the number
>>> of cores is larger than the number of NIC queues), could avoid this
>>> problem. Disadvantage: more memory use.
>> Hmmm. So does either the netxen_nic or bnx2x driver support multiple
>> queues? (That is the HW that I have access to right now.) And do I
>> need to do anything to set them up?
>>
> The version of bnx2x in net-next supports multiple Tx queues (and Rx). It
> will open an equal number of Tx and Rx queues, up to 16 or the number of
> cores in the system, whichever is smaller. You can validate that all queues
> are transmitting with "ethtool -S", which has per-queue statistics in that
> version.
Thanks. I will test to see how this affects the lock contention the
next time the Broadcom hardware is available.
--Vernon
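
To make the multi-queue Tx idea above concrete: the driver advertises several
Tx queues, the stack then keeps an independent qdisc (and qdisc lock) per
queue, and flows are hashed across the queues so different cores rarely fight
over the same lock. Below is a minimal kernel-style C sketch of that setup.
It is written against a current kernel API rather than the 2.6.29-era bnx2x
sources; the 16-queue cap follows Eilon's description, while the module name
and the demo_pick_tx_queue() helper are illustrative assumptions, not code
from this thread.

    /*
     * Minimal sketch of a multi-Tx-queue netdev setup (illustrative only;
     * the real bnx2x driver differs).  One Tx queue per core, capped at 16
     * as described in the thread, so each core can use its own queue and
     * qdisc lock instead of all cores serializing on a single queue lock.
     */
    #include <linux/module.h>
    #include <linux/cpumask.h>
    #include <linux/netdevice.h>
    #include <linux/etherdevice.h>
    #include <linux/skbuff.h>

    #define DEMO_MAX_TX_QUEUES	16

    struct demo_priv {
    	/* per-queue Tx ring state would live here */
    };

    /* Hash a flow onto one Tx queue so distinct flows (and usually distinct
     * cores) land on distinct queue locks.  Shown for illustration; a real
     * driver would wire something like this into its queue-selection hook. */
    static u16 __maybe_unused demo_pick_tx_queue(struct net_device *dev,
    					      struct sk_buff *skb)
    {
    	return (u16)(skb_get_hash(skb) % dev->real_num_tx_queues);
    }

    static int __init demo_init(void)
    {
    	unsigned int nq = min_t(unsigned int, DEMO_MAX_TX_QUEUES,
    				num_online_cpus());
    	struct net_device *dev;
    	int err;

    	/* Allocate a netdev with nq Tx queue slots ... */
    	dev = alloc_etherdev_mq(sizeof(struct demo_priv), nq);
    	if (!dev)
    		return -ENOMEM;

    	/* ... and tell the stack how many are really usable, so
    	 * dev_queue_xmit() spreads skbs over nq independent qdiscs. */
    	err = netif_set_real_num_tx_queues(dev, nq);
    	if (err) {
    		free_netdev(dev);
    		return err;
    	}

    	/*
    	 * A real driver would set dev->netdev_ops, fill in the MAC
    	 * address and call register_netdev(dev) here; this sketch only
    	 * shows the queue accounting, so it tears down immediately.
    	 */
    	free_netdev(dev);
    	return 0;
    }

    static void __exit demo_exit(void)
    {
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("multi-Tx-queue setup sketch");

With a layout like this, the per-queue counters from "ethtool -S" that Eilon
mentions are the quickest way to confirm that transmit traffic is actually
spreading across the queues rather than piling onto one of them.
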
Thread overview: 40+ messages
2009-03-18 17:24 High contention on the sk_buff_head.lock Vernon Mauery
2009-03-18 19:07 ` Eric Dumazet
2009-03-18 20:17 ` Vernon Mauery
2009-03-20 23:29 ` Jarek Poplawski
2009-03-23 8:32 ` Eric Dumazet
2009-03-23 8:37 ` David Miller
2009-03-23 8:50 ` Jarek Poplawski
2009-04-02 14:13 ` Herbert Xu
2009-04-02 14:15 ` Herbert Xu
2009-03-18 20:54 ` Andi Kleen
2009-03-18 21:03 ` David Miller
2009-03-18 21:10 ` Vernon Mauery
2009-03-18 21:38 ` David Miller
2009-03-18 21:49 ` Vernon Mauery
2009-03-19 1:02 ` David Miller
2009-03-18 21:54 ` Gregory Haskins
2009-03-19 1:03 ` David Miller
2009-03-19 1:13 ` Sven-Thorsten Dietrich
2009-03-19 1:17 ` David Miller
2009-03-19 1:43 ` Sven-Thorsten Dietrich
2009-03-19 1:54 ` David Miller
2009-03-19 5:49 ` Eric Dumazet
2009-03-19 5:58 ` David Miller
2009-03-19 14:04 ` [PATCH] net: reorder struct Qdisc for better SMP performance Eric Dumazet
2009-03-20 8:33 ` David Miller
2009-03-19 13:45 ` High contention on the sk_buff_head.lock Andi Kleen
2009-03-19 3:48 ` Gregory Haskins
2009-03-19 5:38 ` David Miller
2009-03-19 12:42 ` Gregory Haskins
2009-03-19 20:52 ` David Miller
2009-03-19 12:50 ` Peter W. Morreale
2009-03-19 7:15 ` Evgeniy Polyakov
2009-03-18 21:07 ` Vernon Mauery
2009-03-18 21:45 ` Eilon Greenstein
2009-03-18 21:51 ` Vernon Mauery [this message]
2009-03-18 21:59 ` Andi Kleen
2009-03-18 22:19 ` Rick Jones
2009-03-19 12:59 ` Peter W. Morreale
2009-03-19 13:36 ` Peter W. Morreale
2009-03-19 13:46 ` Andi Kleen