From: Jay Vosburgh <fubar@us.ibm.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: "George B." <georgeb@gmail.com>, netdev@vger.kernel.org
Subject: Re: Network multiqueue question
Date: Thu, 15 Apr 2010 11:09:46 -0700
Message-ID: <21433.1271354986@death.nxdomain.ibm.com>
In-Reply-To: <1271353637.16881.2846.camel@edumazet-laptop>
Eric Dumazet <eric.dumazet@gmail.com> wrote:
>Le jeudi 15 avril 2010 à 09:58 -0700, George B. a écrit :
>> I am in need of a little education on multiqueue and was wondering if
>> someone here might be able to help me.
>>
>> Given intel igb network driver, it appears I can do something like:
>>
>> tc qdisc add dev eth0 root handle 1: multiq
>>
>> which works and reports 4 bands: dev eth0 root refcnt 4 bands 4/4
>>
>> But our network is a little more complicated. Above the ethernet we
>> have the bonding driver which is using mode 2 bonding with two
>> ethernet slaves. Then we have vlans on the bond interface. Our
>> production traffic is on a vlan and resource contention is an issue as
>> these are busy machines.
>>
>> It is my understanding that the vlan driver became multiqueue aware in
>> 2.6.32 (we are currently using 2.6.31).
>>
>> It would seem that the first thing the kernel would encounter with
>> traffic headed out would be the vlan interface, and then the bond
>> interface, and then the physical ethernet interface. Is that correct?
>> So with my kernel, I would seem to get no utility from multiq on the
>> ethernet interface if the vlan interface is going to be a
>> single-threaded bottleneck. What about the bond driver? Is it
>> currently multiqueue aware?
>>
>> I am trying to get some sort of logical picture of how all these things
>> interact with each other to get things a little more efficient and
>> reduce resource contention in the application while still trying to be
>> efficient in use of network ports/interfaces.
>>
>> If someone feels up to the task of sending a little education my way,
>> I would be most appreciative. There doesn't seem to be a whole lot of
>> documentation floating around about multiqueue other than a blurb of
>> text in the kernel and David's presentation of last year.
>
>Hi George
>
>Vlan is multiqueue aware, but bonding unfortunately is not at this
>moment.
>
>We could make it 'multiqueue' (a patch was submitted by Oleg A.
>Arkhangelsky a while ago), but bonding's xmit routine needs to take a
>central lock, shared by all queues, so it won't be very efficient...
The lock is a read lock, so theoretically it should be possible
to enter the bonding transmit function on multiple CPUs at the same
time. The lock may thrash around, though.
>Since this bothers me a bit, I will probably work on this in a near
>future. (adding real multiqueue capability and RCU to bonding fast
>paths)
>
>Ref: http://permalink.gmane.org/gmane.linux.network/152987
The question I have about it (and the above patch), is: what
does multi-queue "awareness" really mean for a bonding device? How does
allocating a bunch of TX queues help, given that the determination of
the transmitting device hasn't necessarily been made?
I haven't had the chance to acquire some multi-queue network
cards and check things out with bonding, so I'm not really sure how it
should work. Should the bond look, from a multi-queue perspective, like
the largest slave, or should it look like the sum of the slaves? Some
of this may be mode-specific, as well.
-J
---
-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com
Thread overview: 8+ messages
2010-04-15 16:58 Network multiqueue question George B.
2010-04-15 17:47 ` Eric Dumazet
2010-04-15 18:09 ` Jay Vosburgh [this message]
2010-04-15 18:41 ` Eric Dumazet
2010-04-16 3:54 ` George B.
2010-04-16 4:00 ` George B.
2010-04-16 4:53 ` Eric Dumazet
2010-04-16 7:28 ` George B.