From: "George B." <georgeb@gmail.com>
To: Jay Vosburgh <fubar@us.ibm.com>
Cc: netdev@vger.kernel.org
Subject: Re: Network multiqueue question
Date: Thu, 15 Apr 2010 20:54:32 -0700
Message-ID: <i2wb65cae941004152054nb301536ep1ba8a9d06cccb781@mail.gmail.com>
In-Reply-To: <21433.1271354986@death.nxdomain.ibm.com>
On Thu, Apr 15, 2010 at 11:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
> The question I have about it (and the above patch), is: what
> does multi-queue "awareness" really mean for a bonding device? How does
> allocating a bunch of TX queues help, given that the determination of
> the transmitting device hasn't necessarily been made?
Good point.
> I haven't had the chance to acquire some multi-queue network
> cards and check things out with bonding, so I'm not really sure how it
> should work. Should the bond look, from a multi-queue perspective, like
> the largest slave, or should it look like the sum of the slaves? Some
> of this may be mode-specific, as well.
I would say that setting the number of bands to the number of cores or
4, whichever is smaller, would be a good start. That is probably fine
for GigE. The multiqueue-capable network cards we have offer either 4
or 8 bands. In an ideal world the bond would expose the same number of
bands as the underlying physical ethernet devices, but changing that on
the fly whenever the set of available interfaces changes might be more
trouble than it is worth.
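Roughly what I have in mind at allocation time, as an untested sketch
(the helper name is made up, and I'm assuming the queue-count argument
to alloc_netdev_mq is the right knob for the number of bands):

#include <linux/kernel.h>	/* min_t() */
#include <linux/cpumask.h>	/* num_online_cpus() */
#include <linux/netdevice.h>	/* alloc_netdev_mq() */
#include "bonding.h"		/* struct bonding, bond_setup() */

/* Size the bond's TX bands to min(online CPUs, 4) when the device
 * is allocated; bond_alloc_dev() is only an illustrative name.
 */
static struct net_device *bond_alloc_dev(void)
{
	unsigned int bands = min_t(unsigned int, num_online_cpus(), 4);

	return alloc_netdev_mq(sizeof(struct bonding), "bond%d",
			       bond_setup, bands);
}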
Four or eight seems like a good number to start with, as I don't think
I have seen an ethernet card with fewer than 4 queues. If you have
fewer than 4 CPUs there probably isn't much utility in having more
bands than processors, or at least that utility diminishes rapidly as
the number of bands grows beyond the number of CPUs. At that point you
have probably just spent a lot of work building a bigger buffer.
I would be happy with 4 bands. I guess it just depends on where you
want the bottleneck. If you have 8 bands on the bond driver (another
reasonable alternative) and only 4 bands available for output, you
have just moved the contention down a layer to between the bond and
the ethernet driver. But I am a fan of moving the point of contention
as far away from the application interface as possible. If I have one
big lock around the bond driver and six things waiting to talk to the
network, those are six things that can't be doing anything else.
I would rather have the application handle its network task and get
back to other things. Now if you have 8 bands of bond and only 4
bands of ethernet, or even one band of ethernet, oh well. Maybe make
the number of bands a driver option, configurable from 1 to 8, that
can be set explicitly and defaults to, say, 4?
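Something along these lines, maybe (again an untested sketch; the
tx_bands parameter name and the helper are illustrative only, and a
bad value just falls back to the default):

#include <linux/module.h>	/* module_param(), MODULE_PARM_DESC() */
#include <linux/netdevice.h>	/* alloc_netdev_mq(), printk() */
#include "bonding.h"		/* struct bonding, bond_setup() */

static int tx_bands = 4;
module_param(tx_bands, int, 0444);
MODULE_PARM_DESC(tx_bands, "Number of TX bands for the bond device (1-8, default 4)");

static struct net_device *bond_alloc_dev_mq(void)
{
	/* Clamp a bad value back to the default rather than failing. */
	if (tx_bands < 1 || tx_bands > 8) {
		printk(KERN_WARNING
		       "bonding: tx_bands=%d out of range, using 4\n",
		       tx_bands);
		tx_bands = 4;
	}

	return alloc_netdev_mq(sizeof(struct bonding), "bond%d",
			       bond_setup, tx_bands);
}

Then something like "modprobe bonding tx_bands=8" would pick the band
count at load time, and per-bond tuning could be revisited later if it
turns out to matter.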
Thanks for taking the time to answer.
George