netdev.vger.kernel.org archive mirror
* noqueue on bonding devices
From: Simon Horman @ 2010-07-28  8:32 UTC
  To: Jay Vosburgh; +Cc: netdev

Hi Jay, Hi All,

I would just like to wonder out loud whether it is intentional that
bonding devices default to noqueue, whereas, for instance, ethernet
devices default to pfifo_fast with a qlen of 1000.

The reason that I ask is that, when setting up some bandwidth
control using tc, I encountered some strange behaviour which
I eventually tracked down to the queue length of the qdiscs being 1p -
inherited from noqueue - as opposed to the 1000p that would occur
on an ethernet device.

It's trivial to work around, either by altering the txqueuelen on
the bonding device before adding the qdisc or by manually setting
the qlen of the qdisc. But it did take us a while to determine the
cause of the problem we were seeing, and as it seems inconsistent
I'm interested to know why this is the case.
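
For reference, the two workarounds look roughly like this (bond0,
the handles and the rates are purely illustrative):

    # Either give the bonding device a transmit queue length before
    # attaching the qdisc, so the default per-class queues pick up
    # a sane limit ...
    ip link set dev bond0 txqueuelen 1000
    tc qdisc add dev bond0 root handle 1: htb default 10
    tc class add dev bond0 parent 1: classid 1:10 htb rate 100mbit

    # ... or set the queue length explicitly on the leaf qdisc.
    tc qdisc add dev bond0 parent 1:10 handle 10: pfifo limit 1000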

On an unrelated note, MAINTAINERS lists bonding-devel@lists.sourceforge.net,
but the (recent) archives seem to be entirely spam.  Is the MAINTAINERS
file correct?



* Re: noqueue on bonding devices
From: Jay Vosburgh @ 2010-07-28 17:37 UTC
  To: Simon Horman; +Cc: netdev

Simon Horman <horms@verge.net.au> wrote:

>Hi Jay, Hi All,
>
>I would just like to wonder out loud whether it is intentional that
>bonding devices default to noqueue, whereas, for instance, ethernet
>devices default to pfifo_fast with a qlen of 1000.

	Yes, it is.

>The reason that I ask is that, when setting up some bandwidth
>control using tc, I encountered some strange behaviour which
>I eventually tracked down to the queue length of the qdiscs being 1p -
>inherited from noqueue - as opposed to the 1000p that would occur
>on an ethernet device.
>
>It's trivial to work around, either by altering the txqueuelen on
>the bonding device before adding the qdisc or by manually setting
>the qlen of the qdisc. But it did take us a while to determine the
>cause of the problem we were seeing, and as it seems inconsistent
>I'm interested to know why this is the case.

	Software-only virtual devices (loopback, bonding, bridge, vlan,
etc) typically have no transmit queue because, well, the device does no
queueing.  Meaning that there is no flow control infrastructure in the
software device; bonding, et al, won't ever flow control (call
netif_stop_queue to temporarily suspend transmit) or accumulate packets
on a transmit queue.
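
	To make the difference concrete (the device names and the
exact output below are only illustrative):

	tc qdisc show dev eth0    # e.g. "qdisc pfifo_fast 0: root ..."
	tc qdisc show dev bond0   # e.g. "qdisc noqueue 0: root", or nothing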

	Hardware ethernet devices set a queue length because it is
meaningful for them to do so.  When their hardware transmit ring fills
up, they will assert flow control, and stop accepting new packets for
transmit.  Packets then accumulate in the software transmit queue, and
when the device unblocks, those packets are ready to go.  When under
continuous load, hardware network devices typically free up ring entries
in blocks (not one at a time), so the software transmit queue helps to
smooth out the chunkiness of the hardware driver's processing, minimize
dropped packets, etc.
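
	Both halves of that are visible on a busy hardware NIC (eth0
here is just an example):

	ethtool -g eth0            # the fixed hardware RX/TX ring sizes
	tc -s qdisc show dev eth0  # backlog/drops on the software queue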

	It's certainly possible to add a queue and qdisc to a bonding
device, and is reasonable to do if you want to do packet scheduling with
tc and friends.  In this case, the queue is really just for the tc
actions to connect to; the queue won't accumulate packets on account of
the driver (but could if the scheduler, e.g., rate limits).
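
	As a sketch, a minimal rate limit on a bond might look like
this (bond0 and the numbers are made up); any packets that pile up
here do so because tbf is holding them back, not because the bonding
driver stopped the queue:

	tc qdisc add dev bond0 root tbf rate 50mbit burst 32kb latency 50ms
	tc -s qdisc show dev bond0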

>On an unrelated note, MAINTAINERS lists bonding-devel@lists.sourceforge.net,
>but the (recent) archives seem to be entirely spam.  Is the MAINTAINERS
>file correct?

	Yah, I should probably change that; the spam is pretty heavy,
and there isn't much I can do to limit it.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com


* Re: noqueue on bonding devices
From: Simon Horman @ 2010-07-28 23:42 UTC
  To: Jay Vosburgh; +Cc: netdev

On Wed, Jul 28, 2010 at 10:37:56AM -0700, Jay Vosburgh wrote:
> Simon Horman <horms@verge.net.au> wrote:
> 
> >Hi Jay, Hi All,
> >
> >I would just like to wonder out loud whether it is intentional that
> >bonding devices default to noqueue, whereas, for instance, ethernet
> >devices default to pfifo_fast with a qlen of 1000.
> 
> 	Yes, it is.
> 
> >The reason that I ask is that, when setting up some bandwidth
> >control using tc, I encountered some strange behaviour which
> >I eventually tracked down to the queue length of the qdiscs being 1p -
> >inherited from noqueue - as opposed to the 1000p that would occur
> >on an ethernet device.
> >
> >It's trivial to work around, either by altering the txqueuelen on
> >the bonding device before adding the qdisc or by manually setting
> >the qlen of the qdisc. But it did take us a while to determine the
> >cause of the problem we were seeing, and as it seems inconsistent
> >I'm interested to know why this is the case.
> 
> 	Software-only virtual devices (loopback, bonding, bridge, vlan,
> etc) typically have no transmit queue because, well, the device does no
> queueing.  Meaning that there is no flow control infrastructure in the
> software device; bonding, et al, won't ever flow control (call
> netif_stop_queue to temporarily suspend transmit) or accumulate packets
> on a transmit queue.
> 
> 	Hardware ethernet devices set a queue length because it is
> meaningful for them to do so.  When their hardware transmit ring fills
> up, they will assert flow control, and stop accepting new packets for
> transmit.  Packets then accumulate in the software transmit queue, and
> when the device unblocks, those packets are ready to go.  When under
> continuous load, hardware network devices typically free up ring entries
> in blocks (not one at a time), so the software transmit queue helps to
> smooth out the chunkiness of the hardware driver's processing, minimize
> dropped packets, etc.
> 
> 	It's certainly possible to add a queue and qdisc to a bonding
> device, and is reasonable to do if you want to do packet scheduling with
> tc and friends.  In this case, the queue is really just for the tc
> actions to connect to; the queue won't accumulate packets on account of
> the driver (but could if the scheduler, e.g., rate limits).

Thanks for the detailed explanation, much appreciated.

