public inbox for netdev@vger.kernel.org
From: "John A. Sullivan III" <jsullivan@opensourcedevel.com>
To: John Fastabend <john.r.fastabend@intel.com>
Cc: netdev@vger.kernel.org
Subject: Re: Requeues
Date: Sat, 03 Mar 2012 05:13:57 -0500	[thread overview]
Message-ID: <1330769637.4671.193.camel@denise.theartistscloset.com> (raw)
In-Reply-To: <4F51C01C.3040208@intel.com>

On Fri, 2012-03-02 at 22:54 -0800, John Fastabend wrote:
> On 3/2/2012 9:48 PM, John A. Sullivan III wrote:
> > Hello, all.  I am seeing a small but significant number of requeues on
> > our pfifo_fast qdiscs.  I've not been able to find much on what this
> > means but the little I have implies it may be a problem with the
> > physical interfaces.  However, these are fairly high end systems with
> > Intel e1000 quad port cards.
> > 
> > We thought it might have something to do with the bonded interfaces so
> > we checked some other high end systems without bonded interfaces but the
> > same quad port cards and, lo and behold, the same small but significant
> > number of requeues.
> > 
> > Is this normal or does it indicate a problem somewhere? Thanks - John
> > 
> 
> One of two things can happen to cause the requeue counter to increment.
> 
> When the qdisc dequeues a packet, it then gets enqueued in yet another
> queue in the e1000 driver. If the qdisc is dequeueing packets faster
> than the hardware can consume them, the driver will return
> NETDEV_TX_BUSY. This causes the qdisc to 'requeue' the packet and, in
> the process, increment this counter.
> 
> The gist is that pfifo_fast dequeued a packet and tried to give it to
> e1000, which pushed back, so pfifo_fast put the packet back on the
> queue.
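For the record, a minimal sketch of checking this counter with iproute2's tc. The sample output below is canned (hypothetical numbers) so the parsing can be shown end to end; on a live system you would run `tc -s qdisc show dev eth0` directly, and eth0 is just an example device name:

```shell
# Canned sample of `tc -s qdisc show dev eth0` output (hypothetical numbers)
sample='qdisc pfifo_fast 0: root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 1833253 bytes 12045 pkt (dropped 0, overlimits 0 requeues 17)
 backlog 0b 0p requeues 17'

# Pull the requeues counter out of the "Sent ... (...)" line
requeues=$(printf '%s\n' "$sample" | sed -n 's/.*requeues \([0-9]*\)).*/\1/p')
echo "requeues: $requeues"
```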
> 
> By the way, you can tune the length of the transmit queue feeding the
> e1000 manually by playing with:
> 
> 	/sys/class/net/ethx/tx_queue_len
> 
> But you likely don't want to make it any larger, or else you'll hit the
> bufferbloat problem. I'm guessing we should add byte queue limit support
> here.
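A minimal sketch of reading and changing that knob (eth0 is an example device name; the value 500 is illustrative, and lowering it needs root):

```shell
# Read the current transmit queue length for the device
cat /sys/class/net/eth0/tx_queue_len
# To shrink it, either write the sysfs file directly (as root):
#   echo 500 > /sys/class/net/eth0/tx_queue_len
# or use iproute2:
#   ip link set dev eth0 txqueuelen 500
```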
> 
> The second way to trigger this is multiple CPUs contending for a lock.
> 
> In short, requeue counts, as long as they're not excessive, are just
> part of normal operation, so they shouldn't be a problem.
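One way to judge "excessive" is to watch how fast the counter grows rather than its absolute value. The sketch below does that with a stubbed `get_requeues` so it is self-contained and the numbers are hypothetical; on a real system the stub would instead parse `tc -s qdisc show dev eth0`:

```shell
# Stub standing in for reading the requeues counter; on a live system
# replace the body with something like:
#   tc -s qdisc show dev eth0 | sed -n 's/.*requeues \([0-9]*\)).*/\1/p' | head -1
get_requeues() { cat "$COUNTER_FILE"; }

COUNTER_FILE=$(mktemp)
echo 100 > "$COUNTER_FILE"      # counter at the start of the interval
before=$(get_requeues)
echo 117 > "$COUNTER_FILE"      # counter after the (simulated) interval
after=$(get_requeues)
delta=$((after - before))
echo "requeues delta: $delta"   # a steadily large delta is worth a look
rm -f "$COUNTER_FILE"
```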
<snip>
Thank you very much.  I was not aware of CPU contention being a cause.
We did notice the problem primarily on systems with 8 to 16 cores - John


Thread overview: 4+ messages
2012-03-03  5:48 Requeues John A. Sullivan III
2012-03-03  6:54 ` Requeues John Fastabend
2012-03-03 10:13   ` John A. Sullivan III [this message]
2012-03-09 17:54     ` Requeues Dave Taht
