public inbox for netdev@vger.kernel.org
From: John Fastabend <john.r.fastabend@intel.com>
To: "John A. Sullivan III" <jsullivan@opensourcedevel.com>
Cc: netdev@vger.kernel.org
Subject: Re: Requeues
Date: Fri, 02 Mar 2012 22:54:20 -0800	[thread overview]
Message-ID: <4F51C01C.3040208@intel.com> (raw)
In-Reply-To: <1330753728.4671.190.camel@denise.theartistscloset.com>

On 3/2/2012 9:48 PM, John A. Sullivan III wrote:
> Hello, all.  I am seeing a small but significant number of requeues on
> our pfifo_fast qdiscs.  I've not been able to find much on what this
> means but the little I have implies it may be a problem with the
> physical interfaces.  However, these are fairly high end systems with
> Intel e1000 quad port cards.
> 
> We thought it might have something to do with the bonded interfaces so
> we checked some other high end systems without bonded interfaces but the
> same quad port cards and, lo and behold, the same small but significant
> number of requeues.
> 
> Is this normal or does it indicate a problem somewhere? Thanks - John
> 

Two things can cause the requeue counter to increment.

First, when the qdisc dequeues a packet it is handed to the e1000 driver,
which enqueues it on yet another queue: the hardware transmit ring. If the
qdisc is dequeueing packets faster than the hardware can consume them, the
driver returns NETDEV_TX_BUSY. This causes the qdisc to 'requeue' the
packet and, in the process, increment this counter.

The gist is pfifo_fast dequeued a packet and tried to give it to e1000,
which pushed back, so pfifo_fast put the packet back on its queue.

By the way, you can tune the length of the device transmit queue (which
pfifo_fast also uses as its packet limit) by playing with:

	/sys/class/net/ethX/tx_queue_len

(the hardware descriptor rings themselves are sized with 'ethtool -G ethX
tx N'). But you likely don't want to make them any larger or you'll hit
the bufferbloat problem. I'm guessing we should add byte queue limit (BQL)
support here.

The second way to trigger this is multiple CPUs contending for the device
transmit lock: the CPU that loses the race requeues its packet rather
than spin waiting.

In short, requeue counts, as long as they are not excessive, are just part
of normal operation, so they shouldn't be a problem.

Thanks,
John


Thread overview: 4+ messages
2012-03-03  5:48 Requeues John A. Sullivan III
2012-03-03  6:54 ` John Fastabend [this message]
2012-03-03 10:13   ` Requeues John A. Sullivan III
2012-03-09 17:54     ` Requeues Dave Taht
