netdev.vger.kernel.org archive mirror
From: James Chapman <jchapman@katalix.com>
To: Chris Friesen <cfriesen@nortel.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: questions on NAPI processing latency and dropped network packets
Date: Thu, 10 Jan 2008 18:25:43 +0000	[thread overview]
Message-ID: <47866327.8090905@katalix.com> (raw)
In-Reply-To: <478654C3.60806@nortel.com>

Chris Friesen wrote:
> Hi all,
> 
> I've got an issue that's popped up with a deployed system running
> 2.6.10.  I'm looking for some help figuring out why incoming network
> packets aren't being processed fast enough.
> 
> After a recent userspace app change, we've started seeing packets being
> dropped by the ethernet hardware (e1000, NAPI is enabled).

What's changed in your application? Any real-time threads in there?

From the top output below, it looks like SigtranServices is consuming all
your CPU...

> The
> error/dropped/fifo counts are going up in ethtool:
> 
>      rx_packets: 32180834
>      rx_bytes: 5480756958
>      rx_errors: 862506
>      rx_dropped: 771345
>      rx_length_errors: 0
>      rx_over_errors: 0
>      rx_crc_errors: 0
>      rx_frame_errors: 0
>      rx_fifo_errors: 91161
>      rx_missed_errors: 91161
> 
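Incidentally, watching just those two counters makes it easy to see whether any tweak actually changes the drop rate:

```shell
# Sample the overflow counters once a second; rx_missed_errors counts
# frames the MAC dropped because no rx descriptors were available.
watch -n 1 'ethtool -S eth0 | grep -E "rx_(fifo|missed)_errors"'
```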
> This link is receiving roughly 13K packets/sec, and we're dropping
> roughly 51 packets/sec due to fifo errors.
> 
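A quick back-of-the-envelope check, taking the ~13K pkts/sec figure from above: the rx ring only buys you ring_size / packet_rate of softirq latency before the FIFO overflows:

```shell
# How long can NAPI polling stall before the rx FIFO overflows?
# stall budget = ring_size / packet_rate
awk 'BEGIN {
    printf "256 descriptors:  %.1f ms\n", 256 / 13000 * 1000
    printf "3000 descriptors: %.1f ms\n", 3000 / 13000 * 1000
}'
```

So with the default 256 descriptors, any ~20 ms stall in NAPI polling drops packets, while ~3000 descriptors tolerates roughly 230 ms — consistent with the larger ring merely masking a scheduling-latency problem.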
> Increasing the rx descriptor ring size from 256 up to around 3000 or so
> seems to make the problem stop, but it seems to me that this is just a
> workaround for the latency in processing the incoming packets.
> 
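For reference, a sketch of the runtime ring resize (interface name and size taken from your mail; the maximum supported depends on the e1000 part):

```shell
# Show current and maximum rx/tx descriptor ring sizes
ethtool -g eth0
# Raise the rx ring as described; the driver rounds to a supported value
ethtool -G eth0 rx 3000
```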
> So, I'm looking for some suggestions on how to fix this or to figure out
> where the latency is coming from.
> 
> Some additional information:
> 
> 
> 1) Interrupts are being processed on both cpus:
> 
> root@base0-0-0-13-0-11-1:/root> cat /proc/interrupts
>            CPU0       CPU1
>  30:    1703756    4530785  U3-MPIC Level     eth0
> 
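Since the same IRQ is bouncing between both CPUs, one thing worth trying (a sketch — IRQ number 30 taken from your /proc/interrupts output) is pinning eth0's interrupt to a single CPU so the NAPI poll stays cache-warm:

```shell
# Restrict IRQ 30 (eth0) to CPU0; the value is a hex bitmap of allowed CPUs
echo 1 > /proc/irq/30/smp_affinity
cat /proc/irq/30/smp_affinity
```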
> 
> 
> 
> 2) "top" shows a fair amount of time processing softirqs, but very
> little time in ksoftirqd (or is that a sampling artifact?).
> 
> 
> Tasks: 79 total, 1 running, 78 sleeping, 0 stopped, 0 zombie
> Cpu0: 23.6% us, 30.9% sy, 0.0% ni, 36.9% id, 0.0% wa, 0.3% hi, 8.3% si
> Cpu1: 30.4% us, 24.1% sy, 0.0% ni, 5.9% id, 0.0% wa, 0.7% hi, 38.9% si
> Mem:  4007812k total, 2199148k used,  1808664k free,     0k buffers
> Swap:   0k total,       0k used,      0k free,   219844k cached
> 
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  5375 root      15   0 2682m 1.8g 6640 S 99.9 46.7  31:17.68 SigtranServices
>  7696 root      17   0  6952 3212 1192 S  7.3  0.1   0:15.75 schedmon.ppc210
>  7859 root      16   0  2688 1228  964 R  0.7  0.0   0:00.04 top
>  2956 root       8  -8 18940 7436 5776 S  0.3  0.2   0:01.35 blademtc
>     1 root      16   0  1660  620  532 S  0.0  0.0   0:30.62 init
>     2 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/0
>     3 root      15   0     0    0    0 S  0.0  0.0   0:00.55 ksoftirqd/0
>     4 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/1
>     5 root      15   0     0    0    0 S  0.0  0.0   0:00.43 ksoftirqd/1
> 
> 
> 3) /proc/sys/net/core/netdev_max_backlog is set to the default of 300
> 
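If I remember right, on 2.6.10 net_rx_action() uses netdev_max_backlog as the total packet budget per softirq run (the separate netdev_budget sysctl came later), so raising it may let each softirq drain more of the ring. A sketch, with an experimental rather than tuned value:

```shell
# Check the current value, then raise it for the experiment
sysctl net.core.netdev_max_backlog
sysctl -w net.core.netdev_max_backlog=1000
```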
> 
> So...anyone have any ideas/suggestions?
> 
> Thanks,
> 
> Chris

-- 
James Chapman
Katalix Systems Ltd
http://www.katalix.com
Catalysts for your Embedded Linux software development



Thread overview: 37+ messages
2008-01-10 17:24 questions on NAPI processing latency and dropped network packets Chris Friesen
2008-01-10 17:37 ` Kok, Auke
2008-01-10 18:12   ` Chris Friesen
2008-01-10 18:26     ` Kok, Auke
2008-01-10 18:25 ` James Chapman [this message]
2008-01-10 21:29   ` Chris Friesen
2008-01-10 18:41 ` Rick Jones
2008-01-10 19:01   ` Kok, Auke
2008-01-11  1:20 ` David Miller
2008-01-11 14:59   ` Chris Friesen
2008-01-11 22:29     ` Herbert Xu
2008-01-12  1:53     ` David Miller
2008-01-14 15:58       ` Chris Friesen
2008-01-15  7:19         ` Jarek Poplawski
2008-01-15 14:47           ` Chris Friesen
2008-01-15 15:17             ` Radoslaw Szkodzinski
2008-01-15 17:14               ` Chris Friesen
2008-01-15 17:23                 ` Eric Dumazet
2008-01-15 20:29             ` Jarek Poplawski
2008-01-16  0:17               ` Herbert Xu
2008-01-16  6:58                 ` Jarek Poplawski
2008-01-16 20:04                   ` Willy Tarreau
2008-01-16 22:42                     ` Jarek Poplawski
2008-01-12  5:37 ` Ray Lee
2008-01-14 15:49   ` Chris Friesen
2008-01-14 16:56     ` Eric Dumazet
2008-01-14 19:25       ` Chris Friesen
2008-01-14 19:33         ` Eric Dumazet
2008-01-14 20:02           ` Chris Friesen
2008-01-15 15:09             ` Vlad Yasevich
2008-01-21 19:53 ` Chris Friesen
2008-01-21 21:11   ` Ben Greear
2008-01-21 23:15     ` Chris Friesen
2008-01-21 23:32       ` Ben Greear
2008-01-21 21:31   ` Eric Dumazet
2008-01-21 23:25     ` Chris Friesen
2008-01-22  5:46       ` Eric Dumazet
