From: Eric Dumazet <dada1@cosmosbay.com>
To: Willy Tarreau <w@1wt.eu>
Cc: Matthias Saou
	<thias@spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net>,
	linux-kernel@vger.kernel.org,
	Linux Netdev List <netdev@vger.kernel.org>
Subject: Re: Wrong network usage reported by /proc
Date: Tue, 05 May 2009 07:22:16 +0200
Message-ID: <49FFCD08.7050600@cosmosbay.com>
In-Reply-To: <20090505050435.GK570@1wt.eu>

Willy Tarreau wrote:
> On Mon, May 04, 2009 at 09:11:51PM +0200, Matthias Saou wrote:
>> Eric Dumazet wrote:
>>
>>> Matthias Saou wrote:
>>>> Hi,
>>>>
>>>> I'm posting here as a last resort. I've got lots of heavily used RHEL5
>>>> servers (2.6.18 based) that are reporting all sorts of impossible
>>>> network usage values through /proc, leading to unrealistic snmp/cacti
>>>> graphs where the outgoing bandwidth used is higher than the physical
>>>> interface's maximum speed.
>>>>
>>>> For some details and a test script which compares values from /proc
>>>> with values from tcpdump:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=489541
>>>>
>>>> The values collected using tcpdump always seem realistic and match the
>>>> values seen on the remote network equipments. So my obvious conclusion
>>>> (but possibly wrong given my limited knowledge) is that something is
>>>> wrong in the kernel, since it's the one exposing the /proc interface.
>>>>
>>>> I've reproduced what seems to be the same problem on recent kernels,
>>>> including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
>>>> simple python script available here makes it quite easy to see:
>>>> https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
>>>>
>>>>  * I run the script on my workstation, which has an FTP server enabled
>>>>  * I download a DVD ISO from a remote workstation: the values match
>>>>  * I start ping floods from remote workstations: the values reported
>>>>    by /proc are much higher than the ones reported by tcpdump. I used
>>>>    "ping -s 500 -f myworkstation" from two remote workstations
>>>>
>>>> If there's anything flawed in my debugging, I'd love to have someone
>>>> point it out to me. TIA to anyone willing to have a look.
>>>>
>>>> Matthias
>>>>
>>> I could not reproduce this here... what kind of NIC are you using on
>>> the affected systems? Some ethernet drivers report stats from the card
>>> itself, and I remember seeing some strange stats on some hardware, but
>>> I cannot remember which one it was (we were reading null values instead
>>> of the real ones once in a while; maybe it was a firmware issue...)
>> My workstation has a Broadcom BCM5752 (tg3 module). The servers which
>> are most affected have Intel 82571EB (e1000e). But the issue is that
>> with /proc, the values are a lot _higher_ than with tcpdump, and the
>> tcpdump values seem to be the correct ones.
> 
> The e1000 chip reports its stats every 2 seconds, so you have to collect
> them every 2 seconds as well, otherwise you get "camel-looking" stats.
> 
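
(For reference, a minimal sampling loop along those lines might look like
the sketch below. This is not from the thread: the interface name "eth0"
and the 2-second period are illustrative, and the parsing assumes the
usual /proc/net/dev layout where TX bytes is the ninth counter field.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return the TX byte counter for @ifname from /proc/net/dev, 0 on error. */
static unsigned long long read_tx_bytes(const char *ifname)
{
	FILE *f = fopen("/proc/net/dev", "r");
	char line[512];
	unsigned long long tx_bytes = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		char *p = strchr(line, ':');	/* only interface lines have ':' */

		if (!p)
			continue;
		*p = '\0';
		if (strcmp(line + strspn(line, " "), ifname))
			continue;
		/* after ':' come 8 RX fields, then TX bytes */
		sscanf(p + 1, "%*u %*u %*u %*u %*u %*u %*u %*u %llu",
		       &tx_bytes);
		break;
	}
	fclose(f);
	return tx_bytes;
}

int main(void)
{
	unsigned long long before, after;

	before = read_tx_bytes("eth0");
	sleep(2);		/* match the chip's 2-second update period */
	after = read_tx_bytes("eth0");
	printf("tx rate: %llu bytes/s\n", (after - before) / 2);
	return 0;
}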

I looked at the e1000e driver, and apparently tx_packets & tx_bytes are
computed by the TX completion routine, not by the chip:

static bool e1000_clean_tx_irq(struct e1000_adapter *adapter)
{
...

                        if (cleaned) {
                                struct sk_buff *skb = buffer_info->skb;
                                unsigned int segs, bytecount;
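                                /* "?:" is a GCC extension: use gso_segs
                                 * when non-zero, else 1 (plain skb) */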
                                segs = skb_shinfo(skb)->gso_segs ?: 1;
                                /* multiply data chunks by size of headers */
                                bytecount = ((segs - 1) * skb_headlen(skb)) +
                                            skb->len;
// maybe bytecount is wrong on some skbs ?
                                total_tx_packets += segs;
                                total_tx_bytes += bytecount;
                        }

...
    adapter->net_stats.tx_bytes += total_tx_bytes;
    adapter->net_stats.tx_packets += total_tx_packets;
...
}
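
For what it's worth, here is how that bytecount formula behaves on a TSO
skb (the numbers below are made up for illustration):

/* Hypothetical TSO skb: skb->len = 65226, skb_headlen(skb) = 66,
 * gso_segs = 45.
 *
 *	bytecount = (45 - 1) * 66 + 65226 = 68130
 *
 * The headers are counted once per on-wire segment, so tx_bytes tracks
 * wire bytes rather than the sum of skb->len. If skb_headlen() ever
 * covers more than the replicated headers (payload sitting in the
 * linear area, say), this would over-count.
 */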

and the driver's get_stats() simply returns this adapter->net_stats
structure to the caller:

static struct net_device_stats *e1000_get_stats(struct net_device *netdev) 
{
	struct e1000_adapter *adapter = netdev_priv(netdev);

	/* only return the current stats */
	return &adapter->net_stats;
}

Could be converted to use netdev->stats... Oh well...
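
(A rough, untested sketch of what that conversion could look like: if the
TX/RX cleanup paths bumped netdev->stats directly instead of the private
adapter->net_stats copy, the accessor would shrink to:)

static struct net_device_stats *e1000_get_stats(struct net_device *netdev)
{
	/* counters are maintained in the generic per-device block */
	return &netdev->stats;
}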

