* Re: Wrong network usage reported by /proc
[not found] <20090504171408.3e13822c@python3.es.egwn.lan>
@ 2009-05-04 17:53 ` Eric Dumazet
2009-05-04 19:11 ` Matthias Saou
0 siblings, 1 reply; 8+ messages in thread
From: Eric Dumazet @ 2009-05-04 17:53 UTC (permalink / raw)
To: Matthias Saou; +Cc: linux-kernel, Linux Netdev List
Matthias Saou wrote:
> Hi,
>
> I'm posting here as a last resort. I've got lots of heavily used RHEL5
> servers (2.6.18 based) that are reporting all sorts of impossible
> network usage values through /proc, leading to unrealistic snmp/cacti
> graphs where the outgoing bandwidth used is higher than the physical
> interface's maximum speed.
>
> For some details and a test script which compares values from /proc
> with values from tcpdump:
> https://bugzilla.redhat.com/show_bug.cgi?id=489541
>
> The values collected using tcpdump always seem realistic and match the
> values seen on the remote network equipment. So my obvious conclusion
> (but possibly wrong given my limited knowledge) is that something is
> wrong in the kernel, since it's the one exposing the /proc interface.
>
> I've reproduced what seems to be the same problem on recent kernels,
> including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
> simple python script available here makes it easy to see:
> https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
>
> * I run the script on my workstation, which has an FTP server enabled
> * I download a DVD ISO from a remote workstation: the values match
> * I start ping floods from remote workstations: the values reported
> by /proc are much higher than the ones reported by tcpdump. I used
> "ping -s 500 -f myworkstation" from two remote workstations
>
> If there's anything flawed in my debugging, I'd love to have someone
> point it out to me. TIA to anyone willing to have a look.
>
> Matthias
>
I could not reproduce this here... What kind of NIC are you using on
the affected systems? Some ethernet drivers report stats from the card
itself, and I remember seeing some strange stats on some hardware, but
I cannot remember which one it was (we were reading NULL values instead
of the real ones once in a while; maybe it was a firmware issue...).
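One quick way to check for that kind of divergence is to compare the
/proc/net/dev counter with what the card itself exposes through
"ethtool -S". A sketch (the "tx"/"bytes" name match is an assumption;
counter names vary per driver, e.g. tg3 uses tx_octets):

#!/usr/bin/python
# Sketch: compare the kernel's tx byte counter from /proc/net/dev with
# the NIC's own MAC statistics from `ethtool -S`, to see whether the
# driver-reported stats drift from what the card itself counts.
# Assumption: the driver exposes a tx byte counter whose ethtool name
# contains both "tx" and "bytes" (true for e1000/e1000e; adjust the
# match for drivers like tg3, which names it tx_octets).
import sys
from subprocess import Popen, PIPE

def proc_tx_bytes(iface):
    for line in open('/proc/net/dev'):
        if ':' not in line:
            continue                     # skip the two header lines
        name, data = line.split(':', 1)
        if name.strip() == iface:
            return int(data.split()[8])  # 9th field after ':' = tx bytes

def ethtool_tx_bytes(iface):
    out = Popen(['ethtool', '-S', iface],
                stdout=PIPE).communicate()[0].decode('ascii', 'replace')
    for line in out.splitlines():
        key, sep, value = line.partition(':')
        if sep and 'tx' in key and 'bytes' in key:
            return int(value)

iface = sys.argv[1] if len(sys.argv) > 1 else 'eth0'
print("/proc/net/dev tx bytes: %r" % proc_tx_bytes(iface))
print("ethtool -S    tx bytes: %r" % ethtool_tx_bytes(iface))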
* Re: Wrong network usage reported by /proc
2009-05-04 17:53 ` Wrong network usage reported by /proc Eric Dumazet
@ 2009-05-04 19:11 ` Matthias Saou
2009-05-05 5:04 ` Willy Tarreau
0 siblings, 1 reply; 8+ messages in thread
From: Matthias Saou @ 2009-05-04 19:11 UTC (permalink / raw)
To: Eric Dumazet; +Cc: linux-kernel, Linux Netdev List
Eric Dumazet wrote:
> Matthias Saou wrote:
> > Hi,
> >
> > I'm posting here as a last resort. I've got lots of heavily used RHEL5
> > servers (2.6.18 based) that are reporting all sorts of impossible
> > network usage values through /proc, leading to unrealistic snmp/cacti
> > graphs where the outgoing bandwidth used is higher than the physical
> > interface's maximum speed.
> >
> > For some details and a test script which compares values from /proc
> > with values from tcpdump:
> > https://bugzilla.redhat.com/show_bug.cgi?id=489541
> >
> > The values collected using tcpdump always seem realistic and match the
> > values seen on the remote network equipment. So my obvious conclusion
> > (but possibly wrong given my limited knowledge) is that something is
> > wrong in the kernel, since it's the one exposing the /proc interface.
> >
> > I've reproduced what seems to be the same problem on recent kernels,
> > including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
> > simple python script available here makes it easy to see:
> > https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
> >
> > * I run the script on my workstation, which has an FTP server enabled
> > * I download a DVD ISO from a remote workstation: the values match
> > * I start ping floods from remote workstations: the values reported
> > by /proc are much higher than the ones reported by tcpdump. I used
> > "ping -s 500 -f myworkstation" from two remote workstations
> >
> > If there's anything flawed in my debugging, I'd love to have someone
> > point it out to me. TIA to anyone willing to have a look.
> >
> > Matthias
> >
>
> I could not reproduce this here... What kind of NIC are you using on
> the affected systems? Some ethernet drivers report stats from the card
> itself, and I remember seeing some strange stats on some hardware, but
> I cannot remember which one it was (we were reading NULL values instead
> of the real ones once in a while; maybe it was a firmware issue...).
My workstation has a Broadcom BCM5752 (tg3 module). The servers which
are most affected have Intel 82571EB (e1000e). But the issue is that
with /proc, the values are a lot _higher_ than with tcpdump, and the
tcpdump values seem to be the correct ones.
Matthias
--
Clean custom Red Hat Linux rpm packages : http://freshrpms.net/
Fedora release 10 (Cambridge) - Linux kernel
2.6.27.21-170.2.56.fc10.x86_64 Load : 2.20 0.88 0.42
* Re: Wrong network usage reported by /proc
2009-05-04 19:11 ` Matthias Saou
@ 2009-05-05 5:04 ` Willy Tarreau
2009-05-05 5:22 ` Eric Dumazet
0 siblings, 1 reply; 8+ messages in thread
From: Willy Tarreau @ 2009-05-05 5:04 UTC (permalink / raw)
To: Matthias Saou; +Cc: Eric Dumazet, linux-kernel, Linux Netdev List
On Mon, May 04, 2009 at 09:11:51PM +0200, Matthias Saou wrote:
> Eric Dumazet wrote:
>
> > Matthias Saou wrote:
> > > Hi,
> > >
> > > I'm posting here as a last resort. I've got lots of heavily used RHEL5
> > > servers (2.6.18 based) that are reporting all sorts of impossible
> > > network usage values through /proc, leading to unrealistic snmp/cacti
> > > graphs where the outgoing bandwidth used is higher than the physical
> > > interface's maximum speed.
> > >
> > > For some details and a test script which compares values from /proc
> > > with values from tcpdump:
> > > https://bugzilla.redhat.com/show_bug.cgi?id=489541
> > >
> > > The values collected using tcpdump always seem realistic and match the
> > > values seen on the remote network equipment. So my obvious conclusion
> > > (but possibly wrong given my limited knowledge) is that something is
> > > wrong in the kernel, since it's the one exposing the /proc interface.
> > >
> > > I've reproduced what seems to be the same problem on recent kernels,
> > > including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
> > > simple python script available here makes it easy to see:
> > > https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
> > >
> > > * I run the script on my workstation, which has an FTP server enabled
> > > * I download a DVD ISO from a remote workstation: the values match
> > > * I start ping floods from remote workstations: the values reported
> > > by /proc are much higher than the ones reported by tcpdump. I used
> > > "ping -s 500 -f myworkstation" from two remote workstations
> > >
> > > If there's anything flawed in my debugging, I'd love to have someone
> > > point it out to me. TIA to anyone willing to have a look.
> > >
> > > Matthias
> > >
> >
> > I could not reproduce this here... What kind of NIC are you using on
> > the affected systems? Some ethernet drivers report stats from the card
> > itself, and I remember seeing some strange stats on some hardware, but
> > I cannot remember which one it was (we were reading NULL values instead
> > of the real ones once in a while; maybe it was a firmware issue...).
>
> My workstation has a Broadcom BCM5752 (tg3 module). The servers which
> are most affected have Intel 82571EB (e1000e). But the issue is that
> with /proc, the values are a lot _higher_ than with tcpdump, and the
> tcpdump values seem to be the correct ones.
The e1000 chip reports stats every 2 seconds, so you have to collect
stats every 2 seconds; otherwise you get "camel-looking" stats.
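A minimal sketch of sampling aligned to that period (assuming the 2 s
figure; "eth0" and the 4 s interval are stand-ins):

# Sketch: sample the tx byte counter at a multiple of the ~2 s hardware
# refresh so that each delta spans whole refresh periods.
# Assumptions: "eth0" is the monitored interface, 4 s is one possible
# multiple of the refresh period.
import time

IFACE = 'eth0'
INTERVAL = 4  # seconds, a multiple of the 2 s refresh

def read_tx_bytes(iface):
    # tx bytes = 9th field after the ':' in /proc/net/dev
    for line in open('/proc/net/dev'):
        if ':' in line and line.split(':', 1)[0].strip() == iface:
            return int(line.split(':', 1)[1].split()[8])
    raise ValueError('interface %s not found' % iface)

prev = read_tx_bytes(IFACE)
while True:
    time.sleep(INTERVAL)
    cur = read_tx_bytes(IFACE)
    print('%.1f kB/s' % ((cur - prev) / 1024.0 / INTERVAL))
    prev = cur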
Willy
* Re: Wrong network usage reported by /proc
2009-05-05 5:04 ` Willy Tarreau
@ 2009-05-05 5:22 ` Eric Dumazet
2009-05-05 5:50 ` Willy Tarreau
0 siblings, 1 reply; 8+ messages in thread
From: Eric Dumazet @ 2009-05-05 5:22 UTC (permalink / raw)
To: Willy Tarreau; +Cc: Matthias Saou, linux-kernel, Linux Netdev List
Willy Tarreau wrote:
> On Mon, May 04, 2009 at 09:11:51PM +0200, Matthias Saou wrote:
>> Eric Dumazet wrote:
>>
>>> Matthias Saou wrote:
>>>> Hi,
>>>>
>>>> I'm posting here as a last resort. I've got lots of heavily used RHEL5
>>>> servers (2.6.18 based) that are reporting all sorts of impossible
>>>> network usage values through /proc, leading to unrealistic snmp/cacti
>>>> graphs where the outgoing bandwidth used is higher than the physical
>>>> interface's maximum speed.
>>>>
>>>> For some details and a test script which compares values from /proc
>>>> with values from tcpdump:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=489541
>>>>
>>>> The values collected using tcpdump always seem realistic and match the
>>>> values seen on the remote network equipment. So my obvious conclusion
>>>> (but possibly wrong given my limited knowledge) is that something is
>>>> wrong in the kernel, since it's the one exposing the /proc interface.
>>>>
>>>> I've reproduced what seems to be the same problem on recent kernels,
>>>> including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
>>>> simple python script available here makes it easy to see:
>>>> https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
>>>>
>>>> * I run the script on my workstation, which has an FTP server enabled
>>>> * I download a DVD ISO from a remote workstation: the values match
>>>> * I start ping floods from remote workstations: the values reported
>>>> by /proc are much higher than the ones reported by tcpdump. I used
>>>> "ping -s 500 -f myworkstation" from two remote workstations
>>>>
>>>> If there's anything flawed in my debugging, I'd love to have someone
>>>> point it out to me. TIA to anyone willing to have a look.
>>>>
>>>> Matthias
>>>>
>>> I could not reproduce this here... What kind of NIC are you using on
>>> the affected systems? Some ethernet drivers report stats from the card
>>> itself, and I remember seeing some strange stats on some hardware, but
>>> I cannot remember which one it was (we were reading NULL values instead
>>> of the real ones once in a while; maybe it was a firmware issue...).
>> My workstation has a Broadcom BCM5752 (tg3 module). The servers which
>> are most affected have Intel 82571EB (e1000e). But the issue is that
>> with /proc, the values are a lot _higher_ than with tcpdump, and the
>> tcpdump values seem to be the correct ones.
>
> The e1000 chip reports stats every 2 seconds, so you have to collect
> stats every 2 seconds; otherwise you get "camel-looking" stats.
>
I looked at the e1000e driver, and apparently tx_packets & tx_bytes are
computed by the TX completion routine, not by the chip:
static bool e1000_clean_tx_irq(struct e1000_adapter *adapter)
{
...
        if (cleaned) {
                struct sk_buff *skb = buffer_info->skb;
                unsigned int segs, bytecount;

                segs = skb_shinfo(skb)->gso_segs ?: 1;
                /* multiply data chunks by size of headers */
                bytecount = ((segs - 1) * skb_headlen(skb)) +
                            skb->len;
                // maybe bytecount is wrong on some skbs ?
                total_tx_packets += segs;
                total_tx_bytes += bytecount;
        }
...
        adapter->net_stats.tx_bytes += total_tx_bytes;
        adapter->net_stats.tx_packets += total_tx_packets;
...
}
and the driver's get_stats() returns this adapter->net_stats structure to the caller:
static struct net_device_stats *e1000_get_stats(struct net_device *netdev)
{
        struct e1000_adapter *adapter = netdev_priv(netdev);

        /* only return the current stats */
        return &adapter->net_stats;
}
Could be converted to use netdev->stats... Oh well...
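To make that accounting concrete, the same computation with invented
numbers for a single TSO skb (the values below are illustrative only,
not from a real trace):

# Illustration of the TX accounting above: for a TSO skb the driver
# counts the linear header area once per segment, since each segment
# goes on the wire with its own header copy.
skb_len = 65226       # total bytes held by the skb (invented)
skb_headlen = 66      # linear area: Ethernet + IP + TCP headers (invented)
gso_segs = 45         # segments the hardware will emit (invented)

bytecount = (gso_segs - 1) * skb_headlen + skb_len
print("tx_packets += %d, tx_bytes += %d" % (gso_segs, bytecount))
# -> tx_packets += 45, tx_bytes += 68130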
* Re: Wrong network usage reported by /proc
2009-05-05 5:22 ` Eric Dumazet
@ 2009-05-05 5:50 ` Willy Tarreau
2009-05-05 8:09 ` Matthias Saou
0 siblings, 1 reply; 8+ messages in thread
From: Willy Tarreau @ 2009-05-05 5:50 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Matthias Saou, linux-kernel, Linux Netdev List
On Tue, May 05, 2009 at 07:22:16AM +0200, Eric Dumazet wrote:
> Willy Tarreau wrote:
> > On Mon, May 04, 2009 at 09:11:51PM +0200, Matthias Saou wrote:
> >> Eric Dumazet wrote:
> >>
> >>> Matthias Saou wrote:
> >>>> Hi,
> >>>>
> >>>> I'm posting here as a last resort. I've got lots of heavily used RHEL5
> >>>> servers (2.6.18 based) that are reporting all sorts of impossible
> >>>> network usage values through /proc, leading to unrealistic snmp/cacti
> >>>> graphs where the outgoing bandwidth used is higher than the physical
> >>>> interface's maximum speed.
> >>>>
> >>>> For some details and a test script which compares values from /proc
> >>>> with values from tcpdump:
> >>>> https://bugzilla.redhat.com/show_bug.cgi?id=489541
> >>>>
> >>>> The values collected using tcpdump always seem realistic and match the
> >>>> values seen on the remote network equipment. So my obvious conclusion
> >>>> (but possibly wrong given my limited knowledge) is that something is
> >>>> wrong in the kernel, since it's the one exposing the /proc interface.
> >>>>
> >>>> I've reproduced what seems to be the same problem on recent kernels,
> >>>> including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
> >>>> simple python script available here makes it easy to see:
> >>>> https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
> >>>>
> >>>> * I run the script on my workstation, which has an FTP server enabled
> >>>> * I download a DVD ISO from a remote workstation: the values match
> >>>> * I start ping floods from remote workstations: the values reported
> >>>> by /proc are much higher than the ones reported by tcpdump. I used
> >>>> "ping -s 500 -f myworkstation" from two remote workstations
> >>>>
> >>>> If there's anything flawed in my debugging, I'd love to have someone
> >>>> point it out to me. TIA to anyone willing to have a look.
> >>>>
> >>>> Matthias
> >>>>
> >>> I could not reproduce this here... What kind of NIC are you using on
> >>> the affected systems? Some ethernet drivers report stats from the card
> >>> itself, and I remember seeing some strange stats on some hardware, but
> >>> I cannot remember which one it was (we were reading NULL values instead
> >>> of the real ones once in a while; maybe it was a firmware issue...).
> >> My workstation has a Broadcom BCM5752 (tg3 module). The servers which
> >> are most affected have Intel 82571EB (e1000e). But the issue is that
> >> with /proc, the values are a lot _higher_ than with tcpdump, and the
> >> tcpdump values seem to be the correct ones.
> >
> > The e1000 chip reports stats every 2 seconds, so you have to collect
> > stats every 2 seconds; otherwise you get "camel-looking" stats.
> >
>
> I looked at the e1000e driver, and apparently tx_packets & tx_bytes are
> computed by the TX completion routine, not by the chip.
Ah, I thought it was the chip that returned those stats every 2 seconds;
otherwise I don't see the reason to delay their reporting. Wait, I'm
speaking about e1000, I've never tried e1000e. Maybe there have been
changes there. Anyway, Matthias talked about RHEL5's 2.6.18, which I
don't think had e1000e yet.
In any case, we haven't had any concrete data so far, so it's hard to
tell (I haven't copy-pasted the links above into my browser yet).
Regards,
Willy
* Re: Wrong network usage reported by /proc
2009-05-05 5:50 ` Willy Tarreau
@ 2009-05-05 8:09 ` Matthias Saou
2009-05-05 8:51 ` Eric Dumazet
0 siblings, 1 reply; 8+ messages in thread
From: Matthias Saou @ 2009-05-05 8:09 UTC (permalink / raw)
To: Willy Tarreau; +Cc: linux-kernel, Linux Netdev List
[-- Attachment #1: Type: text/plain, Size: 4303 bytes --]
Willy Tarreau wrote:
> On Tue, May 05, 2009 at 07:22:16AM +0200, Eric Dumazet wrote:
> > Willy Tarreau wrote:
> > > On Mon, May 04, 2009 at 09:11:51PM +0200, Matthias Saou wrote:
> > >> Eric Dumazet wrote:
> > >>
> > >>> Matthias Saou wrote:
> > >>>> Hi,
> > >>>>
> > >>>> I'm posting here as a last resort. I've got lots of heavily used RHEL5
> > >>>> servers (2.6.18 based) that are reporting all sorts of impossible
> > >>>> network usage values through /proc, leading to unrealistic snmp/cacti
> > >>>> graphs where the outgoing bandwidth used is higher than the physical
> > >>>> interface's maximum speed.
> > >>>>
> > >>>> For some details and a test script which compares values from /proc
> > >>>> with values from tcpdump:
> > >>>> https://bugzilla.redhat.com/show_bug.cgi?id=489541
> > >>>>
> > >>>> The values collected using tcpdump always seem realistic and match the
> > >>>> values seen on the remote network equipment. So my obvious conclusion
> > >>>> (but possibly wrong given my limited knowledge) is that something is
> > >>>> wrong in the kernel, since it's the one exposing the /proc interface.
> > >>>>
> > >>>> I've reproduced what seems to be the same problem on recent kernels,
> > >>>> including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
> > >>>> simple python script available here makes it easy to see:
> > >>>> https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
> > >>>>
> > >>>> * I run the script on my workstation, which has an FTP server enabled
> > >>>> * I download a DVD ISO from a remote workstation: the values match
> > >>>> * I start ping floods from remote workstations: the values reported
> > >>>> by /proc are much higher than the ones reported by tcpdump. I used
> > >>>> "ping -s 500 -f myworkstation" from two remote workstations
> > >>>>
> > >>>> If there's anything flawed in my debugging, I'd love to have someone
> > >>>> point it out to me. TIA to anyone willing to have a look.
> > >>>>
> > >>>> Matthias
> > >>>>
> > >>> I could not reproduce this here... What kind of NIC are you using on
> > >>> the affected systems? Some ethernet drivers report stats from the card
> > >>> itself, and I remember seeing some strange stats on some hardware, but
> > >>> I cannot remember which one it was (we were reading NULL values instead
> > >>> of the real ones once in a while; maybe it was a firmware issue...).
> > >> My workstation has a Broadcom BCM5752 (tg3 module). The servers which
> > >> are most affected have Intel 82571EB (e1000e). But the issue is that
> > >> with /proc, the values are a lot _higher_ than with tcpdump, and the
> > >> tcpdump values seem to be the correct ones.
> > >
> > > The e1000 chip reports stats every 2 seconds, so you have to collect
> > > stats every 2 seconds; otherwise you get "camel-looking" stats.
> > >
> >
> > I looked at the e1000e driver, and apparently tx_packets & tx_bytes are
> > computed by the TX completion routine, not by the chip.
>
> Ah, I thought it was the chip that returned those stats every 2 seconds;
> otherwise I don't see the reason to delay their reporting. Wait, I'm
> speaking about e1000, I've never tried e1000e. Maybe there have been
> changes there. Anyway, Matthias talked about RHEL5's 2.6.18, which I
> don't think had e1000e yet.
>
> In any case, we haven't had any concrete data so far, so it's hard to
> tell (I haven't copy-pasted the links above into my browser yet).
If you need any more data, please just ask. What makes me wonder most,
though, is that tcpdump and iptraf report what seem to be correct
bandwidth values (they seem to use the same low-level access for their
counters), whereas snmp and ifconfig (which seem to use /proc for
theirs) report unrealistically high values.
The tcpdump vs. /proc discrepancy would be the first thing to look at,
since it might give hints as to where the problem lies, no?
From there, I could collect any data one might find relevant to
diagnose further.
I'm attaching the simple python script I've used for testing.
Matthias
--
Clean custom Red Hat Linux rpm packages : http://freshrpms.net/
Fedora release 10 (Cambridge) - Linux kernel
2.6.27.21-170.2.56.fc10.x86_64 Load : 0.19 0.15 0.05
[-- Attachment #2: bandwidth-monitor.py --]
[-- Type: text/x-python, Size: 3674 bytes --]
#!/usr/bin/python

import re
import time
import thread
import getopt
import signal
import sys
from subprocess import Popen, PIPE, STDOUT

# TODO print not refreshing correctly

def get_bytes_from_tcpdump(interface, src, byte_values):
    command = Popen(['tcpdump', '-n', '-e', '-p', '-l', '-v', '-i',
                     interface, 'src', src], stdout=PIPE, stderr=PIPE,
                    bufsize=0)
    while 1:
        line = command.stdout.readline()
        if not line:
            # time.sleep(1)
            continue
        bytes_pattern = re.search('length \d*', line)
        # dest_pattern = re.search('> .*: ', line)
        if bytes_pattern:
            s = bytes_pattern.group(0)
            bytes = int(s[7:]) + 5
        else:
            # ARP packet
            bytes = 28 + 14
        byte_values[0] += bytes
        byte_values[1] += 1
        # time.sleep(1)
        # if dest_pattern:
        #     s = dest_pattern.group()
        #     dest = s[2:len(s)-2]

def get_bytes_from_proc(interface, byte_values):
    wrap = 2**32
    offset = read_proc(interface)
    while(1):
        current_bytes = read_proc(interface)
        increase = current_bytes - offset
        if increase < 0:
            increase = (wrap - (byte_values[0] % wrap)) + current_bytes
        byte_values[0] += increase
        offset = current_bytes
        time.sleep(1)

def get_bytes_from_ifconfig(interface, byte_values):
    offset = read_ifconfig(interface)
    while(1):
        bytes = read_ifconfig(interface)
        byte_values[0] += (bytes - offset)
        offset = bytes
        time.sleep(1)

def read_ifconfig(interface):
    command = Popen(['/sbin/ifconfig', interface], stdout=PIPE, stderr=PIPE)
    # received bytes
    # lines = command.communicate()[0].split()[34]
    # transmitted bytes
    try:
        s = command.communicate()
    except Exception, e:
        print "failed: %r" % e
    bytes = int(s[0].split()[38].split(':')[1])
    return bytes

def read_proc(interface):
    f = open('/proc/net/dev')
    for line in f:
        values = line.split()
        i = values[0].split(':')[0]
        if interface == i:
            bytes = int(values[8])
            # received bytes
            # bytes = int(values[0].split(':')[1])
            f.close()
            return bytes
    f.close()

def signal_handler(signum, frame):
    # print "bye"
    sys.exit(0)

def main(interface, host):
    signal.signal(signal.SIGINT, signal_handler)
    byte_value_tcpdump = [0, 0]
    byte_value_proc = [0]
    byte_value_ifconfig = [0]
    thread.start_new_thread(get_bytes_from_tcpdump,
                            (interface, host, byte_value_tcpdump))
    thread.start_new_thread(get_bytes_from_proc, (interface, byte_value_proc))
    # thread.start_new_thread(get_bytes_from_ifconfig,
    #                         (interface, byte_value_ifconfig))
    while 1:
        s = "TCPDUMP: %d (%d packets)\nPROC: %d" % (byte_value_tcpdump[0],
                                                    byte_value_tcpdump[1],
                                                    byte_value_proc[0])
        print s
        time.sleep(1)

def usage():
    print "Usage: monitor -i interface (e.g. eth0) -m host_ip"

if __name__ == "__main__":
    interface = None
    ip = None
    opts, args = getopt.getopt(sys.argv[1:], "hi:m:", ["help"])
    for o, a in opts:
        if o == '-i':
            interface = a
        elif o == '-m':
            ip = a
        elif o in ['-h', '--help']:
            usage()
            sys.exit()
    if not interface or not ip:
        usage()
        sys.exit()
    main(interface, ip)
* Re: Wrong network usage reported by /proc
2009-05-05 8:09 ` Matthias Saou
@ 2009-05-05 8:51 ` Eric Dumazet
2009-05-07 17:58 ` Matthias Saou
0 siblings, 1 reply; 8+ messages in thread
From: Eric Dumazet @ 2009-05-05 8:51 UTC (permalink / raw)
To: Matthias Saou; +Cc: Willy Tarreau, linux-kernel, Linux Netdev List
Matthias Saou wrote:
> Willy Tarreau wrote:
>
>> On Tue, May 05, 2009 at 07:22:16AM +0200, Eric Dumazet wrote:
>>> Willy Tarreau wrote:
>>>> On Mon, May 04, 2009 at 09:11:51PM +0200, Matthias Saou wrote:
>>>>> Eric Dumazet wrote:
>>>>>
>>>>>> Matthias Saou wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm posting here as a last resort. I've got lots of heavily used RHEL5
>>>>>>> servers (2.6.18 based) that are reporting all sorts of impossible
>>>>>>> network usage values through /proc, leading to unrealistic snmp/cacti
>>>>>>> graphs where the outgoing bandwidth used is higher than the physical
>>>>>>> interface's maximum speed.
>>>>>>>
>>>>>>> For some details and a test script which compares values from /proc
>>>>>>> with values from tcpdump:
>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=489541
>>>>>>>
>>>>>>> The values collected using tcpdump always seem realistic and match the
>>>>>>> values seen on the remote network equipment. So my obvious conclusion
>>>>>>> (but possibly wrong given my limited knowledge) is that something is
>>>>>>> wrong in the kernel, since it's the one exposing the /proc interface.
>>>>>>>
>>>>>>> I've reproduced what seems to be the same problem on recent kernels,
>>>>>>> including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
>>>>>>> simple python script available here makes it easy to see:
>>>>>>> https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
>>>>>>>
>>>>>>> * I run the script on my workstation, which has an FTP server enabled
>>>>>>> * I download a DVD ISO from a remote workstation: the values match
>>>>>>> * I start ping floods from remote workstations: the values reported
>>>>>>> by /proc are much higher than the ones reported by tcpdump. I used
>>>>>>> "ping -s 500 -f myworkstation" from two remote workstations
>>>>>>>
>>>>>>> If there's anything flawed in my debugging, I'd love to have someone
>>>>>>> point it out to me. TIA to anyone willing to have a look.
>>>>>>>
>>>>>>> Matthias
>>>>>>>
>>>>>> I could not reproduce this here... What kind of NIC are you using on
>>>>>> the affected systems? Some ethernet drivers report stats from the card
>>>>>> itself, and I remember seeing some strange stats on some hardware, but
>>>>>> I cannot remember which one it was (we were reading NULL values instead
>>>>>> of the real ones once in a while; maybe it was a firmware issue...).
>>>>> My workstation has a Broadcom BCM5752 (tg3 module). The servers which
>>>>> are most affected have Intel 82571EB (e1000e). But the issue is that
>>>>> with /proc, the values are a lot _higher_ than with tcpdump, and the
>>>>> tcpdump values seem to be the correct ones.
>>>> The e1000 chip reports stats every 2 seconds, so you have to collect
>>>> stats every 2 seconds; otherwise you get "camel-looking" stats.
>>>>
>>> I looked at the e1000e driver, and apparently tx_packets & tx_bytes are
>>> computed by the TX completion routine, not by the chip.
>> Ah, I thought it was the chip that returned those stats every 2 seconds;
>> otherwise I don't see the reason to delay their reporting. Wait, I'm
>> speaking about e1000, I've never tried e1000e. Maybe there have been
>> changes there. Anyway, Matthias talked about RHEL5's 2.6.18, which I
>> don't think had e1000e yet.
>>
>> In any case, we haven't had any concrete data so far, so it's hard to
>> tell (I haven't copy-pasted the links above into my browser yet).
>
> If you need any more data, please just ask. What makes me wonder most,
> though, is that tcpdump and iptraf report what seem to be correct
> bandwidth values (they seem to use the same low-level access for their
> counters), whereas snmp and ifconfig (which seem to use /proc for
> theirs) report unrealistically high values.
>
> The tcpdump vs. /proc discrepancy would be the first thing to look at,
> since it might give hints as to where the problem lies, no?
>
> From there, I could collect any data one might find relevant to
> diagnose further.
>
> I'm attaching the simple python script I've used for testing.
>
> Matthias
>
>
Your python script is buggy, since the space after ':' is optional:
# cat /proc/net/dev | cut -c1-80
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packe
lo: 16056 36 0 0 0 0 0 0 16056
eth0:624245505 7370445 0 0 0 0 0 108 586782291 737
eth1:2512329067 11360819 0 0 0 0 0 0 2521050992
bond0:3378296009 15279963 0 0 0 0 0 0 3390533080
bond1: 0 0 0 0 0 0 0 0 0
eth2:865966942 3919144 0 0 0 0 0 0 869482088 391
eth3: 0 0 0 0 0 0 0 0 0
vlan.103: 1277511 18134 0 0 0 0 0 0 3439082 1
vlan.825:3095633732 15533200 0 0 0 0 0 0 332349968
So your read_proc() is wrong, since it uses line.split():
def read_proc(interface):
    f = open('/proc/net/dev')
    for line in f:
        values = line.split()
        i = values[0].split(':')[0]
        if interface == i:
            bytes = int(values[8])
            # received bytes
            # bytes = int(values[0].split(':')[1])
            f.close()
            return bytes
    f.close()
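To see the effect, run that split on two lines from the dump above (a
quick illustration):

# Demonstration of how the optional space after ':' shifts the
# whitespace-split fields by one (both lines taken from the dump above)
for line in ("lo: 16056 36 0 0 0 0 0 0 16056",
             "eth0:624245505 7370445 0 0 0 0 0 108 586782291 737"):
    values = line.split()
    print("%s -> values[8] = %s" % (values[0].split(':')[0], values[8]))
# lo   -> values[8] = 0          (rx multicast: the rx byte count was its
#                                 own token, so every field shifted)
# eth0 -> values[8] = 586782291  (tx bytes: the rx byte count was fused
#                                 into values[0])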
BTW, your tcpdump might report lower values too, since it doesn't account
for all headers, non-IP frames, or forwarded frames (the source IP is then
not your host IP).
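To put numbers on the header part: for the "ping -s 500" flood from the
original report, the per-packet gap is only a few percent (a sketch,
assuming plain IPv4 ICMP over Ethernet, no VLAN tags):

# Rough per-packet numbers for "ping -s 500" (assumes IPv4 ICMP over
# Ethernet, no VLAN): tcpdump's IP-level "length" misses the Ethernet
# header, so a small gap vs. the interface byte counter is expected,
# but only a few percent, nowhere near the differences reported.
icmp_payload = 500                 # ping -s 500
ip_length = icmp_payload + 8 + 20  # + ICMP header + IPv4 header = 528
on_wire = ip_length + 14           # + Ethernet header = 542
print("%d vs %d bytes: %.1f%% gap"
      % (ip_length, on_wire, 100.0 * (on_wire - ip_length) / ip_length))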
* Re: Wrong network usage reported by /proc
2009-05-05 8:51 ` Eric Dumazet
@ 2009-05-07 17:58 ` Matthias Saou
0 siblings, 0 replies; 8+ messages in thread
From: Matthias Saou @ 2009-05-07 17:58 UTC (permalink / raw)
To: linux-kernel, Linux Netdev List
[-- Attachment #1: Type: text/plain, Size: 5895 bytes --]
Eric Dumazet wrote:
> Matthias Saou wrote:
> > Willy Tarreau wrote:
> >
> >> On Tue, May 05, 2009 at 07:22:16AM +0200, Eric Dumazet wrote:
> >>> Willy Tarreau wrote:
> >>>> On Mon, May 04, 2009 at 09:11:51PM +0200, Matthias Saou wrote:
> >>>>> Eric Dumazet wrote:
> >>>>>
> >>>>>> Matthias Saou wrote:
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> I'm posting here as a last resort. I've got lots of heavily used RHEL5
> >>>>>>> servers (2.6.18 based) that are reporting all sorts of impossible
> >>>>>>> network usage values through /proc, leading to unrealistic snmp/cacti
> >>>>>>> graphs where the outgoing bandwidth used is higher than the physical
> >>>>>>> interface's maximum speed.
> >>>>>>>
> >>>>>>> For some details and a test script which compares values from /proc
> >>>>>>> with values from tcpdump:
> >>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=489541
> >>>>>>>
> >>>>>>> The values collected using tcpdump always seem realistic and match the
> >>>>>>> values seen on the remote network equipment. So my obvious conclusion
> >>>>>>> (but possibly wrong given my limited knowledge) is that something is
> >>>>>>> wrong in the kernel, since it's the one exposing the /proc interface.
> >>>>>>>
> >>>>>>> I've reproduced what seems to be the same problem on recent kernels,
> >>>>>>> including the 2.6.27.21-170.2.56.fc10.x86_64 I'm running right now. The
> >>>>>>> simple python script available here makes it easy to see:
> >>>>>>> https://www.redhat.com/archives/rhelv5-list/2009-February/msg00166.html
> >>>>>>>
> >>>>>>> * I run the script on my workstation, which has an FTP server enabled
> >>>>>>> * I download a DVD ISO from a remote workstation: the values match
> >>>>>>> * I start ping floods from remote workstations: the values reported
> >>>>>>> by /proc are much higher than the ones reported by tcpdump. I used
> >>>>>>> "ping -s 500 -f myworkstation" from two remote workstations
> >>>>>>>
> >>>>>>> If there's anything flawed in my debugging, I'd love to have someone
> >>>>>>> point it out to me. TIA to anyone willing to have a look.
> >>>>>>>
> >>>>>>> Matthias
> >>>>>>>
> >>>>>> I could not reproduce this here... What kind of NIC are you using on
> >>>>>> the affected systems? Some ethernet drivers report stats from the card
> >>>>>> itself, and I remember seeing some strange stats on some hardware, but
> >>>>>> I cannot remember which one it was (we were reading NULL values instead
> >>>>>> of the real ones once in a while; maybe it was a firmware issue...).
> >>>>> My workstation has a Broadcom BCM5752 (tg3 module). The servers which
> >>>>> are most affected have Intel 82571EB (e1000e). But the issue is that
> >>>>> with /proc, the values are a lot _higher_ than with tcpdump, and the
> >>>>> tcpdump values seem to be the correct ones.
> >>>> The e1000 chip reports stats every 2 seconds, so you have to collect
> >>>> stats every 2 seconds; otherwise you get "camel-looking" stats.
> >>>>
> >>> I looked at the e1000e driver, and apparently tx_packets & tx_bytes are
> >>> computed by the TX completion routine, not by the chip.
> >> Ah, I thought it was the chip that returned those stats every 2 seconds;
> >> otherwise I don't see the reason to delay their reporting. Wait, I'm
> >> speaking about e1000, I've never tried e1000e. Maybe there have been
> >> changes there. Anyway, Matthias talked about RHEL5's 2.6.18, which I
> >> don't think had e1000e yet.
> >>
> >> In any case, we haven't had any concrete data so far, so it's hard to
> >> tell (I haven't copy-pasted the links above into my browser yet).
> >
> > If you need any more data, please just ask. What makes me wonder most,
> > though, is that tcpdump and iptraf report what seem to be correct
> > bandwidth values (they seem to use the same low-level access for their
> > counters), whereas snmp and ifconfig (which seem to use /proc for
> > theirs) report unrealistically high values.
> >
> > The tcpdump vs. /proc discrepancy would be the first thing to look at,
> > since it might give hints as to where the problem lies, no?
> >
> > From there, I could collect any data one might find relevant to
> > diagnose further.
> >
> > I'm attaching the simple python script I've used for testing.
> >
> > Matthias
> >
> >
>
> Your python script is buggy, since the space after ':' is optional:
[...]
You are right. The script isn't mine originally. I've reviewed and
modified an updated version, attached to this email, which should fix
this as well as other issues, and be more readable.
I've redone some testing, and it seems I'm only able to reproduce the
problem on the 32-bit RHEL5.2 servers, where it's VERY noticeable.
Sample output:
TCPDUMP: 82189861266 (56100271 packets)
PROC: 764627087298 (59162342 packets)
Yes, /proc/net/dev is reporting nearly 10 times more bytes than the sum
of what tcpdump reports in its "length x" fields. This is about what I
see on my snmp graphs: a little more than 100 Mbps reported from the
switch while the server reports 1 Gbps.
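(A quick check of the ratio between the two byte counts above:)

# Gap between the two counters in the sample output
proc_bytes = 764627087298
tcpdump_bytes = 82189861266
print("/proc reports %.1f times more bytes than tcpdump"
      % (proc_bytes / float(tcpdump_bytes)))  # ~9.3 times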
The Red Hat bugzilla entry has been updated, and the issue is surely
better off tracked there. My current guess would also be a bug in the
e1000e module...
But if this rings a bell for anyone, please poke me! The module I'm
using is this one:
filename: /lib/modules/2.6.18-92.1.10.el5PAE/kernel/drivers/net/e1000e/e1000e.ko
version: 0.2.0
license: GPL
description: Intel(R) PRO/1000 Network Driver
author: Intel Corporation, <linux.nics@intel.com>
srcversion: 7DD4D251CA27FFAE6342F30
Thanks all for your feedback and sorry for the wrong initial script.
Matthias
--
Clean custom Red Hat Linux rpm packages : http://freshrpms.net/
Fedora release 10 (Cambridge) - Linux kernel
2.6.27.21-170.2.56.fc10.x86_64 Load : 0.34 0.21 0.19
[-- Attachment #2: monitor.py --]
[-- Type: text/x-python, Size: 3882 bytes --]
#!/usr/bin/python
#
# Simple script to print out realtime byte and packet traffic count on an
# interface using both tcpdump output and /proc/net/dev content
#
# Last change : 20090507
#

import re
import time
import thread
import getopt
import signal
import sys
from subprocess import Popen, PIPE, STDOUT

# TODO print not refreshing correctly

# tx[0] are tx_bytes
# tx[1] are tx_packets
def get_tx_from_tcpdump(interface, tx):
    command = Popen(['tcpdump', '-n', '-e', '-p', '-l', '-v', '-i',
                     interface], stdout=PIPE, stderr=PIPE,
                    bufsize=1)  # line buffering, optimizes a lot
    while 1:
        line = command.stdout.readline()
        if not line:
            # time.sleep(1)
            continue
        # Extract the nnn from the ", length nnn)" part of the line
        bytes_pattern = re.search('length (\d+)', line)
        if bytes_pattern:
            tx[0] += int(bytes_pattern.group(1))
            tx[1] += 1
        else:
            # ARP packet or other output... could be 28 + 14, but just ignore
            bytes = 0
        # Don't wait
        #time.sleep(1)

# tx[0] are tx_bytes
# tx[1] are tx_packets
def get_tx_from_proc(interface, tx):
    wrap = 2**32
    # Get the initial values
    tx_bytes_prev, tx_packets_prev = read_proc_tx(interface)
    # Something went wrong...
    if tx_bytes_prev is None or tx_packets_prev is None:
        s = ("Could not read data from /proc/net/dev. "
             "I was looking for the interface %s." % interface)
        tx[0] = s
        return None
    # Main loop to update tx data values
    while(1):
        tx_bytes, tx_packets = read_proc_tx(interface)
        # Get the difference wrt the previous poll
        tx_bytes_diff = tx_bytes - tx_bytes_prev
        tx_packets_diff = tx_packets - tx_packets_prev
        # Check for a possible counter wrap and re-adjust
        if tx_bytes_diff < 0:
            tx_bytes_diff = (wrap - (tx[0] % wrap)) + tx_bytes
            print "*** Bytes wrap! (from %s to %s)" % (tx_bytes_prev, tx_bytes)
        if tx_packets_diff < 0:
            tx_packets_diff = (wrap - (tx[1] % wrap)) + tx_packets
        # Update our counters
        tx[0] += tx_bytes_diff
        tx[1] += tx_packets_diff
        tx_bytes_prev = tx_bytes
        tx_packets_prev = tx_packets
        # Wait
        time.sleep(1)

# Return tx as [bytes, packets], or [None, None] when the interface is
# missing, so the caller's None check above actually triggers
def read_proc_tx(interface):
    f = open('/proc/net/dev')
    for line in f:
        values = line.split(":")
        i = values[0].replace(' ', '')
        if interface == i:
            tx = [int(values[1].split()[8]), int(values[1].split()[9])]
            f.close()
            return tx
    f.close()
    return [None, None]

def signal_handler(signum, frame):
    sys.exit(0)

def main(interface):
    signal.signal(signal.SIGINT, signal_handler)
    tx_tcpdump = [0, 0]
    tx_proc = [0, 0]
    thread.start_new_thread(get_tx_from_tcpdump, (interface, tx_tcpdump))
    thread.start_new_thread(get_tx_from_proc, (interface, tx_proc))
    while 1:
        tcpdump_bytes = tx_tcpdump[0]
        tcpdump_packets = tx_tcpdump[1]
        proc_bytes = tx_proc[0]
        proc_packets = tx_proc[1]
        if type(proc_bytes) == type('0'):
            print "Error: %s" % proc_bytes
            sys.exit(0)
        s = "TCPDUMP: %d (%d packets)\nPROC: %d (%d packets)" % (
            tcpdump_bytes, tcpdump_packets, proc_bytes, proc_packets)
        print s
        time.sleep(1)

def usage():
    print "Usage: monitor -i <interface> (e.g. eth0)"

if __name__ == "__main__":
    interface = None
    opts, args = getopt.getopt(sys.argv[1:], "hi:", ["help"])
    for o, a in opts:
        if o == '-i':
            interface = a
        elif o in ['-h', '--help']:
            usage()
            sys.exit()
    if not interface:
        usage()
        sys.exit()
    main(interface)