* Debugging Network Performance
@ 2002-06-04 3:27 Allen Curtis
0 siblings, 0 replies; 13+ messages in thread
From: Allen Curtis @ 2002-06-04 3:27 UTC (permalink / raw)
To: linuxppc-dev
I was wondering if anyone could provide some pointers on monitoring and
debugging network communications. The performance of Ethernet communications
seems to vary with kernel revisions. I would like to analyze this problem
but I need a little guidance on the issue.
TIA!
=========== Previous 8260 Ethernet email ==================
2. Some relative performance measurements (HHL 2.4.2 vs. 2.4.19pre9)
         |  10T Hub  | 100BT switch
---------+-----------+--------------
2.4.2    |  410KBps  |   750KBps
2.4.19   |  440KBps  |   190KBps
RedHat 2.4.18-3 (x86) 3900KBps
3. top shows as much as 48% system utilization during a single FTP transfer.
With the fix mentioned in #1, there are 0 errors reported by ifconfig.
I may be missing something, but I believe these numbers are going the
wrong way. There is still the question of the performance decrease when using
a 100BT switch. Signal integrity cannot be the only suspect, considering that
performance did increase with the older kernel version (although nothing like
the workstation performance).
* Debugging Network Performance
@ 2002-06-04 4:34 Allen Curtis
2002-06-04 9:50 ` Kenneth Johansson
0 siblings, 1 reply; 13+ messages in thread
From: Allen Curtis @ 2002-06-04 4:34 UTC (permalink / raw)
To: linuxppc-embedded
I was wondering if anyone could provide some pointers on monitoring and
debugging network communications. The performance of Ethernet communications
seems to vary with kernel revisions. I would like to analyze this problem
but I need a little guidance on the issue.
TIA!
=========== Previous 8260 Ethernet email ==================
2. Some relative performance measurements (HHL 2.4.2 vs. 2.4.19pre9)
         |  10T Hub  | 100BT switch
---------+-----------+--------------
2.4.2    |  410KBps  |   750KBps
2.4.19   |  440KBps  |   190KBps
RedHat 2.4.18-3 (x86) 3900KBps
3. top shows as much as 48% system utilization during a single FTP transfer.
With the fix mentioned in #1, there are 0 errors reported by ifconfig.
I may be missing something, but I believe these numbers are going the
wrong way. There is still the question of the performance decrease when using
a 100BT switch. Signal integrity cannot be the only suspect, considering that
performance did increase with the older kernel version (although nothing like
the workstation performance).
* Re: Debugging Network Performance
2002-06-04 4:34 Allen Curtis
@ 2002-06-04 9:50 ` Kenneth Johansson
0 siblings, 0 replies; 13+ messages in thread
From: Kenneth Johansson @ 2002-06-04 9:50 UTC (permalink / raw)
To: acurtis; +Cc: Linuxppc embedded
I suggest using NetPIPE or something similar so you can also see the impact
of different packet sizes.
I have attached one graph done on a Walnut board, but I do not remember
exactly which kernel version was used.
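For what it's worth, a run looks roughly like this (the TCP binary is called
NPtcp here; the flag names differ a bit between NetPIPE versions, and older
ones want an explicit -r/-t for the receiver/transmitter side, so treat this
only as a sketch and check the documentation of your copy; 192.168.1.10 is
just a placeholder for the board's address):

    # on the board (receiving side)
    NPtcp

    # on the workstation, pointing at the board
    NPtcp -h 192.168.1.10 -o np.out

The output file then gives throughput as a function of message size, which is
what makes buffer- and packet-size effects visible.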
On Tue, 2002-06-04 at 06:34, Allen Curtis wrote:
>
> I was wondering if anyone could provide some pointers on monitoring and
> debugging network communications. The performance of Ethernet communications
> seems to vary with kernel revisions. I would like to analyze this problem
> but I need a little guidance on the issue.
--
Kenneth Johansson
Ericsson AB Tel: +46 8 404 71 83
Borgafjordsgatan 9 Fax: +46 8 404 72 72
164 80 Stockholm kenneth.johansson@etx.ericsson.se
[-- Attachment #2: out.png --]
[-- Type: image/png, Size: 1691 bytes --]
* Re: Debugging Network Performance
@ 2002-06-04 11:12 Mark Wisner
2002-06-04 13:30 ` Allen Curtis
0 siblings, 1 reply; 13+ messages in thread
From: Mark Wisner @ 2002-06-04 11:12 UTC (permalink / raw)
To: acurtis; +Cc: linuxppc-dev
Allen,
Netperf can give you a very detailed report about network performance. If
you think your problem is related to network hardware problems, look at the
errors listed in "ifconfig". This should tell you whether you are getting bad
packets or dropping packets.
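As a rough sketch (the address 192.168.1.10 is just a placeholder for your
board), a single TCP stream test would look like:

    # on the board, start the netperf server
    netserver

    # on the other machine, run a 30 second TCP stream test against it
    netperf -H 192.168.1.10 -t TCP_STREAM -l 30

    # back on the board, look at the error counters
    ifconfig eth0

The errors/dropped/overruns fields in the RX and TX sections of the ifconfig
output are the ones to watch.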
If you think it is a kernel problem, some benchmarks may help you narrow
down the problem. Networks are pretty fluid, and it is sometimes hard to get
reproducible results. When I try to determine driver/kernel network
performance, I try to use an isolated network where I have control over all
traffic, or I use test hardware such as IXIA or Smartbits.
Mark K. Wisner
Advisory Software Engineer
IBM Microelectronics
3039 Cornwallis Rd
RTP, NC 27709
Tel. 919-254-7191
Fax 919-543-7575
* RE: Debugging Network Performance
2002-06-04 11:12 Mark Wisner
@ 2002-06-04 13:30 ` Allen Curtis
0 siblings, 0 replies; 13+ messages in thread
From: Allen Curtis @ 2002-06-04 13:30 UTC (permalink / raw)
To: Mark Wisner; +Cc: linuxppc-dev
> Netperf can give you a very detailed report about network performance. If
> you think your problem is related to network hardware problems,
> look at the
> errors listed in "ifconfig". This should tell you if you are getting bad
> packets or dropping packets.
I did fix a problem in the driver, and now there are no errors reported by
ifconfig.
> If you think it is a kernel problem some benchmarks may help you narrow
> down the problem. Networks are pretty fluid and sometimes hard to get
> reproducible results. When I try to determine driver/kernel network
> performance I try to use an isolated network where I have control over all
> traffic or I use test hardware such as IXIA or Smartbits.
All testing is done on an isolated network. I do not believe that the
problem is in the driver itself. The driver has not changed significantly. I
do need to check the error path since it appears that errors actually help
performance.
What is the best way to track packet processing through the kernel?
* RE: Debugging Network Performance
@ 2002-06-04 14:35 Mark Wisner
2002-06-04 14:46 ` Allen Curtis
0 siblings, 1 reply; 13+ messages in thread
From: Mark Wisner @ 2002-06-04 14:35 UTC (permalink / raw)
To: acurtis; +Cc: linuxppc-dev
Allen,
KGDB may be a good start. netif_rx() is where the packet gets passed to the
stack. It basically puts the packet on a linked list for ip_rcv() in
ip_input.c to pick up and start processing it through the stack until it
passes the results to the application. This is tough stuff. Let me know how
you make out.
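If you go the KGDB route, a session looks roughly like the following once the
kernel is built with the kgdb patch and a serial line is hooked up (the
cross-gdb name and the serial settings are placeholders for whatever your
toolchain uses):

    $ powerpc-linux-gdb vmlinux
    (gdb) set remotebaud 115200
    (gdb) target remote /dev/ttyS0
    (gdb) break netif_rx
    (gdb) continue

From the breakpoint you can step into the stack, although for timing questions
a profiler or tracer disturbs things less than stopping the CPU at every
packet.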
I would guess the IP stack has not changed much between kernels. I would be
more concerned about how the scheduler is working and other kernel tasks.
Mark K. Wisner
Advisory Software Engineer
IBM Microelectronics
3039 Cornwallis Rd
RTP, NC 27709
Tel. 919-254-7191
Fax 919-543-7575
* RE: Debugging Network Performance
2002-06-04 14:35 Mark Wisner
@ 2002-06-04 14:46 ` Allen Curtis
2002-06-04 15:05 ` Michael Fischer
0 siblings, 1 reply; 13+ messages in thread
From: Allen Curtis @ 2002-06-04 14:46 UTC (permalink / raw)
To: Mark Wisner; +Cc: linuxppc-dev
> KGDB may be a good start. netif_rx() is where the packet gets passed to
> the stack. It basically puts the packet on a linked list for ip_rcv() in
> ip_input.c to pick up and start processing it through the stack until it
> passes the results to the application. This is tough stuff. Let me know
> how you make out.
Any idea who the network stack maintainer is? Perhaps they have something
that timestamps a packet's progress through the pipeline.
> I would guess the IP stack has not changed much between kernels.
> I would be
> more concerned about how the scheduler is working and other kernel tasks.
The system is basically idle except for the NFS/FTP traffic.
* RE: Debugging Network Performance
2002-06-04 14:46 ` Allen Curtis
@ 2002-06-04 15:05 ` Michael Fischer
0 siblings, 0 replies; 13+ messages in thread
From: Michael Fischer @ 2002-06-04 15:05 UTC (permalink / raw)
To: acurtis; +Cc: linuxppc-dev
Hello,
> Any idea who the network stack maintainer is? Perhaps they have something
> that timestamps a packet's progress through the pipeline.
I don't know what (if anything) they are using, but when I had to do something
similar I patched my kernel with the LTT package
(http://www.opersys.com/LTT/). It has some built-in tracepoints which are
useful, and I added some user-defined trace events at different points within
the driver and the stack. With a sequence number in the packet which is
copied to the trace, you can pretty much follow and timestamp the way of the
packet through the system.
Of course this affects the performance a bit, but it may give you some
idea of where the packet spends most of its time.
Best regards, Michael
* Re: Debugging Network Performance
@ 2002-06-05 5:18 Bill Fink
2002-06-05 13:16 ` Allen Curtis
0 siblings, 1 reply; 13+ messages in thread
From: Bill Fink @ 2002-06-05 5:18 UTC (permalink / raw)
To: LinuxPPC Developers; +Cc: Bill Fink
On Tue, 4 Jun 2002, Allen Curtis wrote:
> > If you think it is a kernel problem some benchmarks may help you narrow
> > down the problem. Networks are pretty fluid and sometimes hard to get
> > reproducible results. When I try to determine driver/kernel network
> > performance I try to use an isolated network where I have control over all
> > traffic or I use test hardware such as IXIA or Smartbits.
>
> All testing is done on an isolated network. I do not believe that the
> problem is in the driver itself. The driver has not changed significantly. I
> do need to check the error path since it appears that errors actually help
> performance.
>
> What is the best way to track packet processing through the kernel?
When I was helping to investigate a performance problem with the SUNGEM
driver versus the GMAC driver, Anton Blanchard made the following suggestion:
"It would be interesting to see where the cpu is being used. Could you
boot with profile=2 and use readprofile to find the worst cpu hogs
during a run?"
I actually never tried doing this as the problem was resolved through
other methods, but it sounded like something possibly useful to try.
The problem with the poor SUNGEM performance turned out to be lots of
extraneous unnecessary interrupts, so you might want to check out the
eth0 interrupts in /proc/interrupts, comparing the two cases.
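A simple way to compare the two kernels is to snapshot the counters around a
transfer:

    grep eth0 /proc/interrupts > /tmp/irq.before
    # ... run the FTP transfer ...
    grep eth0 /proc/interrupts > /tmp/irq.after
    diff /tmp/irq.before /tmp/irq.after

If the interrupt count per megabyte transferred differs wildly between the two
kernels, that points at the driver's interrupt handling rather than the stack.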
-Bill
* RE: Debugging Network Performance
@ 2002-06-05 12:18 Jean-Denis Boyer
2002-06-05 13:35 ` Allen Curtis
0 siblings, 1 reply; 13+ messages in thread
From: Jean-Denis Boyer @ 2002-06-05 12:18 UTC (permalink / raw)
To: 'acurtis@onz.com'; +Cc: linuxppc-embedded
Allen,
With my 200MHz 8260-based board, using kernel 2.4.19-pre7,
I fetch a large file (~50MB) with ftp and send it to /dev/null:
get large_file /dev/null
and I obtain the following performance:
50216878 bytes received in 5.52 seconds (9090508 bytes/s)
About 8977 kbytes/sec! This is around the maximum for a 100Mbps link.
In 10Mbps half duplex, I achieve a
The connection is in half duplex on the board side,
and full duplex on the ftp server (It crosses a switch).
Switching to full duplex for both does not affect the performance.
If the performance drops in 100Mbps full duplex, the problem might be
one of configuration between your BCM switch and the external switch.
Are both ends of the link (between your BCM switch and external switch)
set to auto-negotiation? If you disable auto-negotiation on one side
only (for example, forcing it to full duplex), you should also set the other
side to full duplex.
That is important, and caused me hours of research. ;-)
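If the PHY is reachable through the MII ioctls (the stock net-tools mii-tool
may or may not work with your FCC driver, so this is only a sketch), you can
check and force the link settings like this:

    # show what was actually negotiated
    mii-tool -v eth0

    # force 100 Mbps full duplex instead of autonegotiating
    mii-tool -F 100baseTx-FD eth0

On an unmanaged switch you usually cannot force the far end, so leaving both
sides on auto-negotiation is normally the safer choice.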
Hope this helps.
--------------------------------------------
Jean-Denis Boyer, B.Eng., System Architect
Mediatrix Telecom Inc.
4229 Garlock Street
Sherbrooke (Québec)
J1L 2C8 CANADA
(819)829-8749 x241
--------------------------------------------
* RE: Debugging Network Performance
2002-06-05 5:18 Debugging Network Performance Bill Fink
@ 2002-06-05 13:16 ` Allen Curtis
2002-06-05 13:56 ` Bill Fink
0 siblings, 1 reply; 13+ messages in thread
From: Allen Curtis @ 2002-06-05 13:16 UTC (permalink / raw)
To: Bill Fink, LinuxPPC Developers
> "It would be interesting to see where the cpu is being used. Could you
> boot with profile=2 and use readprofile to find the worst cpu hogs
> during a run?"
Where can I find documentation on the profile=## option? Are there also
tools that will help to interpret the logs?
> The problem with the poor SUNGEM performance turned out to be lots of
> extraneous unnecessary interrupts, so you might want to check out the
> eth0 interrupts in /proc/interrupts, comparing the two cases.
I will check the interrupts. There are several errata regarding the FCC
interface. I know the code handled these cases, but I need to make sure that
handling did not get removed by mistake.
* RE: Debugging Network Performance
2002-06-05 12:18 Jean-Denis Boyer
@ 2002-06-05 13:35 ` Allen Curtis
0 siblings, 0 replies; 13+ messages in thread
From: Allen Curtis @ 2002-06-05 13:35 UTC (permalink / raw)
To: Jean-Denis Boyer; +Cc: linuxppc-embedded
> With my 200MHz 8260 based board, using kernel 2.4.19-pre7,
> I fetch a large file (~50Mb) with ftp, and send it to /dev/null
> get large_file /dev/null
> and I obtain the following performance:
> 50216878 bytes received in 5.52 seconds (9090508 bytes/s)
I will check smaller files. I know that there is a nice burst of activity
when the transfer starts. Can you try a 100MB+ transfer and see if you still
get the same performance? Perhaps I should try 2.4.19-pre7. If I get better
results, it sure helps to narrow the search for likely suspects. I will also
try piping to /dev/null.
> If the performance drops in 100Mbps full duplex, the problem might be
> one of configuration between your BCM switch and the external switch.
> Are both ends of the link (between your BCM switch and external switch)
> set to auto-negotiation?
The switch is a LinkSys 5-port 10/100 switch (no configuration options that
I am aware of). All ports on the BCM should be in auto-negotiate mode. In a
previous version I printed the negotiation status of each port, and it showed
100Mbps, full duplex.
Thanks for the pointers. Based on the emails that I have received, I am not
the only one with this problem. Hopefully we can reproduce your results and
provide some feedback to the group.
* Re: Debugging Network Performance
2002-06-05 13:16 ` Allen Curtis
@ 2002-06-05 13:56 ` Bill Fink
0 siblings, 0 replies; 13+ messages in thread
From: Bill Fink @ 2002-06-05 13:56 UTC (permalink / raw)
To: acurtis; +Cc: linuxppc-dev, Bill Fink
On Wed, 5 Jun 2002, "Allen Curtis" wrote:
> > "It would be interesting to see where the cpu is being used. Could you
> > boot with profile=2 and use readprofile to find the worst cpu hogs
> > during a run?"
>
> Where can I find documentation on the profile=## option? Are there also
> tools that will help to interpret the logs?
Since I didn't actually try it, I don't have a lot more information.
From /usr/src/linux/Documentation/kernel-parameters.txt:
profile= [KNL] enable kernel profiling via /proc/profile
(param:log level).
And of course check the readprofile man page.
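A minimal sequence, assuming the System.map of the running kernel is in
/boot, would be something like:

    # boot with profile=2 on the kernel command line, then:
    readprofile -r                                  # reset the counters
    # ... run the FTP transfer ...
    readprofile -m /boot/System.map | sort -nr | head -20

The last command lists the kernel functions that collected the most profiling
ticks during the run.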
-Bill