linuxppc-dev.lists.ozlabs.org archive mirror
* Re: Debugging Network Performance
@ 2002-06-05  5:18 Bill Fink
  2002-06-05 13:16 ` Allen Curtis
  0 siblings, 1 reply; 13+ messages in thread
From: Bill Fink @ 2002-06-05  5:18 UTC (permalink / raw)
  To: LinuxPPC Developers; +Cc: Bill Fink


On Tue, 4 Jun 2002, Allen Curtis wrote:

> > If you think it is a kernel problem some benchmarks may help you narrow
> > down the problem. Networks are pretty fluid and sometimes hard to get
> > reproducible results. When I try to determine driver/kernel network
> > performance I try to use an isolated network where I have control over all
> > traffic or I use test hardware such as IXIA or Smartbits.
>
> All testing is done on an isolated network. I do not believe that the
> problem is in the driver itself. The driver has not changed significantly. I
> do need to check the error path since it appears that errors actually help
> performance.
>
> What is the best way to track packet processing through the kernel?

When I was helping to investigate a performance problem with the SUNGEM
driver versus the GMAC driver, Anton Blanchard made the following suggestion:

    "It would be interesting to see where the cpu is being used. Could you
    boot with profile=2 and use readprofile to find the worst cpu hogs
    during a run?"

I actually never tried doing this as the problem was resolved through
other methods, but it sounded like something possibly useful to try.
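
For what it's worth, a minimal sketch of that workflow on a 2.4 kernel
(assuming the kernel was built with profiling support and that the
System.map path below matches your running kernel):

  # boot with profile=2 added to the kernel command line, then:
  readprofile -r                              # reset the profiling counters
  # ... run the transfer ...
  readprofile -m /boot/System.map | sort -nr | head -20   # worst CPU hogs first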

The poor SUNGEM performance turned out to be caused by a large number of
unnecessary interrupts, so you might want to check the eth0 interrupt
counts in /proc/interrupts and compare the two cases.
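
Something like the following would show the interrupt rate (the exact name
in /proc/interrupts depends on how the driver registered its IRQ):

  grep eth0 /proc/interrupts      # note the count
  # ... run the transfer ...
  grep eth0 /proc/interrupts      # the difference over the elapsed time
                                  # gives interrupts/sec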

						-Bill

** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/

* RE: Debugging Network Performance
@ 2002-06-05 12:18 Jean-Denis Boyer
  2002-06-05 13:35 ` Allen Curtis
  0 siblings, 1 reply; 13+ messages in thread
From: Jean-Denis Boyer @ 2002-06-05 12:18 UTC (permalink / raw)
  To: 'acurtis@onz.com'; +Cc: linuxppc-embedded


Allen,

With my 200MHz 8260-based board, using kernel 2.4.19-pre7,
I fetch a large file (~50 MB) with ftp and send it to /dev/null:
  get large_file /dev/null
and I obtain the following performance:
  50216878 bytes received in 5.52 seconds (9090508 bytes/s)

About 8877 kbytes/sec! This is around the maximum for a 100Mbps link.

In 10Mbps half duplex, I achieve a

The connection is in half duplex on the board side,
and full duplex on the ftp server (It crosses a switch).
Switching to full duplex for both does not affect the performance.

If the performance drops in 100Mbps full duplex, the problem might be
one of configuration between your BCM switch and the external switch.

Are both ends of the link (between your BCM switch and the external switch)
set to auto-negotiation? If you disable auto-negotiation on one side only
(for example, by forcing it to full duplex), you should also force the other
side to full duplex.
That is important, and cost me hours of research. ;-)
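
If mii-tool (from net-tools) or ethtool is available on your board, something
like this can show and force the PHY settings, assuming the Ethernet driver
supports the MII ioctls:

  mii-tool -v eth0                 # negotiated speed/duplex and link partner abilities
  mii-tool -F 100baseTx-FD eth0    # force 100Mbps full duplex
                                   # (then set the switch port to match)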

Hope this helps.

--------------------------------------------
 Jean-Denis Boyer, B.Eng., System Architect
 Mediatrix Telecom Inc.
 4229 Garlock Street
 Sherbrooke (Québec)
 J1L 2C8  CANADA
 (819)829-8749 x241
--------------------------------------------

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/

* RE: Debugging Network Performance
@ 2002-06-04 14:35 Mark Wisner
  2002-06-04 14:46 ` Allen Curtis
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Wisner @ 2002-06-04 14:35 UTC (permalink / raw)
  To: acurtis; +Cc: linuxppc-dev


Allen,
KGDB may be a good start. netif_rx() is where the packet gets passed to the
stack. It basically puts the packet on a linked list for ip_rcv() in
ip_input.c to pick up and start processing it through the stack until it
passes the results to the application. This is tough stuff. Let me know how
you make out.
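
A rough sketch of that with a cross gdb attached to the target's kgdb serial
stub (the gdb binary name and the serial device below are only placeholders
for your particular setup):

  # ppc-linux-gdb and /dev/ttyS0 are placeholders for your toolchain/console
  ppc-linux-gdb vmlinux
  (gdb) target remote /dev/ttyS0
  (gdb) break netif_rx
  (gdb) break ip_rcv
  (gdb) continue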

I would guess the IP stack has not changed much between kernels. I would be
more concerned about how the scheduler is working and other kernel tasks.

Mark K. Wisner
Advisory Software Engineer
IBM Microelectronics
3039 Cornwallis Rd
RTP, NC 27709
Tel. 919-254-7191
Fax 919-543-7575


"Allen Curtis" <acurtis@onz.com>@lists.linuxppc.org on 06/04/2002 09:30:16
AM

Please respond to <acurtis@onz.com>

Sent by:    owner-linuxppc-dev@lists.linuxppc.org


To:    Mark Wisner/Raleigh/IBM@IBMUS
cc:    <linuxppc-dev@lists.linuxppc.org>
Subject:    RE: Debugging Network Performance




> Netperf can give you a very detailed report about network performance. If
> you think your problem is related to network hardware problems,
> look at the
> errors listed in "ifconfig". This should tell you if you are getting bad
> packets or dropping packets.

I did fix a problem in the driver and now there are no errors reported by
ifconfig.

> If you think it is a kernel problem some benchmarks may help you narrow
> down the problem. Networks are pretty fluid and sometimes hard to get
> reproducible results. When I try to determine driver/kernel network
> performance I try to use an isolated network where I have control over all
> traffic or I use test hardware such as IXIA or Smartbits.

All testing is done on an isolated network. I do not believe that the
problem is in the driver itself. The driver has not changed significantly. I
do need to check the error path since it appears that errors actually help
performance.

What is the best way to track packet processing through the kernel?


** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/

* Re: Debugging Network Performance
@ 2002-06-04 11:12 Mark Wisner
  2002-06-04 13:30 ` Allen Curtis
  0 siblings, 1 reply; 13+ messages in thread
From: Mark Wisner @ 2002-06-04 11:12 UTC (permalink / raw)
  To: acurtis; +Cc: linuxppc-dev


Allen,
Netperf can give you a very detailed report about network performance. If
you think your problem is related to network hardware problems, look at the
errors listed in "ifconfig". This should tell you if you are getting bad
packets or dropping packets.
If you think it is a kernel problem some benchmarks may help you narrow
down the problem. Networks are pretty fluid and sometimes hard to get
reproducible results. When I try to determine driver/kernel network
performance I try to use an isolated network where I have control over all
traffic or I use test hardware such as IXIA or Smartbits.
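
For example (assuming netperf/netserver are installed on both ends;
"testhost" is only a placeholder):

  netserver                                  # on the far end
  netperf -H testhost -t TCP_STREAM -l 30    # 30-second TCP throughput test
  ifconfig eth0                              # check RX/TX errors, dropped, overruns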

Mark K. Wisner
Advisory Software Engineer
IBM Microelectronics
3039 Cornwallis Rd
RTP, NC 27709
Tel. 919-254-7191
Fax 919-543-7575


** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/

* Debugging Network Performance
@ 2002-06-04  4:34 Allen Curtis
  2002-06-04  9:50 ` Kenneth Johansson
  0 siblings, 1 reply; 13+ messages in thread
From: Allen Curtis @ 2002-06-04  4:34 UTC (permalink / raw)
  To: linuxppc-embedded


I was wondering if anyone could provide some pointers on monitoring and
debugging network communications. The performance of Ethernet communications
seems to vary with kernel revisions. I would like to analyze this problem
but I need a little guidance on the issue.

TIA!

=========== Previous 8260 Ethernet email ==================

2. Some relative performance measurements (HHL 2.4.2 vs. 2.4.19pre9)
	            |  10T hub    |  100BT switch
	2.4.2       |  410 KBps   |  750 KBps
	2.4.19      |  440 KBps   |  190 KBps

	RedHat 2.4.18-3 (x86):  3900 KBps

3. TOP shows as much as 48% system utilization during a single FTP transfer.
With the fix mentioned in #1, there are 0 errors reported by ifconfig.
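
One way to watch where that system time goes during the transfer, assuming
vmstat (procps) is available on the board:

  vmstat 1        # watch the us/sy/id CPU columns while the transfer runs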

I may be missing something but I believe that these numbers are going the
wrong way. There is still the question of performance decrease when using a
100BT switch. Signal integrity cannot be the only suspect, considering that
performance did increase with the older version of the kernel (although
nothing like the workstation performance).


** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/

* Debugging Network Performance
@ 2002-06-04  3:27 Allen Curtis
  0 siblings, 0 replies; 13+ messages in thread
From: Allen Curtis @ 2002-06-04  3:27 UTC (permalink / raw)
  To: linuxppc-dev


I was wondering if anyone could provide some pointers on monitoring and
debugging network communications. The performance of Ethernet communications
seems to vary with kernel revisions. I would like to analyze this problem
but I need a little guidance on the issue.

TIA!

=========== Previous 8260 Ethernet email ==================

2. Some relative performance measurements (HHL 2.4.2 vs. 2.4.19pre9)
	            |  10T hub    |  100BT switch
	2.4.2       |  410 KBps   |  750 KBps
	2.4.19      |  440 KBps   |  190 KBps

	RedHat 2.4.18-3 (x86):  3900 KBps

3. TOP shows as much as 48% system utilization during a single FTP transfer.
With the fix mentioned in #1, there are 0 errors reported by ifconfig.

I may be missing something but I believe that these numbers are going the
wrong way. There is still the question of performance decrease when using a
100BT switch. Signal integrity cannot be the only suspect, considering that
performance did increase with the older version of the kernel (although
nothing like the workstation performance).

** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/



Thread overview: 13+ messages
2002-06-05  5:18 Debugging Network Performance Bill Fink
2002-06-05 13:16 ` Allen Curtis
2002-06-05 13:56   ` Bill Fink
  -- strict thread matches above, loose matches on Subject: below --
2002-06-05 12:18 Jean-Denis Boyer
2002-06-05 13:35 ` Allen Curtis
2002-06-04 14:35 Mark Wisner
2002-06-04 14:46 ` Allen Curtis
2002-06-04 15:05   ` Michael Fischer
2002-06-04 11:12 Mark Wisner
2002-06-04 13:30 ` Allen Curtis
2002-06-04  4:34 Allen Curtis
2002-06-04  9:50 ` Kenneth Johansson
2002-06-04  3:27 Allen Curtis
