* 8260 Network Performance update
@ 2002-06-06 5:00 Allen Curtis
2002-06-06 5:41 ` Dan Malek
2002-06-06 12:31 ` Kenneth Johansson
0 siblings, 2 replies; 6+ messages in thread
From: Allen Curtis @ 2002-06-06 5:00 UTC (permalink / raw)
To: Ppc Developers
1. Unidirectional TCP/IP traffic (ftp): "get 180MBfile /dev/null", i.e. a 180MB file fetched to /dev/null.
(Thanks Jean-Denis)
Kernel  | 10BT hub  | 100BT switch
--------+-----------+--------------
2.4.2   |  839 KB/s |  6526 KB/s
2.4.19  |  838 KB/s |  9412 KB/s
These numbers look good!
2. Here is a description of the original test and some new test results.
 ------------      FTP Put        ------------
|            |------------------>|            |
|    Host    |     NFS save      |  8260 PPC  |
|            |<------------------|            |
 ------------                     ------------
Given that the unidirectional transfers look good, I assume the problem
is either resource-related (running out of Ethernet buffers) or
scheduling-related. The following tests use the 2.4.19-pre9 kernel but vary
the number of RX/TX buffers (allocated symmetrically).
RX/TX buffers | 10BT hub  | 100BT switch
--------------+-----------+--------------
16 RTB        |  440 KB/s |   190 KB/s
32 RTB        |  450 KB/s |   230 KB/s
64 RTB        |  450 KB/s |   240 KB/s
The above data shows that this is not a raw communication-speed issue but
rather a scheduling or resource issue in which either FTP or NFS is being
starved. My guess is that FTP is taking all the receive buffers, leaving
nothing for NFS to work with when storing the file.
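A rough way to check that guess is to watch the kernel's TCP memory accounting while the bidirectional transfer runs. The sketch below is only illustrative: it loops over two standard 2.4 /proc entries; if the socket-memory figure in /proc/net/sockstat climbs toward the thresholds in /proc/sys/net/ipv4/tcp_mem during the test, the stack is short of buffer memory rather than link bandwidth. The two-second sampling interval is arbitrary.

/* Illustrative only: sample TCP socket-memory usage during the transfer.
 * If the "mem" figure in /proc/net/sockstat approaches the thresholds in
 * /proc/sys/net/ipv4/tcp_mem, the stack is under memory pressure. */
#include <stdio.h>
#include <unistd.h>

static void dump(const char *path)
{
    char line[256];
    FILE *f = fopen(path, "r");

    if (!f) {
        perror(path);
        return;
    }
    printf("%s:\n", path);
    while (fgets(line, sizeof(line), f))
        printf("  %s", line);
    fclose(f);
}

int main(void)
{
    int i;

    for (i = 0; i < 30; i++) {              /* roughly a minute of samples */
        dump("/proc/net/sockstat");         /* per-protocol socket and memory counters */
        dump("/proc/sys/net/ipv4/tcp_mem"); /* low / pressure / high page thresholds */
        printf("----\n");
        sleep(2);
    }
    return 0;
}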
Does this help to identify the problem and a possible solution? Any
recommendations for additional tests?
TIA!
** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/
* Re: 8260 Network Performance update
2002-06-06 5:00 8260 Network Performance update Allen Curtis
@ 2002-06-06 5:41 ` Dan Malek
2002-06-06 12:41 ` Allen Curtis
2002-06-06 12:31 ` Kenneth Johansson
1 sibling, 1 reply; 6+ messages in thread
From: Dan Malek @ 2002-06-06 5:41 UTC (permalink / raw)
To: acurtis; +Cc: Ppc Developers
Allen Curtis wrote:
> Does this help to identify the problem and a possible solution? Additional
> tests recommendations?
If you are writing a custom application, don't forget that you can set
several socket options (the most popular are the buffer sizes) that will
have an effect on the link performance if you know something about the
link parameters. You could try increasing the number of receive buffers
in the Ethernet driver, and I guess we should modify the driver to DMA
directly into skbufs. I would be surprised if either of these last two
would increase the performance, but I've been surprised by a few things lately :-).
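For example, a minimal user-space sketch of the socket-option idea (nothing 8260-specific; the 128 KB request is only an illustration, and the kernel may clamp or round whatever you ask for):

/* Minimal sketch: enlarge a TCP socket's buffers with setsockopt().
 * Read the values back with getsockopt() to see what was actually granted. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int size = 128 * 1024;                 /* requested buffer size, in bytes */
    socklen_t len = sizeof(size);

    if (sock < 0) {
        perror("socket");
        return 1;
    }
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));

    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len);
    printf("receive buffer actually set to %d bytes\n", size);
    return 0;
}

For stock ftp and NFS you can't edit the application, in which case the system-wide defaults (net.core.rmem_default, net.ipv4.tcp_rmem and friends) are the knob instead.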
-- Dan
** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/
* RE: 8260 Network Performance update
2002-06-06 5:41 ` Dan Malek
@ 2002-06-06 12:41 ` Allen Curtis
2002-06-06 17:16 ` Dan Malek
0 siblings, 1 reply; 6+ messages in thread
From: Allen Curtis @ 2002-06-06 12:41 UTC (permalink / raw)
To: Dan Malek; +Cc: Ppc Developers
> link parameters. You could try increasing the number of receive buffers
> in the Ethernet driver, and I guess we should modify the driver to DMA
> directly into skbufs. I would be surprised if either of these last two
> would increase the performance, but I've been surprised by a few
> things lately :-).
The table below shows performance vs. the number of driver buffers (16-64).
RX/TX buffers | 10BT hub  | 100BT switch
--------------+-----------+--------------
16 RTB        |  440 KB/s |   190 KB/s
32 RTB        |  450 KB/s |   230 KB/s
64 RTB        |  450 KB/s |   240 KB/s
There is a slight improvement when the number of buffers is increased from
16 (the default) to 32, but there does not appear to be any benefit beyond that.
I am guessing that the problem is not in the driver, unless the driver is
supposed to enforce some sort of fair-usage algorithm. Is there a network
usage scheduler of some kind? I do not have this problem on an x86 Red Hat
system...
** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/
* Re: 8260 Network Performance update
2002-06-06 12:41 ` Allen Curtis
@ 2002-06-06 17:16 ` Dan Malek
0 siblings, 0 replies; 6+ messages in thread
From: Dan Malek @ 2002-06-06 17:16 UTC (permalink / raw)
To: acurtis; +Cc: Ppc Developers
Allen Curtis wrote:
> There is a slight improvement when you increase the number of buffers from
> 16 (default) to 32. There does not appear to be any benefit beyond that.
That isn't the number of buffers; it's the number of buffer pages, so don't
go overboard allocating these. The 8260 is going to fill buffers and shove
packets up the IP stack as fast as they appear on the wire. Adding buffers
at the driver level will just cover up processing latencies in the application.
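To put rough numbers on that (back-of-the-envelope only, not measurements from this thread): at 100 Mb/s a full-size frame occupies the wire for roughly 123 microseconds, so going from 16 to 64 receive buffers only buys a few extra milliseconds of burst absorption; it cannot raise sustained throughput if the consumer is the bottleneck. A small sketch of the arithmetic:

/* Back-of-the-envelope sketch: how long a burst N receive buffers can
 * absorb at 100 Mb/s with full-size frames (1518-byte frame plus preamble
 * and inter-frame gap, about 1538 bytes of wire time each). */
#include <stdio.h>

int main(void)
{
    const double wire_bytes = 1538.0;                 /* wire time per frame, in bytes */
    const double line_rate  = 100e6 / 8.0;            /* 100 Mb/s in bytes per second */
    const double frame_time = wire_bytes / line_rate; /* seconds per full-size frame */
    int n;

    for (n = 16; n <= 64; n *= 2)
        printf("%2d buffers absorb a burst of about %.1f ms\n",
               n, n * frame_time * 1000.0);
    return 0;
}

The sustained rate is set by whichever of the FTP receive path or the NFS write path drains slowest, not by the ring size.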
> I am guessing that the problem is not in the driver unless the driver is
> suppose to enforce some soft of fair usage algorithm. Is there a network
> usage scheduler of some kind?
The only thing that will happen is the IP stack will start tossing packets
if there is a memory shortfall.
> ..... I do not have this problem on a x86 RedHat
> system...
Are you doing _exactly_ the same thing? It still looks like you have a
hardware configuration problem on the link, because I know other boards
will operate with the same software at the limit of the 10 or 100 Mbit
throughput.
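One quick way to rule out the classic duplex mismatch, assuming the Ethernet driver answers the standard MII ioctls that mii-tool uses (if it doesn't, the first ioctl below simply fails); "eth0" is a placeholder interface name:

/* Sketch: query the PHY over the MII ioctls to see what the two ends
 * negotiated.  Only works if the driver implements SIOCGMIIPHY/SIOCGMIIREG. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>
#include <linux/mii.h>

int main(void)
{
    struct ifreq ifr;
    struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    if (sock < 0) {
        perror("socket");
        return 1;
    }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* placeholder interface name */

    if (ioctl(sock, SIOCGMIIPHY, &ifr) < 0) {      /* also fills in the PHY address */
        perror("SIOCGMIIPHY (driver may not support MII ioctls)");
        return 1;
    }

    mii->reg_num = MII_BMCR;                       /* basic mode control register */
    ioctl(sock, SIOCGMIIREG, &ifr);
    printf("BMCR 0x%04x: autoneg %s, forced %s %s duplex\n", mii->val_out,
           (mii->val_out & BMCR_ANENABLE) ? "on" : "off",
           (mii->val_out & BMCR_SPEED100) ? "100Mb" : "10Mb",
           (mii->val_out & BMCR_FULLDPLX) ? "full" : "half");
    /* the forced speed/duplex bits only matter when autoneg is off */

    mii->reg_num = MII_LPA;                        /* link partner ability */
    ioctl(sock, SIOCGMIIREG, &ifr);
    printf("LPA  0x%04x: partner advertises 100FDX=%d 100HDX=%d\n", mii->val_out,
           (mii->val_out & LPA_100FULL) != 0,
           (mii->val_out & LPA_100HALF) != 0);
    return 0;
}

If one end is forced to full duplex while the other autonegotiates (and therefore falls back to half duplex), you get late collisions under simultaneous send and receive, which matches good one-way and poor two-way numbers.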
-- Dan
** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/
* Re: 8260 Network Performance update
2002-06-06 5:00 8260 Network Performance update Allen Curtis
2002-06-06 5:41 ` Dan Malek
@ 2002-06-06 12:31 ` Kenneth Johansson
1 sibling, 0 replies; 6+ messages in thread
From: Kenneth Johansson @ 2002-06-06 12:31 UTC (permalink / raw)
To: acurtis; +Cc: Ppc Developers
I don't know if this has any relation to what you are doing, but I have had
serious problems with NFS and the 2.4.19-pre versions on x86.
Some combinations of kernels have even resulted in a complete NFS failure
for the mount point.
Version 2.4.18 works, and 2.4.19-pre10 also works (same version on server
and client); other combinations have ranged from a lockup to 20-30 kB/s
transfer speeds when going from NFS to NFS. One-way transfers usually work OK.
I have not seen any problems with TCP traffic, so it is probably
something different.
Hmm, are you storing the FTP transfers on NFS? Try mounting tmpfs and
running from that.
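A minimal sketch of the tmpfs idea, just for completeness (mount point and size are placeholders, and the usual one-line shell equivalent is "mount -t tmpfs none /mnt/ram"):

/* Sketch: mount a RAM-backed tmpfs so the incoming FTP data never touches
 * NFS.  Needs root; mount point and size are placeholders. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    if (mount("none", "/mnt/ram", "tmpfs", 0, "size=64m") < 0) {
        perror("mount tmpfs");
        return 1;
    }
    printf("tmpfs mounted on /mnt/ram\n");
    return 0;
}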
On Thu, 2002-06-06 at 07:00, Allen Curtis wrote:
>
> 1. Unidirectional TCP/IP traffic (ftp): get 180MBfile /dev/null
> (Thanks Jean-Denis)
>
> Kernel  | 10BT hub  | 100BT switch
> --------+-----------+--------------
> 2.4.2   |  839 KB/s |  6526 KB/s
> 2.4.19  |  838 KB/s |  9412 KB/s
>
> These numbers look good!
>
> 2. Here is a description of the original test and some new test results.
>
>  ------------      FTP Put        ------------
> |            |------------------>|            |
> |    Host    |     NFS save      |  8260 PPC  |
> |            |<------------------|            |
>  ------------                     ------------
>
> Given that the unidirectional transfers look good I assume that the problem
> is either resource related (running out of Ethernet buffers) or scheduling
> related. The following tests use the 2.4.19pre9 kernel but vary the number
> of RX/TX buffers. (symmetric allocation)
>
> RX/TX buffers | 10BT hub  | 100BT switch
> --------------+-----------+--------------
> 16 RTB        |  440 KB/s |   190 KB/s
> 32 RTB        |  450 KB/s |   230 KB/s
> 64 RTB        |  450 KB/s |   240 KB/s
>
>
> The above data shows that this is not a raw communication speed issue but
> rather a scheduling or resource issue where either FTP or NFS is getting
> starved. My guess is that FTP is taking all the receive buffers leaving
> nothing for NFS to work with when storing the file.
>
> Does this help to identify the problem and a possible solution? Additional
> tests recommendations?
>
> TIA!
>
>
>
--
Kenneth Johansson
Ericsson AB Tel: +46 8 404 71 83
Borgafjordsgatan 9 Fax: +46 8 404 72 72
164 80 Stockholm kenneth.johansson@etx.ericsson.se
** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/