From: "Bhupender Saharan"
Date: Mon, 25 Jun 2007 04:52:50 -0700
To: "Mohammad Sadegh Sadri"
Cc: Andrei Konovalov, Linux PPC Embedded List <linuxppc-embedded@ozlabs.org>
Subject: Re: ML403 gigabit ethernet bandwidth - 2.6 kernel

Hi,

We need to find out where the bottleneck is.

1. Run vmstat on the ML403 board and find out what percentage of the CPU is busy while you are transferring the file. That will show whether the CPU is saturated.
2. Run oprofile and find out which routines are eating up the CPU time (see the sketch below).
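A rough sketch of both steps (the vmlinux path and the oprofile setup here are assumptions; adjust them for your kernel build):

    # On the ML403, in a second shell, while the transfer is running:
    vmstat 1

    # oprofile, assuming the kernel was built with profiling support
    # and an uncompressed vmlinux image is available on the board:
    opcontrol --init
    opcontrol --vmlinux=/path/to/vmlinux
    opcontrol --start
    # ... run the netperf transfer ...
    opcontrol --stop
    opreport -l | head -20

In the vmstat output, watch the us/sy/id columns; in the opreport output, the top symbols show where the cycles are going.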

Once we have the data from both of the above, we can pinpoint the bottleneck.


Regards
Bhupi


On 6/23/07, Mohammad Sadegh Sadri <mamsadegh@hotmail.com> wrote:

Dear all,

Recently we ran a set of performance tests on the Virtex-4 FX hard TEMAC module using an ML403.

We studied all of the posts here carefully. These are the system characteristics:

Board : ML403
EDK : 9.1 SP2
Hard TEMAC and PLTEMAC versions : both 3.0.a
PPC clock frequency : 300 MHz
Kernel : 2.6.21-rc7, pulled from Grant's git tree about a week ago
DMA type : 3 (SG DMA)
DRE : enabled for TX and RX (2)
CSUM offload : enabled for both TX and RX
TX and RX FIFO sizes : 131072 bits

The board boots over an NFS root file system without any problems.

The PC used for these tests: P4 dual-core CPU at 3.4 GHz, 2 GB of memory, dual gigabit Ethernet ports, running Linux 2.6.21.3.
We have tested the PC's bandwidth, and it easily reaches 966 Mbit/s when connected to the same PC (using the same crossover cable used for the ML403 test).
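(Presumably that baseline was measured with a plain netperf stream test along the lines below; the address is just an example:)

    # on the receiving side:
    netserver
    # on the sending side:
    netperf -t TCP_STREAM -H 10.10.10.250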

Netperf is compiled with TCP sendfile support enabled (-DHAVE_SENDFILE).

(from board to PC)
netperf -t TCP_SENDFILE -H 10.10.10.250 -F /boot/zImage.elf -- -m 16384 -s 87380 -S 87380

The measured bandwidth for this test was just 40.66 Mbit/s.
The same holds for netperf from the PC to the board.
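For reference, the reverse-direction run is a plain stream test along these lines (10.10.10.1 is a placeholder for the board's IP address):

    netperf -t TCP_STREAM -H 10.10.10.1 -- -m 16384 -s 87380 -S 87380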

We have run out of ideas for improving the bandwidth.
Any help or suggestions would be appreciated.

