* ML403 gigabit ethernet bandwidth - 2.6 kernel
@ 2007-06-23 12:19 Mohammad Sadegh Sadri
2007-06-23 18:48 ` Ming Liu
2007-06-25 11:52 ` Bhupender Saharan
0 siblings, 2 replies; 13+ messages in thread
From: Mohammad Sadegh Sadri @ 2007-06-23 12:19 UTC (permalink / raw)
To: Andrei Konovalov, Linux PPC Linux PPC, Grant Likely
Dear all,
Recently we ran a set of tests on the performance of the Virtex-4 FX hard TEMAC module using an ML403.
We studied all of the posts here carefully. These are the system characteristics:
Board : ML403
EDK : EDK9.1SP2
Hard TEMAC version and PLTEMAC version are both 3.0.a
PPC clock frequency : 300MHz
Kernel : 2.6.21-rc7, downloaded from Grant's git tree something like one week ago
DMA type : 3 (SG DMA)
DRE : enabled for TX and RX (2)
CSUM offload : enabled for both TX and RX
TX and RX FIFO sizes : 131072 bits
The board comes up over an NFS root file system completely and without any problems.
The PC used for these tests has a P4 dual-core CPU at 3.4GHz, 2 gigabytes of memory and dual gigabit ethernet ports, and runs Linux 2.6.21.3.
We have tested the PC's bandwidth and it can easily reach 966 Mbit/s when connected to the same PC (using the same crossover cable used for the ML403 test).
Netperf is compiled with TCP sendfile support enabled (-DHAVE_SENDFILE).
(from board to PC)
netperf -t TCP_SENDFILE -H 10.10.10.250 -F /boot/zImage.elf -- -m 16384 -s 87380 -S 87380
The measured bandwidth for this test was just 40.66 Mbit/s.
The result is the same for netperf from PC to board.
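For the PC to board direction, the equivalent run is driven from the PC against netserver on the board. A minimal sketch, assuming netserver is running on the board and the board answers at 10.10.10.1 (the board's address is not given in this thread):

# on the board: start the netperf server once
netserver

# on the PC: PC -> board bulk transfer with the same socket/message sizes
netperf -t TCP_STREAM -H 10.10.10.1 -- -m 16384 -s 87380 -S 87380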
We have run out of ideas about what to do to improve the bandwidth.
Any help or ideas are appreciated...
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
2007-06-23 12:19 Mohammad Sadegh Sadri
@ 2007-06-23 18:48 ` Ming Liu
2007-06-25 11:52 ` Bhupender Saharan
1 sibling, 0 replies; 13+ messages in thread
From: Ming Liu @ 2007-06-23 18:48 UTC (permalink / raw)
To: mamsadegh, akonovalov, linuxppc-embedded, grant.likely
Dear Mohammad,
There are some parameters that can be adjusted to improve the performance: TX_Threshold, RX_Threshold, TX_waitbound and RX_waitbound. In my system we use TX_Threshold=16, Rx_Threshold=8 and both waitbounds set to 1.
Jumbo frames of 8982 bytes could also be enabled.
Try those hints and share your improvement with us.
BR
Ming
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
@ 2007-06-23 19:08 Mohammad Sadegh Sadri
2007-06-23 19:10 ` Ming Liu
0 siblings, 1 reply; 13+ messages in thread
From: Mohammad Sadegh Sadri @ 2007-06-23 19:08 UTC (permalink / raw)
To: Ming Liu, akonovalov, linuxppc-embedded, grant.likely
Dear Ming,
Thanks very much for the reply.
About the thresholds and waitbound, OK, I'll adjust them in adapter.c.
But what about enabling jumbo frames? Should I do anything special to enable jumbo frame support?
We were thinking that it is enabled by default. Is it?
thanks
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
2007-06-23 19:08 Mohammad Sadegh Sadri
@ 2007-06-23 19:10 ` Ming Liu
0 siblings, 0 replies; 13+ messages in thread
From: Ming Liu @ 2007-06-23 19:10 UTC (permalink / raw)
To: mamsadegh, akonovalov, linuxppc-embedded, grant.likely
Use the following command in Linux, please:
ifconfig eth0 mtu 8982
You should also do the same on your PC for the measurement.
Ming
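A minimal sketch of applying this on both ends; the interface names are assumptions (the PC side in particular may differ):

# on the ML403
ifconfig eth0 mtu 8982
# on the PC, on the interface connected by the crossover cable
ifconfig eth0 mtu 8982
# verify that the new MTU took effect
ifconfig eth0 | grep -i mtu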
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
@ 2007-06-24 14:36 Mohammad Sadegh Sadri
2007-06-25 10:03 ` Ming Liu
0 siblings, 1 reply; 13+ messages in thread
From: Mohammad Sadegh Sadri @ 2007-06-24 14:36 UTC (permalink / raw)
To: eemingliu; +Cc: akonovalov, linuxppc-embedded
Dear Ming
We have changed our system to TX_THRESHOLD=16 and RX_THRESHOLD=8, and in addition we enabled jumbo frames of 8982 bytes.
The results are as follows:
PC --> ML403
TCP_SENDFILE : 38 Mbps
ML403 ---> PC
TCP_SENDFILE : 155 Mbps
The transfer rate from ML403 to PC has improved by a factor of 2.
I see from the posts here on the mailing list that you have reached a bandwidth of 301 Mbps.
We are also wondering why we see no improvement in the PC to ML403 bandwidth.
We also observed that with TX_THRESHOLD=16 and RX_THRESHOLD=2, the PC to ML403 bandwidth increases to something near 60 Mbps.
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
2007-06-24 14:36 Mohammad Sadegh Sadri
@ 2007-06-25 10:03 ` Ming Liu
0 siblings, 0 replies; 13+ messages in thread
From: Ming Liu @ 2007-06-25 10:03 UTC (permalink / raw)
To: mamsadegh; +Cc: akonovalov, linuxppc-embedded
Dear Mohammad,
>The results are as follows:
>PC-->ML403
>TCP_SENDFILE : 38Mbps
>
>ML403--->PC
>TCP_SENDFILE: 155Mbps
This result is surprising: the PC is more powerful than your board, so PC->board should be faster than board->PC.
>The transfer rate from ML403 to PC has improved by a factor of 2,
>I see on the posts here in the mailing list that you have reached a band
width of 301Mbps.
Yes, with all of the features that improve performance enabled, we can get around 300 Mbps for TCP transfer. One more hint: did you enable the caches on your system? Perhaps that will help. Anyway, double-check your hardware design to make sure all features are enabled. That's all I can suggest.
BR
Ming
* Re: ML403 gigabit ethernet bandwidth - 2.6 kernel
2007-06-23 12:19 Mohammad Sadegh Sadri
2007-06-23 18:48 ` Ming Liu
@ 2007-06-25 11:52 ` Bhupender Saharan
1 sibling, 0 replies; 13+ messages in thread
From: Bhupender Saharan @ 2007-06-25 11:52 UTC (permalink / raw)
To: Mohammad Sadegh Sadri; +Cc: Andrei Konovalov, Linux PPC Linux PPC
Hi,
We need to find out where the bottleneck is.
1. Run vmstat on the ML403 board and find out what percentage of the time the CPU is busy while you are transferring the file. That will show whether the CPU is the limit or not.
2. Run oprofile and find out which routines are eating up the CPU time.
Once we have the data from both of the above, we can find the bottlenecks.
Regards
Bhupi
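A minimal sketch of these measurements, assuming the oprofile userland tools are on the root filesystem and the kernel was built with profiling support (the vmlinux path is an assumption):

# on the board, in a second console, while netperf is running
vmstat 1                        # watch the us/sy/id columns for CPU load

# netperf itself can also report CPU utilisation on both ends
netperf -t TCP_STREAM -H 10.10.10.250 -c -C -l 60

# oprofile: see which kernel/driver routines burn the cycles
opcontrol --vmlinux=/boot/vmlinux
opcontrol --start
# ... run the netperf test ...
opcontrol --stop
opreport -l | head -20          # top symbols by sample count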
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
@ 2007-06-25 15:21 Glenn.G.Hart
0 siblings, 0 replies; 13+ messages in thread
From: Glenn.G.Hart @ 2007-06-25 15:21 UTC (permalink / raw)
To: eemingliu; +Cc: akonovalov, linuxppc-embedded, mamsadegh
All,
I am also very interested in the network throughput. I am using the Avnet Mini-Module, which has a V4FX12; the ML403 is very close to the Mini-Module. I am getting a throughput of about 100 Mbps. The biggest difference was turning on the cache; 100 MHz vs. 300 MHz only improved the performance slightly. Using the checksum offloading was also a big help in getting the throughput up. The RX threshold also helped, but jumbo frames did not seem to help. I am not sure what I can do to get the 300 Mbps Ming is getting. I saw in a previous post that someone was using a 128k FIFO depth; I am using a 32k depth.
Glenn
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
@ 2007-06-25 19:42 Greg Crocker
2007-06-26 7:42 ` Ming Liu
0 siblings, 1 reply; 13+ messages in thread
From: Greg Crocker @ 2007-06-25 19:42 UTC (permalink / raw)
To: linuxppc-embedded
I was able to achieve ~320 Mbit/sec data rate using the Gigabit System
Reference Design (GSRD XAPP535/536) from Xilinx. This utilizes the
LocalLink TEMAC to perform the transfers. The reference design provides the
Linux 2.4 drivers that can be ported to Linux 2.6 with a little effort.
This implementation did not use checksum offloading and the data rates were
achieved using TCP_STREAM on netperf.
Greg
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
2007-06-25 19:42 ML403 gigabit ethernet bandwidth - 2.6 kernel Greg Crocker
@ 2007-06-26 7:42 ` Ming Liu
0 siblings, 0 replies; 13+ messages in thread
From: Ming Liu @ 2007-06-26 7:42 UTC (permalink / raw)
To: greg.crocker, linuxppc-embedded
Actually I have asked the Xilinx experts about the statistics. With the PLB_TEMAC we can also get a result like that, say 300 Mbps for TCP. (From their numbers, the throughput is even higher.)
Some reminders from my experience: remember to enable everything in the hardware and software that can improve the performance, such as checksum offloading, the data realignment engines, large FIFOs, and so on, in the configuration of the PLB_TEMAC in EDK. Also remember to enable the caches. On the software side, interrupt coalescing will help as well. At that point we can normally get more than 100 Mbps for TCP, and a jumbo frame size of 8982 will almost double this number.
Have fun.
BR
Ming
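A few quick, driver-agnostic checks on the running system; what they show depends on what the driver actually exposes, so treat this as a sketch:

dmesg | grep -i temac         # probe messages, if the driver logs its DMA/csum setup
ifconfig eth0 | grep -i mtu   # confirm the jumbo MTU really is in effect
ethtool -k eth0               # offload settings, if the driver implements ethtool ops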
>From: "Greg Crocker" <greg.crocker@gmail.com>
>To: linuxppc-embedded@ozlabs.org
>Subject: RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
>Date: Mon, 25 Jun 2007 15:42:09 -0400
>
>I was able to achieve ~320 Mbit/sec data rate using the Gigabit
>System
>Reference Design (GSRD XAPP535/536) from Xilinx. This utilizes the
>LocalLink TEMAC to perform the transfers. The reference design
>provides the
>Linux 2.4 drivers that can be ported to Linux 2.6 with a little
>effort.
>
>This implementation did not use checksum offloading and the data
>rates were
>achieved using TCP_STREAM on netperf.
>
>Greg
>_______________________________________________
>Linuxppc-embedded mailing list
>Linuxppc-embedded@ozlabs.org
>https://ozlabs.org/mailman/listinfo/linuxppc-embedded
_________________________________________________________________
与联机的朋友进行交流,请使用 MSN Messenger: http://messenger.msn.com/cn
^ permalink raw reply [flat|nested] 13+ messages in thread
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
@ 2007-06-26 14:53 Mohammad Sadegh Sadri
2007-06-26 18:12 ` Ming Liu
0 siblings, 1 reply; 13+ messages in thread
From: Mohammad Sadegh Sadri @ 2007-06-26 14:53 UTC (permalink / raw)
To: eemingliu; +Cc: akonovalov, linuxppc-embedded
Dear Ming,
Thanks to your comments, our tests now give the following results:
ML403 ---> PC : 410 Mbit/s
PC ---> ML403 : 210 Mbit/s
We have described the characteristics of our base system in previous posts here.
In addition we have:
1- enabled the PPC caches
2- set BD_IN_BRAM in adapter.c to 1 (the default is 0)
TX_THRESHOLD is 16 and RX_THRESHOLD is 2.
The Virtex-4 FX12 device on the ML403 is now completely full; we do not have any free block memories or logic slices left. Maybe if we had more space we could choose higher values for XTE_SEND_BD_CNT and XTE_RECV_BD_CNT, e.g. 384. Do you think this would improve performance?
There is also another interesting test. We executed netperf on both the PC and the ML403 simultaneously. When we do not put the BDs in BRAM, the performance of the ML403-->PC link drops from 390 Mbit/s to 45 Mbit/s, but when using PLB BRAMs for the BDs the performance decreases from 410 Mbit/s to just 130 Mbit/s. This matters when the user wants to transfer data in both directions simultaneously.
Thanks
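A minimal sketch of driving both directions at once; netserver must be running on both machines, and the board address 10.10.10.1 is an assumption (it is not given in this thread):

# on the board (board -> PC), started at the same time as the PC side
netperf -t TCP_STREAM -H 10.10.10.250 -l 60
# on the PC (PC -> board)
netperf -t TCP_STREAM -H 10.10.10.1 -l 60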
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
2007-06-26 14:53 Mohammad Sadegh Sadri
@ 2007-06-26 18:12 ` Ming Liu
0 siblings, 0 replies; 13+ messages in thread
From: Ming Liu @ 2007-06-26 18:12 UTC (permalink / raw)
To: mamsadegh; +Cc: akonovalov, linuxppc-embedded
Dear Mohammad,
>ML403--->PC : 410Mbits/s
>PC--->ML403 : 210Mbits/s
These results are interesting. In principle, board to PC should be slower than PC to board. Also, your board-to-PC speed is quite fast; I never got that high before. :)
>We have described the characteristics of our base system in previous posts
here
>
>In additiona we have :
>1- enabled the ppc caches
This helps quite a lot.
>2- we have set BD_IN_BRAM in adapter.c to 1. ( default is 0 )
Actually I didn't try modifying this before; my previous results are based on the BDs NOT being in BRAM. :) From my understanding, enabling this option puts the buffer descriptors in BRAM rather than DDR. Perhaps I can also try it and see if there is any improvement on my system.
>TX_THRESHOLD is 16 and RX_THRESHOLD is 2.
>
>the virtex4 fx12 device on ML403 is now completely full, we do not have
any free block memories nor any logic slices. Maybe if we had more space we
could choose higher values for XTE_SEND_BD_CNT and XTE_RECV_BD_CNT i.e.
384. Do you think this will improve performance?
Probably yes, but I never modified these numbers before. My defaults are 512 for each.
>There is also another interesting test,
>We executed netperf on both of PC and ML403 simultanously , when we do not
put BDs in BRAM, the performance of ML403-->PC link drops from 390Mbits to
45Mbits, but when using PLB BRAMs for BDs the performance decreases from
410Mbits/s to just 130Mbita/s. It is important when the user wants to
transfer data in both directions simulatanously.
Definitely! The bottleneck is the CPU's processing capability, so if you send and receive data at the same time the results will be much worse. I think another reason is that TCP is a guaranteed protocol, so acknowledgements come back while you are sending packets out and take a little of your bandwidth away. Compared with the CPU consumption, however, this is probably trivial.
BR
Ming
* RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
@ 2007-07-04 15:17 Mohammad Sadegh Sadri
0 siblings, 0 replies; 13+ messages in thread
From: Mohammad Sadegh Sadri @ 2007-07-04 15:17 UTC (permalink / raw)
To: eemingliu, akonovalov, linuxppc-embedded
DQpEZWFyIEFsbCwNCg0KT3VyIHRlc3RzIHNob3cgdGhhdCBDUFUgdXNhZ2UgaXMgYWx3YXlzIDEw
MCUgZHVyaW5nIG5ldHBlcmYgdGVzdC4NClNvIHRoZSBzcGVlZCBvZiBDUFUgaXMgaW1wb3J0YW50
IGluIHRoZSBvdmVyYWxsIHBlcmZvcm1hbmNlIG9mIHRoZSBnaWdhYml0IGxpbmsuDQpJZiB3ZSBj
YW4gaW5jcmVhc2UgdGhlIENQVSBjb3JlIGNsb2NrIGZyZXF1ZW5jeSB3ZSBtYXkgYWNoaWV2ZSBi
ZXR0ZXIgcmVzdWx0cyB1c2luZyBleGlzdGluZyBoYXJkd2FyZS9zb2Z0d2FyZSBjb25maWd1cmF0
aW9uLg0KDQpJIGtub3cgdGhhdCB0aGUgUFBDIGNvcmUgaW5zaWRlIEZYMTIgY2FuIHJ1biB3aXRo
IGNsb2NrIGZyZXF1ZW5jaWVzIHVwIHRvIDQ1ME1IeiwgSG93ZXZlciBCYXNlIFN5c3RlbSBCdWls
ZGVyIGZvciBNTDQwMyBhbGxvd3MganVzdCBmcmVxdWVuY2llcyBvZiB1cCB0byAzMDBNSHogZm9y
IFBQQyBjb3JlLiBEb2VzIGFueSBib2R5IGhlcmUga25vdyBob3cgSSBjYW4gbWFrZSBQUEMgY29y
ZSB0byBydW4gd2l0aCA0MDBNSHogaW4gTUw0MDM/DQoNCnRoYW5rcw0KDQoNCg0KDQotLS0tLS0t
----------------------------------------
> From: eemingliu@hotmail.com
> To: mamsadegh@hotmail.com
> CC: akonovalov@ru.mvista.com; linuxppc-embedded@ozlabs.org
> Subject: RE: ML403 gigabit ethernet bandwidth - 2.6 kernel
> Date: Tue, 26 Jun 2007 18:12:55 +0000
>
> Dear Mohammad,
>
> >ML403--->PC : 410Mbits/s
> >PC--->ML403 : 210Mbits/s
>
> These results are interesting. In principle, board to PC will be less than
> PC to board. Also, your board-to-PC speed is quite fast; I never had it that
> high before. :)
>
> >We have described the characteristics of our base system in previous posts
> >here.
> >
> >In addition we have:
> >1- enabled the PPC caches
>
> This will help quite a lot.
>
> >2- set BD_IN_BRAM in adapter.c to 1 (the default is 0)
>
> Actually I never tried modifying this before; my previous results are based
> on the buffer descriptors NOT being in BRAM. :) From my understanding,
> enabling this option puts the buffer descriptors in BRAM rather than DDR.
> Perhaps I can also try it and see whether there is any improvement on my
> system.
>
> >TX_THRESHOLD is 16 and RX_THRESHOLD is 2.
> >
> >The Virtex-4 FX12 device on the ML403 is now completely full; we do not
> >have any free block memories nor any logic slices. Maybe if we had more
> >space we could choose higher values for XTE_SEND_BD_CNT and XTE_RECV_BD_CNT,
> >e.g. 384. Do you think this would improve performance?
>
> Probably yes, but I have never modified these numbers before. My defaults
> are 512 for each.
>
> >There is also another interesting test:
> >We executed netperf on both the PC and the ML403 simultaneously. When we do
> >not put the BDs in BRAM, the performance of the ML403--->PC link drops from
> >390Mbits/s to 45Mbits/s, but when using PLB BRAMs for the BDs the
> >performance decreases from 410Mbits/s to just 130Mbits/s. This matters when
> >the user wants to transfer data in both directions simultaneously.
>
> Definitely! The bottleneck is CPU processing capability, so if you send and
> receive data at the same time the results will be much worse. I think
> another reason is that TCP is a guaranteed protocol, so there will be some
> acknowledgements coming back when you send packets out, and a little of your
> bandwidth will be taken away. However, compared with the CPU consumption,
> this is probably trivial.
>
> BR
> Ming
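
For anyone reproducing the bidirectional measurement described above, the rough sketch is simply two streams started at the same time, one from each host; the board address below is only a placeholder, and netserver has to be running on both machines:

# on the ML403: board --> PC stream (PC at 10.10.10.250, as in the earlier tests)
netperf -t TCP_SENDFILE -H 10.10.10.250 -l 60 -F /boot/zImage.elf &
# on the PC: PC --> board stream (assuming the board is at 10.10.10.1)
netperf -t TCP_STREAM -H 10.10.10.1 -l 60 &

Using a fixed test length (-l 60) keeps the two directions overlapping for essentially the whole run.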
^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads:[~2007-07-04 15:18 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-06-25 19:42 ML403 gigabit ethernet bandwidth - 2.6 kernel Greg Crocker
2007-06-26 7:42 ` Ming Liu
-- strict thread matches above, loose matches on Subject: below --
2007-07-04 15:17 Mohammad Sadegh Sadri
2007-06-26 14:53 Mohammad Sadegh Sadri
2007-06-26 18:12 ` Ming Liu
2007-06-25 15:21 Glenn.G.Hart
2007-06-24 14:36 Mohammad Sadegh Sadri
2007-06-25 10:03 ` Ming Liu
2007-06-23 19:08 Mohammad Sadegh Sadri
2007-06-23 19:10 ` Ming Liu
2007-06-23 12:19 Mohammad Sadegh Sadri
2007-06-23 18:48 ` Ming Liu
2007-06-25 11:52 ` Bhupender Saharan