linuxppc-dev.lists.ozlabs.org archive mirror
* [gianfar]bandwidth management problem on mpc8313 based board
@ 2011-06-07 13:02 Vijay Nikam
  2011-06-07 19:22 ` Scott Wood
  2011-06-08 11:25 ` David Laight
  0 siblings, 2 replies; 5+ messages in thread
From: Vijay Nikam @ 2011-06-07 13:02 UTC (permalink / raw)
  To: linuxppc-dev

Dear All,

I have an MPC8313 PowerPC-based board, silicon revision 2.1. The
processor has two Ethernet ports (eTSEC1 and eTSEC2), i.e. eth0 and
eth1; eth0 is a 1 Gbps port and eth1 is a 100 Mbps port. The board
also has a TANTOS2G (PSB6972) L2 switch with one 1 Gbps port, from
which four more 100 Mbps Ethernet ports are derived, and port-based
VLANs are configured for this purpose.

The interface between the switch and eth0 (the processor port) is
RGMII, so the processor port and the switch port are connected over a
1 Gbps link. The other four derived 100 Mbps ports connect to the
external world. The board runs embedded Linux, kernel version 2.6.23
with the HRT patch. The Ethernet controller driver in use is
"gianfar", version 1.3. The driver is configured properly: it brings
up both links, 1000 Mbps (eth0) and 100 Mbps (eth1), as verified with
ethtool.

After this I started bandwidth testing with the iperf tool. When I
run the test on one of the four derived ports, I measure bandwidth in
the range of 80-85 Mbps, but when the same test is run on two ports
simultaneously, the per-port bandwidth drops to 40-45 Mbps.
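For reference, the test setup was along these lines (the addresses,
durations, and options below are illustrative placeholders, not the
exact commands used):

```shell
# Hypothetical sketch of the two-port iperf test. The board runs one
# TCP server; each external host connects through a different derived
# switch port. IP addresses and timings are placeholders.

# On the board:
iperf -s &

# On host 1, attached to derived port 1 of the switch:
iperf -c 192.168.1.10 -t 30 -i 5

# On host 2, attached to derived port 2, started simultaneously:
iperf -c 192.168.1.10 -t 30 -i 5
```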

My understanding is that all four ports should sustain 100 Mbps
simultaneously (as the base port is 1 Gbps). Why does the bandwidth
drop when more than one port is communicating at the same time? Is
there any reason for this, or anything I should check?

Kindly acknowledge, thanks.

Kind Regards,
Vijay Nikam


* Re: [gianfar]bandwidth management problem on mpc8313 based board
  2011-06-07 13:02 [gianfar]bandwidth management problem on mpc8313 based board Vijay Nikam
@ 2011-06-07 19:22 ` Scott Wood
  2011-06-08 10:51   ` Vijay Nikam
  2011-06-08 11:25 ` David Laight
  1 sibling, 1 reply; 5+ messages in thread
From: Scott Wood @ 2011-06-07 19:22 UTC (permalink / raw)
  To: Vijay Nikam; +Cc: linuxppc-dev

On Tue, 7 Jun 2011 18:32:37 +0530
Vijay Nikam <vijay.t.nikam@gmail.com> wrote:

> Dear All,
> 
> I have an MPC8313 PowerPC-based board, silicon revision 2.1. The
> processor has two Ethernet ports (eTSEC1 and eTSEC2), i.e. eth0 and
> eth1; eth0 is a 1 Gbps port and eth1 is a 100 Mbps port. The board
> also has a TANTOS2G (PSB6972) L2 switch with one 1 Gbps port, from
> which four more 100 Mbps Ethernet ports are derived, and port-based
> VLANs are configured for this purpose.
> 
> The interface between the switch and eth0 (the processor port) is
> RGMII, so the processor port and the switch port are connected over a
> 1 Gbps link. The other four derived 100 Mbps ports connect to the
> external world. The board runs embedded Linux, kernel version 2.6.23
> with the HRT patch.

That's rather old.

> The Ethernet controller driver in use is "gianfar", version 1.3.
> The driver is configured properly: it brings up both links,
> 1000 Mbps (eth0) and 100 Mbps (eth1), as verified with ethtool.
> 
> After this I started bandwidth testing with the iperf tool. When I
> run the test on one of the four derived ports, I measure bandwidth
> in the range of 80-85 Mbps, but when the same test is run on two
> ports simultaneously, the per-port bandwidth drops to 40-45 Mbps.
> 
> My understanding is that all four ports should sustain 100 Mbps
> simultaneously (as the base port is 1 Gbps). Why does the bandwidth
> drop when more than one port is communicating at the same time? Is
> there any reason for this, or anything I should check?

What's your CPU utilization?  The CPU may just not be able to keep up with
that much traffic, with the software you're running.

What packet size are you using?

-Scott


* Re: [gianfar]bandwidth management problem on mpc8313 based board
  2011-06-07 19:22 ` Scott Wood
@ 2011-06-08 10:51   ` Vijay Nikam
  2011-06-08 17:00     ` Scott Wood
  0 siblings, 1 reply; 5+ messages in thread
From: Vijay Nikam @ 2011-06-08 10:51 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc-dev

Hello Scott,

Thanks for the prompt reply.

> What's your CPU utilization?  The CPU may just not be able to keep up with
> that much traffic, with the software you're running.
The software I am using to check bandwidth is iperf. Without running
iperf the CPU utilization varies around 30-50%, and with iperf
running it shoots up to 99.9%.

> What packet size are you using?
The packet size is 1518 + VLAN tag (4 bytes) = 1522 bytes.

Another point I would like to clarify: the MPC8313's eth0 (eTSEC1) is
a 1 Gbps port. If more than 50% of CPU time is available, why should
the total bandwidth be limited to less than 100 Mbps? At least
400 Mbps should be expected; please correct me if I am wrong.

Please acknowledge, thanks.

Kind Regards,
Vijay Nikam




* RE: [gianfar]bandwidth management problem on mpc8313 based board
  2011-06-07 13:02 [gianfar]bandwidth management problem on mpc8313 based board Vijay Nikam
  2011-06-07 19:22 ` Scott Wood
@ 2011-06-08 11:25 ` David Laight
  1 sibling, 0 replies; 5+ messages in thread
From: David Laight @ 2011-06-08 11:25 UTC (permalink / raw)
  To: Vijay Nikam, linuxppc-dev

> Subject: [gianfar]bandwidth management problem on mpc8313 based board
...
> I have an MPC8313 PowerPC-based board, silicon revision 2.1. The
> processor has two Ethernet ports (eTSEC1 and eTSEC2), i.e. eth0 and
> eth1; eth0 is a 1 Gbps port and eth1 is a 100 Mbps port. The board
> also has a TANTOS2G (PSB6972) L2 switch with one 1 Gbps port, from
> which four more 100 Mbps Ethernet ports are derived, and port-based
> VLANs are configured for this purpose.
> 
> The interface between the switch and eth0 (the processor port) is
> RGMII, so the processor port and the switch port are connected over a
> 1 Gbps link.
...
> After this I started bandwidth testing with the iperf tool. When I
> run the test on one of the four derived ports, I measure bandwidth
> in the range of 80-85 Mbps, but when the same test is run on two
> ports simultaneously, the per-port bandwidth drops to 40-45 Mbps.

To summarise: you have a GbE port connected by RGMII (crossover) to
an on-board switch that is configured to use VLAN tagging to drive
four external 100M ports?

I see three likely reasons for the aggregate throughput being constant:
1) The switch has limited throughput/buffering
2) The host really is 100% busy
3) The remote system has limited throughput

I'd vote for the system being busy and 'top' (or whatever you are
using) lying about the CPU usage. Measuring free CPU time by counting
it in a low-priority process is much more accurate than relying on
the 'code interrupted by timer tick' scheme.
(Clearly the scheduler could use a high-res timestamp on entry/exit
to the idle loop and/or process switch - but, to my knowledge, the
Linux kernel only uses the timer interrupt.)
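A minimal sketch of that counting technique (the 2-second windows and
the use of date(1) for timing are illustrative; forking date each
iteration is crude, but keeps this plain POSIX sh):

```shell
#!/bin/sh
# Estimate free CPU time by counting iterations in a low-priority busy
# loop, then comparing against a calibration run taken on an idle
# system. All interval values here are illustrative.

count_iterations() {
    # Count loop iterations for $1 seconds.
    end=$(( $(date +%s) + $1 ))
    n=0
    while [ "$(date +%s)" -lt "$end" ]; do
        n=$((n + 1))
    done
    echo "$n"
}

# Calibration: run once while the system is otherwise idle.
idle_count=$(count_iterations 2)

# Measurement: same loop at the lowest priority while iperf is running.
busy_count=$(nice -n 19 sh -c '
    end=$(( $(date +%s) + 2 )); n=0
    while [ "$(date +%s)" -lt "$end" ]; do n=$((n + 1)); done
    echo "$n"')

# Free CPU is roughly the ratio of the two counts.
echo "approx free CPU: $(( busy_count * 100 / idle_count ))%"
```

Because the low-priority loop only gets cycles nothing else wants, its
iteration count tracks genuinely free CPU, independent of how the
timer-tick sampling attributes time.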

	David


* Re: [gianfar]bandwidth management problem on mpc8313 based board
  2011-06-08 10:51   ` Vijay Nikam
@ 2011-06-08 17:00     ` Scott Wood
  0 siblings, 0 replies; 5+ messages in thread
From: Scott Wood @ 2011-06-08 17:00 UTC (permalink / raw)
  To: Vijay Nikam; +Cc: linuxppc-dev

On Wed, 8 Jun 2011 16:21:03 +0530
Vijay Nikam <vijay.t.nikam@gmail.com> wrote:

> Hello Scott,
> 
> Thanks for the prompt reply.
> 
> > What's your CPU utilization?  The CPU may just not be able to keep up with
> > that much traffic, with the software you're running.
> The software I am using to check bandwidth is 'iperf'.

Plus the Linux network stack.

> Without running iperf the
> CPU utilization varies around 30-50%, and with iperf running it
> shoots up to 99.9%.

OK, so you're CPU limited.

You might want to try a newer kernel; things may have improved in the past
several years.

If the reason you're running such an old kernel is because you're using the
Freescale BSP, contact Freescale support and ask what performance you're
supposed to be able to get (as well as if they have a newer BSP available).

> > What packet size are you using?
> The packet size is - 1518 + VLAN_Tag (4Bytes) = 1522 Bytes
> 
> Another point I would like to clarify: the MPC8313's eth0 (eTSEC1)
> is a 1 Gbps port. If more than 50% of CPU time is available, why
> should the total bandwidth be limited to less than 100 Mbps? At
> least 400 Mbps should be expected; please correct me if I am wrong.

I assume the remote end isn't CPU limited...  Is the test limited by latency
or bandwidth?  You're sure that you've actually got a gigabit link
end-to-end?  Didn't you say the 30-50% was without running iperf at all --
did you mean running it only on one port?

Beyond that, I guess you'd have to do some debugging to see where the
packets are getting dropped.
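The usual starting point for that debugging is the drop/error counters
at each layer. A sketch of the sort of commands involved (eth0 is from
this thread; the exact counter names vary by driver, so the grep
patterns are only a guess at what to look for):

```shell
# Look for where packets are being lost on the board. Counter names
# differ between drivers and kernel versions; the patterns below are
# illustrative, not gianfar-specific.

# Kernel-level per-interface totals, errors, and drops.
cat /proc/net/dev

# Driver/hardware statistics exposed via ethtool.
ethtool -S eth0 | grep -i -e drop -e error -e fifo

# Protocol-level counters, e.g. receive buffer overruns.
netstat -s
```

Comparing these before and after an iperf run shows which layer is
discarding traffic.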

-Scott

