linuxppc-dev.lists.ozlabs.org archive mirror
* Small UDP packet performance
@ 2004-01-19  3:44 Jacky Lam
  2004-01-19  5:28 ` Eugene Surovegin
  0 siblings, 1 reply; 10+ messages in thread
From: Jacky Lam @ 2004-01-19  3:44 UTC (permalink / raw)
  To: linuxppc-embedded


Hi,

    Currently, I find that my PPC 405EP board has a problem with the
throughput of small UDP packets (188 bytes). The throughput is only
~0.01%. However, for large packets, say 1K, the throughput is very good:
it is more than 50%. Is it a problem with PPC Linux, the ethernet driver,
or generic Linux? Is there any way to tune it? Thanks.


Jacky


** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


* Re: Small UDP packet performance
  2004-01-19  3:44 Small UDP packet performance Jacky Lam
@ 2004-01-19  5:28 ` Eugene Surovegin
  2004-01-19  5:47   ` Eugene Surovegin
  2004-01-20  0:48   ` Jacky Lam
  0 siblings, 2 replies; 10+ messages in thread
From: Eugene Surovegin @ 2004-01-19  5:28 UTC (permalink / raw)
  To: Jacky Lam; +Cc: linuxppc-embedded


On Mon, Jan 19, 2004 at 11:44:55AM +0800, Jacky Lam wrote:
>     Currently, I find that my PPC 405EP board has a problem with the
> throughput of small UDP packets (188 bytes). The throughput is only
> ~0.01%. However, for large packets, say 1K, the throughput is very good:
> it is more than 50%. Is it a problem with PPC Linux, the ethernet driver,
> or generic Linux? Is there any way to tune it? Thanks.

When talking about IP stack/driver performance, you should usually pay
attention to packets per second (pps), not throughput (whatever you mean
by that term).

If you analyze your test cases using pps, you'll probably notice that the
performance of your 405ep board is the same in both cases (at least if
in your tests you were _sending_ data from the 405ep).

When computing "throughput" with a small packet size, don't forget to add
the L1/L2/L3 overhead (inter-frame gap, Ethernet frame headers, IP and UDP
headers); in the case of UDP the total overhead per packet is 66 bytes of
Ethernet bandwidth.
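(That 66 comes from 12 bytes inter-frame gap + 8 bytes preamble/SFD + 14 bytes
Ethernet header + 4 bytes FCS + 20 bytes IP header + 8 bytes UDP header.)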

To demonstrate this point, consider the following (generated) table:

UDP min size - 18, max size - 1472, overhead 66
  20, rate 145348
 120, rate  67204
 220, rate  43706
 320, rate  32383
 420, rate  25720
 520, rate  21331
 620, rate  18221
 720, rate  15903
 820, rate  14108
 920, rate  12677
1020, rate  11510
1120, rate  10539
1220, rate   9720
1320, rate   9018
1420, rate   8411

The first column is the size of the UDP payload; "rate" is the theoretical
maximum number of packets per second you can fit in a 100Mb link.
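
To illustrate the arithmetic, here is a minimal C sketch that reproduces the
table above, assuming the 66-byte per-packet overhead and a 100Mb/s link
(just an illustration of the calculation, not code from the driver):

  #include <stdio.h>

  int main(void)
  {
          const long link_bps = 100000000L; /* 100Mb/s link */
          const int overhead  = 66;         /* L1/L2/L3 overhead per packet */
          int payload;

          /* theoretical max packets/s = link rate / bits per packet on the wire */
          for (payload = 20; payload <= 1420; payload += 100)
                  printf("%4d, rate %6ld\n", payload,
                         link_bps / ((payload + overhead) * 8L));
          return 0;
  }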

Let's say your box is fast enough to send 10K packets per second. As
you can see, that's more than enough to saturate 100Mb Ethernet with big
packets (UDP payload > ~1120 bytes).

But with a 200-byte UDP payload you can only fill ~2.7% of the 100Mb link.

Also, please note that usually you can _send_ more packets than you can
receive, and if you flood the box with a lot of packets you can experience
so-called "congestion collapse" (google "Beyond Softnet" for more details).
This can be addressed by modifying the ethernet driver to use NAPI (some
work is in progress for 4xx).
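
In a nutshell, NAPI switches the receive path from per-packet interrupts to
polling under load: the RX interrupt handler masks further RX interrupts and
schedules a poll routine, which drains the RX ring from the net_rx softirq.
Purely as an illustration of the 2.4-era interface (this is not the 4xx EMAC
code; the my_* names are made-up placeholders):

  #include <linux/netdevice.h>
  #include <linux/interrupt.h>

  /* RX interrupt handler: mask RX IRQs and defer the work to dev->poll */
  static void my_rx_interrupt(int irq, void *dev_id, struct pt_regs *regs)
  {
          struct net_device *dev = dev_id;

          my_disable_rx_irqs(dev);     /* hardware-specific, hypothetical */
          netif_rx_schedule(dev);      /* put dev on the softirq poll list */
  }

  /* dev->poll, called from the net_rx softirq with RX IRQs still masked */
  static int my_poll(struct net_device *dev, int *budget)
  {
          int limit = (*budget < dev->quota) ? *budget : dev->quota;
          /* hypothetical helper: passes packets up via netif_receive_skb() */
          int done = my_process_rx_ring(dev, limit);

          *budget    -= done;
          dev->quota -= done;

          if (done < limit) {          /* RX ring drained */
                  netif_rx_complete(dev);
                  my_enable_rx_irqs(dev);
                  return 0;            /* removed from the poll list */
          }
          return 1;                    /* more work left, poll again */
  }

The driver's init code would also set dev->poll and dev->weight before
registering the device.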

Eugene

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


* Re: Small UDP packet performance
  2004-01-19  5:28 ` Eugene Surovegin
@ 2004-01-19  5:47   ` Eugene Surovegin
  2004-01-20  0:48   ` Jacky Lam
  1 sibling, 0 replies; 10+ messages in thread
From: Eugene Surovegin @ 2004-01-19  5:47 UTC (permalink / raw)
  To: Jacky Lam, linuxppc-embedded


On Sun, Jan 18, 2004 at 09:28:06PM -0800, Eugene Surovegin wrote:
> Let's say your box is fast enough to send 10K packets per second. As
> you can see, that's more than enough to saturate 100Mb Ethernet with big
> packets (UDP payload > ~1120 bytes).
>
> But with a 200-byte UDP payload you can only fill ~2.7% of the 100Mb link.
>

grr, it's ~21% of the 100Mb link (16% effective, without counting overhead).
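
(The arithmetic, using the 66 bytes of overhead from the previous mail:
10,000 pkts/s * (200 + 66) bytes * 8 bits = ~21.3Mb/s on the wire, i.e. ~21%
of a 100Mb link; counting only the 200-byte payload, 10,000 * 200 * 8 =
16Mb/s, i.e. 16% effective.)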

Eugene.

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


* Re: Small UDP packet performance
  2004-01-19  5:28 ` Eugene Surovegin
  2004-01-19  5:47   ` Eugene Surovegin
@ 2004-01-20  0:48   ` Jacky Lam
  2004-01-20  1:02     ` Eugene Surovegin
  1 sibling, 1 reply; 10+ messages in thread
From: Jacky Lam @ 2004-01-20  0:48 UTC (permalink / raw)
  To: Eugene Surovegin; +Cc: linuxppc-embedded


    Thanks. In my case I am using NetPerf to test the UDP performance. I am
using an x86 box with a Gigabit Ethernet card and sending packets to my 405EP
at full speed. I find that the box is nearly dead within the first minute and
loses all the packets. I know that some loss is unavoidable, but I want to
know any way to improve things as much as possible.

    What is NAPI? How can I make use of it? Is there any buffer size in the
UDP stack that I can adjust? Thanks.

Jacky
----- Original Message -----
From: "Eugene Surovegin" <ebs@ebshome.net>
To: "Jacky Lam" <jackylam@astri.org>
Cc: <linuxppc-embedded@lists.linuxppc.org>
Sent: Monday, January 19, 2004 1:28 PM
Subject: Re: Small UDP packet performance


> On Mon, Jan 19, 2004 at 11:44:55AM +0800, Jacky Lam wrote:
> >     Currently, I find that my PPC 405EP board has a problem with the
> > throughput of small UDP packets (188 bytes). The throughput is only
> > ~0.01%. However, for large packets, say 1K, the throughput is very good:
> > it is more than 50%. Is it a problem with PPC Linux, the ethernet driver,
> > or generic Linux? Is there any way to tune it? Thanks.
>
> When talking about IP stack/driver performance, you should usually pay
> attention to packets per second (pps), not throughput (whatever you mean
> by that term).
>
> If you analyze your test cases using pps, you'll probably notice that the
> performance of your 405ep board is the same in both cases (at least if
> in your tests you were _sending_ data from the 405ep).
>
> When computing "throughput" with a small packet size, don't forget to add
> the L1/L2/L3 overhead (inter-frame gap, Ethernet frame headers, IP and UDP
> headers); in the case of UDP the total overhead per packet is 66 bytes of
> Ethernet bandwidth.
>
> To demonstrate this point, consider the following (generated) table:
>
> UDP min size - 18, max size - 1472, overhead 66
>   20, rate 145348
>  120, rate  67204
>  220, rate  43706
>  320, rate  32383
>  420, rate  25720
>  520, rate  21331
>  620, rate  18221
>  720, rate  15903
>  820, rate  14108
>  920, rate  12677
> 1020, rate  11510
> 1120, rate  10539
> 1220, rate   9720
> 1320, rate   9018
> 1420, rate   8411
>
> The first column is the size of the UDP payload; "rate" is the theoretical
> maximum number of packets per second you can fit in a 100Mb link.
>
> Let's say your box is fast enough to send 10K packets per second. As
> you can see, that's more than enough to saturate 100Mb Ethernet with big
> packets (UDP payload > ~1120 bytes).
>
> But with a 200-byte UDP payload you can only fill ~2.7% of the 100Mb link.
>
> Also, please note that usually you can _send_ more packets than you can
> receive, and if you flood the box with a lot of packets you can experience
> so-called "congestion collapse" (google "Beyond Softnet" for more details).
> This can be addressed by modifying the ethernet driver to use NAPI (some
> work is in progress for 4xx).
>
> Eugene


** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


* Re: Small UDP packet performance
  2004-01-20  0:48   ` Jacky Lam
@ 2004-01-20  1:02     ` Eugene Surovegin
  2004-01-20 23:23       ` Andrew May
  0 siblings, 1 reply; 10+ messages in thread
From: Eugene Surovegin @ 2004-01-20  1:02 UTC (permalink / raw)
  To: Jacky Lam; +Cc: linuxppc-embedded


On Tue, Jan 20, 2004 at 08:48:38AM +0800, Jacky Lam wrote:
>
>     Thanks. In my case I am using NetPerf to test the UDP performance. I am
> using an x86 box with a Gigabit Ethernet card and sending packets to my
> 405EP at full speed. I find that the box is nearly dead within the first
> minute and loses all the packets. I know that some loss is unavoidable, but
> I want to know any way to improve things as much as possible.
>
>     What is NAPI? How can I make use of it?

Please use google as I suggested and you will find a good discussion of
the problem you are experiencing.

Right now, there is no publicly available NAPI-enabled driver for 4xx.
I have something done in that direction, but unfortunately I put this task
on hold because of more urgent problems :( (maybe I'll return to it this
or next month).

> Is there any buffer size in the UDP stack that I can adjust?

No, it has nothing to do with UDP socket buffer size.

Eugene


** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


* Re: Small UDP packet performance
  2004-01-20  1:02     ` Eugene Surovegin
@ 2004-01-20 23:23       ` Andrew May
  2004-01-21  1:16         ` Eugene Surovegin
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew May @ 2004-01-20 23:23 UTC (permalink / raw)
  To: Jacky Lam, linuxppc-embedded; +Cc: Eugene Surovegin

[-- Attachment #1: Type: text/plain, Size: 1123 bytes --]

On Mon, Jan 19, 2004 at 05:02:27PM -0800, Eugene Surovegin wrote:
>
> On Tue, Jan 20, 2004 at 08:48:38AM +0800, Jacky Lam wrote:
> >
> >     Thanks. In my case I am using NetPerf to test the UDP performance. I
> > am using an x86 box with a Gigabit Ethernet card and sending packets to my
> > 405EP at full speed. I find that the box is nearly dead within the first
> > minute and loses all the packets. I know that some loss is unavoidable,
> > but I want to know any way to improve things as much as possible.
> >
> >     What is NAPI? How can I make use of it?
>
> Please use google as I suggested and you will find a good discussion of
> the problem you are experiencing.
>
> Right now, there is no publicly available NAPI-enabled driver for 4xx.
> I have something done in that direction, but unfortunately I put this task
> on hold because of more urgent problems :( (maybe I'll return to it this
> or next month).

Here is what I have done for a NAPI version of the driver. I don't have time
either to do a patch or merge this up to the latest kernel. I have frozen down
at 2.4.21-pre4 somewhere and I won't be merging anytime soon. It is stable for
me.

[-- Attachment #2: 405napi.tgz --]
[-- Type: application/x-gtar, Size: 28265 bytes --]


* Re: Small UDP packet performance
  2004-01-20 23:23       ` Andrew May
@ 2004-01-21  1:16         ` Eugene Surovegin
  2004-01-21  2:05           ` Andrew May
  0 siblings, 1 reply; 10+ messages in thread
From: Eugene Surovegin @ 2004-01-21  1:16 UTC (permalink / raw)
  To: Andrew May; +Cc: Jacky Lam, linuxppc-embedded


On Tue, Jan 20, 2004 at 03:23:42PM -0800, Andrew May wrote:
> Here is what I have done for a NAPI version of the driver. I don't have time
> either to do a patch or merge this up to the latest kernel. I have frozen down
> at 2.4.21-pre4 somewhere and I won't be merging anytime soon. It is stable for
> me.

Looks OK, although I chose another way to disable RX IRQs :)

The sad thing is that, because of the bad MAL design, NAPI will be limited
to one EMAC per MAL (there is no way to disable IRQ generation on a
per-channel basis). It's not an issue for the 405, but for the 440 I haven't
figured out how to overcome this limitation yet.

Andrew, I'm just curious, you probably did some measurements with
your NAPI driver, care to share them :) ?

Eugene


** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


* Re: Small UDP packet performance
  2004-01-21  1:16         ` Eugene Surovegin
@ 2004-01-21  2:05           ` Andrew May
  2004-01-21  3:38             ` Eugene Surovegin
  2004-01-22  9:26             ` Jacky Lam
  0 siblings, 2 replies; 10+ messages in thread
From: Andrew May @ 2004-01-21  2:05 UTC (permalink / raw)
  To: linuxppc-embedded


On Tue, Jan 20, 2004 at 05:16:50PM -0800, Eugene Surovegin wrote:
> On Tue, Jan 20, 2004 at 03:23:42PM -0800, Andrew May wrote:
> > Here is what I have done for a NAPI version of the driver. I don't have time
> > either to do a patch or merge this up to the latest kernel. I have frozen down
> > at 2.4.21-pre4 somewhere and I won't be merging anytime soon. It is stable for
> > me.
>
> Looks OK, although I chose another way to disable RX IRQs :)

Do tell.

> The sad thing is that, because of the bad MAL design, NAPI will be limited
> to one EMAC per MAL (there is no way to disable IRQ generation on a
> per-channel basis). It's not an issue for the 405, but for the 440 I haven't
> figured out how to overcome this limitation yet.

Yep, it is a problem even for 405s with more than one ethernet.

> Andrew, I'm just curious, you probably did some measurements with
> your NAPI driver, care to share them :) ?

It may be hard to compare them. I am adding some data to each packet
and routing them to another PCI device. I think things max out at around
18kpps one way. Going full-duplex, things even out pretty well both
ways, with the total pps being slightly higher. The most important
thing is that when the input rate goes higher, the output stays at
the max instead of going down.

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


* Re: Small UDP packet performance
  2004-01-21  2:05           ` Andrew May
@ 2004-01-21  3:38             ` Eugene Surovegin
  2004-01-22  9:26             ` Jacky Lam
  1 sibling, 0 replies; 10+ messages in thread
From: Eugene Surovegin @ 2004-01-21  3:38 UTC (permalink / raw)
  To: Andrew May; +Cc: linuxppc-embedded


On Tue, Jan 20, 2004 at 06:05:01PM -0800, Andrew May wrote:
> > Looks OK, although I chose another way to disable RX IRQs :)
>
> Do tell.

I don't set the I bit in the RX BD, and I use MALx_CFG[EOPIE] to
enable/disable end-of-packet IRQ generation.
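
In other words, the per-descriptor interrupt bit stays clear and end-of-packet
interrupt generation is toggled globally in the MAL configuration register via
DCR accesses. A rough sketch of the idea (DCRN_MALCR and MAL_CR_EOPIE here
stand in for whatever the real DCR number and bit mask are called in the
headers):

  static inline void mal_set_eop_irq(int enable)
  {
          u32 cfg = mfdcr(DCRN_MALCR);    /* read MAL configuration register */

          if (enable)
                  cfg |= MAL_CR_EOPIE;    /* end-of-packet interrupt enable */
          else
                  cfg &= ~MAL_CR_EOPIE;
          mtdcr(DCRN_MALCR, cfg);
  }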

> The most important
> thing is that when the input rate goes higher the output stays at
> the max instead of going down.

Yeah, that's definitely the goal :).

Eugene.

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


* Re: Small UDP packet performance
  2004-01-21  2:05           ` Andrew May
  2004-01-21  3:38             ` Eugene Surovegin
@ 2004-01-22  9:26             ` Jacky Lam
  1 sibling, 0 replies; 10+ messages in thread
From: Jacky Lam @ 2004-01-22  9:26 UTC (permalink / raw)
  To: Andrew May, linuxppc-embedded


Hi,

        Thanks for your patch. I tried it and the throughput really improved
a lot. For a packet size of 512, the throughput jumps from 2% to 55%. For a
packet size of 1024, the throughput jumps from 50% to 66%. Although the
improvement for packet sizes of 256 and 188 is much smaller (0%->4% and
0%->2% respectively), it already helps a lot.

Regards,
Jacky
----- Original Message -----
From: "Andrew May" <acmay@acmay.homeip.net>
To: <linuxppc-embedded@lists.linuxppc.org>
Sent: Wednesday, January 21, 2004 10:05 AM
Subject: Re: Small UDP packet performance


>
> On Tue, Jan 20, 2004 at 05:16:50PM -0800, Eugene Surovegin wrote:
> > On Tue, Jan 20, 2004 at 03:23:42PM -0800, Andrew May wrote:
> > > Here is what I have done for a NAPI version of the driver. I don't have
> > > time either to do a patch or merge this up to the latest kernel. I have
> > > frozen down at 2.4.21-pre4 somewhere and I won't be merging anytime
> > > soon. It is stable for me.
> >
> > Looks OK, although I chose another way to disable RX IRQs :)
>
> Do tell.
>
> > The sad thing is that, because of the bad MAL design, NAPI will be
> > limited to one EMAC per MAL (there is no way to disable IRQ generation on
> > a per-channel basis). It's not an issue for the 405, but for the 440 I
> > haven't figured out how to overcome this limitation yet.
>
> Yep, it is a problem even for 405s with more than one ethernet.
>
> > Andrew, I'm just curious, you probably did some measurements with
> > your NAPI driver, care to share them :) ?
>
> It may be hard to compare them. I am adding some data to each packet
> and routing them to another PCI device. I think things max out at around
> 18kpps one way. Going full-duplex, things even out pretty well both
> ways, with the total pps being slightly higher. The most important
> thing is that when the input rate goes higher, the output stays at
> the max instead of going down.
>


** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/


end of thread

Thread overview: 10+ messages:
2004-01-19  3:44 Small UDP packet performance Jacky Lam
2004-01-19  5:28 ` Eugene Surovegin
2004-01-19  5:47   ` Eugene Surovegin
2004-01-20  0:48   ` Jacky Lam
2004-01-20  1:02     ` Eugene Surovegin
2004-01-20 23:23       ` Andrew May
2004-01-21  1:16         ` Eugene Surovegin
2004-01-21  2:05           ` Andrew May
2004-01-21  3:38             ` Eugene Surovegin
2004-01-22  9:26             ` Jacky Lam
