* 100Mbit ethernet performance on embedded devices
@ 2009-08-19 14:50 Johannes Stezenbach
2009-08-19 15:05 ` Ben Hutchings
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: Johannes Stezenbach @ 2009-08-19 14:50 UTC (permalink / raw)
To: linux-embedded; +Cc: netdev
Hi,
a while ago I was working on a SoC with 200MHz ARM926EJ-S CPU
and integrated 100Mbit ethernet core, connected on internal
(fast) memory bus, with DMA. With iperf I measured:
TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
The CPU load during the iperf test is around
1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
The kernel used in these measurements does not have iptables
support; I think packet filtering would slow it down noticeably,
but I didn't actually try. The ethernet driver uses NAPI,
but it doesn't seem to be a win judging from the irqs/sec number.
The kernel was an ancient 2.6.20.
I tried hard, but I couldn't find any performance figures for
comparison. (All performance figures I found refer to 1Gbit
or 10Gbit server type systems.)
What I'm interested in are some numbers for similar hardware,
to find out if my hardware and/or ethernet driver can be improved,
or if the CPU will always be the limiting factor.
I'd also be interested to know if hardware checksumming
support would improve throughput noticeably in such a system,
or if it is only useful for 1Gbit and above.
Did anyone actually manage to get close to 100Mbit/sec
with similar CPU resources?
TIA,
Johannes
* Re: 100Mbit ethernet performance on embedded devices
2009-08-19 14:50 100Mbit ethernet performance on embedded devices Johannes Stezenbach
@ 2009-08-19 15:05 ` Ben Hutchings
2009-08-19 15:35 ` Jamie Lokier
` (3 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: Ben Hutchings @ 2009-08-19 15:05 UTC (permalink / raw)
To: Johannes Stezenbach; +Cc: linux-embedded, netdev
On Wed, 2009-08-19 at 16:50 +0200, Johannes Stezenbach wrote:
> Hi,
>
> a while ago I was working on a SoC with 200MHz ARM926EJ-S CPU
> and integrated 100Mbit ethernet core, connected on internal
> (fast) memory bus, with DMA. With iperf I measured:
>
> TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
> TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
>
> The CPU load during the iperf test is around
> 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
>
> The kernel used in these measurements does not have iptables
> support; I think packet filtering would slow it down noticeably,
> but I didn't actually try. The ethernet driver uses NAPI,
> but it doesn't seem to be a win judging from the irqs/sec number.
> The kernel was an ancient 2.6.20.
Which driver is this? Is it possible that it does not use NAPI
correctly?
> I tried hard, but I couldn't find any performance figures for
> comparison. (All performance figures I found refer to 1Gbit
> or 10Gbit server type systems.)
>
> What I'm interested in are some numbers for similar hardware,
> to find out if my hardware and/or ethernet driver can be improved,
> or if the CPU will always be the limiting factor.
> I'd also be interested to know if hardware checksumming
> support would improve throughput noticeably in such a system,
> or if it is only useful for 1Gbit and above.
I have no recent experience with this sort of system, but checksum
offload and scatter/gather DMA support should significantly reduce both
CPU and memory bus load.
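To give a feel for what checksum offload saves: the stack otherwise makes a full pass over every payload byte in software. A minimal userspace sketch of the RFC 1071 Internet checksum (an illustration only, not the kernel's optimized implementation):

```c
#include <stdint.h>
#include <stddef.h>

/* RFC 1071 Internet checksum: ones-complement sum of 16-bit words.
 * This per-byte work is what TX/RX checksum offload moves into the
 * MAC hardware. */
uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {               /* sum 16-bit big-endian words */
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len -= 2;
    }
    if (len)                        /* pad an odd trailing byte */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)               /* fold carries back into the sum */
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;
}
```

On the RFC 1071 example bytes 00 01 f2 03 f4 f5 f6 f7 this yields 0x220d (the complement of the RFC's sum 0xddf2).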
Ben.
--
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.
* Re: 100Mbit ethernet performance on embedded devices
2009-08-19 14:50 100Mbit ethernet performance on embedded devices Johannes Stezenbach
2009-08-19 15:05 ` Ben Hutchings
@ 2009-08-19 15:35 ` Jamie Lokier
2009-08-20 12:56 ` Johannes Stezenbach
2009-08-27 15:38 ` H M Thalib
` (2 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: Jamie Lokier @ 2009-08-19 15:35 UTC (permalink / raw)
To: Johannes Stezenbach; +Cc: linux-embedded, netdev
Johannes Stezenbach wrote:
> a while ago I was working on a SoC with 200MHz ARM926EJ-S CPU
> and integrated 100Mbit ethernet core, connected on internal
> (fast) memory bus, with DMA. With iperf I measured:
>
> TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
> TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
>
> The CPU load during the iperf test is around
> 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
>
> The kernel used in these measurements does not have iptables
> support; I think packet filtering would slow it down noticeably,
> but I didn't actually try. The ethernet driver uses NAPI,
> but it doesn't seem to be a win judging from the irqs/sec number.
You should see far fewer interrupts if NAPI was working properly.
Rather than NAPI not being a win, it looks like it's not active at
all.
7500/sec is close to the packet rate for sending TCP with
full-size ethernet packets over a 100Mbit ethernet link.
> What I'm interested in are some numbers for similar hardware,
> to find out if my hardware and/or ethernet driver can be improved,
> or if the CPU will always be the limiting factor.
I have a SoC with a 166MHz ARMv4 (ARM7TDMI I think, but I'm not sure),
and an external RTL8139 100Mbit ethernet chip over the SoC's PCI bus.
It gets a little over 80Mbit/s actual data throughput in both
directions, running a simple FTP client.
> I'd also be interested to know if hardware checksumming
> support would improve throughput noticably in such a system,
> or if it is only useful for 1Gbit and above.
>
> Did anyone actually manage to get close to 100Mbit/sec
> with similar CPU resources?
Remember, the TCP throughput cannot reach 100Mbit/sec due to the
overhead of packet framing. But it should be much closer to 100 than 70.
-- Jamie
* Re: 100Mbit ethernet performance on embedded devices
2009-08-19 15:35 ` Jamie Lokier
@ 2009-08-20 12:56 ` Johannes Stezenbach
2009-08-28 14:41 ` Johannes Stezenbach
0 siblings, 1 reply; 11+ messages in thread
From: Johannes Stezenbach @ 2009-08-20 12:56 UTC (permalink / raw)
To: Jamie Lokier; +Cc: linux-embedded, netdev
On Wed, Aug 19, 2009 at 04:35:34PM +0100, Jamie Lokier wrote:
> Johannes Stezenbach wrote:
> >
> > TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
> > TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
> >
> > The CPU load during the iperf test is around
> > 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
> >
> > The kernel used in these measurements does not have iptables
> > support; I think packet filtering would slow it down noticeably,
> > but I didn't actually try. The ethernet driver uses NAPI,
> > but it doesn't seem to be a win judging from the irqs/sec number.
>
> You should see far fewer interrupts if NAPI was working properly.
> Rather than NAPI not being a win, it looks like it's not active at
> all.
>
> 7500/sec is close to the packet rate for sending TCP with
> full-size ethernet packets over a 100Mbit ethernet link.
From debug output I can see that NAPI works in principle, however
the timing seems to be such that ->poll() almost always completes
before the next packet is received. I followed the NAPI_HOWTO.txt
which came with the 2.6.20 kernel. The delay between irq ->
netif_rx_schedule() -> NET_RX_SOFTIRQ -> ->poll() doesn't seem
to be long enough. But of course my understanding of NAPI is
very limited, probably I missed something...
> > What I'm interested in are some numbers for similar hardware,
> > to find out if my hardware and/or ethernet driver can be improved,
> > or if the CPU will always be the limiting factor.
>
> I have a SoC with a 166MHz ARMv4 (ARM7TDMI I think, but I'm not sure),
> and an external RTL8139 100Mbit ethernet chip over the SoC's PCI bus.
>
> It gets a little over 80Mbit/s actual data throughput in both
> directions, running a simple FTP client.
I found one interesting page which defines network driver performance
in terms of "CPU MHz per Mbit".
http://www.stlinux.com/drupal/node/439
I can't really tell from their table how big a win HW csum is, but
what they call "interrupt mitigation optimisations" (IOW: working NAPI)
seems important. (compare the values for STx7105)
If someone has an embedded platform with 100Mbit ethernet where they can
toggle HW checksumming via ethtool and benchmark both settings under
equal conditions, that would be very interesting.
Thanks
Johannes
* Re: 100Mbit ethernet performance on embedded devices
2009-08-19 14:50 100Mbit ethernet performance on embedded devices Johannes Stezenbach
2009-08-19 15:05 ` Ben Hutchings
2009-08-19 15:35 ` Jamie Lokier
@ 2009-08-27 15:38 ` H M Thalib
2009-08-28 14:26 ` Johannes Stezenbach
2009-09-02 5:09 ` Aras Vaichas
2009-09-02 19:35 ` David Acker
4 siblings, 1 reply; 11+ messages in thread
From: H M Thalib @ 2009-08-27 15:38 UTC (permalink / raw)
To: Johannes Stezenbach; +Cc: linux-embedded, netdev
Hi,
Johannes Stezenbach wrote:
> Hi,
>
> a while ago I was working on a SoC with 200MHz ARM926EJ-S CPU
> and integrated 100Mbit ethernet core, connected on internal
> (fast) memory bus, with DMA. With iperf I measured:
>
Did you use iperf? It is not the right tool for measuring raw ethernet
performance; tools like SmartBits or IXIA are dedicated hardware for
measuring performance and will give you better results.
> TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
> TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
Did you stop unwanted processes on both the PC and the SoC? Make sure
the PC is not the bottleneck: does it give a throughput of at least
95Mbit/s? Is your system connected directly with a crossover cable?
> The CPU load during the iperf test is around
> 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
Did you use vmstat? It is not the correct way to measure CPU load. Or
did you use top? It takes a lot of system resources, which can affect
ethernet performance.
> The kernel used in these measurements does not have iptables
> support, I think packet filtering will slow it down noticably,
> but I didn't actually try.
That's good. iptables will dramatically affect performance; remove all
iptables-related modules, if loaded, before performing the test.
> The ethernet driver uses NAPI,
> but it doesn't seem to be a win judging from the irq/sec number.
> The kernel was an ancient 2.6.20.
>
Not bad, but worth upgrading.
> I tried hard, but I couldn't find any performance figures for
> comparison. (All performance figures I found refer to 1Gbit
> or 10Gbit server type systems.)
You surely won't find performance data for small low-end processors,
because they are not made for that. Also, this data is not something
vendors share; such numbers are benchmarks of their products.
Industry is interested in high-performance processors for network
products; besides ethernet, those have a lot of offloading engines.
> What I'm interested in are some numbers for similar hardware,
> to find out if my hardware and/or ethernet driver can be improved,
> or if the CPU will always be the limiting factor.
It should probably be possible to optimize the hardware+software, but
you have to pay for that.
> I'd also be interested to know if hardware checksumming
> support would improve throughput noticeably in such a system,
> or if it is only useful for 1Gbit and above.
In my experience, for your CPU, about 80% of ethernet line rate is the
most you should expect; don't expect more.
>
> Did anyone actually manage to get close to 100Mbit/sec
> with similar CPU resources?
>
>
> TIA,
> Johannes
--
Thanks & Regards,
H M Thalib.
* Re: 100Mbit ethernet performance on embedded devices
2009-08-27 15:38 ` H M Thalib
@ 2009-08-28 14:26 ` Johannes Stezenbach
0 siblings, 0 replies; 11+ messages in thread
From: Johannes Stezenbach @ 2009-08-28 14:26 UTC (permalink / raw)
To: H M Thalib; +Cc: linux-embedded, netdev
On Thu, Aug 27, 2009 at 09:08:25PM +0530, H M Thalib wrote:
> Johannes Stezenbach wrote:
> >
> >a while ago I was working on a SoC with 200MHz ARM926EJ-S CPU
> >and integrated 100Mbit ethernet core, connected on internal
> >(fast) memory bus, with DMA. With iperf I measured:
>
> Did you used Iperf it is not the correct tool to find the
> performance of ethernet. use tools like Smartbits or IXIA they are
> special hardware to measure the performance . They will give you
> better results
iperf is close to what the targeted application of this system
does -- receive a stream via TCP and process it.
Busybox wget, e.g., is not good for benchmarking; it has too small a
receive buffer and adds a lot of syscall overhead.
> > TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
> > TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
>
> Did you stop unwanted processes on both the PC and the SoC? Make
> sure the PC is not the bottleneck: does it give a throughput of at
> least 95Mbit/s? Is your system connected directly with a crossover
> cable?
They are usually connected via a 100Mbit switch; direct connection
yields no measurable improvement, and the PC can RX/TX ~95Mbit/sec
at close to 0% CPU load.
> >The CPU load during the iperf test is around
> >1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
>
> Did you use vmstat? It is not the correct way to measure CPU load.
> Or did you use top? It takes a lot of system resources, which can
> affect ethernet performance.
I used a small tool similar to busybox nmeter (except that
it prints numbers instead of a bar). When this tool alone
runs the system is 100% idle.
> >I tried hard, but I couldn't find any performance figures for
> >comparison. (All performance figures I found refer to 1Gbit
> >or 10Gbit server type systems.)
>
> You surely won't find performance data for small low-end processors,
> because they are not made for that. Also, this data is not something
> vendors share; such numbers are benchmarks of their products.
I wouldn't trust manufacturer benchmarks anyway. But I was
hoping to get some numbers from people working on similar
networked embedded systems. E.g. it is hard to believe
that wireless routers running OpenWRT have trouble handling
54Mbit on the *wired* interface with a few iptables rules
enabled.
Thanks,
Johannes
* Re: 100Mbit ethernet performance on embedded devices
2009-08-20 12:56 ` Johannes Stezenbach
@ 2009-08-28 14:41 ` Johannes Stezenbach
2009-08-28 17:35 ` Mark Brown
2009-08-29 7:05 ` Simon Holm Thøgersen
0 siblings, 2 replies; 11+ messages in thread
From: Johannes Stezenbach @ 2009-08-28 14:41 UTC (permalink / raw)
To: Jamie Lokier; +Cc: linux-embedded, netdev
On Thu, Aug 20, 2009 at 02:56:49PM +0200, Johannes Stezenbach wrote:
> On Wed, Aug 19, 2009 at 04:35:34PM +0100, Jamie Lokier wrote:
> > Johannes Stezenbach wrote:
> > >
> > > TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
> > > TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
> > >
> > > The CPU load during the iperf test is around
> > > 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
> > >
> > > The kernel used in these measurements does not have iptables
> > > support; I think packet filtering would slow it down noticeably,
> > > but I didn't actually try. The ethernet driver uses NAPI,
> > > but it doesn't seem to be a win judging from the irqs/sec number.
> >
> > You should see far fewer interrupts if NAPI was working properly.
> > Rather than NAPI not being a win, it looks like it's not active at
> > all.
> >
> > 7500/sec is close to the packet rate for sending TCP with
> > full-size ethernet packets over a 100Mbit ethernet link.
>
> From debug output I can see that NAPI works in principle, however
> the timing seems to be such that ->poll() almost always completes
> before the next packet is received. I followed the NAPI_HOWTO.txt
> which came with the 2.6.20 kernel. The delay between irq ->
> netif_rx_schedule() -> NET_RX_SOFTIRQ -> ->poll() doesn't seem
> to be long enough. But of course my understanding of NAPI is
> very limited, probably I missed something...
It would've been nice to get a comment on this. Yeah I know,
old kernel, non-mainline driver...
On this platform NAPI seems to be a win when receiving small packets,
but not for a single max-bandwidth TCP stream. The folks at
stlinux.com seem to be using a dedicated hw timer to delay
the NAPI poll() calls:
http://www.stlinux.com/drupal/kernel/network/stmmac-optimizations
This of course adds some latency to the packet processing,
however in the single TCP stream case this wouldn't matter.
Thanks,
Johannes
* Re: 100Mbit ethernet performance on embedded devices
2009-08-28 14:41 ` Johannes Stezenbach
@ 2009-08-28 17:35 ` Mark Brown
2009-08-29 7:05 ` Simon Holm Thøgersen
1 sibling, 0 replies; 11+ messages in thread
From: Mark Brown @ 2009-08-28 17:35 UTC (permalink / raw)
To: Johannes Stezenbach; +Cc: Jamie Lokier, linux-embedded, netdev
On Fri, Aug 28, 2009 at 04:41:38PM +0200, Johannes Stezenbach wrote:
> On Thu, Aug 20, 2009 at 02:56:49PM +0200, Johannes Stezenbach wrote:
> > which came with the 2.6.20 kernel. The delay between irq ->
> > netif_rx_schedule() -> NET_RX_SOFTIRQ -> ->poll() doesn't seem
> > to be long enough. But of course my understanding of NAPI is
> > very limited, probably I missed something...
> It would've been nice to get a comment on this. Yeah I know,
> old kernel, non-mainline driver...
> On this platform NAPI seems to be a win when receiving small packets,
> but not for a single max-bandwidth TCP stream. The folks at
> stlinux.com seem to be using a dedicated hw timer to delay
> the NAPI poll() calls:
> http://www.stlinux.com/drupal/kernel/network/stmmac-optimizations
> This of course adds some latency to the packet processing,
> however in the single TCP stream case this wouldn't matter.
Does your actual system have any appreciable CPU loading? If so that
will normally have the same effect as inserting a delay in the RX path.
Some of the numbers will often look worse with NAPI when the system is
lightly loaded (though not normally throughput).
* Re: 100Mbit ethernet performance on embedded devices
2009-08-28 14:41 ` Johannes Stezenbach
2009-08-28 17:35 ` Mark Brown
@ 2009-08-29 7:05 ` Simon Holm Thøgersen
1 sibling, 0 replies; 11+ messages in thread
From: Simon Holm Thøgersen @ 2009-08-29 7:05 UTC (permalink / raw)
To: Johannes Stezenbach; +Cc: Jamie Lokier, linux-embedded, netdev
fre, 28 08 2009 kl. 16:41 +0200, skrev Johannes Stezenbach:
> On Thu, Aug 20, 2009 at 02:56:49PM +0200, Johannes Stezenbach wrote:
> > On Wed, Aug 19, 2009 at 04:35:34PM +0100, Jamie Lokier wrote:
> > > Johannes Stezenbach wrote:
> > > >
> > > > TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
> > > > TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
> > > >
> > > > The CPU load during the iperf test is around
> > > > 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
> > > >
> > > > The kernel used in these measurements does not have iptables
> > > > support; I think packet filtering would slow it down noticeably,
> > > > but I didn't actually try. The ethernet driver uses NAPI,
> > > > but it doesn't seem to be a win judging from the irqs/sec number.
> > >
> > > You should see far fewer interrupts if NAPI was working properly.
> > > Rather than NAPI not being a win, it looks like it's not active at
> > > all.
> > >
> > > 7500/sec is close to the packet rate for sending TCP with
> > > full-size ethernet packets over a 100Mbit ethernet link.
> >
> > From debug output I can see that NAPI works in principle, however
> > the timing seems to be such that ->poll() almost always completes
> > before the next packet is received. I followed the NAPI_HOWTO.txt
> > which came with the 2.6.20 kernel. The delay between irq ->
> > netif_rx_schedule() -> NET_RX_SOFTIRQ -> ->poll() doesn't seem
> > to be long enough. But of course my understanding of NAPI is
> > very limited, probably I missed something...
>
> It would've been nice to get a comment on this. Yeah I know,
> old kernel, non-mainline driver...
Tried porting the driver to mainline? That way you will get more than
two years of improvements to the networking stack including NAPI.
There was a rework of NAPI [1] around 2.6.24, you'd probably like to see
commit bea3348eef27e6044b6161fd04c3152215f96411. You could also ask the
linux driver project to help you make the driver suitable for mainline
inclusion.
[1] http://lwn.net/Articles/244640/
Simon Holm Thøgersen
* Re: 100Mbit ethernet performance on embedded devices
2009-08-19 14:50 100Mbit ethernet performance on embedded devices Johannes Stezenbach
` (2 preceding siblings ...)
2009-08-27 15:38 ` H M Thalib
@ 2009-09-02 5:09 ` Aras Vaichas
2009-09-02 19:35 ` David Acker
4 siblings, 0 replies; 11+ messages in thread
From: Aras Vaichas @ 2009-09-02 5:09 UTC (permalink / raw)
To: Johannes Stezenbach; +Cc: linux-embedded, netdev
On Thu, Aug 20, 2009 at 12:50 AM, Johannes Stezenbach <js@sig21.net> wrote:
>
> Hi,
>
> a while ago I was working on a SoC with 200MHz ARM926EJ-S CPU
> and integrated 100Mbit ethernet core, connected on internal
> (fast) memory bus, with DMA. With iperf I measured:
>
> TCP RX ~70Mbit/sec (iperf -s on SoC, iperf -c on desktop PC)
> TCP TX ~56Mbit/sec (iperf -s on desktop PC, iperf -c on SoC)
>
> What I'm interested in are some numbers for similar hardware,
> to find out if my hardware and/or ethernet driver can be improved,
> or if the CPU will always be the limiting factor.
> I'd also be interested to know if hardware checksumming
> support would improve throughput noticeably in such a system,
> or if it is only useful for 1Gbit and above.
>
> Did anyone actually manage to get close to 100Mbit/sec
> with similar CPU resources?
No, but I can share results.
AT91RM9200 (ARM920T), 180MHz
Davicom 9161 PHY
Linux 2.6.26.3
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
# uptime; iperf -s; uptime
00:07:33 up 7 min, load average: 0.02, 0.10, 0.07
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 169.254.0.235 port 5001 connected with 169.254.0.2 port 50762
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 58.6 MBytes 49.1 Mbits/sec
00:07:46 up 7 min, load average: 0.17, 0.13, 0.08
* Re: 100Mbit ethernet performance on embedded devices
2009-08-19 14:50 100Mbit ethernet performance on embedded devices Johannes Stezenbach
` (3 preceding siblings ...)
2009-09-02 5:09 ` Aras Vaichas
@ 2009-09-02 19:35 ` David Acker
4 siblings, 0 replies; 11+ messages in thread
From: David Acker @ 2009-09-02 19:35 UTC (permalink / raw)
To: Johannes Stezenbach; +Cc: linux-embedded, netdev
Johannes Stezenbach wrote:
> What I'm interested in are some numbers for similar hardware,
> to find out if my hardware and/or ethernet driver can be improved,
> or if the CPU will always be the limiting factor.
> I'd also be interested to know if hardware checksumming
> support would improve throughput noticably in such a system,
> or if it is only useful for 1Gbit and above.
>
> Did anyone actually manage to get close to 100Mbit/sec
> with similar CPU resources?
I have a pico station, http://ubnt.com/products/picostation.php with
Atheros MIPS 4KC @ 180MHz. Iperf on this device gives 46.0 Mbits/sec
sending TCP from a PC to the device and 36.2 Mbits/sec sending TCP from
the device to a PC. The NIC is part of the Atheros chipset so PCI is
not involved.
-ack