* gigabit ethernet
@ 2004-02-12 8:39 satya srikanth
2004-02-12 9:34 ` Xiaoliang (David) Wei
0 siblings, 1 reply; 8+ messages in thread
From: satya srikanth @ 2004-02-12 8:39 UTC (permalink / raw)
To: netdev
Sir,
   I need some help from you. I am using two Xeon (2 GHz) machines, each
with two processors, running the Linux 2.4.20-8smp kernel and fitted with
Intel PRO/1000 gigabit NICs (e1000 driver). I connected the two through a
gigabit switch. I am getting gigabit speed only if I use my own TCP sockets
sending packets of around 1400 bytes. If I send packets of around 500
bytes, I get a maximum of only 500 Mbps. I noticed that the NIC is
receiving all the packets, but they are getting dropped in the kernel.
   I tried changing settings such as netdev_max_backlog (to 30000),
rmem_max, wmem_max and txqueuelen, but to no avail. Can you please suggest
changes I need to make to achieve this speed (like changing the number of
Rx/Tx interrupts, etc.)? Can you also suggest other links where I can find
useful information? Would the Linux Router Project (LRP) help me achieve
this?
with regards,
Satya Srikanth.
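For readers reproducing this, a minimal sketch of the kind of test sender
described above follows; this is an added illustration, not Satya's actual
code, and the peer address, port and payload size are placeholders.
SO_SNDBUF is the per-socket knob that wmem_max caps, and TCP_NODELAY is
set so that small writes can actually leave the host as small segments.

    /* Hypothetical test sender, roughly like the one described above:
     * connect over TCP and write fixed-size payloads in a tight loop.
     * The peer address, port and payload size are placeholders. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int payload = (argc > 1) ? atoi(argv[1]) : 1400; /* bytes per write */
        int sndbuf  = 262144;                            /* capped by wmem_max */
        int one     = 1;
        char *buf   = calloc(1, payload);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
        /* Disable Nagle so small writes can go out as small segments. */
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(5001);                     /* placeholder port */
        inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr); /* placeholder peer */

        if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
            perror("connect");
            return 1;
        }
        for (;;) {
            if (write(fd, buf, payload) < 0) {
                perror("write");
                break;
            }
        }
        close(fd);
        free(buf);
        return 0;
    }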
* Re: gigabit ethernet
2004-02-12 8:39 gigabit ethernet satya srikanth
@ 2004-02-12 9:34 ` Xiaoliang (David) Wei
2004-02-13 10:38 ` satya srikanth
0 siblings, 1 reply; 8+ messages in thread
From: Xiaoliang (David) Wei @ 2004-02-12 9:34 UTC (permalink / raw)
To: satya srikanth; +Cc: netdev
Hi Satya,
    Did you check the CPU utilization? If you use a smaller packet size,
the interrupt rate may be a problem.
    You can modulate the NIC's interrupt rate. See the document for
details:
http://www.intel.com/support/network/adapter/1000/e1000.htm#parameters
    Also, I assume the round-trip propagation delay for your
connections is very small (say, <10 ms)?
-David
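To make the interrupt-rate point concrete, a back-of-the-envelope sketch
(an added illustration, not part of David's mail; the 8000 interrupts/s
moderation ceiling is an assumed example value, not an e1000 default):

    /* Illustration: packet rate at ~1 Gbps for several frame sizes, and
     * packets per interrupt if the NIC moderates interrupts to a fixed
     * ceiling (8000 ints/s is an assumed example, not a driver default). */
    #include <stdio.h>

    int main(void)
    {
        const double rate_bps = 1e9;
        const double ints_per_sec = 8000.0;
        const int sizes[] = { 1500, 1000, 500, 100 };

        for (int i = 0; i < 4; i++) {
            double pps = rate_bps / (sizes[i] * 8.0);
            printf("%5d-byte frames: ~%7.0f pkts/s, ~%5.1f pkts/interrupt\n",
                   sizes[i], pps, pps / ints_per_sec);
        }
        return 0;
    }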
satya srikanth wrote:
> Sir,
>    I need some help from you. I am using two Xeon (2 GHz) machines, each
> with two processors, running the Linux 2.4.20-8smp kernel and fitted with
> Intel PRO/1000 gigabit NICs (e1000 driver). I connected the two through a
> gigabit switch. I am getting gigabit speed only if I use my own TCP
> sockets sending packets of around 1400 bytes. If I send packets of around
> 500 bytes, I get a maximum of only 500 Mbps. I noticed that the NIC is
> receiving all the packets, but they are getting dropped in the kernel.
>    I tried changing settings such as netdev_max_backlog (to 30000),
> rmem_max, wmem_max and txqueuelen, but to no avail. Can you please
> suggest changes I need to make to achieve this speed (like changing the
> number of Rx/Tx interrupts, etc.)? Can you also suggest other links where
> I can find useful information? Would the Linux Router Project (LRP) help
> me achieve this?
>
> with regards,
> Satya Srikanth.
--
------------------------------------------------------
Xiaoliang (David) Wei Graduate Student of CS@Caltech
WWW: http://www.cs.caltech.edu/~weixl
======================================================
* Re: gigabit ethernet
2004-02-12 9:34 ` Xiaoliang (David) Wei
@ 2004-02-13 10:38 ` satya srikanth
2004-02-13 19:36 ` Cheng Jin
0 siblings, 1 reply; 8+ messages in thread
From: satya srikanth @ 2004-02-13 10:38 UTC (permalink / raw)
To: Xiaoliang (David) Wei; +Cc: netdev
Sir,
Thanks a lot for the reply.
    I have checked the CPU utilization. It averages around 50% and reaches
100% in some instances. I have also looked at Intel's document. First of
all, I could not understand why all my softirqs go only to cpu0 when I
have multiple processors. When I used a 2.4.18 kernel, I did not face this
problem; now I am using 2.4.20-8smp. Intel says that e1000 does not use
NAPI by default, but I don't know why cpu0 is handling all the softirqs
while the other processors sit idle.
    I also found that the number of interrupts is reasonable, but the
number of packets per interrupt averages around 40 in my case. As my CPU
is not able to handle all of these in one jiffy (time_squeeze), I am
hitting the throttle count and thus drops. When I changed
netdev_max_backlog to 300000 and rmem_default to 300000000, I was able to
handle all packets received by the interface. Is it OK to have such high
values?
    My round-trip propagation delay is <0.2 ms, but I could not understand
how it would affect the performance. Please throw some light on this.
regards,
Satya
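A minimal sketch of applying the two values mentioned above by writing the
/proc/sys files directly (the numbers are simply the ones quoted in this
mail; whether they are sensible is exactly the question being asked):

    /* Sketch: set net.core.netdev_max_backlog and net.core.rmem_default
     * by writing the corresponding /proc/sys files (must run as root). */
    #include <stdio.h>

    static int write_sysctl(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fprintf(f, "%s\n", value);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        write_sysctl("/proc/sys/net/core/netdev_max_backlog", "300000");
        write_sysctl("/proc/sys/net/core/rmem_default", "300000000");
        return 0;
    }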
--- "Xiaoliang (David) Wei" <weixl@caltech.edu> wrote:
> Hi Satya,
>
> Did you check the CPU utilization? If you use
> smaller packet size,
> the interrupt rate may be a problem.
> You can modulate the NIC's interrupt rate. See
> the document for
> details:
>
http://www.intel.com/support/network/adapter/1000/e1000.htm#parameters
>
>
> Also, I assume the round trip propagation
> delay for your
> connections is very small (such as <10ms)?
>
>
> -David
>
> satya srikanth wrote:
> > Sir,
> > I need some help from you. I am using 2 xeon(2
> > GHz) machines each with two processors running
> linux
> > 2.4.20-8smp kernel and having intel PRO/1000
> gigabit
> > adapter NIC card(e1000 driver). I tried connecting
> > both of them using a gigabit switch. I am getting
> > gigabit speed only if I use my own TCP sockets
> sending
> > packets of size around 1400 bytes. If I send
> packets
> > of size around 500 bytes, I am getting maximum of
> only
> > 500 Mbps. I noticed that NIC is receiving all the
> > packets but they are getting dropped in the
> kernel.
> > I tried changing the settings like
> > netdev_max_backlog to 30000 and rmem_max, wmem_max
> and
> > txqueuelen, but of no use. Can you please suggest
> some
> > changes that I need to make to achieve this speed.
> > (Like changing number of Rx interrupts, Tx
> interrupts
> > etc). Can you please suggest some other links
> where I
> > can get some useful information. Will linux router
> > project LRP help me to achieve it.
> >
> > with regards,
> > Satya Srikanth.
> >
>
>
> --
>
------------------------------------------------------
> Xiaoliang (David) Wei Graduate Student of
> CS@Caltech
> WWW: http://www.cs.caltech.edu/~weixl
>
======================================================
>
>
* Re: gigabit ethernet
2004-02-13 10:38 ` satya srikanth
@ 2004-02-13 19:36 ` Cheng Jin
2004-02-13 21:31 ` Nivedita Singhvi
2004-02-14 14:09 ` satya srikanth
0 siblings, 2 replies; 8+ messages in thread
From: Cheng Jin @ 2004-02-13 19:36 UTC (permalink / raw)
To: satya srikanth; +Cc: Xiaoliang (David) Wei, netdev@oss.sgi.com
> I have checked the CPU utilization. It averages around 50% and reaches
> 100% in some instances. I have also looked at Intel's document. First of
> all, I could not understand why all my softirqs go only to cpu0 when I
> have multiple processors.

I read somewhere that Intel SMP has cpu0 taking care of
all the hardware interrupts, but I don't know about softirqs.
I think all softirqs related to the GbE are handled by the
same cpu.

> When I used a 2.4.18 kernel, I did not face this problem; now I am using
> 2.4.20-8smp. Intel says that e1000 does not use NAPI by default, but I
> don't know why cpu0 is handling all the softirqs while the other
> processors sit idle.

NAPI for e1000 is off by default; you have to explicitly enable it
in the kernel config file (CONFIG_E1000_NAPI).

> I also found that the number of interrupts is reasonable, but the number
> of packets per interrupt averages around 40 in my case. As my CPU is not
> able to handle all of these in one jiffy (time_squeeze), I am hitting the
> throttle count and thus drops. When I changed netdev_max_backlog to
> 300000 and rmem_default to 300000000, I was able to handle all packets
> received by the interface. Is it OK to have such high values?

What kind of processors do you have? I haven't tried pumping 1 Gbps
using processors slower than 2.4 GHz Xeons. But it sounds like you did
get full utilization of the GbE card. Data traffic is about 80
pkts/ms, which translates into 40 pkts/ms for ack traffic, assuming
that you are talking about acks above.
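A quick sanity check of those figures (an added illustration, assuming
~1500-byte segments and one delayed ACK per two segments):

    /* Rough check of the ~80 pkts/ms data vs ~40 pkts/ms ack estimate. */
    #include <stdio.h>

    int main(void)
    {
        double data_pps = 1e9 / (1500 * 8.0);   /* ~83,333 segments/s */
        double ack_pps  = data_pps / 2.0;       /* ~41,667 ACKs/s     */

        printf("data: ~%.0f pkts/ms, acks: ~%.0f pkts/ms\n",
               data_pps / 1000.0, ack_pps / 1000.0);
        return 0;
    }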
rmem_default will only affect the TCP/socket layer. For the back-to-back
test you described, the bandwidth-delay product is small, so increasing
the netdev_max_backlog queue to 300000 is unnecessary (although TCP Reno
will overflow it eventually if you wait long enough).

I am not sure why you should see time squeezes with back-to-back tests...
you can instrument the kernel to see how large snd_cwnd gets, and I
suspect that you have slow processors... Also, do the time squeezes
happen during loss recovery---when ca_state is 3 or 4?
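One userspace way to watch snd_cwnd without patching the kernel is the
TCP_INFO getsockopt, sketched below for an already-connected socket;
tcp(7) lists TCP_INFO as available since Linux 2.4, but on a kernel this
old it is worth verifying, and otherwise the in-kernel instrumentation
Cheng suggests is needed.

    /* Sketch: read snd_cwnd (in segments), smoothed RTT and ca_state from
     * a connected TCP socket via getsockopt(TCP_INFO). Assumes the kernel
     * and libc provide struct tcp_info. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int print_cwnd(int fd)          /* fd: a connected TCP socket */
    {
        struct tcp_info info;
        socklen_t len = sizeof(info);

        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) < 0) {
            perror("getsockopt(TCP_INFO)");
            return -1;
        }
        printf("snd_cwnd=%u segs, rtt=%u us, ca_state=%u\n",
               info.tcpi_snd_cwnd, info.tcpi_rtt,
               (unsigned)info.tcpi_ca_state);
        return 0;
    }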
> My round-trip propagation delay is <0.2 ms, but I could not understand
> how it would affect the performance. Please throw some light on this.

Short RTTs help performance because Reno recovers much faster, and
linear increase doesn't take long to reach the bandwidth-delay product.
Cheng
* Re: gigabit ethernet
2004-02-13 19:36 ` Cheng Jin
@ 2004-02-13 21:31 ` Nivedita Singhvi
2004-02-13 22:11 ` Jeff Garzik
2004-02-14 14:09 ` satya srikanth
1 sibling, 1 reply; 8+ messages in thread
From: Nivedita Singhvi @ 2004-02-13 21:31 UTC (permalink / raw)
To: Cheng Jin; +Cc: satya srikanth, Xiaoliang (David) Wei, netdev@oss.sgi.com
Cheng Jin wrote:
>
> I read somewhere that Intel SMP has cpu0 taking care of
> all the hardware interrupts, but I don't know about softirqs.
> I think all softirqs related to the GbE are handled by the
> same cpu.
For incoming network packets, the hw interrupt handler simply
schedules a local softirq to handle the rest of the input
processing. So the softirq will execute on the same
CPU that the hw interrupt came in on.
thanks,
Nivedita
* Re: gigabit ethernet
2004-02-13 21:31 ` Nivedita Singhvi
@ 2004-02-13 22:11 ` Jeff Garzik
0 siblings, 0 replies; 8+ messages in thread
From: Jeff Garzik @ 2004-02-13 22:11 UTC (permalink / raw)
To: Nivedita Singhvi
Cc: Cheng Jin, satya srikanth, Xiaoliang (David) Wei,
netdev@oss.sgi.com
On Fri, Feb 13, 2004 at 01:31:54PM -0800, Nivedita Singhvi wrote:
> Cheng Jin wrote:
> >
> >I read somewhere that Intel SMP has cpu0 taking care of
> >all the hardware interrupts, but I don't know about softirqs.
> >I think all softirqs related to the GbE are handled by the
> >same cpu.
>
> For incoming network packets, the hw interrupt handler simply
> schedules a local softirq to handle the rest of the input
> processing. So the softirq will execute on the same
> CPU that the hw interrupt came in on.
Many answers ;-)
1) Starting with Pentium4, all interrupts are delivered to cpu0...
unless software directs otherwise. Thus it is helpful to have a
software irq balancing solution. Red Hat has a userspace irqbalanced,
and kernel 2.6.x also has kirqd on x86.
2) For networking, receiving all packets on one cpu has many benefits,
and avoids packet reordering problems that occasionally appear on SMP.
3) For NAPI drivers, regardless of interrupt balancing strategy, packets
are usually received on one cpu.
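For completeness, a sketch of the manual alternative to irqbalanced/kirqd:
pinning an interrupt to a chosen cpu by writing a hex CPU mask to
/proc/irq/<N>/smp_affinity. The IRQ number below is a placeholder; the
NIC's real one is listed in /proc/interrupts.

    /* Sketch: pin an IRQ to cpu1 by writing a CPU bitmask to
     * /proc/irq/<irq>/smp_affinity. IRQ 24 is a placeholder; look up the
     * NIC's actual IRQ number in /proc/interrupts. Must run as root. */
    #include <stdio.h>

    int main(void)
    {
        const int irq = 24;         /* placeholder IRQ number          */
        const char *mask = "2";     /* hex bitmask: bit 1 set => cpu1  */
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            return 1;
        }
        fprintf(f, "%s\n", mask);
        fclose(f);
        return 0;
    }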
* Re: gigabit ethernet
2004-02-13 19:36 ` Cheng Jin
2004-02-13 21:31 ` Nivedita Singhvi
@ 2004-02-14 14:09 ` satya srikanth
2004-02-16 4:32 ` Ben Greear
1 sibling, 1 reply; 8+ messages in thread
From: satya srikanth @ 2004-02-14 14:09 UTC (permalink / raw)
To: Cheng Jin; +Cc: Xiaoliang (David) Wei, netdev@oss.sgi.com
Hi,
    Using NAPI reduced packet drops to some extent. I am using dual
2.0 GHz Xeon processors (32-bit, 66 MHz PCI bus). For different packet
sizes, the maximum bandwidth my receiver handled without drops is as
follows (using raw sockets):
Pkt Size Max Bandwidth
(bytes) (Mbps)
-------- ------------
100 200
500 500
1000 700
1500 900
My CPU is unable to handle anything more than this. Is it possible to
handle 1000 Mbps at any packet size, or at least do better than this, by
using more powerful machines? If any of you have experimented with this,
can you give me an idea of what kind of machine I need?
    Also, my driver has the IP checksum offloading option ON. Does that
mean I can safely remove the checksum calculation in the TCP/IP stack?
--- Cheng Jin <chengjin@cs.caltech.edu> wrote:
> NAPI for e1000 is off by default; you have to explicitly enable it
> in the kernel config file (CONFIG_E1000_NAPI).
>
> What kind of processors do you have? I haven't tried pumping 1 Gbps
> using processors slower than 2.4 GHz Xeons.
* Re: gigabit ethernet
2004-02-14 14:09 ` satya srikanth
@ 2004-02-16 4:32 ` Ben Greear
0 siblings, 0 replies; 8+ messages in thread
From: Ben Greear @ 2004-02-16 4:32 UTC (permalink / raw)
To: satya srikanth; +Cc: Cheng Jin, Xiaoliang (David) Wei, netdev@oss.sgi.com
satya srikanth wrote:
> My CPU is unable to handle anything more than this. Is it possible to
> handle 1000 Mbps at any packet size, or at least do better than this,
> by using more powerful machines? If any of you have experimented with
> this, can you give me an idea of what kind of machine I need?
Try a machine with a 64/66 PCI bus, or preferably,
a PCI-X bus that can run 64/100 or better.
I got line-speed packet generation and reception on
a dual 2.8 GHz Xeon machine with a PCI-X (64/100) bus and a dual-port
Intel PRO/1000 NIC. I tested with 1500-byte packets
and transmitted and received 999 Mbps of traffic between the
two ports (a total of ~4 Gbps across the PCI backplane).
I wasn't routing or anything like that, but at least for
packet generation it worked fine.
Ben
--
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc http://www.candelatech.com
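A rough bus-bandwidth comparison supporting this (my own arithmetic;
theoretical peak only, real PCI throughput is lower once arbitration and
descriptor traffic are counted):

    /* Theoretical peak bandwidth of common PCI variants, for comparison
     * with the ~2 Gbit/s a full-duplex gigabit NIC can move across the bus. */
    #include <stdio.h>

    int main(void)
    {
        struct { const char *name; int bits; double mhz; } bus[] = {
            { "PCI 32/33",    32,  33.0 },
            { "PCI 32/66",    32,  66.0 },
            { "PCI 64/66",    64,  66.0 },
            { "PCI-X 64/100", 64, 100.0 },
            { "PCI-X 64/133", 64, 133.0 },
        };

        for (int i = 0; i < 5; i++) {
            double gbps = bus[i].bits * bus[i].mhz * 1e6 / 1e9;
            printf("%-13s ~%.2f Gbit/s peak\n", bus[i].name, gbps);
        }
        return 0;
    }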