* Update on e1000 troubles (over-heating!)
@ 2002-10-06 3:38 Ben Greear
2002-10-06 3:47 ` Andre Hedrick
0 siblings, 1 reply; 19+ messages in thread
From: Ben Greear @ 2002-10-06 3:38 UTC (permalink / raw)
To: linux-kernel, 'netdev@oss.sgi.com'
I believe I have figured out why the e1000 crashed my machine
after 0.5-1 hours: the NIC was over-heating. I measured one of
the NICs with an external (cheap) temperature probe after the machine
crashed. It registered right at 50 degrees C, about 15-30 seconds
after the crash.
The dual e1000 NIC I have seems to run much cooler, and has been
running at 430Mbps bi-directional on both ports for about 6 hours now
with no obvious problems.
So, I'm going to try to purchase some heat sinks and glue them onto
the e1000 server NICs, to see if that fixes the problem.
Hope this proves useful to anyone experiencing similar strange
crashes!
Thanks,
Ben
--
Ben Greear <greearb@candelatech.com> <Ben_Greear AT excite.com>
President of Candela Technologies Inc http://www.candelatech.com
ScryMUD: http://scry.wanfear.com http://scry.wanfear.com/~greear
* Re: Update on e1000 troubles (over-heating!)
2002-10-06 3:38 Update on e1000 troubles (over-heating!) Ben Greear
@ 2002-10-06 3:47 ` Andre Hedrick
2002-10-06 22:38 ` jamal
0 siblings, 1 reply; 19+ messages in thread
From: Andre Hedrick @ 2002-10-06 3:47 UTC (permalink / raw)
To: Ben Greear; +Cc: linux-kernel, 'netdev@oss.sgi.com'
I have a pair of Compaq e1000s which have never overheated, and I use
them for heavy-duty iSCSI testing and driver development. These are
massive 66MHz/64-bit cards, but still nothing like what you are reporting.
I will look some more at the issue soon.
Cheers,
Andre Hedrick
iSCSI Software Solutions Provider
http://www.PyXTechnologies.com/
* Re: Update on e1000 troubles (over-heating!)
2002-10-06 3:47 ` Andre Hedrick
@ 2002-10-06 22:38 ` jamal
2002-10-07 0:14 ` Andre Hedrick
2002-10-07 3:46 ` Update on e1000 troubles (over-heating!) Ben Greear
0 siblings, 2 replies; 19+ messages in thread
From: jamal @ 2002-10-06 22:38 UTC (permalink / raw)
To: Andre Hedrick; +Cc: Ben Greear, linux-kernel, 'netdev@oss.sgi.com'
On Sat, 5 Oct 2002, Andre Hedrick wrote:
>
> I have a pair of Compaq e1000s which have never overheated, and I use
> them for heavy-duty iSCSI testing and driver development. These are
> massive 66MHz/64-bit cards, but still nothing like what you are reporting.
>
> I will look some more at the issue soon.
>
It seems the prerequisite to reproducing it is to beat the NIC heavily
with a lot of packets/sec and then hold that sustained rate for at
least 30 minutes. iSCSI would tend to use MTU-sized packets, which will
not be that effective.
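Purely as an illustration, one crude way to approximate that kind of
sustained small-packet load, short of setting up the kernel's pktgen, is
flood ping (needs root; 10.0.0.2 is a stand-in for the device under test):

# Small payloads maximize packets/sec for a given bit rate
ping -f -s 18 10.0.0.2
# Several floods in parallel push the rate higher
for i in 1 2 3 4; do ping -f -s 18 10.0.0.2 & done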
cheers,
jamal
* Re: Update on e1000 troubles (over-heating!)
2002-10-06 22:38 ` jamal
@ 2002-10-07 0:14 ` Andre Hedrick
2002-10-07 11:56 ` jamal
2002-10-07 3:46 ` Update on e1000 troubles (over-heating!) Ben Greear
1 sibling, 1 reply; 19+ messages in thread
From: Andre Hedrick @ 2002-10-07 0:14 UTC (permalink / raw)
To: jamal; +Cc: Ben Greear, linux-kernel, 'netdev@oss.sgi.com'
However, a data-integrity test with a pattern-buffer
write-verify-read on multi-LUN, multi-session, and multiple connections
per session, while issuing load-balancing commands (i.e. thread tags) over
each session to roast the bandwidth of the line, should be enough.
Now toss in injected errors to randomly fail data PDUs, and call a
sync-and-steering layer to scan the header and/or data digests and execute
a within-connection recovery, regardless of the reason; that should be
enough to warm up the beast.
If that is not enough, I can toss in multiple initiators, all with the
features above, or invoke the interoperability modes to add the Cisco and
IBM initiators (both limited to error recovery level zero, while PyX's is
capable of error recovery level one and part of two).
Please let me know if I need to throttle it harder.
Cheers,
Andre Hedrick
iSCSI Software Solutions Provider
http://www.PyXTechnologies.com/
* Re: Update on e1000 troubles (over-heating!)
2002-10-06 22:38 ` jamal
2002-10-07 0:14 ` Andre Hedrick
@ 2002-10-07 3:46 ` Ben Greear
2002-10-07 5:26 ` David S. Miller
2002-10-07 11:53 ` jamal
1 sibling, 2 replies; 19+ messages in thread
From: Ben Greear @ 2002-10-07 3:46 UTC (permalink / raw)
To: jamal; +Cc: Andre Hedrick, linux-kernel, 'netdev@oss.sgi.com'
jamal wrote:
> It seems the prerequisite to reproducing it is to beat the NIC heavily
> with a lot of packets/sec and then hold that sustained rate for at
> least 30 minutes. iSCSI would tend to use MTU-sized packets, which will
> not be that effective.
I can reproduce my crash using MTU-sized packets running only 50Mbps
send + receive on 2 NICs, though it took overnight to do it. Running
as hard as I can with MTU packets will crash it as well, and much quicker.
Interestingly enough, the tg3 NIC (Netgear 302T) registered 57 degrees C
between the fins of its heat sink in the 32-bit slot. Makes me wonder if
my PCI bus is running too hot :P
Dave says I'm weird and no one else sees these bizarre problems, btw :)
More troubleshooting to follow next week.
Thanks,
Ben
--
Ben Greear <greearb@candelatech.com> <Ben_Greear AT excite.com>
President of Candela Technologies Inc http://www.candelatech.com
ScryMUD: http://scry.wanfear.com http://scry.wanfear.com/~greear
* Re: Update on e1000 troubles (over-heating!)
2002-10-07 3:46 ` Update on e1000 troubles (over-heating!) Ben Greear
@ 2002-10-07 5:26 ` David S. Miller
2002-10-07 11:53 ` jamal
1 sibling, 0 replies; 19+ messages in thread
From: David S. Miller @ 2002-10-07 5:26 UTC (permalink / raw)
To: greearb; +Cc: hadi, andre, linux-kernel, netdev
From: Ben Greear <greearb@candelatech.com>
Date: Sun, 06 Oct 2002 20:46:42 -0700
Dave says I'm weird and no one else sees these bizarre problems, btw :)
The only case where I'm really concerned about the health
of your PCI controller is the most recent case you've
reported to me where pci_find_capability(pdev, PCI_CAP_ID_PM)
fails. That is just completely bizarre.
I hope your boards aren't being permanently harmed by your
overheating box. :(
* Re: Update on e1000 troubles (over-heating!)
2002-10-07 3:46 ` Update on e1000 troubles (over-heating!) Ben Greear
2002-10-07 5:26 ` David S. Miller
@ 2002-10-07 11:53 ` jamal
2002-10-07 11:58 ` David S. Miller
` (2 more replies)
1 sibling, 3 replies; 19+ messages in thread
From: jamal @ 2002-10-07 11:53 UTC (permalink / raw)
To: Ben Greear; +Cc: Andre Hedrick, linux-kernel, 'netdev@oss.sgi.com'
On Sun, 6 Oct 2002, Ben Greear wrote:
> I can reproduce my crash using MTU-sized packets running only 50Mbps
> send + receive on 2 NICs, though it took overnight to do it. Running
> as hard as I can with MTU packets will crash it as well, and much
> quicker.
>
So is there a correlation with packet count then?
> Interestingly enough, the tg3 NIC (Netgear 302T) registered 57 degrees C
> between the fins of its heat sink in the 32-bit slot. Makes me wonder if
> my PCI bus is running too hot :P
Does the problem happen with the tg3?
cheers,
jamal
* Re: Update on e1000 troubles (over-heating!)
2002-10-07 0:14 ` Andre Hedrick
@ 2002-10-07 11:56 ` jamal
2002-10-09 1:10 ` Update on e1000 troubles (over-heating!) (problem solved) Ben Greear
0 siblings, 1 reply; 19+ messages in thread
From: jamal @ 2002-10-07 11:56 UTC (permalink / raw)
To: Andre Hedrick; +Cc: Ben Greear, linux-kernel, 'netdev@oss.sgi.com'
It does seem like you need a lot of packets over a period of time
to recreate it. So if what you are trying to do can achieve that,
you should reproduce it. How many connections and sessions can you
support? BTW, does iSCSI call for a zero-copy receive?
cheers,
jamal
* Re: Update on e1000 troubles (over-heating!)
2002-10-07 11:53 ` jamal
@ 2002-10-07 11:58 ` David S. Miller
2002-10-07 16:40 ` Ben Greear
2002-10-07 18:11 ` How can we bound one CPU to one Gigabit NIC? Xiaoliang (David) Wei
2 siblings, 0 replies; 19+ messages in thread
From: David S. Miller @ 2002-10-07 11:58 UTC (permalink / raw)
To: hadi; +Cc: greearb, andre, linux-kernel, netdev
From: jamal <hadi@cyberus.ca>
Date: Mon, 7 Oct 2002 07:53:26 -0400 (EDT)
Does the problem happen with the tg3?
He gets hangs in one box, and inoperable PCI config-space accesses
for the cards in another box.
* Re: Update on e1000 troubles (over-heating!)
2002-10-07 11:53 ` jamal
2002-10-07 11:58 ` David S. Miller
@ 2002-10-07 16:40 ` Ben Greear
2002-10-07 18:11 ` How can we bound one CPU to one Gigabit NIC? Xiaoliang (David) Wei
2 siblings, 0 replies; 19+ messages in thread
From: Ben Greear @ 2002-10-07 16:40 UTC (permalink / raw)
To: jamal; +Cc: Andre Hedrick, linux-kernel, 'netdev@oss.sgi.com'
jamal wrote:
>
> On Sun, 6 Oct 2002, Ben Greear wrote:
>
>> I can reproduce my crash using MTU-sized packets running only 50Mbps
>> send + receive on 2 NICs, though it took overnight to do it. Running
>> as hard as I can with MTU packets will crash it as well, and much
>> quicker.
>
> So is there a correlation with packet count then?
No: running at slower speeds (50Mbps), the packet count was well over
4 billion (i.e. it successfully wrapped 32 bits). At higher speeds, it
generally crashes before the 32-bit wrap. It also does not correlate
with bytes sent/received, or anything else I could think of to look at.
>
>> Interestingly enough, the tg3 NIC (Netgear 302T) registered 57 degrees C
>> between the fins of its heat sink in the 32-bit slot. Makes me wonder if
>> my PCI bus is running too hot :P
>
> Does the problem happen with the tg3?
As Dave mentioned, tg3 locks up almost immediately (like within 30
seconds), and in the meantime it spits out errors that are 'impossible'
(the messages I sent a day or two ago).
I may have cooked my cards, or something like that, because one of
the tg3s does not work in my other machine now. Still troubleshooting
that one.
Ben
--
Ben Greear <greearb@candelatech.com> <Ben_Greear AT excite.com>
President of Candela Technologies Inc http://www.candelatech.com
ScryMUD: http://scry.wanfear.com http://scry.wanfear.com/~greear
* How can we bound one CPU to one Gigabit NIC?
2002-10-07 11:53 ` jamal
2002-10-07 11:58 ` David S. Miller
2002-10-07 16:40 ` Ben Greear
@ 2002-10-07 18:11 ` Xiaoliang (David) Wei
2002-10-07 18:24 ` Ben Greear
2 siblings, 1 reply; 19+ messages in thread
From: Xiaoliang (David) Wei @ 2002-10-07 18:11 UTC (permalink / raw)
To: netdev
Hi Everyone,
I am now doing some experiments on a dual-CPU (2.4GHz) machine with 2
Gigabit cards. Can anyone tell me how to bind one CPU to each NIC so that
we don't need to worry about the packet-reordering and interrupt-sharing
problems? Thank you very much. :)
Xiaoliang (David) Wei Graduate Student in CS@Caltech
http://www.cs.caltech.edu/~weixl
====================================================
* Re: How can we bound one CPU to one Gigabit NIC?
2002-10-07 18:11 ` How can we bound one CPU to one Gigabit NIC? Xiaoliang (David) Wei
@ 2002-10-07 18:24 ` Ben Greear
2002-10-07 19:12 ` Xiaoliang (David) Wei
2002-10-08 7:27 ` Xiaoliang (David) Wei
0 siblings, 2 replies; 19+ messages in thread
From: Ben Greear @ 2002-10-07 18:24 UTC (permalink / raw)
To: Xiaoliang (David) Wei; +Cc: netdev
Xiaoliang (David) Wei wrote:
> Hi Everyone,
> I am now doing some experiments on a dual-CPU (2.4GHz) machine with
> 2 Gigabit cards. Can anyone tell me how to bind one CPU to each NIC so
> that we don't need to worry about the packet-reordering and
> interrupt-sharing problems? Thank you very much. :)
My experiments show you will still get re-ordered packets occasionally
(but then again, I'm having other weird problems, so maybe you won't).
# Bind the second processor (mask 1<<1) to IRQ 11
echo 2 > /proc/irq/11/smp_affinity
# Bind the first processor (mask 1<<0) to IRQ 9
echo 1 > /proc/irq/9/smp_affinity
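The IRQ numbers above are from my box; to find yours (eth0/eth1 are
stand-ins for your interface names), something like this should do:

# Show which IRQ each NIC is using
grep -E 'eth0|eth1' /proc/interrupts
# Inspect the current affinity mask before changing it
cat /proc/irq/11/smp_affinity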
I will be interested to hear of your results, as I have been having
heating problems with e1000 and other problems with tg3-based NICs!
Ben
--
Ben Greear <greearb@candelatech.com> <Ben_Greear AT excite.com>
President of Candela Technologies Inc http://www.candelatech.com
ScryMUD: http://scry.wanfear.com http://scry.wanfear.com/~greear
* Re: How can we bound one CPU to one Gigabit NIC?
2002-10-07 18:24 ` Ben Greear
@ 2002-10-07 19:12 ` Xiaoliang (David) Wei
2002-10-08 7:27 ` Xiaoliang (David) Wei
1 sibling, 0 replies; 19+ messages in thread
From: Xiaoliang (David) Wei @ 2002-10-07 19:12 UTC (permalink / raw)
To: Ben Greear; +Cc: netdev
Thanks, Ben. We are going to use SysKonnect cards. Are they tg3-based, too?
Thank you.
Xiaoliang (David) Wei Graduate Student in CS@Caltech
http://www.cs.caltech.edu/~weixl
====================================================
* Re: How can we bound one CPU to one Gigabit NIC?
2002-10-07 18:24 ` Ben Greear
2002-10-07 19:12 ` Xiaoliang (David) Wei
@ 2002-10-08 7:27 ` Xiaoliang (David) Wei
2002-10-08 8:57 ` jamal
1 sibling, 1 reply; 19+ messages in thread
From: Xiaoliang (David) Wei @ 2002-10-08 7:27 UTC (permalink / raw)
To: Ben Greear; +Cc: netdev
Thank you, Ben.
I did some UDP tests with Iperf. Here are the results:
Without CPU binding, I had thousands of packets out of order in a
1.7GByte * 2-connection transmission with standard MTU. The throughput was
about 550Mbps * 2 (connections) with UDP packets. The senders can send
800Mbps for each connection.
With CPU binding, I had no packets out of order. However, the two
connections got only 1.3-1.4Gbps throughput in total. The senders seemed
able to send 1.6Gbps in total. (So it seems that receiving packets takes
more time than sending packets.)
For a single connection on these machines, I could get 950Mbps.
Is there any suggestion to improve the dual-CPU/dual-NIC performance? I
looked at "top": the two Iperf processes each seemed to be using more than
60% CPU, which means they are using different CPUs. However, I am not sure
whether they migrated from one CPU to the other very often; if they did,
that may have resulted in the low performance, I guess. Is there any way
to bind a process to a specific CPU?
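One possibility might be the schedutils "taskset" utility, assuming a
kernel with sched_setaffinity support (2.5.x, or a patched 2.4):

# Affinity masks are bitmaps: 1 = first CPU, 2 = second CPU
taskset 1 iperf -s -u -p 5001 &
taskset 2 iperf -s -u -p 5002 &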
The machines have dual 2.2GHz Xeon CPUs and dual SysKonnect Gigabit
Ethernet cards. All the tests were done with UDP. (Iperf -s -u /
Iperf -c -u -b1.7G.)
Thanks.
-David
Xiaoliang (David) Wei Graduate Student in CS@Caltech
http://www.cs.caltech.edu/~weixl
====================================================
* Re: How can we bound one CPU to one Gigabit NIC?
2002-10-08 7:27 ` Xiaoliang (David) Wei
@ 2002-10-08 8:57 ` jamal
2002-10-08 17:41 ` Jason Lunz
0 siblings, 1 reply; 19+ messages in thread
From: jamal @ 2002-10-08 8:57 UTC (permalink / raw)
To: Xiaoliang (David) Wei; +Cc: Ben Greear, netdev
Can you repeat these tests with NAPI and no binding and see if you get any
reordering?
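For reference, iperf's UDP mode already reports reordering on the receive
side, so the check can be a plain re-run (address and rate are stand-ins):

# The UDP server prints lost and out-of-order datagram counts
iperf -s -u
# One stream per NIC from the sender side
iperf -c 10.0.0.2 -u -b 500M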
cheers,
jamal
* Re: How can we bound one CPU to one Gigabit NIC?
2002-10-08 8:57 ` jamal
@ 2002-10-08 17:41 ` Jason Lunz
2002-10-10 2:22 ` jamal
0 siblings, 1 reply; 19+ messages in thread
From: Jason Lunz @ 2002-10-08 17:41 UTC (permalink / raw)
To: netdev
hadi@cyberus.ca said:
> Can you repeat these tests with NAPI and no binding and see if you get
> any reordering?
Wouldn't he need a NAPIfied SysKonnect driver? I wasn't aware anyone had
converted it yet. Or is it enough for him to check the blog_dev path?
Jason
* Re: Update on e1000 troubles (over-heating!) (problem solved)
2002-10-07 11:56 ` jamal
@ 2002-10-09 1:10 ` Ben Greear
0 siblings, 0 replies; 19+ messages in thread
From: Ben Greear @ 2002-10-09 1:10 UTC (permalink / raw)
To: jamal; +Cc: Andre Hedrick, linux-kernel, 'netdev@oss.sgi.com'
jamal wrote:
>
> It does seem like you need a lot of packets over a period of time
> to recreate it. So if what you are trying to do can achieve that,
> you should reproduce it. How many connections and sessions can you
> support? BTW, does iSCSI call for a zero-copy receive?
I ran it at top speed for 4+ hours today with no problems. I was
actively cooling the cards with an extra P-IV CPU fan sitting
precariously on top of them.
So, my problems were definitely heat-related. I have yet to try
to see if the same thing makes the tg3 NICs behave better, as
they ran even hotter than the e1000s.
Thanks,
Ben
--
Ben Greear <greearb@candelatech.com> <Ben_Greear AT excite.com>
President of Candela Technologies Inc http://www.candelatech.com
ScryMUD: http://scry.wanfear.com http://scry.wanfear.com/~greear
* Re: How can we bound one CPU to one Gigabit NIC?
2002-10-08 17:41 ` Jason Lunz
@ 2002-10-10 2:22 ` jamal
2002-10-10 2:24 ` jamal
0 siblings, 1 reply; 19+ messages in thread
From: jamal @ 2002-10-10 2:22 UTC (permalink / raw)
To: Jason Lunz; +Cc: netdev
On Tue, 8 Oct 2002, Jason Lunz wrote:
> hadi@cyberus.ca said:
> > Can you repeat these tests with NAPI and no binding and see if you get
> > any reordering?
>
> Wouldn't he need a NAPIfied SysKonnect driver? I wasn't aware anyone had
> converted it yet. Or is it enough for him to check the blog_dev path?
Duh. There is no NAPIfied SysKonnect driver ;->
Well, he didn't bother responding, so he may have figured that out.
SysKonnect GigE is a bad piece of hardware (bad checksumming hardware; we
tried to build a NIC out of it), so I wouldn't recommend he use it.
cheers,
jamal
* Re: How can we bound one CPU to one Gigabit NIC?
2002-10-10 2:22 ` jamal
@ 2002-10-10 2:24 ` jamal
0 siblings, 0 replies; 19+ messages in thread
From: jamal @ 2002-10-10 2:24 UTC (permalink / raw)
To: Jason Lunz; +Cc: netdev
On Wed, 9 Oct 2002, jamal wrote:
>
> > hadi@cyberus.ca said:
> > > Can you repeat these tests with NAPI and no binding and see if you get
> > > any reordering?
> >
> > Wouldn't he need a NAPIfied SysKonnect driver? I wasn't aware anyone had
> > converted it yet. Or is it enough for him to check the blog_dev path?
>
> Duh. There is no NAPIfied SysKonnect driver ;->
> Well, he didn't bother responding, so he may have figured that out.
> SysKonnect GigE is a bad piece of hardware (bad checksumming hardware; we
> tried to build a NIC out of it), so I wouldn't recommend he use it.
>
I mean tried to use it ...
cheers,
jamal