* RE: tx_full vs. tx_free race fix in 8xx_io/enet.c?
From: Joakim Tjernlund @ 2003-12-04 14:32 UTC
To: 'Tom Rini'; +Cc: linuxppc-embedded
> Something I find a bit odd is that I can run ping -s 1472 -f
> <myTargetIp> without problems, but if I "jump start" the ping with
> "ping -s 800 -f <myTargetIp> -l 8" I start to lose packets.
> ifconfig shows no errors in either case.
> Do you get the same?
>
> Jocke
Found the problem. It is the backoff/retry logic that is causing
the very long TX delays. If I reduce "retlim" (the retry limit) from
15 to 6, the system recovers from a large-packet ping storm.
Is it a bad idea to reduce the retry limit?
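For reference, the change amounts to a one-liner in the SCC parameter RAM
setup in enet.c (a sketch against the scc_enet_init() code; sen_retlim is
the retry-limit field in the SCC Ethernet parameter RAM):

	/* Set retry limit: give up after 6 retransmit attempts instead
	 * of the driver's default of 15, so a burst of collisions
	 * cannot stall the TX ring for very long.
	 */
	ep->sen_retlim = 6;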
Jocke
* Re: tx_full vs. tx_free race fix in 8xx_io/enet.c?
From: Tom Rini @ 2003-12-05 19:53 UTC
To: Joakim Tjernlund; +Cc: linuxppc-embedded
On Thu, Dec 04, 2003 at 03:32:04PM +0100, Joakim Tjernlund wrote:
> > Something I find a bit odd is that I can run ping -s 1472 -f
> > <myTargetIp> without problems, but if I "jump start" the ping with
> > "ping -s 800 -f <myTargetIp> -l 8" I start to lose packets.
> > ifconfig shows no errors in either case.
> > Do you get the same?
> >
> > Jocke
>
> Found the problem. It is the backoff/retry logic that is causing
> the very long TX delays. If I reduce "retlim" (the retry limit) from
> 15 to 6, the system recovers from a large-packet ping storm.
>
> Is it a bad idea to reduce the retry limit?
I don't know, sorry.
--
Tom Rini
http://gate.crashing.org/~trini/
* tx_full vs. tx_free race fix in 8xx_io/enet.c?
From: Joakim Tjernlund @ 2003-12-01 23:00 UTC
To: linuxppc-embedded, Tom Rini
I don't see the race the tx_free patch fixes, so I tried to provoke one by
running 3-4 parallel ping floods from my Linux PC. Still no race/malfunction.
Then I removed the spin_lock_irq()/spin_unlock_irq() pair in
scc_enet_start_xmit(). Still no race/malfunction. Next I applied the tx_free
patch and got an oops after only a few seconds. Adding the spin_lock_irq()/
spin_unlock_irq() pair back made it work again.
It seems to me that the old tx_full logic works better than the new
tx_free. Which race is it supposed to fix, and how do I trigger it?
This is the changeset that introduces tx_free:
http://ppc.bitkeeper.com:8080/linuxppc_2_4_devel/diffs/arch/ppc/8xx_io/enet.c@1.12.1.8?nav=index.html|src/.|src/arch|src/arch/ppc|src/arch/ppc/8xx_io|hist/arch/ppc/8xx_io/enet.c
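To make the difference concrete, the two schemes look roughly like this in
scc_enet_start_xmit() (a hypothetical sketch of how I read them, not the
exact driver code):

	/* Old scheme: a tx_full flag, set here when the next BD is
	 * still owned by the CPM and cleared in the TX interrupt.
	 */
	spin_lock_irq(&cep->lock);
	if (bdp->cbd_sc & BD_ENET_TX_READY) {
		netif_stop_queue(dev);
		cep->tx_full = 1;
	}
	spin_unlock_irq(&cep->lock);

	/* New scheme: a tx_free counter, decremented here and
	 * incremented in the TX interrupt; without the lock this
	 * read-modify-write can race with the interrupt handler.
	 */
	if (--cep->tx_free == 0)
		netif_stop_queue(dev);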
Jocke
* Re: tx_full vs. tx_free race fix in 8xx_io/enet.c?
From: Tom Rini @ 2003-12-03 16:15 UTC
To: Joakim Tjernlund; +Cc: linuxppc-embedded
On Tue, Dec 02, 2003 at 12:00:27AM +0100, Joakim Tjernlund wrote:
> It seems to me that the old tx_full logic works better than the new
> tx_free.
What problem are you seeing without making any changes to the code as it
stands?
> Which race is it supposed to fix and how do I generate it?
I don't recall the exact details right now, but I think it was a
potential race (hence, yes, you might not be able to trigger it) where
some packets could be lost (and a retransmit would happen). But again,
it's been a long time since I thought about this one.
--
Tom Rini
http://gate.crashing.org/~trini/
* RE: tx_full vs. tx_free race fix in 8xx_io/enet.c?
From: Joakim Tjernlund @ 2003-12-03 17:08 UTC
To: 'Tom Rini'; +Cc: linuxppc-embedded
> On Tue, Dec 02, 2003 at 12:00:27AM +0100, Joakim Tjernlund wrote:
>
> > It seems to me that the old tx_full logic works better than the new
> > tx_free.
>
> What problem are you seeing without making any changes to the code as
> it stands?
I have implemented the NAPI method in enet.c and had removed the
spin_lock_irq() in the xmit procedure, since I don't believe I need it any
more, and all was well. Then I discovered I had forgotten to apply the
tx_free change, so I added it and then got an oops. Restoring the
spin_lock_irq() makes the behaviour identical again.
That is why I wonder what race they are supposed to fix, since the only
difference I see is the above oops on an MPC862.
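For context, the NAPI rework moves the ring processing out of the hard
interrupt, which is why the lock looked redundant to me (a hypothetical
sketch of the intent, not my actual patch):

	/* The hard interrupt shrinks to an ack-and-schedule; the poll
	 * routine (set up via dev->poll/dev->weight at init) then does
	 * the RX processing and TX BD reclaim in softirq context, the
	 * same context start_xmit runs in on this UP board.
	 */
	static void scc_enet_interrupt(void *dev_id)
	{
		struct net_device *dev = (struct net_device *)dev_id;

		/* ack/mask the SCC events here, then defer the work */
		netif_rx_schedule(dev);
	}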
Something I find a bit odd is that I can run ping -s 1472 -f <myTargetIp>
without problems, but if I "jump start" the ping with
"ping -s 800 -f <myTargetIp> -l 8" I start to lose packets.
ifconfig shows no errors in either case.
Do you get the same?
Jocke