* Re: 2.6.0 NFS-server low to 0 performance
[not found] ` <1csrv-Er-9@gated-at.bofh.it>
@ 2004-01-10 16:08 ` Andi Kleen
2004-01-10 16:19 ` Trond Myklebust
0 siblings, 1 reply; 45+ messages in thread
From: Andi Kleen @ 2004-01-10 16:08 UTC (permalink / raw)
To: Trond Myklebust; +Cc: linux-kernel
Trond Myklebust <trond.myklebust@fys.uio.no> writes:
> The correct solution to this problem is (b). I.e. we convert mount to
> use TCP as the default if it is available. That is consistent with what
> all other modern implementations do.
Please do that. Fragmented UDP with a 16-bit IP ID is just Russian roulette at
today's network speeds.
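Andi's point can be quantified with a back-of-envelope sketch (the line rate, packet size, and reassembly timeout below are assumed round figures, not numbers from this thread):

```shell
# How quickly does the 16-bit IP ID space wrap at 100 Mbit/s with
# MTU-sized packets? (assumed round figures)
rate_bps=100000000                       # assumed line rate
pkt_bytes=1500                           # assumed packet size
pps=$(( rate_bps / (pkt_bytes * 8) ))    # packets per second
wrap_s=$(( 65536 / pps ))                # seconds until an IP ID is reused
echo "packets/s: $pps, 16-bit ID space wraps in ~${wrap_s}s"
```

Since the default IPv4 reassembly timeout (30 s on Linux of that era) is far longer than that wrap time, a fragment lost from one datagram can be "completed" by a later datagram's fragment carrying the same ID, silently corrupting data until the UDP checksum catches it.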
One disadvantage is that some older (early 2.4) Linux nfsd servers with TCP
enabled can cause problems. But I guess we can live with that; they should be
updated anyway.
-Andi
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 16:08 ` 2.6.0 NFS-server low to 0 performance Andi Kleen
@ 2004-01-10 16:19 ` Trond Myklebust
0 siblings, 0 replies; 45+ messages in thread
From: Trond Myklebust @ 2004-01-10 16:19 UTC (permalink / raw)
To: Andi Kleen; +Cc: linux-kernel
On Sat, 10/01/2004 at 11:08, Andi Kleen wrote:
> Trond Myklebust <trond.myklebust@fys.uio.no> writes:
>
> > The correct solution to this problem is (b). I.e. we convert mount to
> > use TCP as the default if it is available. That is consistent with what
> > all other modern implementations do.
>
> Please do that. Fragmented UDP with 16bit ipid is just russian roulette at
> today's network speeds.
I fully agree.
Chuck Lever recently sent an update for the NFS 'mount' utility to
Andries. Among other things, that update changes this default. We're
still waiting for his comments.
Cheers,
Trond
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
@ 2004-01-12 14:40 James Pearson
2004-01-12 15:22 ` Trond Myklebust
0 siblings, 1 reply; 45+ messages in thread
From: James Pearson @ 2004-01-12 14:40 UTC (permalink / raw)
To: linux-kernel; +Cc: Trond Myklebust
Trond Myklebust <trond.myklebust@fys.uio.no> writes:
> On Sat, 10/01/2004 at 11:08, Andi Kleen wrote:
> > Trond Myklebust <trond.myklebust@fys.uio.no> writes:
> >
> > > The correct solution to this problem is (b). I.e. we convert mount to
> > > use TCP as the default if it is available. That is consistent with what
> > > all other modern implementations do.
> >
> > Please do that. Fragmented UDP with 16bit ipid is just russian roulette at
> > today's network speeds.
>
> I fully agree.
>
> Chuck Lever recently sent an update for the NFS 'mount' utility to
> Andries. Among other things, that update changes this default. We're
> still waiting for his comments.
If mount defaults to trying TCP first then UDP if the TCP mount fails,
should there be separate options for [rw]size depending on what type of
mount actually takes place? e.g. 'ursize' and 'uwsize' for UDP and
'trsize' and 'twsize' for TCP ?
James Pearson
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-12 14:40 James Pearson
@ 2004-01-12 15:22 ` Trond Myklebust
2004-01-13 11:08 ` Muli Ben-Yehuda
0 siblings, 1 reply; 45+ messages in thread
From: Trond Myklebust @ 2004-01-12 15:22 UTC (permalink / raw)
To: James Pearson; +Cc: linux-kernel
On Mon, 12/01/2004 at 09:40, James Pearson wrote:
> If mount defaults to trying TCP first then UDP if the TCP mount fails,
> should there be separate options for [rw]size depending on what type of
> mount actually takes place? e.g. 'ursize' and 'uwsize' for UDP and
> 'trsize' and 'twsize' for TCP ?
No. The set of "mount" options is complex enough as it is. I don't
see the above as being useful.
If you need that tweak, you should be able to get round the problem by
first attempting to force the TCP protocol yourself, and then retrying
with UDP if that fails.
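In shell, Trond's suggested workaround is a few lines (server name and paths are placeholders; `proto=` is the nfsmount option syntax mentioned later in this thread):

```shell
#!/bin/sh
# Try a TCP NFS mount first; if the server cannot do TCP, retry over UDP
# with a conservative r/wsize. Hostname and mount point are made up.
if ! mount -t nfs -o proto=tcp server:/export /mnt/nfs 2>/dev/null; then
    echo "TCP mount failed, falling back to UDP" >&2
    mount -t nfs -o proto=udp,rsize=8192,wsize=8192 server:/export /mnt/nfs
fi
```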
Changing the default r/wsize should normally be unnecessary. You only
want to play with them if you actually see performance problems under
testing and find that you are unable to fix the cause of the packets
being dropped.
Cheers,
Trond
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-12 15:22 ` Trond Myklebust
@ 2004-01-13 11:08 ` Muli Ben-Yehuda
0 siblings, 0 replies; 45+ messages in thread
From: Muli Ben-Yehuda @ 2004-01-13 11:08 UTC (permalink / raw)
To: Trond Myklebust; +Cc: James Pearson, linux-kernel
[-- Attachment #1: Type: text/plain, Size: 1357 bytes --]
On Mon, Jan 12, 2004 at 10:22:31AM -0500, Trond Myklebust wrote:
On Mon, 12/01/2004 at 09:40, James Pearson wrote:
> > If mount defaults to trying TCP first then UDP if the TCP mount fails,
> > should there be separate options for [rw]size depending on what type of
> > mount actually takes place? e.g. 'ursize' and 'uwsize' for UDP and
> > 'trsize' and 'twsize' for TCP ?
>
> No. The number of "mount" options is complex enough as it is. I don't
> see the above as being useful.
> If you need the above tweak, you should be able to get round the above
> problem by first attempting to force the TCP protocol yourself, and then
> retrying using UDP if it fails.
I have a patch, sent to the util-linux maintainer, that adds a couple
of new mount options to nfsmount. They allow you to force any of tcp,
udp, tcp then udp, or udp then tcp, using the existing proto=xxx
syntax. It's available at
http://www.mulix.org/code/patches/util-linux/tcp-udp-mount-ordering-A3.diff
It also cleans up nfsmount() somewhat, although it could certainly do
with further rewrite^Wcleanups. I'm waiting to hear from the
util-linux maintainer before embarking on that, though.
Cheers,
Muli
--
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/
"the nucleus of linux oscillates my world" - gccbot@#offtopic
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 45+ messages in thread
* 2.6.0 NFS-server low to 0 performance
@ 2004-01-06 0:46 Guennadi Liakhovetski
2004-01-07 13:36 ` Guennadi Liakhovetski
2004-01-07 17:49 ` Mike Fedyk
0 siblings, 2 replies; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-06 0:46 UTC (permalink / raw)
To: linux-kernel
Hello
The NFS server on a PC with a 2.6.0 (release) kernel slows down to a
crawl or stops completely.
I searched the archives - nothing matches closely enough.
The server (PC1) is a 900MHz Duron with 384M RAM and a LinkSys tulip
10/100 network card (Network Everywhere Fast Ethernet 10/100 model NC100,
rev 17).
Clients:
PC2 - Pentium 133MHz with 24M RAM and an onboard Lance 79C970 10mbps
network card,
a SA1100 platform (Tuxscreen / Shannon) with 16M RAM, PCMCIA Netgear
10/100mbps ne2000-compatible (pcnet_cs + 8390) card
a PXA250 platform (Inphinity / Triton starter-kit) with 64M RAM, onboard
SMC91C11xFD (smc91x driver) 10/100 chip
In the tests below I was copying a 4M file from an NFS-mounted
directory to a RAM-based fs (ramfs / tmpfs). Here are the results:
server with 2.6.0 kernel:
fast:2.6.0-test11 2m21s (*)
fast:2.4.20 16.5s
SA1100:2.4 never finishes (*)
PXA:2.4.21-rmk1-pxa1 as above
PXA:2.6.0-rmk1-pxa as above
server: 2.4.21
fast:2.6.0-test11 6s
fast:2.4.20 5s
SA1100:2.4.19-rmk7 3.22s
PXA:2.4.21-rmk1-pxa1 7s
PXA:2.6.0-rmk2-pxa 1) 50s (**)
(***) 2) 27s (**)
(*) Messages "NFS server not responding" / "NFS server OK", "mount version
older than kernel" on mount
(**) Messages "NFS server not responding" / "NFS server OK", "mount version
older than kernel" on mount; traffic shows as several peaks
(***) 2.6.0-rmk2-pxa corresponds to the 2.6.0-rmk2 kernel with a PXA-patch
forward-ported from diff-2.6.0-test2-rmk1-pxa1.
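(For reference, the test above can be reproduced with something like the following; the mount points and filename are placeholders, not from the original mail.)

```shell
# Time the copy of a 4M file from an NFS mount into a RAM-backed fs.
mkdir -p /mnt/ram
mount -t tmpfs none /mnt/ram
time cp /nfs/testfile-4M /mnt/ram/
umount /mnt/ram
```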
I bought the LinkSys card only recently; before that I used an RTL (3c59x)
card, only capable of 10mbps. I never saw such problems with it, but then
I probably never tried NFS under 2.6.0 with it - I have to try that too.
It is not just a problem of 2.6 with those specific network configurations
- ftp / http / tftp transfers work fine. E.g. wget of the same file on the
PXA with 2.6.0 from the PC1 with 2.4.21 over http takes about 2s. So, it
is 2.6 + NFS.
Is it fixed somewhere (2.6.1-rcx?), or what should I try / what further
information is required?
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-06 0:46 Guennadi Liakhovetski
@ 2004-01-07 13:36 ` Guennadi Liakhovetski
2004-01-07 22:30 ` bill davidsen
2004-01-07 17:49 ` Mike Fedyk
1 sibling, 1 reply; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-07 13:36 UTC (permalink / raw)
To: linux-kernel
On Tue, 6 Jan 2004, Guennadi Liakhovetski wrote:
> server with 2.6.0 kernel:
>
> fast:2.6.0-test11 2m21s (*)
> fast:2.4.20 16.5s
> SA1100:2.4 never finishes (*)
> PXA:2.4.21-rmk1-pxa1 as above
> PXA:2.6.0-rmk1-pxa as above
>
> server: 2.4.21
>
> fast:2.6.0-test11 6s
> fast:2.4.20 5s
> SA1100:2.4.19-rmk7 3.22s
> PXA:2.4.21-rmk1-pxa1 7s
> PXA:2.6.0-rmk2-pxa 1) 50s (**)
> (***) 2) 27s (**)
s/fast/PC2/
Further, I tried the old 3c59x card - the same problems persist. I also
tried PC2 as the server - same thing. nfs-utils version 1.0.6 (Debian
Sarge). I sent a copy of yesterday's email + new details to
nfs@lists.sourceforge.net, netdev@oss.sgi.com, linux-net@vger.kernel.org.
It's strange that nobody else is seeing this problem, but it looks pretty
bad here. Unless I missed some necessary update somewhere? The only one
from Documentation/Changes that seemed relevant - nfs-utils on the
server(s) - I checked.
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-07 13:36 ` Guennadi Liakhovetski
@ 2004-01-07 22:30 ` bill davidsen
2004-01-08 12:48 ` Guennadi Liakhovetski
0 siblings, 1 reply; 45+ messages in thread
From: bill davidsen @ 2004-01-07 22:30 UTC (permalink / raw)
To: linux-kernel
In article <Pine.LNX.4.44.0401071431520.479-100000@poirot.grange>,
Guennadi Liakhovetski <g.liakhovetski@gmx.de> wrote:
| On Tue, 6 Jan 2004, Guennadi Liakhovetski wrote:
|
| > server with 2.6.0 kernel:
| >
| > fast:2.6.0-test11 2m21s (*)
| > fast:2.4.20 16.5s
| > SA1100:2.4 never finishes (*)
| > PXA:2.4.21-rmk1-pxa1 as above
| > PXA:2.6.0-rmk1-pxa as above
| >
| > server: 2.4.21
| >
| > fast:2.6.0-test11 6s
| > fast:2.4.20 5s
| > SA1100:2.4.19-rmk7 3.22s
| > PXA:2.4.21-rmk1-pxa1 7s
| > PXA:2.6.0-rmk2-pxa 1) 50s (**)
| > (***) 2) 27s (**)
|
| s/fast/PC2/
|
| Further, I tried the old 3c59x card - same problems persist. Also tried
| PC2 as the server - same. nfs-utils version 1.0.6 (Debian Sarge). I sent a
| copy of the yesterday's email + new details to nfs@lists.sourceforge.net,
| netdev@oss.sgi.com, linux-net@vger.kernel.org.
|
| Strange, that nobody is seeing this problem, but it looks pretty bad here.
| Unless I missed some necessary update somewhere? The only one that seemed
| relevant - nfs-utils on the server(s) from Documentation/Changes I
| checked.
I'm sure you checked this, but does mii-tool show that you have
negotiated the proper connection to the hub or switch? I found that my
3cXXX and eepro100 cards were negotiating half duplex with the switches
and cable modems, causing the throughput to go forth and conjugate the
verb "to suck" until I fixed it.
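The check bill describes looks roughly like this (the interface name is an assumption; ethtool is an alternative where the driver lacks mii-tool's ioctls):

```shell
# Verify the negotiated link mode against what the switch expects.
mii-tool -v eth0                 # e.g. "negotiated 100baseTx-FD"
# Some drivers lack the MII ioctls; ethtool may still work:
ethtool eth0 | grep -i -i duplex
```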
--
bill davidsen <davidsen@tmr.com>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-07 22:30 ` bill davidsen
@ 2004-01-08 12:48 ` Guennadi Liakhovetski
0 siblings, 0 replies; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-08 12:48 UTC (permalink / raw)
To: bill davidsen; +Cc: linux-kernel
On 7 Jan 2004, bill davidsen wrote:
> I'm sure you checked this, but does mii-tool show that you have
> negotiated the proper connection to the hub or switch? I found that my
> 3cXXX and eepro100 cards were negotiating half duplex with the switches
> and cable modems, causing the throughput to go forth and conjugate the
> verb "to suck" until I fixed it.
Actually, I didn't. Just tried - mii-tool says

SIOCGMIIPHY on 'eth0' failed: Operation not supported
no MII interfaces found

And if you look in the tulip driver you'll see the same - the ADMtek Comet
doesn't have the HAS_MII flag set:-( Is it really that bad? It was a cheap
(damn it, when will I learn not to buy cheap stuff, even if it says
"Linux supported"...) card, no technical documentation, none on
www.sitecom.com either. I've sent them a service request though. Also
funny, on their site they say it's a Realtek 8139 chip... I have even
less hope that the other 3c59x card, which can only do half-duplex (I
think) 10mbps, has MII... But the lights on the hub and on the card say
it's 100mbps full-duplex. Actually, if something were wrong with the
network settings, ftp wouldn't work reliably either, right? Well, maybe
it affects UDP only somehow. Well, yes - with TCP it seems to work.

Only now I have another problem with my PC2 with 2.6 (Pentium 133MHz,
24M). It swaps like mad already under very mild load... I still have to
narrow it down... So, a 4M file I could copy to tmpfs without problems,
but 120M to the disk already takes many minutes (~30) with hard swapping.
The problem is that I can't use terminals in such a situation - they
become nearly unresponsive, so only sysrq's work... Hm, I just looked at
the backtrace of the cp process - it looks completely sick.
__wake_up_common
preempt_schedule
__wake_up
wakeup_kswapd
__alloc_pages
read_swap_cache_async
read_swap_cache_async
swapin_readahead
do_swap_page
handle_mm_fault
do_page_fault
do_page_fault
do_DC390_Interrupt
handle_IRQ_event
end_8259A_irq
do_IRQ
error_code
So, since TCP works - shall we consider the case closed, or shall UDP also
be fixed? OK, presumably Linux-Linux can always use TCP, but what about
other UNIXes? Can they also do mount -otcp?
...and what is this:
RPC request reserved 0 but used 116
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-06 0:46 Guennadi Liakhovetski
2004-01-07 13:36 ` Guennadi Liakhovetski
@ 2004-01-07 17:49 ` Mike Fedyk
2004-01-07 18:13 ` Guennadi Liakhovetski
1 sibling, 1 reply; 45+ messages in thread
From: Mike Fedyk @ 2004-01-07 17:49 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-kernel
On Tue, Jan 06, 2004 at 01:46:30AM +0100, Guennadi Liakhovetski wrote:
> It is not just a problem of 2.6 with those specific network configurations
> - ftp / http / tftp transfers work fine. E.g. wget of the same file on the
> PXA with 2.6.0 from the PC1 with 2.4.21 over http takes about 2s. So, it
> is 2.6 + NFS.
>
> Is it fixed somewhere (2.6.1-rcx?), or what should I try / what further
> information is required?
You will probably need to look at some tcpdump output to debug the problem...
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-07 17:49 ` Mike Fedyk
@ 2004-01-07 18:13 ` Guennadi Liakhovetski
2004-01-07 18:19 ` Mike Fedyk
2004-01-08 21:42 ` Pavel Machek
0 siblings, 2 replies; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-07 18:13 UTC (permalink / raw)
To: Mike Fedyk; +Cc: linux-kernel
On Wed, 7 Jan 2004, Mike Fedyk wrote:
> On Tue, Jan 06, 2004 at 01:46:30AM +0100, Guennadi Liakhovetski wrote:
> > It is not just a problem of 2.6 with those specific network configurations
> > - ftp / http / tftp transfers work fine. E.g. wget of the same file on the
> > PXA with 2.6.0 from the PC1 with 2.4.21 over http takes about 2s. So, it
> > is 2.6 + NFS.
> >
> > Is it fixed somewhere (2.6.1-rcx?), or what should I try / what further
> > information is required?
>
> You will probably need to look at some tcpdump output to debug the problem...
Yep, I've just done that - well, they differ... The first obvious thing I
noticed is that 2.6 tries to read bigger blocks (32K instead of 8K), but
beyond that I cannot yet interpret what happens after the start of the
actual file read. 2.6 starts getting big delays immediately, even in
cases where the file is eventually transferred (the 2 PCs with 2.6). If
someone can get some information out of the logs, I'll happily send them.
The bz2 tarball is 50k, so not too bad for the list either, but it is not
common practice to send compressed attachments to the list, right? It's
5M uncompressed.
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-07 18:13 ` Guennadi Liakhovetski
@ 2004-01-07 18:19 ` Mike Fedyk
2004-01-07 19:06 ` Guennadi Liakhovetski
2004-01-08 21:42 ` Pavel Machek
1 sibling, 1 reply; 45+ messages in thread
From: Mike Fedyk @ 2004-01-07 18:19 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-kernel
On Wed, Jan 07, 2004 at 07:13:46PM +0100, Guennadi Liakhovetski wrote:
> On Wed, 7 Jan 2004, Mike Fedyk wrote:
>
> > On Tue, Jan 06, 2004 at 01:46:30AM +0100, Guennadi Liakhovetski wrote:
> > > It is not just a problem of 2.6 with those specific network configurations
> > > - ftp / http / tftp transfers work fine. E.g. wget of the same file on the
> > > PXA with 2.6.0 from the PC1 with 2.4.21 over http takes about 2s. So, it
> > > is 2.6 + NFS.
> > >
> > > Is it fixed somewhere (2.6.1-rcx?), or what should I try / what further
> > > information is required?
> >
> > You will probably need to look at some tcpdump output to debug the problem...
>
> Yep, just have done that - well, they differ... First obvious thing that I
> noticed is that 2.6 is trying to read bigger blocks (32K instead of 8K),
You mean it's trying to do 32K nfs block size on the wire?
> The bz2 tarball is 50k big, so, not too bad for the list either, but it is
> not a common practice to send compressed attachments to the list, right?
> It's 5M uncompressed.
Just post a few samples of the lines that differ. Any files should be sent
off-list.
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-07 18:19 ` Mike Fedyk
@ 2004-01-07 19:06 ` Guennadi Liakhovetski
2004-01-09 10:08 ` Guennadi Liakhovetski
0 siblings, 1 reply; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-07 19:06 UTC (permalink / raw)
To: Mike Fedyk; +Cc: linux-kernel
On Wed, 7 Jan 2004, Mike Fedyk wrote:
> On Wed, Jan 07, 2004 at 07:13:46PM +0100, Guennadi Liakhovetski wrote:
> > noticed is that 2.6 is trying to read bigger blocks (32K instead of 8K),
>
> You mean it's trying to do 32K nfs block size on the wire?
Emn, no, if I understand it correctly. The NFS client requests 32K of data
at a time, but that is sent in several fragments. Actually, the client is
the same (2.6 kernel), and it requests 32K or 8K depending on the kernel
version of the server...
> Just post a few samples of the lines that differ. Any files should be sent
> off-list.
Well, I am afraid I won't be able to identify the important differing
packets. I did

tcpdump -l -i eth0 -exX -vvv -s0

so the log contains complete packet dumps. OK, I'll try just to quote
headers. poirot is the server (PC1, 2.4 / 2.6), fast is the client (PC2,
2.6). Following is the first request for data (the diff is only in the
length):
2.6:
18:42:28.374430 0:80:5f:d2:53:f0 0:50:bf:a4:59:71 ip 162:
fast.grange.462443716 > poirot.grange.nfs: 120 read fh Unknown/1 32768
bytes @ 0x000008000 (DF) (ttl 64, id 15, len 148)
2.4:
18:48:57.794687 0:80:5f:d2:53:f0 0:50:bf:a4:59:71 ip 162:
fast.grange.1972393156 > poirot.grange.nfs: 120 read fh Unknown/1 8192
bytes @ 0x000002000 (DF) (ttl 64, id 6, len 148)
the server (PC1) sends the following packets:
2.6:
18:42:28.374554 0:50:bf:a4:59:71 0:80:5f:d2:53:f0 ip 1514:
poirot.grange.nfs > fast.grange.445666500: reply ok 1472 read REG 100644
ids 0/0 sz 0x00007a120 nlink 1 rdev 0/0 fsid 0x000000000 nodeid
0x000000000 a/m/ctime 1073497348.374212040 2477.000000 1064093242.000000
32768 bytes (frag 40553:1480@0+) (ttl 64, len 1500)
18:42:28.374560 0:50:bf:a4:59:71 0:80:5f:d2:53:f0 ip 1514: poirot.grange >
fast.grange: (frag 40553:1480@1480+) (ttl 64, len 1500)
2.4:
18:48:57.806270 0:50:bf:a4:59:71 0:80:5f:d2:53:f0 ip 962: poirot.grange >
fast.grange: (frag 39126:928@7400) (ttl 64, len 948)
18:48:57.806291 0:50:bf:a4:59:71 0:80:5f:d2:53:f0 ip 1514: poirot.grange >
fast.grange: (frag 39126:1480@5920+) (ttl 64, len 1500)
Well, maybe this place in the 2.6 log is important - where it got the
first (2.5s) delay:
18:42:28.414903 1:80:c2:0:0:1 1:80:c2:0:0:1 8808 60:
18:42:31.033837 0:80:5f:d2:53:f0 0:50:bf:a4:59:71 ip 162:
fast.grange.479220932 > poirot.grange.nfs: 120 read fh Unknown/1 32768
bytes @ 0x000010000 (DF) (ttl 64, id 18, len 148)
18:42:31.034244 0:50:bf:a4:59:71 0:80:5f:d2:53:f0 ip 1514:
poirot.grange.nfs > fast.grange.479220932: reply ok 1472 read REG 100644
ids 0/0 sz 0x00007a120 nlink 1 rdev 0/0 fsid 0x000000000 nodeid
0x000000000 a/m/ctime 1073497351.33807720 2477.000000 1064093242.000000
32768 bytes (frag 40557:1480@0+) (ttl 64, len 1500)
So, does it say anything?
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-07 19:06 ` Guennadi Liakhovetski
@ 2004-01-09 10:08 ` Guennadi Liakhovetski
2004-01-09 18:00 ` Mike Fedyk
0 siblings, 1 reply; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-09 10:08 UTC (permalink / raw)
To: Mike Fedyk; +Cc: linux-kernel, Sam Vilain
On Wed, 7 Jan 2004, Mike Fedyk wrote:
> Just post a few samples of the lines that differ. Any files should be sent
> off-list.
OK, this is the problem:
10:38:30.867306 0:40:f4:23:ac:91 0:50:bf:a4:59:71 ip 590: tuxscreen.grange > poirot.grange: icmp: ip reassembly time exceeded [tos 0xc0]
A similar effect was reported in 1999 with kernel 2.3.13, also between 2
100mbps cards, and also with NFS over UDP:
http://www.ussg.iu.edu/hypermail/linux/net/9908.2/0039.html
But there were no answers, so I am CC-ing Sam, hoping to hear whether he
found the reason and a cure for his problem. Apart from this message I
didn't find any other relevant hits on Google.
Is it some physical network problem, which somehow only becomes visible
now under 2.6, with UDP (NFS) at 100mbps?
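One way to watch for exactly this failure mode is to restrict the capture to fragments and ICMP (the `ip[6:2] & 0x3fff != 0` expression matches any packet with a nonzero fragment offset or the more-fragments bit set; the interface name is an assumption):

```shell
# Show only IP fragments and ICMP errors such as
# "ip reassembly time exceeded".
tcpdump -n -i eth0 'icmp or ip[6:2] & 0x3fff != 0'
```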
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-09 10:08 ` Guennadi Liakhovetski
@ 2004-01-09 18:00 ` Mike Fedyk
2004-01-10 0:38 ` Guennadi Liakhovetski
0 siblings, 1 reply; 45+ messages in thread
From: Mike Fedyk @ 2004-01-09 18:00 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-kernel, Sam Vilain
On Fri, Jan 09, 2004 at 11:08:29AM +0100, Guennadi Liakhovetski wrote:
> On Wed, 7 Jan 2004, Mike Fedyk wrote:
>
> > Just post a few samples of the lines that differ. Any files should be sent
> > off-list.
>
> Ok, This is the problem:
>
> 10:38:30.867306 0:40:f4:23:ac:91 0:50:bf:a4:59:71 ip 590: tuxscreen.grange > poirot.grange: icmp: ip reassembly time exceeded [tos 0xc0]
>
> A similar effect was reported in 1999 with kernel 2.3.13, also between 2
> 100mbps cards. It also was occurring with UDP NFS:
>
> http://www.ussg.iu.edu/hypermail/linux/net/9908.2/0039.html
>
> But there were no answers, so, I am CC-ing Sam, hoping to hear, if he's
> found the reason and a cure for his problem. Apart from this message I
> didn't find any other relevant hits with Google.
>
> Is it some physical network problem, which somehow only becomes visible
> under 2.6 now, with UDP (NFS) with 100mbps?
Find out how many packets are being dropped on your two hosts with 2.4 and
2.6.
If they're not dropping packets, maybe the ordering with a large backlog
has changed between 2.4 and 2.6 in a way that keeps some of the fragments
from being sent in time...
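The drop and reassembly counters Mike refers to can be read from the usual places (exact field names vary a little between kernel versions; the interface name is an assumption):

```shell
# Per-interface drops/overruns and IP reassembly statistics.
ifconfig eth0 | grep -E 'dropped|overruns'
grep '^Ip:' /proc/net/snmp     # see the ReasmReqds/ReasmFails columns
netstat -su                    # UDP errors and fragment statistics
```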
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-09 18:00 ` Mike Fedyk
@ 2004-01-10 0:38 ` Guennadi Liakhovetski
2004-01-10 1:38 ` Mike Fedyk
0 siblings, 1 reply; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-10 0:38 UTC (permalink / raw)
To: Mike Fedyk; +Cc: linux-kernel
On Fri, 9 Jan 2004, Mike Fedyk wrote:
> On Fri, Jan 09, 2004 at 11:08:29AM +0100, Guennadi Liakhovetski wrote:
> > On Wed, 7 Jan 2004, Mike Fedyk wrote:
> >
> > > Just post a few samples of the lines that differ. Any files should be sent
> > > off-list.
> >
> > Ok, This is the problem:
> >
> > 10:38:30.867306 0:40:f4:23:ac:91 0:50:bf:a4:59:71 ip 590: tuxscreen.grange > poirot.grange: icmp: ip reassembly time exceeded [tos 0xc0]
>
> Find out how many packets are being dropped on your two hosts with 2.4 and
> 2.6.
So, I've run 2 tcpdumps - on the server and on the client. Woooo... Looks
bad.

With 2.4 (_on the server_) the client reads about 8K at a time, which is
sent in 5 fragments of 1500 (MTU) bytes each. And that works. Also
interesting: the fragments are sent in reverse order.

With 2.6 (on the server, same client) the client reads about 16K at a
time, split into 11 fragments, and then fragments number 9 and 10 get
lost... This is all with a StrongARM client and a PCMCIA network card.
With a PXA client (400MHz compared to the 200MHz SA) and an onboard
smc91x ethernet chip, it gets the first 5 fragments and then misses every
other fragment. Again, in both cases I was copying files to RAM. And yes,
2.6 sends the fragments in direct order.
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 0:38 ` Guennadi Liakhovetski
@ 2004-01-10 1:38 ` Mike Fedyk
2004-01-10 11:10 ` Guennadi Liakhovetski
0 siblings, 1 reply; 45+ messages in thread
From: Mike Fedyk @ 2004-01-10 1:38 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-kernel
On Sat, Jan 10, 2004 at 01:38:00AM +0100, Guennadi Liakhovetski wrote:
> On Fri, 9 Jan 2004, Mike Fedyk wrote:
> > Find out how many packets are being dropped on your two hosts with 2.4 and
> > 2.6.
>
> So, I've run 2 tcpdumps - on server and on client. Woooo... Looks bad.
>
> With 2.4 (_on the server_) the client reads about 8K at a time, which is
> sent in 5 fragments 1500 (MTU) bytes each. And that works. Also
> interesting, that fragments are sent in the reverse order.
>
> With 2.6 (on the server, same client) the client reads about 16K at a
> time, split into 11 fragments, and then packets number 9 and 10 get
> lost... This all with a StrongARM client and a PCMCIA network-card. With a
> PXA-client (400MHz compared to 200MHz SA) and an on-board eth smc91x, it
> gets the first 5 fragments, and then misses every other fragment. Again -
> in both cases I was copying files to RAM. Yes, 2.6 sends fragments in
> direct order.
Is that an x86 server, and an arm client?
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 1:38 ` Mike Fedyk
@ 2004-01-10 11:10 ` Guennadi Liakhovetski
2004-01-10 14:30 ` Trond Myklebust
2004-01-10 22:34 ` Mike Fedyk
0 siblings, 2 replies; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-10 11:10 UTC (permalink / raw)
To: Mike Fedyk; +Cc: linux-kernel
On Fri, 9 Jan 2004, Mike Fedyk wrote:
> On Sat, Jan 10, 2004 at 01:38:00AM +0100, Guennadi Liakhovetski wrote:
> > On Fri, 9 Jan 2004, Mike Fedyk wrote:
> > > Find out how many packets are being dropped on your two hosts with 2.4 and
> > > 2.6.
> >
> > So, I've run 2 tcpdumps - on server and on client. Woooo... Looks bad.
> >
> > With 2.4 (_on the server_) the client reads about 8K at a time, which is
> > sent in 5 fragments 1500 (MTU) bytes each. And that works. Also
> > interesting, that fragments are sent in the reverse order.
> >
> > With 2.6 (on the server, same client) the client reads about 16K at a
> > time, split into 11 fragments, and then packets number 9 and 10 get
> > lost... This all with a StrongARM client and a PCMCIA network-card. With a
> > PXA-client (400MHz compared to 200MHz SA) and an on-board eth smc91x, it
> > gets the first 5 fragments, and then misses every other fragment. Again -
> > in both cases I was copying files to RAM. Yes, 2.6 sends fragments in
> > direct order.
>
> Is that an x86 server, and an arm client?
Yes. The reason for the problem seems to be the increased default NFS
transfer-unit size from 2.4 to 2.6. 8K under 2.4 was still OK; 16K is too
much - only the first 5 fragments pass fine, then data starts to get
lost. If it is a hardware limitation (not all platforms can manage 16K),
it should probably be set back to 8K. If the reason is that some buffer
size was not increased correspondingly, then that should be done.
Just checked - mounting with rsize=8192,wsize=8192 fixes the problem -
there are again 5 fragments and they are all received properly.
Anyway, I think the default values should be safe on all platforms, with
further optimisations possible where that is safe.
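The fragment arithmetic behind this is simple: a UDP RPC reply is one datagram, so every IP fragment of it must arrive or the whole read is retried. A rough sketch (the ~136-byte RPC/NFS reply header overhead is an assumed figure, so the counts are approximate and need not match the captures exactly):

```shell
# Fragments needed for one NFS read reply at a 1500-byte MTU
# (1480 payload bytes per fragment), rounding up.
frags() { echo $(( ($1 + 136 + 1479) / 1480 )); }
echo "rsize=8192:  $(frags 8192) fragments"
echo "rsize=16384: $(frags 16384) fragments"
echo "rsize=32768: $(frags 32768) fragments"
```

With independent per-fragment loss, the probability that all fragments of a reply arrive falls geometrically with the count, which is why raising the default from 8K to 16K/32K can turn a marginal link from slow into unusable.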
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 11:10 ` Guennadi Liakhovetski
@ 2004-01-10 14:30 ` Trond Myklebust
2004-01-10 20:04 ` Guennadi Liakhovetski
2004-01-12 5:06 ` Bill Davidsen
2004-01-10 22:34 ` Mike Fedyk
1 sibling, 2 replies; 45+ messages in thread
From: Trond Myklebust @ 2004-01-10 14:30 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: Mike Fedyk, linux-kernel
On Sat, 10/01/2004 at 06:10, Guennadi Liakhovetski wrote:
> Yes. The reason for the problem seems to be the increased default size of
> the transfer unit of NFS from 2.4 to 2.6. 8K under 2.4 was still ok, 16K
> is too much - only the first 5 fragments pass fine, then data starts to
> get lost. If it is a hardware limitation (not all platforms can manage
> 16K), it should be probably set back to 8K. If the reason is that some
> buffer size was not increased correspondingly, then this should be done.
No! People who have problems with the support for large rsize/wsize
under UDP due to lost fragments can
a) Reduce r/wsize themselves using mount
b) Use TCP instead
The correct solution to this problem is (b). I.e. we convert mount to
use TCP as the default if it is available. That is consistent with what
all other modern implementations do.
Changing a hard maximum on the server in order to fit the lowest common
denominator client is simply wrong.
Cheers,
Trond
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 14:30 ` Trond Myklebust
@ 2004-01-10 20:04 ` Guennadi Liakhovetski
2004-01-10 21:57 ` Trond Myklebust
2004-01-12 5:06 ` Bill Davidsen
1 sibling, 1 reply; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-10 20:04 UTC (permalink / raw)
To: Trond Myklebust; +Cc: Mike Fedyk, linux-kernel
On Sat, 10 Jan 2004, Trond Myklebust wrote:
> On Sat, 10/01/2004 at 06:10, Guennadi Liakhovetski wrote:
> > Yes. The reason for the problem seems to be the increased default size of
> > the transfer unit of NFS from 2.4 to 2.6. 8K under 2.4 was still ok, 16K
> > is too much - only the first 5 fragments pass fine, then data starts to
> > get lost. If it is a hardware limitation (not all platforms can manage
> > 16K), it should be probably set back to 8K. If the reason is that some
> > buffer size was not increased correspondingly, then this should be done.
>
> No! People who have problems with the support for large rsize/wsize
> under UDP due to lost fragments can
>
> a) Reduce r/wsize themselves using mount
> b) Use TCP instead
>
> The correct solution to this problem is (b). I.e. we convert mount to
> use TCP as the default if it is available. That is consistent with what
> all other modern implementations do.
>
> Changing a hard maximum on the server in order to fit the lowest common
> denominator client is simply wrong.
Not change - keep (from 2.4). You see, the problem might be that somebody
updates the NFS server from 2.4 to 2.6 and then suddenly some clients
fail to work with it. It seems a non-obvious fact that after upgrading
the server the clients' configuration might have to be changed. At the
very least this must be documented in Kconfig.
Thanks
Guennadi
---
Guennadi Liakhovetski
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 20:04 ` Guennadi Liakhovetski
@ 2004-01-10 21:57 ` Trond Myklebust
2004-01-10 22:14 ` Mike Fedyk
2004-01-10 22:42 ` Guennadi Liakhovetski
0 siblings, 2 replies; 45+ messages in thread
From: Trond Myklebust @ 2004-01-10 21:57 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: Mike Fedyk, linux-kernel
On Sat, 10/01/2004 at 15:04, Guennadi Liakhovetski wrote:
> Not change - keep (from 2.4). You see, the problem might be - somebody
> updates the NFS-server from 2.4 to 2.6 and then suddenly some clients fail
> to work with it. Seems a non-obvious fact, that after upgrading the server
> clients' configuration might have to be changed. At the very least this
> must be documented in Kconfig.
Non-obvious????? You have to change modutils, you have to upgrade
nfs-utils, glibc, gcc... and that's only the beginning of the list.
2.6.x is a new kernel; it differs from 2.4.x, which again differs from
2.2.x, ... Get over it! There are workarounds for your problem, so use
them.
Trond
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 21:57 ` Trond Myklebust
@ 2004-01-10 22:14 ` Mike Fedyk
2004-01-10 22:47 ` Trond Myklebust
2004-01-10 22:42 ` Guennadi Liakhovetski
1 sibling, 1 reply; 45+ messages in thread
From: Mike Fedyk @ 2004-01-10 22:14 UTC (permalink / raw)
To: Trond Myklebust; +Cc: Guennadi Liakhovetski, linux-kernel
On Sat, Jan 10, 2004 at 04:57:36PM -0500, Trond Myklebust wrote:
> On Sat, 10/01/2004 at 15:04, Guennadi Liakhovetski wrote:
> > Not change - keep (from 2.4). You see, the problem might be - somebody
> > updates the NFS-server from 2.4 to 2.6 and then suddenly some clients fail
> > to work with it. Seems a non-obvious fact, that after upgrading the server
> > clients' configuration might have to be changed. At the very least this
> > must be documented in Kconfig.
>
> Non-obvious????? You have to change modutils, you have to upgrade
> nfs-utils, glibc, gcc... and that's only the beginning of the list.
>
> 2.6.x is a new kernel it differs from 2.4.x, which again differs from
> 2.2.x, ... Get over it! There are workarounds for your problem, so use
> them.
I have to admit, I haven't been following NFS over TCP very much. Is the
code in the stock 2.4 and 2.6 kernels ready for production use? From what
I read, it seemed it was still experimental (and even marked as such in
the config).
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 22:14 ` Mike Fedyk
@ 2004-01-10 22:47 ` Trond Myklebust
0 siblings, 0 replies; 45+ messages in thread
From: Trond Myklebust @ 2004-01-10 22:47 UTC (permalink / raw)
To: Mike Fedyk; +Cc: Guennadi Liakhovetski, linux-kernel
On Sat, 10/01/2004 at 17:14, Mike Fedyk wrote:
> I have to admit, I haven't been following NFS on TCP very much. Is the code
> in the stock 2.4 and 2.6 kernels ready for production use? It seemed from
> what I read it was still experemental (and even marked as such in the
> config).
The client code has been very heavily tested. It is not marked as
experimental.
The server code is marked as "officially experimental, but seems to work
well". You'll have to talk to Neil to find out what that means. In
practice, though, it performs at least as well as the UDP code.
If you are in a production environment and really don't want to trust
the TCP code, you can disable it, and use the option I mentioned earlier
of setting a low value of r/wsize.
Or better still: fix your network setup so that you don't lose all those
UDP fragments (check switches, NICs, drivers, ...). The ICMP time-exceeded
error is a sign of a lossy network, NOT a broken NFS implementation.
Trond
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 21:57 ` Trond Myklebust
2004-01-10 22:14 ` Mike Fedyk
@ 2004-01-10 22:42 ` Guennadi Liakhovetski
2004-01-10 22:51 ` Jesper Juhl
2004-01-11 13:18 ` Helge Hafting
1 sibling, 2 replies; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-10 22:42 UTC (permalink / raw)
To: Trond Myklebust; +Cc: Guennadi Liakhovetski, Mike Fedyk, linux-kernel
On Sat, 10 Jan 2004, Trond Myklebust wrote:
> On Sat, 10/01/2004 at 15:04, Guennadi Liakhovetski wrote:
> > Not change - keep (from 2.4). You see, the problem might be - somebody
> > updates the NFS-server from 2.4 to 2.6 and then suddenly some clients fail
> > to work with it. Seems a non-obvious fact, that after upgrading the server
> > clients' configuration might have to be changed. At the very least this
> > must be documented in Kconfig.
>
> Non-obvious????? You have to change modutils, you have to upgrade
> nfs-utils, glibc, gcc... and that's only the beginning of the list.
>
> 2.6.x is a new kernel it differs from 2.4.x, which again differs from
> 2.2.x, ... Get over it! There are workarounds for your problem, so use
> them.
Please, calm down :-)) - I am not fighting, I am just thinking aloud. I
have no intention whatsoever to attack your or anybody else's work /
ideas / decisions, etc.
My only doubt was: yes, you upgrade the __server__, so you look in
Changes, upgrade all the necessary stuff, or just blindly upgrade a
distribution (as does happen sometimes, I believe) - and the server works,
fine. What I find non-obvious is that on updating the server you have to
re-configure the __clients__, see? Just think about a network somewhere in
a uni / company / whatever. Sysadmins update the server, and then the
NFS clients suddenly cannot use NFS any more...
Thanks
Guennadi
---
Guennadi Liakhovetski
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 22:42 ` Guennadi Liakhovetski
@ 2004-01-10 22:51 ` Jesper Juhl
2004-01-11 13:18 ` Helge Hafting
1 sibling, 0 replies; 45+ messages in thread
From: Jesper Juhl @ 2004-01-10 22:51 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: Trond Myklebust, Mike Fedyk, linux-kernel
On Sat, 10 Jan 2004, Guennadi Liakhovetski wrote:
> On Sat, 10 Jan 2004, Trond Myklebust wrote:
>
> > P? lau , 10/01/2004 klokka 15:04, skreiv Guennadi Liakhovetski:
> > > Not change - keep (from 2.4). You see, the problem might be - somebody
> > > updates the NFS-server from 2.4 to 2.6 and then suddenly some clients fail
> > > to work with it. Seems a non-obvious fact, that after upgrading the server
> > > clients' configuration might have to be changed. At the very least this
> > > must be documented in Kconfig.
> >
> > Non-obvious????? You have to change modutils, you have to upgrade
> > nfs-utils, glibc, gcc... and that's only the beginning of the list.
> >
> > 2.6.x is a new kernel it differs from 2.4.x, which again differs from
> > 2.2.x, ... Get over it! There are workarounds for your problem, so use
> > them.
>
> Please, calm down:-)), I am not fighting, I am just thinking aloud, I have
> no intention whatsoever to attack your aor anybody else's work / ideas /
> decisions, etc.
>
> The only my doubt was - yes, you upgrade the __server__, so, you look in
> Changes, upgrade all necessary stuff, or just upgrade blindly (as does
> happen sometimes, I believe) a distribution - and the server works, fine.
> What I find non-obvious, is that on updating the server you have to
> re-configure __clients__, see? Just think about a network somewhere in a
> uni / company / whatever. Sysadmins update the server, and then
> NFS-clients suddenly cannot use NFS any more...
>
Ever tried upgrading a WinNT server to Win2k or Win2003 Server? Don't
expect all your Win95, Win98 and WinNT clients to just work the same as
they did previously...
The same goes for other OSs. Software that has requirements on both the client and
server side naturally has to be kept in sync, and NFS is not the only case
where not everything is 100% backwards compatible.
This shouldn't really be surprising.
-- Jesper Juhl
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 22:42 ` Guennadi Liakhovetski
2004-01-10 22:51 ` Jesper Juhl
@ 2004-01-11 13:18 ` Helge Hafting
2004-01-11 13:53 ` Russell King
1 sibling, 1 reply; 45+ messages in thread
From: Helge Hafting @ 2004-01-11 13:18 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-kernel
On Sat, Jan 10, 2004 at 11:42:45PM +0100, Guennadi Liakhovetski wrote:
>
> The only my doubt was - yes, you upgrade the __server__, so, you look in
> Changes, upgrade all necessary stuff, or just upgrade blindly (as does
> happen sometimes, I believe) a distribution - and the server works, fine.
> What I find non-obvious, is that on updating the server you have to
> re-configure __clients__, see? Just think about a network somewhere in a
If you upgrade the server and read "Changes", then a note in Changes might
say that "you need to configure carefully or some clients could get in trouble."
(If the current "Changes" don't have that - post a documentation patch.)
If you use a distro, then hopefully the distro takes care of the
problem for you. Or at least brings it to your attention somehow.
It should not come as a surprise that changing a server might have an
effect on the clients - clients and servers are connected after all!
Helge Hafting
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-11 13:18 ` Helge Hafting
@ 2004-01-11 13:53 ` Russell King
2004-01-11 14:24 ` Guennadi Liakhovetski
2004-01-15 11:38 ` Pavel Machek
0 siblings, 2 replies; 45+ messages in thread
From: Russell King @ 2004-01-11 13:53 UTC (permalink / raw)
To: Guennadi Liakhovetski, Helge Hafting; +Cc: linux-kernel
On Sun, Jan 11, 2004 at 02:18:57PM +0100, Helge Hafting wrote:
> On Sat, Jan 10, 2004 at 11:42:45PM +0100, Guennadi Liakhovetski wrote:
> > The only my doubt was - yes, you upgrade the __server__, so, you look in
> > Changes, upgrade all necessary stuff, or just upgrade blindly (as does
> > happen sometimes, I believe) a distribution - and the server works, fine.
> > What I find non-obvious, is that on updating the server you have to
> > re-configure __clients__, see? Just think about a network somewhere in a
>
> If you upgrade the server and read "Changes", then a note in changes might
> say that "you need to configure carefully or some clients could get in trouble."
> (If the current "Changes" don't have that - post a documentation patch.)
[This is more to Guennadi than Helge]
I don't see why such a patch to "Changes" should be necessary. The
problem is most definitely with the client hardware, and not the
server software.
The crux of this problem comes down to the SMC91C111 having only a
small on-board packet buffer, which is capable of storing only about
4 packets (both TX and RX). This means that if you receive 8 packets
with high enough interrupt latency, you _will_ drop some of those
packets.
Note that this is independent of whether you're using DMA mode with
the SMC91C111 - DMA mode only allows you to off load the packets from
the chip faster once you've discovered you have a packet to off load
via an interrupt.
It won't be just NFS that's affected - eg, if you have 4kB NFS packets
and several machines broadcast an ARP at the same time, you'll again
run out of packet space on the SMC91C111. Does that mean you should
somehow change the way ARP works?
Sure, reducing the NFS packet size relieves the problem, but that's
just a workaround for the symptom and nothing more. It's exactly
the same type of workaround as switching the SMC91C111 to operate at
10Mbps only - both work by reducing the rate at which packets are
received by the target, thereby offsetting the interrupt latency
and packet unload times.
Basically, the SMC91C111 is great for use on small, *well controlled*
embedded networks, but anything else is asking for trouble.
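Russell's arithmetic can be sketched numerically. A rough model follows, assuming a 1500-byte Ethernet MTU and plain IPv4; the counts come out slightly higher than the ones observed in the thread because rsize counts NFS payload rather than the raw datagram with RPC/UDP overhead:

```python
import math

MTU, IP_HDR, UDP_HDR = 1500, 20, 8
FRAG_DATA = (MTU - IP_HDR) // 8 * 8   # 1480 data bytes per fragment (8-byte aligned)

def fragments(udp_payload):
    """IPv4 fragments needed to carry one UDP datagram of this size."""
    return math.ceil((udp_payload + UDP_HDR) / FRAG_DATA)

# An 8k datagram takes ~6 back-to-back frames, 16k takes ~12, 32k ~23.
# A NIC that buffers only ~4 frames is almost guaranteed to drop some of
# a 12-frame burst - and losing one fragment discards the whole datagram.
```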
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/
2.6 Serial core
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-11 13:53 ` Russell King
@ 2004-01-11 14:24 ` Guennadi Liakhovetski
2004-01-15 11:38 ` Pavel Machek
1 sibling, 0 replies; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-11 14:24 UTC (permalink / raw)
To: Russell King; +Cc: Helge Hafting, linux-kernel
On Sun, 11 Jan 2004, Russell King wrote:
> Basically, the SMC91C111 is great for use on small, *well controlled*
> embedded networks, but anything else is asking for trouble.
Ok, thanks. Well, just out of curiosity (also why I concluded it might
have been a more general problem - because I had it on both my ARM boards)
- where is the bottleneck likely to be on a SA system with a Netgear FA411
PCMCIA card (NE2000-compatible)? Just a slow CPU?
Thanks
Guennadi
---
Guennadi Liakhovetski
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-11 13:53 ` Russell King
2004-01-11 14:24 ` Guennadi Liakhovetski
@ 2004-01-15 11:38 ` Pavel Machek
1 sibling, 0 replies; 45+ messages in thread
From: Pavel Machek @ 2004-01-15 11:38 UTC (permalink / raw)
To: Guennadi Liakhovetski, Helge Hafting, linux-kernel
Hi!
> > > The only my doubt was - yes, you upgrade the __server__, so, you look in
> > > Changes, upgrade all necessary stuff, or just upgrade blindly (as does
> > > happen sometimes, I believe) a distribution - and the server works, fine.
> > > What I find non-obvious, is that on updating the server you have to
> > > re-configure __clients__, see? Just think about a network somewhere in a
> >
> > If you upgrade the server and read "Changes", then a note in changes might
> > say that "you need to configure carefully or some clients could get in trouble."
> > (If the current "Changes" don't have that - post a documentation patch.)
>
> [This is more to Guennadi than Helge]
>
> I don't see why such a patch to "Changes" should be necessary. The
> problem is most definitely with the client hardware, and not the
> server software.
>
> The crux of this problem comes down to the SMC91C111 having only a
> small on-board packet buffer, which is capable of storing only about
> 4 packets (both TX and RX). This means that if you receive 8 packets
> with high enough interrupt latency, you _will_ drop some of those
> packets.
I believe the problem is in software... basically, UDP is broken. I don't
think you can call hw broken just because of a small RX ring. The RX ring
has to have some fixed size, and if the OS is not fast enough, well, some
packets go on the floor.
I believe SW should deal with the RX ring being just one packet big, and I
believe that UDP is to blame...
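Pavel's point about UDP's all-or-nothing fragmentation can be quantified with a toy model; the per-fragment delivery probability here is an assumed illustrative parameter, not a measured one:

```python
def datagram_success(p_frag, nfrags):
    """Probability a fragmented UDP datagram arrives intact:
    every single IP fragment must survive."""
    return p_frag ** nfrags

# Even with 99% per-fragment delivery, a 12-fragment 16k read fails
# roughly 11% of the time - and NFS-over-UDP must resend the whole
# datagram, not just the missing fragment.
```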
Pavel
--
When do you have a heart between your knees?
[Johanka's followup: and *two* hearts?]
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 14:30 ` Trond Myklebust
2004-01-10 20:04 ` Guennadi Liakhovetski
@ 2004-01-12 5:06 ` Bill Davidsen
2004-01-12 14:27 ` Trond Myklebust
1 sibling, 1 reply; 45+ messages in thread
From: Bill Davidsen @ 2004-01-12 5:06 UTC (permalink / raw)
To: linux-kernel
Trond Myklebust wrote:
> No! People who have problems with the support for large rsize/wsize
> under UDP due to lost fragments can
>
> a) Reduce r/wsize themselves using mount
> b) Use TCP instead
>
> The correct solution to this problem is (b). I.e. we convert mount to
> use TCP as the default if it is available. That is consistent with what
> all other modern implementations do.
>
> Changing a hard maximum on the server in order to fit the lowest common
> denominator client is simply wrong.
So set the default buffer size to 8k if UDP is being used. Other than
getting people to believe 2.6 is broken, you buy nothing. People running
UDP are probably not cutting edge state of the art, let the default be
small and the client negotiate up if desired.
Why do so many Linux people have the idea that because a standard says
they CAN do something, it's fine to do it in a way which doesn't conform
to common practice? And Linux 2.4 practice should count even if you
pretend that Solaris, AIX, Windows and BSD don't count...
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-12 5:06 ` Bill Davidsen
@ 2004-01-12 14:27 ` Trond Myklebust
2004-01-12 15:12 ` Trond Myklebust
0 siblings, 1 reply; 45+ messages in thread
From: Trond Myklebust @ 2004-01-12 14:27 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel
On Mon, 12/01/2004 at 00:06, Bill Davidsen wrote:
> Why do so many Linux people have the idea that because a standard says
> they CAN do something, it's fine to do it in a way which doesn't conform
> to common practice. And Linux 2.4 practice should count even if you
> pretend that Solaris, AIX, Windows and BSD don't count...
Wake up and smell the new millennium. Networking has all grown up while
you were asleep. We have these new cool things called "switches", NICs
with bigger buffers,...
The 8k limit that you find in RFC1094 was an ad-hoc "limit" based purely
on testing using pre-1989 hardware. AFAIK most if not all of the
commercial vendors (Solaris, AIX, Windows/Hummingbird, EMC and Netapp)
are currently setting the defaults to 32k block sizes for both TCP
and UDP.
Most of them want to bump that to a couple of Mbyte in the very near
future.
Linux 2.4 didn't have support for anything beyond 8k. BSD sets 32k for
TCP, and 8k for UDP for some reason.
Trond
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-12 14:27 ` Trond Myklebust
@ 2004-01-12 15:12 ` Trond Myklebust
2004-01-16 5:44 ` Mike Fedyk
0 siblings, 1 reply; 45+ messages in thread
From: Trond Myklebust @ 2004-01-12 15:12 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel
> The 8k limit that you find in RFC1094 was an ad-hoc "limit" based purely
> on testing using pre-1989 hardware. AFAIK most if not all of the
> commercial vendors (Solaris, AIX, Windows/Hummingbird, EMC and Netapp)
> are all currently setting the defaults to 32k block sizes for both TCP
> and UDP.
> Most of them want to bump that to a couple of Mbyte in the very near
> future.
Note: the future Mbyte sizes can, of course, only be supported on TCP
since UDP has an inherent limit at 64k. The de-facto limit on UDP is
therefore likely to remain at 32k (although I think at least one vendor
has already tried pushing it to 48k).
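The 64k ceiling follows directly from the IPv4 header fields; a quick check (assuming IPv4 without options):

```python
IP_TOTAL_MAX = 65535                      # 16-bit total-length field
UDP_MAX_PAYLOAD = IP_TOTAL_MAX - 20 - 8   # minus IPv4 and UDP headers

# 32k is the largest power-of-two r/wsize that fits with room left for
# RPC headers; a full 64k cannot, hence the de-facto 32k limit on UDP.
print(UDP_MAX_PAYLOAD)  # -> 65507
```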
Trond
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-12 15:12 ` Trond Myklebust
@ 2004-01-16 5:44 ` Mike Fedyk
2004-01-16 6:05 ` Trond Myklebust
0 siblings, 1 reply; 45+ messages in thread
From: Mike Fedyk @ 2004-01-16 5:44 UTC (permalink / raw)
To: Trond Myklebust; +Cc: Bill Davidsen, linux-kernel
On Mon, Jan 12, 2004 at 10:12:03AM -0500, Trond Myklebust wrote:
>
> > The 8k limit that you find in RFC1094 was an ad-hoc "limit" based purely
> > on testing using pre-1989 hardware. AFAIK most if not all of the
> > commercial vendors (Solaris, AIX, Windows/Hummingbird, EMC and Netapp)
> > are all currently setting the defaults to 32k block sizes for both TCP
> > and UDP.
> > Most of them want to bump that to a couple of Mbyte in the very near
> > future.
>
> Note: the future Mbyte sizes can, of course, only be supported on TCP
> since UDP has an inherent limit at 64k. The de-facto limit on UDP is
> therefore likely to remain at 32k (although I think at least one vendor
> has already tried pushing it to 48k).
Does the RPC max size limit change with memory or filesystem?
I have one system (K7 2200, 1.5GB, ext3) where it uses 32K RPCs, and
another (P2 300, 168MB, reiserfs3) where it uses 8K RPCs, even if I
request larger max sizes; both are running 2.6.1-bk2.
Strange...
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-16 5:44 ` Mike Fedyk
@ 2004-01-16 6:05 ` Trond Myklebust
2004-01-16 6:53 ` Mike Fedyk
0 siblings, 1 reply; 45+ messages in thread
From: Trond Myklebust @ 2004-01-16 6:05 UTC (permalink / raw)
To: Mike Fedyk; +Cc: Bill Davidsen, linux-kernel
On Fri, 16/01/2004 at 00:44, Mike Fedyk wrote:
> Does the RPC max size limit change with memory or filesystem?
>
> I have one system (K7 2200, 1.5GB, ext3) where it uses 32K RPCs, and another
> (P2 300, 168MB, reiserfs3) and it uses 8k RPCs, even if I request larger max
> sizes, and they're both running 2.6.1-bk2.
The maximum allowable size is set by the server. If the server is
running 2.6.1, then it should normally support 32k reads and writes
(unless there is a bug somewhere).
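One way to see what was actually negotiated is to inspect the mount options on the client side; a sketch parsing a /proc/mounts-style line (the sample entry below is made up):

```python
def nfs_rsize(mount_line):
    """Pull the negotiated rsize out of one /proc/mounts entry."""
    options = mount_line.split()[3]       # 4th field holds the option list
    for opt in options.split(","):
        if opt.startswith("rsize="):
            return int(opt.split("=", 1)[1])
    return None                           # rsize not listed

line = "server:/export /mnt nfs rw,v3,rsize=32768,wsize=32768,udp 0 0"
print(nfs_rsize(line))  # -> 32768
```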
Cheers,
Trond
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-16 6:05 ` Trond Myklebust
@ 2004-01-16 6:53 ` Mike Fedyk
0 siblings, 0 replies; 45+ messages in thread
From: Mike Fedyk @ 2004-01-16 6:53 UTC (permalink / raw)
To: Trond Myklebust; +Cc: Bill Davidsen, linux-kernel
On Fri, Jan 16, 2004 at 01:05:45AM -0500, Trond Myklebust wrote:
> On Fri, 16/01/2004 at 00:44, Mike Fedyk wrote:
> > Does the RPC max size limit change with memory or filesystem?
> >
> > I have one system (K7 2200, 1.5GB, ext3) where it uses 32K RPCs, and another
> > (P2 300, 168MB, reiserfs3) and it uses 8k RPCs, even if I request larger max
> > sizes, and they're both running 2.6.1-bk2.
>
> The maximum allowable size is set by the server. If the server is
> running 2.6.1, then it should normally support 32k reads and writes
> (unless there is a bug somewhere).
The two systems above are nfs servers.
Mike
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 11:10 ` Guennadi Liakhovetski
2004-01-10 14:30 ` Trond Myklebust
@ 2004-01-10 22:34 ` Mike Fedyk
2004-01-10 22:52 ` Guennadi Liakhovetski
1 sibling, 1 reply; 45+ messages in thread
From: Mike Fedyk @ 2004-01-10 22:34 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-kernel
On Sat, Jan 10, 2004 at 12:10:46PM +0100, Guennadi Liakhovetski wrote:
> On Fri, 9 Jan 2004, Mike Fedyk wrote:
>
> > On Sat, Jan 10, 2004 at 01:38:00AM +0100, Guennadi Liakhovetski wrote:
> > > With 2.6 (on the server, same client) the client reads about 16K at a
> > > time, split into 11 fragments, and then packets number 9 and 10 get
> > > lost... This all with a StrongARM client and a PCMCIA network-card. With a
> > > PXA-client (400MHz compared to 200MHz SA) and an on-board eth smc91x, it
> > > gets the first 5 fragments, and then misses every other fragment. Again -
> > > in both cases I was copying files to RAM. Yes, 2.6 sends fragments in
> > > direct order.
> >
> > Is that an x86 server, and an arm client?
>
> Yes. The reason for the problem seems to be the increased default size of
> the transfer unit of NFS from 2.4 to 2.6. 8K under 2.4 was still ok, 16K
> is too much - only the first 5 fragments pass fine, then data starts to
> get lost. If it is a hardware limitation (not all platforms can manage
> 16K), it should be probably set back to 8K. If the reason is that some
> buffer size was not increased correspondingly, then this should be done.
>
> Just checked - mounting with rsize=8192,wsize=8192 fixes the problem -
> there are again 5 fragments and they all are received properly.
What version is the arm kernel you're running on the client, and where is it
from?
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 22:34 ` Mike Fedyk
@ 2004-01-10 22:52 ` Guennadi Liakhovetski
2004-01-10 22:57 ` Mike Fedyk
0 siblings, 1 reply; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-10 22:52 UTC (permalink / raw)
To: Mike Fedyk; +Cc: linux-kernel
On Sat, 10 Jan 2004, Mike Fedyk wrote:
> What version is the arm kernel you're running on the client, and where is it
> from?
2.4.19-rmk7, 2.4.21-rmk1-pxa1, 2.6.0-rmk2-pxa. All self-compiled with
self-ported platform-specific patches. Sure, none of those patches touches
any general NFS / network code. They might modify some drivers (including
network ones) and, of course, the core functionality (interrupt handling,
memory, DMA, etc.). The first two also had real-time patches (RTAI); 2.6
on PXA didn't. The PXA patch for 2.6 was self-ported from 2.6.0-rmk1-test2,
IIRC. So, theoretically, you can blame any of those modifications, but I
highly doubt that I managed to mess up all 3 kernels on 2 different
platforms to produce the same error, whereas all the other network
protocols that I checked (i.e. ftp, http, telnet, tftp, tcp-nfs) work.
Guennadi
---
Guennadi Liakhovetski
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-10 22:52 ` Guennadi Liakhovetski
@ 2004-01-10 22:57 ` Mike Fedyk
2004-01-10 23:00 ` Guennadi Liakhovetski
0 siblings, 1 reply; 45+ messages in thread
From: Mike Fedyk @ 2004-01-10 22:57 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: linux-kernel
On Sat, Jan 10, 2004 at 11:52:16PM +0100, Guennadi Liakhovetski wrote:
> On Sat, 10 Jan 2004, Mike Fedyk wrote:
>
> > What version is the arm kernel you're running on the client, and where is it
> > from?
>
> 2.4.19-rmk7, 24.4.21-rmk1-pxa1, 2.6.0-rmk2-pxa. All self-compiled with
> self-ported platform-specific patches. Sure, none of those patches touches
> any NFS / network general code. It might modify some (including network)
> drivers, and, of course the core functionality (interrupt-handling,
> memory, DMA, etc.) The first 2 also had real-time patches (RTAI), 2.6 on
> PXA didn't. The pxa-patch for 2.6 was self-ported from 2.6.0-rmk1-test2,
> IIRC. So, theoretically, you can blame any of those modifications, but I
> highly doubt, that I managed to mess up all 3 kernels on 2 different
> platforms to produce the same error, whereas all the rest (of course,
> those, that I checked, i.e. ftp, http, telnet, tftp, tcp-nfs) network
> protocols work.
Can you double check with a vanilla kernel.org 2.4.24 x86 client?
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-07 18:13 ` Guennadi Liakhovetski
2004-01-07 18:19 ` Mike Fedyk
@ 2004-01-08 21:42 ` Pavel Machek
2004-01-12 23:18 ` Guennadi Liakhovetski
1 sibling, 1 reply; 45+ messages in thread
From: Pavel Machek @ 2004-01-08 21:42 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: Mike Fedyk, linux-kernel
Hi!
> > > It is not just a problem of 2.6 with those specific network configurations
> > > - ftp / http / tftp transfers work fine. E.g. wget of the same file on the
> > > PXA with 2.6.0 from the PC1 with 2.4.21 over http takes about 2s. So, it
> > > is 2.6 + NFS.
> > >
> > > Is it fixed somewhere (2.6.1-rcx?), or what should I try / what further
> > > information is required?
> >
> > You will probably need to look at some tcpdump output to debug the problem...
>
> Yep, just have done that - well, they differ... First obvious thing that I
> noticed is that 2.6 is trying to read bigger blocks (32K instead of 8K),
> but then - so far I cannot interpret what happens after the start of the
I've seen a slow machine (386sx with ne1000) that could not receive 7
full-sized packets back-to-back. You are sending 22 full packets
back-to-back. I'd expect some of them to be (almost deterministically)
lost, and no progress ever made.
In the same scenario, TCP detects "congestion" and works mostly okay.
On the ne1000 machine, TCP was still able to do 200KB/sec on a
10Mbps network. Check if your slow machines are seeing all the packets
you send.
Pavel
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-08 21:42 ` Pavel Machek
@ 2004-01-12 23:18 ` Guennadi Liakhovetski
2004-01-12 23:28 ` Jesper Juhl
2004-01-13 0:39 ` Pavel Machek
0 siblings, 2 replies; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-12 23:18 UTC (permalink / raw)
To: Pavel Machek; +Cc: Mike Fedyk, linux-kernel
On Thu, 8 Jan 2004, Pavel Machek wrote:
> I've seen slow machine (386sx with ne1000) that could not receive 7 full-sized packets
> back-to-back. You are sending 22 full packets back-to-back.
> I'd expect some of them to be (almost deterministicaly) lost,
> and no progress ever made.
As you, probably, have already seen from further emails on this thread, we
did find out that packets were indeed lost due to various performance
reasons. And the best solution does seem to be switching to TCP-NFS, and
making it the default choice for mount (where available) seems to be a
very good idea.
Thanks for replying anyway.
> In same scenario, TCP detects "congestion" and works mostly okay.
Hm, as long as we are already on this - can you give me a hint / pointer
as to how TCP _detects_ congestion? Does it adjust packet sizes, some
other parameters? Just for curiosity's sake.
Thanks
Guennadi
---
Guennadi Liakhovetski
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-12 23:18 ` Guennadi Liakhovetski
@ 2004-01-12 23:28 ` Jesper Juhl
2004-01-13 0:39 ` Pavel Machek
1 sibling, 0 replies; 45+ messages in thread
From: Jesper Juhl @ 2004-01-12 23:28 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: Pavel Machek, Mike Fedyk, linux-kernel
On Tue, 13 Jan 2004, Guennadi Liakhovetski wrote:
> On Thu, 8 Jan 2004, Pavel Machek wrote:
>
> > In same scenario, TCP detects "congestion" and works mostly okay.
>
> Hm, as long as we are already on this - can you give me a hint / pointer
> how does TCP _detect_ a congestion? Does it adjust packet sizes, some
> other parameters? Just for the curiousity sake.
>
RFC 2581 describes this :
http://www.rfc-editor.org/cgi-bin/rfcdoctype.pl?loc=RFC&letsgo=2581&type=ftp&file_format=txt
3390 updates 2581 :
http://www.rfc-editor.org/cgi-bin/rfcdoctype.pl?loc=RFC&letsgo=3390&type=ftp&file_format=txt
-- Jesper Juhl
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-12 23:18 ` Guennadi Liakhovetski
2004-01-12 23:28 ` Jesper Juhl
@ 2004-01-13 0:39 ` Pavel Machek
2004-01-14 3:00 ` Daniel Roesen
2004-01-14 18:16 ` Guennadi Liakhovetski
1 sibling, 2 replies; 45+ messages in thread
From: Pavel Machek @ 2004-01-13 0:39 UTC (permalink / raw)
To: Guennadi Liakhovetski; +Cc: Mike Fedyk, linux-kernel
Hi!
> > I've seen slow machine (386sx with ne1000) that could not receive 7 full-sized packets
> > back-to-back. You are sending 22 full packets back-to-back.
> > I'd expect some of them to be (almost deterministicaly) lost,
> > and no progress ever made.
>
> As you, probably, have already seen from further emails on this thread, we
> did find out that packets were indeed lost due to various performance
> reasons. And the best solution does seem to be switching to TCP-NFS, and
> making it the default choice for mount (where available) seems to be a
> very good idea.
>
> Thanks for replying anyway.
>
> > In same scenario, TCP detects "congestion" and works mostly okay.
>
> Hm, as long as we are already on this - can you give me a hint / pointer
> how does TCP _detect_ a congestion? Does it adjust packet sizes, some
> other parameters? Just for the curiousity sake.
If TCP sees packets are lost, it says "oh, congestion", and starts
sending packets more slowly, i.e. it introduces delays between packets.
When they no longer get lost, it speeds up to full speed.
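The mechanism being described is the congestion window of RFC 2581; a toy, event-driven sketch (the constants and the single-step shape are simplifications of the real per-RTT behaviour):

```python
MSS = 1460  # typical Ethernet maximum segment size

def react(cwnd, ssthresh, event):
    """One step of RFC 2581-style congestion control (simplified)."""
    if event == "ack":
        if cwnd < ssthresh:
            cwnd += MSS                   # slow start: exponential growth
        else:
            cwnd += MSS * MSS // cwnd     # congestion avoidance: ~+1 MSS per RTT
    elif event == "loss":
        ssthresh = max(cwnd // 2, 2 * MSS)  # multiplicative decrease
        cwnd = 2 * MSS                      # restart from a small window
    return cwnd, ssthresh

cwnd, ssthresh = 2 * MSS, 64 * 1024
for ev in ["ack", "ack", "ack", "loss", "ack"]:
    cwnd, ssthresh = react(cwnd, ssthresh, ev)
# cwnd climbs during the acks, collapses on the loss, then climbs again
```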
Pavel
--
When do you have a heart between your knees?
[Johanka's followup: and *two* hearts?]
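[Editorial aside: the "slow down on loss, speed back up otherwise" behaviour Pavel describes is the classic additive-increase/multiplicative-decrease rule. A minimal Python sketch follows; the loss pattern and window limit are invented purely for illustration.]

```python
# AIMD sketch: back off sharply on loss, creep back up otherwise.
# Windows are in MSS units; max_cwnd is an assumed example value.

def aimd(losses, max_cwnd=32):
    """losses: one boolean per RTT (True = a loss was seen that round).
    Returns the congestion window after each RTT."""
    cwnd = max_cwnd
    history = []
    for lost in losses:
        if lost:
            cwnd = max(1, cwnd // 2)         # multiplicative decrease
        else:
            cwnd = min(max_cwnd, cwnd + 1)   # additive increase
        history.append(cwnd)
    return history

print(aimd([False, True, False, False, True]))  # [32, 16, 17, 18, 9]
```

Note how a single loss halves the window while recovery is only one unit per round-trip; that asymmetry is why sustained loss keeps TCP throughput well below line rate.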
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-13 0:39 ` Pavel Machek
@ 2004-01-14 3:00 ` Daniel Roesen
2004-01-14 18:16 ` Guennadi Liakhovetski
1 sibling, 0 replies; 45+ messages in thread
From: Daniel Roesen @ 2004-01-14 3:00 UTC (permalink / raw)
To: linux-kernel
On Tue, Jan 13, 2004 at 01:39:08AM +0100, Pavel Machek wrote:
> > Hm, as long as we are already on this - can you give me a hint / pointer
> > on how TCP _detects_ congestion? Does it adjust packet sizes, or some
> > other parameters? Just for curiosity's sake.
>
> If TCP sees that packets are lost, it says "oh, congestion" and starts
> sending packets more slowly, i.e. it introduces delays between packets.
> When they no longer get lost, it speeds back up to full speed.
You missed the important part... TCP also measures latency and adjusts
to that. TCP overreacts to sudden, unexpected packet loss by shrinking
the window right down.
This is why traffic "policing" sucks for TCP, and "shaping" (queuing)
works much better: as latency rises when the limit is reached, the TCP
sender adapts by sending more slowly, thus preventing packet loss.
Regards,
Daniel
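[Editorial aside: Daniel's policing-versus-shaping point can be illustrated with a toy round-based model. All rates and round counts below are invented example numbers, and a real TCP is far subtler; the sketch only shows why loss-based backoff hurts more than queueing delay.]

```python
# Rough illustration: a policer drops everything above the rate limit
# (TCP sees loss and halves its window), while a shaper queues the
# excess (no loss; rising latency simply paces the sender).

def run(limit, policing, rounds=20):
    cwnd, delivered = limit, 0
    for _ in range(rounds):
        if cwnd > limit:
            if policing:
                delivered += limit         # excess packets dropped...
                cwnd = max(1, cwnd // 2)   # ...so TCP halves its window
            else:
                delivered += limit         # excess queued, no loss seen;
                # rising queue latency keeps cwnd from growing further
        else:
            delivered += cwnd
            cwnd += 1                      # additive increase, no loss
    return delivered

print(run(10, policing=True))   # 156 - policed link delivers less
print(run(10, policing=False))  # 200 - shaped link stays at the limit
```

Under policing the window repeatedly collapses and has to climb back, so the link spends much of its time under-utilized; under shaping the sender settles at the limit and stays there.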
* Re: 2.6.0 NFS-server low to 0 performance
2004-01-13 0:39 ` Pavel Machek
2004-01-14 3:00 ` Daniel Roesen
@ 2004-01-14 18:16 ` Guennadi Liakhovetski
1 sibling, 0 replies; 45+ messages in thread
From: Guennadi Liakhovetski @ 2004-01-14 18:16 UTC (permalink / raw)
To: Pavel Machek; +Cc: Mike Fedyk, linux-kernel
On Tue, 13 Jan 2004, Pavel Machek wrote:
> If TCP sees that packets are lost, it says "oh, congestion" and starts
> sending packets more slowly, i.e. it introduces delays between packets.
> When they no longer get lost, it speeds back up to full speed.
Thanks to all!
Guennadi
---
Guennadi Liakhovetski
end of thread, other threads: [~2004-01-16 6:54 UTC | newest]
Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <1cpDr-5az-11@gated-at.bofh.it>
[not found] ` <1csrv-Er-9@gated-at.bofh.it>
2004-01-10 16:08 ` 2.6.0 NFS-server low to 0 performance Andi Kleen
2004-01-10 16:19 ` Trond Myklebust
2004-01-12 14:40 James Pearson
2004-01-12 15:22 ` Trond Myklebust
2004-01-13 11:08 ` Muli Ben-Yehuda
-- strict thread matches above, loose matches on Subject: below --
2004-01-06 0:46 Guennadi Liakhovetski
2004-01-07 13:36 ` Guennadi Liakhovetski
2004-01-07 22:30 ` bill davidsen
2004-01-08 12:48 ` Guennadi Liakhovetski
2004-01-07 17:49 ` Mike Fedyk
2004-01-07 18:13 ` Guennadi Liakhovetski
2004-01-07 18:19 ` Mike Fedyk
2004-01-07 19:06 ` Guennadi Liakhovetski
2004-01-09 10:08 ` Guennadi Liakhovetski
2004-01-09 18:00 ` Mike Fedyk
2004-01-10 0:38 ` Guennadi Liakhovetski
2004-01-10 1:38 ` Mike Fedyk
2004-01-10 11:10 ` Guennadi Liakhovetski
2004-01-10 14:30 ` Trond Myklebust
2004-01-10 20:04 ` Guennadi Liakhovetski
2004-01-10 21:57 ` Trond Myklebust
2004-01-10 22:14 ` Mike Fedyk
2004-01-10 22:47 ` Trond Myklebust
2004-01-10 22:42 ` Guennadi Liakhovetski
2004-01-10 22:51 ` Jesper Juhl
2004-01-11 13:18 ` Helge Hafting
2004-01-11 13:53 ` Russell King
2004-01-11 14:24 ` Guennadi Liakhovetski
2004-01-15 11:38 ` Pavel Machek
2004-01-12 5:06 ` Bill Davidsen
2004-01-12 14:27 ` Trond Myklebust
2004-01-12 15:12 ` Trond Myklebust
2004-01-16 5:44 ` Mike Fedyk
2004-01-16 6:05 ` Trond Myklebust
2004-01-16 6:53 ` Mike Fedyk
2004-01-10 22:34 ` Mike Fedyk
2004-01-10 22:52 ` Guennadi Liakhovetski
2004-01-10 22:57 ` Mike Fedyk
2004-01-10 23:00 ` Guennadi Liakhovetski
2004-01-08 21:42 ` Pavel Machek
2004-01-12 23:18 ` Guennadi Liakhovetski
2004-01-12 23:28 ` Jesper Juhl
2004-01-13 0:39 ` Pavel Machek
2004-01-14 3:00 ` Daniel Roesen
2004-01-14 18:16 ` Guennadi Liakhovetski