* NFSv4 memory allocation bug?
@ 2011-01-12 17:14 Txema Heredia Genestar
2011-01-12 18:35 ` J. Bruce Fields
0 siblings, 1 reply; 10+ messages in thread
From: Txema Heredia Genestar @ 2011-01-12 17:14 UTC (permalink / raw)
To: linux-nfs
Hello everybody,
I have a host serving disks over the network using NFS v3, and it has
been working fine for three years. Recently I have been benchmarking its
throughput and I wanted to test how it would perform if I used NFS v4
instead, but I couldn't mount a single drive.
This is what I see:
Client:
client# mount -v -t nfs4 serverurl:/ /mnt/NFS_test/
mount: pinging: prog 100003 vers 4 prot tcp port 2049
---no response---
Server:
No error messages. After I enabled debugging with echo "65535" >
/proc/sys/sunrpc/nfsd_debug, I see the following:
/var/log/messages:
Jan 12 13:02:35 server kernel: NFSD: laundromat service - starting
Jan 12 13:02:35 server kernel: NFSD: laundromat_main - sleeping for 90
seconds
Jan 12 13:02:40 server kernel: nfsd_dispatch: vers 4 proc 0
Jan 12 13:02:40 server kernel: nfsd_dispatch: vers 4 proc 0
Jan 12 13:02:40 server kernel: nfsd_dispatch: vers 4 proc 1
Jan 12 13:02:40 server kernel: nfsv4 compound op #1: 24
Jan 12 13:02:40 server kernel: nfsd: Dropping request due to malloc failure!
And if rpc debugging is enabled:
Jan 11 18:50:48 server kernel: svc: socket ffff810252868680 busy, not
enqueued
Jan 11 18:50:48 server kernel: svc: calling dispatcher
Jan 11 18:50:48 server kernel: nfsd_dispatch: vers 4 proc 0
Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580
sendto([ffff8102273eb000 28... ], 28) = 28 (addr cbc916ac)
Jan 11 18:50:48 server kernel: svc: socket ffff810252868680 busy, not
enqueued
Jan 11 18:50:48 server kernel: svc: server ffff81027d33d000 waiting for
data (to = 900000)
Jan 11 18:50:48 server kernel: svc: server ffff81025e481400, socket
ffff810127c28880, inuse=1
Jan 11 18:50:48 server kernel: svc: tcp_recv ffff810127c28880 data 0
conn 1 close 0
Jan 11 18:50:48 server kernel: svc: tcp_accept ffff810127c28880 sock
ffff81029821b340
Jan 11 18:50:48 server kernel: svc: tcp_accept ffff81027c7f08c0 allocated
Jan 11 18:50:48 server kernel: svc: got len=0
Jan 11 18:50:48 server kernel: svc: server ffff81025e481400 waiting for
data (to = 900000)
Jan 11 18:50:48 server kernel: svc: server ffff810251911800, socket
ffff81029c53a580, inuse=1
Jan 11 18:50:48 server kernel: svc: tcp_recv ffff81029c53a580 data 1
conn 0 close 0
Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580
recvfrom(ffff81029c53a628, 4) = -11
Jan 11 18:50:48 server kernel: RPC: TCP recvfrom got EAGAIN
Jan 11 18:50:48 server kernel: svc: got len=-11
Jan 11 18:50:48 server kernel: svc: server ffff810251911800 waiting for
data (to = 900000)
Jan 11 18:50:48 server kernel: svc: socket ffff810252868680 TCP data
ready (svsk ffff81029c53a580)
Jan 11 18:50:48 server kernel: svc: socket ffff810252868680 served by
daemon ffff810251911800
Jan 11 18:50:48 server kernel: svc: server ffff810251911800, socket
ffff81029c53a580, inuse=1
Jan 11 18:50:48 server kernel: svc: tcp_recv ffff81029c53a580 data 1
conn 0 close 0
Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580
recvfrom(ffff81029c53a62c, 0) = 4
Jan 11 18:50:48 server kernel: svc: TCP record, 144 bytes
Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580
recvfrom(ffff81011f93a090, 3952) = 144
Jan 11 18:50:48 server kernel: svc: TCP complete record (144 bytes)
Jan 11 18:50:48 server kernel: svc: socket ffff810252868680 served by
daemon ffff81025e481400
Jan 11 18:50:48 server kernel: svc: got len=144
Jan 11 18:50:48 server kernel: svc: server ffff81025e481400, socket
ffff81029c53a580, inuse=2
Jan 11 18:50:48 server kernel: svc: svc_authenticate (1)
Jan 11 18:50:48 server kernel: svc: tcp_recv ffff81029c53a580 data 1
conn 0 close 0
Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580
recvfrom(ffff81029c53a628, 4) = -11
Jan 11 18:50:48 server kernel: Want update, refage=120, age=0
Jan 11 18:50:48 server kernel: RPC: TCP recvfrom got EAGAIN
Jan 11 18:50:48 server kernel: svc: got len=-11
Jan 11 18:50:48 server kernel: svc: server ffff81025e481400 waiting for
data (to = 900000)
Jan 11 18:50:48 server kernel: svc: svc_process dropit
Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580 dropped request
Jan 11 18:50:48 server kernel: svc: server ffff810251911800 waiting for
data (to = 900000)
Jan 11 18:50:48 server kernel: revisit queued
Jan 11 18:50:48 server kernel: svc: socket ffff810252868680 served by
daemon ffff810251911800
Jan 11 18:50:48 server kernel: svc: server ffff810251911800, socket
ffff81029c53a580, inuse=1
Jan 11 18:50:48 server kernel: svc: tcp_recv ffff81029c53a580 data 0
conn 0 close 0
Jan 11 18:50:48 server kernel: svc: socket ffff810252868680 served by
daemon ffff81025e481400
Jan 11 18:50:48 server kernel: svc: got len=144
Jan 11 18:50:48 server kernel: svc: svc_authenticate (1)
Jan 11 18:50:48 server kernel: svc: server ffff81025e481400, socket
ffff81029c53a580, inuse=2
Jan 11 18:50:48 server kernel: svc: calling dispatcher
Jan 11 18:50:48 server kernel: nfsd_dispatch: vers 4 proc 1
Jan 11 18:50:48 server kernel: svc: tcp_recv ffff81029c53a580 data 0
conn 0 close 0
Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580
recvfrom(ffff81029c53a628, 4) = -11
Jan 11 18:50:48 server kernel: nfsv4 compound op #1: 24
Jan 11 18:50:48 server kernel: RPC: TCP recvfrom got EAGAIN
Jan 11 18:50:48 server kernel: svc: got len=-11
Jan 11 18:50:48 server kernel: svc: server ffff81025e481400 waiting for
data (to = 900000)
Jan 11 18:50:48 server kernel: Want update, refage=120, age=0
Jan 11 18:50:48 server kernel: nfsd: Dropping request due to malloc failure!
Jan 11 18:50:48 server kernel: svc: svc_process dropit
Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580 dropped request
Jan 11 18:50:48 server kernel: svc: server ffff810251911800 waiting for
data (to = 900000)
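The debug knobs used above are plain procfs writes. As a minimal sketch (the helper name and the overridable SUNRPC root are my own, for illustration; the real knobs need root), toggling them looks like:

```shell
#!/bin/sh
# Toggle the sunrpc debug masks used in this thread. 65535 turns every
# flag on; 0 turns them all off again. SUNRPC is overridable so the
# helper can be exercised against a scratch directory instead of the
# real procfs.
SUNRPC="${SUNRPC:-/proc/sys/sunrpc}"

set_debug() {
    knob="$1"   # nfsd_debug, rpc_debug or nfs_debug
    mask="$2"   # e.g. 65535 for everything, 0 for quiet
    echo "$mask" > "$SUNRPC/$knob"
}

# typical use, as root on the server:
#   set_debug nfsd_debug 65535    # verbose nfsd tracing to the kernel log
#   set_debug rpc_debug  65535
#   set_debug nfsd_debug 0        # back to normal when done
```

Remember to set the masks back to 0 afterwards; full tracing is very noisy on a busy server.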
Additionally, I have captured traffic with tcpdump. When mounting an
NFS4 drive from a working storage system, I see:
...
12:38:06.372303 IP client.907 > storage.nfs: . ack 29 win 46
<nop,nop,timestamp 4063464822 174132214>
12:38:06.372429 IP client.2364980656 > storage.nfs: 148 getattr [|nfs]
12:38:06.372792 IP storage.nfs > client.2364980656: reply ok 248 getattr
[|nfs]
12:38:06.372958 IP client.2381757872 > storage.nfs: 172 getattr [|nfs]
12:38:06.373132 IP storage.nfs > client.2381757872: reply ok 88 getattr
[|nfs]
12:38:06.373157 IP client.2398535088 > storage.nfs: 176 getattr [|nfs]
12:38:06.373316 IP storage.nfs > client.2398535088: reply ok 100 getattr
[|nfs]
12:38:06.373339 IP client.2415312304 > storage.nfs: 172 getattr [|nfs]
But when I mount the NFS4 share from my server on the same client, it
gets stuck on the "getattr" call:
...
12:36:37.051840 IP client.926 > server.nfs: . ack 29 win 140
<nop,nop,timestamp 4063375488 434039929>
12:36:37.051903 IP client.1734362088 > server.nfs: 148 getattr [|nfs]
12:36:37.090274 IP server.nfs > client.926: . ack 192 win 4742
<nop,nop,timestamp 434039939 4063375488>
---silence---
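One note on these traces: tcpdump's "[|nfs]" markers mean the NFS payload was cut off at the default snap length, so the compound operations cannot be decoded. A full-packet capture avoids that; as a sketch (the helper is hypothetical, and the interface/host names in the usage line are placeholders), one that only assembles the command string so it can be sanity-checked without root:

```shell
#!/bin/sh
# Build a full-packet NFS capture command line: -s 0 keeps whole packets
# so wireshark can decode the v4 compounds instead of showing "[|nfs]".
nfs_capture_cmd() {
    iface="$1"; outfile="$2"; server="$3"
    printf 'tcpdump -i %s -s 0 -w %s host %s and port 2049\n' \
        "$iface" "$outfile" "$server"
}

# e.g. run the printed command as root on the client:
#   nfs_capture_cmd eth0 nfs-mount.pcap serverurl
```

The resulting .pcap can then be opened in wireshark to inspect the individual NFSv4 operations.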
So I suppose that the "RPC: TCP recvfrom got EAGAIN" in the messages log
corresponds to that "getattr [|nfs]" call.
I have been searching around and have found several threads about
either the "malloc failure" message or the "EAGAIN" message, but I
haven't found anything concerning both at the same time. I have
also checked for this kind of problem in NFS4 and found nothing useful.
Could this be some kind of (already fixed) bug in my NFS implementation?
I'm running a pretty old version (SuSE LES 10.2, nfs-utils 1.0.7-36.2).
Thanks in advance,
Txema
My configuration files:
# cat /etc/sysconfig/nfs
USE_KERNEL_NFSD_NUMBER="4"
MOUNTD_PORT="980"
NFS_SECURITY_GSS="no"
NFS4_SUPPORT="yes"
# cat /etc/exports
#TESTS NFS 4
/NFS4exports
172.22.201.0/24(rw,sync,no_root_squash,no_subtree_check,secure,secure_locks,acl,mp,fsid=0,no_wdelay)
/NFS4exports
172.22.202.0/24(rw,sync,no_root_squash,no_subtree_check,secure,secure_locks,acl,mp,fsid=0,no_wdelay)
/NFS4exports/NFS_test_params
172.22.201.0/24(rw,sync,no_root_squash,no_subtree_check,secure,hide,secure_locks,acl,mp,fsid=255,no_wdelay)
/NFS4exports/NFS_test_params
172.22.202.0/24(rw,sync,no_root_squash,no_subtree_check,secure,hide,secure_locks,acl,mp,fsid=255,no_wdelay)
#NFS 3 working ones
#/sp/fs/NFS_test_params
172.22.201.0/24(rw,sync,no_root_squash,no_subtree_check,secure,hide,secure_locks,acl,mp,fsid=255,no_wdelay)
172.22.202.0/24(rw,sync,no_root_squash,no_subtree_check,secure,hide,secure_locks,acl,mp,fsid=255,no_wdelay)
# cat /etc/idmapd.conf
[General]
Verbosity=1
Pipefs-Directory=/var/lib/rpc_pipefs
Domain=mydomain
[Mapping]
Nobody-User=nobody
Nobody-Group=nobody
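For NFSv4, the Domain value in idmapd.conf has to agree between client and server, or owner/owner_group mapping fails. As a small sketch (the helper is mine, not part of nfs-utils) for pulling the value out of an idmapd.conf-style file so the two ends can be compared:

```shell
#!/bin/sh
# Extract the Domain setting from an idmapd.conf-style file; prints
# nothing if the key is absent. Run on each end and compare the output.
get_idmap_domain() {
    sed -n 's/^[[:space:]]*Domain[[:space:]]*=[[:space:]]*//p' "$1"
}

# usage:
#   get_idmap_domain /etc/idmapd.conf
```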
PS: This is my very first post on this list, so I'm not sure if I am
missing something.
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: NFSv4 memory allocation bug?
2011-01-12 17:14 NFSv4 memory allocation bug? Txema Heredia Genestar
@ 2011-01-12 18:35 ` J. Bruce Fields
2011-01-13 15:48 ` Txema Heredia Genestar
0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2011-01-12 18:35 UTC (permalink / raw)
To: Txema Heredia Genestar; +Cc: linux-nfs
On Wed, Jan 12, 2011 at 06:14:53PM +0100, Txema Heredia Genestar wrote:
> Hello everybody,
>
> I have a host serving disk through the network using NFS v3, and it
> has been working ok for 3 years. Recently I have been benchmarking
> its throughput and I wanted to test how it would work if I used NFS
> v4 instead, but I couldn't mount a single drive.
>
> This is what I see:
>
> Client:
> client# mount -v -t nfs4 serverurl:/ /mnt/NFS_test/
> mount: pinging: prog 100003 vers 4 prot tcp port 2049
> ---no response---
>
> Server:
> No error messages. Once I enabled debugging with echo "65535" >
> /proc/sys/sunrpc/nfsd_debug :
...
> Jan 11 18:50:48 server kernel: RPC: TCP recvfrom got EAGAIN
> Jan 11 18:50:48 server kernel: svc: got len=-11
> Jan 11 18:50:48 server kernel: svc: server ffff81025e481400 waiting
> for data (to = 900000)
> Jan 11 18:50:48 server kernel: Want update, refage=120, age=0
> Jan 11 18:50:48 server kernel: nfsd: Dropping request due to malloc failure!
I wouldn't take that error too seriously; it's normal when the server is
doing upcalls to rpc.svcgssd or rpc.idmapd.
> Jan 11 18:50:48 server kernel: svc: svc_process dropit
> Jan 11 18:50:48 server kernel: svc: socket ffff81029c53a580 dropped request
> Jan 11 18:50:48 server kernel: svc: server ffff810251911800 waiting
> for data (to = 900000)
>
>
>
> Additionally, I have checked tcpdump and found, when mounting an
> NFS4 drive from a working storage-system:
> ...
> 12:38:06.372303 IP client.907 > storage.nfs: . ack 29 win 46
> <nop,nop,timestamp 4063464822 174132214>
> 12:38:06.372429 IP client.2364980656 > storage.nfs: 148 getattr [|nfs]
> 12:38:06.372792 IP storage.nfs > client.2364980656: reply ok 248
> getattr [|nfs]
> 12:38:06.372958 IP client.2381757872 > storage.nfs: 172 getattr [|nfs]
> 12:38:06.373132 IP storage.nfs > client.2381757872: reply ok 88
> getattr [|nfs]
> 12:38:06.373157 IP client.2398535088 > storage.nfs: 176 getattr [|nfs]
> 12:38:06.373316 IP storage.nfs > client.2398535088: reply ok 100
> getattr [|nfs]
> 12:38:06.373339 IP client.2415312304 > storage.nfs: 172 getattr [|nfs]
>
>
> But when I mount from the same client, the NFS4 share from my server
> gets stuck on the "getattr" call
> ...
> 12:36:37.051840 IP client.926 > server.nfs: . ack 29 win 140
> <nop,nop,timestamp 4063375488 434039929>
> 12:36:37.051903 IP client.1734362088 > server.nfs: 148 getattr [|nfs]
> 12:36:37.090274 IP server.nfs > client.926: . ack 192 win 4742
> <nop,nop,timestamp 434039939 4063375488>
> ---silence---
Something like wireshark would give a few more details.
> So I suppose that the "RPC: TCP recvfrom got EAGAIN" on the messages
> log corresponds to that "getattr[|nfs]" call.
>
> I have been searching around and I have found several threads about
> either the "malloc failure" message or the "EAGAIN" message. But I
> haven't found anything concerning them both at the same time. I have
> also checked for this kind of problems in NFS4 and found nothing
> useful.
>
> May this be some kind of (already solved) bug in my nfs
> implementation? I'm running a pretty old version (SuSE LES 10.2,
> nfs-utils 1.0.7-36.2)
What kernel version does that correspond to?
My first impulse would be to make sure rpc.idmapd is running. (If not,
the server would do an upcall to idmapd and never get a response, hence
fail to respond to a client getattr.)
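The server-side idmap upcalls Bruce describes go through the sunrpc cache files under /proc/net/rpc (he points at nfs4.idtoname/content later in the thread; each such cache also exposes a channel file that the daemon is expected to read). A sketch of a sanity check, assuming that layout (the helper name and the overridable RPCDIR are mine, so it can be tried against a scratch tree):

```shell
#!/bin/sh
# Check that the kernel has registered the server-side idmap caches.
# Each registered sunrpc cache directory carries content/channel/flush
# files; a missing channel suggests the idmap cache was never set up.
RPCDIR="${RPCDIR:-/proc/net/rpc}"

check_idmap_channels() {
    rc=0
    for c in nfs4.idtoname nfs4.nametoid; do
        if [ -e "$RPCDIR/$c/channel" ]; then
            echo "$c: channel present"
        else
            echo "$c: channel missing"
            rc=1
        fi
    done
    return $rc
}
```

On a live server one would run it against the default /proc/net/rpc and, if the channels exist, verify rpc.idmapd actually holds them open.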
--b.
* Re: NFSv4 memory allocation bug?
2011-01-12 18:35 ` J. Bruce Fields
@ 2011-01-13 15:48 ` Txema Heredia Genestar
2011-01-13 16:19 ` J. Bruce Fields
0 siblings, 1 reply; 10+ messages in thread
From: Txema Heredia Genestar @ 2011-01-13 15:48 UTC (permalink / raw)
To: J. Bruce Fields; +Cc: linux-nfs
Hi Bruce, thanks for your answer
On 12/01/11 19:35, J. Bruce Fields wrote:
> On Wed, Jan 12, 2011 at 06:14:53PM +0100, Txema Heredia Genestar wrote:
>> Additionally, I have checked tcpdump and found, when mounting an
>> NFS4 drive from a working storage-system:
>> ...
>> 12:38:06.372303 IP client.907> storage.nfs: . ack 29 win 46
>> <nop,nop,timestamp 4063464822 174132214>
>> 12:38:06.372429 IP client.2364980656> storage.nfs: 148 getattr [|nfs]
>> 12:38:06.372792 IP storage.nfs> client.2364980656: reply ok 248
>> getattr [|nfs]
>> 12:38:06.372958 IP client.2381757872> storage.nfs: 172 getattr [|nfs]
>> 12:38:06.373132 IP storage.nfs> client.2381757872: reply ok 88
>> getattr [|nfs]
>> 12:38:06.373157 IP client.2398535088> storage.nfs: 176 getattr [|nfs]
>> 12:38:06.373316 IP storage.nfs> client.2398535088: reply ok 100
>> getattr [|nfs]
>> 12:38:06.373339 IP client.2415312304> storage.nfs: 172 getattr [|nfs]
>>
>>
>> But when I mount from the same client, the NFS4 share from my server
>> gets stuck on the "getattr" call
>> ...
>> 12:36:37.051840 IP client.926> server.nfs: . ack 29 win 140
>> <nop,nop,timestamp 4063375488 434039929>
>> 12:36:37.051903 IP client.1734362088> server.nfs: 148 getattr [|nfs]
>> 12:36:37.090274 IP server.nfs> client.926: . ack 192 win 4742
>> <nop,nop,timestamp 434039939 4063375488>
>> ---silence---
> Something like wireshark would give a few more details.
I have wiresharked it and I don't see any differences between the
"getattr" packets in both cases. Do you want me to paste them in a
specific format?
>> So I suppose that the "RPC: TCP recvfrom got EAGAIN" on the messages
>> log corresponds to that "getattr[|nfs]" call.
>>
>> I have been searching around and I have found several threads about
>> either the "malloc failure" message or the "EAGAIN" message. But I
>> haven't found anything concerning them both at the same time. I have
>> also checked for this kind of problems in NFS4 and found nothing
>> useful.
>>
>> May this be some kind of (already solved) bug in my nfs
>> implementation? I'm running a pretty old version (SuSE LES 10.2,
>> nfs-utils 1.0.7-36.2)
> What kernel version does that correspond to?
>
> My first impulse would be to make sure rpc.idmapd is running. (If not,
> the server would do an upcall to idmapd and never get a response, hence
> fail to respond to a client getattr.)
>
> --b.
My server kernel is 2.6.16.60-0.39.3
# uname -a
Linux bhsrv2 2.6.16.60-0.39.3-smp #1 SMP Mon May 11 11:46:34 UTC 2009
x86_64 x86_64 x86_64 GNU/Linux
I'm positive idmapd is running on both the server and the client:
server
# ps -ef | grep idmap
root 11254 1 0 Jan12 ? 00:00:00 /usr/sbin/rpc.idmapd
client
# ps -ef | grep idmap
root 3262 1 0 2010 ? 00:00:02 rpc.idmapd
but it doesn't appear in rpcinfo -p, should it?
server
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100024 1 udp 2526 status
100021 1 udp 2526 nlockmgr
100021 3 udp 2526 nlockmgr
100021 4 udp 2526 nlockmgr
100024 1 tcp 5726 status
100021 1 tcp 5726 nlockmgr
100021 3 tcp 5726 nlockmgr
100021 4 tcp 5726 nlockmgr
100005 1 udp 980 mountd
100005 1 tcp 980 mountd
100005 2 udp 980 mountd
100005 2 tcp 980 mountd
100005 3 udp 980 mountd
100005 3 tcp 980 mountd
1073741824 1 tcp 13587
and client:
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 850 status
100024 1 tcp 853 status
100021 1 tcp 42074 nlockmgr
100021 3 tcp 42074 nlockmgr
100021 4 tcp 42074 nlockmgr
100021 1 udp 45871 nlockmgr
100021 3 udp 45871 nlockmgr
100021 4 udp 45871 nlockmgr
1073741824 1 tcp 57121
Thanks for any insight,
Txema
* Re: NFSv4 memory allocation bug?
2011-01-13 15:48 ` Txema Heredia Genestar
@ 2011-01-13 16:19 ` J. Bruce Fields
2011-01-13 17:25 ` Txema Heredia Genestar
0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2011-01-13 16:19 UTC (permalink / raw)
To: Txema Heredia Genestar; +Cc: linux-nfs
On Thu, Jan 13, 2011 at 04:48:26PM +0100, Txema Heredia Genestar wrote:
> Hi Bruce, thanks for your answer
>
>
> On 12/01/11 19:35, J. Bruce Fields wrote:
> >On Wed, Jan 12, 2011 at 06:14:53PM +0100, Txema Heredia Genestar wrote:
> >>Additionally, I have checked tcpdump and found, when mounting an
> >>NFS4 drive from a working storage-system:
> >>...
> >>12:38:06.372303 IP client.907> storage.nfs: . ack 29 win 46
> >><nop,nop,timestamp 4063464822 174132214>
> >>12:38:06.372429 IP client.2364980656> storage.nfs: 148 getattr [|nfs]
> >>12:38:06.372792 IP storage.nfs> client.2364980656: reply ok 248
> >>getattr [|nfs]
> >>12:38:06.372958 IP client.2381757872> storage.nfs: 172 getattr [|nfs]
> >>12:38:06.373132 IP storage.nfs> client.2381757872: reply ok 88
> >>getattr [|nfs]
> >>12:38:06.373157 IP client.2398535088> storage.nfs: 176 getattr [|nfs]
> >>12:38:06.373316 IP storage.nfs> client.2398535088: reply ok 100
> >>getattr [|nfs]
> >>12:38:06.373339 IP client.2415312304> storage.nfs: 172 getattr [|nfs]
> >>
> >>
> >>But when I mount from the same client, the NFS4 share from my server
> >>gets stuck on the "getattr" call
> >>...
> >>12:36:37.051840 IP client.926> server.nfs: . ack 29 win 140
> >><nop,nop,timestamp 4063375488 434039929>
> >>12:36:37.051903 IP client.1734362088> server.nfs: 148 getattr [|nfs]
> >>12:36:37.090274 IP server.nfs> client.926: . ack 192 win 4742
> >><nop,nop,timestamp 434039939 4063375488>
> >>---silence---
> >Something like wireshark would give a few more details.
>
> I have wiresharked it and I don't see any differences between the
> "getattr" packets in both cases. Do you want me to paste them in a
> specific format?
I'm curious which attributes were requested. In particular, is the
unreplied-to getattr the *first* time that the client requests the owner
or owner_group attributes?
> My server kernel is 2.6.16.60-0.39.3
> # uname -a
> Linux bhsrv2 2.6.16.60-0.39.3-smp #1 SMP Mon May 11 11:46:34 UTC
> 2009 x86_64 x86_64 x86_64 GNU/Linux
>
>
> I'm positive idmapd is running on both the server and the client:
>
> server
> # ps -ef | grep idmap
> root 11254 1 0 Jan12 ? 00:00:00 /usr/sbin/rpc.idmapd
OK.
>
> client
> # ps -ef | grep idmap
> root 3262 1 0 2010 ? 00:00:02 rpc.idmapd
>
> but it doesn't appear in rpcinfo -p, should it?
No, it just handles requests from the kernel, not from the network.
Might also be worth looking at the nfs4.idtoname cache contents after
the hang:
rpcdebug -m rpc -s cache
cat /proc/net/rpc/nfs4.idtoname/content
I seem to recall c9b6cbe56d3ac471e6cd72a59ec9e324b3417016 or
0a725fc4d3bfc4734164863d6c50208b109ca5c7 being possible causes of hangs.
--b.
* Re: NFSv4 memory allocation bug?
2011-01-13 16:19 ` J. Bruce Fields
@ 2011-01-13 17:25 ` Txema Heredia Genestar
2011-01-13 18:05 ` J. Bruce Fields
0 siblings, 1 reply; 10+ messages in thread
From: Txema Heredia Genestar @ 2011-01-13 17:25 UTC (permalink / raw)
To: J. Bruce Fields; +Cc: linux-nfs
On 13/01/11 17:19, J. Bruce Fields wrote:
> On Thu, Jan 13, 2011 at 04:48:26PM +0100, Txema Heredia Genestar wrote:
>> Hi Bruce, thanks for your answer
>>
>>
>> On 12/01/11 19:35, J. Bruce Fields wrote:
>>> On Wed, Jan 12, 2011 at 06:14:53PM +0100, Txema Heredia Genestar wrote:
>>>> Additionally, I have checked tcpdump and found, when mounting an
>>>> NFS4 drive from a working storage-system:
>>>> ...
>>>> 12:38:06.372303 IP client.907> storage.nfs: . ack 29 win 46
>>>> <nop,nop,timestamp 4063464822 174132214>
>>>> 12:38:06.372429 IP client.2364980656> storage.nfs: 148 getattr [|nfs]
>>>> 12:38:06.372792 IP storage.nfs> client.2364980656: reply ok 248
>>>> getattr [|nfs]
>>>> 12:38:06.372958 IP client.2381757872> storage.nfs: 172 getattr [|nfs]
>>>> 12:38:06.373132 IP storage.nfs> client.2381757872: reply ok 88
>>>> getattr [|nfs]
>>>> 12:38:06.373157 IP client.2398535088> storage.nfs: 176 getattr [|nfs]
>>>> 12:38:06.373316 IP storage.nfs> client.2398535088: reply ok 100
>>>> getattr [|nfs]
>>>> 12:38:06.373339 IP client.2415312304> storage.nfs: 172 getattr [|nfs]
>>>>
>>>>
>>>> But when I mount from the same client, the NFS4 share from my server
>>>> gets stuck on the "getattr" call
>>>> ...
>>>> 12:36:37.051840 IP client.926> server.nfs: . ack 29 win 140
>>>> <nop,nop,timestamp 4063375488 434039929>
>>>> 12:36:37.051903 IP client.1734362088> server.nfs: 148 getattr [|nfs]
>>>> 12:36:37.090274 IP server.nfs> client.926: . ack 192 win 4742
>>>> <nop,nop,timestamp 434039939 4063375488>
>>>> ---silence---
>>> Something like wireshark would give a few more details.
>> I have wiresharked it and I don't see any differences between the
>> "getattr" packets in both cases. Do you want me to paste them in a
>> specific format?
> I'm curious which attributes were requested. In particular, is the
> unreplied-to getattr the *first* time that the client requests the owner
> or owner_group attributes?
>
Yes, the "unreplied-to" getattr call is the very first (and only) time
those attributes are requested:
Network File System
[Program Version: 4]
[V4 Procedure: COMPOUND (1)]
Tag: <EMPTY>
length: 0
contents: <EMPTY>
minorversion: 0
Operations (count: 3)
Opcode: PUTROOTFH (24)
Opcode: GETFH (10)
Opcode: GETATTR (9)
GETATTR4args
attr_request
bitmap[0] = 0x0010011a
[5 attributes requested]
mand_attr: FATTR4_TYPE (1)
mand_attr: FATTR4_CHANGE (3)
mand_attr: FATTR4_SIZE (4)
mand_attr: FATTR4_FSID (8)
recc_attr: FATTR4_FILEID (20)
bitmap[1] = 0x0030a23a
[9 attributes requested]
recc_attr: FATTR4_MODE (33)
recc_attr: FATTR4_NUMLINKS (35)
*recc_attr: FATTR4_OWNER (36)*
*recc_attr: FATTR4_OWNER_GROUP (37)*
recc_attr: FATTR4_RAWDEV (41)
recc_attr: FATTR4_SPACE_USED (45)
recc_attr: FATTR4_TIME_ACCESS (47)
recc_attr: FATTR4_TIME_METADATA (52)
recc_attr: FATTR4_TIME_MODIFY (53)
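The bitmap words above encode attribute numbers in 32-bit chunks: bit b of word w names attribute 32*w + b (RFC 3530 numbering), which is how wireshark derives the attribute lists shown. A quick sketch (helper name is mine), using only shell arithmetic, that reproduces the decoding:

```shell
#!/bin/sh
# Decode one NFSv4 attribute-bitmap word: bit b of word w names
# attribute number 32*w + b.
decode_bitmap_word() {
    word_idx="$1"; word="$2"
    bit=0
    while [ "$bit" -lt 32 ]; do
        if [ $(( (word >> bit) & 1 )) -ne 0 ]; then
            printf 'attr %d\n' $(( 32 * word_idx + bit ))
        fi
        bit=$((bit + 1))
    done
}

# decode_bitmap_word 0 0x0010011a  -> attrs 1 3 4 8 20
#                                     (type, change, size, fsid, fileid)
# decode_bitmap_word 1 0x0030a23a  -> attrs 33 35 36 37 41 45 47 52 53
#                                     (mode .. time_modify, as listed above)
```

Attributes 36 and 37 (owner and owner_group) in word 1 are the ones that trigger the idmap upcall.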
>> My server kernel is 2.6.16.60-0.39.3
>> # uname -a
>> Linux bhsrv2 2.6.16.60-0.39.3-smp #1 SMP Mon May 11 11:46:34 UTC
>> 2009 x86_64 x86_64 x86_64 GNU/Linux
>>
>>
>> I'm positive idmapd is running on both the server and the client:
>>
>> server
>> # ps -ef | grep idmap
>> root 11254 1 0 Jan12 ? 00:00:00 /usr/sbin/rpc.idmapd
> OK.
>
>> client
>> # ps -ef | grep idmap
>> root 3262 1 0 2010 ? 00:00:02 rpc.idmapd
>>
>> but it doesn't appear in rpcinfo -p, should it?
> No, it just handles requests from the kernel, not from the network.
>
> Might also be worth looking at the nfs4.idtoname cache contents after
> the hang:
>
> rpcdebug -m rpc -s cache
> cat /proc/net/rpc/nfs4.idtoname/content
>
> I seem to recall c9b6cbe56d3ac471e6cd72a59ec9e324b3417016 or
> 0a725fc4d3bfc4734164863d6c50208b109ca5c7 being possible causes of hangs.
>
> --b.
Unfortunately, rpcdebug is not present on this server, so my
/proc/net/rpc/nfs4.idtoname/content file is empty.
Would this command be of any use?
"echo "65535" > /proc/sys/sunrpc/rpc_debug"
* Re: NFSv4 memory allocation bug?
2011-01-13 17:25 ` Txema Heredia Genestar
@ 2011-01-13 18:05 ` J. Bruce Fields
2011-01-14 12:11 ` Txema Heredia Genestar
0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2011-01-13 18:05 UTC (permalink / raw)
To: Txema Heredia Genestar; +Cc: linux-nfs
On Thu, Jan 13, 2011 at 06:25:23PM +0100, Txema Heredia Genestar wrote:
> On 13/01/11 17:19, J. Bruce Fields wrote:
> >On Thu, Jan 13, 2011 at 04:48:26PM +0100, Txema Heredia Genestar wrote:
> >> Hi Bruce, thanks for your answer
> >>
> >>
> >>On 12/01/11 19:35, J. Bruce Fields wrote:
> >>>On Wed, Jan 12, 2011 at 06:14:53PM +0100, Txema Heredia Genestar wrote:
> >>>>Additionally, I have checked tcpdump and found, when mounting an
> >>>>NFS4 drive from a working storage-system:
> >>>>...
> >>>>12:38:06.372303 IP client.907> storage.nfs: . ack 29 win 46
> >>>><nop,nop,timestamp 4063464822 174132214>
> >>>>12:38:06.372429 IP client.2364980656> storage.nfs: 148 getattr [|nfs]
> >>>>12:38:06.372792 IP storage.nfs> client.2364980656: reply ok 248
> >>>>getattr [|nfs]
> >>>>12:38:06.372958 IP client.2381757872> storage.nfs: 172 getattr [|nfs]
> >>>>12:38:06.373132 IP storage.nfs> client.2381757872: reply ok 88
> >>>>getattr [|nfs]
> >>>>12:38:06.373157 IP client.2398535088> storage.nfs: 176 getattr [|nfs]
> >>>>12:38:06.373316 IP storage.nfs> client.2398535088: reply ok 100
> >>>>getattr [|nfs]
> >>>>12:38:06.373339 IP client.2415312304> storage.nfs: 172 getattr [|nfs]
> >>>>
> >>>>
> >>>>But when I mount from the same client, the NFS4 share from my server
> >>>>gets stuck on the "getattr" call
> >>>>...
> >>>>12:36:37.051840 IP client.926> server.nfs: . ack 29 win 140
> >>>><nop,nop,timestamp 4063375488 434039929>
> >>>>12:36:37.051903 IP client.1734362088> server.nfs: 148 getattr [|nfs]
> >>>>12:36:37.090274 IP server.nfs> client.926: . ack 192 win 4742
> >>>><nop,nop,timestamp 434039939 4063375488>
> >>>>---silence---
> >>>Something like wireshark would give a few more details.
> >>I have wiresharked it and I don't see any differences between the
> >>"getattr" packets in both cases. Do you want me to paste them in a
> >>specific format?
> >I'm curious which attributes were requested. In particular, is the
> >unreplied-to getattr the *first* time that the client requests the owner
> >or owner_group attributes?
> >
>
> Yes, the "unreplied-to" getattr call is the very first (and only)
> time those attributes are requested:
Yeah, so almost certainly some idmapping problem.
> >Might also be worth looking at the nfs4.idtoname cache contents after
> >the hang:
> >
> > rpcdebug -m rpc -s cache
> > cat /proc/net/rpc/nfs4.idtoname/content
> >
> >I seem to recall c9b6cbe56d3ac471e6cd72a59ec9e324b3417016 or
> >0a725fc4d3bfc4734164863d6c50208b109ca5c7 being possible causes of hangs.
> >
> >--b.
>
> Unfortunately, rpcdebug is not present in this server. So my
> /proc/net/rpc/nfs4.idtoname/content file is empty.
>
> May this command be of any use?
> "echo "65535" > /proc/sys/sunrpc/rpc_debug"
Yeah, doing that may get you a little more information.
Note that that kernel is pretty out of date with respect to mainline, so
your best bet is your distro's support, or seeing if you can reproduce
the problem on a more recent kernel.
--b.
* Re: NFSv4 memory allocation bug?
2011-01-13 18:05 ` J. Bruce Fields
@ 2011-01-14 12:11 ` Txema Heredia Genestar
2011-02-08 18:07 ` Txema Heredia
0 siblings, 1 reply; 10+ messages in thread
From: Txema Heredia Genestar @ 2011-01-14 12:11 UTC (permalink / raw)
To: J. Bruce Fields; +Cc: linux-nfs
On 13/01/11 19:05, J. Bruce Fields wrote:
> On Thu, Jan 13, 2011 at 06:25:23PM +0100, Txema Heredia Genestar wrote:
>> On 13/01/11 17:19, J. Bruce Fields wrote:
>>> On Thu, Jan 13, 2011 at 04:48:26PM +0100, Txema Heredia Genestar wrote:
>>>> Hi Bruce, thanks for your answer
>>>>
>>>>
>>>> On 12/01/11 19:35, J. Bruce Fields wrote:
>>>>> On Wed, Jan 12, 2011 at 06:14:53PM +0100, Txema Heredia Genestar wrote:
>>>>>> Additionally, I have checked tcpdump and found, when mounting an
>>>>>> NFS4 drive from a working storage-system:
>>>>>> ...
>>>>>> 12:38:06.372303 IP client.907> storage.nfs: . ack 29 win 46
>>>>>> <nop,nop,timestamp 4063464822 174132214>
>>>>>> 12:38:06.372429 IP client.2364980656> storage.nfs: 148 getattr [|nfs]
>>>>>> 12:38:06.372792 IP storage.nfs> client.2364980656: reply ok 248
>>>>>> getattr [|nfs]
>>>>>> 12:38:06.372958 IP client.2381757872> storage.nfs: 172 getattr [|nfs]
>>>>>> 12:38:06.373132 IP storage.nfs> client.2381757872: reply ok 88
>>>>>> getattr [|nfs]
>>>>>> 12:38:06.373157 IP client.2398535088> storage.nfs: 176 getattr [|nfs]
>>>>>> 12:38:06.373316 IP storage.nfs> client.2398535088: reply ok 100
>>>>>> getattr [|nfs]
>>>>>> 12:38:06.373339 IP client.2415312304> storage.nfs: 172 getattr [|nfs]
>>>>>>
>>>>>>
>>>>>> But when I mount from the same client, the NFS4 share from my server
>>>>>> gets stuck on the "getattr" call
>>>>>> ...
>>>>>> 12:36:37.051840 IP client.926> server.nfs: . ack 29 win 140
>>>>>> <nop,nop,timestamp 4063375488 434039929>
>>>>>> 12:36:37.051903 IP client.1734362088> server.nfs: 148 getattr [|nfs]
>>>>>> 12:36:37.090274 IP server.nfs> client.926: . ack 192 win 4742
>>>>>> <nop,nop,timestamp 434039939 4063375488>
>>>>>> ---silence---
>>>>> Something like wireshark would give a few more details.
>>>> I have wiresharked it and I don't see any differences between the
>>>> "getattr" packets in both cases. Do you want me to paste them in a
>>>> specific format?
>>> I'm curious which attributes were requested. In particular, is the
>>> unreplied-to getattr the *first* time that the client requests the owner
>>> or owner_group attributes?
>>>
>> Yes, the "unreplied-to" getattr call is the very first (and only)
>> time those attributes are requested:
> Yeah, so almost certainly some idmapping problem.
My current idmap library is 0.12-16.20, built on 16/1/2009.
>>> Might also be worth looking at the nfs4.idtoname cache contents after
>>> the hang:
>>>
>>> rpcdebug -m rpc -s cache
>>> cat /proc/net/rpc/nfs4.idtoname/content
>>>
>>> I seem to recall c9b6cbe56d3ac471e6cd72a59ec9e324b3417016 or
>>> 0a725fc4d3bfc4734164863d6c50208b109ca5c7 being possible causes of hangs.
>>>
>>> --b.
>> Unfortunately, rpcdebug is not present in this server. So my
>> /proc/net/rpc/nfs4.idtoname/content file is empty.
>>
>> May this command be of any use?
>> "echo "65535"> /proc/sys/sunrpc/rpc_debug"
> Yeah doing that may get you a little more information.
>
> Note that kernel's pretty out of date with respect to mainstream so your
> best bet is your distro support or seeing if you can reproduce the
> problem on a more recent kernel.
>
> --b.
This is the /var/log/messages log after enabling debugging for nfs, nfsd
and rpc:
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 TCP (listen)
state change 10
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 served by
daemon ffff8102a5ede400
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400, socket
ffff8101fd09c080, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff8101fd09c080 data 0
conn 1 close 0
Jan 14 12:01:49 server kernel: svc: tcp_accept ffff8101fd09c080 sock
ffff81028b747380
Jan 14 12:01:49 server kernel: svc: tcp_accept ffff810282a67780 allocated
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: nfsd: connect from unprivileged port:
172.22.201.203:40892
Jan 14 12:01:49 server kernel: nfsd: connect from 172.22.201.203:9fbc
Jan 14 12:01:49 server kernel: svc: svc_setup_socket ffff810282a67780
Jan 14 12:01:49 server kernel: setting up TCP socket for reading
Jan 14 12:01:49 server kernel: svc: svc_setup_socket created
ffff810167ed3d80 (inet ffff810125e547c0)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff810236f4f000
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 served by
daemon ffff81029edad000
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 TCP data
ready (svsk ffff810167ed3d80)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: got len=0
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: server ffff810236f4f000, socket
ffff810167ed3d80, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 1
conn 0 close 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810167ed3e2c, 0) = 4
Jan 14 12:01:49 server kernel: svc: TCP record, 40 bytes
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810149105028, 4056) = 40
Jan 14 12:01:49 server kernel: svc: TCP complete record (40 bytes)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff8102a5ede400
Jan 14 12:01:49 server kernel: svc: got len=40
Jan 14 12:01:49 server kernel: svc: svc_authenticate (0)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: calling dispatcher
Jan 14 12:01:49 server kernel: nfsd_dispatch: vers 4 proc 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
sendto([ffff8101dfe10000 28... ], 28) = 28 (addr cbc916ac)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: server ffff810236f4f000 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: server ffff81029edad000, socket
ffff8101fd09c080, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff8101fd09c080 data 0
conn 1 close 0
Jan 14 12:01:49 server kernel: svc: tcp_accept ffff8101fd09c080 sock
ffff81028b747380
Jan 14 12:01:49 server kernel: svc: tcp_accept ffff810282a67d00 allocated
Jan 14 12:01:49 server kernel: svc: got len=0
Jan 14 12:01:49 server kernel: svc: server ffff81029edad000 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400, socket
ffff810167ed3d80, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 1
conn 0 close 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810167ed3e28, 4) = -11
Jan 14 12:01:49 server kernel: RPC: TCP recvfrom got EAGAIN
Jan 14 12:01:49 server kernel: svc: got len=-11
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 TCP
(connected) state change 8 (svsk ffff810167ed3d80)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff8102a5ede400
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 TCP data
ready (svsk ffff810167ed3d80)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400, socket
ffff810167ed3d80, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 1
conn 0 close 1
Jan 14 12:01:49 server kernel: svc: svc_delete_socket(ffff810167ed3d80)
Jan 14 12:01:49 server kernel: svc: server socket destroy delayed
Jan 14 12:01:49 server kernel: svc: got len=0
Jan 14 12:01:49 server kernel: svc: releasing dead socket
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 TCP (listen)
state change 10
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 served by
daemon ffff8102a5ede400
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400, socket
ffff8101fd09c080, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff8101fd09c080 data 0
conn 1 close 0
Jan 14 12:01:49 server kernel: svc: tcp_accept ffff8101fd09c080 sock
ffff81028b747380
Jan 14 12:01:49 server kernel: svc: tcp_accept ffff810282a67780 allocated
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: nfsd: connect from 172.22.201.203:03d0
Jan 14 12:01:49 server kernel: svc: svc_setup_socket ffff810282a67780
Jan 14 12:01:49 server kernel: setting up TCP socket for reading
Jan 14 12:01:49 server kernel: svc: svc_setup_socket created
ffff810167ed3d80 (inet ffff810125e547c0)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 TCP data
ready (svsk ffff810167ed3d80)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff81029edad000
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 served by
daemon ffff810236f4f000
Jan 14 12:01:49 server kernel: svc: got len=0
Jan 14 12:01:49 server kernel: svc: socket ffff81012024c7c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: server ffff81029edad000, socket
ffff810167ed3d80, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 1
conn 0 close 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810167ed3e2c, 0) = 4
Jan 14 12:01:49 server kernel: svc: TCP record, 40 bytes
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff8102451ec028, 4056) = 40
Jan 14 12:01:49 server kernel: svc: TCP complete record (40 bytes)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff8102a5ede400
Jan 14 12:01:49 server kernel: svc: got len=40
Jan 14 12:01:49 server kernel: svc: svc_authenticate (0)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: calling dispatcher
Jan 14 12:01:49 server kernel: nfsd_dispatch: vers 4 proc 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
sendto([ffff81017bafe000 28... ], 28) = 28 (addr cbc916ac)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: server ffff81029edad000 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: server ffff810236f4f000, socket
ffff8101fd09c080, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff8101fd09c080 data 0
conn 1 close 0
Jan 14 12:01:49 server kernel: svc: tcp_accept ffff8101fd09c080 sock
ffff81028b747380
Jan 14 12:01:49 server kernel: svc: tcp_accept ffff810282a67d00 allocated
Jan 14 12:01:49 server kernel: svc: got len=0
Jan 14 12:01:49 server kernel: svc: server ffff810236f4f000 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400, socket
ffff810167ed3d80, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 1
conn 0 close 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810167ed3e28, 4) = -11
Jan 14 12:01:49 server kernel: RPC: TCP recvfrom got EAGAIN
Jan 14 12:01:49 server kernel: svc: got len=-11
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 TCP data
ready (svsk ffff810167ed3d80)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff8102a5ede400
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400, socket
ffff810167ed3d80, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 1
conn 0 close 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810167ed3e2c, 0) = 4
Jan 14 12:01:49 server kernel: svc: TCP record, 144 bytes
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810273143090, 3952) = 144
Jan 14 12:01:49 server kernel: svc: TCP complete record (144 bytes)
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff810236f4f000
Jan 14 12:01:49 server kernel: svc: got len=144
Jan 14 12:01:49 server kernel: svc: svc_authenticate (1)
Jan 14 12:01:49 server kernel: Want update, refage=120, age=0
Jan 14 12:01:49 server kernel: svc: svc_process dropit
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80 dropped request
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: server ffff810236f4f000, socket
ffff810167ed3d80, inuse=2
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 1
conn 0 close 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810167ed3e28, 4) = -11
Jan 14 12:01:49 server kernel: RPC: TCP recvfrom got EAGAIN
Jan 14 12:01:49 server kernel: svc: got len=-11
Jan 14 12:01:49 server kernel: svc: server ffff810236f4f000 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: revisit queued
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff810236f4f000
Jan 14 12:01:49 server kernel: svc: server ffff810236f4f000, socket
ffff810167ed3d80, inuse=1
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 0
conn 0 close 0
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff8102a5ede400
Jan 14 12:01:49 server kernel: svc: got len=144
Jan 14 12:01:49 server kernel: svc: svc_authenticate (1)
Jan 14 12:01:49 server kernel: svc: calling dispatcher
Jan 14 12:01:49 server kernel: nfsd_dispatch: vers 4 proc 1
Jan 14 12:01:49 server kernel: nfsv4 compound op #1: 24
Jan 14 12:01:49 server kernel: Want update, refage=120, age=0
Jan 14 12:01:49 server kernel: nfsd: Dropping request due to malloc failure!
**Jan 14 12:01:49 server kernel: svc: svc_process dropit**
**Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80 dropped
request**
Jan 14 12:01:49 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:49 server kernel: svc: server ffff810236f4f000 waiting for
data (to = 900000)
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400, socket
ffff810167ed3d80, inuse=2
Jan 14 12:01:49 server kernel: svc: tcp_recv ffff810167ed3d80 data 0
conn 0 close 0
Jan 14 12:01:49 server kernel: svc: socket ffff810167ed3d80
recvfrom(ffff810167ed3e28, 4) = -11
Jan 14 12:01:49 server kernel: RPC: TCP recvfrom got EAGAIN
Jan 14 12:01:49 server kernel: svc: got len=-11
Jan 14 12:01:49 server kernel: svc: server ffff8102a5ede400 waiting for
data (to = 900000)
---After I Ctrl+C 'd the mount command in the client:---
Jan 14 12:01:57 server kernel: svc: socket ffff810125e547c0 TCP
(connected) state change 8 (svsk ffff810167ed3d80)
Jan 14 12:01:57 server kernel: svc: socket ffff810125e547c0 served by
daemon ffff8102a5ede400
Jan 14 12:01:57 server kernel: svc: socket ffff810125e547c0 TCP data
ready (svsk ffff810167ed3d80)
Jan 14 12:01:57 server kernel: svc: socket ffff810125e547c0 busy, not
enqueued
Jan 14 12:01:57 server kernel: svc: server ffff8102a5ede400, socket
ffff810167ed3d80, inuse=2
Jan 14 12:01:57 server kernel: svc: tcp_recv ffff810167ed3d80 data 1
conn 0 close 1
Jan 14 12:01:57 server kernel: svc: svc_delete_socket(ffff810167ed3d80)
Jan 14 12:01:57 server kernel: svc: server socket destroy delayed
Jan 14 12:01:57 server kernel: svc: got len=0
Jan 14 12:01:57 server kernel: svc: server ffff8102a5ede400 waiting for
data (to = 900000)
I also tried to run idmapd in verbose mode to see what it says:
# rpc.idmapd -vvv -f
rpc.idmapd: Opened /proc/net/rpc/nfs4.nametoid/channel
rpc.idmapd: Opened /proc/net/rpc/nfs4.idtoname/channel
but it doesn't log anything to messages, and
/proc/net/rpc/nfs4.idtoname/content is still empty:
# cat /proc/net/rpc/nfs4.idtoname/content
#domain type id [name]
Might this be a hint that my nfsd and idmapd are not communicating (and
so the GETATTR is never answered)?
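That hypothesis can be checked with a small script. This is only a sketch built around the content file shown above; the path is the one from this thread, and the heuristic (any non-comment line means an upcall populated the cache) is an assumption:

```shell
# Count non-comment entries in the idtoname cache; a header-only file
# suggests no upcall from nfsd ever reached rpc.idmapd.
content=${1:-/proc/net/rpc/nfs4.idtoname/content}

if [ -r "$content" ]; then
    entries=$(grep -vc '^#' "$content")
else
    entries=0
fi

if [ "$entries" -gt 0 ]; then
    echo "idtoname cache has $entries entries"
else
    echo "idtoname cache is empty: nfsd and idmapd may not be talking"
fi
```

On the affected server you would run it with no arguments after an NFSv4 mount attempt; if the upcall path works, the attempt should have left entries behind.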
As for the kernel version, unfortunately I am not allowed to upgrade it
right now, as this is a production host (it is in a High Availability
cluster, but I can run the NFS tests because the other node is serving
the "main" NFS right now). I expect there will be a maintenance stop
some time this year, though.
Txema
PS: Damn! I have just realized you are the author of idmap, thanks man!
^ permalink raw reply [flat|nested] 10+ messages in thread
* RE: NFSv4 memory allocation bug?
2011-01-14 12:11 ` Txema Heredia Genestar
@ 2011-02-08 18:07 ` Txema Heredia
2011-02-09 0:09 ` 'J. Bruce Fields'
0 siblings, 1 reply; 10+ messages in thread
From: Txema Heredia @ 2011-02-08 18:07 UTC (permalink / raw)
To: 'Txema Heredia Genestar', 'J. Bruce Fields'; +Cc: linux-nfs
Hi all,
After a month or so struggling with this, and with some other NFSD
problems in my "old" kernel (2.6.16.60-0.39.3-smp) related to MTUs
larger than 1500 stalling the server, I think I have found something
related to my inability to serve v4 filesystems:
In /usr/src/linux/include/linux/nfsd/const.h the following is defined:

	/*
	 * Maximum protocol version supported by knfsd
	 */
	#define NFSSVC_MAXVERS 3

And in /usr/src/linux/fs/nfsd/nfsctl.c we can find this:

	err = -EINVAL;
	if (data->gd_version < 2 || data->gd_version > NFSSVC_MAXVERS)
		goto out;
	...
out:
	return err;
And I found exactly the same in 2.6.34.7.
Is this check "real", or some old leftover that is no longer used and I
shouldn't worry about?
Thanks in advance,
Txema
* Re: NFSv4 memory allocation bug?
2011-02-08 18:07 ` Txema Heredia
@ 2011-02-09 0:09 ` 'J. Bruce Fields'
2011-02-09 13:57 ` Txema Heredia Genestar
0 siblings, 1 reply; 10+ messages in thread
From: 'J. Bruce Fields' @ 2011-02-09 0:09 UTC (permalink / raw)
To: Txema Heredia; +Cc: linux-nfs
On Tue, Feb 08, 2011 at 07:07:08PM +0100, Txema Heredia wrote:
> Hi all,
>
> After a month or so struggling with this, and some other problems with NFSD
> in my "old" kernel (2.6.16.60-0.39.3-smp) related with MTUs larger than 1500
> stalling the server, I think I have found something related with my
> inability to serve v4 filesystems:
>
> In /usr/src/linux/include/linux/nfsd/const.h there is this defined:
> /*
> * Maximum protocol version supported by knfsd
> */
> #define NFSSVC_MAXVERS 3
>
> And in /usr/src/linux/fs/nfsd/nfsctl.c we can find this:
> err = -EINVAL;
> if (data->gd_version < 2 || data->gd_version > NFSSVC_MAXVERS)
> goto out;
> ...
> out:
> return err;
>
>
> And I found exactly the same in 2.6.34.7
>
> Is this "real" or some old thing that is no longer used and I shouldn't
> worry about?
Probably irrelevant.
Could you tell us exactly what you've tried to do and why it's failing?
--b.
* Re: NFSv4 memory allocation bug?
2011-02-09 0:09 ` 'J. Bruce Fields'
@ 2011-02-09 13:57 ` Txema Heredia Genestar
0 siblings, 0 replies; 10+ messages in thread
From: Txema Heredia Genestar @ 2011-02-09 13:57 UTC (permalink / raw)
To: 'J. Bruce Fields'; +Cc: linux-nfs
El 09/02/11 1:09, 'J. Bruce Fields' escribió:
> On Tue, Feb 08, 2011 at 07:07:08PM +0100, Txema Heredia wrote:
>> Hi all,
>>
>> After a month or so struggling with this, and some other problems
>> with NFSD
>> in my "old" kernel (2.6.16.60-0.39.3-smp) related with MTUs larger
>> than 1500
>> stalling the server, I think I have found something related with my
>> inability to serve v4 filesystems:
>>
>> In /usr/src/linux/include/linux/nfsd/const.h there is this defined:
>> /*
>> * Maximum protocol version supported by knfsd
>> */
>> #define NFSSVC_MAXVERS 3
>>
>> And in /usr/src/linux/fs/nfsd/nfsctl.c we can find this:
>> err = -EINVAL;
>> if (data->gd_version < 2 || data->gd_version > NFSSVC_MAXVERS)
>> goto out;
>> ...
>> out:
>> return err;
>>
>>
>> And I found exactly the same in 2.6.34.7
>>
>> Is this "real" or some old thing that is no longer used and I shouldn't
>> worry about?
>
> Probably irrelevant.
>
> Could you tell us exactly what you've tried to do and why it's failing?
>
> --b.
I am still having the same problems with NFSv4 as described here:
http://thread.gmane.org/gmane.linux.nfs/38156
We reached the conclusion that my kernel was way too old and that I
would need to update it to get newer versions of pretty much everything
involved in NFS. But as the server is in production, I wasn't (and
still am not) able to update it for a while.
More recently I have found some other issues with NFS, this time with v3:
If the MTU on both client and server is set to 9000, the server runs 16
or more nfsd threads (on an 8-core system with 10 GB RAM and 10 GB
swap), and 24 clients start sending write requests, the server crashes,
usually (but not always) leaving a message like the following:
Jan 31 12:40:34 server kernel: Unable to handle kernel paging request at
ffffa63e7c000000 RIP:
Jan 31 12:40:34 server kernel: <ffffffff8016efdd>{__handle_mm_fault+201}
Jan 31 12:40:34 server kernel: PGD 0
Jan 31 12:40:34 server kernel: Oops: 0000 [1] SMP
Jan 31 12:40:34 server kernel: last sysfs file:
/devices/pci0000:00/0000:00:07.0/0000:05:00.0/0000:06:00.0/irq
Jan 31 12:40:34 server kernel: CPU 3
Jan 31 12:40:34 server kernel: Modules linked in: nfsd exportfs lockd
nfs_acl xt_pkttype ipt_TCPMSS ipt_LOG xt_limit autofs4 sunrpc dock
button battery ac softdog ip6t_REJECT xt_tcpudp ipt_REJECT xt_state ipta
ble_mangle iptable_nat ip_nat iptable_filter ip6table_mangle
ip_conntrack nfnetlink ip_tables ip6table_filter ip6_tables x_tables
ipv6 apparmor ext3 jbd loop usbhid uhci_hcd ehci_hcd mptctl shpchp bnx2
usbcore
pci_hotplug hw_random reiserfs dm_alua dm_hp_sw dm_rdac dm_emc
dm_round_robin dm_multipath dm_snapshot edd dm_mod fan thermal processor
qla2xxx sg firmware_class scsi_transport_fc mptsas mptscsih mptbase scsi
_transport_sas ata_piix libata sd_mod scsi_mod
Jan 31 12:40:34 server kernel: Pid: 11609, comm: top Not tainted
2.6.16.60-0.39.3-smp #1
Jan 31 12:40:34 server kernel: RIP: 0010:[<ffffffff8016efdd>]
<ffffffff8016efdd>{__handle_mm_fault+201}
Jan 31 12:40:34 server kernel: RSP: 0018:ffff81027b427cd8 EFLAGS: 00010286
Jan 31 12:40:34 server kernel: RAX: 0000000000000000 RBX:
ffffa63e7c000000 RCX: ffff8102a0fb4f00
Jan 31 12:40:34 server kernel: RDX: 0000253e7c000000 RSI:
0000000000000001 RDI: 0000000000000090
Jan 31 12:40:34 server kernel: RBP: ffff81029354c140 R08:
0000000000000000 R09: ffff8102a0fb4f00
Jan 31 12:40:34 server kernel: R10: 0000000000000000 R11:
ffff810299bb1f70 R12: ffff810000000000
Jan 31 12:40:34 server kernel: R13: ffff81027b427e68 R14:
00000000005184a0 R15: 00003ffffffff000
Jan 31 12:40:34 server kernel: FS: 00002b8f521f7d70(0000)
GS:ffff8102a5ddc940(0000) knlGS:0000000000000000
Jan 31 12:40:34 server kernel: CS: 0010 DS: 0000 ES: 0000 CR0:
000000008005003b
Jan 31 12:40:34 server kernel: CR2: ffffa63e7c000000 CR3:
000000029128b000 CR4: 00000000000006e0
Jan 31 12:40:34 server kernel: Process top (pid: 11609, threadinfo
ffff81027b426000, task ffff8102a63d17e0)
Jan 31 12:40:34 server kernel: Stack: 0000000000000286 000000018013cea6
ffff8102a0fb4f00 00000000ffffffff
Jan 31 12:40:34 server kernel: 0000000000000286 ffffffff8013cf1b
ffff8102a53d82f0 00000001000a3051
Jan 31 12:40:34 server kernel: 0000000000000286 ffff81027b427d48
Jan 31 12:40:34 server kernel: Call Trace:
<ffffffff8013cf1b>{try_to_del_timer_sync+84}
Jan 31 12:40:34 server kernel: <ffffffff8013cf30>{del_timer_sync+12}
<ffffffff802ede4c>{do_page_fault+966}
Jan 31 12:40:34 server kernel: <ffffffff80199215>{__pollwait+0}
<ffffffff8010bced>{error_exit+0}
Jan 31 12:40:34 server kernel: <ffffffff801fb293>{copy_user_generic+147}
<ffffffff80199604>{sys_select+297}
Jan 31 12:40:34 server kernel: <ffffffff8010ae16>{system_call+126}
Jan 31 12:40:34 server kernel:
Jan 31 12:40:34 server kernel: Code: 48 83 3b 00 75 18 48 8b 7c 24 10 4c
89 f2 48 89 de e8 d9 e1
Jan 31 12:40:34 server kernel: RIP
<ffffffff8016efdd>{__handle_mm_fault+201} RSP <ffff81027b427cd8>
Jan 31 12:40:34 server kernel: CR2: ffffa63e7c000000
Jan 31 12:40:34 server kernel: mm/memory.c:104: bad pgd
ffff81029128b000(5f88a53e7c000080).
Jan 31 12:40:34 server kernel: ----------- [cut here ] --------- [please
bite here ] ---------
Jan 31 12:40:34 server kernel: Kernel BUG at mm/mmap.c:1994
Jan 31 12:40:34 server kernel: invalid opcode: 0000 [2] SMP
Jan 31 12:40:34 server kernel: last sysfs file:
/devices/pci0000:00/0000:00:07.0/0000:05:00.0/0000:06:00.0/irq
Jan 31 12:40:34 server kernel: CPU 3
Jan 31 12:40:34 server kernel: Modules linked in: nfsd exportfs lockd
nfs_acl xt_pkttype ipt_TCPMSS ipt_LOG xt_limit autofs4 sunrpc dock
button battery ac softdog ip6t_REJECT xt_tcpudp ipt_REJECT xt_state ipta
ble_mangle iptable_nat ip_nat iptable_filter ip6table_mangle
ip_conntrack nfnetlink ip_tables ip6table_filter ip6_tables x_tables
ipv6 apparmor ext3 jbd loop usbhid uhci_hcd ehci_hcd mptctl shpchp bnx2
usbcore
pci_hotplug hw_random reiserfs dm_alua dm_hp_sw dm_rdac dm_emc
dm_round_robin dm_multipath dm_snapshot edd dm_mod fan thermal processor
qla2xxx sg firmware_class scsi_transport_fc mptsas mptscsih mptbase scsi
_transport_sas ata_piix libata sd_mod scsi_mod
Jan 31 12:40:34 server kernel: Pid: 11609, comm: top Not tainted
2.6.16.60-0.39.3-smp #1
Jan 31 12:40:34 server kernel: RIP: 0010:[<ffffffff80171b83>]
<ffffffff80171b83>{exit_mmap+244}
Jan 31 12:40:34 server kernel: RSP: 0018:ffff81027b427a88 EFLAGS: 00010202
Jan 31 12:40:34 server kernel: RAX: 0000000000000000 RBX:
00007fff58e76000 RCX: 000000000000003e
Jan 31 12:40:34 server kernel: RDX: ffff8102936d1a98 RSI:
ffff8102936d1590 RDI: 00000000002936d1
Jan 31 12:40:34 server kernel: RBP: ffff8102a0fb4f00 R08:
0000000000000000 R09: 0000000000000010
Jan 31 12:40:34 server kernel: R10: 0000000000000000 R11:
0000000000000000 R12: ffff810001058580
Jan 31 12:40:34 server kernel: R13: 0000000000000000 R14:
0000000000000000 R15: ffff8102a63d17e0
Jan 31 12:40:34 server kernel: FS: 0000000000000000(0000)
GS:ffff8102a5ddc940(0000) knlGS:0000000000000000
Jan 31 12:40:34 server kernel: CS: 0010 DS: 0000 ES: 0000 CR0:
000000008005003b
Jan 31 12:40:34 server kernel: CR2: ffffa63e7c000000 CR3:
0000000000101000 CR4: 00000000000006e0
Jan 31 12:40:34 server kernel: Process top (pid: 11609, threadinfo
ffff81027b426000, task ffff8102a63d17e0)
Jan 31 12:40:34 server kernel: Stack: 0000000000000246 0000000000000098
ffff810001058580 ffff8102a0fb4f00
Jan 31 12:40:34 server kernel: ffff8102a0fb4f80 ffff8102a0fb4f00
0000000000000001 ffffffff80131770
Jan 31 12:40:34 server kernel: ffff8102a63d17e0 0000000000000009
Jan 31 12:40:34 server kernel: Call Trace: <ffffffff80131770>{mmput+47}
<ffffffff8013724a>{do_exit+614}
Jan 31 12:40:34 server kernel: <ffffffff802ec7fc>{__die+218}
<ffffffff802ee153>{do_page_fault+1741}
Jan 31 12:40:34 server kernel: <ffffffff801bbe47>{proc_alloc_inode+64}
<ffffffff8019e499>{alloc_inode+266}
Jan 31 12:40:34 server kernel: <ffffffff801fb013>{find_next_bit+89}
<ffffffff8010bced>{error_exit+0}
Jan 31 12:40:34 server kernel: <ffffffff8016efdd>{__handle_mm_fault+201}
<ffffffff8016ef50>{__handle_mm_fault+60}
Jan 31 12:40:34 server kernel:
<ffffffff8013cf1b>{try_to_del_timer_sync+84}
<ffffffff8013cf30>{del_timer_sync+12}
Jan 31 12:40:34 server kernel: <ffffffff802ede4c>{do_page_fault+966}
<ffffffff80199215>{__pollwait+0}
Jan 31 12:40:34 server kernel: <ffffffff8010bced>{error_exit+0}
<ffffffff801fb293>{copy_user_generic+147}
Jan 31 12:40:34 server kernel: <ffffffff80199604>{sys_select+297}
<ffffffff8010ae16>{system_call+126}
Jan 31 12:40:34 server kernel:
Jan 31 12:40:34 server kernel: Code: 0f 0b 68 80 46 31 80 c2 ca 07 48 83
c4 18 5b 5d 41 5c 41 5d
Jan 31 12:40:34 server kernel: RIP <ffffffff80171b83>{exit_mmap+244} RSP
<ffff81027b427a88>
Jan 31 12:40:34 server kernel: <1>Fixing recursive fault but reboot is
needed!
etc...
and then the system freezes.
That kernel BUG points here, in void exit_mmap(struct mm_struct *mm):

	1994:	BUG_ON(mm->nr_ptes > (FIRST_USER_ADDRESS+PMD_SIZE-1)>>PMD_SHIFT);
From what I have read, this bug has been fixed recently in the kernel.
The problem is that the fix only prevents the BUG from firing when an
OOM happens, as it simply adds this:

	vma = mm->mmap;
	if (!vma)	/* Can happen if dup_mmap() received an OOM */
		return;
So the issue that completely freezes the system is still not handled.
This happens immediately after the first write requests are received
when USE_KERNEL_NFSD_NUMBER is set to 16 or higher. When the number of
threads is set to 4, it also tends to happen most of the time, but not
always. When it occurs, the BUG message is not always shown (but the
server still freezes completely). This happens over both TCP and UDP,
and with any rsize/wsize.
None of this happens with MTU=1500.
Could this all be due to my old kernel or is there something else I'm
missing?
Txema.
end of thread, other threads:[~2011-02-09 13:57 UTC | newest]
Thread overview: 10+ messages
2011-01-12 17:14 NFSv4 memory allocation bug? Txema Heredia Genestar
2011-01-12 18:35 ` J. Bruce Fields
2011-01-13 15:48 ` Txema Heredia Genestar
2011-01-13 16:19 ` J. Bruce Fields
2011-01-13 17:25 ` Txema Heredia Genestar
2011-01-13 18:05 ` J. Bruce Fields
2011-01-14 12:11 ` Txema Heredia Genestar
2011-02-08 18:07 ` Txema Heredia
2011-02-09 0:09 ` 'J. Bruce Fields'
2011-02-09 13:57 ` Txema Heredia Genestar