* Re: ELOOP from getdents
[not found] <87lh1fizyy.fsf@thinkpad.rath.org>
@ 2016-07-12 22:35 ` Nikolaus Rath
2016-07-12 23:26 ` Trond Myklebust
0 siblings, 1 reply; 5+ messages in thread
From: Nikolaus Rath @ 2016-07-12 22:35 UTC (permalink / raw)
To: linux-nfs, linux-fsdevel
Hi,
Really, does no one have any idea about this?
Adding fsdevel to Cc; maybe someone there can help.
Best,
Nikolaus
On Jul 05 2016, Nikolaus Rath <Nikolaus-BTH8mxji4b0@public.gmane.org> wrote:
> Hello,
>
> I'm having trouble exporting a FUSE file system over nfs4
> (cf. https://bitbucket.org/nikratio/s3ql/issues/221/). Even with only
> a few entries in the exported directory, `ls` on the NFS mountpoint
> fails with:
>
> # ls -li /mnt/nfs/
> /bin/ls: reading directory /mnt/nfs/: Too many levels of symbolic links
> total 1
> 3 drwx------ 1 root root 0 Jul 5 11:07 lost+found/
> 3 drwx------ 1 root root 0 Jul 5 11:07 lost+found/
> 4 -rw-r--r-- 1 root root 4 Jul 5 11:07 testfile
> 4 -rw-r--r-- 1 root root 4 Jul 5 11:07 testfile
>
> Running strace shows that the getdents() syscall fails with ELOOP:
>
> stat("/mnt/nfs", {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
> openat(AT_FDCWD, "/mnt/nfs", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
> getdents(3, /* 4 entries */, 32768) = 112
> getdents(3, /* 2 entries */, 32768) = 64
> getdents(3, 0xf15c90, 32768) = -1 ELOOP (Too many levels of symbolic links)
> open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 4
>
> This happens only with NFSv4; when mounting with vers=3, the error
> does not occur.
>
> The FUSE file system receives the same requests and responds in the same
> way in both cases:
>
> 2016-07-05 12:22:31.477 21519:fuse-worker-7 s3ql.fs.opendir: started with 1
> 2016-07-05 12:22:31.477 21519:fuse-worker-8 s3ql.fs.readdir: started with 1, 0
> 2016-07-05 12:22:31.478 21519:fuse-worker-8 s3ql.fs.readdir: reporting lost+found with inode 3, generation 0, nlink 1
> 2016-07-05 12:22:31.478 21519:fuse-worker-8 s3ql.fs.readdir: reporting testfile with inode 4, generation 0, nlink 1
> 2016-07-05 12:22:31.479 21519:fuse-worker-9 s3ql.fs.getattr: started with 1
> 2016-07-05 12:22:31.479 21519:fuse-worker-10 s3ql.fs._lookup: started with 1, b'lost+found'
> 2016-07-05 12:22:31.480 21519:fuse-worker-11 s3ql.fs._lookup: started with 1, b'testfile'
> 2016-07-05 12:22:31.481 21519:fuse-worker-12 s3ql.fs.readdir: started with 1, 2
> 2016-07-05 12:22:31.484 21519:fuse-worker-13 s3ql.fs.releasedir: started with 1
>
>
> The numbers refer to inodes. So FUSE first receives an opendir() request
> for inode 1 (the file system root / mountpoint), followed by readdir()
> for the same directory with offset 0. It reports two entries. It then
> receives another readdir for this directory with offset 2, and reports
> that all entries have been returned.
>
> However, for some reason NFSv4 gets confused by this and reports 6
> entries to ls.
>
>
> Can anyone advise what might be happening here?
>
>
> Best,
> -Nikolaus
>
> --
> GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
> Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
>
> »Time flies like an arrow, fruit flies like a Banana.«
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
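The exchange in the log above can be modeled with a small sketch (toy code, not s3ql's implementation). The key contract is that readdir(off) returns entries starting at offset `off`, each entry carrying the offset at which to resume *after* it, and the caller loops until an empty batch comes back:

```python
# Toy model (not s3ql code) of the FUSE readdir contract seen in the log.

ENTRIES = [("lost+found", 3), ("testfile", 4)]  # (name, inode)

def readdir(off):
    """Return (name, inode, next_off) tuples starting at offset `off`."""
    return [(name, ino, i + 1)
            for i, (name, ino) in enumerate(ENTRIES) if i >= off]

def list_dir():
    """Drive readdir the way the kernel does: resume at the last next_off."""
    names, off = [], 0
    while True:
        batch = readdir(off)
        if not batch:
            return names
        names.extend(name for name, _, _ in batch)
        off = batch[-1][2]  # matches the second readdir call with offset 2
```

Running `list_dir()` reproduces the pattern in the log: one readdir at offset 0 returning both entries, then one at offset 2 returning nothing.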
* Re: ELOOP from getdents
2016-07-12 22:35 ` ELOOP from getdents Nikolaus Rath
@ 2016-07-12 23:26 ` Trond Myklebust
2016-07-13 14:45 ` J. Bruce Fields
2016-07-13 15:06 ` Nikolaus Rath
0 siblings, 2 replies; 5+ messages in thread
From: Trond Myklebust @ 2016-07-12 23:26 UTC (permalink / raw)
To: Nikolaus Rath; +Cc: linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
> On Jul 12, 2016, at 18:35, Nikolaus Rath <Nikolaus@rath.org> wrote:
>
> Hi,
>
> Really no one any idea about this?
>
In NFSv4, offsets 1 and 2 are reserved: https://tools.ietf.org/html/rfc7530#section-16.24
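Since the protocol reserves cookie values 1 and 2 (with 0 meaning "start of directory"), an NFS-exportable filesystem must not hand out those offsets for ordinary entries. One common workaround, sketched here hypothetically (this is not s3ql's actual fix), is to bias every cookie past the reserved range:

```python
RESERVED = 3  # NFSv4 reserves cookies 1 and 2; 0 means "start of directory"

def to_cookie(index):
    """Map an internal 0-based entry index to an NFS-safe readdir cookie."""
    return index + RESERVED

def from_cookie(cookie):
    """Map a cookie back to the internal index; 0 restarts the listing."""
    if cookie == 0:
        return 0
    assert cookie >= RESERVED, "cookies 1 and 2 are reserved by NFSv4"
    return cookie - RESERVED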
* Re: ELOOP from getdents
2016-07-12 23:26 ` Trond Myklebust
@ 2016-07-13 14:45 ` J. Bruce Fields
2016-07-13 15:06 ` Nikolaus Rath
1 sibling, 0 replies; 5+ messages in thread
From: J. Bruce Fields @ 2016-07-13 14:45 UTC (permalink / raw)
To: Trond Myklebust
Cc: Nikolaus Rath, linux-nfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org
On Tue, Jul 12, 2016 at 11:26:00PM +0000, Trond Myklebust wrote:
>
> > On Jul 12, 2016, at 18:35, Nikolaus Rath <Nikolaus@rath.org> wrote:
> >
> > Hi,
> >
> > Really no one any idea about this?
> >
>
> In NFSv4, offsets 1 and 2 are reserved: https://tools.ietf.org/html/rfc7530#section-16.24
I think FUSE is just getting the readdir offsets from userspace, so I
guess this is the fault of the userspace filesystem. Though maybe the
FUSE kernel driver should be doing some more sanity-checking, I don't
know.
--b.
* Re: ELOOP from getdents
2016-07-12 23:26 ` Trond Myklebust
2016-07-13 14:45 ` J. Bruce Fields
@ 2016-07-13 15:06 ` Nikolaus Rath
2016-07-13 15:34 ` J. Bruce Fields
1 sibling, 1 reply; 5+ messages in thread
From: Nikolaus Rath @ 2016-07-13 15:06 UTC (permalink / raw)
To: Trond Myklebust; +Cc: linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
On Jul 12 2016, Trond Myklebust <trondmy@primarydata.com> wrote:
> In NFSv4, offsets 1 and 2 are reserved:
> https://tools.ietf.org/html/rfc7530#section-16.24
Ah, that explains it. Thanks!
I was assuming that I could export any "proper" unix file system over
NFS - and as far as I know, the rest of the VFS does not make any
assumptions (or reservations) about readdir offsets. Are there other
such constraints? I looked at the RFC, but it's rather hard to extract
that specific information...
Best,
Nikolaus
* Re: ELOOP from getdents
2016-07-13 15:06 ` Nikolaus Rath
@ 2016-07-13 15:34 ` J. Bruce Fields
0 siblings, 0 replies; 5+ messages in thread
From: J. Bruce Fields @ 2016-07-13 15:34 UTC (permalink / raw)
To: Nikolaus Rath
Cc: Trond Myklebust, linux-nfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org
On Wed, Jul 13, 2016 at 05:06:34PM +0200, Nikolaus Rath wrote:
> On Jul 12 2016, Trond Myklebust <trondmy@primarydata.com> wrote:
> > In NFSv4, offsets 1 and 2 are reserved:
> > https://tools.ietf.org/html/rfc7530#section-16.24
>
> Ah, that explains it. Thanks!
>
> I was assuming that I could export any "proper" unix file system over
> NFS - and as far as I know, the rest of the VFS does not make any
> assumptions (or reservations) about readdir offsets. Are there other
> such constraints? I looked at the RFC, but it's rather hard to extract
> that specific information...
Local filesystems only need to generate readdir offsets that work for a
given open, while exportable filesystems need to generate readdir
offsets that they can still interpret at an arbitrary future point
(possibly after a reboot).
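The distinction can be illustrated with a sketch (hypothetical code, assuming directory entries live in a persistent table keyed by a stable row id): a per-open index is only meaningful within one open, whereas a cookie derived from persistent state survives reopen or reboot.

```python
TABLE = {10: "lost+found", 17: "testfile", 23: "data"}  # stable rowid -> name

def readdir_ephemeral(opened_entries, off):
    # `opened_entries` is a snapshot built at opendir() time; `off`
    # indexes into it and means nothing to any later open.
    return opened_entries[off:]

def readdir_stable(cookie):
    # Resume strictly after the persistent rowid `cookie` (0 = from the
    # start); valid no matter when or by whom the cookie was obtained.
    return [(rowid, name)
            for rowid, name in sorted(TABLE.items()) if rowid > cookie]
```

With stable cookies, an NFS client can stash a rowid and resume the listing against a fresh server instance; the snapshot-based variant cannot offer that.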
The other main requirements are on filehandles.
An in-kernel filesystem shouldn't define export_ops if it doesn't
support export, so it should be able to fail export attempts early on
rather than seeming to work and then behaving weirdly later, as in this
case. I don't know if there's a comparable way for a FUSE filesystem to
say "don't even try exporting me".
--b.