From mboxrd@z Thu Jan  1 00:00:00 1970
From: "J. Bruce Fields"
Subject: Re: [PATCH review 6/6] vfs: Cache the results of path_connected
Date: Wed, 5 Aug 2015 11:59:48 -0400
Message-ID: <20150805155948.GD17797@fieldses.org>
References: <871tncuaf6.fsf@x220.int.ebiederm.org> <87mw5xq7lt.fsf@x220.int.ebiederm.org> <87a8yqou41.fsf_-_@x220.int.ebiederm.org> <874moq9oyb.fsf_-_@x220.int.ebiederm.org> <871tfkawu9.fsf_-_@x220.int.ebiederm.org> <8738009i0h.fsf_-_@x220.int.ebiederm.org> <20150804115215.GA317@odin.com> <871tfj0x4j.fsf@x220.int.ebiederm.org> <20150804194447.GB6664@fieldses.org> <874mkey824.fsf@x220.int.ebiederm.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: Andrey Vagin, Miklos Szeredi, Richard Weinberger, Linux Containers, Andy Lutomirski, Al Viro, linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Jann Horn, Linus Torvalds, Willy Tarreau
To: "Eric W. Biederman"
Content-Disposition: inline
In-Reply-To: <874mkey824.fsf-JOvCrm2gF+uungPnsOpG7nhyD016LWXt@public.gmane.org>
Sender: containers-bounces-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
Errors-To: containers-bounces-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
List-Id: linux-fsdevel.vger.kernel.org

On Tue, Aug 04, 2015 at 05:58:59PM -0500, Eric W. Biederman wrote:
> "J. Bruce Fields" writes:
>
> > On Tue, Aug 04, 2015 at 12:41:32PM -0500, Eric W. Biederman wrote:
> >> A pathname lookup taking 16 seconds seems absurd. But perhaps in the
> >> worst case.
> >>
> >> The maximum length of a path that can be passed into path_lookup is
> >> 4096. For a lookup to be problematic there must be at least as many
> >> instances of .. as there are of any other path component. So each pair
> >> of a minimum length path element and a .. element must take at least 5
> >> bytes. Which in 4096 bytes leaves room for 819 path elements.
> >> If every one of those 819 path components triggered a disk seek at
> >> 100 seeks per second I could see a path name lookup potentially
> >> taking 8 seconds.
> >
> > A lookup on NFS while a server's rebooting or the network's flaky could
> > take arbitrarily long. Other network filesystems and fuse can have
> > similar problems. Depending on threat model an attacker might have
> > quite precise control over that timing. Disk filesystems could have all
> > the same problems since there's no guarantee the underlying block device
> > is really local. Even ignoring that, hardware can be slow or flaky.
> > And couldn't an allocation in theory block for an arbitrarily long time?
> >
> > Apologies for just dropping into the middle here! I haven't read the
> > rest and don't have the context to know whether any of that's relevant.
>
> No problem. The basic question is: can 2 billion renames be performed on
> the same filesystem in less time than a single path lookup, allowing
> the use of a 32-bit counter?

Certainly if you have control over an NFS or FUSE server then you can
arrange for that to happen--just delay the lookup until you've processed
enough renames. I don't know if that's interesting....

> If you could look upthread and tell me what you think of the issue with
> file handle to dentry conversion and bind mounts I would appreciate it.

OK, I see your comments in "[PATCH review 0/6] Bind mount escape fixes".
I'm not sure I understand yet; I'll take a closer look.

--b.