From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christoph Hellwig
Subject: Re: [PATCH] vfs: get_next_ino(), never inum=0
Date: Tue, 29 Apr 2014 19:53:42 +0200
Message-ID: <20140429175342.GA26109@lst.de>
References: <1398786301-11484-1-git-send-email-hooanon05g@gmail.com> <18587.1398793322@jrobl>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: dchinner@redhat.com, viro@zeniv.linux.org.uk, linux-fsdevel@vger.kernel.org
To: "J. R. Okajima"
Return-path: 
Received: from verein.lst.de ([213.95.11.211]:54304 "EHLO newverein.lst.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S932491AbaD2Rxp
	(ORCPT ); Tue, 29 Apr 2014 13:53:45 -0400
Content-Disposition: inline
In-Reply-To: <18587.1398793322@jrobl>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID: 

On Wed, Apr 30, 2014 at 02:42:02AM +0900, J. R. Okajima wrote:
> >
> There is another unpleasant effect when get_next_ino() wraps
> around. When there is a file whose inum=100 on tmpfs, a new file may get
> inum=100. I am not sure what will happen when the duplicated inums exist
> on tmpfs. ...
> > Undeterministic behaviour when exporting via NFS?

If you care about really unique inode numbers you shouldn't use
get_next_ino() but something like an idr allocator.  The default i_ino
assigned in new_inode(), from which get_next_ino() was factored out, was
mostly intended for small synthetic filesystems with few enough inodes
that the counter wouldn't wrap around.

And yes, file handle based lookups are screwed by duplicated inode
numbers, as are tools trying to do file level de-duplication, mostly in
the backup or archival space.