From mboxrd@z Thu Jan 1 00:00:00 1970
From: Frank van Maarseveen
Subject: Re: Finding hardlinks
Date: Wed, 3 Jan 2007 23:01:29 +0100
Message-ID: <20070103220129.GA4788@janus>
References: <20070103185815.GA2182@janus>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Arjan van de Ven , Jan Harkes , linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, Miklos Szeredi , Mikulas Patocka ,
	Pavel Machek
Return-path:
Received: from frankvm.xs4all.nl ([80.126.170.174]:43624 "EHLO janus.localdomain"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S932116AbXACWBa (ORCPT ); Wed, 3 Jan 2007 17:01:30 -0500
To: Bryan Henderson
Content-Disposition: inline
In-Reply-To:
Sender: linux-fsdevel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

On Wed, Jan 03, 2007 at 01:09:41PM -0800, Bryan Henderson wrote:
> >On any decent filesystem st_ino should uniquely identify an object and
> >reliably provide hardlink information. The UNIX world has relied upon this
> >for decades. A filesystem with st_ino collisions without being hardlinked
> >(or the other way around) needs a fix.
> 
> But for at least the last of those decades, filesystems that could not do
> that were not uncommon. They had to present 32 bit inode numbers and
> either allowed more than 4G files or just didn't have the means of
> assigning inode numbers with the proper uniqueness to files. And the sky
> did not fall.

I don't have an explanation why; I think it's mostly high-end use, and
high-end users tend to understand more. But we're going to see more really
large filesystems in "normal" use, so...

Large file support is already necessary to handle DVD and video, and it is
also useful for virtualization images. So the failing stat() calls should
already be a thing of the past with modern distributions.

-- 
Frank
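[Editor's note: a minimal sketch, not part of the original mail, of how
userspace traditionally detects hardlinks from the (st_dev, st_ino) pair
returned by stat(2). It assumes only the property debated above: that
st_ino is unique per object on a given device.]

/* hardlink-check.c: report whether two paths refer to the same object.
 * Tools like tar and du rely on the same (st_dev, st_ino) comparison. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct stat a, b;

	if (argc != 3) {
		fprintf(stderr, "usage: %s FILE1 FILE2\n", argv[0]);
		return 2;
	}
	if (stat(argv[1], &a) != 0 || stat(argv[2], &b) != 0) {
		perror("stat");
		return 2;
	}
	/* Same device and same inode number => same underlying object,
	 * i.e. the two names are hardlinks of each other. */
	if (a.st_dev == b.st_dev && a.st_ino == b.st_ino)
		printf("same object (hardlinks): ino=%llu nlink=%llu\n",
		       (unsigned long long)a.st_ino,
		       (unsigned long long)a.st_nlink);
	else
		printf("different objects\n");
	return 0;
}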