From: Andreas Dilger
Subject: Re: stat64 for over 2TB file returned invalid st_blocks
Date: Fri, 2 Dec 2005 11:58:05 -0700
Message-ID: <20051202185805.GS14509@schatzie.adilger.int>
References: <01e901c5f66e$d4551b70$4168010a@bsd.tnes.nec.co.jp> <1133447539.8557.14.camel@kleikamp.austin.ibm.com> <041701c5f742$d6b0a450$4168010a@bsd.tnes.nec.co.jp>
In-Reply-To: <041701c5f742$d6b0a450$4168010a@bsd.tnes.nec.co.jp>
To: Takashi Sato
Cc: Dave Kleikamp, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org

On Dec 02, 2005 22:18 +0900, Takashi Sato wrote:
> I also found another problem in the generic quota code.  In
> dquot_transfer(), the file usage is calculated from i_blocks via
> inode_get_bytes().  If the file is over 2TB, the change in usage is
> less than expected.
>
> To solve this problem, I think inode.i_blocks should be 8 bytes.

Actually, it should probably be "sector_t", because it isn't really
possible to have a file with more blocks than the size of the block
device.  That avoids the memory overhead of a 64-bit counter, in a
very heavily-used struct, on small systems that have no need for it.

It may be that some network filesystems which support gigantic
non-sparse files would need to enable CONFIG_LBD in order to get
support for this.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
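
To make the wraparound concrete: a minimal user-space sketch (not kernel
code; it only assumes the standard 512-byte unit used by st_blocks and
i_blocks) showing that a fully-allocated 2TB file needs exactly 2^32
sectors, one more than a 32-bit i_blocks can represent:

/*
 * Illustrative only: a 2 TiB file is 2^41 bytes, and 2^41 / 512 = 2^32
 * sectors.  A 32-bit block counter (sector_t without CONFIG_LBD) wraps
 * to 0, while a 64-bit counter (CONFIG_LBD=y) holds the true value.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t size = 1ULL << 41;      /* 2 TiB, fully allocated */
	uint64_t sectors = size >> 9;    /* 512-byte units, as in st_blocks */

	uint32_t blocks32 = (uint32_t)sectors;  /* 32-bit i_blocks */
	uint64_t blocks64 = sectors;            /* 64-bit sector_t  */

	printf("true sectors : %llu\n", (unsigned long long)sectors);
	printf("32-bit field : %u (wrapped)\n", blocks32);
	printf("64-bit field : %llu\n", (unsigned long long)blocks64);
	return 0;
}

This prints a true count of 4294967296 sectors but a wrapped 32-bit
value of 0, which is the same undercounting that shows up in st_blocks
and in the usage computed via inode_get_bytes() in dquot_transfer().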