From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Chow
Subject: Re: fs block size and PAGE_CACHE_SIZE
Date: Mon, 12 May 2003 09:59:36 +0800
Sender: linux-fsdevel-owner@vger.kernel.org
Message-ID: <3EBF0008.5020306@shaolinmicro.com>
References:
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
Cc: linux-fsdevel@vger.kernel.org
Return-path:
Received: from [202.94.238.145] ([202.94.238.145]:38792 "EHLO mail.shaolinmicro.com") by vger.kernel.org with ESMTP id S261847AbTELBrJ (ORCPT ); Sun, 11 May 2003 21:47:09 -0400
To: Bryan Henderson
List-Id: linux-fsdevel.vger.kernel.org

Bryan Henderson wrote:
>
> I don't know why there would be any issue having page size != block size,
> as long as one is a multiple of the other. Maybe you have a particular
> issue in mind?
>
> I know one area where having a block larger than a page is a pain: When
> you allocate a new block in order to write just one page, you have to
> separately initialize the rest of the block.
>
>> how could I efficiently populate one 16k block to the page cache (4 pages)
>> at one readpage() op?
>
> Why would you want to? If someone wants to access those other 3 pages,
> they'll have page faults of their own.

Yes, for uncompressed file systems it doesn't really matter. But if I am
writing a file system that supports compressed 16k blocks, I have to do
three extra pages (12k) of decompression work to serve only 4k of data,
since there is no way to read from the middle of a compressed block. If
the extra 12k decompressed from the block is not put into the page cache
and marked up-to-date, that work is wasted. That's why.

regards,
David Chow
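P.S. The trade-off can be sketched in userspace (a hedged simulation only:
Python and zlib stand in for the kernel and the filesystem's codec, and the
names, the toy "disk", and the counter are all my own invention, not any
real kernel interface):

```python
import zlib

PAGE_SIZE = 4096
BLOCK_SIZE = 16384  # 4 pages per compressed block

# Hypothetical on-disk image: one 16k block, stored compressed.
raw_block = bytes(range(256)) * (BLOCK_SIZE // 256)
disk = {0: zlib.compress(raw_block)}   # block number -> compressed bytes

page_cache = {}       # page index -> 4k of decompressed data
decompressions = 0    # how often we pay the full 16k decompress cost

def readpage(page_index):
    """Fault in one 4k page; populate all sibling pages of its block."""
    global decompressions
    if page_index in page_cache:
        return page_cache[page_index]
    block = page_index * PAGE_SIZE // BLOCK_SIZE
    data = zlib.decompress(disk[block])        # must inflate the whole 16k
    decompressions += 1
    first = block * BLOCK_SIZE // PAGE_SIZE
    for i in range(BLOCK_SIZE // PAGE_SIZE):   # keep the other 12k, too
        page_cache[first + i] = data[i * PAGE_SIZE:(i + 1) * PAGE_SIZE]
    return page_cache[page_index]

readpage(1)   # first fault: one decompression fills pages 0-3
readpage(3)   # sibling page: served from cache, no second decompression
```

If readpage() instead threw away everything but the faulting page, each of
the four pages would cost a full 16k inflate; caching the siblings and
marking them up-to-date amortizes the decompression to once per block.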