From: Troy Benjegerdes
Subject: Re: Re: NFS Patch for FSCache
Date: Fri, 13 May 2005 21:18:42 -0500
Message-ID: <20050514021842.GR999@kalmia.hozed.org>
References: <428100D2.5030806@RedHat.com> <427F3BEC.4010900@RedHat.com>
	<20050509141945.60fecec0.akpm@osdl.org> <13188.1115752371@redhat.com>
In-Reply-To: <13188.1115752371@redhat.com>
Reply-To: Linux filesystem caching discussion list
To: Linux filesystem caching discussion list
Cc: Andrew Morton, linux-fsdevel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

On Tue, May 10, 2005 at 08:12:51PM +0100, David Howells wrote:
> 
> Steve Dickson wrote:
> 
> > But the real saving, imho, is the fact those reads were measured after
> > the filesystem was unmounted then remounted. So system-wise, there
> > should be some gain due to the fact that NFS is not using the network....
> 
> I tested md5sum read speed also. My testbox is a dual 200MHz PPro. It's got
> 128MB of RAM. I've got a 100MB file on the NFS server for it to read.
> 
> 	No Cache:	~14s
> 	Cold Cache:	~15s
> 	Warm Cache:	~2s
> 
> Now these numbers are approximate because they're from memory.
> 
> Note that a cold cache is worse than no cache because CacheFS (a) has to
> check the disk before NFS goes to the server, and (b) has to journal the
> allocations of new data blocks. It may also have to wait whilst pages are
> written to disk before it can get new ones rather than just dropping them
> (100MB is big enough wrt 128MB that this will happen), and 100MB is
> sufficient to cause it to start using single- and double-indirection
> pointers to find its blocks on disk, though these are cached in the page
> cache.

How big was the CacheFS filesystem? Now try reading a 1GB file over NFS...

I have found (with OpenAFS) that I either need a really small cache or a
really big one. The bigger the OpenAFS cache gets, the slower it goes. The
only place I run with a >1GB OpenAFS cache is on an IMAP server that has an
8GB cache for maildirs.
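
If anyone wants to repeat the comparison without md5sum's hashing getting in
the way, a plain timed sequential read is enough. The snippet below is only a
sketch, not what David actually ran; the file path and buffer size are
placeholders for whatever you have exported:

/*
 * Timed sequential read of a file, for rough no-cache / cold-cache /
 * warm-cache comparisons.  The default path is a made-up placeholder;
 * pass the real test file as argv[1].
 */
#define _POSIX_C_SOURCE 199309L
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/nfs/testfile";
	char buf[64 * 1024];
	struct timespec t0, t1;
	long long total = 0;
	ssize_t n;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		total += n;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	if (n < 0)
		perror("read");
	close(fd);

	printf("read %lld bytes in %.2fs\n", total,
	       (double)(t1.tv_sec - t0.tv_sec) +
	       (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}

Build with "gcc -O2" (add -lrt on older glibc), run it once right after
mounting for the cold-cache number and again for warm. Umount/remount (or
otherwise drop caches) between runs or the page cache will hide everything.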