linux-fsdevel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* RE: [Linux-cachefs] Re: NFS Patch for FSCache
@ 2005-05-12 22:43 Lever, Charles
  2005-05-13 11:17 ` David Howells
  0 siblings, 1 reply; 9+ messages in thread
From: Lever, Charles @ 2005-05-12 22:43 UTC (permalink / raw)
  To: David Howells, SteveD
  Cc: linux-fsdevel, Linux filesystem caching discussion list

preface:  i think this is interesting important work.

> Steve Dickson <SteveD@redhat.com> wrote:
> 
> > But the real saving, imho, is the fact those reads were 
> measured after the
> > filesystem was umount then remounted. So system wise, there 
> should be some
> > gain due to the fact that NFS is not using the network....

i expect to see those gains when either the network and server are
slower than the client's local disk, or when the cached files are
significantly larger than the client's local RAM.  these conditions will
not always be the case, so i'm interested to know how performance is
affected when the system is running outside this sweet spot.

> I tested md5sum read speed also. My testbox is a dual 200MHz 
> PPro. It's got
> 128MB of RAM. I've got a 100MB file on the NFS server for it to read.
> 
> 	No Cache:	~14s
> 	Cold Cache:	~15s
> 	Warm Cache:	~2s
> 
> Now these numbers are approximate because they're from memory.

to benchmark this i think you need to explore the architectural
weaknesses of your approach.  how bad will it get using cachefs with
badly designed applications or client/server setups?

for instance, what happens when the client's cache disk is much slower
than the server (high performance RAID with high speed networking)?
what happens when the client's cache disk fills up so the disk cache is
constantly turning over (which files are kicked out of your backing
cachefs to make room for new data)?  what happens with multi-threaded
I/O-bound applications when the cachefs is on a single spindle?  is
there any performance dependency on the size of the backing cachefs?

do you also cache directory contents on disk?

remember that the application you designed this for (preserving cache
contents across client reboots) is only one way this will be used.  some
of us would like to use this facility to provide a high-performance
local cache larger than the client's RAM.  :^)

> Note that a cold cache is worse than no cache because CacheFS 
> (a) has to check
> the disk before NFS goes to the server, and (b) has to 
> journal the allocations
> of new data blocks. It may also have to wait whilst pages are 
> written to disk
> before it can get new ones rather than just dropping them 
> (100MB is big enough
> wrt 128MB that this will happen) and 100MB is sufficient to 
> cause it to start
> using single- and double-indirection pointers to find its 
> blocks on disk,
> though these are cached in the page cache.

synchronous file system metadata management is the bane of every cachefs
implementation i know about.  have you measured what performance impact
there is when cache files go from no indirection to single indirect
blocks, or from single to double indirection?  have you measured how
expensive it is to reuse a single cache file because the cachefs file
system is already full?  how expensive is it to invalidate the data in
the cache (say, if some other client changes a file you already have
cached in your cachefs)?

what about using an extent-based file system for the backing cachefs?
that would probably not be too difficult because you have a good
prediction already of how large the file will be (just look at the file
size on the server).

how about using smallish chunks, like the AFS cache manager, to avoid
indirection entirely?  would there be any performance advantage to
caching small files in memory and large files on disk, or vice versa?

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: NFS Patch for FSCache
  2005-05-09 21:19   ` Andrew Morton
@ 2005-05-10 18:43 Steve Dickson
  2005-05-09 10:31 ` Steve Dickson
  0 siblings, 1 reply; 9+ messages in thread
From: Steve Dickson @ 2005-05-10 18:43 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-fsdevel, linux-cachefs

Andrew Morton wrote:
> Steve Dickson <SteveD@redhat.com> wrote:
> 
>>Attached is a patch that enables NFS to use David Howells'
>>File System Caching implementation (FSCache).
> 
> 
> Do you have any performance results for this?
I haven't done any formal performance testing, but from
the functionality testing I've done, I've seen
a ~20% increase in reads speed (verses otw reads).
Mainly due to the fact NFS only needs to do getattrs
and such when the data is cached. But buyer beware...
this a very rough number, so mileage may very. ;-)

I don't have a number for writes, (maybe David does)
but I'm sure there will be a penalty to cache that
data, but its something that can be improve over time.

But the real saving, imho, is the fact those
reads were measured after the filesystem was
umount then remounted. So system wise, there
should be some gain due to the fact that NFS
is not using the network....

steved.

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2005-05-19  6:48 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2005-05-12 22:43 [Linux-cachefs] Re: NFS Patch for FSCache Lever, Charles
2005-05-13 11:17 ` David Howells
2005-05-14  2:08   ` Troy Benjegerdes
2005-05-16 12:47   ` [Linux-cachefs] " David Howells
2005-05-17 21:42     ` David Masover
2005-05-18 10:28     ` [Linux-cachefs] " David Howells
2005-05-19  2:18       ` Troy Benjegerdes
2005-05-19  6:48         ` David Masover
  -- strict thread matches above, loose matches on Subject: below --
2005-05-10 18:43 Steve Dickson
2005-05-09 10:31 ` Steve Dickson
2005-05-09 21:19   ` Andrew Morton
2005-05-10 19:12     ` [Linux-cachefs] " David Howells

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).