public inbox for linux-kernel@vger.kernel.org
From: Andreas Dilger <adilger@clusterfs.com>
To: Linus Torvalds <torvalds@transmeta.com>
Cc: Christoph Hellwig <hch@infradead.org>,
	Nikita Danilov <Nikita@Namesys.COM>,
	Linux Kernel Mailing List <Linux-Kernel@vger.kernel.org>,
	Reiserfs mail-list <Reiserfs-List@Namesys.COM>
Subject: Re: [PATCH]: reiser4 [5/8] export remove_from_page_cache()
Date: Fri, 1 Nov 2002 14:56:54 -0700	[thread overview]
Message-ID: <20021101215653.GH8864@clusterfs.com> (raw)
In-Reply-To: <Pine.LNX.4.44.0210311053170.1526-100000@penguin.transmeta.com>

On Oct 31, 2002  10:57 -0800, Linus Torvalds wrote:
> On Thu, 31 Oct 2002, Christoph Hellwig wrote:
> > What about changing truncate_inode_pages to take an additional len
> > argument so you don't have to remove all pages past an offset?
> 
> Actually, we may want that for other reasons anyway. In particular, I can
> well imagine why a networked filesystem in particular might want to
> invalidate a range of a file cache, but not necessarily all of it.
> 
> (Yeah, I don't know of any network filesystem that does invalidation on
> anything but a file granularity, but I assume such filesystems have to
> exist. Especially in cluster environments it sounds like a sane thing to
> do invalidates on a finer granularity)

Yes, we definitely need such a beast for Lustre.  Currently (because we
haven't gotten around to fixing it) we invalidate the whole file when
there is a lock conflict, even though we really only want to invalidate
a single page or a range of pages.
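
Just to illustrate the kind of interface Christoph is describing - the
name, signature, and details below are made up for this email, not
anything in the current tree, and dirty/mmap'ed/partial pages are all
ignored - a range-aware truncate could look roughly like this, reusing
the existing page-cache helpers:

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Hypothetical sketch: drop only the page-cache pages covering
 * [lstart, lstart + len) instead of everything past an offset.
 * find_lock_page()/unlock_page()/page_cache_release() are the usual
 * page-cache helpers; remove_from_page_cache() is the function this
 * thread is about exporting.
 */
void truncate_inode_pages_range(struct address_space *mapping,
				loff_t lstart, loff_t len)
{
	pgoff_t index, end;

	if (len <= 0)
		return;

	index = lstart >> PAGE_CACHE_SHIFT;
	end = (lstart + len - 1) >> PAGE_CACHE_SHIFT;

	for (; index <= end; index++) {
		struct page *page = find_lock_page(mapping, index);

		if (!page)
			continue;
		remove_from_page_cache(page);
		page_cache_release(page);	/* the page cache's reference */
		unlock_page(page);
		page_cache_release(page);	/* find_lock_page()'s reference */
	}
}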

Our "performance" release isn't until next year - we're still working
on "performant" right now, but in the case of multiple clients writing
to non-overlapping areas in a file, or different files we're still
pretty good - abount 1.5GB/s aggregate write speed with 20 storage targets.

We have 62 storage targets in our target environment, but haven't done
a full-scale test yet because we're working on some nasty distributed
metadata bugs right now.  Since the client->target I/O is pretty much
independent, there should be no problem hitting 4.5 GB/s aggregate
write speed.
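
(To spell out the arithmetic behind that projection, assuming the
per-target rate simply holds as targets are added: 1.5 GB/s over 20
targets is about 75 MB/s per target, and 75 MB/s * 62 targets is
roughly 4.65 GB/s, so 4.5 GB/s is just linear scaling with a little
headroom.)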

Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/



Thread overview: 13+ messages
2002-10-31 16:03 [PATCH]: reiser4 [5/8] export remove_from_page_cache() Nikita Danilov
2002-10-31 16:18 ` Christoph Hellwig
2002-10-31 16:24   ` Nikita Danilov
2002-10-31 16:31     ` Christoph Hellwig
2002-10-31 16:45       ` Nikita Danilov
2002-10-31 16:58         ` Christoph Hellwig
2002-10-31 17:04           ` Nikita Danilov
2002-10-31 17:12             ` Christoph Hellwig
2002-10-31 17:33       ` Andreas Dilger
2002-10-31 18:25         ` Nikita Danilov
2002-10-31 20:17         ` Ragnar Kjørstad
2002-10-31 18:57       ` Linus Torvalds
2002-11-01 21:56         ` Andreas Dilger [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20021101215653.GH8864@clusterfs.com \
    --to=adilger@clusterfs.com \
    --cc=Linux-Kernel@vger.kernel.org \
    --cc=Nikita@Namesys.COM \
    --cc=Reiserfs-List@Namesys.COM \
    --cc=hch@infradead.org \
    --cc=torvalds@transmeta.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox