linux-fsdevel.vger.kernel.org archive mirror
* Delaying page writeouts
@ 2005-11-09 16:43 Martin Jambor
From: Martin Jambor @ 2005-11-09 16:43 UTC (permalink / raw)
  To: linux-fsdevel

Hi,

our filesystem does not write out individual pages one at a time but
rather batches of them. Because of that, our writepage(s) handlers need
to queue pages until there are enough of them. In the case of sync
operations the queue may be flushed to disk at any time, but doing so
for every inode would be terribly inefficient.

The solution I am thinking of at the moment is the following:

1. Put the page into my queue of requests.
2. Take an extra reference on every page passed to writepage with
page_cache_get().
3. Call set_page_writeback(), unlock_page() and end_page_writeback().
4. If there are enough pages, build the necessary metadata structures
and start BIOing the whole thing to disk.
5. Once the BIOs complete, call page_cache_release() to drop the extra
references on the pages.
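To make the batching logic concrete, here is a minimal userspace model
of steps 1, 2, 4 and 5 (step 3, the writeback flag handling, is kernel
API and is only noted in a comment). All names here (struct batch,
queue_page, flush_batch, BATCH_SIZE) are hypothetical stand-ins, not
kernel interfaces; page_get()/page_put() merely model
page_cache_get()/page_cache_release():

```c
#include <stddef.h>

#define BATCH_SIZE 4   /* hypothetical "enough pages" threshold */

/* Userspace model of a page: just a refcount and an I/O flag. */
struct page { int refcount; int written; };

struct batch {
	struct page *q[BATCH_SIZE];
	size_t n;
};

/* Step 2: take an extra reference (models page_cache_get()). */
static void page_get(struct page *p)  { p->refcount++; }
/* Step 5: drop it once the BIO completes (models page_cache_release()). */
static void page_put(struct page *p)  { p->refcount--; }

/* Steps 4-5: write out the whole batch, then release the pages. */
static void flush_batch(struct batch *b)
{
	for (size_t i = 0; i < b->n; i++) {
		b->q[i]->written = 1;   /* stands in for submitting one big BIO */
		page_put(b->q[i]);      /* step 5: release after completion */
	}
	b->n = 0;
}

/* Steps 1-4: queue one page; flush once the batch is full. */
static void queue_page(struct batch *b, struct page *p)
{
	page_get(p);                    /* step 2: extra reference */
	b->q[b->n++] = p;               /* step 1: enqueue */
	/* step 3 (set_page_writeback/unlock/end_page_writeback) omitted */
	if (b->n == BATCH_SIZE)         /* step 4: enough pages queued */
		flush_batch(b);
}
```

A sync or fsync path would simply call flush_batch() directly on a
partially filled batch, which is how the flush-at-any-time requirement
above would be met in this model.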

In the case of sync and fsync, the queue would be flushed after all
relevant inodes have written their dirty data into it.

The question is: Do you think this is viable?

Thank you for any comments on this,

Martin Jambor

