From: Steven Whitehouse <swhiteho@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] Re: [GFS2] Don't flush everything on fdatasync [70/70]
Date: Fri, 08 Dec 2006 10:29:27 +0000 [thread overview]
Message-ID: <1165573767.3752.927.camel@quoit.chygwyn.com> (raw)
In-Reply-To: <457865DC.3020608@redhat.com>
Hi,
On Thu, 2006-12-07 at 14:05 -0500, Wendy Cheng wrote:
> Steven Whitehouse wrote:
> > Hi,
> >
> > On Fri, 2006-12-01 at 11:09 -0800, Andrew Morton wrote:
> >
> >>> I was taking my cue here from ext3 which does something similar. The
> >>> filemap_fdatawrite() is done by the VFS before this is called with a
> >>> filemap_fdatawait() afterwards. This was intended to flush the metadata
> >>> via (eventually) ->write_inode() although I guess I should be calling
> >>> write_inode_now() instead?
> >>>
> >> Oh I see, you're just trying to write the inode itself, not the pages.
> >>
> >> write_inode_now() will write the pages, which you seem to not want to do.
> >> Whatever. The APIs here are a bit awkward.
> >>
> >
> > I've added a comment to explain what's going on here, and also the
> > following patch. I know it could be better, but it's still an improvement
> > on what was there before.
> >
> >
> >
> Steve,
>
> I'm in the middle of something else and don't have the upstream kernel
> source handy at this moment. But I read akpm's comment as saying that
> "write_inode_now" would do writepage, and that is *not* what you want (?)
> (since the VFS has done that before this call is invoked). I vaguely
> recall that I did try write_inode_now() on GFS1 once but had to replace
> it with "sync_inode" on RHEL4 (for a reason I can't remember at this
> moment). I suggest you keep "sync_inode" (at least for a while, until we
> can prove another call does better). This "sync_inode" has been well
> tested (with GFS1's fsync call).
>
Ok. It's gone upstream now, but I'm happy to revert that change if it
turns out to be a problem. My tests show identical performance between
the two calls, but maybe there is a corner case I missed?
Both calls do writepage(), but since the VFS has already done that for
us, the chances of there being any more dirty pages to write are fairly
small, so it's unlikely to be much of a problem, I think, though I'm
willing to be proved wrong if there is a good reason. Roughly, the shape
of the code in question is sketched below.
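For reference, here is a minimal sketch of the approach (an illustration
assuming the current writeback API, not the exact patch that went
upstream):

#include <linux/fs.h>
#include <linux/writeback.h>

/*
 * A minimal sketch, not the exact upstream patch. The VFS (do_fsync)
 * already brackets ->fsync() with filemap_fdatawrite() before and
 * filemap_fdatawait() after, so all that is left for the filesystem
 * here is to push the inode metadata to disk.
 */
static int example_fsync(struct file *file, struct dentry *dentry,
			 int datasync)
{
	struct inode *inode = dentry->d_inode;
	struct writeback_control wbc = {
		.sync_mode = WB_SYNC_ALL, /* wait for the inode write */
		.nr_to_write = 0, /* data pages were written by the VFS */
	};

	/* fdatasync() can skip the inode if only timestamps are dirty */
	if (datasync && !(inode->i_state & I_DIRTY_DATASYNC))
		return 0;

	/*
	 * The alternative discussed above would be
	 * write_inode_now(inode, 1), which also writes out any
	 * remaining dirty pages before writing the inode itself.
	 */
	return sync_inode(inode, &wbc);
}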
> There is another issue. It is a gray area. Note that you don't grab any
> glock here ... so if someone *has* written something on another node,
> this sync could miss it (?). This depends on how people expect
> fsync/fdatasync to behave in a cluster filesystem. GFS1 asks for a
> shared lock here, so it will force the other nodes to flush their data
> (I personally think this is the more correct behavior). Your call though.
>
> -- Wendy
>
It's a tricky one to deal with. I would expect that it is relatively
unlikely for an application to rely on an fsync on one node causing a
cross-cluster flush. It would mean that there would have to be
another communication channel between the processes on the different
nodes, through which the node that was writing data could request a
flush and then receive notification that it had finished; otherwise it
would not seem to make any sense. It would seem an odd way to write an
application, but maybe one does exist which does this somewhere.
Delving back into the history, it looks like this is a change (with
respect to gfs1) made by Ken rather than myself. I don't mind adding
this feature though, and a sketch of what it might look like follows
below; even so, what we have now is still a marked improvement on what
was there previously, I think.
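If we did want the GFS1 behaviour, I'd expect it to look something like
this (a hypothetical sketch against the glock API; the exact flags and
call site are assumptions, not tested code):

/*
 * Hypothetical sketch of the GFS1-style behaviour: take a shared
 * glock across the sync, so that a remote node holding the glock
 * exclusively is forced to flush its dirty data before we proceed.
 */
static int example_fsync_with_glock(struct gfs2_inode *ip,
				    struct inode *inode,
				    struct writeback_control *wbc)
{
	struct gfs2_holder gh;
	int error;

	/* Demoting a remote exclusive holder triggers its flush */
	error = gfs2_glock_nq_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
	if (error)
		return error;

	error = sync_inode(inode, wbc);

	gfs2_glock_dq_uninit(&gh);
	return error;
}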
Steve.
Thread overview: 10+ messages
2006-11-30 12:24 [Cluster-devel] [GFS2] Don't flush everything on fdatasync [70/70] Steven Whitehouse
2006-12-01 7:01 ` [Cluster-devel] " Andrew Morton
2006-12-01 10:58 ` Steven Whitehouse
2006-12-01 19:09 ` Andrew Morton
2006-12-05 14:36 ` Steven Whitehouse
2006-12-07 9:11 ` Steven Whitehouse
2006-12-07 19:05 ` Wendy Cheng
2006-12-08 10:29 ` Steven Whitehouse [this message]
2006-12-08 10:29 ` Steven Whitehouse
2006-12-07 12:17 ` [Cluster-devel] [GFS2 & DLM] Pull request Steven Whitehouse