From: David Teigland <teigland@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] [PATCH 5/5] gfs2: dlm based recovery coordination
Date: Mon, 9 Jan 2012 12:00:40 -0500 [thread overview]
Message-ID: <20120109170040.GB9956@redhat.com> (raw)
In-Reply-To: <20120109164626.GA9956@redhat.com>
On Mon, Jan 09, 2012 at 11:46:26AM -0500, David Teigland wrote:
> On Mon, Jan 09, 2012 at 04:36:30PM +0000, Steven Whitehouse wrote:
> > On Thu, 2012-01-05 at 10:46 -0600, David Teigland wrote:
> > > This new method of managing recovery is an alternative to
> > > the previous approach of using the userland gfs_controld.
> > >
> > > - use dlm slot numbers to assign journal id's
> > > - use dlm recovery callbacks to initiate journal recovery
> > > - use a dlm lock to determine the first node to mount fs
> > > - use a dlm lock to track journals that need recovery
> >
> > I've just been looking at this again, and a question springs to mind...
> > how does this deal with nodes which are read-only or spectator mounts?
> > In the old system we used to propagate that information to gfs_controld
> > but I've not spotted anything similar in the patch so far, so I'm
> > wondering whether it needs to know that information or not,
>
> The dlm allocates a "slot" for all lockspace members, so spectator mounts
> (like readonly mounts) would be given a slot/jid. In gfs_controld,
> spectator mounts are not given a jid (that came from the time when
> adding a journal required extending the device+fs.) These days, there's
> probably no meaningful difference between spectator and readonly mounts.
There's one other part, and that's whether a readonly or spectator node
should attempt to recover the journal of a failed node. In cluster3 this
decision was always a bit mixed up, with some logic in gfs_controld and
some in gfs2.
We should make a clear decision now and include it in this patch.
I think gfs2_recover_func() should return GAVEUP right at the start
for any of the cases where you don't want it doing recovery. What
cases would you prefer?