From: Steven Whitehouse <swhiteho@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] [PATCH 5/5] gfs2: dlm based recovery coordination
Date: Thu, 05 Jan 2012 15:40:09 +0000 [thread overview]
Message-ID: <1325778009.2690.42.camel@menhir> (raw)
In-Reply-To: <20120105152105.GA22610@redhat.com>
Hi,
On Thu, 2012-01-05 at 10:21 -0500, David Teigland wrote:
> On Thu, Jan 05, 2012 at 10:08:15AM -0500, Bob Peterson wrote:
> > ----- Original Message -----
> > | This new method of managing recovery is an alternative to
> > | the previous approach of using the userland gfs_controld.
> > |
> > | - use dlm slot numbers to assign journal id's
> > | - use dlm recovery callbacks to initiate journal recovery
> > | - use a dlm lock to determine the first node to mount fs
> > | - use a dlm lock to track journals that need recovery
> > |
> > | Signed-off-by: David Teigland <teigland@redhat.com>
> > | ---
> > | --- a/fs/gfs2/lock_dlm.c
> > | +++ b/fs/gfs2/lock_dlm.c
> > (snip)
> > | +#include <linux/gfs2_ondisk.h>
> > | #include <linux/gfs2_ondisk.h>
> >
> > Hi,
> >
> > Dave, are you going to post a replacement patch or addendum patch
> > that addresses Steve's concerns, such as the above?
> > I'd like to review this, but I want the review the latest/greatest.
>
> I haven't resent the patches after making the changes (which were fairly
> minor.) I'll resend them shortly for another check before a pull request.
>
> Dave
>
I think it would be a good plan not to send this last patch for the
current merge window, and to let it settle for a bit longer. Running the
timing this fine makes me nervous, bearing in mind the number of
changes and that three issues have been caught in the last few days.
Let's try to resolve the remaining points, and then we can have something
really solid ready for the next window. I don't think there is any
particular rush to get it in at the moment.

I know it's taken a bit longer than ideal to get through the review,
but we've had a major holiday in the way which hasn't helped,

Steve.
Thread overview: 24+ messages
2011-12-16 22:03 [Cluster-devel] [PATCH 5/5] gfs2: dlm based recovery coordination David Teigland
2011-12-19 13:07 ` Steven Whitehouse
2011-12-19 17:47 ` David Teigland
2011-12-20 10:39 ` Steven Whitehouse
2011-12-20 19:16 ` David Teigland
2011-12-20 21:04 ` David Teigland
2011-12-21 10:45 ` Steven Whitehouse
2011-12-21 15:40 ` David Teigland
2011-12-22 21:23 ` David Teigland
2011-12-23 9:19 ` Steven Whitehouse
2011-12-19 15:17 ` Steven Whitehouse
2012-01-05 15:08 ` Bob Peterson
2012-01-05 15:21 ` David Teigland
2012-01-05 15:40 ` Steven Whitehouse [this message]
2012-01-05 16:16 ` David Teigland
2012-01-05 16:45 ` Bob Peterson
-- strict thread matches above, loose matches on Subject: below --
2012-01-05 16:46 David Teigland
2012-01-05 16:58 ` Steven Whitehouse
2012-01-05 17:13 ` David Teigland
2012-01-09 16:36 ` Steven Whitehouse
2012-01-09 16:46 ` David Teigland
2012-01-09 17:00 ` David Teigland
2012-01-09 17:04 ` Steven Whitehouse
2012-01-09 17:02 ` Steven Whitehouse