From: Fabio M. Di Nitto
Date: Thu, 19 Nov 2009 19:10:54 +0100
Subject: [Cluster-devel] fencing conditions: what should trigger a fencing operation?
In-Reply-To: <20091119170404.GA23287@redhat.com>
References: <4B052D69.3010502@redhat.com> <20091119170404.GA23287@redhat.com>
Message-ID: <4B058A2E.9070600@redhat.com>
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

David Teigland wrote:
> On Thu, Nov 19, 2009 at 12:35:05PM +0100, Fabio M. Di Nitto wrote:
>
>> - what are the current fencing policies?
>
> node failure
>
>> - what can we do to improve them?
>
> node failure is a simple, black and white, fact
>
>> - should we monitor for more failures than we do now?
>
> corosync *exists* to detect node failure
>
>> It is a known issue that node1 will crash at some point (kernel OOPS).
>
> an oops is not necessarily a node failure; if you *want* it to be, then you
> set: sysctl -w kernel.panic_on_oops=1
>
> (gfs has also had its own mount options over the years to force this
> behavior, even if the sysctl isn't set properly; it's a common issue.
> It seems panic_on_oops has had inconsistent default values over various
> releases, sometimes 0, sometimes 1; setting it has historically been part
> of the cluster/gfs documentation since most customers want it to be 1.)

So a cluster can hang because our code failed, but we don't detect that it
did fail... so what determines a node failure? Only when corosync dies?

panic_on_oops is not cluster-specific, and not every oops is a panic == not a
clean solution.

Fabio
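
For reference, a minimal sketch of making the setting quoted above persistent
across reboots (this assumes the usual /etc/sysctl.conf mechanism; the exact
file location and defaults can differ per distribution):

    # /etc/sysctl.conf -- turn any kernel oops into a panic, so the node
    # really drops out of the cluster and can be fenced/recovered
    kernel.panic_on_oops = 1
    # optionally reboot automatically 10 seconds after a panic
    kernel.panic = 10

It can be applied without a reboot with "sysctl -p", or one-off with
"sysctl -w kernel.panic_on_oops=1" as in the quote above.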