cluster-devel.redhat.com archive mirror
From: Fabio M. Di Nitto <fdinitto@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] Re: [Debian-ha-maintainers] again: "redhat-cluster: services are not relocated when a node fails"
Date: Thu, 19 Nov 2009 23:01:01 +0100	[thread overview]
Message-ID: <4B05C01D.8040603@redhat.com> (raw)
In-Reply-To: <20091119124739.GA30480@bogon.sigxcpu.org>

Guido Günther wrote:
> Hi Ernesto,
> On Wed, Nov 18, 2009 at 02:30:57PM +0100, Ernesto Rodriguez Reina wrote:
>> Hi everyone!
>>
>> I recently started using RHCS for a project I'm working on, but I found
>> that RHCS2 in Debian Lenny does not relocate services when a node fails.
>> I found the thread [1] where Guido Günther says that this problem was
>> solved in RHCS 3.0.2. I then downloaded and installed RHCS 3.0.4 (the
>> deb packages from the Debian mirror) and reproduced Martin Waite's
>> experiment, and again the service was not relocated on node failure.
>> Has anyone made this work as it should on Debian? Martin, Guido, or
>> anybody else: can you please help me find out why it is not working as
>> it should?

> I checked with RHCS 3.0.4 as it's currently in unstable, rebuilt for
> Lenny. The kernel enters a soft lockup after I shut off one node (see
> attached log) and no resource takeover happens. Fabione, any idea what
> triggers this?

Since you guys are running cluster 3.0.4, please do the following:

1) add <logging debug="on"/> to cluster.conf:

<cluster ...>
 <logging debug="on"/>
 ...
</cluster>

2) reproduce the above scenario, then collect all the logs, from all
daemons, from all nodes, from /var/log/cluster (this is the upstream
default; please check whether Debian has changed it). A rough per-node
helper sketch follows below.
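
Something like the following per-node sketch could help gather the logs
(untested, written just for this thread; the script and the tarball
naming are my own invention; adjust LOG_DIR if Debian relocated the
logs). Run it on each node and send me the resulting tarballs:

#!/usr/bin/env python
# Untested sketch: run on each cluster node. Bundles /var/log/cluster
# into a gzipped tarball named after the host, e.g.
# cluster-logs-node1.tar.gz, so the logs from all nodes can be
# collected and compared side by side.
import socket
import tarfile

LOG_DIR = "/var/log/cluster"  # upstream default; may differ on Debian

def collect_logs():
    host = socket.gethostname()
    out = "cluster-logs-%s.tar.gz" % host
    tar = tarfile.open(out, "w:gz")
    try:
        # keep each node's logs under a distinct directory in the archive
        tar.add(LOG_DIR, arcname="cluster-%s" % host)
    finally:
        tar.close()
    print("wrote %s" % out)

if __name__ == "__main__":
    collect_logs()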

Then I'd like to see your cluster.conf and get a better idea of how a
node is "killed". If cluster.conf contains sensitive data such as
passwords, either blank them out or send the file to me only. I'll keep
it confidential, but please do NOT randomly mangle the configuration to
hide bits.

The recovery operation depends strictly on several different factors;
the configuration and the logs should be able to tell us something.

Thanks
Fabio


