From: Lon Hohberger <lhh@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] rind-0.8.1 patch
Date: Fri, 08 Feb 2008 15:56:50 -0500 [thread overview]
Message-ID: <1202504210.6443.84.camel@ayanami.boston.devel.redhat.com> (raw)
In-Reply-To: <200802070938.48446.grimme@atix.de>
On Thu, 2008-02-07 at 09:38 +0100, Marc Grimme wrote:
> Something else I was thinking about when playing with those things:
> 1. Why are USER, CONFIG and MIGRATION events not yet being passed? It
> could be quite interesting to trigger off of those as well.
USER and CONFIG events are being passed to the event handlers in CVS;
you just can't define event triggers for them in the configuration yet.
I think what we have right now is plenty for blowing your own foot off,
but we could certainly add those.
Virtual machine requests (e.g. clusvcadm -M) aren't going out with 5.2
for central_processing.
> 2. And wouldn't it be a good idea to be able to call some kind of
> higher-level OS script?
I disagree here, sort of:
* I don't think doing lots of fork/execs while trying to determine
service placement after a failure is a great idea. We want to stay as
close to neutral as we can in that situation. A really low-impact
script interface that reorders a node list might be okay; i.e.:
node_list = external_reorder("my_script", old_node_list);
I suppose it's kind of like shuffle(), but with intelligence. That
script could then sort the node IDs by whatever criteria it wanted.
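To make the idea concrete, here's a minimal sketch of what such a
reorder script might look like. Everything here is hypothetical: it
assumes a made-up convention where each node's most recent load figure
has already been published as a one-line file named after the node ID,
under a directory given by LOAD_DIR. The script reads node IDs on
stdin and prints them back, lowest load first; nodes with no published
load sort last.

```shell
# Hypothetical reorder script body. Reads one node ID per line on
# stdin; emits the same IDs, ordered by published load (ascending).
reorder_nodes() {
    dir="${LOAD_DIR:-/var/run/cluster/load}"
    while read -r id; do
        # Missing or unreadable load file => sort this node last.
        load=$(cat "$dir/$id" 2>/dev/null || echo 99999)
        printf '%s %s\n' "$load" "$id"
    done | sort -n | awk '{ print $2 }'
}
```

Since the script only reads small files and does one sort, the cost at
decision time stays low, which is the whole point of keeping the
interface to "reorder a list" rather than "process the event".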
As for processing events in external scripts, I disagree fairly
strongly:
* The data rgmanager is currently using to make decisions (e.g.
configuration info such as failover domains, service recovery policies,
and extended stuff which you can randomly add) is difficult to access
from shell scripts.
* Internal rgmanager operations (flipping service states for example)
can't be done from outside rgmanager in a sane way.
> I thought it might then be possible to generate a more
> dynamic failoverdomain.
Agreed.
> For example, one where the lowest-loaded node gets the lowest
> priority. That can be quite nice when you have services or VMs that
> produce very high load.
There are lots of kinds of load:
* memory pressure
* cpu load
* run queue average length (the 'uptime' load)
* i/o bandwidth to shared storage
* network bandwidth
I'd recommend whatever load monitoring we care about be done
proactively. That is, have something publish current load states
periodically, and have the data 'already there' - so that in the event
of a failure, we can just act on what is known - rather than asking
around for various pieces of data.
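As an illustration of the "publish proactively, read at decision time"
approach, here is a hedged sketch of a periodic publisher. The file
layout (LOAD_DIR), the source file override (LOADAVG_SRC), and the
node-name override (NODE_NAME) are all invented for this example; the
real mechanism would presumably distribute the value cluster-wide
rather than write to a local directory.

```shell
# Hypothetical publisher, run periodically (e.g. from cron): records
# this node's 1-minute load average where a reorder script can read it.
publish_load() {
    dir="${LOAD_DIR:-/var/run/cluster/load}"
    src="${LOADAVG_SRC:-/proc/loadavg}"
    node="${NODE_NAME:-$(hostname)}"
    mkdir -p "$dir"
    # Keep only the first field (1-minute average); write to a temp
    # file and rename so readers never see a partially written value.
    awk '{ print $1 }' "$src" > "$dir/$node.tmp" &&
        mv "$dir/$node.tmp" "$dir/$node"
}
```

With something like this running every minute, the failure path never
has to ask other nodes anything; it just reads whatever was last
published.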
We're getting a little far afield, though - does what's in CVS work for
the 'follows' logic or not? :)
-- Lon