From: lhh@sourceware.org <lhh@sourceware.org>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] cluster/rgmanager/src/daemons/tests deptest1.c ...
Date: 21 Feb 2007 15:36:45 -0000
Message-ID: <20070221153645.28408.qmail@sourceware.org>
CVSROOT: /cvs/cluster
Module name: cluster
Changes by: lhh at sourceware.org 2007-02-21 15:36:45
Added files:
	rgmanager/src/daemons/tests: deptest1.conf deptest1.in

Log message:
	Add example test configuration for dtest
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/rgmanager/src/daemons/tests/deptest1.conf.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/rgmanager/src/daemons/tests/deptest1.in.diff?cvsroot=cluster&r1=NONE&r2=1.1
/cvs/cluster/cluster/rgmanager/src/daemons/tests/deptest1.conf,v --> standard output
revision 1.1
--- cluster/rgmanager/src/daemons/tests/deptest1.conf
+++ - 2007-02-21 15:36:45.834200000 +0000
@@ -0,0 +1,63 @@
+<?xml version="1.0"?>
+<!--
+ Basic "whiteboard" case:
+
+ * 4 nodes
+ * 2 restricted failover domains:
+ * A is allowed to run on {1 4}
+ * B is allowed to run on {3 4}
+ * 3 services
+ * A requires B to operate
+ * B requires C to operate
+ * A must NEVER run on the same node as B.
+
+ Setup:
+ * Start service C on node 2
+ * Start service B on node 4
+ * Start service A on node 1
+ * Nothing is running on node 3
+
+ Introduce a failure:
+ * Kill off node 1
+
+ Solution:
+ * A must be moved to the stopped state (its owner is dead)
+ * B must be stopped on node 4, and started on node 3
+ * A must be started on node 4, since that is the only legal target
+ of service A
+
+ try: ../dtest ../../resources deptest1.conf < deptest1.in
+-->
+<cluster>
+ <clusternodes>
+ <clusternode name="node1" nodeid="1"/>
+ <clusternode name="node2" nodeid="2"/>
+ <clusternode name="node3" nodeid="3"/>
+ <clusternode name="node4" nodeid="4"/>
+ </clusternodes>
+ <rm>
+ <dependencies>
+ <dependency name="service:a">
+ <target name="service:b" require="always" colocate="never"/>
+ </dependency>
+ <dependency name="service:b">
+ <target name="service:c" require="always" />
+ </dependency>
+ </dependencies>
+ <failoverdomains>
+ <failoverdomain name="nodes-14" restricted="1">
+ <failoverdomainnode name="node1" priority="1"/>
+ <failoverdomainnode name="node4" priority="1"/>
+ </failoverdomain>
+ <failoverdomain name="nodes-34" restricted="1">
+ <failoverdomainnode name="node3" priority="1"/>
+ <failoverdomainnode name="node4" priority="1"/>
+ </failoverdomain>
+ </failoverdomains>
+ <resources/>
+ <service name="a" domain="nodes-14" />
+ <service name="b" domain="nodes-34" />
+ <service name="c" />
+ </rm>
+ <fence_daemon post_fail_delay="0" post_join_delay="3"/>
+</cluster>
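
The scenario comment above claims that, once node 1 dies, the only legal
outcome is a on node 4, b on node 3, and c untouched on node 2. A quick way
to sanity-check that claim is to restate the three rules (failover-domain
membership, require="always", colocate="never") as a predicate. The C sketch
below does exactly that; its types, bitmask encoding, and hard-coded
placements are assumptions made for this note, not dtest internals:

/*
 * Illustrative only: a hypothetical restatement of the deptest1 rules,
 * not dtest's implementation.  Node sets are bitmasks where bit n set
 * means node (n+1) is a legal target.
 */
#include <stdio.h>

#define DOM_A ((1 << 0) | (1 << 3))	/* nodes-14: service a may run on {1,4} */
#define DOM_B ((1 << 2) | (1 << 3))	/* nodes-34: service b may run on {3,4} */
#define DOM_C 0xF			/* service c: unrestricted, {1,2,3,4}   */

/* Placement of each service: node number 1..4, or 0 if stopped. */
struct state { int a, b, c; };

static int legal(struct state s, int online)
{
	/* A running service must sit on an online node inside its domain. */
	if (s.a && !((1 << (s.a - 1)) & DOM_A & online)) return 0;
	if (s.b && !((1 << (s.b - 1)) & DOM_B & online)) return 0;
	if (s.c && !((1 << (s.c - 1)) & DOM_C & online)) return 0;
	/* require="always": a needs b running, b needs c running. */
	if (s.a && !s.b) return 0;
	if (s.b && !s.c) return 0;
	/* colocate="never": a and b may not share a node. */
	if (s.a && s.a == s.b) return 0;
	return 1;
}

int main(void)
{
	int online = 0xF & ~(1 << 0);        /* node 1 killed: {2,3,4} up */
	struct state setup    = { 1, 4, 2 }; /* a@1 b@4 c@2, per Setup    */
	struct state solution = { 4, 3, 2 }; /* a@4 b@3 c@2, per Solution */

	printf("setup after failure: %s\n",
	       legal(setup, online) ? "legal" : "illegal");
	printf("proposed solution:   %s\n",
	       legal(solution, online) ? "legal" : "illegal");
	return 0;
}

The setup fails the predicate after the failure only because a's owner left
the cluster; from there, a's restricted domain forces it onto node 4, and the
colocation rule in turn forces b off node 4 onto node 3, which is exactly the
solution the comment describes.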
/cvs/cluster/cluster/rgmanager/src/daemons/tests/deptest1.in,v --> standard output
revision 1.1
--- cluster/rgmanager/src/daemons/tests/deptest1.in
+++ - 2007-02-21 15:36:45.929635000 +0000
@@ -0,0 +1,11 @@
+dep
+online 1 2 3 4
+start a 1
+start c 2
+start b 4
+state
+online 2 3 4
+check
+calc
+apply
+state
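
Read against the comment in deptest1.conf, the stdin script replays the
whiteboard scenario step by step (command semantics inferred from context,
not from any dtest documentation):

 * dep presumably selects dependency-evaluation mode
 * online 1 2 3 4 brings all four nodes up
 * the three start lines reproduce the setup placements: a on node 1,
   c on node 2, b on node 4
 * state dumps the current placements
 * online 2 3 4 is the injected failure: node 1 drops out
 * check should flag the violated constraints, calc should compute the
   corrective transition, and apply should commit it
 * the final state should therefore show the solution above: a on node 4,
   b on node 3, c still on node 2

To reproduce, use the invocation from the configuration's own comment, run
from the tests directory: ../dtest ../../resources deptest1.conf < deptest1.in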