From: lhh@sourceware.org <lhh@sourceware.org>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] cluster/rgmanager/src/daemons/tests test018.co ...
Date: 3 May 2007 15:16:49 -0000 [thread overview]
Message-ID: <20070503151649.11910.qmail@sourceware.org> (raw)
CVSROOT: /cvs/cluster
Module name: cluster
Changes by: lhh at sourceware.org 2007-05-03 15:16:49
Added files:
rgmanager/src/daemons/tests: test018.conf test018.start.expected
test018.stop.expected
Log message:
Add test case from RHEL4 branch
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/rgmanager/src/daemons/tests/test018.conf.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/rgmanager/src/daemons/tests/test018.start.expected.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/rgmanager/src/daemons/tests/test018.stop.expected.diff?cvsroot=cluster&r1=1.1&r2=1.2
--- cluster/rgmanager/src/daemons/tests/test018.conf 2007/05/02 22:46:27 1.1
+++ cluster/rgmanager/src/daemons/tests/test018.conf 2007/05/03 15:16:48 1.2
@@ -0,0 +1,78 @@
+<?xml version="1.0"?>
+<!--
+	While testing for #212121, I found that if you had multiple
+	instances of untyped children, where the untyped children were
+	multi-instance resources, you could end up with resource duplication
+	the second time around.
+
+ For example:
+
+ start test2
+ start initscript
+ start clusterfs - this should not happen twice
+ start clusterfs
+ start ip .1.3
+ start mount2
+ start dummy export
+ start admin group
+ start user group
+ start red
+ start script2
+ start .1.4
+ start script3
+
+	... would occur without the change to restree.c, which removes
+	the addition of newchild to the known-children list.
+
+-->
+<cluster>
+<rm>
+ <resources>
+ <service name="test1"/>
+ <service name="test2"/>
+ <script name="initscript" file="/etc/init.d/sshd"/>
+ <script name="script2" file="/etc/init.d/script2"/>
+ <script name="script3" file="/etc/init.d/script3"/>
+ <ip address="192.168.1.3" monitor_link="yes"/>
+ <ip address="192.168.1.4" monitor_link="yes"/>
+ <fs fstype="ext3" name="mount1" mountpoint="/mnt/cluster" device="/dev/sdb8"/>
+ <fs fstype="ext3" name="mount2" mountpoint="/mnt/cluster2" device="/dev/sdb9"/>
+ <nfsexport name="Dummy Export"/>
+ <nfsclient name="User group" target="@users" options="rw,sync"/>
+ <nfsclient name="Admin group" target="@admin" options="rw"/>
+ <nfsclient name="yellow" target="yellow" options="rw,no_root_squash"/>
+ <nfsclient name="magenta" target="magenta" options="rw,no_root_squash"/>
+ <nfsclient name="red" target="red" options="rw"/>
+ <clusterfs name="argle" mountpoint="/mnt/cluster3" device="/dev/sdb10"/>
+
+ </resources>
+ <service ref="test1">
+ <script ref="initscript">
+ <clusterfs ref="argle"/>
+ </script>
+ <fs ref="mount1">
+ <nfsexport ref="Dummy Export">
+ <nfsclient ref="Admin group"/>
+ <nfsclient ref="User group"/>
+ <nfsclient ref="red"/>
+ </nfsexport>
+ </fs>
+ </service>
+ <service ref="test2">
+ <script ref="initscript">
+ <clusterfs ref="argle"/>
+ <ip ref="192.168.1.3"/>
+ <fs ref="mount2">
+ <nfsexport ref="Dummy Export">
+ <nfsclient ref="Admin group"/>
+ <nfsclient ref="User group"/>
+ <nfsclient ref="red"/>
+ </nfsexport>
+ </fs>
+ <script ref="script2"/>
+ <ip ref="192.168.1.4"/>
+ </script>
+ <script ref="script3"/>
+ </service>
+</rm>
+</cluster>
--- cluster/rgmanager/src/daemons/tests/test018.start.expected 2007/05/02 22:46:27 1.1
+++ cluster/rgmanager/src/daemons/tests/test018.start.expected 2007/05/03 15:16:48 1.2
@@ -0,0 +1,24 @@
+Starting test1...
+[start] service:test1
+[start] fs:mount1
+[start] nfsexport:Dummy Export
+[start] nfsclient:Admin group
+[start] nfsclient:User group
+[start] nfsclient:red
+[start] script:initscript
+[start] clusterfs:argle
+Start of test1 complete
+Starting test2...
+[start] service:test2
+[start] script:initscript
+[start] clusterfs:argle
+[start] ip:192.168.1.3
+[start] fs:mount2
+[start] nfsexport:Dummy Export
+[start] nfsclient:Admin group
+[start] nfsclient:User group
+[start] nfsclient:red
+[start] script:script2
+[start] ip:192.168.1.4
+[start] script:script3
+Start of test2 complete
--- cluster/rgmanager/src/daemons/tests/test018.stop.expected 2007/05/02 22:46:27 1.1
+++ cluster/rgmanager/src/daemons/tests/test018.stop.expected 2007/05/03 15:16:48 1.2
@@ -0,0 +1,24 @@
+Stopping test1...
+[stop] clusterfs:argle
+[stop] script:initscript
+[stop] nfsclient:red
+[stop] nfsclient:User group
+[stop] nfsclient:Admin group
+[stop] nfsexport:Dummy Export
+[stop] fs:mount1
+[stop] service:test1
+Stop of test1 complete
+Stopping test2...
+[stop] script:script3
+[stop] ip:192.168.1.4
+[stop] script:script2
+[stop] nfsclient:red
+[stop] nfsclient:User group
+[stop] nfsclient:Admin group
+[stop] nfsexport:Dummy Export
+[stop] fs:mount2
+[stop] ip:192.168.1.3
+[stop] clusterfs:argle
+[stop] script:initscript
+[stop] service:test2
+Stop of test2 complete
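The duplication this test guards against can be sketched in miniature. The following is an illustrative model only, not rgmanager's actual restree.c logic; the function and variable names are invented. It shows how appending a newly resolved child back onto a shared known-children list makes a multi-instance resource (like clusterfs:argle, referenced by both test1 and test2) start twice the second time a tree referencing it is expanded:

```python
def expand_service(refs, known_children, buggy=True):
    """Return the start order for one service's child references.

    known_children is shared across expansions, mimicking the
    known-children list the fix stopped appending to.
    """
    order = []
    for ref in refs:
        # Start every known instance matching this reference.
        order.extend(c for c in known_children if c == ref)
        if buggy:
            # Pre-fix behavior: the new child is added back to the
            # shared list, so the next service referencing the same
            # resource sees (and starts) two copies.
            known_children.append(ref)
    return order

known = ["clusterfs:argle"]
expand_service(["clusterfs:argle"], known)  # test1: one start
expand_service(["clusterfs:argle"], known)  # test2: duplicated start
```

With buggy=False (the fixed behavior), the shared list is left alone and each expansion starts the resource exactly once, matching the expected-output files above.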