cluster-devel.redhat.com archive mirror
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-08-09 21:13 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-08-09 21:13:21

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	Version Bump

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.27&r2=1.28
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.12&r2=1.13

--- conga/clustermon.spec.in.in	2006/08/09 20:53:21	1.1
+++ conga/clustermon.spec.in.in	2006/08/09 21:13:21	1.2
@@ -216,29 +216,8 @@
 ###  changelog ###
 
 
+
 %changelog
-* Thu Aug 03 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-10
-- Luci: fix login issues, add cluster resources, styling... 
-* Wed Jul 26 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-9
-- Update Luci to Plone 2.5
-* Tue Jul 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-8
-- New build with a lot of implementation details on Luci
-- Last build with plone 2.1.2
-* Thu Jul 06 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-7
-- More compliant specfile, minor fixes
-* Tue Jun 27 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-6
-- Luci persists users/clusters/systems/permissions across upgrades 
-* Fri Jun 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-4
-- Moved storage, service, log and rpm modules into main ricci.rpm
-* Wed Jun 14 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-1
-- Packaged cluster-snmp (cluster snmp agent)
-- Packaged cluster-cim (cluster CIM provider)
-* Mon Jun 06 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.7-5
-- Disable non-https access to Luci, enable https on port 8084
-* Mon Jun 02 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.7-1
-- Packaged Luci - ricci's www frontend
-- Added logging module
-* Mon May 26 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.6-1
-- Multitude of fixes and new features
-* Mon Apr 10 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.5-1
-- First official build of conga project
+* Wed Aug 09 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11
+- Rebirth: separate clustermon.srpm (modcluster, cluster-snmp and 
+   cluster-cim) from conga.srpm
--- conga/conga.spec.in.in	2006/08/09 20:53:21	1.27
+++ conga/conga.spec.in.in	2006/08/09 21:13:21	1.28
@@ -315,6 +315,10 @@
 
 
 %changelog
+* Wed Aug 09 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11
+- Separate clustermon.srpm (modcluster, cluster-snmp and 
+   cluster-cim) from conga.srpm
+- Luci: tighten down security
 * Thu Aug 03 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-10
 - Luci: fix login issues, add cluster resources, styling... 
 * Wed Jul 26 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-9
--- conga/make/version.in	2006/08/03 20:54:04	1.12
+++ conga/make/version.in	2006/08/09 21:13:21	1.13
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=10
+RELEASE=11



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-08-15  4:15 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-08-15 04:15:54

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	ricci/common   : Module.cpp 
	ricci/include  : Module.h shred_allocator.h 
	ricci/modules/cluster: Makefile main.cpp 
	ricci/modules/log: Makefile main.cpp 
	ricci/modules/rpm: Makefile main.cpp 
	ricci/modules/service: Makefile main.cpp 
	ricci/modules/storage: Makefile main.cpp 
Removed files:
	ricci/modules/cluster: modcluster 
	ricci/modules/log: ricci-modlog 
	ricci/modules/rpm: ricci-modrpm 
	ricci/modules/service: ricci-modservice 
	ricci/modules/storage: ricci-modstorage 

Log message:
	- remove stray defines in spec files
	- change buildroot
	- ricci: remove stderr redirection scripts (.exe files) - reimplement in code
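
	The change replaces shell-level stderr redirection with the dup/dup2
	idiom done in-process. A condensed sketch of the technique (the
	function name is illustrative, not from the patch):

	    #include <fcntl.h>
	    #include <unistd.h>
	    #include <cstdio>
	    #include <cstdlib>

	    // Point fd 2 at /dev/null so module chatter is discarded, keeping a
	    // duplicate of the original stderr so it can be restored later.
	    static int
	    silence_stderr()
	    {
	      int saved = dup(2);                     // keep stderr for a later restore
	      int devnull = open("/dev/null", O_RDWR);
	      if (devnull == -1) {
	        perror("silence_stderr(): can't open /dev/null");
	        exit(1);
	      }
	      dup2(devnull, 2);                       // fd 2 now discards writes
	      close(devnull);                         // fd 2 keeps the duplicate open
	      return saved;                           // dup2(saved, 2) restores stderr
	    }

	In the committed driver below, a new "-e" getopt flag keeps stderr
	attached for debugging; without it, stderr is redirected much as the
	removed wrapper scripts used to do.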

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.2&r2=1.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.28&r2=1.29
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/common/Module.cpp.diff?cvsroot=cluster&r1=1.4&r2=1.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/include/Module.h.diff?cvsroot=cluster&r1=1.2&r2=1.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/include/shred_allocator.h.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/Makefile.diff?cvsroot=cluster&r1=1.12&r2=1.13
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/main.cpp.diff?cvsroot=cluster&r1=1.3&r2=1.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/modcluster.diff?cvsroot=cluster&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/log/Makefile.diff?cvsroot=cluster&r1=1.4&r2=1.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/log/main.cpp.diff?cvsroot=cluster&r1=1.2&r2=1.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/log/ricci-modlog.diff?cvsroot=cluster&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/rpm/Makefile.diff?cvsroot=cluster&r1=1.5&r2=1.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/rpm/main.cpp.diff?cvsroot=cluster&r1=1.3&r2=1.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/rpm/ricci-modrpm.diff?cvsroot=cluster&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/Makefile.diff?cvsroot=cluster&r1=1.6&r2=1.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/main.cpp.diff?cvsroot=cluster&r1=1.3&r2=1.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/ricci-modservice.diff?cvsroot=cluster&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/Makefile.diff?cvsroot=cluster&r1=1.8&r2=1.9
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/main.cpp.diff?cvsroot=cluster&r1=1.3&r2=1.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/ricci-modstorage.diff?cvsroot=cluster&r1=1.2&r2=NONE

--- conga/clustermon.spec.in.in	2006/08/09 21:13:21	1.2
+++ conga/clustermon.spec.in.in	2006/08/15 04:15:51	1.3
@@ -10,9 +10,6 @@
 ###############################################################################
 ###############################################################################
 
-%define vers @@VERS@@
-%define rel  @@REL@@%{?dist}
-
 
 %define PEGASUS_PROVIDERS_DIR %{_libdir}/Pegasus/providers
 
@@ -21,8 +18,8 @@
 
 
 Name: clustermon
-Version: %vers
-Release: %rel
+Version: @@VERS@@
+Release: @@REL@@%{?dist}
 License: GPL
 URL: http://sources.redhat.com/cluster/conga
 
@@ -30,8 +27,7 @@
 Summary: cluster snmp agent, cim provider and ricci module - source code
 
 Source0: %{name}-%{version}.tar.gz
-Buildroot: %{_tmppath}/%{name}-%{version}-buildroot
-
+Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
 BuildRequires: glibc-devel gcc-c++ libxml2-devel make
 BuildRequires: openssl-devel dbus-devel pam-devel
@@ -92,7 +88,6 @@
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/modcluster.systembus.conf
 			%{_sysconfdir}/rc.d/init.d/modclusterd
 			%{_sbindir}/modcluster
-			%{_sbindir}/modcluster.exe
 			%{_sbindir}/modclusterd
 			%{_docdir}/modcluster-%{version}/
 
--- conga/conga.spec.in.in	2006/08/09 21:13:21	1.28
+++ conga/conga.spec.in.in	2006/08/15 04:15:51	1.29
@@ -10,8 +10,6 @@
 ###############################################################################
 ###############################################################################
 
-%define vers @@VERS@@
-%define rel  @@REL@@%{?dist}
 
 %define include_zope_and_plone     @@INCLUDE_ZOPE_AND_PLONE@@
 %define zope_archive               @@ZOPE_ARCHIVE@@
@@ -30,8 +28,8 @@
 
 
 Name: conga
-Version: %vers
-Release: %rel
+Version: @@VERS@@
+Release: @@REL@@%{?dist}
 License: GPL
 URL: http://sources.redhat.com/cluster/conga
 
@@ -44,7 +42,7 @@
 Source2: %{plone_archive_file}
 Patch2:  Plone-2.5_CMFPlone.patch
 %endif
-Buildroot: %{_tmppath}/%{name}-%{version}-buildroot
+Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
 
 %if "%{include_zope_and_plone}" == yes
@@ -247,22 +245,18 @@
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modrpm.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modrpm.systembus.conf
 			%{_sbindir}/ricci-modrpm
-			%{_sbindir}/ricci-modrpm.exe
 # modstorage
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modstorage.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modstorage.systembus.conf
 			%{_sbindir}/ricci-modstorage
-			%{_sbindir}/ricci-modstorage.exe
 # modservice
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modservice.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modservice.systembus.conf
 			%{_sbindir}/ricci-modservice
-			%{_sbindir}/ricci-modservice.exe
 # modlog
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modlog.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modlog.systembus.conf
 			%{_sbindir}/ricci-modlog
-			%{_sbindir}/ricci-modlog.exe
 
 %pre -n ricci
 /usr/sbin/groupadd -r -f ricci >/dev/null 2>&1
--- conga/ricci/common/Module.cpp	2006/08/10 22:53:07	1.4
+++ conga/ricci/common/Module.cpp	2006/08/15 04:15:52	1.5
@@ -44,8 +44,8 @@
 static VarMap list_APIs(const VarMap& args);
 static VarMap extract_vars(const XMLObject& xml);
 static void insert_vars(const VarMap& vars, XMLObject& xml);
-  
-static String clean_string(const String& msg);
+
+
 
 
 
@@ -116,7 +116,7 @@
     } catch ( String e ) {
       func_resp_xml.add_child(Variable("success", false).xml());
       func_resp_xml.add_child(Variable("error_code", Except::generic_error).xml());
-      func_resp_xml.add_child(Variable("error_description", clean_string(e)).xml());
+      func_resp_xml.add_child(Variable("error_description", e).xml());
     } catch ( APIerror e ) {
       throw;
     } catch ( ... ) {
@@ -179,15 +179,6 @@
 }
 
 
-String 
-clean_string(const String& msg)
-{
-  String ret(msg);
-  String::size_type i;
-  while ((i = ret.find_first_of("\"<>")) != ret.npos)
-    ret[i] = ' ';
-  return ret;
-}
 
 
 
@@ -200,16 +191,67 @@
 
 
 
+//  ################    ModuleDriver    ######################
+
 
 
 
-//  ################    ModuleDriver    ######################
+#include <fcntl.h>
 
+static void
+close_fd(int fd);
 
+static int
+__stdin_out_module_driver(Module& module);
 
 
 int
-stdin_out_module_driver(Module& module)
+stdin_out_module_driver(Module& module,
+			int argc,
+			char** argv)
+{
+  bool display_err = false;
+  int rv;
+  while ((rv = getopt(argc, argv, "e")) != EOF)
+    switch (rv) {
+    case 'e':
+      display_err = true;
+      break;
+    default:
+      break;
+    }
+  
+  int old_err;
+  if (!display_err) {
+    // redirect stderr to /dev/null
+    old_err = dup(2);
+    int devnull = open("/dev/null", O_RDWR);
+    if (devnull == -1) {
+      perror("stdin_out_module_driver(): Can't open /dev/null");
+      exit(1);
+    }
+    dup2(devnull, 2);
+    close_fd(devnull);
+  }
+  
+  try {
+    
+    return __stdin_out_module_driver(module);
+    
+  } catch ( ... ) {
+    if (!display_err) {
+      // restore stderr
+      dup2(old_err, 2);
+      close_fd(old_err);
+    }
+    throw;
+  }
+}  
+
+
+
+int
+__stdin_out_module_driver(Module& module)
 {
   unsigned int time_beg = time_mil();
   String data;
@@ -225,8 +267,7 @@
     if (ret == 0) {
       // continue waiting
       continue;
-    }
-    else if (ret == -1) {
+    } else if (ret == -1) {
       if (errno == EINTR)
         continue;
       else
@@ -244,11 +285,14 @@
       }
       try {
 	data.append(buff, ret);
+	shred(buff, sizeof(buff));
 	XMLObject request = parseXML(data);
 	XMLObject response = module.process(request);
 	cout << generateXML(response) << endl;
 	return 0;
-      } catch ( ... ) {}
+      } catch ( ... ) {
+	shred(buff, sizeof(buff));
+      }
       continue;
     }
     if (poll_data.revents & (POLLERR | POLLHUP | POLLNVAL))
@@ -260,3 +304,15 @@
   
   throw String("invalid input");
 }
+
+
+
+
+void
+close_fd(int fd)
+{
+  int e;
+  do {
+    e = close(fd);
+  } while (e && (errno == EINTR));
+}
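
A note on the EINTR loop in close_fd() above: POSIX leaves the
descriptor state unspecified when close() fails with EINTR, and on
Linux the descriptor is already released at that point, so retrying
can close an unrelated descriptor that was handed out in the meantime.
A stricter single-shot variant (a sketch, not part of the patch):

    #include <cerrno>
    #include <unistd.h>

    // Close once; treat EINTR as "already closed" rather than retrying.
    static void
    close_fd_once(int fd)
    {
      if (close(fd) == -1 && errno != EINTR) {
        // genuine failure (EBADF, EIO, ...) - report it if it matters
      }
    }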
--- conga/ricci/include/Module.h	2006/08/10 22:53:07	1.2
+++ conga/ricci/include/Module.h	2006/08/15 04:15:53	1.3
@@ -57,7 +57,10 @@
 };  // class Module
 
 
-int stdin_out_module_driver(Module& module);
+int 
+stdin_out_module_driver(Module& module,
+			int argc,
+			char** argv);
 
 
 #endif  // Module_h
--- conga/ricci/include/shred_allocator.h	2006/08/10 22:53:07	1.1
+++ conga/ricci/include/shred_allocator.h	2006/08/15 04:15:53	1.2
@@ -41,13 +41,11 @@
 shred(_Tp* __p, size_t n)
 {
   size_t size = sizeof(_Tp) / sizeof(char) * n; 
-  if (size && __p) {
-    // shred memory
+  if (size && __p)
     for (char *ptr = (char*) __p; 
 	 ptr < ((char*) __p) + size;
 	 ptr++)
       *ptr = 'o';
-  }
 }
 
 
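The shred() template above exists to scrub sensitive input - the driver
shreds its stdin buffer as soon as the data has been copied - so secrets
do not linger in memory. A self-contained sketch of the same idea, with
the usual caveat spelled out (shred_sketch is an illustrative name):

    #include <cstddef>

    // Overwrite a buffer with a fixed byte before it is released. An
    // optimizing compiler may elide a plain store loop like this when the
    // buffer is provably dead afterwards; hardened variants use volatile
    // writes, memset_s, or explicit_bzero instead.
    template <typename T>
    void
    shred_sketch(T* p, std::size_t n)
    {
      if (p == 0 || n == 0)
        return;
      char* c = reinterpret_cast<char*>(p);
      for (std::size_t i = 0; i < sizeof(T) * n; ++i)
        c[i] = 'o';               // same fill byte the real shred() uses
    }

Usage mirrors the driver: given char buff[4096], call
shred_sketch(buff, sizeof(buff)) once the contents have been consumed.
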
--- conga/ricci/modules/cluster/Makefile	2006/08/09 20:53:22	1.12
+++ conga/ricci/modules/cluster/Makefile	2006/08/15 04:15:53	1.13
@@ -14,7 +14,7 @@
 include ${top_srcdir}/make/defines.mk
 
 
-TARGET = modcluster.exe
+TARGET = modcluster
 
 OBJECTS = main.o \
 	ClusterModule.o \
@@ -38,7 +38,6 @@
 install: 
 	$(INSTALL_DIR)  ${sbindir}
 	$(INSTALL_BIN)  ${TARGET} ${sbindir}
-	$(INSTALL_BIN)  modcluster ${sbindir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/modcluster.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/cluster/main.cpp	2006/08/10 22:53:08	1.3
+++ conga/ricci/modules/cluster/main.cpp	2006/08/15 04:15:53	1.4
@@ -31,7 +31,9 @@
 {
   try {
     ClusterModule m;
-    return stdin_out_module_driver(m);
+    return stdin_out_module_driver(m,
+				   argc,
+				   argv);
   } catch (String e) {
     cerr << e << endl;
     return 1;
--- conga/ricci/modules/log/Makefile	2006/06/30 22:26:13	1.4
+++ conga/ricci/modules/log/Makefile	2006/08/15 04:15:53	1.5
@@ -14,7 +14,7 @@
 include ${top_srcdir}/make/defines.mk
 
 
-TARGET = ricci-modlog.exe
+TARGET = ricci-modlog
 
 OBJECTS = main.o \
 	LoggingModule.o \
@@ -33,7 +33,6 @@
 install: 
 	$(INSTALL_DIR)  ${sbindir}
 	$(INSTALL_BIN)  ${TARGET} ${sbindir}
-	$(INSTALL_BIN)  ricci-modlog ${sbindir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/ricci-modlog.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/log/main.cpp	2006/08/10 22:53:08	1.2
+++ conga/ricci/modules/log/main.cpp	2006/08/15 04:15:53	1.3
@@ -31,7 +31,9 @@
 {
   try {
     LoggingModule m;
-    return stdin_out_module_driver(m);
+    return stdin_out_module_driver(m,
+				   argc,
+				   argv);
   } catch (String e) {
     cerr << e << endl;
     return 1;
--- conga/ricci/modules/rpm/Makefile	2006/06/30 22:26:13	1.5
+++ conga/ricci/modules/rpm/Makefile	2006/08/15 04:15:53	1.6
@@ -14,7 +14,7 @@
 include ${top_srcdir}/make/defines.mk
 
 
-TARGET = ricci-modrpm.exe
+TARGET = ricci-modrpm
 
 OBJECTS = main.o \
 	RpmModule.o \
@@ -34,7 +34,6 @@
 install: 
 	$(INSTALL_DIR)  ${sbindir}
 	$(INSTALL_BIN)  ${TARGET} ${sbindir}
-	$(INSTALL_BIN)  ricci-modrpm ${sbindir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/ricci-modrpm.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/rpm/main.cpp	2006/08/10 22:53:08	1.3
+++ conga/ricci/modules/rpm/main.cpp	2006/08/15 04:15:53	1.4
@@ -31,7 +31,9 @@
 {
   try {
     RpmModule m;
-    return stdin_out_module_driver(m);
+    return stdin_out_module_driver(m,
+				   argc,
+				   argv);
   } catch (String e) {
     cerr << e << endl;
     return 1;
--- conga/ricci/modules/service/Makefile	2006/06/30 22:26:13	1.6
+++ conga/ricci/modules/service/Makefile	2006/08/15 04:15:53	1.7
@@ -14,7 +14,7 @@
 include ${top_srcdir}/make/defines.mk
 
 
-TARGET = ricci-modservice.exe
+TARGET = ricci-modservice
 
 OBJECTS = main.o \
 	ServiceManager.o \
@@ -33,7 +33,6 @@
 install: 
 	$(INSTALL_DIR)  ${sbindir}
 	$(INSTALL_BIN)  ${TARGET} ${sbindir}
-	$(INSTALL_BIN)  ricci-modservice ${sbindir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/ricci-modservice.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/service/main.cpp	2006/08/10 22:53:09	1.3
+++ conga/ricci/modules/service/main.cpp	2006/08/15 04:15:53	1.4
@@ -31,7 +31,9 @@
 {
   try {
     ServiceModule m;
-    return stdin_out_module_driver(m);
+    return stdin_out_module_driver(m,
+				   argc,
+				   argv);
   } catch (String e) {
     cerr << e << endl;
     return 1;
--- conga/ricci/modules/storage/Makefile	2006/06/30 22:26:13	1.8
+++ conga/ricci/modules/storage/Makefile	2006/08/15 04:15:54	1.9
@@ -14,7 +14,7 @@
 include ${top_srcdir}/make/defines.mk
 
 
-TARGET = ricci-modstorage.exe
+TARGET = ricci-modstorage
 
 OBJECTS = main.o \
 	Props.o \
@@ -62,7 +62,6 @@
 install: 
 	$(INSTALL_DIR)  ${sbindir}
 	$(INSTALL_BIN)  ${TARGET} ${sbindir}
-	$(INSTALL_BIN)  ricci-modstorage ${sbindir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/ricci-modstorage.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/storage/main.cpp	2006/08/10 22:53:09	1.3
+++ conga/ricci/modules/storage/main.cpp	2006/08/15 04:15:54	1.4
@@ -31,7 +31,9 @@
 {
   try {
     StorageModule m;
-    return stdin_out_module_driver(m);
+    return stdin_out_module_driver(m,
+				   argc,
+				   argv);
   } catch (String e) {
     cerr << e << endl;
     return 1;



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-08-16  6:34 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-08-16 06:34:20

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	ricci/modules/cluster: Makefile 
	ricci/modules/cluster/d-bus: modcluster.oddjob.conf 
	ricci/modules/log: Makefile 
	ricci/modules/log/d-bus: ricci-modlog.oddjob.conf 
	ricci/modules/rpm: Makefile 
	ricci/modules/rpm/d-bus: ricci-modrpm.oddjob.conf 
	ricci/modules/service: Makefile 
	ricci/modules/service/d-bus: ricci-modservice.oddjob.conf 
	ricci/modules/storage: Makefile 
	ricci/modules/storage/d-bus: ricci-modstorage.oddjob.conf 
	ricci/ricci    : Makefile ricci_defines.h 

Log message:
	Move internal binaries (the ones not meant to be run directly by the user)
	to /usr/libexec, to comply with packaging guidelines
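
	The C++ side of this move shows up in ricci_defines.h below: helper
	locations are compile-time constants, so they must track whatever
	directory the Makefiles install into. One way to keep the two in sync
	is to derive every path from a single prefix macro the build can
	override (RICCI_LIBEXEC_DIR is an illustrative name, not from the
	patch):

	    // Hypothetical layout for a shared defines header; a -D flag on the
	    // compiler command line would let the Makefile choose the prefix.
	    #ifndef RICCI_LIBEXEC_DIR
	    #define RICCI_LIBEXEC_DIR  "/usr/libexec/ricci"
	    #endif

	    #define AUTH_HELPER_PATH   RICCI_LIBEXEC_DIR "/ricci-auth"
	    #define RICCI_WORKER_PATH  RICCI_LIBEXEC_DIR "/ricci-worker"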

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.6&r2=1.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.30&r2=1.31
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/Makefile.diff?cvsroot=cluster&r1=1.13&r2=1.14
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/d-bus/modcluster.oddjob.conf.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/log/Makefile.diff?cvsroot=cluster&r1=1.5&r2=1.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/log/d-bus/ricci-modlog.oddjob.conf.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/rpm/Makefile.diff?cvsroot=cluster&r1=1.6&r2=1.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/rpm/d-bus/ricci-modrpm.oddjob.conf.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/Makefile.diff?cvsroot=cluster&r1=1.7&r2=1.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/d-bus/ricci-modservice.oddjob.conf.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/Makefile.diff?cvsroot=cluster&r1=1.9&r2=1.10
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/d-bus/ricci-modstorage.oddjob.conf.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/ricci/Makefile.diff?cvsroot=cluster&r1=1.14&r2=1.15
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/ricci/ricci_defines.h.diff?cvsroot=cluster&r1=1.7&r2=1.8

--- conga/clustermon.spec.in.in	2006/08/16 03:02:26	1.6
+++ conga/clustermon.spec.in.in	2006/08/16 06:34:19	1.7
@@ -88,7 +88,7 @@
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/modcluster.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/modcluster.systembus.conf
 			%{_sysconfdir}/rc.d/init.d/modclusterd
-			%{_sbindir}/modcluster
+			%{_libexecdir}/modcluster
 			%{_sbindir}/modclusterd
 			%{_docdir}/modcluster-%{version}/
 
@@ -206,6 +206,9 @@
 
 
 %changelog
+* Wed Aug 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11.7
+- Move modcluster from /usr/sbin to /usr/libexec
+
 * Tue Aug 15 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11.6
 - Implement support for Cluster Suite 5
 
--- conga/conga.spec.in.in	2006/08/16 03:08:43	1.30
+++ conga/conga.spec.in.in	2006/08/16 06:34:19	1.31
@@ -233,25 +233,24 @@
 			%{_sysconfdir}/rc.d/init.d/ricci
 %attr(-,ricci,ricci)	%{_localstatedir}/lib/ricci
 			%{_sbindir}/ricci
-%attr(4755,root,root)	%{_sbindir}/ricci-auth
-			%{_sbindir}/ricci-worker
+%attr(-,root,ricci)	%{_libexecdir}/ricci/
 			%{_docdir}/ricci-%{version}/
 # modrpm
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modrpm.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modrpm.systembus.conf
-			%{_sbindir}/ricci-modrpm
+			%{_libexecdir}/ricci-modrpm
 # modstorage
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modstorage.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modstorage.systembus.conf
-			%{_sbindir}/ricci-modstorage
+			%{_libexecdir}/ricci-modstorage
 # modservice
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modservice.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modservice.systembus.conf
-			%{_sbindir}/ricci-modservice
+			%{_libexecdir}/ricci-modservice
 # modlog
 %config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modlog.oddjob.conf
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modlog.systembus.conf
-			%{_sbindir}/ricci-modlog
+			%{_libexecdir}/ricci-modlog
 
 %pre -n ricci
 if [ "B`/bin/grep ricci\:x /etc/group`" = "B" ]; then
@@ -297,6 +296,10 @@
 
 
 %changelog
+* Wed Aug 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11.7
+- Move ricci-modrpm, ricci-modlog, ricci-modstorage, ricci-modservice
+   from /usr/sbin to /usr/libexec
+
 * Wed Aug 09 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11
 - Spin off clustermon.srpm (modcluster, cluster-snmp and 
    cluster-cim) from conga.srpm
--- conga/ricci/modules/cluster/Makefile	2006/08/15 04:15:53	1.13
+++ conga/ricci/modules/cluster/Makefile	2006/08/16 06:34:19	1.14
@@ -36,8 +36,8 @@
 
 
 install: 
-	$(INSTALL_DIR)  ${sbindir}
-	$(INSTALL_BIN)  ${TARGET} ${sbindir}
+	$(INSTALL_DIR)  ${libexecdir}
+	$(INSTALL_BIN)  ${TARGET} ${libexecdir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/modcluster.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/cluster/d-bus/modcluster.oddjob.conf	2006/08/09 20:53:22	1.1
+++ conga/ricci/modules/cluster/d-bus/modcluster.oddjob.conf	2006/08/16 06:34:19	1.2
@@ -4,7 +4,7 @@
 	<object name="/com/redhat/ricci">
 		<interface name="com.redhat.ricci">
 			<method name="modcluster_rw">
-				<helper exec="/usr/sbin/modcluster"
+				<helper exec="/usr/libexec/modcluster"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
@@ -12,7 +12,7 @@
 				<allow user="root"/>
 			</method>
 			<method name="modcluster_ro">
-				<helper exec="/usr/sbin/modcluster_ro"
+				<helper exec="/usr/libexec/modcluster_ro"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
--- conga/ricci/modules/log/Makefile	2006/08/15 04:15:53	1.5
+++ conga/ricci/modules/log/Makefile	2006/08/16 06:34:19	1.6
@@ -31,8 +31,8 @@
 
 
 install: 
-	$(INSTALL_DIR)  ${sbindir}
-	$(INSTALL_BIN)  ${TARGET} ${sbindir}
+	$(INSTALL_DIR)  ${libexecdir}
+	$(INSTALL_BIN)  ${TARGET} ${libexecdir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/ricci-modlog.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/log/d-bus/ricci-modlog.oddjob.conf	2006/06/15 03:08:36	1.1
+++ conga/ricci/modules/log/d-bus/ricci-modlog.oddjob.conf	2006/08/16 06:34:20	1.2
@@ -4,7 +4,7 @@
 	<object name="/com/redhat/ricci">
 		<interface name="com.redhat.ricci">
 			<method name="modlog_rw">
-				<helper exec="/usr/sbin/ricci-modlog"
+				<helper exec="/usr/libexec/ricci-modlog"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
@@ -12,7 +12,7 @@
 				<allow user="root"/>
 			</method>
 			<method name="modlog_ro">
-				<helper exec="/usr/sbin/ricci-modlog_ro"
+				<helper exec="/usr/libexec/ricci-modlog_ro"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
--- conga/ricci/modules/rpm/Makefile	2006/08/15 04:15:53	1.6
+++ conga/ricci/modules/rpm/Makefile	2006/08/16 06:34:20	1.7
@@ -32,8 +32,8 @@
 
 
 install: 
-	$(INSTALL_DIR)  ${sbindir}
-	$(INSTALL_BIN)  ${TARGET} ${sbindir}
+	$(INSTALL_DIR)  ${libexecdir}
+	$(INSTALL_BIN)  ${TARGET} ${libexecdir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/ricci-modrpm.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/rpm/d-bus/ricci-modrpm.oddjob.conf	2006/06/15 03:08:36	1.1
+++ conga/ricci/modules/rpm/d-bus/ricci-modrpm.oddjob.conf	2006/08/16 06:34:20	1.2
@@ -4,7 +4,7 @@
 	<object name="/com/redhat/ricci">
 		<interface name="com.redhat.ricci">
 			<method name="modrpm_rw">
-				<helper exec="/usr/sbin/ricci-modrpm"
+				<helper exec="/usr/libexec/ricci-modrpm"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
@@ -12,7 +12,7 @@
 				<allow user="root"/>
 			</method>
 			<method name="modrpm_ro">
-				<helper exec="/usr/sbin/ricci-modrpm_ro"
+				<helper exec="/usr/libexec/ricci-modrpm_ro"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
--- conga/ricci/modules/service/Makefile	2006/08/15 04:15:53	1.7
+++ conga/ricci/modules/service/Makefile	2006/08/16 06:34:20	1.8
@@ -31,8 +31,8 @@
 
 
 install: 
-	$(INSTALL_DIR)  ${sbindir}
-	$(INSTALL_BIN)  ${TARGET} ${sbindir}
+	$(INSTALL_DIR)  ${libexecdir}
+	$(INSTALL_BIN)  ${TARGET} ${libexecdir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/ricci-modservice.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/service/d-bus/ricci-modservice.oddjob.conf	2006/06/15 03:08:36	1.1
+++ conga/ricci/modules/service/d-bus/ricci-modservice.oddjob.conf	2006/08/16 06:34:20	1.2
@@ -4,7 +4,7 @@
 	<object name="/com/redhat/ricci">
 		<interface name="com.redhat.ricci">
 			<method name="modservice_rw">
-				<helper exec="/usr/sbin/ricci-modservice"
+				<helper exec="/usr/libexec/ricci-modservice"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
@@ -12,7 +12,7 @@
 				<allow user="root"/>
 			</method>
 			<method name="modservice_ro">
-				<helper exec="/usr/sbin/ricci-modservice_ro"
+				<helper exec="/usr/libexec/ricci-modservice_ro"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
--- conga/ricci/modules/storage/Makefile	2006/08/15 04:15:54	1.9
+++ conga/ricci/modules/storage/Makefile	2006/08/16 06:34:20	1.10
@@ -60,8 +60,8 @@
 
 
 install: 
-	$(INSTALL_DIR)  ${sbindir}
-	$(INSTALL_BIN)  ${TARGET} ${sbindir}
+	$(INSTALL_DIR)  ${libexecdir}
+	$(INSTALL_BIN)  ${TARGET} ${libexecdir}
 	$(INSTALL_DIR)  ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_FILE) d-bus/ricci-modstorage.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR)  ${sysconfdir}/dbus-1/system.d
--- conga/ricci/modules/storage/d-bus/ricci-modstorage.oddjob.conf	2006/06/15 03:08:37	1.1
+++ conga/ricci/modules/storage/d-bus/ricci-modstorage.oddjob.conf	2006/08/16 06:34:20	1.2
@@ -4,7 +4,7 @@
 	<object name="/com/redhat/ricci">
 		<interface name="com.redhat.ricci">
 			<method name="modstorage_rw">
-				<helper exec="/usr/sbin/ricci-modstorage"
+				<helper exec="/usr/libexec/ricci-modstorage"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
@@ -12,7 +12,7 @@
 				<allow user="root"/>
 			</method>
 			<method name="modstorage_ro">
-				<helper exec="/usr/sbin/ricci-modstorage_ro"
+				<helper exec="/usr/libexec/ricci-modstorage_ro"
 					arguments="1"
 					prepend_user_name="no"
 					argument_passing_method="stdin"
--- conga/ricci/ricci/Makefile	2006/08/04 17:55:24	1.14
+++ conga/ricci/ricci/Makefile	2006/08/16 06:34:20	1.15
@@ -51,8 +51,9 @@
 install: 
 	$(INSTALL_DIR)  ${sbindir}
 	$(INSTALL_BIN)  ${TARGET} ${sbindir}
-	$(INSTALL_BIN)  ${TARGET_AUTH} ${sbindir}
-	$(INSTALL_BIN)  ${TARGET_WORKER} ${sbindir}
+	$(INSTALL_DIR)  ${libexecdir}/ricci
+	install -m 4755 ${TARGET_AUTH}   ${libexecdir}/ricci
+	$(INSTALL_BIN)  ${TARGET_WORKER} ${libexecdir}/ricci
 	$(INSTALL_DIR)  ${localstatedir}/lib/ricci/queue
 	$(INSTALL_DIR)  ${localstatedir}/lib/ricci/certs
 	$(INSTALL_FILE) cacert.config ${localstatedir}/lib/ricci/certs/
--- conga/ricci/ricci/ricci_defines.h	2006/06/09 16:32:19	1.7
+++ conga/ricci/ricci/ricci_defines.h	2006/08/16 06:34:20	1.8
@@ -36,8 +36,8 @@
 #define QUEUE_DIR_PATH     "/var/lib/ricci/queue/"
 #define QUEUE_LOCK_PATH    "/var/lib/ricci/queue/lock"
 
-#define AUTH_HELPER_PATH   "/usr/sbin/ricci-auth"
-#define RICCI_WORKER_PATH  "/usr/sbin/ricci-worker"
+#define AUTH_HELPER_PATH   "/usr/libexec/ricci/ricci-auth"
+#define RICCI_WORKER_PATH  "/usr/libexec/ricci/ricci-worker"
 
 
 #endif  // ricci_defines_h



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-08-22 20:12 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-08-22 20:12:39

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	version bump

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.10&r2=1.11
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.36&r2=1.37
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.16&r2=1.17

--- conga/clustermon.spec.in.in	2006/08/17 22:25:12	1.10
+++ conga/clustermon.spec.in.in	2006/08/22 20:12:38	1.11
@@ -192,6 +192,12 @@
 
 
 %changelog
+* Fri Aug 21 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-14
+- Version bump
+
+* Fri Aug 18 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-13
+- Version bump
+
 * Wed Aug 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-12
 - Move modcluster from /usr/sbin to /usr/libexec
 - Implement support for Cluster Suite 5
--- conga/conga.spec.in.in	2006/08/22 17:32:06	1.36
+++ conga/conga.spec.in.in	2006/08/22 20:12:38	1.37
@@ -278,6 +278,9 @@
 
 
 %changelog
+* Fri Aug 21 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-14
+- Version bump
+
 * Fri Aug 18 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-13
 - Version bump
 
--- conga/make/version.in	2006/08/22 17:32:06	1.16
+++ conga/make/version.in	2006/08/22 20:12:39	1.17
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=13
+RELEASE=14



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-08-22 23:01 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-08-22 23:01:17

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	ricci          : configure 
	ricci/make     : defines.mk.in 
	ricci/ricci    : Makefile 

Log message:
	ricci build: use pkgconfig for libs detection

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.11&r2=1.12
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.37&r2=1.38
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/configure.diff?cvsroot=cluster&r1=1.7&r2=1.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/make/defines.mk.in.diff?cvsroot=cluster&r1=1.5&r2=1.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/ricci/Makefile.diff?cvsroot=cluster&r1=1.15&r2=1.16

--- conga/clustermon.spec.in.in	2006/08/22 20:12:38	1.11
+++ conga/clustermon.spec.in.in	2006/08/22 23:01:17	1.12
@@ -30,7 +30,7 @@
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
 BuildRequires: glibc-devel gcc-c++ libxml2-devel
-BuildRequires: openssl-devel dbus-devel pam-devel
+BuildRequires: openssl-devel dbus-devel pam-devel pkgconfig
 BuildRequires: net-snmp-devel tog-pegasus-devel
 
 %description
--- conga/conga.spec.in.in	2006/08/22 20:12:38	1.37
+++ conga/conga.spec.in.in	2006/08/22 23:01:17	1.38
@@ -40,7 +40,7 @@
 BuildRequires: python-devel >= 2.4.1
 %endif
 BuildRequires: glibc-devel gcc-c++ libxml2-devel sed
-BuildRequires: openssl-devel dbus-devel pam-devel
+BuildRequires: openssl-devel dbus-devel pam-devel pkgconfig
 
 %description
 Conga is a project developing management system for remote stations. 
--- conga/ricci/configure	2006/08/21 20:13:00	1.7
+++ conga/ricci/configure	2006/08/22 23:01:17	1.8
@@ -45,7 +45,7 @@
 
 
 # D-BUS version
-DBUS_VERSION=`dbus-cleanup-sockets --version | grep -i D-BUS | sed -e s,^D-B[Uu][Ss]\ Socket\ Cleanup\ Utility\ \\\\\([012]\.[0123456789]*\\\\\),\\\1,`
+DBUS_VERSION=`pkg-config --modversion dbus-1`
 if [ -z "$DBUS_VERSION" ] ; then 
     echo "missing d-bus"
     rm -f $MAKE_DEFINES
--- conga/ricci/make/defines.mk.in	2006/07/25 19:10:18	1.5
+++ conga/ricci/make/defines.mk.in	2006/08/22 23:01:17	1.6
@@ -35,10 +35,14 @@
 
 PEGASUS_PLATFORM  ?= @@PEGASUS_PLATFORM@@
 
-INCLUDE         += -I $(top_srcdir)/include `xml2-config --cflags`
+INCLUDE         += -I $(top_srcdir)/include \
+			`pkg-config --cflags libxml-2.0` \
+			`pkg-config --cflags openssl`
 CFLAGS          += -Wall -Wno-unused -fPIC -g ${INCLUDE}
 CXXFLAGS        += -Wall -Wno-unused -fPIC -g ${INCLUDE}
-LDFLAGS         += -fPIC `xml2-config --libs` -lssl -lpthread ${top_srcdir}/common/*.o
+LDFLAGS         += -fPIC -lpthread ${top_srcdir}/common/*.o \
+			`pkg-config --libs libxml-2.0` \
+			`pkg-config --libs openssl`
 
 CC              = gcc
 CXX             = g++
--- conga/ricci/ricci/Makefile	2006/08/16 06:34:20	1.15
+++ conga/ricci/ricci/Makefile	2006/08/22 23:01:17	1.16
@@ -38,10 +38,10 @@
 #OBJECTS = ssl_test.o
 
 
-INCLUDE     += -I ${includedir}/dbus-1.0 -I ${libdir}/dbus-1.0/include
+INCLUDE     += `pkg-config --cflags dbus-1`
 CFLAGS      += 
 CXXFLAGS    += -DDBUS_MAJOR_VERSION="${dbus_major_version}" -DDBUS_MINOR_VERSION="${dbus_minor_version}"
-LDFLAGS     += -l dbus-1 
+LDFLAGS     += `pkg-config --libs dbus-1`
 
 
 all: ${TARGET} ${TARGET_AUTH} ${TARGET_WORKER}



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-09-26  5:21 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-09-26 05:21:03

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	changelog

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.13&r2=1.14
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.41&r2=1.42
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.17&r2=1.18

--- conga/clustermon.spec.in.in	2006/09/05 19:53:40	1.13
+++ conga/clustermon.spec.in.in	2006/09/26 05:21:03	1.14
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Fri Sep 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-16
+- Suppress msgs from init script (bz204235)
+
 * Fri Aug 21 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-14
 - Version bump
 
--- conga/conga.spec.in.in	2006/09/14 17:52:04	1.41
+++ conga/conga.spec.in.in	2006/09/26 05:21:03	1.42
@@ -102,7 +102,6 @@
 %endif
 Requires: grep openssl mailcap stunnel 
 Requires: sed util-linux
-Requires: ricci = %{version}-%{release}
 
 Requires(pre): grep shadow-utils
 Requires(post): chkconfig initscripts
@@ -280,6 +279,17 @@
 
 
 %changelog
+* Mon Sep 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-16
+- Enable zope/plone inclusion on RHELs
+- Remove luci's dependency on ricci
+- Many improvements of cluster management interface
+- Add full GFS1/2 support; detect many other filesystems
+- Suppress msgs from init script (bz204235)
+- Upgrade zope to 2.9.4 (track Fedora Core)
+
+* Thu Aug 24 2006 Paul Nasrat <pnasrat@redhat.com> 0.8-15
+- Disable inclusion of plone/zope and comment out requires
+
 * Fri Aug 21 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-14
 - Version bump
 
--- conga/make/version.in	2006/08/22 20:12:39	1.17
+++ conga/make/version.in	2006/09/26 05:21:03	1.18
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=14
+RELEASE=16



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-10-04 16:32 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-10-04 16:32:41

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	New version, and changelog entries

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.14&r2=1.15
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.42&r2=1.43
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.18&r2=1.19

--- conga/clustermon.spec.in.in	2006/09/26 05:21:03	1.14
+++ conga/clustermon.spec.in.in	2006/10/04 16:32:40	1.15
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Wed Oct 04 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-17
+- Version bump
+
 * Fri Sep 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-16
 - Suppress msgs from init script (bz204235)
 
--- conga/conga.spec.in.in	2006/09/26 05:21:03	1.42
+++ conga/conga.spec.in.in	2006/10/04 16:32:41	1.43
@@ -279,6 +279,9 @@
 
 
 %changelog
+* Wed Oct 04 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-17
+- Many luci improvements
+
 * Mon Sep 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-16
 - Enable zope/plone inclusion on RHELs
 - Remove luci's dependency on ricci
--- conga/make/version.in	2006/09/26 05:21:03	1.18
+++ conga/make/version.in	2006/10/04 16:32:41	1.19
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=16
+RELEASE=17



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-10-16 15:56 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-10-16 15:56:05

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	Changelog and version update

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.16&r2=1.17
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.43&r2=1.44
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.19&r2=1.20

--- conga/clustermon.spec.in.in	2006/10/05 17:38:01	1.16
+++ conga/clustermon.spec.in.in	2006/10/16 15:56:05	1.17
@@ -194,6 +194,10 @@
 
 
 %changelog
+* Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-19
+- Fixed bz 206571 (clustat changed output)
+- modclusterd startup/shutdown improvements
+
 * Fri Oct 06 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-18
 - Add purge_conf argument to stop_node modcluster call (bz202314)
 
--- conga/conga.spec.in.in	2006/10/04 16:32:41	1.43
+++ conga/conga.spec.in.in	2006/10/16 15:56:05	1.44
@@ -279,6 +279,13 @@
 
 
 %changelog
+* Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-19
+- GUI nits
+- Fixed bzs: 206663, 206567, 206571, 206572
+
+* Wed Oct 06 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-18
+- Version bump
+
 * Wed Oct 04 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-17
 - Many luci improvements
 
--- conga/make/version.in	2006/10/04 16:32:41	1.19
+++ conga/make/version.in	2006/10/16 15:56:05	1.20
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=17
+RELEASE=19



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-10-16 21:01 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2006-10-16 21:01:40

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	changelog and version bump

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.17&r2=1.18
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.44&r2=1.45
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.20&r2=1.21

--- conga/clustermon.spec.in.in	2006/10/16 15:56:05	1.17
+++ conga/clustermon.spec.in.in	2006/10/16 21:01:40	1.18
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-20
+- cluster module: mark services as being xenvms, in status report
+
 * Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-19
 - Fixed bz 206571 (clustat changed output)
 - modclusterd startup/shutdown improvements
--- conga/conga.spec.in.in	2006/10/16 15:56:05	1.44
+++ conga/conga.spec.in.in	2006/10/16 21:01:40	1.45
@@ -279,6 +279,9 @@
 
 
 %changelog
+* Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-20
+- Minor GUI nits
+
 * Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-19
 - GUI nits
 - Fixed bzs: 206663, 206567, 206571, 206572
--- conga/make/version.in	2006/10/16 15:56:05	1.20
+++ conga/make/version.in	2006/10/16 21:01:40	1.21
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=19
+RELEASE=20



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-10-25 16:35 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2006-10-25 16:35:37

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	changelogs and version update

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18&r2=1.18.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.1&r2=1.45.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.1&r2=1.21.2.2

--- conga/clustermon.spec.in.in	2006/10/16 21:01:40	1.18
+++ conga/clustermon.spec.in.in	2006/10/25 16:35:37	1.18.2.1
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Wed Oct 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-21
+- 211942: Xenvm moniker must be eradicated
+
 * Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-20
 - cluster module: mark services as being xenvms, in status report
 
--- conga/conga.spec.in.in	2006/10/24 21:59:55	1.45.2.1
+++ conga/conga.spec.in.in	2006/10/25 16:35:37	1.45.2.2
@@ -282,6 +282,14 @@
 
 
 %changelog
+* Wed Oct 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-21
+- 211564: ricci allocates huge memory chunk, causing long swapping
+- 211345: cluster_adapters is missing import of Xenvm module
+- 211191: SELinux issues tracking bug
+- 211370: retrieving log from node fails
+- 211942: Xenvm moniker must be eradicated from UI
+- 211375: unable to reliably create a cluster
+- 211373: reboot node fails from storage tab
 
 * Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-20
 - Minor GUI nits
--- conga/make/version.in	2006/10/24 21:59:55	1.21.2.1
+++ conga/make/version.in	2006/10/25 16:35:37	1.21.2.2
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=20.4
+RELEASE=21



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: rmccabe @ 2006-10-25 18:47 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-10-25 18:47:16

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/storage   : form-macros 
	make           : version.in 
	ricci/common   : utils.cpp 
	ricci/modules/service: ServiceManager.cpp 

Log message:
	sync up changes from the -RHEL5 branch

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.18&r2=1.19
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.46&r2=1.47
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/storage/form-macros.diff?cvsroot=cluster&r1=1.17&r2=1.18
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.22&r2=1.23
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/common/utils.cpp.diff?cvsroot=cluster&r1=1.7&r2=1.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/ServiceManager.cpp.diff?cvsroot=cluster&r1=1.6&r2=1.7

--- conga/clustermon.spec.in.in	2006/10/16 21:01:40	1.18
+++ conga/clustermon.spec.in.in	2006/10/25 18:47:16	1.19
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Wed Oct 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-21
+- 211942: Xenvm moniker must be eradicated
+
 * Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-20
 - cluster module: mark services as being xenvms, in status report
 
--- conga/conga.spec.in.in	2006/10/24 21:54:29	1.46
+++ conga/conga.spec.in.in	2006/10/25 18:47:16	1.47
@@ -282,6 +282,14 @@
 
 
 %changelog
+* Wed Oct 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-21
+- 211564: ricci allocates huge memory chunk, causing long swapping
+- 211345: cluster_adapters is missing import of Xenvm module
+- 211191: SELinux issues tracking bug
+- 211370: retrieving log from node fails
+- 211942: Xenvm moniker must be eradicated from UI
+- 211375: unable to reliably create a cluster
+- 211373: reboot node fails from storage tab
 
 * Wed Oct 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-20
 - Minor GUI nits
--- conga/luci/storage/form-macros	2006/10/16 20:57:13	1.17
+++ conga/luci/storage/form-macros	2006/10/25 18:47:16	1.18
@@ -445,17 +445,12 @@
    <div metal:use-macro="here/form-macros/macros/single-visible-span"/>
    <div metal:use-macro="here/form-macros/macros/form-scripts"/>
 
-   <a tal:define="main_log_URL  context/cluster/index_html/absolute_url"
-      tal:attributes="href python:main_log_URL + '?nodename=' + storagename + '&pagetype=17'"
+   <a tal:define="main_log_URL  context/logs/index_html/absolute_url"
+      tal:attributes="href python:main_log_URL + '?nodename=' + storagename"
       onClick="return popup_log(this, 'notes')">
     View recent log activity
    </a>
    <br/>
-   <a tal:define="main_reboot_URL  python:'./?storagename=' + storagename + '&pagetype=44'"
-      tal:attributes="href main_reboot_URL">
-    Reboot this machine
-   </a>
-   <br/>
    <br/>
 
    <span tal:omit-tag=""
--- conga/make/version.in	2006/10/24 21:54:29	1.22
+++ conga/make/version.in	2006/10/25 18:47:16	1.23
@@ -1,11 +1,2 @@
-#VERSION=0.9
-#RELEASE=0_TMP_BUILD___WILL_BE_1
-
-# RELEASE has such a strange format just to make sure 
-# people notice that version is not completed
-#
-# after version is ready, replace with real release number
-# 
-
 VERSION=0.8
-RELEASE=20.4
+RELEASE=21
--- conga/ricci/common/utils.cpp	2006/10/23 18:43:35	1.7
+++ conga/ricci/common/utils.cpp	2006/10/25 18:47:16	1.8
@@ -27,6 +27,8 @@
 #include <openssl/md5.h>
 #include <stdlib.h>
 
+//#include <iostream>
+
 
 using namespace std;
 
--- conga/ricci/modules/service/ServiceManager.cpp	2006/10/23 18:43:36	1.6
+++ conga/ricci/modules/service/ServiceManager.cpp	2006/10/25 18:47:16	1.7
@@ -677,7 +677,8 @@
 	     release.find("6") != release.npos)
       // TODO: detect FC6
       FC6 = true;
-    // TODO: detect RHEL5
+    else if (release.find("Tikanga") != release.npos)
+      RHEL5 = true;
     
     release_set = true;
   }
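
The ServiceManager hunk above shows how the module decides which
distribution it is running on: substring matches against the release
string (RHEL5's codename is "Tikanga"). A minimal sketch of the
approach, assuming the string comes from /etc/redhat-release (the
function and flag names are illustrative):

    #include <fstream>
    #include <iterator>
    #include <string>

    // Crude codename-based detection, mirroring the hunk above.
    static void
    detect_release(bool& fc6, bool& rhel5)
    {
      std::ifstream in("/etc/redhat-release");
      std::string release((std::istreambuf_iterator<char>(in)),
                          std::istreambuf_iterator<char>());
      if (release.find("Fedora") != std::string::npos &&
          release.find("6") != std::string::npos)
        fc6 = true;        // the patch leaves a TODO to detect FC6 properly
      else if (release.find("Tikanga") != std::string::npos)
        rhel5 = true;
    }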



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-10-31 20:34 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2006-10-31 20:34:48

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	changelog and version update

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.1&r2=1.18.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.2&r2=1.45.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.2&r2=1.21.2.3

--- conga/clustermon.spec.in.in	2006/10/25 16:35:37	1.18.2.1
+++ conga/clustermon.spec.in.in	2006/10/31 20:34:47	1.18.2.2
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Tue Oct 31 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-22
+- Version bump
+
 * Wed Oct 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-21
 - 211942: Xenvm moniker must be eradicated
 
--- conga/conga.spec.in.in	2006/10/25 16:35:37	1.45.2.2
+++ conga/conga.spec.in.in	2006/10/31 20:34:47	1.45.2.3
@@ -282,6 +282,9 @@
 
 
 %changelog
+* Tue Oct 31 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-22
+- 212582: SELinux prevents creation of /etc/cluster/cluster.conf
+
 * Wed Oct 25 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-21
 - 211564: ricci allocates huge memory chunk, causing long swapping
 - 211345: cluster_adapters is missing import of Xenvm module
--- conga/make/version.in	2006/10/25 16:35:37	1.21.2.2
+++ conga/make/version.in	2006/10/31 20:34:48	1.21.2.3
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=21
+RELEASE=22



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: rmccabe @ 2006-11-01 20:43 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-11-01 20:43:45

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/cluster   : index_html 
	luci/homebase  : form-macros 
	luci/site/luci/var: Data.fs 
	make           : version.in 

Log message:
	copy over newer bits from the -RHEL5 tree

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.19&r2=1.20
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.47&r2=1.48
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/index_html.diff?cvsroot=cluster&r1=1.23&r2=1.24
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/form-macros.diff?cvsroot=cluster&r1=1.46&r2=1.47
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/var/Data.fs.diff?cvsroot=cluster&r1=1.16&r2=1.17
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.23&r2=1.24




* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: kupcevic @ 2006-11-01 23:11 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2006-11-01 23:11:25

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	changelog and update to version 0.8-23

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.2&r2=1.18.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.3&r2=1.45.2.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.3&r2=1.21.2.4

--- conga/clustermon.spec.in.in	2006/10/31 20:34:47	1.18.2.2
+++ conga/clustermon.spec.in.in	2006/11/01 23:11:25	1.18.2.3
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Wed Nov 01 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-23
+- version bump
+
 * Tue Oct 31 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-22
 - Version bump
 
--- conga/conga.spec.in.in	2006/10/31 20:34:47	1.45.2.3
+++ conga/conga.spec.in.in	2006/11/01 23:11:25	1.45.2.4
@@ -282,6 +282,10 @@
 
 
 %changelog
+* Wed Nov 01 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-23
+- 213504: luci does not correctly handle cluster.conf with 
+  nodes lacking FQDN
+
 * Tue Oct 31 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-22
 - 212582: SELinux prevents creation of /etc/cluster/cluster.conf
 
--- conga/make/version.in	2006/10/31 20:34:48	1.21.2.3
+++ conga/make/version.in	2006/11/01 23:11:25	1.21.2.4
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=22
+RELEASE=23



* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
From: rmccabe @ 2006-11-02  0:46 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2006-11-02 00:46:53

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci           : Makefile load_site.py pack.py 
	luci/init.d    : luci 
	luci/site/luci/var: Data.fs 
	make           : version.in 

Log message:
	copy over newer bits from -RHEL5

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.20&r2=1.21
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.48&r2=1.49
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/Makefile.diff?cvsroot=cluster&r1=1.20&r2=1.21
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/load_site.py.diff?cvsroot=cluster&r1=1.14&r2=1.15
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/pack.py.diff?cvsroot=cluster&r1=1.4&r2=1.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/init.d/luci.diff?cvsroot=cluster&r1=1.12&r2=1.13
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/var/Data.fs.diff?cvsroot=cluster&r1=1.17&r2=1.18
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.24&r2=1.25

--- conga/clustermon.spec.in.in	2006/11/01 20:43:39	1.20
+++ conga/clustermon.spec.in.in	2006/11/02 00:46:49	1.21
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Wed Nov 01 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-23
+- version bump
+
 * Tue Oct 31 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-22
 - Version bump
 
--- conga/conga.spec.in.in	2006/11/01 20:43:39	1.48
+++ conga/conga.spec.in.in	2006/11/02 00:46:49	1.49
@@ -282,6 +282,10 @@
 
 
 %changelog
+* Wed Nov 01 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-23
+- 213504: luci does not correctly handle cluster.conf with 
+  nodes lacking FQDN
+
 * Tue Oct 31 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-22
 - 212582: SELinux prevents creation of /etc/cluster/cluster.conf
 
--- conga/luci/Makefile	2006/08/09 15:52:14	1.20
+++ conga/luci/Makefile	2006/11/02 00:46:49	1.21
@@ -1,4 +1,3 @@
-# $Id: Makefile,v 1.20 2006/08/09 15:52:14 rmccabe Exp $
 ZOPEINSTANCE=/var/lib/luci
 
 include ../make/version.in
--- conga/luci/load_site.py	2006/09/19 15:01:20	1.14
+++ conga/luci/load_site.py	2006/11/02 00:46:49	1.15
@@ -1,5 +1,4 @@
 #!/usr/bin/python
-# $Id: load_site.py,v 1.14 2006/09/19 15:01:20 rmccabe Exp $
 
 ##############################################################################
 #
--- conga/luci/pack.py	2006/07/24 20:17:01	1.4
+++ conga/luci/pack.py	2006/11/02 00:46:49	1.5
@@ -1,5 +1,4 @@
 #!/usr/bin/python
-# $Id: pack.py,v 1.4 2006/07/24 20:17:01 kupcevic Exp $
 
 import os, sys, string
 
--- conga/luci/init.d/luci	2006/11/02 00:23:28	1.12
+++ conga/luci/init.d/luci	2006/11/02 00:46:49	1.13
@@ -203,8 +203,7 @@
 	        system_running
 		rtrn=$?
 		if [ "1$rtrn" = "10" ] ; then
-		    echo "$ID is running..." 
-		    echo "$ID listens on port $LUCI_HTTPS_PORT; accessible using url $LUCI_URL" 
+		    echo "$ID is running..."
 		else
 		    echo "$ID is stopped"
 		fi
Binary files /cvs/cluster/conga/luci/site/luci/var/Data.fs	2006/11/01 20:43:39	1.17 and /cvs/cluster/conga/luci/site/luci/var/Data.fs	2006/11/02 00:46:49	1.18 differ
rcsdiff: /cvs/cluster/conga/luci/site/luci/var/Data.fs: diff failed
--- conga/make/version.in	2006/11/01 20:43:45	1.24
+++ conga/make/version.in	2006/11/02 00:46:52	1.25
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=22
+RELEASE=23



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2006-11-16 19:35 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2006-11-16 19:35 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2006-11-16 19:34:54

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci           : Makefile load_site.py pack.py 
	luci/cluster   : form-chooser form-macros index_html 
	                 portlet_cluconfig resource-form-macros 
	                 resource_form_handlers.js 
	luci/homebase  : form-chooser form-macros homebase_common.js 
	                 homebase_portlet_fetcher index_html 
	                 luci_homebase.css portlet_homebase 
	luci/plone-custom: conga.js footer 
	luci/site/luci/Extensions: FenceDaemon.py FenceHandler.py 
	                           LuciSyslog.py cluster_adapters.py 
	                           conga_constants.py 
	                           homebase_adapters.py ricci_bridge.py 
	                           ricci_communicator.py 
	make           : version.in 
	ricci/modules/log: LogParser.cpp 
Added files:
	doc            : config_rhel5.html 
	luci/docs      : config_rhel5 
Removed files:
	luci/site/luci/Extensions: Quorumd.py ricci_test.py 

Log message:
	sync with HEAD, bump to version 0.8-24

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.3&r2=1.18.2.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.4&r2=1.45.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/doc/config_rhel5.html.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/Makefile.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.20&r2=1.20.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/load_site.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.14&r2=1.14.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/pack.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.4&r2=1.4.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-chooser.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.12&r2=1.12.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.90.2.2&r2=1.90.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/index_html.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.20.2.3&r2=1.20.2.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/portlet_cluconfig.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.2&r2=1.2.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/resource-form-macros.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.1&r2=1.21.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/resource_form_handlers.js.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.20.2.1&r2=1.20.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/docs/config_rhel5.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.2.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/form-chooser.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.10&r2=1.10.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/form-macros.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.44.2.3&r2=1.44.2.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/homebase_common.js.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.13&r2=1.13.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/homebase_portlet_fetcher.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3&r2=1.3.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/index_html.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.1&r2=1.18.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/luci_homebase.css.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.28&r2=1.28.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/portlet_homebase.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.7&r2=1.7.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/plone-custom/conga.js.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3&r2=1.3.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/plone-custom/footer.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.2&r2=1.2.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceDaemon.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceHandler.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.4&r2=1.4.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciSyslog.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.2.2.2&r2=1.2.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.120.2.8&r2=1.120.2.9
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.19.2.1&r2=1.19.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/homebase_adapters.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.34.2.5&r2=1.34.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.30.2.6&r2=1.30.2.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_communicator.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.9.2.3&r2=1.9.2.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Quorumd.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_test.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.4&r2=1.21.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/log/LogParser.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.6.2.1&r2=1.6.2.2

--- conga/clustermon.spec.in.in	2006/11/01 23:11:25	1.18.2.3
+++ conga/clustermon.spec.in.in	2006/11/16 19:34:52	1.18.2.4
@@ -194,6 +194,10 @@
 
 
 %changelog
+
+* Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-24
+- version bump
+
 * Wed Nov 01 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-23
 - version bump
 
--- conga/conga.spec.in.in	2006/11/01 23:11:25	1.45.2.4
+++ conga/conga.spec.in.in	2006/11/16 19:34:52	1.45.2.5
@@ -282,6 +282,31 @@
 
 
 %changelog
+
+* Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-24
+- Fixed bz215039 (Cannot create a new resource via luci web app)
+- Fixed bz215034 (Cannot change daemon properties via luci web app)
+- Fixed bz214790 (Stop/restart cluster not working via luci web app)
+- Fixed bz213690 (luci - Reversed links in colophon (gui - minor))
+- Fixed bz213266 (Conga - modifying a cluster node's cluster membership in a subnet with other clusters results in the wrong cluster.conf)
+- Fixed bz213083 (luci - should display usernames in some logical/sorted order (usability))
+- Fixed bz212601 (luci - selecting cluster name or cluster node name indicates error in install and displays empty form)
+- Fixed bz212021 (various luci buttons do nothing)
+- Fixed bz212006 (create cluster does not show status as cluster is being created)
+- Fixed bz212584 (luci does not retrieve failed ricci queue elements)
+- Fixed bz212440 (luci persists possibly incorrect name for a system)
+- Improved bz213306 (ricci - log probing can take minutes to complete)
+- Fixed starting/stopping services
+- Fixed deleting cluster
+- Fixed deleting node
+- Fixed redirection for all async->busy wait calls
+- Storage module: properly probe cluster quorum if LVM locking 
+  is marked as clustered
+- Resolves: bz215039, bz215034, bz214790, bz213690, bz213266
+- Resolves: bz213083, bz212601, bz212021, bz212006, bz212584
+- Resolves: bz212440
+- Related: bz213306
+
 * Wed Nov 01 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-23
 - 213504: luci does not correctly handle cluster.conf with 
   nodes lacking FQDN
/cvs/cluster/conga/doc/config_rhel5.html,v  -->  standard output
revision 1.1.2.1
--- conga/doc/config_rhel5.html
+++ -	2006-11-16 19:34:56.739739000 +0000
@@ -0,0 +1,260 @@
+<html><head><title>Advanced Cluster Configuration Parameters</title>
+</head><body>
+<h2>Advanced Cluster Configuration Parameters</h2>
+<p>
+<dl compact>
+<dt><a name="secauth"><strong>secauth</strong></a><dd>
+This specifies that HMAC/SHA1 authentication should be used to authenticate
+all messages.  It further specifies that all data should be encrypted with the
+sober128 encryption algorithm to protect data from eavesdropping.
+<p>
+Enabling this option adds a 36 byte header to every message sent by totem which
+reduces total throughput.  Encryption and authentication consume 75% of CPU
+cycles in aisexec as measured with gprof when enabled.
+<p>
+For 100mbit networks with 1500 MTU frame transmissions:
+A throughput of 9mb/sec is possible with 100% cpu utilization when this
+option is enabled on 3ghz cpus.
+A throughput of 10mb/sec is possible with 20% cpu utilization when this
+option is disabled on 3ghz cpus.
+<p>
+For gig-e networks with large frame transmissions:
+A throughput of 20mb/sec is possible when this option is enabled on
+3ghz cpus.
+A throughput of 60mb/sec is possible when this option is disabled on
+3ghz cpus.
+<p>
+The default is on.
+<p>
+<dt><a name="rrp_mode"><strong>rrp_mode</strong></a><dd>
+This specifies the mode of redundant ring, which may be none, active, or
+passive.  Active replication offers slightly lower latency from transmit
+to delivery in faulty network environments but with less performance.
+Passive replication may nearly double the speed of the totem protocol
+if the protocol doesn't become cpu bound.  The final option is none, in
+which case only one network interface will be used to operate the totem
+protocol.
+<p>
+If only one interface directive is specified, none is automatically chosen.
+If multiple interface directives are specified, only active or passive may
+be chosen.
+<p>
+<dt><a name="netmtu"><strong>netmtu</strong></a><dd>
+This specifies the network maximum transmit unit.  Setting this value beyond
+1500, the regular frame MTU, requires ethernet devices that support large
+(also called jumbo) frames.  If any device in the network doesn't support large
+frames, the protocol will not operate properly.  The hosts must also have their
+mtu size set from 1500 to whatever frame size is specified here.
+<p>
+Please note while some NICs or switches claim large frame support, they support
+9000 MTU as the maximum frame size including the IP header.  Setting the netmtu
+and host MTUs to 9000 will cause totem to use the full 9000 bytes of the frame.
+Then Linux will add an 18 byte header, moving the full frame size to 9018.  As a
+result some hardware will not operate properly with this size of data.  A netmtu 
+of 8982 seems to work for the few large frame devices that have been tested.
+Some manufacturers claim large frame support when in fact they support frame
+sizes of 4500 bytes.
+<p>
+Increasing the MTU from 1500 to 8982 doubles throughput performance from 30MB/sec
+to 60MB/sec as measured with evsbench with 175000 byte messages with the secauth 
+directive set to off.
+<p>
+When sending multicast traffic, if the network frequently reconfigures, chances are
+that some device in the network doesn't support large frames.
+<p>
+Choose hardware carefully if intending to use large frame support.
+<p>
+The default is 1500.
+<p>
+<dt><a name="threads"><strong>threads</strong></a><dd>
+This directive controls how many threads are used to encrypt and send multicast
+messages.  If secauth is off, the protocol will never use threaded sending.
+If secauth is on, this directive allows systems to be configured to use
+multiple threads to encrypt and send multicast messages.
+<p>
+A thread directive of 0 indicates that no threaded send should be used.  This
+mode offers best performance for non-SMP systems. 
+<p>
+The default is 0.
+<p>
+<dt><a name="vsftype"><strong>vsftype</strong></a><dd>
+This directive controls the virtual synchrony filter type used to identify
+a primary component.  The preferred choice is YKD dynamic linear voting,
+however, for clusters larger than 32 nodes YKD consumes a lot of memory.  For
+large scale clusters that are created by changing the MAX_PROCESSORS_COUNT 
+#define in the C code totem.h file, the virtual synchrony filter &quot;none&quot; is
+recommended but then AMF and DLCK services (which are currently experimental)
+are not safe for use.
+<p>
+The default is ykd.  The vsftype can also be set to none.
+<p>
+Within the <B>totem</B> directive, there are several configuration options
+which are used to control
+the operation of the protocol.  It is generally not recommended to change any
+of these values without proper guidance and sufficient testing.  Some networks
+may require larger values if suffering from frequent reconfigurations.  Some
+applications may require faster failure detection times which can be achieved
+by reducing the token timeout.
+<p>
+<dt><a name="token"><strong>token</strong></a><dd>
+This timeout specifies the time in milliseconds until a token loss is declared after not
+receiving a token.  This is the time spent detecting a failure of a processor
+in the current configuration.  Reforming a new configuration takes about 50
+milliseconds in addition to this timeout.
+<p>
+The default is 5000 milliseconds.
+<p>
+<dt><a name="token_retransmit"><strong>token_retransmit</strong></a><dd>
+This timeout specifies the time in milliseconds to wait without receiving a token
+before the token is retransmitted.  This will be automatically calculated if token
+is modified.  It is not recommended to alter this value without guidance from
+the openais community.
+<p>
+The default is 238 milliseconds.
+<p>
+<dt><a name="hold"><strong>hold</strong></a><dd>
+This timeout specifies in milliseconds how long the token should be held by
+the representative when the protocol is under low utilization.   It is not
+recommended to alter this value without guidance from the openais community.
+<p>
+The default is 180 milliseconds.
+<p>
+<dt><a name="retransmits_before_loss"><strong>retransmits_before_loss</strong></a><dd>
+This value identifies how many token retransmits should be attempted before
+forming a new configuration.  If this value is set, retransmit and hold will
+be automatically calculated from retransmits_before_loss and token.
+<p>
+The default is 4 retransmissions.
+<p>
+<dt><a name="join"><strong>join</strong></a><dd>
+This timeout specifies in milliseconds how long to wait for join messages in 
+the membership protocol.
+<p>
+The default is 100 milliseconds.
+<p>
+<dt><a name="send_join"><strong>send_join</strong></a><dd>
+This timeout specifies in milliseconds an upper bound between 0 and send_join
+to wait before sending a join message.  For configurations with less than
+32 nodes, this parameter is not necessary.  For larger rings, this parameter
+is necessary to ensure the NIC is not overflowed with join messages on
+formation of a new ring.  A reasonable value for large rings (128 nodes) would
+be 80msec.  Other timer values must also change if this value is changed.  Seek
+advice from the openais mailing list if trying to run larger configurations.
+<p>
+The default is 0 milliseconds.
+<p>
+<dt><a name="consensus"><strong>consensus</strong></a><dd>
+This timeout specifies in milliseconds how long to wait for consensus to be
+achieved before starting a new round of membership configuration.
+<p>
+The default is 200 milliseconds.
+<p>
+<dt><a name="merge"><strong>merge</strong></a><dd>
+This timeout specifies in milliseconds how long to wait before checking for
+a partition when no multicast traffic is being sent.  If multicast traffic
+is being sent, the merge detection happens automatically as a function of
+the protocol.
+<p>
+The default is 200 milliseconds.
+<p>
+<dt><a name="downcheck"><strong>downcheck</strong></a><dd>
+This timeout specifies in milliseconds how long to wait before checking
+that a network interface is back up after it has been downed.
+<p>
+The default is 1000 milliseconds.
+<p>
+<dt><a name="fail_to_recv_const"><strong>fail_to_recv_const</strong></a><dd>
+This constant specifies how many rotations of the token may occur without
+receiving any of the messages, when messages should be received, before a new
+configuration is formed.
+<p>
+The default is 50 failures to receive a message.
+<p>
+<dt><a name="seqno_unchanged_const"><strong>seqno_unchanged_const</strong></a><dd>
+This constant specifies how many rotations of the token without any multicast
+traffic should occur before the merge detection timeout is started.
+<p>
+The default is 30 rotations.
+<p>
+<dt><a name="heartbeat_failures_allowed"><strong>heartbeat_failures_allowed</strong></a><dd>
+[HeartBeating mechanism]
+Configures the optional HeartBeating mechanism for faster failure detection. Keep in
+mind that engaging this mechanism in lossy networks could cause faulty loss declaration 
+as the mechanism relies on the network for heartbeating. 
+<p>
+So as a rule of thumb, use this mechanism if you require improved failure
+detection in low- to medium-utilization networks.
+<p>
+This constant specifies the number of heartbeat failures the system should tolerate
+before declaring heartbeat failure, e.g. 3.  Also, if this value is not set or is 0, then the
+heartbeat mechanism is not engaged in the system and token rotation is the method
+of failure detection.
+<p>
+The default is 0 (disabled).
+<p>
+<dt><a name="max_network_delay"><strong>max_network_delay</strong></a><dd>
+[HeartBeating mechanism]
+This constant specifies in milliseconds the approximate delay that your network takes
+to transport one packet from one machine to another. This value is to be set by system
+engineers; do not change it unless you are sure, as it affects the failure
+detection mechanism using heartbeat.
+<p>
+The default is 50 milliseconds.
+<p>
+<dt><a name="window_size"><strong>window_size</strong></a><dd>
+This constant specifies the maximum number of messages that may be sent on one
+token rotation.  If all processors perform equally well, this value could be
+large (300), which would introduce higher latency from origination to delivery
+for very large rings.  To reduce latency in large rings (16+), the defaults are
+a safe compromise.  If 1 or more slow processor(s) are present among fast
+processors, window_size should be no larger than 256000 / netmtu to avoid
+overflow of the kernel receive buffers.  The user is notified of this by
+the display of a retransmit list in the notification logs.  There is no loss
+of data, but performance is reduced when these errors occur.
+<p>
+The default is 50 messages.
+<p>
+<dt><a name="max_messages"><strong>max_messages</strong></a><dd>
+This constant specifies the maximum number of messages that may be sent by one
+processor on receipt of the token.  The max_messages parameter is limited to
+256000 / netmtu to prevent overflow of the kernel transmit buffers.
+<p>
+The default is 17 messages.
+<p>
+<dt><a name="rrp_problem_count_timeout"><strong>rrp_problem_count_timeout</strong></a><dd>
+This specifies the time in milliseconds to wait before decrementing the
+problem count by 1 for a particular ring to ensure a link is not marked
+faulty for transient network failures.
+<p>
+The default is 1000 milliseconds.
+<p>
+<dt><a name="rrp_problem_count_threshold"><strong>rrp_problem_count_threshold</strong></a><dd>
+This specifies the number of times a problem is detected with a link before
+setting the link faulty.  Once a link is set faulty, no more data is
+transmitted upon it.  Also, the problem counter is no longer decremented when
+the problem count timeout expires.
+<p>
+A problem is detected whenever all tokens from the preceding processor have
+not been received within the rrp_token_expired_timeout.  The
+rrp_problem_count_threshold * rrp_token_expired_timeout should be at least 50
+milliseconds less than the token timeout, or a complete reconfiguration
+may occur.
+<p>
+The default is 20 problem counts.
+<p>
+<dt><a name="rrp_token_expired_timeout"><strong>rrp_token_expired_timeout</strong></a><dd>
+This specifies the time in milliseconds to increment the problem counter for
+the redundant ring protocol after not having received a token from all rings
+for a particular processor.
+<p>
+This value will automatically be calculated from the token timeout and
+problem_count_threshold but may be overridden.  It is not recommended to
+override this value without guidance from the openais community.
+<p>
+The default is 47 milliseconds.
+<p>
+</dl>
+</body>
+</html>
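
The parameters documented in config_rhel5.html correspond to the totem
directive of the openais configuration. A rough sketch only: the key names
follow the document above, the interface addresses are placeholder example
values, and on RHEL5 cman normally derives this stanza from cluster.conf
rather than from a hand-edited file.

	totem {
		version: 2
		secauth: on
		threads: 0
		rrp_mode: none
		netmtu: 1500
		vsftype: ykd
		token: 5000
		token_retransmit: 238
		hold: 180
		retransmits_before_loss: 4
		join: 100
		consensus: 200
		# ...the remaining documented parameters take the same "key: value" form
		interface {
			ringnumber: 0
			bindnetaddr: 192.168.1.0    # placeholder network address
			mcastaddr: 226.94.1.1       # example multicast address
			mcastport: 5405
		}
	}

Two consistency checks fall out of the documentation directly: with these
defaults, rrp_problem_count_threshold * rrp_token_expired_timeout is
20 * 47 = 940 ms, comfortably more than 50 ms below the 5000 ms token
timeout; and with netmtu 1500, window_size should not exceed
256000 / 1500 (about 170) when slow processors are present, which the
default of 50 satisfies.
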
--- conga/luci/Makefile	2006/08/09 15:52:14	1.20
+++ conga/luci/Makefile	2006/11/16 19:34:52	1.20.2.1
@@ -1,4 +1,3 @@
-# $Id: Makefile,v 1.20 2006/08/09 15:52:14 rmccabe Exp $
 ZOPEINSTANCE=/var/lib/luci
 
 include ../make/version.in
--- conga/luci/load_site.py	2006/09/19 15:01:20	1.14
+++ conga/luci/load_site.py	2006/11/16 19:34:52	1.14.2.1
@@ -1,5 +1,4 @@
 #!/usr/bin/python
-# $Id: load_site.py,v 1.14 2006/09/19 15:01:20 rmccabe Exp $
 
 ##############################################################################
 #
--- conga/luci/pack.py	2006/07/24 20:17:01	1.4
+++ conga/luci/pack.py	2006/11/16 19:34:52	1.4.2.1
@@ -1,5 +1,4 @@
 #!/usr/bin/python
-# $Id: pack.py,v 1.4 2006/07/24 20:17:01 kupcevic Exp $
 
 import os, sys, string
 
--- conga/luci/cluster/form-chooser	2006/10/16 20:25:33	1.12
+++ conga/luci/cluster/form-chooser	2006/11/16 19:34:52	1.12.2.1
@@ -12,7 +12,7 @@
   <span tal:condition="not: busywaiting">
     <span tal:omit-tag="" tal:define="global ptype request/pagetype |nothing"/>
     <span tal:omit-tag="" tal:condition="python: not ptype">
-     <div metal:use-macro="here/form-macros/macros/entry-form"/>
+     <div metal:use-macro="here/form-macros/macros/clusters-form"/>
     </span>
     <span tal:omit-tag="" tal:condition="python: ptype == '0' or ptype == '1' or ptype == '2' or ptype == '3'">
      <div metal:use-macro="here/form-macros/macros/clusters-form"/>
--- conga/luci/cluster/form-macros	2006/10/31 17:28:03	1.90.2.2
+++ conga/luci/cluster/form-macros	2006/11/16 19:34:52	1.90.2.3
@@ -7,7 +7,6 @@
 <body>
 
 <div metal:define-macro="entry-form">
-	<h2>Entry Form</h2>
 </div>
 
 <div metal:define-macro="busywaitpage">
@@ -26,26 +25,33 @@
       </span>
       <span tal:condition="python: 'isnodecreation' in nodereport and nodereport['isnodecreation'] == True">
        <span tal:condition="python: nodereport['iserror'] == True">
-			  <h2><span tal:content="nodereport/desc" /></h2>
-         <font color="red"><span tal:content="nodereport/errormessage"/></font>
+		<h2><span tal:content="nodereport/desc" /></h2>
+		<span class="errmsg" tal:content="nodereport/errormessage"/>
        </span>
+
        <span tal:condition="python: nodereport['iserror'] == False">
-			  <h2><span tal:content="nodereport/desc" /></h2>
-         <i><span tal:content="nodereport/statusmessage"/></i><br/>
-          <span tal:condition="python: nodereport['statusindex'] == 0">
+		<h2><span tal:content="nodereport/desc" /></h2>
+		<em tal:content="nodereport/statusmessage | nothing"/><br/>
+          <span tal:condition="python: nodereport['statusindex'] < 1">
            <img src="notstarted.png"/>
           </span>
-          <span tal:condition="python: nodereport['statusindex'] == 1">
-           <img src="installed.png"/>
-          </span>
-          <span tal:condition="python: nodereport['statusindex'] == 2">
-           <img src="rebooted.png"/>
+
+          <span tal:condition="
+			python: nodereport['statusindex'] == 1 or nodereport['statusindex'] == 2">
+           <img src="installed.png" alt="[cluster software installed]" />
           </span>
+
           <span tal:condition="python: nodereport['statusindex'] == 3">
-           <img src="configured.png"/>
+           <img src="rebooted.png" alt="[cluster node rebooted]" />
+          </span>
+
+          <span tal:condition="
+				python: nodereport['statusindex'] == 4 or nodereport['statusindex'] == 5">
+           <img src="configured.png" alt="[cluster node configured]" />
           </span>
-          <span tal:condition="python: nodereport['statusindex'] == 4">
-           <img src="joined.png"/>
+
+          <span tal:condition="python: nodereport['statusindex'] == 6">
+           <img src="joined.png" alt="[cluster node joined cluster]" />
           </span>
        </span>
       </span>
@@ -61,18 +67,17 @@
 
 <div id="cluster_list">
 <div class="cluster" tal:repeat="clu clusystems">
-
 	<tal:block tal:define="
-		global ragent python: here.getRicciAgent(clu[0])" />
+		global ricci_agent python: here.getRicciAgent(clu[0])" />
 
-	<div tal:condition="python: not ragent">
+	<div tal:condition="python: not ricci_agent">
 		<strong class="errmsgs">An error occurred when trying to contact any of the nodes in the <span tal:replace="python: clu[0]"/> cluster.</strong>
 		<hr/>
 	</div>
 
-	<tal:block tal:condition="python: ragent">
+	<tal:block tal:condition="python: ricci_agent">
 		<tal:block tal:define="
-			global stat python: here.getClusterStatus(ragent);
+			global stat python: here.getClusterStatus(ricci_agent);
 			global cstatus python: here.getClustersInfo(stat, request);
 			global cluster_status python: 'cluster ' + (('running' in cstatus and cstatus['running'] == 'true') and 'running' or 'stopped');"
 	 	/>
@@ -84,15 +89,33 @@
 			<a href=""
 				tal:attributes="href cstatus/clucfg | nothing;
 								class python: 'cluster ' + cluster_status;"
-				tal:content="cstatus/clusteralias | string: [unknown]" />
+				tal:content="cstatus/clusteralias | string:[unknown]" />
 		</td>
 
 		<td class="cluster cluster_action">
 			<form method="post" onSubmit="return dropdown(this.gourl)">
 				<select name="gourl" id="cluster_action" class="cluster">
-					<option tal:condition="python: 'running' in cstatus and cstatus['running'] != 'true'" value="" class="cluster running">Start this cluster</option>
-					<option tal:condition="python: 'running' in cstatus and cstatus['running'] == 'true'" value="" class="cluster stopped">Stop this cluster</option>
-					<option value="" class="cluster">Restart this cluster</option>
+					<option class="cluster running"
+						tal:condition="python: 'running' in cstatus and cstatus['running'] != 'true'"
+						tal:attributes="value cstatus/start_url | nothing">
+						Start this cluster
+					</option>
+
+					<option class="cluster"
+						tal:attributes="value cstatus/restart_url | nothing">
+						Restart this cluster
+					</option>
+
+					<option class="cluster stopped"
+						tal:condition="python: 'running' in cstatus and cstatus['running'] == 'true'"
+						tal:attributes="value cstatus/stop_url | nothing">
+						Stop this cluster
+					</option>
+
+					<option class="cluster stopped"
+						tal:attributes="value cstatus/delete_url | nothing">
+						Delete this cluster
+					</option>
 				</select>
 				<input class="cluster" type="submit" value="Go" />
 			</form>
@@ -352,34 +375,41 @@
 		set_page_title('Luci - cluster - Configure cluster properties');
 	</script>
 
-	<span tal:define="global ricci_agent python: here.getRicciAgentForCluster(request)" />
+	<tal:block tal:define="
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
+	<tal:block tal:condition="not: exists: modelb">
+		<tal:block tal:define="global modelb python: None" />
+	</tal:block>
+
 	<tal:block
 		tal:define="global clusterinfo python: here.getClusterInfo(modelb, request)" />
 
+<tal:block tal:condition="clusterinfo">
 	<span tal:omit-tag="" tal:define="global configTabNum python: 'tab' in request and int(request['tab']) or 1" />
 
 	<ul class="configTab">
 		<li class="configTab">
 			<a tal:attributes="
-				href python: request['URL'] + '?pagetype=' + request['pagetype'] + '&clustername=' + request['clustername'] + '&tab=1';
+				href clusterinfo/basecluster_url | nothing;
 				class python: 'configTab' + (configTabNum == 1 and ' configTabActive' or '');
 			">General</a>
 		</li>
 		<li class="configTab">
 			<a tal:attributes="
-				href python: request['URL'] + '?pagetype=' + request['pagetype'] + '&clustername=' + request['clustername'] + '&tab=2';
+				href clusterinfo/fencedaemon_url | nothing;
 				class python: 'configTab' + (configTabNum == 2 and ' configTabActive' or '');
 			">Fence</a>
 		</li>
 		<li class="configTab">
 			<a tal:attributes="
-				href python: request['URL'] + '?pagetype=' + request['pagetype'] + '&clustername=' + request['clustername'] + '&tab=3';
+				href clusterinfo/multicast_url | nothing;
 				class python: 'configTab' + (configTabNum == 3 and ' configTabActive' or '');
 			">Multicast</a>
 		</li>
 		<li class="configTab">
 			<a tal:attributes="
-				href python: request['URL'] + '?pagetype=' + request['pagetype'] + '&clustername=' + request['clustername'] + '&tab=4';
+				href clusterinfo/quorumd_url | nothing;
 				class python: 'configTab' + (configTabNum == 4 and ' configTabActive' or '');
 			">Quorum Partition</a>
 		</li>
@@ -394,10 +424,15 @@
 		</script>
 
 		<form name="basecluster" action="" method="post">
+			<input type="hidden" name="cluster_version"
+				tal:attributes="value os_version | nothing" />
 			<input type="hidden" name="pagetype"
 				tal:attributes="value request/pagetype | request/form/pagetype"
 			/>
 			<input type="hidden" name="configtype" value="general" />
+			<input type="hidden" name="clustername"
+				tal:attributes="value request/clustername | clusterinfo/clustername | nothing" />
+
 		<table id="systemsTable" class="systemsTable" border="0" cellspacing="0">
 			<thead class="systemsTable">
 				<tr class="systemsTable"><td class="systemsTable" colspan="1">
@@ -412,7 +447,7 @@
 					<td class="systemsTable">Cluster Name</td>
 					<td class="systemsTable">
 						<input type="text" name="cluname"
-							tal:attributes="value clusterinfo/clustername"/>
+							tal:attributes="value clusterinfo/clustername" />
 					</td>
 				</tr>
 				<tr class="systemsTable">
@@ -423,8 +458,272 @@
 					</td>
 				</tr>
 			</tbody>
+		</table>
 
-			<tfoot class="systemsTable">
+		<table tal:condition="python: os_version and os_version == 'rhel5'">
+			<tr class="systemsTable">
+				<td class="systemsTable" colspan="2">
+					<img src="/luci/cluster/arrow_right.png" alt="[+]"
+						onclick="toggle_visible(this, 'genprops_advanced', 'genprops_advanced_label')">
+					<span id="genprops_advanced_label">Show</span>
+					advanced cluster properties
+				</td>
+			</tr>
+
+			<tr class="systemsTable invisible" id="genprops_advanced">
+				<td class="systemsTable" colspan="2">
+					<table class="systemsTable">
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#secauth', 55, 65);">Secure Authentication</a>
+							</td>
+							<td class="systemsTable">
+								<input type="checkbox" name="secauth" checked="checked" />
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#rrp_mode', 55, 65);">Redundant Ring Protocol Mode</a>
+							</td>
+							<td class="systemsTable">
+								<select name="text" name="rrp_mode">
+									<option value="none">
+										None
+									</option>
+									<option value="active">
+										Active
+									</option>
+									<option value="passive">
+										Passive
+									</option>
+								</select>
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#netmtu', 55, 65);">Network MTU</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="netmtu"
+									tal:attributes="value string:1500" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#threads', 55, 65);">Number of Threads
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10" name="threads"
+									tal:attributes="value string:0" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#vsftype', 55, 65);">Virtual Synchrony Type
+							</td>
+							<td class="systemsTable">
+								<select name="vsftype">
+									<option value="none">
+										None
+									</option>
+									<option value="ykd">
+										YKD
+									</option>
+								</select>
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#token', 55, 65);">Token Timeout</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10" name="token"
+									tal:attributes="value string:5000" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#token_retransmit', 55, 65);">Token Retransmit</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="token_retransmit"
+									tal:attributes="value string:238" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#hold', 55, 65);">Hold Token Timeout</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10" name="hold"
+									tal:attributes="value string:180" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#retransmits_before_loss', 55, 65);">Number of retransmits before loss</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="retransmits_before_loss"
+									tal:attributes="value string:4" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#join', 55, 65);">Join Timeout</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10" name="join"
+									tal:attributes="value string:100" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#consensus', 55, 65);">Consensus Timeout</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="consensus"
+									tal:attributes="value string:100" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#merge', 55, 65);">Merge Detection Timeout</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="merge"
+									tal:attributes="value string:200" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#downcheck', 55, 65);">Interface Down Check Timeout</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="downcheck"
+									tal:attributes="value string:1000" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#fail_to_recv_const', 55, 65);">Fail to Receive Constant</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="fail_to_recv_const"
+									tal:attributes="value string:50" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#seqno_unchanged_const', 55, 65);">Rotations with no mcast traffic before merge detection timeout started</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="seqno_unchanged_const"
+									tal:attributes="value string:30" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#heartbeat_failures_allowed', 55, 65);">Number of Heartbeat Failures Allowed</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="heartbeat_failures_allowed"
+									tal:attributes="value string:0" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#max_network_delay', 55, 65);">Maximum Network Delay</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="max_network_delay"
+									tal:attributes="value string:50" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#window_size', 55, 65);">Window Size</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="window_size"
+									tal:attributes="value string:50" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#max_messages', 55, 65);">Maximum Messages</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="max_messages"
+									tal:attributes="value string:17" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#rrp_problem_count_timeout', 55, 65);">RRP Problem Count Timeout</a> (ms)
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="rrp_problem_count_timeout"
+									tal:attributes="value string:1000" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#rrp_problem_count_threshold', 55, 65);">RRP Problem Count Threshold</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="rrp_problem_count_threshold"
+									tal:attributes="value string:20" />
+							</td>
+						</tr>
+
+						<tr class="systemsTable">
+							<td class="systemsTable">
+								<a class="cluster_help" href="javascript:popup_window('/luci/doc/config_rhel5#rrp_token_expired_timeout', 55, 65);">RRP Token Expired Timeout</a>
+							</td>
+							<td class="systemsTable">
+								<input type="text" size="10"
+									name="rrp_token_expired_timeout"
+									tal:attributes="value string:47" />
+							</td>
+						</tr>
+					</table>
+				</td></tr>
+			</table>
+
+			<table class="systemsTable">
 				<tr class="systemsTable">
 					<td class="systemsTable" colspan="2">
 						<div class="systemsTableEnd">
@@ -433,17 +732,21 @@
 						</div>
 					</td>
 				</tr>
-			</tfoot>
-		</table>
+			</table>
 		</form>
 	</div>
 
 	<div id="configTabContent" tal:condition="python: configTabNum == 2">
-		<form name="fencedaemon" method="post">
+		<form name="fencedaemon" method="post" action="">
 			<input type="hidden" name="configtype" value="fence" />
 			<input type="hidden" name="pagetype"
 				tal:attributes="value request/pagetype | request/form/pagetype"
 			/>
+			<input type="hidden" name="cluster_version"
+				tal:attributes="value os_version | nothing" />
+			<input type="hidden" name="clustername"
+				tal:attributes="value request/clustername | clusterinfo/clustername | nothing" />
+
 		<script type="text/javascript"
 			src="/luci/homebase/homebase_common.js">
 		</script>
@@ -504,6 +807,10 @@
 			<input type="hidden" name="pagetype"
 				tal:attributes="value request/pagetype | request/form/pagetype"
 			/>
+			<input type="hidden" name="cluster_version"
+				tal:attributes="value os_version | nothing" />
+			<input type="hidden" name="clustername"
+				tal:attributes="value request/clustername | clusterinfo/clustername | nothing" />
 		<table id="systemsTable" class="systemsTable" border="0" cellspacing="0">
 			<thead class="systemsTable">
 				<tr class="systemsTable"><td class="systemsTable" colspan="1">
@@ -570,6 +877,10 @@
 				tal:attributes="value request/pagetype | request/form/pagetype"
 			/>
 			<input type="hidden" name="configtype" value="qdisk" />
+			<input type="hidden" name="cluster_version"
+				tal:attributes="value os_version | nothing" />
+			<input type="hidden" name="clustername"
+				tal:attributes="value request/clustername | clusterinfo/clustername | nothing" />
 		<div class="configTabContent">
 		<table id="systemsTable" class="systemsTable" border="0" cellspacing="0">
 			<thead class="systemsTable">
@@ -779,11 +1090,13 @@
 		</script>
 		</form>
 	</div>
+</tal:block>
 </div>
 
 <div metal:define-macro="clusterprocess-form">
-	<span tal:define="global r_agent python: here.getRicciAgentForCluster(request)"/>
-	<span tal:define="res python: here.processClusterProps(r_agent, request)"/>
+	<tal:block
+		tal:define="result python: here.clusterTaskProcess(modelb, request)"/>
+	<h2>Cluster Process Form</h2>
 </div>
 
 <div metal:define-macro="fence-option-list">
@@ -1198,7 +1511,7 @@
 				<td>ESH Path (Optional)</td>
 				<td>
 					<input name="login" type="text"
-						tal:attributes="cur_fencedev/login | string: /opt/pan-mgr/bin/esh" />
+						tal:attributes="cur_fencedev/login | string:/opt/pan-mgr/bin/esh" />
 				</td>
 			</tr>
 		</table>
@@ -1412,7 +1725,9 @@
 	</tal:comment>
 
 	<tal:block tal:define="
-		global ricci_agent python: here.getRicciAgentForCluster(request);
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
+	<tal:block tal:define="
 		global nodestatus python: here.getClusterStatus(ricci_agent);
 		global nodeinfo python: here.getNodeInfo(modelb, nodestatus, request);
 		global status_class python: 'node_' + (nodeinfo['nodestate'] == '0' and 'active' or (nodeinfo['nodestate'] == '1' and 'inactive' or 'unknown'));
@@ -1476,7 +1791,7 @@
 		tal:condition="python: nodeinfo['nodestate'] == '0' or nodeinfo['nodestate'] == '1'">
 
 	<h3>Cluster daemons running on this node</h3>
-	<form name="daemon_form">
+	<form name="daemon_form" method="post">
 	<table class="systemsTable">
 		<thead>
 			<tr class="systemsTable">
@@ -1488,23 +1803,38 @@
 		<tfoot class="systemsTable">
 			<tr class="systemsTable"><td class="systemsTable" colspan="3">
 				<div class="systemsTableEnd">
-					<input type="button" value="Update node daemon properties" />
+					<input type="Submit" value="Update node daemon properties" />
 				</div>
 			</td></tr>
 		</tfoot>
 		<tbody class="systemsTable">
 			<tr class="systemsTable" tal:repeat="demon nodeinfo/d_states">
 				<td class="systemsTable"><span tal:replace="demon/name"/></td>
-				<td class="systemsTable"><span tal:replace="python: demon['running'] and 'yes' or 'no'" /></td>
+				<td class="systemsTable"><span tal:replace="python: demon['running'] == 'true' and 'yes' or 'no'" /></td>
 				<td class="systemsTable">
-					<input type="checkbox"
-						tal:attributes="
-							name python: nodeinfo['nodename'] + ':' + demon['name'];
-							checked python: demon['enabled'] and 'checked'" />
+					<input type="hidden" tal:attributes="
+						name python: '__daemon__:' + demon['name'] + ':';
+						value demon/name" />
+
+					<input type="hidden" tal:attributes="
+						name python: '__daemon__:' + demon['name'] + ':';
+						value python: demon['enabled'] == 'true' and '1' or '0'" />
+
+					<input type="checkbox" tal:attributes="
+						name python: '__daemon__:' + demon['name'] + ':';
+						checked python: demon['enabled'] == 'true' and 'checked'" />
 				</td>
 			</tr>
 		</tbody>
 	</table>
+
+	<input type="hidden" name="nodename"
+		tal:attributes="value nodeinfo/nodename | request/nodename | nothing" />
+
+	<input type="hidden" name="clustername"
+		tal:attributes="value request/clustername | nothing" />
+
+	<input type="hidden" name="pagetype" value="55" />
 	</form>
 	<hr/>
 	</tal:block>
@@ -1604,10 +1934,13 @@
 		set_page_title('Luci - cluster - nodes');
 	</script>
 
-<div id="node_list" tal:define="
-	global ricci_agent python: here.getRicciAgentForCluster(request);
-	global status python: here.getClusterStatus(ricci_agent);
-	global nds python: here.getNodesInfo(modelb, status, request)">
+<div id="node_list">
+	<tal:block tal:define="
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
+	<tal:block tal:define="
+		global status python: here.getClusterStatus(ricci_agent);
+		global nds python: here.getNodesInfo(modelb, status, request)" />
 
 	<div tal:repeat="nd nds">
 		<tal:block
@@ -1628,8 +1961,8 @@
 						<select class="node" name="gourl">
 							<option value="">Choose a Task...</option>
 							<option tal:attributes="value nd/jl_url">
-								<span tal:condition="python: nd['status'] == '0'" tal:replace="string: Have node leave cluster" />
-								<span tal:condition="python: nd['status'] == '1'" tal:replace="string: Have node join cluster" />
+								<span tal:condition="python: nd['status'] == '0'" tal:replace="string:Have node leave cluster" />
+								<span tal:condition="python: nd['status'] == '1'" tal:replace="string:Have node join cluster" />
 							</option>
 							<option value="">----------</option>
 							<option tal:attributes="value nd/fence_it_url">Fence this node</option>
@@ -1743,7 +2076,7 @@
 				value request/form/clusterName | request/clustername | nothing"
 		/>
 
-		<h2>Add a node to <span tal:replace="request/form/clusterName | request/clustername | string: the cluster" /></h2>
+		<h2>Add a node to <span tal:replace="request/form/clusterName | request/clustername | string:the cluster" /></h2>
 
 		<table id="systemsTable" class="systemsTable" border="0" cellspacing="0">
 			<thead class="systemsTable">
@@ -1808,7 +2141,10 @@
 <div metal:define-macro="nodeprocess-form">
 	<tal:block
 		tal:define="result python: here.nodeTaskProcess(modelb, request)"/>
-	<h2>Node Process Form</h2>
+
+	<div>
+		<span tal:replace="result | nothing" />
+	</div>
 </div>
 
 <div metal:define-macro="services-form">
@@ -1820,12 +2156,13 @@
 		set_page_title('Luci - cluster - services');
 	</script>
 
-	<tal:block tal:omit-tag=""
-		tal:define="
-			global ricci_agent python: here.getRicciAgentForCluster(request);
-			global svcstatus python: here.getClusterStatus(ricci_agent);
-			global svcinf python: here.getServicesInfo(svcstatus,modelb,request);
-			global svcs svcinf/services" />
+	<tal:block tal:define="
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
+	<tal:block tal:define="
+		global svcstatus python: here.getClusterStatus(ricci_agent);
+		global svcinf python: here.getServicesInfo(svcstatus,modelb,request);
+		global svcs svcinf/services" />
 
 	<tal:block tal:repeat="svc svcs">
 		<table class="cluster service" width="100%"
@@ -1866,7 +2203,7 @@
 							This service is stopped
 						</tal:block>
 					</div>
-					<p>Autostart is <span tal:condition="not: autostart" tal:replace="string: not" /> enabled for this service</p>
+					<p>Autostart is <span tal:condition="not: autostart" tal:replace="string:not" /> enabled for this service</p>
 				</td>
 			</tr>
 
@@ -1881,7 +2218,6 @@
 </div>
 
 <div metal:define-macro="xenvmadd-form">
-  <span tal:define="ress python:here.appendModel(request, modelb)"/>
   <form method="get" action="" tal:attributes="action python:request['baseurl'] + '?clustername=' + request['clustername'] + '&pagetype=29'">
   <h4>Path to configuration file: </h4><input type="text" name="xenvmpath" value=""/>
   <h4>Name of configuration file: </h4><input type="text" name="xenvmname" value=""/>
@@ -1890,7 +2226,6 @@
 </div>
 
 <div metal:define-macro="xenvmconfig-form">
-  <span tal:define="ress python:here.appendModel(request, modelb)"/>
   <h4>Properties for Xen VM <font color="green"><span tal:content="request/servicename"/></font></h4>
   <span tal:define="global xeninfo python:here.getXenVMInfo(modelb, request)">
   <form method="get" action="" tal:attributes="action python:request['baseurl'] + '?clustername=' + request['clustername'] + '&pagetype=29&servicename=' + request['servicename']">
@@ -1974,7 +2309,9 @@
 	</script>
 
 	<tal:block tal:define="
-		global ricci_agent python: here.getRicciAgentForCluster(request);
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
+	<tal:block tal:define="
 		result python: here.serviceStart(ricci_agent, request)" />
 
 	<!-- <span metal:use-macro="here/form-macros/macros/serviceconfig-form"/> -->
@@ -1987,7 +2324,9 @@
 	</script>
 
 	<tal:block tal:define="
-		global ricci_agent python: here.getRicciAgentForCluster(request);
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
+	<tal:block tal:define="
 		result python: here.serviceRestart(ricci_agent, request)" />
 
 	<!-- <span metal:use-macro="here/form-macros/macros/serviceconfig-form"/> -->
@@ -1998,15 +2337,17 @@
 		set_page_title('Luci - cluster - services - Stop a service');
 	</script>
 
+	<tal:block tal:define="
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
 	<span tal:define="
-		global ricci_agent python: here.getRicciAgentForCluster(request);
 		result python: here.serviceStop(ricci_agent, request)" />
 
 	<!-- <span metal:use-macro="here/form-macros/macros/serviceconfig-form"/> -->
 </div>
 
 <div metal:define-macro="serviceconfig-type-macro" tal:omit-tag="">
-	<span tal:omit-tag="" tal:condition="python: type == 'IP Address: '">
+	<span tal:omit-tag="" tal:condition="python: type == 'ip'">
 		<tal:block metal:use-macro="here/resource-form-macros/macros/ip_macro" />
 	</span>
 	<span tal:omit-tag="" tal:condition="python: type == 'fs'">
@@ -2027,7 +2368,7 @@
 	<span tal:omit-tag="" tal:condition="python: type == 'smb'">
 		<tal:block metal:use-macro="here/resource-form-macros/macros/smb_macro" />
 	</span>
-	<span tal:omit-tag="" tal:condition="python: type == 'Script: '">
+	<span tal:omit-tag="" tal:condition="python: type == 'script'">
 		<tal:block metal:use-macro="here/resource-form-macros/macros/scr_macro" />
 	</span>
 </div>
@@ -2041,7 +2382,9 @@
 	</script>
 
 	<tal:block tal:define="
-		global ricci_agent python: here.getRicciAgentForCluster(request);
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
+	<tal:block tal:define="
 		global global_resources python: here.getResourcesInfo(modelb, request);
 		global sstat python: here.getClusterStatus(ricci_agent);
 		global sinfo python: here.getServiceInfo(sstat, modelb, request);
@@ -2209,8 +2552,10 @@
 	</script>
 
 	<tal:block tal:define="
-		global ragent python: here.getRicciAgentForCluster(request);
-		global sta python: here.getClusterStatus(ragent);
+		global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)" />
+
+	<tal:block tal:define="
+		global sta python: here.getClusterStatus(ricci_agent);
 		global fdominfo python: here.getFdomsInfo(modelb, request, sta);" />
 
 	<div class="cluster fdom" tal:repeat="fdom fdominfo">
@@ -2299,11 +2644,24 @@
 		set_page_title('Luci - cluster - fence devices');
 	</script>
 	<h2>Shared Fence Devices for Cluster: <span tal:content="request/clustername"/></h2>
-  <tal:block tal:define="global fencedevinfo python: here.getFenceInfo(modelb, None)"/>
+  <tal:block tal:define="global fencedevinfo python: here.getFencesInfo(modelb, request)"/>
 <tal:block tal:define="global fencedevs python: fencedevinfo['fencedevs']"/>
   <span tal:repeat="fencedev fencedevs">
+   <h3>Agent type: <span tal:content="fencedev/pretty_name"/></h3>
    <h3>Name: <span tal:content="fencedev/name"/></h3>
-   <h3>Agent type: <span tal:content="fencedev/agent"/></h3>
+   <h3>Nodes using this device for fencing:</h3>
+   <ul>
+     <tal:block tal:define="global usednodes python:fencedev['nodesused']"/>
+     <span tal:condition="python: len(usednodes) == 0">
+      <li>No nodes currently employ this fence device</li>
+     </span>
+    <span tal:repeat="usednode usednodes">
+     <li><font color="green">
+      <a href="" tal:attributes="href usednode/nodeurl"><tal:block tal:content="usednode/nodename"/>
+      </a></font>
+     </li>
+    </span>
+   </ul>
    <hr/>
   </span>
 </div>
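
A pattern worth noting in the form-macros changes above is the TALES
alternate-expression fallback: a definition such as
"global ricci_agent ri_agent | python: here.getRicciAgentForCluster(request)"
reuses an agent the caller already defined as ri_agent and only performs
its own lookup when that variable is absent. A minimal standalone sketch
of the idiom (here.lookup_agent and caller_agent are illustrative names,
not conga APIs):

	<tal:block tal:define="
		global agent caller_agent | python: here.lookup_agent(request)" />
	<span tal:content="agent/hostname | string:[unknown]" />

The same "path | string:literal" form also explains the small whitespace
fixes in this patch: everything after "string:" is literal, so
"string: [unknown]" renders with a leading space while "string:[unknown]"
does not.
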
--- conga/luci/cluster/index_html	2006/11/01 22:06:55	1.20.2.3
+++ conga/luci/cluster/index_html	2006/11/16 19:34:52	1.20.2.4
@@ -9,11 +9,6 @@
                       xml:lang language">
 
    <head metal:use-macro="here/header/macros/html_header">
-
-
-
-
-
      <metal:fillbase fill-slot="base">
       <metal:baseslot define-slot="base">
         <base href="" tal:attributes="href here/renderBase" />
@@ -29,47 +24,54 @@
       </metal:cache>
 
       <metal:headslot define-slot="head_slot" />
-		<tal:block tal:define="global sinfo nothing" />
-    <div tal:define="global hascluster request/clustername |nothing; global busywaiting python:None;"/>
-    <span tal:condition="not: hascluster">
-    <meta googaa="ooo"/>
-    </span>
-
-    <span tal:condition="hascluster">
-      <span tal:define="ri_agent python:here.getRicciAgentForCluster(request)">
-
-        <span tal:define="resmap python:here.getClusterOS(ri_agent);
-                          global isVirtualized resmap/isVirtualized;
-                          global os_version resmap/os;"/>
-      </span>
-      <span tal:define="global isBusy python:here.isClusterBusy(request)"/>
-      <span tal:define="global firsttime request/busyfirst |nothing"/>
-      <span tal:condition="firsttime">
-       <span tal:define="global busywaiting python:True"/>
-        <meta http-equiv="refresh" content="" tal:attributes="content isBusy/refreshurl"/>
-      </span>
-      <span tal:define="global busy isBusy/busy |nothing"/>
-       <span tal:condition="busy">
-        <span tal:define="global busywaiting python:True"/>
-        <meta http-equiv="refresh" content="" tal:attributes="content isBusy/refreshurl"/> 
-       </span>
-    </span>
-
-      <tal:comment replace="nothing"> A slot where you can insert elements in the header from a template </tal:comment>
+	    <tal:block tal:define="
+			global sinfo nothing;
+			global hascluster request/clustername |nothing;
+			global isBusy python: False;
+			global firsttime nothing;
+			global ri_agent nothing;
+			global busywaiting python:None" />
+
+		<tal:block tal:condition="not: hascluster">
+		    <meta googaa="ooo"/>
+		</tal:block>
+
+		<tal:block tal:condition="hascluster">
+			<tal:block tal:define="
+				global ri_agent python:here.getRicciAgentForCluster(request);
+				resmap python:here.getClusterOS(ri_agent);
+				global isVirtualized resmap/isVirtualized | nothing;
+				global os_version resmap/os | nothing;
+				global isBusy python:here.isClusterBusy(request);
+				global firsttime request/busyfirst |nothing" />
+
+			<tal:block tal:condition="firsttime">
+				<tal:block tal:define="global busywaiting python:True" />
+				<meta http-equiv="refresh"
+					tal:attributes="content isBusy/refreshurl | string:." />
+			</tal:block>
+
+			<tal:block tal:define="global busy isBusy/busy |nothing"/>
+
+			<tal:block tal:condition="busy">
+				<tal:block tal:define="global busywaiting python:True" />
+				<meta http-equiv="refresh"
+					tal:attributes="content isBusy/refreshurl | string:." />
+			</tal:block>
+		</tal:block>
     </metal:headslot>
 
-
-
     <metal:cssslot fill-slot="css_slot">
-      <tal:comment replace="nothing"> A slot where you can insert CSS in the header from a template </tal:comment>
-
-  <style type="text/css"><!-- @import url(./clusterportlet.css); --></style>
-  <style type="text/css"><!-- @import url(/luci/homebase/luci_homebase.css); --></style>
-      <metal:cssslot define-slot="css_slot" />
+		<style type="text/css">
+			<!-- @import url(./clusterportlet.css); -->
+		</style>
+		<style type="text/css">
+			<!-- @import url(/luci/homebase/luci_homebase.css); -->
+		</style>
+		<metal:cssslot define-slot="css_slot" />
     </metal:cssslot>
 
     <metal:javascriptslot fill-slot="javascript_head_slot">
-      <tal:comment replace="nothing"> A slot where you can insert javascript in the header from a template </tal:comment>
 		<script type="text/javascript" src="/luci/conga.js"></script>
       <SCRIPT TYPE="text/javascript">
       <!--
@@ -121,7 +123,6 @@
        </SCRIPT>
       <metal:javascriptslot define-slot="javascript_head_slot" />
     </metal:javascriptslot>
-
   </head>
 
   <body tal:attributes="class here/getSectionFromURL;
@@ -158,15 +159,22 @@
       alternative in the plone_tableless skin layer that you can use if you
       prefer layouts that don't use tables.
       </tal:comment>
-    <!-- <div tal:define="global hascluster request/clustername |nothing"/>  -->
 
     <tal:block tal:condition="hascluster">
-	    <tal:block tal:define="global ricci_agent python: here.getRicciAgentForCluster(request)" />
-		<tal:block tal:condition="python: ricci_agent"
-			tal:define="
-				global modelb python:here.getmodelbuilder(ricci_agent,isVirtualized)" />
+		<tal:block tal:condition="python: ri_agent">
+			<tal:block tal:define="
+				global modelb python:here.getmodelbuilder(ri_agent, isVirtualized)" />
+			<tal:block tal:condition="python: modelb">
+				<tal:block
+					tal:define="dummy python: here.appendModel(request, modelb)" />
+			</tal:block>
+		</tal:block>
     </tal:block>
 
+	<tal:block tal:condition="not: exists: modelb">
+		<tal:block tal:define="global modelb nothing" />
+	</tal:block>
+
       <table id="portal-columns">
         <tbody>
           <tr>
@@ -214,8 +222,8 @@
              <metal:main-form-content use-macro="here/form-chooser/macros/main-form">
              </metal:main-form-content>
 
-	<tal:block tal:condition="python: request.SESSION.has_key('checkRet')"
-		tal:define="ret python: request.SESSION.get('checkRet')">
+	<tal:block tal:condition="python: request.SESSION.has_key('checkRet')">
+		<tal:block tal:define="ret python: request.SESSION.get('checkRet')">
 		<div class="retmsgs" id="retmsgsdiv" tal:condition="python:(ret and 'messages' in ret and len(ret['messages']))">
 			<div class="hbclosebox">
 				<a href="javascript:hide_element('retmsgsdiv');"><img src="../homebase/x.png"></a>
@@ -238,6 +246,7 @@
 				</tal:block>
 			</ul>
 		</div>
+		</tal:block>
 	</tal:block>
                   </div>
 
--- conga/luci/cluster/portlet_cluconfig	2006/09/27 22:24:11	1.2
+++ conga/luci/cluster/portlet_cluconfig	2006/11/16 19:34:52	1.2.2.1
@@ -10,7 +10,7 @@
 
 <dl class="portlet" id="portlet-cluconfig-tree">
     <dt class="portletHeader">
-        <a href="#">
+        <a href="/luci/cluster/index_html?pagetype=3">
           Clusters
         </a>
     </dt>
@@ -36,7 +36,8 @@
 
 <dl class="portlet" id="portlet-cluconfig-tree">
     <dt class="portletHeader">
-        <a href="#" tal:attributes="href python:here.getClusterURL(request,modelb)">
+        <a href="/luci/cluster/index_html?pagetype=3"
+			tal:attributes="href python:here.getClusterURL(request,modelb)">
           <div tal:omit-tag="" tal:content="python: here.getClusterAlias(modelb)" />
         </a>
     </dt>
--- conga/luci/cluster/resource-form-macros	2006/10/31 17:28:03	1.21.2.1
+++ conga/luci/cluster/resource-form-macros	2006/11/16 19:34:53	1.21.2.2
@@ -43,8 +43,7 @@
 
 	<tal:block
 		tal:define="
-			global rescInf python: here.getResourcesInfo(modelb, request);
-			global msg python: here.appendModel(request, modelb)" />
+			global rescInf python: here.getResourcesInfo(modelb, request)" />
 
 	<table class="systemsTable">
 		<thead class="systemsTable">
@@ -258,44 +257,43 @@
 	<tal:block tal:define="global resourcename request/resourcename | request/form/resourceName | nothing" />
 	<tal:block tal:condition="resourcename"
 		tal:define="
-			global msg python: here.appendModel(request, modelb);
 			global res python: here.getResourceInfo(modelb, request);
 			global type python: 'tag_name' in res and res['tag_name'] or ''">
 
 	<h2>Configure <span tal:replace="res/name | string: resource" /></h2>
 
 	<div class="reschoose">
-		<span tal:omit-tag="" tal:condition="python: type == 'ip'">
+		<tal:block tal:condition="python: type == 'ip'">
 			<div metal:use-macro="here/resource-form-macros/macros/ip_macro" />
-		</span>
+		</tal:block>
 
-		<span tal:omit-tag="" tal:condition="python: type == 'fs'">
+		<tal:block tal:condition="python: type == 'fs'">
 			<div metal:use-macro="here/resource-form-macros/macros/fs_macro" />
-		</span>
+		</tal:block>
 
-		<span tal:omit-tag="" tal:condition="python: type == 'gfs'">
+		<tal:block tal:condition="python: type == 'gfs'">
 			<div metal:use-macro="here/resource-form-macros/macros/gfs_macro" />
-		</span>
+		</tal:block>
 
-		<span tal:omit-tag="" tal:condition="python: type == 'nfsm'">
+		<tal:block tal:condition="python: type == 'nfsm'">
 			<div metal:use-macro="here/resource-form-macros/macros/nfsm_macro"/>
-		</span>
+		</tal:block>
 
-		<span tal:omit-tag="" tal:condition="python: type == 'nfsx'">
+		<tal:block tal:condition="python: type == 'nfsx'">
 			<div metal:use-macro="here/resource-form-macros/macros/nfsx_macro"/>
-		</span>
+		</tal:block>
 
-		<span tal:omit-tag="" tal:condition="python: type == 'nfsc'">
+		<tal:block tal:condition="python: type == 'nfsc'">
 			<div metal:use-macro="here/resource-form-macros/macros/nfsc_macro"/>
-		</span>
+		</tal:block>
 
-		<span tal:omit-tag="" tal:condition="python: type == 'smb'">
+		<tal:block tal:condition="python: type == 'smb'">
 			<div metal:use-macro="here/resource-form-macros/macros/smb_macro" />
-		</span>
+		</tal:block>
 
-		<span tal:omit-tag="" tal:condition="python: type == 'script'">
+		<tal:block tal:condition="python: type == 'script'">
 			<div metal:use-macro="here/resource-form-macros/macros/scr_macro" />
-		</span>
+		</tal:block>
 	</div>
 	</tal:block>
 </div>
@@ -686,14 +684,12 @@
 				<input type="radio" name="nfstype" value="nfs"
 					tal:attributes="
 						disabled python: editDisabled;
-						content string: NFS (version 3);
-						checked python: nfstype == 'nfs' and 'checked'" />
+						checked python: nfstype != 'nfs4' and 'checked'" />NFS3
 				<br/>
 				<input type="radio" name="nfstype" value="nfs4"
 					tal:attributes="
 						disabled python: editDisabled;
-						content string: NFS4;
-						checked python: nfstype == 'nfs4' and 'checked'" />
+						checked python: nfstype == 'nfs4' and 'checked'" />NFS4
 			</td>
 		</tr>
 
--- conga/luci/cluster/resource_form_handlers.js	2006/10/31 17:28:03	1.20.2.1
+++ conga/luci/cluster/resource_form_handlers.js	2006/11/16 19:34:53	1.20.2.2
@@ -101,7 +101,7 @@
 function validate_nfs_mount(form) {
 	var errors = new Array();
 
-	if (!form.mountpoint || str_is_blank(form.mounpoint.value)) {
+	if (!form.mountpoint || str_is_blank(form.mountpoint.value)) {
 		errors.push('No mount point was given.');
 		set_form_err(form.mountpoint);
 	} else
/cvs/cluster/conga/luci/docs/config_rhel5,v  -->  standard output
revision 1.2.2.1
--- conga/luci/docs/config_rhel5
+++ -	2006-11-16 19:34:57.842868000 +0000
@@ -0,0 +1,260 @@
+<html><head><title>Advanced Cluster Configuration Parameters</title>
+</head><body>
+<h2>Advanced Cluster Configuration Parameters</h2>
+<p>
+<dl compact>
+<dt><a name="secauth"><strong>secauth</strong></a><dd>
+This specifies that HMAC/SHA1 authentication should be used to authenticate
+all messages.  It further specifies that all data should be encrypted with the
+sober128 encryption algorithm to protect data from eavesdropping.
+<p>
+Enabling this option adds a 36 byte header to every message sent by totem which
+reduces total throughput.  Encryption and authentication consume 75% of CPU
+cycles in aisexec as measured with gprof when enabled.
+<p>
+For 100mbit networks with 1500 MTU frame transmissions:
+A throughput of 9mb/sec is possible with 100% cpu utilization when this
+option is enabled on 3ghz cpus.
+A throughput of 10mb/sec is possible with 20% cpu utilization when this
+option is disabled on 3ghz cpus.
+<p>
+For gig-e networks with large frame transmissions:
+A throughput of 20mb/sec is possible when this option is enabled on
+3ghz cpus.
+A throughput of 60mb/sec is possible when this option is disabled on
+3ghz cpus.
+<p>
+The default is on.
+<p>
+<dt><a name="rrp_mode"><strong>rrp_mode</strong></a><dd>
+This specifies the mode of redundant ring, which may be none, active, or
+passive.  Active replication offers slightly lower latency from transmit
+to delivery in faulty network environments but with less performance.
+Passive replication may nearly double the speed of the totem protocol
+if the protocol doesn't become cpu bound.  The final option is none, in
+which case only one network interface will be used to operate the totem
+protocol.
+<p>
+If only one interface directive is specified, none is automatically chosen.
+If multiple interface directives are specified, only active or passive may
+be chosen.
+<p>
+<dt><a name="netmtu"><strong>netmtu</strong></a><dd>
+This specifies the network maximum transmit unit.  Setting this value beyond
+1500, the regular frame MTU, requires ethernet devices that support large
+frames, also called jumbo frames.  If any device in the network doesn't support
+large frames, the protocol will not operate properly.  The hosts must also have
+their MTU size raised from 1500 to whatever frame size is specified here.
+<p>
+Please note that while some NICs or switches claim large frame support, they support
+9000 MTU as the maximum frame size including the IP header.  Setting the netmtu
+and host MTUs to 9000 will cause totem to use the full 9000 bytes of the frame.
+Linux will then add an 18 byte header, moving the full frame size to 9018.  As a
+result, some hardware will not operate properly with this size of data.  A netmtu
+of 8982 seems to work for the few large frame devices that have been tested.
+Some manufacturers claim large frame support when in fact they support frame
+sizes of 4500 bytes.
+<p>
+Increasing the MTU from 1500 to 8982 doubles throughput performance from 30MB/sec
+to 60MB/sec as measured with evsbench with 175000 byte messages with the secauth 
+directive set to off.
+<p>
+When sending multicast traffic, if the network frequently reconfigures, chances are
+that some device in the network doesn't support large frames.
+<p>
+Choose hardware carefully if intending to use large frame support.
+<p>
+The default is 1500.
+<p>
+<dt><a name="threads"><strong>threads</strong></a><dd>
+This directive controls how many threads are used to encrypt and send multicast
+messages.  If secauth is off, the protocol will never use threaded sending.
+If secauth is on, this directive allows systems to be configured to use
+multiple threads to encrypt and send multicast messages.
+<p>
+A thread directive of 0 indicates that no threaded send should be used.  This
+mode offers best performance for non-SMP systems. 
+<p>
+The default is 0.
+<p>
+<dt><a name="vsftype"><strong>vsftype</strong></a><dd>
+This directive controls the virtual synchrony filter type used to identify
+a primary component.  The preferred choice is YKD dynamic linear voting;
+however, for clusters larger than 32 nodes YKD consumes a lot of memory.  For
+large scale clusters that are created by changing the MAX_PROCESSORS_COUNT 
+#define in the C code totem.h file, the virtual synchrony filter &quot;none&quot; is
+recommended but then AMF and DLCK services (which are currently experimental)
+are not safe for use.
+<p>
+The default is ykd.  The vsftype can also be set to none.
+<p>
+Within the 
+<B>totem </B>
+
+directive, there are several configuration options which are used to control
+the operation of the protocol.  It is generally not recommended to change any
+of these values without proper guidance and sufficient testing.  Some networks
+may require larger values if suffering from frequent reconfigurations.  Some
+applications may require faster failure detection times which can be achieved
+by reducing the token timeout.
+<p>
+<dt><a name="token"><strong>token</strong></a><dd>
+This timeout specifies in milliseconds how long to wait without receiving a
+token before a token loss is declared.  This is the time spent detecting a
+failure of a processor
+in the current configuration.  Reforming a new configuration takes about 50
+milliseconds in addition to this timeout.
+<p>
+The default is 5000 milliseconds.
+<p>
+<dt><a name="token_retransmit"><strong>token_retransmit</strong></a><dd>
+This timeout specifies in milliseconds how long to wait without receiving a
+token before the token is retransmitted.  This will be automatically calculated
+if token is modified.  It is not recommended to alter this value without
+guidance from
+the openais community.
+<p>
+The default is 238 milliseconds.
+<p>
+<dt><a name="hold"><strong>hold</strong></a><dd>
+This timeout specifies in milliseconds how long the token should be held by
+the representative when the protocol is under low utilization.   It is not
+recommended to alter this value without guidance from the openais community.
+<p>
+The default is 180 milliseconds.
+<p>
+<dt><a name="retransmits_before_loss"><strong>retransmits_before_loss</strong></a><dd>
+This value identifies how many token retransmits should be attempted before
+forming a new configuration.  If this value is set, retransmit and hold will
+be automatically calculated from retransmits_before_loss and token.
+<p>
+The default is 4 retransmissions.
+<p>
+<dt><a name="join"><strong>join</strong></a><dd>
+This timeout specifies in milliseconds how long to wait for join messages in 
+the membership protocol.
+<p>
+The default is 100 milliseconds.
+<p>
+<dt><a name="send_join"><strong>send_join</strong></a><dd>
+This timeout specifies in milliseconds the upper bound of a random wait,
+between 0 and send_join, applied before sending a join message.  For
+configurations with fewer than 32 nodes, this parameter is not necessary.
+For larger rings, this parameter
+is necessary to ensure the NIC is not overflowed with join messages on
+formation of a new ring.  A reasonable value for large rings (128 nodes) would
+be 80msec.  Other timer values must also change if this value is changed.  Seek
+advice from the openais mailing list if trying to run larger configurations.
+<p>
+The default is 0 milliseconds.
+<p>
+<dt><a name="consensus"><strong>consensus</strong></a><dd>
+This timeout specifies in milliseconds how long to wait for consensus to be
+achieved before starting a new round of membership configuration.
+<p>
+The default is 200 milliseconds.
+<p>
+<dt><a name="merge"><strong>merge</strong></a><dd>
+This timeout specifies in milliseconds how long to wait before checking for
+a partition when no multicast traffic is being sent.  If multicast traffic
+is being sent, the merge detection happens automatically as a function of
+the protocol.
+<p>
+The default is 200 milliseconds.
+<p>
+<dt><a name="downcheck"><strong>downcheck</strong></a><dd>
+This timeout specifies in milliseconds how long to wait before checking
+that a network interface is back up after it has been downed.
+<p>
+The default is 1000 milliseconds.
+<p>
+<dt><a name="fail_to_recv_const"><strong>fail_to_recv_const</strong></a><dd>
+This constant specifies how many rotations of the token may occur without
+receiving any messages, when messages should have been received, before a new
+configuration is formed.
+<p>
+The default is 50 failures to receive a message.
+<p>
+<dt><a name="seqno_unchanged_const"><strong>seqno_unchanged_const</strong></a><dd>
+This constant specifies how many rotations of the token without any multicast
+traffic should occur before the merge detection timeout is started.
+<p>
+The default is 30 rotations.
+<p>
+<dt><a name="heartbeat_failures_allowed"><strong>heartbeat_failures_allowed</strong></a><dd>
+[HeartBeating mechanism]
+Configures the optional HeartBeating mechanism for faster failure detection. Keep in
+mind that engaging this mechanism in lossy networks could cause faulty loss declaration 
+as the mechanism relies on the network for heartbeating. 
+<p>
+So, as a rule of thumb, use this mechanism if you require improved failure
+detection in low to medium utilized networks.
+<p>
+This constant specifies the number of heartbeat failures the system should tolerate
+before declaring heartbeat failure, e.g. 3.  Also, if this value is not set or
+is 0, then the heartbeat mechanism is not engaged in the system and token
+rotation is the method of failure detection.
+<p>
+The default is 0 (disabled).
+<p>
+<dt><a name="max_network_delay"><strong>max_network_delay</strong></a><dd>
+[HeartBeating mechanism]
+This constant specifies in milliseconds the approximate delay that your network takes
+to transport one packet from one machine to another.  This value should be set
+by system engineers; please do not change it if unsure, as it affects the
+failure detection mechanism that uses heartbeat.
+<p>
+The default is 50 milliseconds.
+<p>
+<dt><a name="window_size"><strong>window_size</strong></a><dd>
+This constant specifies the maximum number of messages that may be sent on one
+token rotation.  If all processors perform equally well, this value could be
+large (300), which would introduce higher latency from origination to delivery
+for very large rings.  To reduce latency in large rings (16+), the defaults are
+a safe compromise.  If one or more slow processors are present among fast
+processors, window_size should be no larger than 256000 / netmtu to avoid
+overflow of the kernel receive buffers.  The user is notified of this by
+the display of a retransmit list in the notification logs.  There is no loss
+of data, but performance is reduced when these errors occur.
+<p>
+The default is 50 messages.
+<p>
+<dt><a name="max_messages"><strong>max_messages</strong></a><dd>
+This constant specifies the maximum number of messages that may be sent by one
+processor on receipt of the token.  The max_messages parameter is limited to
+256000 / netmtu to prevent overflow of the kernel transmit buffers.
+<p>
+The default is 17 messages.
+<p>
+<dt><a name="rrp_problem_count_timeout"><strong>rrp_problem_count_timeout</strong></a><dd>
+This specifies the time in milliseconds to wait before decrementing the
+problem count by 1 for a particular ring to ensure a link is not marked
+faulty for transient network failures.
+<p>
+The default is 1000 milliseconds.
+<p>
+<dt><a name="rrp_problem_count_threshold"><strong>rrp_problem_count_threshold</strong></a><dd>
+This specifies the number of times a problem is detected with a link before
+setting the link faulty.  Once a link is set faulty, no more data is
+transmitted upon it.  Also, the problem counter is no longer decremented when
+the problem count timeout expires.
+<p>
+A problem is detected whenever all tokens from the preceding processor have
+not been received within the rrp_token_expired_timeout.  The
+rrp_problem_count_threshold * rrp_token_expired_timeout should be at least 50
+milliseconds less than the token timeout, or a complete reconfiguration
+may occur.
+<p>
+The default is 20 problem counts.
+<p>
+<dt><a name="rrp_token_expired_timeout"><strong>rrp_token_expired_timeout</strong></a><dd>
+This specifies the time in milliseconds to increment the problem counter for
+the redundant ring protocol after not having received a token from all rings
+for a particular processor.
+<p>
+This value will automatically be calculated from the token timeout and
+problem_count_threshold but may be overridden.  It is not recommended to
+override this value without guidance from the openais community.
+<p>
+The default is 47 milliseconds.
+<p>
+</dl>
+</body>
+</html>
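
For orientation, these totem parameters end up in cluster.conf as attributes
of the totem element.  A minimal sketch, assuming the attribute names match
the parameter names documented above (the exact names accepted by a given
openais release should be checked against its documentation):

    <cluster name="example" config_version="1">
      <totem secauth="on" rrp_mode="passive" netmtu="1500"
             token="5000" join="100" consensus="200"/>
      <!-- node, fence device and service definitions omitted -->
    </cluster>
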
--- conga/luci/homebase/form-chooser	2006/10/09 16:16:11	1.10
+++ conga/luci/homebase/form-chooser	2006/11/16 19:34:53	1.10.2.1
@@ -4,10 +4,6 @@
 	<title id="pagetitle" tal:content="title">The title</title>
 </head>
 
-<tal:comment replace="nothing">
-	$Id: form-chooser,v 1.10 2006/10/09 16:16:11 rmccabe Exp $
-</tal:comment>
-
 <body>
 
 <metal:choose-form metal:define-macro="main-form">
--- conga/luci/homebase/form-macros	2006/11/01 22:06:55	1.44.2.3
+++ conga/luci/homebase/form-macros	2006/11/16 19:34:53	1.44.2.4
@@ -1,9 +1,5 @@
 <html>
 
-<tal:comment tal:replace="nothing">
-	$Id: form-macros,v 1.44.2.3 2006/11/01 22:06:55 rmccabe Exp $
-</tal:comment>
-
 <head>
 	<title tal:content="string:"></title>
 </head>
--- conga/luci/homebase/homebase_common.js	2006/10/04 17:24:58	1.13
+++ conga/luci/homebase/homebase_common.js	2006/11/16 19:34:53	1.13.2.1
@@ -8,6 +8,35 @@
 		ielem.className = ielem.className.replace(/ formerror/, '');
 }
 
+function toggle_visible(img_obj, elem_id, label_id) {
+	var elem = document.getElementById(elem_id);
+	if (!elem)
+		return (-1);
+
+	var old_state = !elem.className.match(/invisible/i);
+
+	if (label_id) {
+		var label = document.getElementById(label_id);
+		if (!label)
+			return (-1);
+		if (old_state)
+			label.innerHTML = 'Show';
+		else
+			label.innerHTML = 'Hide';
+	}
+
+	if (old_state) {
+		img_obj.src = 'arrow_right.png';
+		img_obj.alt = '[-]';
+		elem.className += ' invisible';
+	} else {
+		img_obj.src = 'arrow_down.png';
+		img_obj.alt = '[+]';
+		elem.className = elem.className.replace(/invisible/i,'');
+	}
+	return (0);
+}
+
 function is_valid_int(str, min, max) {
 	if (str.match(/[^0-9 -]/))
 		return (0);
--- conga/luci/homebase/homebase_portlet_fetcher	2006/05/18 17:47:15	1.3
+++ conga/luci/homebase/homebase_portlet_fetcher	2006/11/16 19:34:53	1.3.2.1
@@ -3,10 +3,6 @@
 
 <body>
 
-<tal:comment replace="nothing">
-	$Id: homebase_portlet_fetcher,v 1.3 2006/05/18 17:47:15 rmccabe Exp $
-</tal:comment>
-
 <metal:leftcolumn define-macro="left_column">
 	<span tal:omit-tag="">
 		<div tal:omit-tag="" metal:use-macro="here/portlet_homebase/macros/homebase_portlet" />
--- conga/luci/homebase/index_html	2006/10/31 17:28:04	1.18.2.1
+++ conga/luci/homebase/index_html	2006/11/16 19:34:53	1.18.2.2
@@ -14,10 +14,6 @@
 	tal:attributes="lang language;
 					xml:lang language">
 
-<tal:comment replace="nothing">
-	$Id: index_html,v 1.18.2.1 2006/10/31 17:28:04 rmccabe Exp $
-</tal:comment>
-
 <head metal:use-macro="here/header/macros/html_header">
 	<metal:fillbase fill-slot="base">
 		<metal:baseslot define-slot="base">
--- conga/luci/homebase/luci_homebase.css	2006/10/16 19:13:45	1.28
+++ conga/luci/homebase/luci_homebase.css	2006/11/16 19:34:53	1.28.2.1
@@ -380,6 +380,20 @@
 	padding: .5em;
 }
 
+a.cluster_help:link,
+a.cluster_help:visited,
+a.cluster_help:visited {
+	color: #000000;
+	text-decoration: none ! important;
+}
+
+a.cluster_help:hover {
+	text-decoration: none ! important;
+	cursor: help;
+	color: #000000;
+	border-bottom: 1px solid #cccccc;
+}
+
 a.cluster:link,
 a.cluster:visited {
 	border-bottom: 1px dashed #cccccc;
--- conga/luci/homebase/portlet_homebase	2006/06/20 21:21:47	1.7
+++ conga/luci/homebase/portlet_homebase	2006/11/16 19:34:53	1.7.2.1
@@ -5,10 +5,6 @@
 
 <body>
 
-<tal:comment replace="nothing">
-	$Id: portlet_homebase,v 1.7 2006/06/20 21:21:47 rmccabe Exp $
-</tal:comment>
-
 <div metal:define-macro="homebase_portlet">
 	<div class="type-node">
 	<dl class="portlet" id="portlet-homebase">
--- conga/luci/plone-custom/conga.js	2006/10/10 19:19:13	1.3
+++ conga/luci/plone-custom/conga.js	2006/11/16 19:34:53	1.3.2.1
@@ -8,5 +8,7 @@
 function popup_window(url, width_percent, height_percent) {
 	var width = window.innerWidth * (width_percent / 100);
 	var height = window.innerHeight * (height_percent / 100);
-	window.open('luci/doc', '', 'width=' + width + ',height=' + height + ',scrollbars,resizable', false);
+	var newwin = window.open(url, 'Conga Help', 'width=' + width + ',height=' + height + ',scrollbars,resizable', false);
+	if (newwin)
+		newwin.focus();
 }
--- conga/luci/plone-custom/footer	2006/09/19 14:48:21	1.2
+++ conga/luci/plone-custom/footer	2006/11/16 19:34:53	1.2.2.1
@@ -6,7 +6,7 @@
 	<span i18n:translate="description_copyright" tal:omit-tag="">
 		The 
 		<span>
-			<a href="http://redhat.com">
+			<a href="http://www.sourceware.org/cluster/conga">
 				Conga Cluster and Storage Management System
 			</a>
 		</span>
@@ -19,7 +19,7 @@
 			i18n:name="current_year" 
 			tal:define="now modules/DateTime/DateTime" 
 			tal:content="now/year" />
-		by <a href="http://redhat.com/Conga">Red Hat, Inc</a>
+		by <a href="http://www.redhat.com/">Red Hat, Inc</a>
 	</span>
 </p>
 
--- conga/luci/site/luci/Extensions/FenceDaemon.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/FenceDaemon.py	2006/11/16 19:34:53	1.1.2.1
@@ -27,4 +27,10 @@
     val = self.getAttribute("clean_start")
     return val
 
+  def setPostJoinDelay(self, delay):
+    self.addAttribute("post_join_delay", delay)
+
+  def setPostFailDelay(self, delay):
+    self.addAttribute("post_fail_delay", delay)
+
 
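
The two new setters are the write-side counterparts of the existing accessors.
A minimal sketch of how the fence daemon validation code added to
cluster_adapters.py later in this patch drives them (names taken from that
hunk):

    fd = model.getFenceDaemonPtr()
    if post_join_delay != fd.getPostJoinDelay() or \
       post_fail_delay != fd.getPostFailDelay():
        # the delays are stored as string attributes on the config node
        fd.setPostJoinDelay(str(post_join_delay))
        fd.setPostFailDelay(str(post_fail_delay))
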
--- conga/luci/site/luci/Extensions/FenceHandler.py	2006/10/16 19:58:38	1.4
+++ conga/luci/site/luci/Extensions/FenceHandler.py	2006/11/16 19:34:53	1.4.2.1
@@ -68,6 +68,8 @@
               "fence_egenera":True,
               "fence_bullpap":True,
               "fence_drac":False,
+              "fence_xvm":True,
+              "fence_scsi":True,
               "fence_ipmilan":False,
               "fence_manual":False }
 
--- conga/luci/site/luci/Extensions/LuciSyslog.py	2006/10/31 17:28:04	1.2.2.2
+++ conga/luci/site/luci/Extensions/LuciSyslog.py	2006/11/16 19:34:53	1.2.2.3
@@ -3,14 +3,12 @@
 		LOG_DAEMON, LOG_PID, LOG_NDELAY, LOG_INFO, \
 		LOG_WARNING, LOG_AUTH, LOG_DEBUG
 
-"""Exception class for the LuciSyslog facility
-"""
+# Exception class for the LuciSyslog facility
 class LuciSyslogError(Exception):
 	def __init__(self, msg):
 		Exception.__init__(self, msg)
 
-"""Facility that provides centralized syslog(3) functionality for luci
-"""
+# Facility that provides centralized syslog(3) functionality for luci
 class LuciSyslog:
 	def __init__(self):
 		self.__init = 0
@@ -26,7 +24,8 @@
 		try:
 			syslog(LOG_INFO, msg)
 		except:
-			raise LuciSyslogError, 'syslog info call failed'
+			pass
+			#raise LuciSyslogError, 'syslog info call failed'
 
 	def warn(self, msg):
 		if not self.__init:
@@ -34,7 +33,8 @@
 		try:
 			syslog(LOG_WARNING, msg)
 		except:
-			raise LuciSyslogError, 'syslog warn call failed'
+			pass
+			#raise LuciSyslogError, 'syslog warn call failed'
 
 	def private(self, msg):
 		if not self.__init:
@@ -42,15 +42,30 @@
 		try:
 			syslog(LOG_AUTH, msg)
 		except:
-			raise LuciSyslogError, 'syslog private call failed'
+			pass
+			#raise LuciSyslogError, 'syslog private call failed'
 
 	def debug_verbose(self, msg):
 		if not LUCI_DEBUG_MODE or LUCI_DEBUG_VERBOSITY < 2 or not self.__init:
 			return
-		try:
-			syslog(LOG_DEBUG, msg)
-		except:
-			raise LuciSyslogError, 'syslog debug call failed'
+
+		msg_len = len(msg)
+		if msg_len < 1:
+			return
+
+		while True:
+			cur_len = min(msg_len, 800)
+			cur_msg = msg[:cur_len]
+			try:
+				syslog(LOG_DEBUG, cur_msg)
+			except:
+				pass
+
+			msg_len -= cur_len
+			if msg_len > 0:
+				msg = msg[cur_len:]
+			else:
+				break
 
 	def debug(self, msg):
 		if not LUCI_DEBUG_MODE or not self.__init:
@@ -58,7 +73,8 @@
 		try:
 			syslog(LOG_DEBUG, msg)
 		except:
-			raise LuciSyslogError, 'syslog debug call failed'
+			pass
+			#raise LuciSyslogError, 'syslog debug call failed'
 
 	def close(self):
 		try:
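
The rewritten debug_verbose() above never hands syslog(3) more than 800 bytes
per call, presumably to stay under the daemon's per-message length limit
instead of having long messages truncated.  The same pattern as a standalone
sketch (the helper name is hypothetical, not part of this patch):

    # Emit msg through 'emit' in pieces of at most chunk_len bytes.
    def log_in_chunks(emit, msg, chunk_len=800):
        while msg:
            emit(msg[:chunk_len])
            msg = msg[chunk_len:]

    # usage, e.g.: log_in_chunks(lambda m: syslog(LOG_DEBUG, m), long_msg)
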
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2006/10/31 17:28:04	1.120.2.8
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2006/11/16 19:34:53	1.120.2.9
@@ -1,11 +1,10 @@
 import socket
 from ModelBuilder import ModelBuilder
 from xml.dom import minidom
-from ZPublisher import HTTPRequest
 import AccessControl
 from conga_constants import *
 from ricci_bridge import *
-from ricci_communicator import *
+from ricci_communicator import RicciCommunicator, RicciError, batch_status, extract_module_status
 from string import lower
 import time
 import Products.ManagedSystem
@@ -20,10 +19,11 @@
 from Script import Script
 from Samba import Samba
 from clusterOS import resolveOSType
+from FenceHandler import FenceHandler, FENCE_OPTS
 from GeneralError import GeneralError
 from UnknownClusterError import UnknownClusterError
 from homebase_adapters import nodeUnauth, nodeAuth, manageCluster, createClusterSystems, havePermCreateCluster, setNodeFlag, delNodeFlag, userAuthenticated, getStorageNode, getClusterNode
-from LuciSyslog import LuciSyslogError, LuciSyslog
+from LuciSyslog import LuciSyslog
 
 #Policy for showing the cluster chooser menu:
 #1) If there are no clusters in the ManagedClusterSystems
@@ -33,11 +33,9 @@
 #then only display chooser if the current user has
 #permissions on at least one. If the user is admin, show ALL clusters
 
-CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
-
 try:
 	luci_log = LuciSyslog()
-except LuciSyslogError, e:
+except:
 	pass
 
 def validateClusterNodes(request, sessionData, clusterName, numStorage):
@@ -114,7 +112,6 @@
 
 def validateCreateCluster(self, request):
 	errors = list()
-	messages = list()
 	requestResults = {}
 
 	if not havePermCreateCluster(self):
@@ -188,7 +185,7 @@
 		batchNode = createClusterBatch(cluster_os,
 						clusterName,
 						clusterName,
-						map(lambda x: x['ricci_host'], nodeList),
+						map(lambda x: x['host'], nodeList),
 						True,
 						True,
 						enable_storage,
@@ -213,10 +210,10 @@
 		for i in nodeList:
 			success = True
 			try:
-				rc = RicciCommunicator(i['ricci_host'])
+				rc = RicciCommunicator(i['host'])
 			except RicciError, e:
 				luci_log.debug('Unable to connect to the ricci agent on %s: %s'\
-					% (i['ricci_host'], str(e)))
+					% (i['host'], str(e)))
 				success = False
 			except:
 				success = False
@@ -224,39 +221,48 @@
 			if success == True:
 				try:
 					resultNode = rc.process_batch(batchNode, async=True)
-					batch_id_map[i['ricci_host']] = resultNode.getAttribute('batch_id')
+					batch_id_map[i['host']] = resultNode.getAttribute('batch_id')
 				except:
 					success = False
 
 			if not success:
 				nodeUnauth(nodeList)
 				cluster_properties['isComplete'] = False
-				errors.append('An error occurred while attempting to add cluster node \"' + i['ricci_host'] + '\"')
+				errors.append('An error occurred while attempting to add cluster node \"' + i['host'] + '\"')
 				return (False, {'errors': errors, 'requestResults':cluster_properties })
 		buildClusterCreateFlags(self, batch_id_map, clusterName)
 
-	messages.append('Creation of cluster \"' + clusterName + '\" has begun')
-	return (True, {'errors': errors, 'messages': messages })
+	response = request.RESPONSE
+	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
 
 def buildClusterCreateFlags(self, batch_map, clusterName):
-  path = str(CLUSTER_FOLDER_PATH + clusterName)
-  clusterfolder = self.restrictedTraverse(path)
-  for key in batch_map.keys():
-    key = str(key)
-    id = batch_map[key]
-    batch_id = str(id)
-    objname = str(key + "____flag") #This suffix needed to avoid name collision
-    clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-    #now designate this new object properly
-    objpath = str(path + "/" + objname)
-    flag = self.restrictedTraverse(objpath)
-    #flag[BATCH_ID] = batch_id
-    #flag[TASKTYPE] = CLUSTER_ADD
-    #flag[FLAG_DESC] = "Creating node " + key + " for cluster " + clusterName
-    flag.manage_addProperty(BATCH_ID,batch_id, "string")
-    flag.manage_addProperty(TASKTYPE,CLUSTER_ADD, "string")
-    flag.manage_addProperty(FLAG_DESC,"Creating node " + key + " for cluster " + clusterName, "string")
-    flag.manage_addProperty(LAST_STATUS, 0, "int")
+	path = str(CLUSTER_FOLDER_PATH + clusterName)
+
+	try:
+		clusterfolder = self.restrictedTraverse(path)
+	except Exception, e:
+		luci_log.debug_verbose('buildCCF0: no cluster folder at %s' % path)
+		return None
+
+	for key in batch_map.keys():
+		try:
+			key = str(key)
+			batch_id = str(batch_map[key])
+			#This suffix needed to avoid name collision
+			objname = str(key + "____flag")
+
+			clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+			#now designate this new object properly
+			objpath = str(path + "/" + objname)
+			flag = self.restrictedTraverse(objpath)
+
+			flag.manage_addProperty(BATCH_ID, batch_id, "string")
+			flag.manage_addProperty(TASKTYPE, CLUSTER_ADD, "string")
+			flag.manage_addProperty(FLAG_DESC, "Creating node " + key + " for cluster " + clusterName, "string")
+			flag.manage_addProperty(LAST_STATUS, 0, "int")
+		except Exception, e:
+			luci_log.debug_verbose('buildCCF1: error creating flag for %s: %s' \
+				% (key, str(e)))
 
 def validateAddClusterNode(self, request):
 	errors = list()
@@ -264,7 +270,7 @@
 	requestResults = {}
 
 	try:
-	 	sessionData = request.SESSION.get('checkRet')
+		sessionData = request.SESSION.get('checkRet')
 	except:
 		sessionData = None
 
@@ -333,7 +339,8 @@
 	while i < len(nodeList):
 		clunode = nodeList[i]
 		try:
-			batchNode = addClusterNodeBatch(clusterName,
+			batchNode = addClusterNodeBatch(clunode['os'],
+							clusterName,
 							True,
 							True,
 							enable_storage,
@@ -346,7 +353,7 @@
 			clunode['errors'] = True
 			nodeUnauth(nodeList)
 			cluster_properties['isComplete'] = False
-			errors.append('Unable to initiate node creation for host \"' + clunode['ricci_host'] + '\"')
+			errors.append('Unable to initiate node creation for host \"' + clunode['host'] + '\"')
 
 	if not cluster_properties['isComplete']:
 		return (False, {'errors': errors, 'requestResults': cluster_properties})
@@ -363,28 +370,29 @@
 		clunode = nodeList[i]
 		success = True
 		try:
-			rc = RicciCommunicator(clunode['ricci_host'])
-		except:
-			luci_log.info('Unable to connect to the ricci daemon on host ' + clunode['ricci_host'])
+			rc = RicciCommunicator(clunode['host'])
+		except Exception, e:
+			luci_log.info('Unable to connect to the ricci daemon on host %s: %s'% (clunode['host'], str(e)))
 			success = False
 
 		if success:
 			try:
 				resultNode = rc.process_batch(batchNode, async=True)
-				batch_id_map[clunode['ricci_host']] = resultNode.getAttribute('batch_id')
+				batch_id_map[clunode['host']] = resultNode.getAttribute('batch_id')
 			except:
 				success = False
 
 		if not success:
 			nodeUnauth(nodeList)
 			cluster_properties['isComplete'] = False
-			errors.append('An error occurred while attempting to add cluster node \"' + clunode['ricci_host'] + '\"')
+			errors.append('An error occurred while attempting to add cluster node \"' + clunode['host'] + '\"')
 			return (False, {'errors': errors, 'requestResults': cluster_properties})
 
-			messages.append('Cluster join initiated for host \"' + clunode['ricci_host'] + '\"')
-
+	messages.append('Cluster join initiated for host \"' + clunode['host'] + '\"')
 	buildClusterCreateFlags(self, batch_id_map, clusterName)
-	return (True, {'errors': errors, 'messages': messages})
+
+	response = request.RESPONSE
+	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
 
 def validateServiceAdd(self, request):
 	try:
@@ -420,16 +428,18 @@
 			form_hash[form_parent] = {'form': None, 'kids': []}
 		form_hash[form_parent]['kids'].append(form_id)
 		dummy_form = {}
+
 		for i in ielems:
 			try:
-				type = str(i.getAttribute('type'))
+				input_type = str(i.getAttribute('type'))
 			except:
 				continue
-			if not type or type == 'button':
+			if not input_type or input_type == 'button':
 				continue
 			try:
 				dummy_form[str(i.getAttribute('name'))] = str(i.getAttribute('value'))
-			except:
+			except Exception, e:
+				luci_log.debug_verbose('Error parsing service XML: %s' % str(e))
 				pass
 
 		try:
@@ -469,7 +479,7 @@
 			raise Exception, 'An error occurred while adding this resource'
 		modelb = res[1]
 		newres = res[0]
-		addResource(self, request, modelb, newres)
+		addResource(self, request, modelb, newres, res_type)
 	except Exception, e:
 		if len(errors) < 1:
 			errors.append('An error occurred while adding this resource')
@@ -480,35 +490,52 @@
 	
 ## Cluster properties form validation routines
 
-def validateMCastConfig(self, form):
+# rhel5 cluster version
+def validateMCastConfig(model, form):
+	errors = list()
 	try:
 		mcast_val = form['mcast'].strip().lower()
 		if mcast_val != 'true' and mcast_val != 'false':
-			raise KeyError(mcast_val)
+			raise KeyError, mcast_val
 		if mcast_val == 'true':
-			mcast_val = 1
+			mcast_manual = True
 		else:
-			mcast_val = 0
+			mcast_manual = False
 	except KeyError, e:
-		return (False, {'errors': ['An invalid multicast selection was made.']})
+		errors.append('An invalid multicast selection was made')
+		return (False, {'errors': errors})
 
-	if not mcast_val:
-		return (True, {'messages': ['Changes accepted. - FILL ME IN']})
+	if mcast_manual == True:
+		try:
+			addr_str = form['mcast_addr'].strip()
+			socket.inet_pton(socket.AF_INET, addr_str)
+		except KeyError, e:
+			errors.append('No multicast address was given')
+		except socket.error, e:
+			try:
+				socket.inet_pton(socket.AF_INET6, addr_str)
+			except socket.error, e:
+				errors.append('An invalid multicast address was given: %s' % addr_str)
+	else:
+		addr_str = None
+
+	if (addr_str is None and mcast_manual != True) or (mcast_manual == True and addr_str == model.getMcastAddr()):
+		errors.append('No multicast configuration changes were made.')
+		return (False, {'errors': errors})
 
 	try:
-		addr_str = form['mcast_addr'].strip()
-		socket.inet_pton(socket.AF_INET, addr_str)
-	except KeyError, e:
-		return (False, {'errors': ['No multicast address was given']})
-	except socket.error, e:
-		try:
-			socket.inet_pton(socket.AF_INET6, addr_str)
-		except socket.error, e6:
-			return (False, {'errors': ['An invalid multicast address was given: ' + e]})
+		model.usesMulticast = True
+		model.mcast_address = addr_str
+	except Exception, e:
+		luci_log.debug('Error updating mcast properties: %s' % str(e))
+		errors.append('Unable to update cluster multicast properties')
 
-	return (True, {'messages': ['Changes accepted. - FILL ME IN']})
+	if len(errors) > 0:
+		return (False, {'errors': errors})
 
-def validateQDiskConfig(self, form):
+	return (True, {})
+
+def validateQDiskConfig(model, form):
 	errors = list()
 
 	try:
@@ -520,7 +547,7 @@
 		else:
 			qdisk_val = 0
 	except KeyError, e:
-		return (False, {'errors': ['An invalid quorum partition selection was made.']})
+		return (False, {'errors': ['An invalid quorum partition selection was made']})
 
 	if not qdisk_val:
 		return (True, {'messages': ['Changes accepted. - FILL ME IN']})
@@ -528,64 +555,64 @@
 	try:
 		interval = int(form['interval'])
 		if interval < 0:
-			raise ValueError('Interval must be 0 or greater.')
+			raise ValueError, 'Interval must be 0 or greater'
 	except KeyError, e:
-		errors.append('No Interval value was given.')
+		errors.append('No Interval value was given')
 	except ValueError, e:
-		errors.append('An invalid Interval value was given: ' + e)
+		errors.append('An invalid Interval value was given: %s' % str(e))
 
 	try:
 		votes = int(form['votes'])
 		if votes < 1:
-			raise ValueError('Votes must be greater than 0')
+			raise ValueError, 'Votes must be greater than 0'
 	except KeyError, e:
-		errors.append('No Votes value was given.')
+		errors.append('No Votes value was given')
 	except ValueError, e:
-		errors.append('An invalid Votes value was given: ' + e)
+		errors.append('An invalid Votes value was given: %s' % str(e))
 
 	try:
 		tko = int(form['tko'])
 		if tko < 0:
-			raise ValueError('TKO must be 0 or greater')
+			raise ValueError, 'TKO must be 0 or greater'
 	except KeyError, e:
-		errors.append('No TKO value was given.')
+		errors.append('No TKO value was given')
 	except ValueError, e:
-		errors.append('An invalid TKO value was given: ' + e)
+		errors.append('An invalid TKO value was given: %s' % str(e))
 
 	try:
 		min_score = int(form['min_score'])
 		if min_score < 1:
 			raise ValueError('Minimum Score must be greater than 0')
 	except KeyError, e:
-		errors.append('No Minimum Score value was given.')
+		errors.append('No Minimum Score value was given')
 	except ValueError, e:
-		errors.append('An invalid Minimum Score value was given: ' + e)
+		errors.append('An invalid Minimum Score value was given: %s' % str(e))
 
 	try:
 		device = form['device'].strip()
 		if not device:
-			raise KeyError('device')
+			raise KeyError, 'device is none'
 	except KeyError, e:
-		errors.append('No Device value was given.')
+		errors.append('No Device value was given')
 
 	try:
 		label = form['label'].strip()
 		if not label:
-			raise KeyError('label')
+			raise KeyError, 'label is none'
 	except KeyError, e:
-		errors.append('No Label value was given.')
+		errors.append('No Label value was given')
 
 	num_heuristics = 0
 	try:
 		num_heuristics = int(form['num_heuristics'])
 		if num_heuristics < 0:
-			raise ValueError(form['num_heuristics'])
+			raise ValueError, 'invalid number of heuristics: %s' % form['num_heuristics']
 		if num_heuristics == 0:
 			num_heuristics = 1
 	except KeyError, e:
 		errors.append('No number of heuristics was given.')
 	except ValueError, e:
-		errors.append('An invalid number of heuristics was given: ' + e)
+		errors.append('An invalid number of heuristics was given: %s' % str(e))
 
 	heuristics = list()
 	for i in xrange(num_heuristics):
@@ -600,40 +627,49 @@
 				(not prefix + 'hscore' in form or not form['hscore'].strip())):
 				# The row is blank; ignore it.
 				continue
-			errors.append('No heuristic name was given for heuristic #' + str(i + 1))
+			errors.append('No heuristic name was given for heuristic #%d' % (i + 1))
 
 		try:
 			hpath = form[prefix + 'hpath']
 		except KeyError, e:
-			errors.append('No heuristic path was given for heuristic #' + str(i + 1))
+			errors.append('No heuristic path was given for heuristic #%d' % (i + 1))
 
 		try:
 			hint = int(form[prefix + 'hint'])
 			if hint < 1:
-				raise ValueError('Heuristic interval values must be greater than 0.')
+				raise ValueError, 'Heuristic interval values must be greater than 0'
 		except KeyError, e:
-			errors.append('No heuristic interval was given for heuristic #' + str(i + 1))
+			errors.append('No heuristic interval was given for heuristic #%d' % (i + 1))
 		except ValueError, e:
-			errors.append('An invalid heuristic interval was given for heuristic #' + str(i + 1) + ': ' + e)
+			errors.append('An invalid heuristic interval was given for heuristic #%d: %s' % (i + 1, str(e)))
 
 		try:
 			hscore = int(form[prefix + 'score'])
 			if hscore < 1:
-				raise ValueError('Heuristic scores must be greater than 0.')
+				raise ValueError, 'Heuristic scores must be greater than 0'
 		except KeyError, e:
-			errors.append('No heuristic score was given for heuristic #' + str(i + 1))
+			errors.append('No heuristic score was given for heuristic #%d' % (i + 1))
 		except ValueError, e:
-			errors.append('An invalid heuristic score was given for heuristic #' + str(i + 1) + ': ' + e)
+			errors.append('An invalid heuristic score was given for heuristic #%d: %s' % (i + 1, str(e)))
 		heuristics.append([ hname, hpath, hint, hscore ])
 
 	if len(errors) > 0:
 		return (False, {'errors': errors })
 	return (True, {'messages': ['Changes accepted. - FILL ME IN']})
 
-def validateGeneralConfig(self, form):
+def validateGeneralConfig(model, form):
 	errors = list()
 
 	try:
+		cp = model.getClusterPtr()
+		old_name = model.getClusterAlias()
+		old_ver = int(cp.getConfigVersion())
+	except Exception, e:
+		luci_log.debug_verbose('getConfigVersion: %s' % str(e))
+		errors.append('unable to determine the current configuration version')
+		return (False, {'errors': errors})
+
+	try:
 		cluster_name = form['cluname'].strip()
 		if not cluster_name:
 			raise KeyError('cluname')
@@ -642,19 +678,29 @@
 
 	try:
 		version_num = int(form['cfgver'])
-		if version_num < 0:
-			raise ValueError('configuration version numbers must be 0 or greater.')
+		if version_num < old_ver:
+			raise ValueError, 'configuration version number must be %d or greater.' % old_ver
+		# we'll increment the cluster version before propagating it.
+		version_num -= 1
 	except KeyError, e:
 		errors.append('No cluster configuration version was given.')
 	except ValueError, e:
-		errors.append('An invalid configuration version was given: ' + e)
+		errors.append('An invalid configuration version was given: %s' % str(e))
+
+	if len(errors) < 1:
+		try:
+			if cluster_name != old_name:
+				cp.addAttribute('alias', cluster_name)
+			cp.setConfigVersion(str(version_num))
+		except Exception, e:
+			luci_log.debug_verbose('unable to update general properties: %s' % str(e))
+			errors.append('Unable to update the cluster configuration.')
 
 	if len(errors) > 0:
 		return (False, {'errors': errors})
+	return (True, {})
 
-	return (True, {'messages': ['Changes accepted. - FILL ME IN']})
-
-def validateFenceConfig(self, form):
+def validateFenceConfig(model, form):
 	errors = list()
 
 	try:
@@ -664,7 +710,7 @@
 	except KeyError, e:
 		errors.append('No post fail delay was given.')
 	except ValueError, e:
-		errors.append('Invalid post fail delay: ' + e)
+		errors.append('Invalid post fail delay: %s' % str(e))
 
 	try:
 		post_join_delay = int(form['post_join_delay'])
@@ -673,12 +719,26 @@
 	except KeyError, e:
 		errors.append('No post join delay was given.')
 	except ValueError, e:
-		errors.append('Invalid post join delay: ' + e)
+		errors.append('Invalid post join delay: %s' % str(e))
+
+	try:
+		fd = model.getFenceDaemonPtr()
+		old_pj_delay = fd.getPostJoinDelay()
+		old_pf_delay = fd.getPostFailDelay()
+
+		if post_join_delay == old_pj_delay and post_fail_delay == old_pf_delay:
+			errors.append('No fence daemon properties were changed.')
+		else:
+			fd.setPostJoinDelay(str(post_join_delay))
+			fd.setPostFailDelay(str(post_fail_delay))
+	except Exception, e:
+		luci_log.debug_verbose('Unable to update fence daemon properties: %s' % str(e))
+		errors.append('An error occurred while attempting to update fence daemon properties.')
 
 	if len(errors) > 0:
 		return (False, {'errors': errors })
 
-	return (True, {'messages': ['Changes accepted. - FILL ME IN']})
+	return (True, {})
 
 configFormValidators = {
 	'general': validateGeneralConfig,
@@ -690,27 +750,111 @@
 def validateConfigCluster(self, request):
 	errors = list()
 	messages = list()
+	rc = None
+
+	try:
+		model = request.SESSION.get('model')
+		if not model:
+			raise Exception, 'model is none'
+	except Exception, e:
+		model = None
+		try:
+			cluname = request.form['clustername']
+		except:
+			try:
+				cluname = request['clustername']
+			except:
+				luci_log.debug_verbose('VCC0a: no model, no cluster name')
+				return (False, {'errors': ['No cluster model was found.']})
 
-	if not 'form' in request:
-		return (False, {'errors': ['No form was submitted.']})
-	if not 'configtype' in request.form:
+		try:
+			model = getModelForCluster(self, cluname)
+		except:
+			model = None
+
+		if model is None:
+			luci_log.debug_verbose('VCC0: unable to get model from session')
+			return (False, {'errors': ['No cluster model was found.']})
+	try:
+		if not 'configtype' in request.form:
+			luci_log.debug_verbose('VCC2: no configtype')
+			raise Exception, 'no config type'
+	except Exception, e:
+		luci_log.debug_verbose('VCC2a: %s' % str(e))
 		return (False, {'errors': ['No configuration type was submitted.']})
+
 	if not request.form['configtype'] in configFormValidators:
+		luci_log.debug_verbose('VCC3: invalid config type: %s' % request.form['configtype'])
 		return (False, {'errors': ['An invalid configuration type was submitted.']})
 
-	val = configFormValidators[request.form['configtype']]
-	ret = val(self, request.form)
+	try:
+		cp = model.getClusterPtr()
+	except:
+		luci_log.debug_verbose('VCC3a: getClusterPtr failed')
+		return (False, {'errors': ['No cluster model was found.']})
+
+	config_validator = configFormValidators[request.form['configtype']]
+	ret = config_validator(model, request.form)
 
 	retcode = ret[0]
 	if 'errors' in ret[1]:
 		errors.extend(ret[1]['errors'])
+
 	if 'messages' in ret[1]:
 		messages.extend(ret[1]['messages'])
 
+	if retcode == True:
+		try:
+			config_ver = int(cp.getConfigVersion()) + 1
+			# always increment the configuration version
+			cp.setConfigVersion(str(config_ver))
+			model.setModified(True)
+			conf_str = model.exportModelAsString()
+			if not conf_str:
+				raise Exception, 'conf_str is none'
+		except Exception, e:
+			luci_log.debug_verbose('VCC4: export model as string failed: %s' \
+				% str(e))
+			errors.append('Unable to store the new cluster configuration')
+
+	try:
+		clustername = model.getClusterName()
+		if not clustername:
+			raise Exception, 'cluster name from model.getClusterName() is blank'
+	except Exception, e:
+		luci_log.debug_verbose('VCC5: error: getClusterName: %s' % str(e))
+		errors.append('Unable to determine cluster name from model') 
+
+	if len(errors) > 0:
+		return (retcode, {'errors': errors, 'messages': messages})
+
+	if not rc:
+		rc = getRicciAgent(self, clustername)
+		if not rc:
+			luci_log.debug_verbose('VCC6: unable to find a ricci agent for the %s cluster' % clustername)
+			errors.append('Unable to contact a ricci agent for cluster %s' \
+				% clustername)
+
+	if rc:
+		batch_id, result = setClusterConf(rc, str(conf_str))
+		if batch_id is None or result is None:
+			luci_log.debug_verbose('VCC7: setClusterConf: batchid or result is None')
+			errors.append('Unable to propagate the new cluster configuration for %s' \
+				% clustername)
+		else:
+			try:
+				set_node_flag(self, clustername, rc.hostname(), batch_id,
+					CLUSTER_CONFIG, 'Updating cluster configuration')
+			except:
+				pass
+
 	if len(errors) < 1:
 		messages.append('The cluster properties have been updated.')
+	else:
+		return (retcode, {'errors': errors, 'messages': messages})
 
-	return (retcode, {'errors': errors, 'messages': messages})
+	response = request.RESPONSE
+	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername + '&busyfirst=true')
 
 def validateFenceAdd(self, request):
 	return (True, {})
@@ -718,6 +862,89 @@
 def validateFenceEdit(self, request):
 	return (True, {})
 
+def validateDaemonProperties(self, request):
+	errors = list()
+
+	form = None
+	try:
+		response = request.response
+		form = request.form
+		if not form:
+			form = None
+			raise Exception, 'no form was submitted'
+	except:
+		pass
+
+	if form is None:
+		luci_log.debug_verbose('VDP0: no form was submitted')
+		return (False, {'errors': ['No form was submitted']})
+
+	try:
+		nodename = form['nodename'].strip()
+		if not nodename:
+			raise Exception, 'nodename is blank'
+	except Exception, e:
+		errors.append('Unable to determine the current node name')
+		luci_log.debug_verbose('VDP1: no nodename: %s' % str(e))
+
+	try:
+		clustername = form['clustername'].strip()
+		if not clustername:
+			raise Exception, 'clustername is blank'
+	except Exception, e:
+		errors.append('Unable to determine the current cluster name')
+		luci_log.debug_verbose('VDP2: no clustername: %s' % str(e))
+
+	disable_list = list()
+	enable_list = list()
+	for i in form.items():
+		try:
+			if i[0][:11] == '__daemon__:':
+				daemon_prop = i[1]
+				if len(daemon_prop) == 2:
+					if daemon_prop[1] == '1':
+						disable_list.append(daemon_prop[0])
+				else:
+					if daemon_prop[1] == '0' and daemon_prop[2] == 'on':
+						enable_list.append(daemon_prop[0])
+		except Exception, e:
+			luci_log.debug_verbose('VDP3: error: %s' % str(i))
+
+	if len(enable_list) < 1 and len(disable_list) < 1:
+		luci_log.debug_verbose('VDP4: no changes made')
+		response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename)
+		return (True, {})
+
+	nodename_resolved = resolve_nodename(self, clustername, nodename)
+	try:
+		rc = RicciCommunicator(nodename_resolved)
+		if not rc:
+			raise Exception, 'rc is None'
+	except Exception, e:
+		luci_log.debug_verbose('VDP5: RC %s: %s' % (nodename_resolved, str(e)))
+		errors.append('Unable to connect to the ricci agent on %s to update cluster daemon properties' % nodename_resolved)
+		return (False, {'errors': errors})
+		
+	batch_id, result = updateServices(rc, enable_list, disable_list)
+	if batch_id is None or result is None:
+		luci_log.debug_verbose('VDP6: updateServices: batchid or result is None')
+		errors.append('Unable to update the cluster daemon properties on node %s' % nodename_resolved)
+		return (False, {'errors': errors})
+
+	try:
+		status_msg = 'Updating %s daemon properties:' % nodename_resolved
+		if len(enable_list) > 0:
+			status_msg += ' enabling %s' % str(enable_list)[1:-1]
+		if len(disable_list) > 0:
+			status_msg += ' disabling %s' % str(disable_list)[1:-1]
+		set_node_flag(self, clustername, rc.hostname(), batch_id, CLUSTER_DAEMON, status_msg)
+	except:
+		pass
+
+	if len(errors) > 0:
+		return (False, {'errors': errors})
+
+	response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename + '&busyfirst=true')
+
 formValidators = {
 	6: validateCreateCluster,
 	7: validateConfigCluster,
@@ -728,11 +955,18 @@
 	33: validateResourceAdd,
 	51: validateFenceAdd,
 	50: validateFenceEdit,
+	55: validateDaemonProperties
 }
 
 def validatePost(self, request):
-	pagetype = int(request.form['pagetype'])
+	try:
+		pagetype = int(request.form['pagetype'])
+	except Exception, e:
+		luci_log.debug_verbose('VP0: error: %s' % str(e))
+		return None
+
 	if not pagetype in formValidators:
+		luci_log.debug_verbose('VP1: no handler for page type %d' % pagetype)
 		return None
 	else:
 		return formValidators[pagetype](self, request)
@@ -748,21 +982,23 @@
     except:
       request.SESSION.set('checkRet', {})
   else:
-    try: request.SESSION.set('checkRet', {})
-    except: pass
+    try:
+      request.SESSION.set('checkRet', {})
+    except:
+      pass
 
   #First, see if a cluster is chosen, then
   #check that the current user can access that system
   cname = None
   try:
     cname = request[CLUNAME]
-  except KeyError, e:
+  except:
     cname = ""
 
   try:
     url = request['URL']
-  except KeyError, e:
-    url = "."
+  except:
+    url = "/luci/cluster/index_html"
 
   try:
     pagetype = request[PAGETYPE]
@@ -811,7 +1047,7 @@
     clcfg['show_children'] = False
 
   #loop through all clusters
-  syslist= list()
+  syslist = list()
   for system in systems:
     clsys = {}
     clsys['Title'] = system[0]
@@ -839,20 +1075,30 @@
 
   return dummynode
 
+def getnodes(self, model):
+  nodes = model.getNodes()
+  names = list()
+  for node in nodes:
+    names.append(node.getName())
+  return names
 
 def createCluConfigTree(self, request, model):
   dummynode = {}
 
+  if not model:
+    return {}
+
   #There should be a positive page type
   try:
     pagetype = request[PAGETYPE]
-  except KeyError, e:
+  except:
     pagetype = '3'
 
   try:
     url = request['URL']
   except KeyError, e:
-    url = "."
+    url = "/luci/cluster/index_html"
 
   #The only way this method can run is if there exists
   #a clustername query var
@@ -1110,7 +1356,7 @@
   kids.append(rvadd)
   kids.append(rvcfg)
   rv['children'] = kids
- #################################################################
+ ################################################################
   fd = {}
   fd['Title'] = "Failover Domains"
   fd['cfg_type'] = "failoverdomains"
@@ -1266,8 +1512,10 @@
   return model.getClusterName()
 
 def getClusterAlias(self, model):
+  if not model:
+    return ''
   alias = model.getClusterAlias()
-  if alias == None:
+  if alias is None:
     return model.getClusterName()
   else:
     return alias
@@ -1281,6 +1529,7 @@
   portaltabs = list()
   if not userAuthenticated(self):
     return portaltabs
+
   selectedtab = "homebase"
   try:
     baseurl = req['URL']
@@ -1291,12 +1540,7 @@
     else:
       selectedtab = "homebase"
   except KeyError, e:
-    pass
-
-  try:
-    base2 = req['BASE2']
-  except KeyError, e:
-    base2 = req['HTTP_HOST'] + req['SERVER_PORT']
+    selectedtab = None
 
   htab = { 'Title':"homebase",
            'Description':"Home base for this luci server",
@@ -1309,7 +1553,7 @@
 
   ctab = { 'Title':"cluster",
            'Description':"Cluster configuration page",
-           'Taburl':"/luci/cluster?pagetype=3"}
+           'Taburl':"/luci/cluster/index_html?pagetype=3"}
   if selectedtab == "cluster":
     ctab['isSelected'] = True
   else:
@@ -1331,7 +1575,7 @@
 
 
 
-def check_clusters(self,clusters):
+def check_clusters(self, clusters):
   clist = list()
   for cluster in clusters:
     if cluster_permission_check(cluster[1]):
@@ -1357,15 +1601,15 @@
 	try:
 		clusterfolder = self.restrictedTraverse(path)
 		if not clusterfolder:
-			luci_log.debug('GRA: cluster folder %s for %s is missing.' \
+			luci_log.debug('GRA0: cluster folder %s for %s is missing.' \
 				% (path, clustername))
 			raise Exception, 'no cluster folder at %s' % path
 		nodes = clusterfolder.objectItems('Folder')
 		if len(nodes) < 1:
-			luci_log.debug('GRA: no cluster nodes for %s found.' % clustername)
+			luci_log.debug('GRA1: no cluster nodes for %s found.' % clustername)
 			raise Exception, 'no cluster nodes were found at %s' % path
 	except Exception, e:
-		luci_log.debug('GRA: cluster folder %s for %s is missing: %s.' \
+		luci_log.debug('GRA2: cluster folder %s for %s is missing: %s.' \
 			% (path, clustername, str(e)))
 		return None
 
@@ -1383,17 +1627,31 @@
 		try:
 			rc = RicciCommunicator(hostname)
 		except RicciError, e:
-			luci_log.debug('GRA: ricci error: %s' % str(e))
+			luci_log.debug('GRA3: ricci error: %s' % str(e))
 			continue
 
 		try:
 			clu_info = rc.cluster_info()
 		except Exception, e:
-			luci_log.debug('GRA: cluster_info error: %s' % str(e))
+			luci_log.debug('GRA4: cluster_info error: %s' % str(e))
+
+		try:
+			cur_name = str(clu_info[0]).strip().lower()
+			if not cur_name:
+				raise
+		except:
+			cur_name = None
 
-		if cluname != lower(clu_info[0]) and cluname != lower(clu_info[1]):
+		try:
+			cur_alias = str(clu_info[1]).strip().lower()
+			if not cur_alias:
+				raise
+		except:
+			cur_alias = None
+
+		if (cur_name is not None and cluname != cur_name) and (cur_alias is not None and cluname != cur_alias):
 			try:
-				luci_log.debug('GRA: %s reports it\'s in cluster %s:%s; we expect %s' \
+				luci_log.debug('GRA5: %s reports it\'s in cluster %s:%s; we expect %s' \
 					 % (hostname, clu_info[0], clu_info[1], cluname))
 				setNodeFlag(self, node, CLUSTER_NODE_NOT_MEMBER)
 			except:
@@ -1407,29 +1665,43 @@
 		except:
 			pass
 
-	luci_log.debug('GRA: no ricci agent could be found for cluster %s' % cluname)
+	luci_log.debug('GRA6: no ricci agent could be found for cluster %s' \
+		% cluname)
 	return None
 
 def getRicciAgentForCluster(self, req):
+	clustername = None
 	try:
 		clustername = req['clustername']
-	except KeyError, e:
+		if not clustername:
+			clustername = None
+			raise
+	except:
 		try:
 			clustername = req.form['clusterName']
 			if not clustername:
-				raise
+				clustername = None
 		except:
-			luci_log.debug('no cluster name was specified in getRicciAgentForCluster')
-			return None
+			pass
+
+	if clustername is None:
+		luci_log.debug('GRAFC0: no cluster name was found')
+		return None
 	return getRicciAgent(self, clustername)
 
 def getClusterStatus(self, rc):
-	doc = getClusterStatusBatch(rc)
+	try:
+		doc = getClusterStatusBatch(rc)
+	except Exception, e:
+		luci_log.debug_verbose('GCS0: error: %s' % str(e))
+		doc = None
+
 	if not doc:
 		try:
-			luci_log.debug_verbose('getClusterStatusBatch returned None for %s/%s' % rc.cluster_info())
+			luci_log.debug_verbose('GCS1: returned None for %s/%s' % rc.cluster_info())
 		except:
 			pass
+
 		return {}
 
 	results = list()
@@ -1477,18 +1749,18 @@
 		baseurl = req['URL']
 		if not baseurl:
 			raise KeyError, 'is blank'
-	except KeyError, e:
-		baseurl = '.'
+	except:
+		baseurl = '/luci/cluster/index_html'
 
 	try:
 		cluname = req['clustername']
 		if not cluname:
 			raise KeyError, 'is blank'
-	except KeyError, e:
+	except:
 		try:
 			cluname = req.form['clusterName']
 			if not cluname:
-				raise
+				raise KeyError, 'is blank'
 		except:
 			cluname = '[error retrieving cluster name]'
 
@@ -1504,7 +1776,7 @@
 
 			svc = modelb.retrieveServiceByName(item['name'])
 			dom = svc.getAttribute("domain")
-			if dom != None:
+			if dom is not None:
 				itemmap['faildom'] = dom
 			else:
 				itemmap['faildom'] = "No Failover Domain"
@@ -1522,8 +1794,8 @@
 		baseurl = req['URL']
 		if not baseurl:
 			raise KeyError, 'is blank'
-	except KeyError, e:
-		baseurl = '.'
+	except:
+		baseurl = '/luci/cluster/index_html'
 
 	try:
 		cluname = req['clustername']
@@ -1588,7 +1860,7 @@
 	#first get service by name from model
 	svc = modelb.getService(servicename)
 	resource_list = list()
-	if svc != None:
+	if svc is not None:
 		indent_ctr = 0
 		children = svc.getChildren()
 		for child in children:
@@ -1603,7 +1875,7 @@
 	#Call yourself on every children
 	#then return
 	rc_map = {}
-	if parent != None:
+	if parent is not None:
 		rc_map['parent'] = parent
 	rc_map['name'] = child.getName()
 	if child.isRefObject() == True:
@@ -1631,22 +1903,27 @@
 	return child_depth + 1
 
 def serviceStart(self, rc, req):
+	svcname = None
 	try:
 		svcname = req['servicename']
-	except KeyError, e:
+	except:
 		try:
 			svcname = req.form['servicename']
 		except:
-			luci_log.debug_verbose('serviceStart error: no service name')
-			return None
+			pass
+
+	if svcname is None:
+		luci_log.debug_verbose('serviceStart0: no service name')
+		return None
 
+	nodename = None
 	try:
 		nodename = req['nodename']
-	except KeyError, e:
+	except:
 		try:
 			nodename = req.form['nodename']
 		except:
-			nodename = None
+			pass
 
 	cluname = None
 	try:
@@ -1658,52 +1935,38 @@
 			pass
 
 	if cluname is None:
-		luci_log.debug_verbose('serviceStart error: %s no service name' \
+		luci_log.debug_verbose('serviceStart2: no cluster name for svc %s' \
 			% svcname)
 		return None
 
-	ricci_agent = rc.hostname()
-
 	batch_number, result = startService(rc, svcname, nodename)
 	if batch_number is None or result is None:
-		luci_log.debug_verbose('startService %s call failed' \
-			% svcname)
+		luci_log.debug_verbose('serviceStart3: SS(%s,%s,%s) call failed' \
+			% (svcname, cluname, nodename))
 		return None
 
-	#Now we need to create a DB flag for this system.
-	path = str(CLUSTER_FOLDER_PATH + cluname)
-	batch_id = str(batch_number)
-	objname = str(ricci_agent + "____flag")
-
 	try:
-		clusterfolder = self.restrictedTraverse(path)
-		clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-		#Now we need to annotate the new DB object
-		objpath = str(path + "/" + objname)
-		flag = self.restrictedTraverse(objpath)
-		flag.manage_addProperty(BATCH_ID, batch_id, "string")
-		flag.manage_addProperty(TASKTYPE, SERVICE_START, "string")
-		flag.manage_addProperty(FLAG_DESC, "Starting service \'" + svcname + "\'", "string")
+		set_node_flag(self, cluname, rc.hostname(), str(batch_number), SERVICE_START, "Starting service \'%s\'" % svcname)
 	except Exception, e:
-		luci_log.debug_verbose('Error creating flag at %s: %s' % (objpath, str(e)))
+		luci_log.debug_verbose('serviceStart4: error setting flags for service %s@node %s for cluster %s: %s' % (svcname, nodename, cluname, str(e)))
 
 	response = req.RESPONSE
-	response.redirect(req['HTTP_REFERER'] + "&busyfirst=true")
+	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
 
 def serviceRestart(self, rc, req):
+	svcname = None
 	try:
 		svcname = req['servicename']
-	except KeyError, e:
+	except:
 		try:
 			svcname = req.form['servicename']
 		except:
-			luci_log.debug_verbose('no service name for serviceRestart')
-			return None
-	except:
-		luci_log.debug_verbose('no service name for serviceRestart')
+			pass
+
+	if svcname is None:
+		luci_log.debug_verbose('serviceRestart0: no service name')
 		return None
 
-	#Now we need to create a DB flag for this system.
 	cluname = None
 	try:
 		cluname = req['clustername']
@@ -1714,51 +1977,36 @@
 			pass
 
 	if cluname is None:
-		luci_log.debug_verbose('unable to determine cluser name for serviceRestart %s' % svcname)
+		luci_log.debug_verbose('serviceRestart1: no cluster for %s' % svcname)
 		return None
 
 	batch_number, result = restartService(rc, svcname)
 	if batch_number is None or result is None:
-		luci_log.debug_verbose('restartService for %s failed' % svcname)
+		luci_log.debug_verbose('serviceRestart2: %s failed' % svcname)
 		return None
 				
-	ricci_agent = rc.hostname()
-
-	path = str(CLUSTER_FOLDER_PATH + cluname)
-	batch_id = str(batch_number)
-	objname = str(ricci_agent + "____flag")
-
 	try:
-		clusterfolder = self.restrictedTraverse(path)
-		clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-
-		#Now we need to annotate the new DB object
-		objpath = str(path + "/" + objname)
-		flag = self.restrictedTraverse(objpath)
-		flag.manage_addProperty(BATCH_ID, batch_id, "string")
-		flag.manage_addProperty(TASKTYPE, SERVICE_RESTART, "string")
-		flag.manage_addProperty(FLAG_DESC, "Restarting service " + svcname, "string")
+		set_node_flag(self, cluname, rc.hostname(), str(batch_number), SERVICE_RESTART, "Restarting service \'%s\'" % svcname)
 	except Exception, e:
-		luci_log.debug_verbose('Error creating flag in restartService %s: %s' \
-			% (svcname, str(e)))
+		luci_log.debug_verbose('serviceRestart3: error setting flags for service %s for cluster %s: %s' % (svcname, cluname, str(e)))
 
 	response = req.RESPONSE
-	response.redirect(req['HTTP_REFERER'] + "&busyfirst=true")
+	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
 
 def serviceStop(self, rc, req):
+	svcname = None
 	try:
 		svcname = req['servicename']
-	except KeyError, e:
+	except:
 		try:
 			svcname = req.form['servicename']
 		except:
-			luci_log.debug_verbose('no service name for serviceStop')
-			return None
-	except:
-		luci_log.debug_verbose('no service name for serviceStop')
+			pass
+
+	if svcname is None:
+		luci_log.debug_verbose('serviceStop0: no service name')
 		return None
 
-	#Now we need to create a DB flag for this system.
 	cluname = None
 	try:
 		cluname = req['clustername']
@@ -1769,37 +2017,21 @@
 			pass
 
 	if cluname is None:
-		luci_log.debug_verbose('unable to determine cluser name for serviceStop %s' % svcname)
+		luci_log.debug_verbose('serviceStop1: no cluster name for %s' % svcname)
 		return None
 
 	batch_number, result = stopService(rc, svcname)
 	if batch_number is None or result is None:
-		luci_log.debug_verbose('stopService for %s failed' % svcname)
+		luci_log.debug_verbose('serviceStop2: stop %s failed' % svcname)
 		return None
 
-	ricci_agent = rc.hostname()
-
-	path = str(CLUSTER_FOLDER_PATH + cluname)
-	batch_id = str(batch_number)
-	objname = str(ricci_agent + "____flag")
-
 	try:
-		clusterfolder = self.restrictedTraverse(path)
-		clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-		#Now we need to annotate the new DB object
-		objpath = str(path + "/" + objname)
-		flag = self.restrictedTraverse(objpath)
-
-		flag.manage_addProperty(BATCH_ID, batch_id, "string")
-		flag.manage_addProperty(TASKTYPE, SERVICE_STOP, "string")
-		flag.manage_addProperty(FLAG_DESC, "Stopping service " + svcname, "string")
-		time.sleep(2)
+		set_node_flag(self, cluname, rc.hostname(), str(batch_number), SERVICE_STOP, "Stopping service \'%s\'" % svcname)
 	except Exception, e:
-		luci_log.debug_verbose('Error creating flags for stopService %s: %s' \
-			% (svcname, str(e)))
+		luci_log.debug_verbose('serviceStop3: error setting flags for service %s for cluster %s: %s' % (svcname, cluname, str(e)))
 
 	response = req.RESPONSE
-	response.redirect(req['HTTP_REFERER'] + "&busyfirst=true")
+	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
 
 def getFdomsInfo(self, modelb, request, clustatus):
   slist = list()
@@ -1820,11 +2052,11 @@
     fdom_map['cfgurl'] = baseurl + "?pagetype=" + FDOM_LIST + "&clustername=" + clustername
     ordered_attr = fdom.getAttribute('ordered')
     restricted_attr = fdom.getAttribute('restricted')
-    if ordered_attr != None and (ordered_attr == "true" or ordered_attr == "1"):
+    if ordered_attr is not None and (ordered_attr == "true" or ordered_attr == "1"):
       fdom_map['ordered'] = True
     else:
       fdom_map['ordered'] = False
-    if restricted_attr != None and (restricted_attr == "true" or restricted_attr == "1"):
+    if restricted_attr is not None and (restricted_attr == "true" or restricted_attr == "1"):
       fdom_map['restricted'] = True
     else:
       fdom_map['restricted'] = False
@@ -1845,7 +2077,7 @@
       else:
         nodesmap['status'] = NODE_INACTIVE
       priority_attr =  node.getAttribute('priority')
-      if priority_attr != None:
+      if priority_attr is not None:
         nodesmap['priority'] = "0"
       nodelist.append(nodesmap)
     fdom_map['nodeslist'] = nodelist
@@ -1858,7 +2090,7 @@
           break  #found more info about service...
 
       domain = svc.getAttribute("domain")
-      if domain != None:
+      if domain is not None:
         if domain == fdom.getName():
           svcmap = {}
           svcmap['name'] = svcname
@@ -1870,54 +2102,87 @@
     fdomlist.append(fdom_map)
   return fdomlist
 
-def processClusterProps(self, ricci_agent, request):
-  #First, retrieve cluster.conf from session
-  conf = request.SESSION.get('conf')
-  model = ModelBuilder(0, None, None, conf)
-
-  #Next, determine actiontype and switch on it
-  actiontype = request[ACTIONTYPE]
-
-  if actiontype == BASECLUSTER:
-    cp = model.getClusterPtr()
-    cfgver = cp.getConfigVersion()
-
-    rcfgver = request['cfgver']
-
-    if cfgver != rcfgver:
-      cint = int(cfgver)
-      rint = int(rcfgver)
-      if rint > cint:
-        cp.setConfigVersion(rcfgver)
-
-    rname = request['cluname']
-    name = model.getClusterAlias()
+def clusterTaskProcess(self, model, request):
+	try:
+		task = request['task']
+	except:
+		try:
+			task = request.form['task']
+		except:
+			luci_log.debug_verbose('CTP1: no task specified')
+			task = None
 
-    if rname != name:
-      cp.addAttribute('alias', rname)
+	if not model:
+		try:
+			cluname = request['clustername']
+			if not cluname:
+				raise Exception, 'cluname is blank'
+		except:
+			try:
+				cluname = request.form['clustername']
+				if not cluname:
+					raise Exception, 'cluname is blank'
+			except:
+				luci_log.debug_verbose('CTP0: no model/no cluster name')
+				return 'Unable to determine the cluster name.'
+		try:
+			model = getModelForCluster(self, cluname)
+		except Exception, e:
+			luci_log.debug_verbose('CTP2: getModelForCluster failed for %s: %s' % (cluname, str(e)))
+			model = None
 
-    response = request.RESPONSE
-    response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
-    return
+	if not model:
+		return 'Unable to get the model object for %s' % cluname
 
-  elif actiontype == FENCEDAEMON:
-    pass
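+	# Dispatch the requested cluster-wide task; anything unrecognized
+	# falls through to the error string below.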
+	if task == CLUSTER_STOP:
+		clusterStop(self, model)
+	elif task == CLUSTER_START:
+		clusterStart(self, model)
+	elif task == CLUSTER_RESTART:
+		clusterRestart(self, model)
+	elif task == CLUSTER_DELETE:
+		clusterStop(self, model, delete=True)
+	else:
+		return 'An unknown cluster task was requested.'
 
-  elif actiontype == MULTICAST:
-    pass
+	response = request.RESPONSE
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], NODES, model.getClusterName()))
 
-  elif actiontype == QUORUMD:
-    pass
+def getClusterInfo(self, model, req):
+  try:
+    cluname = req[CLUNAME]
+  except:
+    try:
+      cluname = req.form['clustername']
+    except:
+      try:
+        cluname = req.form['clusterName']
+      except:
+        luci_log.debug_verbose('GCI0: unable to determine cluster name')
+        return {}
 
-  else:
-    return
+  if model is None:
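+    # No model was cached for this request: locate a ricci agent for the
+    # cluster, rebuild the model from it, and cache it in the session.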
+    rc = getRicciAgent(self, cluname)
+    if not rc:
+      luci_log.debug_verbose('GCI1: unable to find a ricci agent for the %s cluster' % cluname)
+      return {}
+    try:
+      model = getModelBuilder(None, rc, rc.dom0())
+      if not model:
+        raise Exception, 'model is none'
 
+      try:
+        req.SESSION.set('model', model)
+      except Exception, e2:
+        luci_log.debug_verbose('GCI2: unable to set model in session: %s' % str(e2))
+    except Exception, e:
+      luci_log.debug_verbose('GCI3: unable to get model for cluster %s: %s' % (cluname, str(e)))
+      return {}
 
-def getClusterInfo(self, model, req):
-  cluname = req[CLUNAME]
-  baseurl = req['URL'] + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + cluname + "&"
+  prop_baseurl = req['URL'] + '?' + PAGETYPE + '=' + CLUSTER_CONFIG + '&' + CLUNAME + '=' + cluname + '&'
   map = {}
-  basecluster_url = baseurl + ACTIONTYPE + "=" + BASECLUSTER
+  basecluster_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_GENERAL_TAB
   #needed:
   map['basecluster_url'] = basecluster_url
   #name field
@@ -1929,14 +2194,14 @@
   #new cluster params - if rhel5
   #-------------
   #Fence Daemon Props
-  fencedaemon_url = baseurl + ACTIONTYPE + "=" + FENCEDAEMON
+  fencedaemon_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_FENCE_TAB
   map['fencedaemon_url'] = fencedaemon_url
   fdp = model.getFenceDaemonPtr()
   pjd = fdp.getAttribute('post_join_delay')
-  if pjd == None:
+  if pjd is None:
     pjd = "6"
   pfd = fdp.getAttribute('post_fail_delay')
-  if pfd == None:
+  if pfd is None:
     pfd = "0"
   #post join delay
   map['pjd'] = pjd
@@ -1944,7 +2209,7 @@
   map['pfd'] = pfd
   #-------------
   #if multicast
-  multicast_url = baseurl + ACTIONTYPE + "=" + MULTICAST
+  multicast_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_MCAST_TAB
   map['multicast_url'] = multicast_url
   #mcast addr
   is_mcast = model.isMulticast()
@@ -1958,7 +2223,7 @@
 
   #-------------
   #quorum disk params
-  quorumd_url = baseurl + ACTIONTYPE + "=" + QUORUMD
+  quorumd_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_QDISK_TAB
   map['quorumd_url'] = quorumd_url
   is_quorumd = model.isQuorumd()
   map['is_quorumd'] = is_quorumd
@@ -1975,27 +2240,27 @@
   if is_quorumd:
     qdp = model.getQuorumdPtr()
     interval = qdp.getAttribute('interval')
-    if interval != None:
+    if interval is not None:
       map['interval'] = interval
 
     tko = qdp.getAttribute('tko')
-    if tko != None:
+    if tko is not None:
       map['tko'] = tko
 
     votes = qdp.getAttribute('votes')
-    if votes != None:
+    if votes is not None:
       map['votes'] = votes
 
     min_score = qdp.getAttribute('min_score')
-    if min_score != None:
+    if min_score is not None:
       map['min_score'] = min_score
 
     device = qdp.getAttribute('device')
-    if device != None:
+    if device is not None:
       map['device'] = device
 
     label = qdp.getAttribute('label')
-    if label != None:
+    if label is not None:
       map['label'] = label
 
     heuristic_kids = qdp.getChildren()
@@ -2003,24 +2268,24 @@
     for kid in heuristic_kids:
       hmap = {}
       hname = kid.getAttribute('name')
-      if hname == None:
+      if hname is None:
         hname = h_ctr
         h_ctr = h_ctr + 1
       hprog = kid.getAttribute('program')
       hscore = kid.getAttribute('score')
       hinterval = kid.getAttribute('interval')
-      if hprog == None:
+      if hprog is None:
         continue
-      if hname != None:
+      if hname is not None:
         hmap['hname'] = hname
       else:
         hmap['hname'] = ""
       hmap['hprog'] = hprog
-      if hscore != None:
+      if hscore is not None:
         hmap['hscore'] = hscore
       else:
         hmap['hscore'] = ""
-      if hinterval != None:
+      if hinterval is not None:
         hmap['hinterval'] = hinterval
       else:
         hmap['hinterval'] = ""
@@ -2029,7 +2294,7 @@
 
   return map
 
-def getClustersInfo(self,status,req):
+def getClustersInfo(self, status, req):
   map = {}
   nodelist = list()
   svclist = list()
@@ -2062,6 +2327,12 @@
   map['votes'] = clu['votes']
   map['minquorum'] = clu['minQuorum']
   map['clucfg'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_CONFIG + "&" + CLUNAME + "=" + clustername
+
+  map['restart_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_RESTART
+  map['stop_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_STOP
+  map['start_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_START
+  map['delete_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_DELETE
+
   svc_dict_list = list()
   for svc in svclist:
       svc_dict = {}
@@ -2093,6 +2364,317 @@
 
   return map
 
+def nodeLeave(self, rc, clustername, nodename_resolved):
+	path = str(CLUSTER_FOLDER_PATH + clustername + '/' + nodename_resolved)
+
+	try:
+		nodefolder = self.restrictedTraverse(path)
+		if not nodefolder:
+			raise Exception, 'cannot find database object at %s' % path
+	except Exception, e:
+		luci_log.debug('NLO: node_leave_cluster err: %s' % str(e))
+		return None
+
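+	# A leftover batch flag means a previous job for this node has not
+	# completed (or was never cleaned up); bail out rather than queue
+	# another leave operation.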
+	objname = str(nodename_resolved + "____flag")
+	fnpresent = noNodeFlagsPresent(self, nodefolder, objname, nodename_resolved)
+
+	if fnpresent is None:
+		luci_log.debug('NL1: An error occurred while checking flags for %s' \
+			% nodename_resolved)
+		return None
+
+	if fnpresent == False:
+		luci_log.debug('NL2: flags are still present for %s -- bailing out' \
+			% nodename_resolved)
+		return None
+
+	batch_number, result = nodeLeaveCluster(rc)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('NL3: nodeLeaveCluster error: batch_number and/or result is None')
+		return None
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_LEAVE_CLUSTER, "Node \'%s\' leaving cluster" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('NL4: failed to set flags: %s' % str(e))
+	return True
+
+def nodeJoin(self, rc, clustername, nodename_resolved):
+	batch_number, result = nodeJoinCluster(rc)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('NJ0: batch_number and/or result is None')
+		return None
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_JOIN_CLUSTER, "Node \'%s\' joining cluster" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('NJ1: failed to set flags: %s' % str(e))
+	return True
+
+def clusterStart(self, model):
+	if model is None:
+		return None
+
+	clustername = model.getClusterName()
+	nodes = model.getNodes()
+	if not nodes or len(nodes) < 1:
+		return None
+
+	errors = 0
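+	# Ask every node to join; count failures instead of aborting so the
+	# caller can report how many requests could not be queued.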
+	for node in nodes:
+		nodename = node.getName().strip()
+		nodename_resolved = resolve_nodename(self, clustername, nodename)
+
+		try:
+			rc = RicciCommunicator(nodename_resolved)
+		except Exception, e:
+			luci_log.debug_verbose('CStart0: RC %s: %s' \
+				% (nodename_resolved, str(e)))
+			errors += 1
+			continue
+		if nodeJoin(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('CStart1: nodeJoin %s' % nodename_resolved)
+			errors += 1
+
+	return errors
+
+def clusterStop(self, model, delete=False):
+	if model is None:
+		return None
+
+	clustername = model.getClusterName()
+	nodes = model.getNodes()
+	if not nodes or len(nodes) < 1:
+		return None
+
+	errors = 0
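+	# Ask every node to leave; as in clusterStart, failures are counted
+	# rather than fatal.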
+	for node in nodes:
+		nodename = node.getName().strip()
+		nodename_resolved = resolve_nodename(self, clustername, nodename)
+
+		try:
+			rc = RicciCommunicator(nodename_resolved)
+		except Exception, e:
+			luci_log.debug_verbose('[%d] CStop0: RC %s: %s' \
+				% (delete, nodename_resolved, str(e)))
+			errors += 1
+			continue
+		if nodeLeave(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('[%d] CStop1: nodeLeave %s' \
+				% (delete, nodename_resolved))
+			errors += 1
+	return errors
+
+def clusterRestart(self, model):
+	snum_err = clusterStop(self, model)
+	if snum_err:
+		luci_log.debug_verbose('cluRestart0: clusterStop: %d errs' % snum_err)
+	jnum_err = clusterStart(self, model)
+	if jnum_err:
+		luci_log.debug_verbose('cluRestart1: clusterStart: %d errs' % jnum_err)
+	return snum_err + jnum_err
+
+def clusterDelete(self, model):
+	return clusterStop(self, model, delete=True)
+
+def forceNodeReboot(self, rc, clustername, nodename_resolved):
+	batch_number, result = nodeReboot(rc)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('FNR0: batch_number and/or result is None')
+		return None
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_REBOOT, "Node \'%s\' is being rebooted" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('FNR1: failed to set flags: %s' % str(e))
+	return True
+
+def forceNodeFence(self, clustername, nodename, nodename_resolved):
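+	# The node to be fenced may be down or unreachable, so never contact
+	# it directly; find any other authenticated cluster member to carry
+	# out the fence request.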
+	path = str(CLUSTER_FOLDER_PATH + clustername)
+
+	try:
+		clusterfolder = self.restrictedTraverse(path)
+		if not clusterfolder:
+			raise Exception, 'no cluster folder at %s' % path
+	except Exception, e:
+		luci_log.debug('FNF0: The cluster folder %s could not be found: %s' \
+			 % (clustername, str(e)))
+		return None
+
+	try:
+		nodes = clusterfolder.objectItems('Folder')
+		if not nodes or len(nodes) < 1:
+			raise Exception, 'no cluster nodes'
+	except Exception, e:
+		luci_log.debug('FNF1: No cluster nodes for %s were found: %s' \
+			% (clustername, str(e)))
+		return None
+
+	found_one = False
+	for node in nodes:
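+		# Skip the node that is being fenced.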
+		if node[1].getId().find(nodename) != (-1):
+			continue
+
+		try:
+			rc = RicciCommunicator(node[1].getId())
+			if not rc:
+				raise Exception, 'rc is None'
+		except Exception, e:
+			luci_log.debug('FNF2: ricci error for host %s: %s' \
+				% (node[0], str(e)))
+			continue
+
+		if not rc.authed():
+			rc = None
+			try:
+				snode = getStorageNode(self, node[1].getId())
+				setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+			try:
+				setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+			continue
+		found_one = True
+		break
+
+	if not found_one:
+		return None
+
+	batch_number, result = nodeFence(rc, nodename)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('FNF3: batch_number and/or result is None')
+		return None
+
+	try:
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_FENCE, "Node \'%s\' is being fenced" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('FNF4: failed to set flags: %s' % str(e))
+	return True
+
+def nodeDelete(self, rc, model, clustername, nodename, nodename_resolved):
+	#We need to get a node name other than the node
+	#to be deleted, then delete the node from the cluster.conf
+	#and propagate it. We will need two ricci agents for this task.
+
+	# Make sure we can find a second node before we hose anything.
+	path = str(CLUSTER_FOLDER_PATH + clustername)
+	try:
+		clusterfolder = self.restrictedTraverse(path)
+		if not clusterfolder:
+			raise Exception, 'no cluster folder at %s' % path
+	except Exception, e:
+		luci_log.debug_verbose('ND0: node delete error for cluster %s: %s' \
+				% (clustername, str(e)))
+		return None
+
+	try:
+		nodes = clusterfolder.objectItems('Folder')
+		if not nodes or len(nodes) < 1:
+			raise Exception, 'no cluster nodes in DB'
+	except Exception, e:
+		luci_log.debug_verbose('ND1: node delete error for cluster %s: %s' \
+			% (clustername, str(e)))
+		return None
+
+	found_one = False
+	for node in nodes:
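+		# Skip the node being deleted; a surviving member must push out
+		# the new cluster.conf.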
+		if node[1].getId().find(nodename) != (-1):
+			continue
+		#here we make certain the node is up...
+		# XXX- we should also make certain this host is still
+		# in the cluster we believe it is.
+		try:
+			rc2 = RicciCommunicator(node[1].getId())
+		except Exception, e:
+			luci_log.info('ND2: ricci %s error: %s' % (node[0], str(e)))
+			continue
+
+		if not rc2.authed():
+			try:
+				setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+			try:
+				snode = getStorageNode(self, node[0])
+				setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
+			except:
+				pass
+
+			luci_log.debug_verbose('ND3: %s is not authed' % node[0])
+			rc2 = None
+			continue
+		else:
+			found_one = True
+			break
+
+	if not found_one:
+		luci_log.debug_verbose('ND4: unable to find ricci agent to delete %s from %s' % (nodename, clustername))
+		return None
+
+	#First, delete cluster.conf from node to be deleted.
+	#next, have node leave cluster.
+	batch_number, result = nodeLeaveCluster(rc, purge=True)
+	if batch_number is None or result is None:
+		luci_log.debug_verbose('ND5: batch_number and/or result is None')
+		return None
+
+	#It is not worth flagging this node in DB, as we are going
+	#to delete it anyway. Now, we need to delete node from model
+	#and send out new cluster.conf
+	delete_target = None
+	nodelist = model.getNodes()
+	find_node = lower(nodename)
+	for n in nodelist:
+		try:
+			if lower(n.getName()) == find_node:
+				delete_target = n
+				break
+		except:
+			continue
+
+	if delete_target is None:
+		luci_log.debug_verbose('ND6: unable to find delete target for %s in %s' \
+			% (nodename, clustername))
+		return None
+
+	model.deleteNode(delete_target)
+
+	try:
+		str_buf = model.exportModelAsString()
+		if not str_buf:
+			raise Exception, 'model string is blank'
+	except Exception, e:
+		luci_log.debug_verbose('ND7: exportModelAsString: %s' % str(e))
+		return None
+
+	# propagate the new cluster.conf via the second node
+	batch_number, result = setClusterConf(rc2, str(str_buf))
+	if batch_number is None:
+		luci_log.debug_verbose('ND8: setClusterConf returned a None batch number')
+		return None
+
+	#Now we need to delete the node from the DB
+	path = str(CLUSTER_FOLDER_PATH + clustername)
+	del_path = str(path + '/' + nodename_resolved)
+
+	try:
+		delnode = self.restrictedTraverse(del_path)
+		clusterfolder = self.restrictedTraverse(path)
+		clusterfolder.manage_delObjects(delnode[0])
+	except Exception, e:
+		luci_log.debug_verbose('ND9: error deleting %s: %s' \
+			% (del_path, str(e)))
+
+	try:
+		set_node_flag(self, clustername, rc2.hostname(), str(batch_number), NODE_DELETE, "Deleting node \'%s\'" % nodename_resolved)
+	except Exception, e:
+		luci_log.debug_verbose('ND10: failed to set flags: %s' % str(e))
+	return True
+
 def nodeTaskProcess(self, model, request):
 	try:
 		clustername = request['clustername']
@@ -2122,9 +2704,6 @@
 			return None
 
 	nodename_resolved = resolve_nodename(self, clustername, nodename)
-	if not nodename_resolved or not nodename or not task or not clustername:
-		luci_log.debug('resolve_nodename failed for NTP')
-		return None
 
 	if task != NODE_FENCE:
 		# Fencing is the only task for which we don't
@@ -2171,319 +2750,43 @@
 			return None
 
 	if task == NODE_LEAVE_CLUSTER:
-		path = str(CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved)
-
-		try:
-			nodefolder = self.restrictedTraverse(path)
-			if not nodefolder:
-				raise Exception, 'cannot find directory at %s' % path
-		except Exception, e:
-			luci_log.debug('node_leave_cluster err: %s' % str(e))
+		if nodeLeave(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeLeave failed')
 			return None
 
-		objname = str(nodename_resolved + "____flag")
-
-		fnpresent = noNodeFlagsPresent(self, nodefolder, objname, nodename_resolved)
-		if fnpresent is None:
-			luci_log.debug('An error occurred while checking flags for %s' \
-				% nodename_resolved)
-			return None
-
-		if fnpresent == False:
-			luci_log.debug('flags are still present for %s -- bailing out' \
-				% nodename_resolved)
-			return None
-
-		batch_number, result = nodeLeaveCluster(rc)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeLeaveCluster error: batch_number and/or result is None')
-			return None
-
-		batch_id = str(batch_number)
-		objpath = str(path + "/" + objname)
-
-		try:
-			nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_LEAVE_CLUSTER, "string")
-			flag.manage_addProperty(FLAG_DESC, "Node \'" + nodename + "\' leaving cluster", "string")
-		except:
-			luci_log.debug('An error occurred while setting flag %s' % objpath)
-
 		response = request.RESPONSE
-		#Is this correct? Should we re-direct to the cluster page?
-		response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_JOIN_CLUSTER:
-		batch_number, result = nodeJoinCluster(rc)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeJoin error: batch_number and/or result is None')
+		if nodeJoin(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeJoin failed')
 			return None
 
-		path = str(CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved)
-		batch_id = str(batch_number)
-		objname = str(nodename_resolved + "____flag")
-		objpath = str(path + "/" + objname)
-
-		try:
-			nodefolder = self.restrictedTraverse(path)
-			nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_JOIN_CLUSTER, "string")
-			flag.manage_addProperty(FLAG_DESC, "Node \'" + nodename + "\' joining cluster", "string")
-		except Exception, e:
-			luci_log.debug_verbose('nodeJoin error: creating flags at %s: %s' \
-				% (path, str(e)))
-
 		response = request.RESPONSE
-		#Once again, is this correct? Should we re-direct to the cluster page?
-		response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_REBOOT:
-		batch_number, result = nodeReboot(rc)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeReboot: batch_number and/or result is None')
+		if forceNodeReboot(self, rc, clustername, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeReboot failed')
 			return None
 
-		path = str(CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved)
-		batch_id = str(batch_number)
-		objname = str(nodename_resolved + "____flag")
-		objpath = str(path + "/" + objname)
-
-		try:
-			nodefolder = self.restrictedTraverse(path)
-			nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_REBOOT, "string")
-			flag.manage_addProperty(FLAG_DESC, "Node \'" + nodename + "\' is being rebooted", "string")
-		except Exception, e:
-			luci_log.debug_verbose('nodeReboot err: creating flags at %s: %s' \
-				% (path, str(e)))
-
 		response = request.RESPONSE
-		#Once again, is this correct? Should we re-direct to the cluster page?
-		response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_FENCE:
-		#here, we DON'T want to open connection to node to be fenced.
-		path = str(CLUSTER_FOLDER_PATH + clustername)
-		try:
-			clusterfolder = self.restrictedTraverse(path)
-			if not clusterfolder:
-				raise Exception, 'no cluster folder at %s' % path
-		except Exception, e:
-			luci_log.debug('The cluster folder for %s could not be found: %s' \
-				 % (clustername, str(e)))
-			return None
-
-		try:
-			nodes = clusterfolder.objectItems('Folder')
-			if not nodes or len(nodes) < 1:
-				raise Exception, 'no cluster nodes'
-		except Exception, e:
-			luci_log.debug('No cluster nodes for %s were found: %s' \
-				% (clustername, str(e)))
-			return None
-
-		found_one = False
-		for node in nodes:
-			if node[1].getId().find(nodename) != (-1):
-				continue
-
-			try:
-				rc = RicciCommunicator(node[1].getId())
-				if not rc:
-					continue
-			except RicciError, e:
-				luci_log.debug('ricci error for host %s: %s' \
-					% (node[0], str(e)))
-				continue
-			except:
-				continue
-
-			if not rc.authed():
-				rc = None
-				try:
-					snode = getStorageNode(self, node[1].getId())
-					setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
-				except:
-					pass
-
-				try:
-					setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
-				except:
-					pass
-
-				continue
-			found_one = True
-			break
-
-		if not found_one:
-			return None
-
-		batch_number, result = nodeFence(rc, nodename)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeFence: batch_number and/or result is None')
+		if forceNodeFence(self, clustername, nodename, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeFence failed')
 			return None
 
-		path = str(path + "/" + nodename_resolved)
-		batch_id = str(batch_number)
-		objname = str(nodename_resolved + "____flag")
-		objpath = str(path + "/" + objname)
-
-		try:
-			nodefolder = self.restrictedTraverse(path)
-			nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_FENCE, "string")
-			flag.manage_addProperty(FLAG_DESC, "Node \'" + nodename + "\' is being fenced", "string")
-		except Exception, e:
-			luci_log.debug_verbose('nodeFence err: creating flags at %s: %s' \
-				% (path, str(e)))
-
 		response = request.RESPONSE
-		#Once again, is this correct? Should we re-direct to the cluster page?
-		response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 	elif task == NODE_DELETE:
-		#We need to get a node name other than the node
-		#to be deleted, then delete the node from the cluster.conf
-		#and propogate it. We will need two ricci agents for this task.
-
-		# Make sure we can find a second node before we hose anything.
-		path = str(CLUSTER_FOLDER_PATH + clustername)
-		try:
-			clusterfolder = self.restrictedTraverse(path)
-			if not clusterfolder:
-				raise Exception, 'no cluster folder at %s' % path
-		except Exception, e:
-			luci_log.debug_verbose('node delete error for cluster %s: %s' \
-				% (clustername, str(e)))
+		if nodeDelete(self, rc, model, clustername, nodename, nodename_resolved) is None:
+			luci_log.debug_verbose('NTP: nodeDelete failed')
 			return None
-
-		try:
-			nodes = clusterfolder.objectItems('Folder')
-			if not nodes or len(nodes) < 1:
-				raise Exception, 'no cluster nodes in DB'
-		except Exception, e:
-			luci_log.debug_verbose('node delete error for cluster %s: %s' \
-				% (clustername, str(e)))
-
-		found_one = False
-		for node in nodes:
-			if node[1].getId().find(nodename) != (-1):
-				continue
-			#here we make certain the node is up...
-			# XXX- we should also make certain this host is still
-			# in the cluster we believe it is.
-			try:
-				rc2 = RicciCommunicator(node[1].getId())
-			except Exception, e:
-				luci_log.info('ricci %s error: %s' % (node[0], str(e)))
-				continue
-			except:
-				continue
-
-			if not rc2.authed():
-				try:
-					setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
-				except:
-					pass
-
-				try:
-					snode = getStorageNode(self, node[0])
-					setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
-				except:
-					pass
-
-				luci_log.debug_verbose('%s is not authed' % node[0])
-				rc2 = None
-				continue
-			else:
-				found_one = True
-				break
-
-		if not found_one:
-			luci_log.debug_verbose('unable to find ricci node to delete %s from %s' % (nodename, clustername))
-			return None
-
-		#First, delete cluster.conf from node to be deleted.
-		#next, have node leave cluster.
-		batch_number, result = nodeLeaveCluster(rc, purge=True)
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('nodeDelete: batch_number and/or result is None')
-			return None
-
-		#It is not worth flagging this node in DB, as we are going
-		#to delete it anyway. Now, we need to delete node from model
-		#and send out new cluster.conf
-		delete_target = None
-		nodelist = model.getNodes()
-		find_node = lower(nodename)
-		for n in nodelist:
-			try:
-				if lower(n.getName()) == find_node:
-					delete_target = n
-					break
-			except:
-				continue
-
-		if delete_target is None:
-			luci_log.debug_verbose('unable to find delete target for %s in %s' \
-				% (nodename, clustername))
-			return None
-
-		model.deleteNode(delete_target)
-
-		try:
-			str_buf = model.exportModelAsString()
-			if not str_buf:
-				raise Exception, 'model string is blank'
-		except Exception, e:
-			luci_log.debug_verbose('NTP exportModelAsString: %s' % str(e))
-			return None
-
-		# propagate the new cluster.conf via the second node
-		batch_number, result = setClusterConf(rc2, str(str_buf))
-		if batch_number is None:
-			luci_log.debug_verbose('batch number is None after del node in NTP')
-			return None
-
-		#Now we need to delete the node from the DB
-		path = str(CLUSTER_FOLDER_PATH + clustername)
-		del_path = str(path + "/" + nodename_resolved)
-
-		try:
-			delnode = self.restrictedTraverse(del_path)
-			clusterfolder = self.restrictedTraverse(path)
-			clusterfolder.manage_delObjects(delnode[0])
-		except Exception, e:
-			luci_log.debug_verbose('error deleting %s: %s' % (del_path, str(e)))
-
-		batch_id = str(batch_number)
-		objname = str(nodename_resolved + "____flag")
-		objpath = str(path + "/" + objname)
-
-		try:
-			clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#Now we need to annotate the new DB object
-			flag = self.restrictedTraverse(objpath)
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, NODE_DELETE, "string")
-			flag.manage_addProperty(FLAG_DESC, "Deleting node \'" + nodename + "\'", "string")
-		except Exception, e:
-			luci_log.debug_verbose('nodeDelete %s err setting flag@%s: %s' \
-				% (nodename, objpath, str(e)))
-
 		response = request.RESPONSE
-		response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
+		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
 
 def getNodeInfo(self, model, status, request):
   infohash = {}
+  item = None
   baseurl = request['URL']
   nodestate = NODE_ACTIVE
   svclist = list()
@@ -2553,25 +2856,38 @@
 
   #return infohash
   infohash['d_states'] = None
+
+  nodename_resolved = resolve_nodename(self, clustername, nodename)
+
   if nodestate == NODE_ACTIVE or nodestate == NODE_INACTIVE:
   #call service module on node and find out which daemons are running
-    rc = RicciCommunicator(nodename)
-    dlist = list()
-    dlist.append("ccsd")
-    dlist.append("cman")
-    dlist.append("fenced")
-    dlist.append("rgmanager")
-    states = getDaemonStates(rc, dlist)
-    infohash['d_states'] = states
+    try:
+      rc = RicciCommunicator(nodename_resolved)
+      if not rc:
+        raise Exception, 'rc is none'
+    except Exception, e:
+      rc = None
+      luci_log.info('Error connecting to %s: %s' \
+          % (nodename_resolved, str(e)))
+
+    if rc is not None:
+      dlist = list()
+      dlist.append("ccsd")
+      dlist.append("cman")
+      dlist.append("fenced")
+      dlist.append("rgmanager")
+      states = getDaemonStates(rc, dlist)
+      infohash['d_states'] = states
 
-  infohash['logurl'] = '/luci/logs/?nodename=' + nodename + '&clustername=' + clustername
+  infohash['logurl'] = '/luci/logs/?nodename=' + nodename_resolved + '&clustername=' + clustername
   return infohash
   #get list of faildoms for node
 
-def getNodesInfo(self, model,status,req):
+def getNodesInfo(self, model, status, req):
   resultlist = list()
   nodelist = list()
   svclist = list()
+
   #Sort into lists...
   for item in status:
     if item['type'] == "node":
@@ -2581,13 +2897,36 @@
     else:
       continue
 
+  try:
+    clustername = req['clustername']
+    if not clustername:
+      raise KeyError, 'clustername is blank'
+  except:
+    try:
+      clustername = req.form['clustername']
+      if not clustername:
+        raise KeyError, 'clustername is blank'
+    except:
+      try:
+        clustername = req.form['clusterName']
+      except:
+        try:
+          clustername = model.getClusterName()
+        except:
+          luci_log.debug_verbose('GNI0: unable to determine cluster name')
+          return {}
+
   for item in nodelist:
     map = {}
     name = item['name']
     map['nodename'] = name
-    clustername = req['clustername']
-    baseurl = req['URL']
+    try:
+      baseurl = req['URL']
+    except:
+      baseurl = '/luci/cluster/index_html'
+
     cfgurl = baseurl + "?" + PAGETYPE + "=" + NODE + "&" + CLUNAME + "=" + clustername + "&nodename=" + name
+
     map['configurl'] = cfgurl
     map['fenceurl'] = cfgurl + "#fence"
     if item['clustered'] == "true":
@@ -2600,7 +2939,10 @@
       map['status'] = NODE_INACTIVE
       map['status_str'] = NODE_INACTIVE_STR
 
-    map['logurl'] = '/luci/logs?nodename=' + name + '&clustername=' + clustername
+    nodename_resolved = resolve_nodename(self, clustername, name)
+
+    map['logurl'] = '/luci/logs?nodename=' + nodename_resolved + '&clustername=' + clustername
+
     #set up URLs for dropdown menu...
     if map['status'] == NODE_ACTIVE:
       map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_LEAVE_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
@@ -2644,115 +2986,328 @@
 
 def getFence(self, model, request):
   map = {}
-  fencename = request['fencedevicename']
+  fencename = request['fencename']
   fencedevs = model.getFenceDevices()
   for fencedev in fencedevs:
     if fencedev.getName().strip() == fencename:
       map = fencedev.getAttributes()
+      try:
+        map['pretty_name'] = FENCE_OPTS[fencedev.getAgentType()]
+      except:
+        map['pretty_name'] = fencedev.getAgentType()
+
+      nodes_used = list()
+      nodes = model.getNodes()
+      for node in nodes:
+        flevels = node.getFenceLevels()
+        for flevel in flevels: #These are the method blocks...
+          kids = flevel.getChildren()
+          for kid in kids: #These are actual devices in each level
+            if kid.getName().strip() == fencedev.getName().strip():
+              #See if this fd already has an entry for this node
+              found_duplicate = False
+              for item in nodes_used:
+                if item['nodename'] == node.getName().strip():
+                  found_duplicate = True
+              if found_duplicate == True:
+                continue
+              baseurl = request['URL']
+              clustername = model.getClusterName()
+              node_hash = {}
+              node_hash['nodename'] = node.getName().strip()
+              node_hash['nodeurl'] = baseurl + "?clustername=" + clustername + "&nodename=" + node.getName() + "&pagetype=" + NODE 
+              nodes_used.append(node_hash)
+
+      map['nodesused'] = nodes_used
       return map
 
   return map
+
+def getFDForInstance(fds, name):
+  for fd in fds:
+    if fd.getName().strip() == name:
+      return fd
+
+  raise KeyError, name
   
 def getFenceInfo(self, model, request):
+  clustername = request['clustername']
+  baseurl = request['URL']
   map = {}
-  fencedevs = list() 
-  level1 = list()
-  level2 = list()
+  level1 = list() #First level fence devices
+  level2 = list() #Second level fence devices
+  shared1 = list() #List of available sharable fence devs not used in level1
+  shared2 = list() #List of available sharable fence devs not used in level2
   map['level1'] = level1
   map['level2'] = level2
-  map['fencedevs'] = fencedevs
-  nodename = ""
-  if request == None:  #this is being called by the fence device page
-    #Get list of fence devices
-    fds = model.getFenceDevices()
-    for fd in fds:
-      #create fencedev hashmap
-      if fd.isShared() == True:
-        fencedev = fd.getAttributes()
-        fencedevs.append(fencedev)
-      
-    return map
+  map['shared1'] = shared1
+  map['shared2'] = shared2
 
-  else:
-    try:
-      nodename = request['nodename']
-    except KeyError, e:
-      raise GeneralError('FATAL', "Could not extract nodename from request")
+  major_num = 1
+  minor_num = 100
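+  # 'id' fields: fence device structs are numbered from major_num
+  # (counting up from 1); instance structs beneath a shared device draw
+  # from minor_num (counting up from 100).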
+
+  try:
+    nodename = request['nodename']
+  except KeyError, e:
+    raise GeneralError('FATAL', "Could not extract nodename from request")
     
-    #here we need to get fences for a node - just the first two levels
-    #then fill in two data structures with all attr's 
-    try:
-      node = model.retrieveNodeByName(nodename)
-    except GeneralError, e:
-      raise GeneralError('FATAL', "Couldn't find node name in current node list")
+  #Here we need to get fences for a node - just the first two levels
+  #Each level has its own list of fence devs used in that level
+  #For each fence dev, a list of instance structs is appended
+  #In addition, for each level, a list of available but unused fence devs
+  #is returned. 
+  try:
+    node = model.retrieveNodeByName(nodename)
+  except GeneralError, e:
+    raise GeneralError('FATAL', "Couldn't find node name in current node list")
 
-    levels = node.getFenceLevels()
-    len_levels = len(levels)
+  fds = model.getFenceDevices()
 
-    if len_levels == 0:
-      return map
+  levels = node.getFenceLevels()
+  len_levels = len(levels)
+
+  if len_levels == 0:
+    return map
 
-    for i in xrange(2):
-      if not i in levels:
+  if len_levels >= 1:
+    first_level = levels[0]
+    kids = first_level.getChildren()
+    last_kid_fd = None  #This is a marker for allowing multi instances
+                        #beneath a fencedev
+    for kid in kids:
+      instance_name = kid.getName().strip()
+      try:
+        fd = getFDForInstance(fds, instance_name)
+      except:
+        fd = None #Set to None in case last time thru loop
         continue
-      fence_struct = {}
-      if levels[i] != None:
-        level = levels[i]
-      else:
-        #No more levels...
+
+      if fd is not None:
+        if fd.isShared() == False:  #Not a shared dev...build struct and add
+          fencedev = {}
+          fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+          fencedev['isShared'] = False
+          fencedev['id'] = str(major_num)
+          major_num = major_num + 1
+          devattrs = fd.getAttributes()
+          kees = devattrs.keys()
+          for kee in kees:
+            fencedev[kee] = devattrs[kee]
+          kidattrs = kid.getAttributes()
+          kees = kidattrs.keys()
+          for kee in kees:
+            if kee == "name":
+              continue #Don't duplicate name attr
+            fencedev[kee] = kidattrs[kee]
+          #This fencedev struct is complete, and needs to be placed on the 
+          #level1 Q. Because it is non-shared, we should set last_kid_fd
+          #to none.
+          last_kid_fd = None
+          level1.append(fencedev)
+        else:  #This dev is shared
+          if (last_kid_fd is not None) and (fd.getName().strip() == last_kid_fd.getName().strip()):  #just append a new instance struct to last_kid_fd
+            instance_struct = {}
+            instance_struct['id'] = str(minor_num)
+            minor_num = minor_num + 1
+            kidattrs = kid.getAttributes()
+            kees = kidattrs.keys()
+            for kee in kees:
+              if kee == "name":
+                continue
+              instance_struct[kee] = kidattrs[kee]
+            #Now just add this struct to last_kid_fd and reset last_kid_fd
+            ilist = last_kid_fd['instance_list']
+            ilist.append(instance_struct)
+            last_kid_fd = fd
+            continue
+          else: #Shared, but not used above...so we need a new fencedev struct
+            fencedev = {}
+            fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+            fencedev['isShared'] = True
+            fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV 
+            fencedev['id'] = str(major_num)
+            major_num = major_num + 1
+            inlist = list()
+            fencedev['instance_list'] = inlist
+            devattrs = fd.getAttributes()
+            kees = devattrs.keys()
+            for kee in kees:
+              fencedev[kee] = devattrs[kee]
+            instance_struct = {}
+            kidattrs = kid.getAttributes()
+            kees = kidattrs.keys()
+            for kee in kees:
+              if kee == "name":
+                continue
+              instance_struct[kee] = kidattrs[kee]
+            inlist.append(instance_struct) 
+            level1.append(fencedev)
+            last_kid_fd = fd
+            continue
+
+    #level1 list is complete now, but it is still necessary to build shared1
+    sharednames = list()
+    for fd in fds:
+      isUnique = True
+      if fd.isShared() == False:
         continue
-      kids = level.getChildren()
-      if len(kids) == 0:
+      for fdev in level1:
+        if fd.getName().strip() == fdev['name']:
+          isUnique = False
+          break
+      if isUnique == True:
+        shared_struct = {}
+        shared_struct['name'] = fd.getName().strip()
+        agentname = fd.getAgentType()
+        shared_struct['agent'] = agentname
+        shared_struct['prettyname'] = FENCE_OPTS[agentname]
+        shared1.append(shared_struct)
+
+  #YUK: This next section violates the DRY rule, :-(
+  if len_levels >= 2:
+    second_level = levels[1]
+    kids = second_level.getChildren()
+    last_kid_fd = None  #This is a marker for allowing multi instances
+                        #beneath a fencedev
+    for kid in kids:
+      instance_name = kid.getName().strip()
+      try:
+        fd = getFDForInstance(fds, instance_name)
+      except:
+        fd = None #Set to None in case last time thru loop
         continue
-      else:
-        #for each kid, 
-        ### resolve name, find fence device
-        ### Add fd to list, if it is not there yet 
-        ### determine if it is a shared fence type
-        ### if it is a shared device, add instance entry
-        fds = model.getFenceDevices()
-        fence_struct = None
-        for kid in kids:
-          name = kid.getName()
-          found_fd = False
-          if not i in map:
-			continue
-          for entry in map[i]:
-            if entry['name'] == name:
-              fence_struct = entry
-              found_fd = True
-              break
-          if found_fd == False:
-            for fd in fds:
-              if fd.getName() == name:  #Found the fence device
-                fence_struct = {}
-                fence_struct['isShareable'] = fd.isShared()
-                fd_attrs = fd.getAttributes()
-                kees = fd_attrs.keys()
-                for kee in kees:
-                  fence_struct[kee] = fd_attrs[kee]
-          fi_attrs = kid.getAttributes()
-          kees = fi_attrs.keys()
-          if fence_struct['isShareable'] == True:
+      if fd is not None:
+        if fd.isShared() == False:  #Not a shared dev...build struct and add
+          fencedev = {}
+          fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+          fencedev['isShared'] = False
+          fencedev['id'] = str(major_num)
+          major_num = major_num + 1
+          devattrs = fd.getAttributes()
+          kees = devattrs.keys()
+          for kee in kees:
+            fencedev[kee] = devattrs[kee]
+          kidattrs = kid.getAttributes()
+          kees = kidattrs.keys()
+          for kee in kees:
+            if kee == "name":
+              continue #Don't duplicate name attr
+            fencedev[kee] = kidattrs[kee]
+          #This fencedev struct is complete, and needs to be placed on the 
+          #level2 Q. Because it is non-shared, we should set last_kid_fd
+          #to none.
+          last_kid_fd = None
+          level2.append(fencedev)
+        else:  #This dev is shared
+          if (last_kid_fd is not None) and (fd.getName().strip() == last_kid_fd.getName().strip()):  #just append a new instance struct to last_kid_fd
             instance_struct = {}
+            instance_struct['id'] = str(minor_num)
+            minor_num = minor_num + 1
+            kidattrs = kid.getAttributes()
+            kees = kidattrs.keys()
             for kee in kees:
-              instance_struct[kee] = fi_attrs[kee]
-              try:
-                  check = fence_struct['instances']
-                  check.append(instance_struct)
-              except KeyError, e:
-                  fence_struct['instances'] = list()
-                  fence_struct['instances'].append(instance_struct) 
-          else:  #Not a shareable fence device type
+              if kee == "name":
+                continue
+              instance_struct[kee] = kidattrs[kee]
+            #Now just add this struct to last_kid_fd and reset last_kid_fd
+            ilist = last_kid_fd['instance_list']
+            ilist.append(instance_struct)
+            last_kid_fd = fd
+            continue
+          else: #Shared, but not used above...so we need a new fencedev struct
+            fencedev = {}
+            fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
+            fencedev['isShared'] = True
+            fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV 
+            fencedev['id'] = str(major_num)
+            major_num = major_num + 1
+            inlist = list()
+            fencedev['instance_list'] = inlist
+            devattrs = fd.getAttributes()
+            kees = devattrs.keys()
             for kee in kees:
-              fence_struct[kee] = fi_attrs[kee]
-        if i == 0:
-          level1.append(fence_struct)      
-        else:
-          level2.append(fence_struct)      
+              fencedev[kee] = devattrs[kee]
+            instance_struct = {}
+            kidattrs = kid.getAttributes()
+            kees = kidattrs.keys()
+            for kee in kees:
+              if kee == "name":
+                continue
+              instance_struct[kee] = kidattrs[kee]
+            inlist.append(instance_struct) 
+            level2.append(fencedev)
+            last_kid_fd = fd
+            continue
 
-    return map    
+    #level2 list is complete but like above, we need to build shared2
+    sharednames = list()
+    for fd in fds:
+      isUnique = True
+      if fd.isShared() == False:
+        continue
+      for fdev in level2:
+        if fd.getName().strip() == fdev['name']:
+          isUnique = False
+          break
+      if isUnique == True:
+        shared_struct = {}
+        shared_struct['name'] = fd.getName().strip()
+        agentname = fd.getAgentType()
+        shared_struct['agent'] = agentname
+        shared_struct['prettyname'] = FENCE_OPTS[agentname]
+        shared2.append(shared_struct)
+
+  return map    
       
+def getFencesInfo(self, model, request):
+  clustername = request['clustername']
+  baseurl = request['URL']
+  map = {}
+  fencedevs = list() #This is for the fencedev list page
+  map['fencedevs'] = fencedevs
+  #Get list of fence devices
+  fds = model.getFenceDevices()
+  for fd in fds:
+    #create fencedev hashmap
+    if fd.isShared() == True:
+      nodes_used = list() #This list collects the nodes that use this dev
+      fencedev = {}
+      attr_hash = fd.getAttributes()
+      kees = attr_hash.keys()
+      for kee in kees:
+        fencedev[kee] = attr_hash[kee] #copy attrs over
+      try:
+        fencedev['pretty_name'] = FENCE_OPTS[fd.getAgentType()]
+      except:
+        fencedev['pretty_name'] = fd.getAgentType()
+      #Add config url for this fencedev
+      fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV
+
+      nodes = model.getNodes()
+      for node in nodes:
+        flevels = node.getFenceLevels()
+        for flevel in flevels: #These are the method blocks...
+          kids = flevel.getChildren()
+          for kid in kids: #These are actual devices in each level
+            if kid.getName().strip() == fd.getName().strip():
+              #See if this fd already has an entry for this node
+              found_duplicate = False
+              for item in nodes_used:
+                if item['nodename'] == node.getName().strip():
+                  found_duplicate = True
+              if found_duplicate == True:
+                continue
+              node_hash = {}
+              node_hash['nodename'] = node.getName().strip()
+              node_hash['nodeurl'] = baseurl + "?clustername=" + clustername + "&nodename=" + node.getName() + "&pagetype=" + NODE 
+              nodes_used.append(node_hash)
+
+      fencedev['nodesused'] = nodes_used
+      fencedevs.append(fencedev)
+    
+  return map
+
+    
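+# For reference, the map returned by getFencesInfo has the shape sketched
+# below. The cluster name, agent type and the 'ipaddr' attribute are
+# hypothetical examples; the keys copied in from fd.getAttributes() vary
+# per fence agent.
+#
+#  {'fencedevs': [
+#    {'name': 'apc1',
+#     'agent': 'fence_apc',
+#     'pretty_name': FENCE_OPTS['fence_apc'],
+#     'ipaddr': '10.0.0.5',
+#     'cfgurl': baseurl + '?clustername=c1&fencename=apc1&pagetype=' + FENCEDEV,
+#     'nodesused': [
+#       {'nodename': 'node1',
+#        'nodeurl': baseurl + '?clustername=c1&nodename=node1&pagetype=' + NODE}]}]}
+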
 def getLogsForNode(self, request):
 	try:
 		nodename = request['nodename']
@@ -2780,12 +3335,7 @@
 	if clustername is None:
 		nodename_resolved = nodename
 	else:
-		try:
-			nodename_resolved = resolve_nodename(self, clustername, nodename)
-		except:
-			luci_log.debug_verbose('Unable to resolve node name %s/%s to retrieve logging information' \
-				% (nodename, clustername))
-			return 'Unable to resolve node name for %s in cluster %s' % (nodename, clustername)
+		nodename_resolved = resolve_nodename(self, clustername, nodename)
 
 	try:
 		rc = RicciCommunicator(nodename_resolved)
@@ -2838,7 +3388,7 @@
   try:
     stringbuf = model.exportModelAsString()
     if not stringbuf:
-   	  raise Exception, 'model is blank'
+      raise Exception, 'model is blank'
   except Exception, e:
     luci_log.debug_verbose('exportModelAsString error: %s' % str(e))
     return None
@@ -2861,17 +3411,14 @@
 def getXenVMInfo(self, model, request):
 	try:
 		xenvmname = request['servicename']
-	except KeyError, e:
+	except:
 		try:
 			xenvmname = request.form['servicename']
 		except:
 			luci_log.debug_verbose('servicename is missing from request')
 			return {}
-	except:
-		luci_log.debug_verbose('servicename is missing from request')
-		return {}
 
-	try:  
+	try:
 		xenvm = model.retrieveXenVMsByName(xenvmname)
 	except:
 		luci_log.debug('An error occurred while attempting to get VM %s' \
@@ -2915,7 +3462,7 @@
   try:
     items = clusterfolder.objectItems('ManagedSystem')
     if not items or len(items) < 1:
-      luci_log.debug_verbose('ICB3: no flags at %s for cluster %s' \
+      luci_log.debug_verbose('ICB3: NOT BUSY: no flags at %s for cluster %s' \
           % (cluname, path))
       return map  #This returns an empty map, and should indicate not busy
   except Exception, e:
@@ -2925,7 +3472,7 @@
     luci_log.debug('ICB5: An error occurred while looking for cluster %s flags at path %s' % (cluname, path))
     return map
 
-  luci_log.debug_verbose('ICB6: isClusterBusy: %s is busy: %d flags' \
+  luci_log.debug_verbose('ICB6: %s is busy: %d flags' \
       % (cluname, len(items)))
   map['busy'] = "true"
   #Ok, here is what is going on...if there is an item,
@@ -2962,27 +3509,25 @@
         rc = RicciCommunicator(ricci[0])
         if not rc:
           rc = None
-          raise RicciError, 'rc is None for %s' % ricci[0]
-      except RicciError, e:
+          luci_log.debug_verbose('ICB6b: rc is none')
+      except Exception, e:
         rc = None
-        luci_log.debug_verbose('ICB7: ricci returned error in iCB for %s: %s' \
+        luci_log.debug_verbose('ICB7: RC: %s: %s' \
           % (cluname, str(e)))
-      except:
-        rc = None
-        luci_log.info('ICB8: ricci connection failed for cluster %s' % cluname)
 
       batch_id = None
       if rc is not None:
         try:
           batch_id = item[1].getProperty(BATCH_ID)
-          luci_log.debug_verbose('ICB8A: got batch_id %s from %s' \
+          luci_log.debug_verbose('ICB8: got batch_id %s from %s' \
               % (batch_id, item[0]))
         except Exception, e:
           try:
             luci_log.debug_verbose('ICB8B: failed to get batch_id from %s: %s' \
                 % (item[0], str(e)))
           except:
-            luci_log.debug_verbose('ICB8C: failed to get batch_id from %s' % item[0])
+            luci_log.debug_verbose('ICB8C: failed to get batch_id from %s' \
+              % item[0])
 
         if batch_id is not None:
           try:
@@ -3030,18 +3575,31 @@
           elif laststatus == 0:
             node_report['statusindex'] = 0
             node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_INSTALL
+          elif laststatus == DISABLE_SVC_TASK:
+            node_report['statusindex'] = DISABLE_SVC_TASK
+            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_CFG
           elif laststatus == REBOOT_TASK:
             node_report['statusindex'] = REBOOT_TASK
             node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_CFG
           elif laststatus == SEND_CONF:
             node_report['statusindex'] = SEND_CONF
             node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_JOIN
+          elif laststatus == ENABLE_SVC_TASK:
+            node_report['statusindex'] = ENABLE_SVC_TASK
+            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_JOIN
+          else:
+            node_report['statusindex'] = 0
+            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + ' Install is in an unknown state.'
           nodereports.append(node_report)
           continue
         elif creation_status == -(INSTALL_TASK):
           node_report['iserror'] = True
           (err_code, err_msg) = extract_module_status(batch_xml, INSTALL_TASK)
           node_report['errormessage'] = CLUNODE_CREATE_ERRORS[INSTALL_TASK] + err_msg
+        elif creation_status == -(DISABLE_SVC_TASK):
+          node_report['iserror'] = True
+          (err_code, err_msg) = extract_module_status(batch_xml, DISABLE_SVC_TASK)
+          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[DISABLE_SVC_TASK] + err_msg
         elif creation_status == -(REBOOT_TASK):
           node_report['iserror'] = True
           (err_code, err_msg) = extract_module_status(batch_xml, REBOOT_TASK)
@@ -3050,6 +3608,10 @@
           node_report['iserror'] = True
           (err_code, err_msg) = extract_module_status(batch_xml, SEND_CONF)
           node_report['errormessage'] = CLUNODE_CREATE_ERRORS[SEND_CONF] + err_msg
+        elif creation_status == -(ENABLE_SVC_TASK):
+          node_report['iserror'] = True
+          (err_code, err_msg) = extract_module_status(batch_xml, ENABLE_SVC_TASK)
+          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[ENABLE_SVC_TASK] + err_msg
         elif creation_status == -(START_NODE):
           node_report['iserror'] = True
           (err_code, err_msg) = extract_module_status(batch_xml, START_NODE)
@@ -3057,7 +3619,13 @@
         else:
           node_report['iserror'] = True
           node_report['errormessage'] = CLUNODE_CREATE_ERRORS[0]
-        clusterfolder.manage_delObjects(item[0])
+
+        try:
+          clusterfolder.manage_delObjects(item[0])
+        except Exception, e:
+          luci_log.debug_verbose('ICB14: delObjects: %s: %s' \
+            % (item[0], str(e)))
+
         nodereports.append(node_report)
         continue
       else:  #either batch completed successfully, or still running
@@ -3069,7 +3637,7 @@
           try:
               clusterfolder.manage_delObjects(item[0])
           except Exception, e:
-              luci_log.info('ICB14: Unable to delete %s: %s' % (item[0], str(e)))
+              luci_log.info('ICB15: Unable to delete %s: %s' % (item[0], str(e)))
           continue
         else:
           map['busy'] = "true"
@@ -3079,23 +3647,41 @@
           nodereports.append(node_report)
           propslist = list()
           propslist.append(LAST_STATUS)
-          item[1].manage_delProperties(propslist)
-          item[1].manage_addProperty(LAST_STATUS,creation_status, "int")
+          try:
+            item[1].manage_delProperties(propslist)
+            item[1].manage_addProperty(LAST_STATUS, creation_status, "int")
+          except Exception, e:
+            luci_log.debug_verbose('ICB16: last_status err: %s %d: %s' \
+              % (item[0], creation_status, str(e)))
           continue
           
     else:
       node_report = {}
       node_report['isnodecreation'] = False
       ricci = item[0].split("____") #This removes the 'flag' suffix
-      rc = RicciCommunicator(ricci[0])
-      finished = checkBatch(rc, item[1].getProperty(BATCH_ID))
+
+      try:
+        rc = RicciCommunicator(ricci[0])
+      except Exception, e:
+        rc = None
+        finished = False
+        luci_log.debug_verbose('ICB17: ricci error: %s: %s' \
+          % (ricci[0], str(e)))
+
+      if rc is not None:
+        finished = checkBatch(rc, item[1].getProperty(BATCH_ID))
+
       if finished == True:
-        node_report['desc'] = item[1].getProperty(FLAG_DESC) + REDIRECT_MSG
+        flag_desc = item[1].getProperty(FLAG_DESC)
+        if flag_desc is None:
+          node_report['desc'] = REDIRECT_MSG
+        else:
+          node_report['desc'] = flag_desc + REDIRECT_MSG
         nodereports.append(node_report)
         try:
             clusterfolder.manage_delObjects(item[0])
         except Exception, e:
-            luci_log.info('Unable to delete %s: %s' % (item[0], str(e)))
+            luci_log.info('ICB18: Unable to delete %s: %s' % (item[0], str(e)))
       else:
         node_report = {}
         map['busy'] = "true"
@@ -3106,6 +3692,7 @@
   if isBusy:
     part1 = req['ACTUAL_URL']
     part2 = req['QUERY_STRING']
+
     dex = part2.find("&busyfirst")
     if dex != (-1):
       tmpstr = part2[:dex] #This strips off busyfirst var
@@ -3113,11 +3700,14 @@
       ###FIXME - The above assumes that the 'busyfirst' query var is at the
       ###end of the URL...
     wholeurl = part1 + "?" + part2
-    #map['url'] = "5, url=" + req['ACTUAL_URL'] + "?" + req['QUERY_STRING']
     map['refreshurl'] = "5; url=" + wholeurl
     req['specialpagetype'] = "1"
   else:
-    map['refreshurl'] = '5; url=\".\"'
+    try:
+      query = req['QUERY_STRING'].replace('&busyfirst=true', '')
+      map['refreshurl'] = '5; url=' + req['ACTUAL_URL'] + '?' + query
+    except:
+      map['refreshurl'] = '5; url=/luci/cluster?pagetype=3'
   return map
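+
+# A sketch of the two refresh-URL branches above, pulled out into a
+# self-contained helper; the function name and the URL/query values in the
+# comment are hypothetical, not part of the page logic. For example,
+#   refresh_url_sketch('/luci/cluster/index_html',
+#                      'pagetype=3&clustername=c1&busyfirst=true')
+# returns ('5; url=/luci/cluster/index_html?pagetype=3&clustername=c1',
+#          '5; url=/luci/cluster/index_html?pagetype=3&clustername=c1')
+def refresh_url_sketch(actual_url, query):
+  #busy branch: strip '&busyfirst' and everything after it (see the FIXME)
+  dex = query.find("&busyfirst")
+  if dex != (-1):
+    busy_query = query[:dex]
+  else:
+    busy_query = query
+  busy_url = "5; url=" + actual_url + "?" + busy_query
+  #idle branch: remove only the busyfirst parameter itself
+  idle_url = '5; url=' + actual_url + '?' + query.replace('&busyfirst=true', '')
+  return (busy_url, idle_url)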
 
 def getClusterOS(self, rc):
@@ -3145,15 +3735,12 @@
 
 	try:
 		cluname = request['clustername']
-	except KeyError, e:
+	except:
 		try:
 			cluname = request.form['clustername']
 		except:
 			luci_log.debug_verbose('getResourcesInfo missing cluster name')
 			return resList
-	except:
-		luci_log.debug_verbose('getResourcesInfo missing cluster name')
-		return resList
 
 	for item in modelb.getResources():
 		itemmap = {}
@@ -3167,24 +3754,22 @@
 
 def getResourceInfo(modelb, request):
 	if not modelb:
-		luci_log.debug_verbose('no modelb obj in getResourceInfo')
+		luci_log.debug_verbose('GRI0: no modelb object in session')
 		return {}
 
 	name = None
 	try:
 		name = request['resourcename']
-	except KeyError, e:
+	except:
 		try:
 			name = request.form['resourcename']
 		except:
 			pass
-	except:
-		pass
 
 	if name is None:
 		try:
-			type = request.form['type']
-			if type == 'ip':
+			res_type = request.form['type']
+			if res_type == 'ip':
 				name = request.form['value'].strip()
 		except:
 			pass
@@ -3195,15 +3780,12 @@
 
 	try:
 		cluname = request['clustername']
-	except KeyError, e:
+	except:
 		try:
 			cluname = request.form['clustername']
 		except:
 			luci_log.debug_verbose('getResourceInfo missing cluster name')
 			return {}
-	except:
-		luci_log.debug_verbose('getResourceInfo missing cluster name')
-		return {}
 
 	try:
 		baseurl = request['URL']
@@ -3225,41 +3807,47 @@
 				continue
 
 def delResource(self, rc, request):
-	errstr = 'An error occurred in while attempting to set the cluster.conf'
+	errstr = 'An error occurred while attempting to set the new cluster.conf'
 
 	try:
 		modelb = request.SESSION.get('model')
-	except:
-		luci_log.debug_verbose('delRes unable to extract model from SESSION')
+	except Exception, e:
+		luci_log.debug_verbose('delResource0: no model: %s' % str(e))
 		return errstr
 
+	name = None
 	try:
 		name = request['resourcename']
-	except KeyError, e:
+	except:
 		try:
 			name = request.form['resourcename']
 		except:
-			luci_log.debug_verbose('delRes missing resname %s' % str(e))
-			return errstr + ': ' + str(e)
-	except:
-		luci_log.debug_verbose('delRes missing resname')
-		return errstr + ': ' + str(e)
+			pass
 
+	if name is None:
+		luci_log.debug_verbose('delResource1: no resource name')
+		return errstr + ': no resource name was provided.'
+
+	clustername = None
 	try:
 		clustername = request['clustername']
-	except KeyError, e:
+	except:
 		try:
 			clustername = request.form['clustername']
 		except:
-			luci_log.debug_verbose('delRes missing cluster name')
-			return errstr + ': could not determine the cluster name.'
+			pass
+
+	if clustername is None:
+		luci_log.debug_verbose('delResource2: no cluster name for %s' % name)
+		return errstr + ': could not determine the cluster name.'
 
 	try:
 		ragent = rc.hostname()
 		if not ragent:
-			raise
-	except:
-		return errstr
+			raise Exception, 'unable to determine the hostname of the ricci agent'
+	except Exception, e:
+		luci_log.debug_verbose('delResource3: %s: %s' % (errstr, str(e)))
+		return errstr + ': could not determine the ricci agent hostname'
 
 	resPtr = modelb.getResourcesPtr()
 	resources = resPtr.getChildren()
@@ -3272,7 +3860,7 @@
 			break
 
 	if not found:
-		luci_log.debug_verbose('delRes cant find res %s' % name)
+		luci_log.debug_verbose('delResource4: cant find res %s' % name)
 		return errstr + ': the specified resource was not found.'
 
 	try:
@@ -3280,36 +3868,22 @@
 		if not conf:
 			raise Exception, 'model string is blank'
 	except Exception, e:
-		luci_log.debug_verbose('delRes: exportModelAsString failed: %s' % str(e))
+		luci_log.debug_verbose('delResource5: exportModelAsString failed: %s' \
+			% str(e))
 		return errstr
 
 	batch_number, result = setClusterConf(rc, str(conf))
 	if batch_number is None or result is None:
-		luci_log.debug_verbose('delRes: missing batch and/or result from setClusterConf')
+		luci_log.debug_verbose('delResource6: missing batch and/or result')
 		return errstr
 
-	modelstr = ""
-	path = CLUSTER_FOLDER_PATH + str(clustername)
-	clusterfolder = self.restrictedTraverse(path)
-	batch_id = str(batch_number)
-	objname = str(ragent) + '____flag'
-	objpath = str(path + '/' + objname)
-
 	try:
-		clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-		#Now we need to annotate the new DB object
-		flag = self.restrictedTraverse(objpath)
-		flag.manage_addProperty(BATCH_ID, batch_id, "string")
-		flag.manage_addProperty(TASKTYPE, RESOURCE_REMOVE, "string")
-		flag.manage_addProperty(FLAG_DESC, "Removing Resource \'" + request['resourcename'] + "\'", "string")
+		set_node_flag(self, clustername, ragent, str(batch_number), RESOURCE_REMOVE, "Removing Resource \'%s\'" % request['resourcename'])
 	except Exception, e:
-		luci_log.debug('delRes: An error occurred while setting flag %s: %s' \
-			% (objname, str(e)))
-	except:
-		luci_log.debug('delRes: An error occurred while setting flag %s' % objname)
+		luci_log.debug_verbose('delResource7: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
+	response.redirect(request['URL'] + "?pagetype=" + RESOURCES + "&clustername=" + clustername + '&busyfirst=true')
 
 def addIp(request, form=None):
 	if form is None:
@@ -3335,7 +3909,7 @@
 			return None
 	else:
 		try:
-			res = apply(Ip)
+			res = Ip()
 			if not res:
 				raise Exception, 'Ip() is None'
 		except Exception, e:
@@ -3391,7 +3965,7 @@
 			return None
 	else:
 		try:
-			res = apply(Fs)
+			res = Fs()
 			if not res:
 				raise Exception, 'Fs() is None'
 		except Exception, e:
@@ -3499,7 +4073,7 @@
 			return None
 	else:
 		try:
-			res = apply(Clusterfs)
+			res = Clusterfs()
 			if not res:
 				raise Exception, 'Clusterfs() is None'
 		except Exception, e:
@@ -3586,7 +4160,7 @@
 			return None
 	else:
 		try:
-			res = apply(Netfs)
+			res = Netfs()
 		except Exception, e:
 			luci_log.debug_verbose('addNfsm error: %s' % str(e))
 			return None
@@ -3681,7 +4255,7 @@
 			return None
 	else:
 		try:
-			res = apply(NFSClient)
+			res = NFSClient()
 		except Exception, e:
 			luci_log.debug_verbose('addNfsc error: %s' % str(e))
 			return None
@@ -3745,7 +4319,7 @@
 			return None
 	else:
 		try:
-			res = apply(NFSExport)
+			res = NFSExport()
 		except Exception, e:
 			luci_log.debug_verbose('addNfsx error: %s' % str(e))
 			return None
@@ -3793,7 +4367,7 @@
 			return None
 	else:
 		try:
-			res = apply(Script)
+			res = Script()
 		except Exception, e:
 			luci_log.debug_verbose('addScr error: %s' % str(e))
 			return None
@@ -3814,10 +4388,10 @@
 		luci_log.debug_verbose('addScr error: %s' % err)
 
 	try:
-		file = form['file'].strip()
-		if not file:
+		path = form['file'].strip()
+		if not path:
 			raise KeyError, 'file path is blank'
-		res.attr_hash['file'] = file
+		res.attr_hash['file'] = path
 	except Exception, e:
 		err = str(e)
 		errors.append(err)
@@ -3851,7 +4425,7 @@
 			return None
 	else:
 		try:
-			res = apply(Samba)
+			res = Samba()
 		except Exception, e:
 			luci_log.debug_verbose('addSmb error: %s' % str(e))
 			return None
@@ -3900,7 +4474,7 @@
 		if not mb_nodes or not len(mb_nodes):
 			raise Exception, 'node list is empty'
 	except Exception, e:
-		luci_log.debug_verbose('no model builder nodes found for %s: %s' \
+		luci_log.debug_verbose('RCC0: no model builder nodes found for %s: %s' \
 				% (clusterName, str(e)))
 		return 'Unable to find cluster nodes for %s' % clusterName
 
@@ -3909,17 +4483,18 @@
 		if not cluster_node:
 			raise Exception, 'cluster node is none'
 	except Exception, e:
-		luci_log.debug('cant find cluster node for %s: %s'
+		luci_log.debug('RCC1: cant find cluster node for %s: %s'
 			% (clusterName, str(e)))
 		return 'Unable to find an entry for %s in the Luci database.' % clusterName
 
 	try:
 		db_nodes = map(lambda x: x[0], cluster_node.objectItems('Folder'))
 		if not db_nodes or not len(db_nodes):
-			raise
-	except:
+			raise Exception, 'no database nodes'
+	except Exception, e:
 		# Should we just create them all? Can this even happen?
-		return 'Unable to find database entries for any nodes in ' + clusterName
+		luci_log.debug('RCC2: error: %s' % str(e))
+		return 'Unable to find database entries for any nodes in %s' % clusterName
 
 	same_host = lambda x, y: x == y or x[:len(y) + 1] == y + '.' or y[:len(x) + 1] == x + '.'
 
@@ -3946,11 +4521,15 @@
 
 	messages = list()
 	for i in missing_list:
-		cluster_node.delObjects([i])
-		## or alternately
-		#new_node = cluster_node.restrictedTraverse(i)
-		#setNodeFlag(self, new_node, CLUSTER_NODE_NOT_MEMBER)
-		messages.append('Node \"' + i + '\" is no longer in a member of cluster \"' + clusterName + '.\". It has been deleted from the management interface for this cluster.')
+		try:
+			## or alternately
+			##new_node = cluster_node.restrictedTraverse(i)
+			##setNodeFlag(self, new_node, CLUSTER_NODE_NOT_MEMBER)
+			cluster_node.delObjects([i])
+			messages.append('Node \"%s\" is no longer a member of cluster \"%s.\" It has been deleted from the management interface for this cluster.' % (i, clusterName))
+			luci_log.debug_verbose('VCC3: deleted node %s' % i)
+		except Exception, e:
+			luci_log.debug_verbose('VCC4: delObjects: %s: %s' % (i, str(e)))
 
 	new_flags = CLUSTER_NODE_NEED_AUTH | CLUSTER_NODE_ADDED
 	for i in new_list:
@@ -3958,69 +4537,66 @@
 			cluster_node.manage_addFolder(i, '__luci__:csystem:' + clusterName)
 			new_node = cluster_node.restrictedTraverse(i)
 			setNodeFlag(self, new_node, new_flags)
-			messages.append('A new node, \"' + i + ',\" is now a member of cluster \"' + clusterName + '.\" It has added to the management interface for this cluster, but you must authenticate to it in order for it to be fully functional.')
-		except:
-			messages.append('A new node, \"' + i + ',\" is now a member of cluster \"' + clusterName + ',\". but has not added to the management interface for this cluster as a result of an error creating the database entry.')
+			messages.append('A new cluster node, \"%s,\" is now a member of cluster \"%s.\" It has been added to the management interface for this cluster, but you must authenticate to it in order for it to be fully functional.' % (i, clusterName))
+		except Exception, e:
+			messages.append('A new cluster node, \"%s,\" is now a member of cluster \"%s,\" but it has not been added to the management interface for this cluster as a result of an error creating a database entry for it.' % (i, clusterName))
+			luci_log.debug_verbose('VCC5: addFolder: %s/%s: %s' \
+				% (clusterName, i, str(e)))
 	
 	return messages
 
-def addResource(self, request, modelb, res):
+def addResource(self, request, modelb, res, res_type):
 	clustername = modelb.getClusterName()
 	if not clustername:
-		raise Exception, 'cluster name from modelb.getClusterName() is blank'
+		luci_log.debug_verbose('addResource0: no cluname from mb')
+		return 'Unable to determine cluster name'
 
 	rc = getRicciAgent(self, clustername)
 	if not rc:
-		raise Exception, 'Unable to find a ricci agent for the %s cluster' % clustername
+		luci_log.debug_verbose('addResource1: unable to find a ricci agent for cluster %s' % clustername)
+		return 'Unable to find a ricci agent for the %s cluster' % clustername
 
-	modelb.getResourcesPtr().addChild(res)
+	try:
+		modelb.getResourcesPtr().addChild(res)
+	except Exception, e:
+		luci_log.debug_verbose('addResource2: adding the new resource failed: %s' % str(e))
+		return 'Unable to add the new resource'
 
 	try:
 		conf = modelb.exportModelAsString()
 		if not conf:
 			raise Exception, 'model string for %s is blank' % clustername
 	except Exception, e:
-		luci_log.debug_verbose('addResource: exportModelAsString err: %s' % str(e))
+		luci_log.debug_verbose('addResource3: exportModelAsString : %s' \
+			% str(e))
 		return 'An error occurred while adding this resource'
 
 	try:
 		ragent = rc.hostname()
 		if not ragent:
-			luci_log.debug_verbose('missing hostname')
+			luci_log.debug_verbose('addResource4: missing ricci hostname')
 			raise Exception, 'unknown ricci agent hostname'
-		luci_log.debug_verbose('SENDING NEW CLUSTER CONF: %s' % conf)
+
 		batch_number, result = setClusterConf(rc, str(conf))
 		if batch_number is None or result is None:
-			luci_log.debug_verbose('missing batch_number or result')
-			raise Exception, 'batch_number or results is None from setClusterConf'
+			luci_log.debug_verbose('addResource5: missing batch_number or result')
+			raise Exception, 'unable to save the new cluster configuration.'
 	except Exception, e:
+		luci_log.debug_verbose('addResource6: %s' % str(e))
 		return 'An error occurred while propagating the new cluster.conf: %s' % str(e)
 
-	path = str(CLUSTER_FOLDER_PATH + clustername)
-	clusterfolder = self.restrictedTraverse(path)
-	batch_id = str(batch_number)
-	objname = str(ragent + '____flag')
-	objpath = str(path + '/' + objname)
+	if res_type != 'ip':
+		res_name = res.attr_hash['name']
+	else:
+		res_name = res.attr_hash['address']
 
 	try:
-		clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-		#Now we need to annotate the new DB object
-		flag = self.restrictedTraverse(objpath)
-		flag.manage_addProperty(BATCH_ID, batch_id, "string")
-		flag.manage_addProperty(TASKTYPE, RESOURCE_ADD, "string")
-
-		if type != 'ip':
-			flag.manage_addProperty(FLAG_DESC, "Creating New Resource \'" + res.attr_hash['name'] + "\'", "string")
-		else:
-			flag.manage_addProperty(FLAG_DESC, "Creating New Resource \'" + res.attr_hash['address'] + "\'", "string")
+		set_node_flag(self, clustername, ragent, str(batch_number), RESOURCE_ADD, "Creating New Resource \'%s\'" % res_name)
 	except Exception, e:
-		try:
-			luci_log.info('Unable to create flag %s: %s' % (objpath, str(e)))
-		except:
-			pass
+		luci_log.debug_verbose('addResource7: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
+	response.redirect(request['URL'] + "?pagetype=" + RESOURCES + "&clustername=" + clustername + '&busyfirst=true')
 
 def getResourceForEdit(modelb, name):
 	resPtr = modelb.getResourcesPtr()
@@ -4048,21 +4624,26 @@
 		clusterfolder = self.restrictedTraverse(path)
 		objs = clusterfolder.objectItems('Folder')
 	except Exception, e:
-		luci_log.info('resolve_nodename failed for %s/%s: %s' \
+		luci_log.info('RNN0: error for %s/%s: %s' \
 			% (nodename, clustername, str(e)))
+		return nodename
 
 	for obj in objs:
-		if obj[0].find(nodename) != (-1):
-			return obj[0]
+		try:
+			if obj[0].find(nodename) != (-1):
+				return obj[0]
+		except:
+			continue
 
-	luci_log.info('resolve_nodename failed for %s/%s' % (nodename, clustername))
-	return None
+	luci_log.info('RNN1: failed for %s/%s: nothing found' \
+		% (nodename, clustername))
+	return nodename
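+
+# A condensed, self-contained sketch of the resolve_nodename contract above:
+# on any lookup failure the input name comes back unchanged, which is why
+# the try/except around resolve_nodename in getLogsForNode could be dropped.
+# 'objs' is a hypothetical stand-in for clusterfolder.objectItems('Folder').
+def resolve_nodename_sketch(objs, nodename):
+	for obj in objs:
+		try:
+			if obj[0].find(nodename) != (-1):
+				return obj[0]
+		except:
+			continue
+	return nodename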
 
 def noNodeFlagsPresent(self, nodefolder, flagname, hostname):
 	try:
 		items = nodefolder.objectItems('ManagedSystem')
 	except:
-		luci_log.debug('An error occurred while trying to list flags for cluster ' + nodefolder[0])
+		luci_log.debug('NNFP0: error getting flags for %s' % nodefolder[0])
 		return None
 
 	for item in items:
@@ -4071,9 +4652,10 @@
 
 		#a flag already exists... try to delete it
 		try:
+			# hostname must be a FQDN
 			rc = RicciCommunicator(hostname)
-		except RicciError, e:
-			luci_log.info('Unable to connect to the ricci daemon: %s' % str(e))
+		except Exception, e:
+			luci_log.info('NNFP1: ricci error %s: %s' % (hostname, str(e)))
 			return None
 
 		if not rc.authed():
@@ -4082,15 +4664,14 @@
 				setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
 			except:
 				pass
-			luci_log.info('Node %s is not authenticated' % item[0])
-			return None
+			luci_log.info('NNFP2: %s not authenticated' % item[0])
 
 		finished = checkBatch(rc, item[1].getProperty(BATCH_ID))
 		if finished == True:
 			try:
 				nodefolder.manage_delObjects(item[0])
 			except Exception, e:
-				luci_log.info('manage_delObjects for %s failed: %s' \
+				luci_log.info('NNFP3: manage_delObjects for %s failed: %s' \
 					% (item[0], str(e)))
 				return None
 			return True
@@ -4100,22 +4681,62 @@
 
 	return True
 
-def getModelBuilder(rc, isVirtualized):
+def getModelBuilder(self, rc, isVirtualized):
 	try:
 		cluster_conf_node = getClusterConf(rc)
 		if not cluster_conf_node:
-			raise
-	except:
-		luci_log.debug('unable to get cluster_conf_node in getModelBuilder')
+			raise Exception, 'getClusterConf returned None'
+	except Exception, e:
+		luci_log.debug_verbose('GMB0: unable to get cluster_conf_node in getModelBuilder: %s' % str(e))
 		return None
 
 	try:
 		modelb = ModelBuilder(0, None, None, cluster_conf_node)
+		if not modelb:
+			raise Exception, 'ModelBuilder returned None'
 	except Exception, e:
 		try:
-			luci_log.debug('An error occurred while trying to get modelb for conf \"%s\": %s' % (cluster_conf_node.toxml(), str(e)))
+			luci_log.debug_verbose('GMB1: An error occurred while trying to get modelb for conf \"%s\": %s' % (cluster_conf_node.toxml(), str(e)))
 		except:
-			pass
+			luci_log.debug_verbose('GMB1: ModelBuilder failed')
+		modelb = None
 
-	modelb.setIsVirtualized(isVirtualized)
+	if modelb:
+		modelb.setIsVirtualized(isVirtualized)
 	return modelb
+
+def getModelForCluster(self, clustername):
+	rc = getRicciAgent(self, clustername)
+	if not rc:
+		luci_log.debug_verbose('GMFC0: unable to find a ricci agent for %s' \
+			% clustername)
+		return None
+
+	try:
+		model = getModelBuilder(None, rc, rc.dom0())
+		if not model:
+			raise Exception, 'model is none'
+	except Exception, e:
+		luci_log.debug_verbose('GMFC1: unable to get model builder for %s: %s' \
+			 % (clustername, str(e)))
+		return None
+
+	return model
+
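+# A sketch of the intended calling convention for getModelForCluster: both
+# failure points (no ricci agent, model build failure) collapse into one
+# None check. The wrapper name and the 'GCCS0' log tag are hypothetical.
+def get_cluster_conf_sketch(self, clustername):
+	model = getModelForCluster(self, clustername)
+	if model is None:
+		return None
+	try:
+		conf = model.exportModelAsString()
+		if not conf:
+			raise Exception, 'model string is blank'
+	except Exception, e:
+		luci_log.debug_verbose('GCCS0: %s: %s' % (clustername, str(e)))
+		return None
+	return conf
+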
+def set_node_flag(self, cluname, agent, batchid, task, desc):
+	path = str(CLUSTER_FOLDER_PATH + cluname)
+	batch_id = str(batchid)
+	objname = str(agent + '____flag')
+	# compute objpath before the try so the except clause can reference it
+	objpath = str(path + '/' + objname)
+
+	try:
+		clusterfolder = self.restrictedTraverse(path)
+		clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+		flag = self.restrictedTraverse(objpath)
+		flag.manage_addProperty(BATCH_ID, batch_id, 'string')
+		flag.manage_addProperty(TASKTYPE, task, 'string')
+		flag.manage_addProperty(FLAG_DESC, desc, 'string')
+	except Exception, e:
+		errmsg = 'SNF0: error creating flag (%s,%s,%s)@%s: %s' \
+					% (batch_id, task, desc, objpath, str(e))
+		luci_log.debug_verbose(errmsg)
+		raise Exception, errmsg
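+
+# The canonical pairing for set_node_flag, as used by delResource and
+# addResource above (sketch only: rc, conf and clustername come from the
+# caller, and the RESOURCE_ADD task and description strings are examples):
+#
+#	batch_number, result = setClusterConf(rc, str(conf))
+#	if batch_number is not None and result is not None:
+#		try:
+#			set_node_flag(self, clustername, rc.hostname(),
+#				str(batch_number), RESOURCE_ADD,
+#				"Creating New Resource 'example'")
+#		except Exception, e:
+#			luci_log.debug_verbose('failed to set flags: %s' % str(e))
+#
+# isClusterBusy later finds this flag via objectItems('ManagedSystem'),
+# reads its BATCH_ID property back, and polls ricci until the batch is done.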
--- conga/luci/site/luci/Extensions/conga_constants.py	2006/10/24 16:36:23	1.19.2.1
+++ conga/luci/site/luci/Extensions/conga_constants.py	2006/11/16 19:34:53	1.19.2.2
@@ -42,6 +42,13 @@
 FENCEDEV_LIST="52"
 FENCEDEV_CONFIG="53"
 FENCEDEV="54"
+CLUSTER_DAEMON="55"
+
+#Cluster tasks
+CLUSTER_STOP = '1000'
+CLUSTER_START = '1001'
+CLUSTER_RESTART = '1002'
+CLUSTER_DELETE = '1003'
 
 #General tasks
 NODE_LEAVE_CLUSTER="100"
@@ -55,6 +62,13 @@
 MULTICAST="203"
 QUORUMD="204"
 
+PROPERTIES_TAB = 'tab'
+
+PROP_GENERAL_TAB = '1'
+PROP_FENCE_TAB = '2'
+PROP_MCAST_TAB = '3'
+PROP_QDISK_TAB = '4'
+
 PAGETYPE="pagetype"
 ACTIONTYPE="actiontype"
 TASKTYPE="tasktype"
@@ -66,6 +80,9 @@
 PATH_TO_PRIVKEY="/var/lib/luci/var/certs/privkey.pem"
 PATH_TO_CACERT="/var/lib/luci/var/certs/cacert.pem"
 
+# Zope DB paths
+CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
+
 #Node states
 NODE_ACTIVE="0"
 NODE_INACTIVE="1"
@@ -75,26 +92,36 @@
 NODE_UNKNOWN_STR="Unknown State"
 
 #cluster/node create batch task index
-INSTALL_TASK=1
-REBOOT_TASK=2
-SEND_CONF=3
-START_NODE=4
-RICCI_CONNECT_FAILURE=(-1000)
+INSTALL_TASK = 1
+DISABLE_SVC_TASK = 2
+REBOOT_TASK = 3
+SEND_CONF = 4
+ENABLE_SVC_TASK = 5
+START_NODE = 6
+RICCI_CONNECT_FAILURE = (-1000)
 
-RICCI_CONNECT_FAILURE_MSG="A problem was encountered connecting with this node.  "
+RICCI_CONNECT_FAILURE_MSG = "A problem was encountered connecting with this node.  "
 #cluster/node create error messages
-CLUNODE_CREATE_ERRORS = ["An unknown error occurred when creating this node: ", "A problem occurred when installing packages: ","A problem occurred when rebooting this node: ", "A problem occurred when propagating the configuration to this node: ", "A problem occurred when starting this node: "]
+CLUNODE_CREATE_ERRORS = [
+	"An unknown error occurred when creating this node: ",
+	"A problem occurred when installing packages: ",
+	"A problem occurred when disabling cluster services on this node: ",
+	"A problem occurred when rebooting this node: ",
+	"A problem occurred when propagating the configuration to this node: ",
+	"A problem occurred when enabling cluster services on this node: ",
+	"A problem occurred when starting this node: "
+]
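+
+# The ordering of CLUNODE_CREATE_ERRORS is load-bearing: isClusterBusy
+# reports a failed batch step as the negated task constant, and the
+# absolute value indexes directly into this list. A minimal sketch of the
+# lookup (the err_msg value is hypothetical):
+#
+#	creation_status = -(DISABLE_SVC_TASK)	# i.e. -2: step 2 failed
+#	err_msg = 'chkconfig returned non-zero'
+#	errormessage = CLUNODE_CREATE_ERRORS[-(creation_status)] + err_msg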
 
 #cluster/node create error status messages
-PRE_INSTALL="The install state is not yet complete"
-PRE_REBOOT="Installation complete, but reboot not yet complete"
-PRE_CFG="Reboot stage successful, but configuration for the cluster is not yet distributed"
-PRE_JOIN="Packages are installed and configuration has been distributed, but the node has not yet joined the cluster."
+PRE_INSTALL = "The install state is not yet complete"
+PRE_REBOOT = "Installation complete, but reboot not yet complete"
+PRE_CFG = "Reboot stage successful, but configuration for the cluster is not yet distributed"
+PRE_JOIN = "Packages are installed and configuration has been distributed, but the node has not yet joined the cluster."
 
 
-POSSIBLE_REBOOT_MESSAGE="This node is not currently responding and is probably<br/>rebooting as planned. This state should persist for 5 minutes or so..."
+POSSIBLE_REBOOT_MESSAGE = "This node is not currently responding and is probably<br/>rebooting as planned. This state should persist for 5 minutes or so..."
 
-REDIRECT_MSG="  You will be redirected in 5 seconds. Please fasten your safety restraints."
+REDIRECT_MSG = " You will be redirected in 5 seconds. Please fasten your safety restraints."
 
 
 # Homebase-specific constants
@@ -112,7 +139,7 @@
 CLUSTER_NODE_NOT_MEMBER = 0x02
 CLUSTER_NODE_ADDED = 0x04
 
-PLONE_ROOT='luci'
+PLONE_ROOT = 'luci'
 
 LUCI_DEBUG_MODE = 1
 LUCI_DEBUG_VERBOSITY = 2
--- conga/luci/site/luci/Extensions/homebase_adapters.py	2006/11/01 22:06:55	1.34.2.5
+++ conga/luci/site/luci/Extensions/homebase_adapters.py	2006/11/16 19:34:53	1.34.2.6
@@ -1,23 +1,20 @@
-import string
 import re
-import sys
 import os
 from AccessControl import getSecurityManager
-from ZPublisher import HTTPRequest
-import xml.dom
 import cgi
 
-from ricci_defines import *
+from conga_constants import PLONE_ROOT, CLUSTER_NODE_NEED_AUTH, \
+							HOMEBASE_ADD_CLUSTER, HOMEBASE_ADD_CLUSTER_INITIAL, \
+							HOMEBASE_ADD_SYSTEM, HOMEBASE_ADD_USER, \
+							HOMEBASE_DEL_SYSTEM, HOMEBASE_DEL_USER, HOMEBASE_PERMS
 from ricci_bridge import getClusterConf
-from ricci_communicator import RicciCommunicator
-from ricci_communicator import CERTS_DIR_PATH
+from ricci_communicator import RicciCommunicator, CERTS_DIR_PATH
 from clusterOS import resolveOSType
-from conga_constants import *
-from LuciSyslog import LuciSyslog, LuciSyslogError
+from LuciSyslog import LuciSyslog
 
 try:
 	luci_log = LuciSyslog()
-except LuciSyslogError, e:
+except:
 	pass
 
 def siteIsSetup(self):
@@ -27,8 +24,8 @@
 	except: pass
 	return False
 
-def strFilter(regex, replaceChar, str):
-	return re.sub(regex, replaceChar, str)
+def strFilter(regex, replaceChar, arg):
+	return re.sub(regex, replaceChar, arg)
 
 def validateDelSystem(self, request):
 	errors = list()
@@ -74,6 +71,8 @@
 
 	try:
 		user = self.portal_membership.getMemberById(userId)
+		if not user:
+			raise Exception, 'user %s does not exist' % userId
 	except:
 		return (False, {'errors': [ 'No such user: \"' + userId + '\"' ] })
 
@@ -138,8 +137,12 @@
 				rc = RicciCommunicator(host)
 				rc.unauth()
 				i['cur_auth'] = False
-		except:
-			pass
+		except Exception, e:
+			try:
+				luci_log.debug_verbose('unauth for %s failed: %s' \
+					% (i['host'], str(e)))
+			except:
+				pass
 
 def nodeAuth(cluster, host, passwd):
 	messages = list()
@@ -531,7 +534,7 @@
 						i[1].manage_setLocalRoles(userId, roles)
 						messages.append('Added permission for ' + userId + ' for cluster ' + i[0])
 				except:
-						errors.append('Failed to add permission for ' + userId + ' for cluster ' + i[0])
+					errors.append('Failed to add permission for ' + userId + ' for cluster ' + i[0])
 			else:
 				try:
 					if user.has_role('View', i[1]):
@@ -545,7 +548,7 @@
 
 						messages.append('Removed permission for ' + userId + ' for cluster ' + i[0])
 				except:
-						errors.append('Failed to remove permission for ' + userId + ' for cluster ' + i[0])
+					errors.append('Failed to remove permission for ' + userId + ' for cluster ' + i[0])
 
 	storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
 	if not '__SYSTEM' in request.form:
@@ -572,7 +575,7 @@
 						i[1].manage_setLocalRoles(userId, roles)
 						messages.append('Added permission for ' + userId + ' for system ' + i[0])
 				except:
-						errors.append('Failed to add permission for ' + userId + ' for system ' + i[0])
+					errors.append('Failed to add permission for ' + userId + ' for system ' + i[0])
 			else:
 				try:
 					if user.has_role('View', i[1]):
@@ -586,7 +589,7 @@
 
 						messages.append('Removed permission for ' + userId + ' for system ' + i[0])
 				except:
-						errors.append('Failed to remove permission for ' + userId + ' for system ' + i[0])
+					errors.append('Failed to remove permission for ' + userId + ' for system ' + i[0])
 
 	if len(errors) > 0:
 		returnCode = False
@@ -665,23 +668,25 @@
 ]
 
 def userAuthenticated(self):
-	if (isAdmin(self) or getSecurityManager().getUser().has_role('Authenticated', self.restrictedTraverse(PLONE_ROOT))):
-		return True
-
+	try:
+		if (isAdmin(self) or getSecurityManager().getUser().has_role('Authenticated', self.restrictedTraverse(PLONE_ROOT))):
+			return True
+	except Exception, e:
+		luci_log.debug_verbose('UA0: %s' % str(e)) 
 	return False
 
 def isAdmin(self):
 	try:
 		return getSecurityManager().getUser().has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
-	except:
-		pass
+	except Exception, e:
+		luci_log.debug_verbose('IA0: %s' % str(e)) 
 	return False
 
 def userIsAdmin(self, userId):
 	try:
 		return self.portal_membership.getMemberById(userId).has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
-	except:
-		pass
+	except Exception, e:
+		luci_log.debug_verbose('UIA0: %s: %s' % (userId, str(e)))
 	return False
 
 def homebaseControlPost(self, request):
@@ -698,15 +703,19 @@
 	if 'pagetype' in request.form:
 		pagetype = int(request.form['pagetype'])
 	else:
-		try: request.SESSION.set('checkRet', {})
-		except: pass
+		try:
+			request.SESSION.set('checkRet', {})
+		except:
+			pass
 		return homebasePortal(self, request, '.', '0')
 
 	try:
 		validatorFn = formValidators[pagetype - 1]
 	except:
-		try: request.SESSION.set('checkRet', {})
-		except: pass
+		try:
+			request.SESSION.set('checkRet', {})
+		except:
+			pass
 		return homebasePortal(self, request, '.', '0')
 
 	if validatorFn == validateAddClusterInitial or validatorFn == validateAddCluster:
@@ -913,71 +922,111 @@
 
 def getClusterSystems(self, clusterName):
 	if isAdmin(self):
-		return self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName + '/objectItems')('Folder')
+		try:
+			return self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName + '/objectItems')('Folder')
+		except Exception, e:
+			luci_log.debug_verbose('GCS0: %s: %s' % (clusterName, str(e)))
+			return None
 
 	try:
 		i = getSecurityManager().getUser()
 		if not i:
-			raise
-	except:
+			raise Exception, 'GCSMGU failed'
+	except Exception, e:
+		luci_log.debug_verbose('GCS1: %s: %s' % (clusterName, str(e)))
 		return None
 
-	csystems = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName + '/objectItems')('Folder')
-	if not csystems:
+	try:
+		csystems = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName + '/objectItems')('Folder')
+		if not csystems or len(csystems) < 1:
+			return None
+	except Exception, e:
+		luci_log.debug_verbose('GCS2: %s: %s' % (clusterName, str(e)))
 		return None
 
 	allowedCSystems = list()
 	for c in csystems:
-		if i.has_role('View', c[1]):
-			allowedCSystems.append(c)
-	return (c)
+		try:
+			if i.has_role('View', c[1]):
+				allowedCSystems.append(c)
+		except Exception, e:
+			luci_log.debug_verbose('GCS3: %s: %s: %s' \
+				% (clusterName, c[0], str(e)))
+
+	return allowedCSystems
 
 def getClusters(self):
 	if isAdmin(self):
-		return self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
+		try:
+			return self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
+		except Exception, e:
+			luci_log.debug_verbose('GC0: %s' % str(e))
+			return None
 	try:
 		i = getSecurityManager().getUser()
 		if not i:
-			raise
-	except:
+			raise Exception, 'GSMGU failed'
+	except Exception, e:
+		luci_log.debug_verbose('GC1: %s' % str(e))
 		return None
 
-	clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-	if not clusters:
+	try:
+		clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
+		if not clusters or len(clusters) < 1:
+			return None
+	except Exception, e:
+		luci_log.debug_verbose('GC2: %s' % str(e))
 		return None
 
 	allowedClusters = list()
 	for c in clusters:
-		if i.has_role('View', c[1]):
-			allowedClusters.append(c)
+		try:
+			if i.has_role('View', c[1]):
+				allowedClusters.append(c)
+		except Exception, e:
+			luci_log.debug_verbose('GC3: %s: %s' % (c[0], str(e)))
 
 	return allowedClusters
 
 def getStorage(self):
 	if isAdmin(self):
-		return self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
+		try:
+			return self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
+		except Exception, e:
+			luci_log.debug_verbose('GS0: %s' % str(e))
+			return None
+
 	try:
 		i = getSecurityManager().getUser()
 		if not i:
-			return None
-	except:
+			raise Exception, 'GSMGU failed'
+	except Exception, e:
+		luci_log.debug_verbose('GS1: %s' % str(e))
 		return None
 
-	storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
-	if not storage:
+	try:
+		storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
+		if not storage or len(storage) < 1:
+			return None
+	except Exception, e:
+		luci_log.debug_verbose('GS2: %s' % str(e))
 		return None
 
 	allowedStorage = list()
 	for s in storage:
-		if i.has_role('View', s[1]):
-			allowedStorage.append(s)
+		try:
+			if i.has_role('View', s[1]):
+				allowedStorage.append(s)
+		except Exception, e:
+			luci_log.debug_verbose('GS3: %s' % str(e))
 
 	return allowedStorage
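+
+# getClusterSystems, getClusters and getStorage above share one pattern:
+# admins get the raw folder listing, other users only the entries on which
+# they hold the 'View' role, with every Zope call wrapped so one bad entry
+# cannot abort the whole listing. A hypothetical consolidation of that
+# common core (shown for clarity; not wired into the callers):
+def filterViewable(user, items, logtag):
+	allowed = list()
+	for i in items:
+		try:
+			if user.has_role('View', i[1]):
+				allowed.append(i)
+		except Exception, e:
+			luci_log.debug_verbose('%s: %s: %s' % (logtag, i[0], str(e)))
+	return allowed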
 
 def createSystem(self, host, passwd):
 	try:
 		exists = self.restrictedTraverse(PLONE_ROOT +'/systems/storage/' + host)
-		return 'Storage system \"' + host + '\" is already managed.'
+		luci_log.debug_verbose('CS0: %s already exists' % host)
+		return 'Storage system %s is already managed' % host
 	except:
 		pass
 
@@ -986,49 +1035,52 @@
 		if rc is None:
 			raise Exception, 'unknown error'
 	except Exception, e:
+		luci_log.debug_verbose('CS1: %s: %s' % (host, str(e)))
 		return 'Unable to establish a connection to the ricci agent on %s: %s' \
 			% (host, str(e))
 
 	try:
 		if not rc.authed():
 			rc.auth(passwd)
-	except:
-		return 'Unable to communicate with the ricci agent on \"' + host + '\" for authentication'
+	except Exception, e:
+		luci_log.debug_verbose('CS2: %s: %s' % (host, str(e)))
+		return 'Unable to communicate with the ricci agent on %s for authentication' % host
 
 	try:
 		i = rc.authed()
-	except:
-		return 'Unable to authenticate to the ricci agent on \"' + host + '\"'
+	except Exception, e:
+		luci_log.debug_verbose('CS3 %s: %s' % (host, str(e)))
+		return 'Unable to authenticate to the ricci agent on %s' % host
 
 	if i != True:
-		return 'Authentication for storage system \"' + host + '\" failed'
-
-#	rhost = rc.system_name()
-#	if rhost and rhost != host and rhost[:9] != 'localhost' and rhost[:5] != '127.0':
-#		host = str(rhost)
+		return 'Authentication for storage system %s failed' % host
 
 	try:
-		exists = self.restrictedTraverse(PLONE_ROOT +'/systems/storage/' + host)
-		return 'Storage system \"' + host + '\" is already managed.'
+		exists = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + host)
+		luci_log.debug_verbose('CS4 %s already exists' % host)
+		return 'Storage system %s is already managed' % host
 	except:
 		pass
 
 	try:
 		ssystem = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/')
 	except Exception, e:
-		return 'Unable to create storage system %s: %s' % (host, str(e))
+		luci_log.debug_verbose('CS5 %s: %s' % (host, str(e)))
+		return 'Unable to create storage system %s' % host
 
 	try:
 		ssystem.manage_addFolder(host, '__luci__:system')
 		newSystem = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + host)
 	except Exception, e:
-		return 'Unable to create storage system %s: %s' % (host, str(e))
+		luci_log.debug_verbose('CS6 %s: %s' % (host, str(e)))
+		return 'Unable to create DB entry for storage system %s' % host
 
 	try:
 		newSystem.manage_acquiredPermissions([])
-		newSystem.manage_role('View', ['Access contents information','View'])
+		newSystem.manage_role('View', ['Access contents information', 'View'])
 	except Exception, e:
-		return 'Unable to set permissions on storage system %s: %s' % (host, str(e))
+		luci_log.debug_verbose('CS7 %s: %s' % (host, str(e)))
+		return 'Unable to set permissions on storage system %s' % host
 
 	return None
 
@@ -1036,26 +1088,27 @@
 	try:
 		sessionData = request.SESSION.get('checkRet')
 		nodeUnauth(sessionData['requestResults']['nodeList'])
-	except:
-		pass
+	except Exception, e:
+		luci_log.debug_verbose('AMC0: %s' % str(e))
 
 def manageCluster(self, clusterName, nodeList):
 	clusterName = str(clusterName)
-	luci_log.debug_verbose('manageCluster for %s' % clusterName)
 
 	try:
 		clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/')
 		if not clusters:
 			raise Exception, 'cannot find the cluster entry in the DB'
-	except:
+	except Exception, e:
 		nodeUnauth(nodeList)
-		return 'Unable to create cluster \"' + clusterName + '\": the cluster directory is missing.'
+		luci_log.debug_verbose('MC0: %s: %s' % (clusterName, str(e)))
+		return 'Unable to create cluster %s: the cluster directory is missing.' % clusterName
 
 	try:
 		newCluster = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName)
 		if newCluster:
 			nodeUnauth(nodeList)
-			return 'A cluster named \"' + clusterName + '\" is already managed by Luci'
+			luci_log.debug_verbose('MC1: cluster %s: already exists' % clusterName)
+			return 'A cluster named %s is already managed by Luci' % clusterName
 	except:
 		pass
 
@@ -1063,20 +1116,22 @@
 		clusters.manage_addFolder(clusterName, '__luci__:cluster')
 		newCluster = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName)
 		if not newCluster:
-			raise Exception, 'unable to find cluster folder for %s' % clusterName
+			raise Exception, 'unable to create the cluster DB entry for %s' % clusterName
 	except Exception, e:
 		nodeUnauth(nodeList)
+		luci_log.debug_verbose('MC2: %s: %s' % (clusterName, str(e)))
 		return 'Unable to create cluster %s: %s' % (clusterName, str(e))
 
 	try:
 		newCluster.manage_acquiredPermissions([])
-		newCluster.manage_role('View', ['Access Contents Information','View'])
+		newCluster.manage_role('View', ['Access Contents Information', 'View'])
 	except Exception, e:
+		luci_log.debug_verbose('MC3: %s: %s' % (clusterName, str(e)))
 		nodeUnauth(nodeList)
 		try:
 			clusters.manage_delObjects([clusterName])
-		except:
-			pass
+		except Exception, e:
+			luci_log.debug_verbose('MC4: %s: %s' % (clusterName, str(e)))
 		return 'Unable to set permissions on new cluster: %s: %s' % (clusterName, str(e))
 
 	try:
@@ -1084,14 +1139,14 @@
 		if not cluster_os:
 			raise KeyError, 'Cluster OS is blank'
 	except KeyError, e:
-		luci_log.debug_verbose('Warning adding cluster %s: %s' \
-			% (clusterName, str(e)))
+		luci_log.debug_verbose('MC5: %s: %s' % (clusterName, str(e)))
 		cluster_os = 'rhel5'
 
 	try:
 		newCluster.manage_addProperty('cluster_os', cluster_os, 'string')
-	except:
-		pass # we were unable to set the OS property string on this cluster
+	except Exception, e:
+		luci_log.debug_verbose('MC5: %s: %s: %s' \
+			% (clusterName, cluster_os, str(e)))
 
 	for i in nodeList:
 		#if 'ricci_host' in i:
@@ -1103,15 +1158,19 @@
 			newCluster.manage_addFolder(host, '__luci__:csystem:' + clusterName)
 			newSystem = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName + '/' + host)
 			if not newSystem:
-				raise Exception, 'unable to create cluster system DB entry'
+				raise Exception, 'unable to create cluster system DB entry for node %s' % host
 			newSystem.manage_acquiredPermissions([])
 			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
 		except Exception, e:
 			nodeUnauth(nodeList)
 			try:
 				clusters.manage_delObjects([clusterName])
-			except:
-				pass
+			except Exception, e:
+				luci_log.debug_verbose('MC6: %s: %s: %s' \
+					% (clusterName, host, str(e)))
+
+			luci_log.debug_verbose('MC7: %s: %s: %s' \
+				% (clusterName, host, str(e)))
 			return 'Unable to create cluster node %s for cluster %s: %s' \
 				% (host, clusterName, str(e))
 
@@ -1120,6 +1179,7 @@
 		if not ssystem:
 			raise Exception, 'The storage DB entry is missing'
 	except Exception, e:
+		luci_log.debug_verbose('MC8: %s: %s: %s' % (clusterName, host, str(e)))
 		return 'Error adding storage node %s: %s' % (host, str(e))
 
 	# Only add storage systems if the cluster and cluster node DB
@@ -1134,22 +1194,25 @@
 			# It's already there, as a storage system, no problem.
 			exists = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + host)
 			continue
-		except: pass
+		except:
+			pass
 
 		try:
 			ssystem.manage_addFolder(host, '__luci__:system')
 			newSystem = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + host)
 			newSystem.manage_acquiredPermissions([])
 			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except: pass
+		except Exception, e:
+			luci_log.debug_verbose('MC9: %s: %s: %s' % (clusterName, host, str(e)))
 
 def createClusterSystems(self, clusterName, nodeList):
 	try:
 		clusterObj = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName)
 		if not clusterObj:
 			raise Exception, 'cluster %s DB entry is missing' % clusterName
-	except:
+	except Exception, e:
 		nodeUnauth(nodeList)
+		luci_log.debug_verbose('CCS0: %s: %s' % (clusterName, str(e)))
 		return 'No cluster named \"' + clusterName + '\" is managed by Luci'
 
 	for i in nodeList:
@@ -1168,6 +1231,7 @@
 			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
 		except Exception, e:
 			nodeUnauth(nodeList)
+			luci_log.debug_verbose('CCS1: %s: %s: %s' % (clusterName, host, str(e)))
 			return 'Unable to create cluster node %s for cluster %s: %s' \
 				% (host, clusterName, str(e))
 
@@ -1176,8 +1240,7 @@
 		if not ssystem:
 			raise Exception, 'storage DB entry is missing'
 	except Exception, e:
-		luci_log.debug_verbose('Error: adding storage DB node for %s: %s' \
-			% (host, str(e)))
+		luci_log.debug_verbose('CCS2: %s: %s: %s' % (clusterName, host, str(e)))
 		return
 
 	# Only add storage systems if the cluster and cluster node DB
@@ -1192,14 +1255,16 @@
 			# It's already there, as a storage system, no problem.
 			exists = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + host)
 			continue
-		except: pass
+		except:
+			pass
 
 		try:
 			ssystem.manage_addFolder(host, '__luci__:system')
 			newSystem = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + host)
 			newSystem.manage_acquiredPermissions([])
 			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except: pass
+		except Exception, e:
+			luci_log.debug_verbose('CCS3: %s: %s' % (clusterName, host, str(e)))
 
 def delSystem(self, systemName):
 	try:
@@ -1207,6 +1272,7 @@
 		if not ssystem:
 			raise Exception, 'storage DB entry is missing'
 	except Exception, e:
+		luci_log.debug_verbose('delSystem0: %s: %s' % (systemName, str(e)))
 		return 'Unable to find storage system %s: %s' % (systemName, str(e))
 
 	try:
@@ -1216,27 +1282,33 @@
 	except Exception, e:
 		try:
 			ssystem.manage_delObjects([systemName])
-		except:
-			return 'Unable to delete the storage system \"' + systemName + '\"'
-		luci_log.debug_verbose('ricci error for %s: %s' % (systemName, str(e)))
+		except Exception, e:
+			luci_log.debug_verbose('delSystem1: %s: %s' % (systemName, str(e)))
+			return 'Unable to delete the storage system %s' % systemName
+		luci_log.debug_verbose('delSystem2: %s: %s' % (systemName, str(e)))
 		return
 
 	# Only unauthenticate if the system isn't a member of
 	# a managed cluster.
 	cluster_info = rc.cluster_info()
 	if not cluster_info[0]:
-		try: rc.unauth()
-		except: pass
+		try:
+			rc.unauth()
+		except:
+			pass
 	else:
 		try:
 			newSystem = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + cluster_info[0] + '/' + systemName)
 		except:
-			try: rc.unauth()
-			except: pass
+			try:
+				rc.unauth()
+			except:
+				pass
 
 	try:
 		ssystem.manage_delObjects([systemName])
 	except Exception, e:
+		luci_log.debug_verbose('delSystem3: %s: %s' % (systemName, str(e)))
 		return 'Unable to delete storage system %s: %s' \
 			% (systemName, str(e))
 
@@ -1244,9 +1316,10 @@
 	try:
 		clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/')
 		if not clusters:
-			raise
-	except:
-		return 'Unable to find cluster \"' + clusterName + '\"'
+			raise Exception, 'clusters DB entry is missing'
+	except Exception, e:
+		luci_log.debug_verbose('delCluster0: %s' % str(e))
+		return 'Unable to find cluster %s' % clusterName
 
 	err = delClusterSystems(self, clusterName)
 	if err:
@@ -1254,26 +1327,28 @@
 
 	try:
 		clusters.manage_delObjects([clusterName])
-	except:
-		return 'Unable to delete cluster \"' + clusterName + '\"'
+	except Exception, e:
+		luci_log.debug_verbose('delCluster1: %s' % str(e))
+		return 'Unable to delete cluster %s' % clusterName
 
 def delClusterSystem(self, cluster, systemName):
 	try:
 		if not self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + systemName):
 			raise
 	except:
+		# It's not a storage system, so unauthenticate.
 		try:
 			rc = RicciCommunicator(systemName)
 			rc.unauth()
 		except Exception, e:
-			luci_log.debug_verbose('ricci error for %s: %s' \
+			luci_log.debug_verbose('delClusterSystem0: ricci error for %s: %s' \
 				% (systemName, str(e)))
 
 	try:
 		cluster.manage_delObjects([systemName])
 	except Exception, e:
 		err_str = 'Error deleting cluster object %s: %s' % (systemName, str(e))
-		luci_log.debug_verbose(err_str)
+		luci_log.debug_verbose('delClusterSystem1: %s' % err_str)
 		return err_str
 
 def delClusterSystems(self, clusterName):
@@ -1285,7 +1360,7 @@
 	except Exception, e:
 		luci_log.debug_verbose('delClusterSystems0: error for %s: %s' \
 			% (clusterName, str(e)))
-		return 'Unable to find any systems for cluster \"' + clusterName + '\"'
+		return 'Unable to find any systems for cluster %s' % clusterName
 
 	errors = ''
 	for i in csystems:
@@ -1297,34 +1372,65 @@
 def getDefaultUser(self, request):
 	try:
 		user = request.form['userList']
-	except:
+	except KeyError, e:
 		try:
 			user = request['user']
 		except:
 			try:
-				user = self.portal_membership.listMembers()[0].getUserName()
-			except:
+				members = list()
+				members.extend(self.portal_membership.listMembers())
+				members.sort()
+				user = members[0].getUserName()
+			except Exception, e:
+				luci_log.debug_verbose('getDefaultUser0: %s' % str(e))
 				user = None
 
+	if not user:
+		luci_log.debug_verbose('getDefaultUser1: user is none')
 	return user
 
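The fallback above now copies and sorts the member list before taking the first entry, so the default user is deterministic across calls. The same idea as a minimal sketch (the portal object is hypothetical):

    members = list(portal.portal_membership.listMembers())
    members.sort()
    if members:
        user = members[0].getUserName()
    else:
        user = None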
 def getUserPerms(self):
 	perms = {}
-	for i in self.portal_membership.listMembers():
+
+	try:
+		members = list()
+		members.extend(self.portal_membership.listMembers())
+		if len(members) < 1:
+			raise Exception, 'no portal members exist'
+		members.sort()
+	except Exception, e:
+		luci_log.debug_verbose('getUserPerms0: %s' % str(e))
+		return {}
+
+	for i in members:
 		userName = i.getUserName()
 
 		perms[userName] = {}
 		perms[userName]['cluster'] = {}
 		perms[userName]['storage'] = {}
 
-		clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-		storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
+		try:
+			clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
+			storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
+		except Exception, e:
+			luci_log.debug_verbose('getUserPerms1: user %s: %s' % (userName, str(e)))
+			continue
 
 		for c in clusters:
-			perms[userName]['cluster'][c[0]] = i.has_role('View', c[1])
-
+			try:
+				perms[userName]['cluster'][c[0]] = i.has_role('View', c[1])
+			except Exception, e:
+				luci_log.debug_verbose('getUserPerms2: user %s, obj %s: %s' \
+					% (userName, c[0], str(e)))
+				continue
+
 		for s in storage:
-			perms[userName]['storage'][s[0]] = i.has_role('View', s[1])
+			try:
+				perms[userName]['storage'][s[0]] = i.has_role('View', s[1])
+			except Exception, e:
+				luci_log.debug_verbose('getUserPerms3: user %s, obj %s: %s' \
+					% (userName, s[0], str(e)))
+				continue
 
 	return perms
 
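For reference, the perms mapping assembled above takes roughly this shape (a sketch; the user and system names are invented):

    perms = {
        'admin': {
            'cluster': { 'clu1': True },    # 'admin' holds the View role on clu1
            'storage': { 'node1': False },  # but not on storage system node1
        },
    }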
@@ -1397,39 +1503,52 @@
 def getClusterNode(self, nodename, clustername):
 	try:
 		cluster_node = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + str(clustername) + '/' + str(nodename))
+		if not cluster_node:
+			raise Exception, 'cluster node is none'
 		return cluster_node
-	except:
+	except Exception, e:
+		luci_log.debug_verbose('getClusterNode0: %s %s: %s' \
+			% (nodename, clustername, str(e)))
 		return None
 
 def getStorageNode(self, nodename):
 	try:
 		storage_node = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + str(nodename))
+		if not storage_node:
+			raise Exception, 'storage node is none'
 		return storage_node
-	except:
+	except Exception, e:
+		luci_log.debug_verbose('getStorageNode0: %s: %s' % (nodename, str(e)))
 		return None
 
 def testNodeFlag(node, flag_mask):
 	try:
 		flags = node.getProperty('flags')
+		if flags is None:
+			return False
 		return flags & flag_mask != 0
-	except:
-		pass
+	except Exception, e:
+		luci_log.debug_verbose('testNodeFlag0: %s' % str(e))
 	return False
 
 def setNodeFlag(node, flag_mask):
 	try:
 		flags = node.getProperty('flags')
+		if flags is None:
+			flags = 0
 		node.manage_changeProperties({ 'flags': flags | flag_mask })
 	except:
 		try:
 			node.manage_addProperty('flags', flag_mask, 'int')
-		except:
-			pass
+		except Exception, e:
+			luci_log.debug_verbose('setNodeFlag0: %s' % str(e))
 
 def delNodeFlag(node, flag_mask):
 	try:
 		flags = node.getProperty('flags')
+		if flags is None:
+			return
 		if flags & flag_mask != 0:
 			node.manage_changeProperties({ 'flags': flags & ~flag_mask })
-	except:
-		pass
+	except Exception, e:
+		luci_log.debug_verbose('delNodeFlag0: %s' % str(e))
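The three flag helpers above treat the node's 'flags' property as a plain integer bitmask. A minimal sketch of the same pattern (the mask value is hypothetical):

    CLUSTER_NODE_NEED_AUTH = 0x01                    # hypothetical mask

    flags = 0
    flags |= CLUSTER_NODE_NEED_AUTH                  # what setNodeFlag stores
    is_set = (flags & CLUSTER_NODE_NEED_AUTH) != 0   # what testNodeFlag checks
    flags &= ~CLUSTER_NODE_NEED_AUTH                 # what delNodeFlag clears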
--- conga/luci/site/luci/Extensions/ricci_bridge.py	2006/11/01 22:06:55	1.30.2.6
+++ conga/luci/site/luci/Extensions/ricci_bridge.py	2006/11/16 19:34:53	1.30.2.7
@@ -18,7 +18,7 @@
 		return False
 
 	try:
-		batchid = batch.getAttribute('batch_id')
+		dummy = batch.getAttribute('batch_id')
 		result = batch.getAttribute('status')
 	except:
 		return False
@@ -28,7 +28,8 @@
 
 	return False
 
-def addClusterNodeBatch(cluster_name,
+def addClusterNodeBatch(os_str,
+						cluster_name,
 						install_base,
 						install_services,
 						install_shared_storage,
@@ -65,13 +66,31 @@
 		
 	need_reboot = install_base or install_services or install_shared_storage or install_LVS
 	if need_reboot:
+		batch += '<module name="service">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="disable">'
+		batch += '<var mutable="false" name="services" type="list_xml">'
+		if os_str == 'rhel4':
+			batch += '<service name="ccsd"/>'
+		batch += '<service name="cman"/>'
+		batch += '</var>'
+		batch += '</function_call>'
+		batch += '</request>'
+		batch += '</module>'
+
 		batch += '<module name="reboot">'
 		batch += '<request API_version="1.0">'
 		batch += '<function_call name="reboot_now"/>'
 		batch += '</request>'
 		batch += '</module>'
 	else:
-		# need placeholder instead of reboot
+		# need 2 placeholders instead of disable services / reboot
+		batch += '<module name="rpm">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="install"/>'
+		batch += '</request>'
+		batch += '</module>'
+
 		batch += '<module name="rpm">'
 		batch += '<request API_version="1.0">'
 		batch += '<function_call name="install"/>'
@@ -95,6 +114,26 @@
 	batch += '</request>'
 	batch += '</module>'
 
+	if need_reboot:
+		batch += '<module name="service">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="enable">'
+		batch += '<var mutable="false" name="services" type="list_xml">'
+		if os_str == 'rhel4':
+			batch += '<service name="ccsd"/>'
+		batch += '<service name="cman"/>'
+		batch += '</var>'
+		batch += '</function_call>'
+		batch += '</request>'
+		batch += '</module>'
+	else:
+		# placeholder instead of enable services
+		batch += '<module name="rpm">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="install"/>'
+		batch += '</request>'
+		batch += '</module>'
+
 	batch += '<module name="cluster">'
 	batch += '<request API_version="1.0">'
 	batch += '<function_call name="start_node"/>'
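The concatenated batch strings are easier to read reflowed. The service-disable module assembled above looks like this (a sketch using Python's adjacent-string concatenation; indentation added for readability only). The no-op rpm/install modules in the else branches appear to exist so the batch always carries the same number of <module> elements, keeping index-based progress reporting aligned; that is inferred from the placeholder comments, not from documentation:

    batch = (
        '<module name="service">'
            '<request API_version="1.0">'
                '<function_call name="disable">'
                    '<var mutable="false" name="services" type="list_xml">'
                        '<service name="ccsd"/>'    # emitted only when os_str == 'rhel4'
                        '<service name="cman"/>'
                    '</var>'
                '</function_call>'
            '</request>'
        '</module>'
    )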
@@ -114,13 +153,6 @@
 						install_LVS,
 						upgrade_rpms):
 
-	if os_str == 'rhel5':
-		cluster_version = '5'
-	elif os_str == 'rhel4':
-		cluster_version = '4'
-	else:
-		cluster_version = 'unknown'
-
 	batch = '<?xml version="1.0" ?>'
 	batch += '<batch>'
 	batch += '<module name="rpm">'
@@ -149,13 +181,31 @@
 
 	need_reboot = install_base or install_services or install_shared_storage or install_LVS
 	if need_reboot:
+		batch += '<module name="service">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="disable">'
+		batch += '<var mutable="false" name="services" type="list_xml">'
+		if os_str == 'rhel4':
+			batch += '<service name="ccsd"/>'
+		batch += '<service name="cman"/>'
+		batch += '</var>'
+		batch += '</function_call>'
+		batch += '</request>'
+		batch += '</module>'
+
 		batch += '<module name="reboot">'
 		batch += '<request API_version="1.0">'
 		batch += '<function_call name="reboot_now"/>'
 		batch += '</request>'
 		batch += '</module>'
 	else:
-		# need placeholder instead of reboot
+		# need 2 placeholders instead of disable services / reboot
+		batch += '<module name="rpm">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="install"/>'
+		batch += '</request>'
+		batch += '</module>'
+
 		batch += '<module name="rpm">'
 		batch += '<request API_version="1.0">'
 		batch += '<function_call name="install"/>'
@@ -195,6 +245,26 @@
 	batch += '</request>'
 	batch += '</module>'
 
+	if need_reboot:
+		batch += '<module name="service">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="enable">'
+		batch += '<var mutable="false" name="services" type="list_xml">'
+		if os_str == 'rhel4':
+			batch += '<service name="ccsd"/>'
+		batch += '<service name="cman"/>'
+		batch += '</var>'
+		batch += '</function_call>'
+		batch += '</request>'
+		batch += '</module>'
+	else:
+		# placeholder instead of enable services
+		batch += '<module name="rpm">'
+		batch += '<request API_version="1.0">'
+		batch += '<function_call name="install"/>'
+		batch += '</request>'
+		batch += '</module>'
+
 	batch += '<module name="cluster">'
 	batch += '<request API_version="1.0">'
 	batch += '<function_call name="start_node">'
@@ -276,7 +346,7 @@
 	return doc
 
 def getClusterStatusBatch(rc):
-	batch_str ='<module name="cluster"><request API_version="1.0"><function_call name="status"/></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="status"/></request></module>'
 	ricci_xml = rc.batch_run(batch_str, async=False)
 
 	if not ricci_xml or not ricci_xml.firstChild:
@@ -308,7 +378,7 @@
 def getNodeLogs(rc):
 	errstr = 'log not accessible'
 
-	batch_str = '<module name="log"><request sequence="1254" API_version="1.0"><function_call name="get"><var mutable="false" name="age" type="int" value="18000"/><var mutable="false" name="tags" type="list_str"></var></function_call></request></module>'
+	batch_str = '<module name="log"><request API_version="1.0"><function_call name="get"><var mutable="false" name="age" type="int" value="18000"/><var mutable="false" name="tags" type="list_str"></var></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str, async=False)
 	if not ricci_xml:
@@ -318,8 +388,8 @@
 		if not log_entries or len(log_entries) < 1:
 			raise Exception, 'no log data is available.'
 	except Exception, e:
-		'Error retrieving log data from %s: %s' \
-			% (rc.hostname(), str(e))
+		luci_log.debug_verbose('Error retrieving log data from %s: %s' \
+			% (rc.hostname(), str(e)))
 		return None
 	time_now = time()
 	entry = ''
@@ -357,7 +427,7 @@
 	return entry
 
 def nodeReboot(rc):
-	batch_str = '<module name="reboot"><request sequence="111" API_version="1.0"><function_call name="reboot_now"/></request></module>'
+	batch_str = '<module name="reboot"><request API_version="1.0"><function_call name="reboot_now"/></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
@@ -371,13 +441,13 @@
 	if purge == False:
 		purge_conf = 'false'
 
-	batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="stop_node"><var mutable="false" name="cluster_shutdown" type="boolean" value="' + cshutdown + '"/><var mutable="false" name="purge_conf" type="boolean" value="' + purge_conf + '"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="stop_node"><var mutable="false" name="cluster_shutdown" type="boolean" value="' + cshutdown + '"/><var mutable="false" name="purge_conf" type="boolean" value="' + purge_conf + '"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
 def nodeFence(rc, nodename):
-	batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="fence_node"><var mutable="false" name="nodename" type="string" value="' + nodename + '"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="fence_node"><var mutable="false" name="nodename" type="string" value="' + nodename + '"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
@@ -387,28 +457,48 @@
 	if cluster_startup == True:
 		cstartup = 'true'
 
-	batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="start_node"><var mutable="false" name="cluster_startup" type="boolean" value="' + cstartup + '"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="start_node"><var mutable="false" name="cluster_startup" type="boolean" value="' + cstartup + '"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
 def startService(rc, servicename, preferrednode=None):
 	if preferrednode != None:
-		batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/><var mutable="false" name="nodename" type="string" value=\"' + preferrednode + '\" /></function_call></request></module>'
+		batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/><var mutable="false" name="nodename" type="string" value=\"' + preferrednode + '\" /></function_call></request></module>'
 	else:
-		batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+		batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
+def updateServices(rc, enable_list, disable_list):
+	batch = ''
+
+	if enable_list and len(enable_list) > 0:
+		batch += '<module name="service"><request API_version="1.0"><function_call name="enable"><var mutable="false" name="services" type="list_xml">'
+		for i in enable_list:
+			batch += '<service name="%s"/>' % str(i)
+		batch += '</var></function_call></request></module>'
+
+	if disable_list and len(disable_list) > 0:
+		batch += '<module name="service"><request API_version="1.0"><function_call name="disable"><var mutable="false" name="services" type="list_xml">'
+		for i in disable_list:
+			batch += '<service name="%s"/>' % str(i)
+		batch += '</var></function_call></request></module>'
+
+	if batch == '':
+		return None
+	ricci_xml = rc.batch_run(batch)
+	return batchAttemptResult(ricci_xml)
+
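A usage sketch for the new updateServices() helper (the host and service names are examples; it returns None when both lists are empty):

    rc = RicciCommunicator('node1.example.com')
    res = updateServices(rc, ['cman'], ['ccsd'])    # enable cman, disable ccsd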
 def restartService(rc, servicename):
-	batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="restart_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="restart_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
 
 def stopService(rc, servicename):
-	batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="stop_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+	batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="stop_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
 
 	ricci_xml = rc.batch_run(batch_str)
 	return batchAttemptResult(ricci_xml)
@@ -463,7 +553,6 @@
 		return None
 
 	resultlist = list()
-	svc_node = None
 	for node in varnode.childNodes:
 		if node.nodeName == 'service':
 			svchash = {}
--- conga/luci/site/luci/Extensions/ricci_communicator.py	2006/11/01 22:06:55	1.9.2.3
+++ conga/luci/site/luci/Extensions/ricci_communicator.py	2006/11/16 19:34:53	1.9.2.4
@@ -1,10 +1,8 @@
-from time import *
-from socket import *
+from socket import socket, ssl, AF_INET, SOCK_STREAM
 import xml
 import xml.dom
 from xml.dom import minidom
 from LuciSyslog import LuciSyslog
-from HelperFunctions import access_to_host_allowed
 
 CERTS_DIR_PATH = '/var/lib/luci/var/certs/'
 
@@ -36,7 +34,7 @@
             raise RicciError, 'Error connecting to %s:%d: unknown error' \
                     % (self.__hostname, self.__port)
 
-        luci_log.debug_verbose('Connected to %s:%d' \
+        luci_log.debug_verbose('RC:init0: Connected to %s:%d' \
             % (self.__hostname, self.__port))
         try:
             self.ss = ssl(sock, self.__privkey_file, self.__cert_file)
@@ -53,7 +51,7 @@
         # receive ricci header
         hello = self.__receive()
         try:
-            luci_log.debug_verbose('Received header from %s: \"%s\"' \
+            luci_log.debug_verbose('RC:init1: Received header from %s: \"%s\"' \
                 % (self.__hostname, hello.toxml()))
         except:
             pass
@@ -69,34 +67,34 @@
     
     
     def hostname(self):
-        luci_log.debug_verbose('[auth %d] reported hostname = %s' \
+        luci_log.debug_verbose('RC:hostname: [auth %d] reported hostname = %s' \
             % (self.__authed, self.__hostname))
         return self.__hostname
     def authed(self):
-        luci_log.debug_verbose('reported authed = %d for %s' \
+        luci_log.debug_verbose('RC:authed: reported authed = %d for %s' \
             % (self.__authed, self.__hostname))
         return self.__authed
     def system_name(self):
-        luci_log.debug_verbose('[auth %d] reported system_name = %s for %s' \
+        luci_log.debug_verbose('RC:system_name: [auth %d] reported system_name = %s for %s' \
             % (self.__authed, self.__reported_hostname, self.__hostname))
         return self.__reported_hostname
     def cluster_info(self):
-        luci_log.debug_verbose('[auth %d] reported cluster_info = (%s,%s) for %s' \
+        luci_log.debug_verbose('RC:cluster_info: [auth %d] reported cluster_info = (%s,%s) for %s' \
             % (self.__authed, self.__cluname, self.__clualias, self.__hostname))
         return (self.__cluname, self.__clualias)
     def os(self):
-        luci_log.debug_verbose('[auth %d] reported system_name = %s for %s' \
+        luci_log.debug_verbose('RC:os: [auth %d] reported system_name = %s for %s' \
             % (self.__authed, self.__os, self.__hostname))
         return self.__os
     def dom0(self):
-        luci_log.debug_verbose('[auth %d] reported system_name = %s for %s' \
+        luci_log.debug_verbose('RC:dom0: [auth %d] reported system_name = %s for %s' \
             % (self.__authed, self.__dom0, self.__hostname))
         return self.__dom0
     
     
     def auth(self, password):
         if self.authed():
-            luci_log.debug_verbose('already authenticated to %s' \
+            luci_log.debug_verbose('RC:auth0: already authenticated to %s' \
                 % self.__hostname)
             return True
         
@@ -113,7 +111,8 @@
         resp = self.__receive()
         self.__authed = resp.firstChild.getAttribute('authenticated') == 'true'
 
-        luci_log.debug_verbose('auth call returning %d' % self.__authed)
+        luci_log.debug_verbose('RC:auth1: auth call returning %d' \
+			% self.__authed)
         return self.__authed
 
 
@@ -126,26 +125,26 @@
         self.__send(doc)
         resp = self.__receive()
 
-        luci_log.debug_verbose('trying to unauthenticate to %s' \
+        luci_log.debug_verbose('RC:unauth0: trying to unauthenticate to %s' \
             % self.__hostname)
 
         try:
             ret = resp.firstChild.getAttribute('success')
-            luci_log.debug_verbose('unauthenticate returned %s for %s' \
+            luci_log.debug_verbose('RC:unauth1: unauthenticate returned %s for %s' \
                 % (ret, self.__hostname))
             if ret != '0':
                 raise Exception, 'Invalid response'
         except:
             errstr = 'Error unauthenticating to host %s: %s' \
                         % (self.__hostname, str(ret))
-            luci_log.debug(errstr)
+            luci_log.debug_verbose('RC:unauth2:' + errstr)
             raise RicciError, errstr
         return True
 
 
     def process_batch(self, batch_xml, async=False):
         try:
-            luci_log.debug_verbose('auth=%d to %s for batch %s [async=%d]' \
+            luci_log.debug_verbose('RC:PB0: [auth=%d] to %s for batch %s [async=%d]' \
                 % (self.__authed, self.__hostname, batch_xml.toxml(), async))
         except:
             pass
@@ -171,7 +170,7 @@
         try:
             self.__send(doc)
         except Exception, e:
-            luci_log.debug('Error sending XML \"%s\" to host %s' \
+            luci_log.debug_verbose('RC:PB1: Error sending XML \"%s\" to host %s' \
                 % (doc.toxml(), self.__hostname))
             raise RicciError, 'Error sending XML to host %s: %s' \
                     % (self.__hostname, str(e))
@@ -181,13 +180,13 @@
         # receive response
         doc = self.__receive()
         try:
-            luci_log.debug_verbose('received from %s XML \"%s\"' \
+            luci_log.debug_verbose('RC:PB2: received from %s XML \"%s\"' \
                 % (self.__hostname, doc.toxml()))
         except:
             pass
  
         if doc.firstChild.getAttribute('success') != '0':
-            luci_log.debug_verbose('batch command failed')
+            luci_log.debug_verbose('RC:PB3: batch command failed')
             raise RicciError, 'The last ricci command to host %s failed' \
                     % self.__hostname
         
@@ -197,7 +196,7 @@
                 if node.nodeName == 'batch':
                     batch_node = node.cloneNode(True)
         if batch_node == None:
-            luci_log.debug_verbose('batch node missing <batch/>')
+            luci_log.debug_verbose('RC:PB4: batch node missing <batch/>')
             raise RicciError, 'missing <batch/> in ricci\'s response from %s' \
                     % self.__hostname
 
@@ -206,23 +205,23 @@
     def batch_run(self, batch_str, async=True):
         try:
             batch_xml_str = '<?xml version="1.0" ?><batch>' + batch_str + '</batch>'
-            luci_log.debug_verbose('attempting batch \"%s\" for host %s' \
+            luci_log.debug_verbose('RC:BRun0: attempting batch \"%s\" for host %s' \
                 % (batch_xml_str, self.__hostname))
             batch_xml = minidom.parseString(batch_xml_str).firstChild
         except Exception, e:
-            luci_log.debug('received invalid batch XML for %s: \"%s\"' \
-                % (self.__hostname, batch_xml_str))
+            luci_log.debug_verbose('RC:BRun1: received invalid batch XML for %s: \"%s\": %s' \
+                % (self.__hostname, batch_xml_str, str(e)))
             raise RicciError, 'batch XML is malformed'
 
         try:
             ricci_xml = self.process_batch(batch_xml, async)
             try:
-                luci_log.debug_verbose('received XML \"%s\" from host %s in response to batch command.' \
+                luci_log.debug_verbose('RC:BRun2: received XML \"%s\" from host %s in response to batch command.' \
                     % (ricci_xml.toxml(), self.__hostname))
             except:
                 pass
         except:
-            luci_log.debug('An error occurred while trying to process the batch job: %s' % batch_xml_str)
+            luci_log.debug_verbose('RC:BRun3: An error occurred while trying to process the batch job: \"%s\"' % batch_xml_str)
             return None
 
         doc = minidom.Document()
@@ -230,7 +229,7 @@
         return doc
 
     def batch_report(self, batch_id):
-        luci_log.debug_verbose('[auth=%d] asking for batchid# %s for host %s' \
+        luci_log.debug_verbose('RC:BRep0: [auth=%d] asking for batchid# %s for host %s' \
             % (self.__authed, batch_id, self.__hostname))
 
         if not self.authed():
@@ -273,7 +272,7 @@
             try:
                 pos = self.ss.write(buff)
             except Exception, e:
-                luci_log.debug('Error sending XML \"%s\" to %s: %s' \
+                luci_log.debug_verbose('RC:send0: Error sending XML \"%s\" to %s: %s' \
                     % (buff, self.__hostname, str(e)))
                 raise RicciError, 'write error while sending XML to host %s' \
                         % self.__hostname
@@ -282,7 +281,7 @@
                         % self.__hostname
             buff = buff[pos:]
         try:
-            luci_log.debug_verbose('Sent XML \"%s\" to host %s' \
+            luci_log.debug_verbose('RC:send1: Sent XML \"%s\" to host %s' \
                 % (xml_doc.toxml(), self.__hostname))
         except:
             pass
@@ -304,19 +303,19 @@
                     # we haven't received all of the XML data yet.
                     continue
         except Exception, e:
-            luci_log.debug('Error reading data from %s: %s' \
+            luci_log.debug_verbose('RC:recv0: Error reading data from %s: %s' \
                 % (self.__hostname, str(e)))
             raise RicciError, 'Error reading data from host %s' % self.__hostname
         except:
             raise RicciError, 'Error reading data from host %s' % self.__hostname
-        luci_log.debug_verbose('Received XML \"%s\" from host %s' \
+        luci_log.debug_verbose('RC:recv1: Received XML \"%s\" from host %s' \
             % (xml_in, self.__hostname))
 
         try:
             if doc == None:
                 doc = minidom.parseString(xml_in)
         except Exception, e:
-            luci_log.debug('Error parsing XML \"%s" from %s' \
+            luci_log.debug_verbose('RC:recv2: Error parsing XML \"%s\": %s' \
                 % (xml_in, str(e)))
             raise RicciError, 'Error parsing XML from host %s: %s' \
                     % (self.__hostname, str(e))
@@ -328,7 +327,7 @@
         
         try:        
             if doc.firstChild.nodeName != 'ricci':
-                luci_log.debug('Expecting \"ricci\" got XML \"%s\" from %s' %
+                luci_log.debug_verbose('RC:recv3: Expecting \"ricci\" got XML \"%s\" from %s' %
                     (xml_in, self.__hostname))
                 raise Exception, 'Expecting first XML child node to be \"ricci\"'
         except Exception, e:
@@ -346,7 +345,7 @@
     try:
         return RicciCommunicator(hostname)
     except Exception, e:
-        luci_log.debug('Error creating a ricci connection to %s: %s' \
+        luci_log.debug_verbose('RC:GRC0: Error creating a ricci connection to %s: %s' \
             % (hostname, str(e)))
         return None
     pass
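Taken together, the communicator API exercised throughout these diffs looks roughly like this (a sketch; the hostname, password and batch body are placeholders):

    rc = RicciCommunicator('node1.example.com')
    if not rc.authed():
        rc.auth('secret')
    cluname, alias = rc.cluster_info()
    res = rc.batch_run('<module name="cluster">'
                       '<request API_version="1.0">'
                       '<function_call name="status"/>'
                       '</request></module>', async=False)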
@@ -396,7 +395,7 @@
 def batch_status(batch_xml):
     if batch_xml.nodeName != 'batch':
         try:
-            luci_log.debug('Expecting an XML batch node. Got \"%s\"' \
+            luci_log.debug_verbose('RC:BS0: Expecting an XML batch node. Got \"%s\"' \
                 % batch_xml.toxml())
         except:
             pass
@@ -416,10 +415,10 @@
                     last = last + 1
                     last = last - 2 * last
     try:
-        luci_log.debug_verbose('Returning (%d, %d) for batch_status(\"%s\")' \
+        luci_log.debug_verbose('RC:BS1: Returning (%d, %d) for batch_status(\"%s\")' \
             % (last, total, batch_xml.toxml()))
     except:
-        luci_log.debug_verbose('Returning last, total')
+        luci_log.debug_verbose('RC:BS2: Returning last, total')
 
     return (last, total)
 
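Reading the arithmetic above, a failed module appears to be reported by negating its position (last is incremented, then flipped negative), so a caller might interpret the pair like this; the encoding is inferred from the code, not documented:

    last, total = batch_status(batch_xml)
    if last < 0:
        failed_module = -last    # negative last appears to flag a failure
    elif last == total:
        pass                     # batch complete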
@@ -445,7 +444,7 @@
 # * error_msg:  error message
 def extract_module_status(batch_xml, module_num=1):
     if batch_xml.nodeName != 'batch':
-        luci_log.debug('Expecting \"batch\" got \"%s\"' % batch_xml.toxml())
+        luci_log.debug_verbose('RC:EMS0: Expecting \"batch\" got \"%s\"' % batch_xml.toxml())
         raise RicciError, 'Invalid XML node; expecting a batch node'
 
     c = 0
--- conga/make/version.in	2006/11/01 23:11:25	1.21.2.4
+++ conga/make/version.in	2006/11/16 19:34:53	1.21.2.5
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=23
+RELEASE=24
--- conga/ricci/modules/log/LogParser.cpp	2006/10/23 21:13:21	1.6.2.1
+++ conga/ricci/modules/log/LogParser.cpp	2006/11/16 19:34:53	1.6.2.2
@@ -165,7 +165,8 @@
 
 set<String>&
 get_files(const String& path_,
-	  set<String>& files)
+	  set<String>& files,
+          time_t age_time)
 {
   String path = utils::rstrip(utils::strip(path_), "/");
   if (path.empty() || path.find_first_of(" ; & $ ` ? > < ' \" ; | \\ * \n \t") != path.npos)
@@ -178,11 +179,12 @@
     //    throw String("unable to stat ") + path;
     return files;
   if (S_ISREG(st.st_mode)) {
-    files.insert(path);
+    if (st.st_mtime >= age_time)
+      files.insert(path);
     
     // get rotated logs
     for (int i=0; i<25; i++)
-      get_files(path + "." + utils::to_string(i), files);
+      get_files(path + "." + utils::to_string(i), files, age_time);
     
     return files;
   } else if (S_ISDIR(st.st_mode))
@@ -204,7 +206,7 @@
       if (kid_path == "." || kid_path == "..")
 	continue;
       kid_path = path + "/" + kid_path;
-      get_files(kid_path, files);
+      get_files(kid_path, files, age_time);
     }
   } catch ( ... ) {
     closedir(d);
@@ -366,6 +368,13 @@
 		       const list<String>& paths)
 {
   set<LogEntry> ret;
+  time_t age_time = time(NULL);
+
+  if ((long long) age_time - age < 0)
+    age_time = 0;
+  else
+    age_time -= age;
+
   
   // set of requested tags
   set<String> req_tags(domains.begin(), domains.end());
@@ -375,10 +384,10 @@
   for (list<String>::const_iterator iter = paths.begin();
        iter != paths.end();
        iter++)
-    get_files(*iter, files);
+    get_files(*iter, files, age_time);
   if (files.empty()) {
-    get_files("/var/log/messages", files);
-    get_files("/var/log/syslog", files);
+    get_files("/var/log/messages", files, age_time);
+    get_files("/var/log/syslog", files, age_time);
   }
   
   // process log files
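The new age_time parameter lets get_files() skip log files whose mtime predates the requested window before they are parsed at all; the luci side asks for age=18000 (five hours) in getNodeLogs() above. The clamp guards against a cutoff before the epoch when age exceeds the current time. The same computation as a short Python sketch:

    import time

    def age_cutoff(age):
        # clamp to zero so an oversized 'age' means 'no lower bound'
        now = int(time.time())
        if now - age < 0:
            return 0
        return now - age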



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2006-11-17  0:59 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2006-11-17  0:59 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2006-11-17 00:59:33

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	version bump to 0.8-25

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.4&r2=1.18.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.5&r2=1.45.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.5&r2=1.21.2.6

--- conga/clustermon.spec.in.in	2006/11/16 19:34:52	1.18.2.4
+++ conga/clustermon.spec.in.in	2006/11/17 00:59:33	1.18.2.5
@@ -195,6 +195,9 @@
 
 %changelog
 
+* Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-25
+- Fix build issues (D-BUS detection)
+
 * Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-24
  - version bump
 
--- conga/conga.spec.in.in	2006/11/16 19:34:52	1.45.2.5
+++ conga/conga.spec.in.in	2006/11/17 00:59:33	1.45.2.6
@@ -283,6 +283,9 @@
 
 %changelog
 
+* Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-25
+- Fix build issues (D-BUS detection)
+
 * Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-24
 - Fixed bz215039 (Cannot create a new resource via luci web app)
 - Fixed bz215034 (Cannot change daemon properties via luci web app)
--- conga/make/version.in	2006/11/16 19:34:53	1.21.2.5
+++ conga/make/version.in	2006/11/17 00:59:33	1.21.2.6
@@ -1,2 +1,2 @@
 VERSION=0.8
-RELEASE=24
+RELEASE=25



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2006-11-17 20:46 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2006-11-17 20:46 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2006-11-17 20:46:03

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	changelog template for 0.8-26

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.5&r2=1.18.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.6&r2=1.45.2.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.6&r2=1.21.2.7

--- conga/clustermon.spec.in.in	2006/11/17 00:59:33	1.18.2.5
+++ conga/clustermon.spec.in.in	2006/11/17 20:46:02	1.18.2.6
@@ -195,6 +195,16 @@
 
 %changelog
 
+* day month date 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
+
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+
+- 
+
+
+
 * Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-25
 - Fix build issues (D-BUS detection)
 
--- conga/conga.spec.in.in	2006/11/17 00:59:33	1.45.2.6
+++ conga/conga.spec.in.in	2006/11/17 20:46:02	1.45.2.7
@@ -283,6 +283,16 @@
 
 %changelog
 
+* day month date 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
+
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+
+- 
+
+
+
 * Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-25
 - Fix build issues (D-BUS detection)
 
--- conga/make/version.in	2006/11/17 00:59:33	1.21.2.6
+++ conga/make/version.in	2006/11/17 20:46:02	1.21.2.7
@@ -1,2 +1,6 @@
 VERSION=0.8
-RELEASE=25
+RELEASE=26_UNRELEASED
+# Remove "_UNRELEASED" at release time.
+# Put release num at the beginning, 
+# so that after it gets released, it has 
+# seniority over the UNRELEASED one



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2006-12-12 13:53 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2006-12-12 13:53 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2006-12-12 13:53:53

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	0.8-26.el5 release

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.6&r2=1.18.2.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.11&r2=1.45.2.12
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.7&r2=1.21.2.8

--- conga/clustermon.spec.in.in	2006/11/17 20:46:02	1.18.2.6
+++ conga/clustermon.spec.in.in	2006/12/12 13:53:53	1.18.2.7
@@ -195,15 +195,8 @@
 
 %changelog
 
-* day month date 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
-
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-
-- 
-
-
+* Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
+- Version bump
 
 * Thu Nov 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-25
 - Fix build issues (D-BUS detection)
--- conga/conga.spec.in.in	2006/12/12 13:26:23	1.45.2.11
+++ conga/conga.spec.in.in	2006/12/12 13:53:53	1.45.2.12
@@ -284,12 +284,7 @@
 
 %changelog
 
-* Fri Dec 08 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
-
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-
+* Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
 - luci storage: fix bytes->TB conversion
 - Improved bz201394: luci doesn't verify ricci's SSL cert against trusted list (part 1 - backend)
 - Fixed bz217387 (luci - HTML error shows up in display (minor))
--- conga/make/version.in	2006/11/17 20:46:02	1.21.2.7
+++ conga/make/version.in	2006/12/12 13:53:53	1.21.2.8
@@ -1,5 +1,5 @@
 VERSION=0.8
-RELEASE=26_UNRELEASED
+RELEASE=26
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2006-12-13 19:21 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2006-12-13 19:21 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2006-12-13 19:21:52

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 
	ricci/modules/cluster/clumon/src/daemon: Monitor.cpp 

Log message:
	- Skeleton for 0.8-27 build
	- Improved bz218941: Conga/luci - cannot add node to cluster via luci web app

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.7&r2=1.18.2.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.13&r2=1.45.2.14
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.8&r2=1.21.2.9
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/clumon/src/daemon/Monitor.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.10.2.2&r2=1.10.2.3

--- conga/clustermon.spec.in.in	2006/12/12 13:53:53	1.18.2.7
+++ conga/clustermon.spec.in.in	2006/12/13 19:21:52	1.18.2.8
@@ -192,9 +192,17 @@
 ###  changelog ###
 
 
-
 %changelog
 
+* Wed Dec 13 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-27
+
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+
+- Improved bz218941: Conga/luci - cannot add node to cluster via luci web app
+
+
 * Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
 - Version bump
 
--- conga/conga.spec.in.in	2006/12/12 13:58:23	1.45.2.13
+++ conga/conga.spec.in.in	2006/12/13 19:21:52	1.45.2.14
@@ -284,6 +284,15 @@
 
 %changelog
 
+* Wed Dec 13 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-27
+
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+
+- 
+
+
 * Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
 - luci storage: fix bytes->TB conversion
 - Improved bz201394: luci doesn't verify ricci's SSL cert against trusted list (part 1 - backend)
--- conga/make/version.in	2006/12/12 13:53:53	1.21.2.8
+++ conga/make/version.in	2006/12/13 19:21:52	1.21.2.9
@@ -1,5 +1,5 @@
 VERSION=0.8
-RELEASE=26
+RELEASE=27_UNRELEASED
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 
--- conga/ricci/modules/cluster/clumon/src/daemon/Monitor.cpp	2006/10/24 14:31:41	1.10.2.2
+++ conga/ricci/modules/cluster/clumon/src/daemon/Monitor.cpp	2006/12/13 19:21:52	1.10.2.3
@@ -269,7 +269,9 @@
   msg = generateXML(msg_xml);
   
   // return nodes - nodename
-  nodes.erase(find(nodes.begin(), nodes.end(), nodename));
+  vector<String>::iterator iter = find(nodes.begin(), nodes.end(), nodename);
+  if (iter != nodes.end())
+    nodes.erase(iter);
   return nodes;
 }
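The Monitor.cpp change guards against find() returning end(): calling erase(end()) on a vector is undefined behavior, so the old code could crash clumon whenever nodename was absent from the list. The guarded removal, as a Python sketch:

    def drop_node(nodes, nodename):
        # remove nodename only if present, mirroring the guarded erase()
        if nodename in nodes:
            nodes.remove(nodename)
        return nodes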
 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-01-17 14:32 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2007-01-17 14:32 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2007-01-17 14:32:04

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/site/luci/var: Data.fs 
	make           : version.in 

Log message:
	0.8-27 commits

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.8&r2=1.18.2.9
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.19&r2=1.45.2.20
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/var/Data.fs.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.15.2.5&r2=1.15.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.9&r2=1.21.2.10

--- conga/clustermon.spec.in.in	2006/12/13 19:21:52	1.18.2.8
+++ conga/clustermon.spec.in.in	2007/01/17 14:31:37	1.18.2.9
@@ -194,15 +194,9 @@
 
 %changelog
 
-* Wed Dec 13 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-27
-
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-
+* Wed Jan 10 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-27
 - Improved bz218941: Conga/luci - cannot add node to cluster via luci web app
 
-
 * Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-26
 - Version bump
 
--- conga/conga.spec.in.in	2007/01/10 23:51:59	1.45.2.19
+++ conga/conga.spec.in.in	2007/01/17 14:31:38	1.45.2.20
@@ -284,12 +284,7 @@
 
 %changelog
 
-* Fri Dec 22 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-27
-
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-
+* Wed Jan 10 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-27
 - Fixed bz214989 (Package download not working for Conga during cluster creation)
 - Fixed bz219522 (Restarting a cluster via luci web gui = UnboundLocalError - local variable 'e' referenced before assignment)
 - Fixed bz201394 (luci doesn't verify ricci's SSL cert against trusted list)
Binary files /cvs/cluster/conga/luci/site/luci/var/Data.fs	2006/12/12 13:50:27	1.15.2.5 and /cvs/cluster/conga/luci/site/luci/var/Data.fs	2007/01/17 14:31:38	1.15.2.6 differ
rcsdiff: /cvs/cluster/conga/luci/site/luci/var/Data.fs: diff failed
--- conga/make/version.in	2006/12/13 19:21:52	1.21.2.9
+++ conga/make/version.in	2007/01/17 14:32:04	1.21.2.10
@@ -1,5 +1,5 @@
 VERSION=0.8
-RELEASE=27_UNRELEASED
+RELEASE=27
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-01-17 14:57 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2007-01-17 14:57 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2007-01-17 14:57:03

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	0.8-28 specfile updates

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.9&r2=1.18.2.10
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.20&r2=1.45.2.21
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.10&r2=1.21.2.11

--- conga/clustermon.spec.in.in	2007/01/17 14:31:37	1.18.2.9
+++ conga/clustermon.spec.in.in	2007/01/17 14:57:03	1.18.2.10
@@ -194,6 +194,9 @@
 
 %changelog
 
+* Wed Jan 17 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-28
+- Version bump
+
 * Wed Jan 10 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-27
 - Improved bz218941: Conga/luci - cannot add node to cluster via luci web app
 
--- conga/conga.spec.in.in	2007/01/17 14:31:38	1.45.2.20
+++ conga/conga.spec.in.in	2007/01/17 14:57:03	1.45.2.21
@@ -284,6 +284,11 @@
 
 %changelog
 
+* Wed Jan 17 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-28
+- Updated docs
+- Fixed bz222223 (Deactivate LV before removal)
+- Resolves: bz222223
+
 * Wed Jan 10 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-27
 - Fixed bz214989 (Package download not working for Conga during cluster creation)
 - Fixed bz219522 (Restarting a cluster via luci web gui = UnboundLocalError - local variable 'e' referenced before assignment)
--- conga/make/version.in	2007/01/17 14:32:04	1.21.2.10
+++ conga/make/version.in	2007/01/17 14:57:03	1.21.2.11
@@ -1,5 +1,5 @@
 VERSION=0.8
-RELEASE=27
+RELEASE=28
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-01-17 16:36 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2007-01-17 16:36 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2007-01-17 16:36:51

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	0.8-29 release

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.10&r2=1.18.2.11
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.21&r2=1.45.2.22
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.11&r2=1.21.2.12

--- conga/clustermon.spec.in.in	2007/01/17 14:57:03	1.18.2.10
+++ conga/clustermon.spec.in.in	2007/01/17 16:36:51	1.18.2.11
@@ -194,6 +194,9 @@
 
 %changelog
 
+* Wed Jan 17 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-29
+- Version bump
+
 * Wed Jan 17 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-28
 - Version bump
 
--- conga/conga.spec.in.in	2007/01/17 14:57:03	1.45.2.21
+++ conga/conga.spec.in.in	2007/01/17 16:36:51	1.45.2.22
@@ -284,6 +284,9 @@
 
 %changelog
 
+* Wed Jan 17 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-29
+- Remove test accounts from Data.fs
+
 * Wed Jan 17 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-28
 - Updated docs
 - Fixed bz222223 (Deactivate LV before removal)
--- conga/make/version.in	2007/01/17 14:57:03	1.21.2.11
+++ conga/make/version.in	2007/01/17 16:36:51	1.21.2.12
@@ -1,5 +1,5 @@
 VERSION=0.8
-RELEASE=28
+RELEASE=29
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-01-23 22:34 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2007-01-23 22:34 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	kupcevic at sourceware.org	2007-01-23 22:34:37

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/site/luci/var: Data.fs 
	make           : version.in 

Log message:
	- Fixed bz212445 (release blocker: prevent management page access) by uploading new Data.fs
	- Release 0.8-30

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.11&r2=1.18.2.12
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.23&r2=1.45.2.24
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/var/Data.fs.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.15.2.8&r2=1.15.2.9
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.12&r2=1.21.2.13

--- conga/clustermon.spec.in.in	2007/01/17 16:36:51	1.18.2.11
+++ conga/clustermon.spec.in.in	2007/01/23 22:34:28	1.18.2.12
@@ -194,6 +194,9 @@
 
 %changelog
 
+* Tue Jan 23 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-30
+- Version bump
+
 * Wed Jan 17 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-29
 - Version bump
 
--- conga/conga.spec.in.in	2007/01/17 16:38:17	1.45.2.23
+++ conga/conga.spec.in.in	2007/01/23 22:34:28	1.45.2.24
@@ -284,6 +284,10 @@
 
 %changelog
 
+* Tue Jan 23 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-30
+- Fixed bz212445 (release blocker: prevent management page access)
+- Resolves: bz212445
+
 * Wed Jan 17 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-29
 - Remove test accounts from Data.fs
 - Related: bz222223
Binary files /cvs/cluster/conga/luci/site/luci/var/Data.fs	2007/01/17 16:34:03	1.15.2.8 and /cvs/cluster/conga/luci/site/luci/var/Data.fs	2007/01/23 22:34:28	1.15.2.9 differ
rcsdiff: /cvs/cluster/conga/luci/site/luci/var/Data.fs: diff failed
--- conga/make/version.in	2007/01/17 16:36:51	1.21.2.12
+++ conga/make/version.in	2007/01/23 22:34:37	1.21.2.13
@@ -1,5 +1,5 @@
 VERSION=0.8
-RELEASE=29
+RELEASE=30
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-02-05 12:12 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2007-02-05 12:12 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	kupcevic at sourceware.org	2007-02-05 12:12:33

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	0.9.1-3 release

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.25&r2=1.25.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.67.2.1&r2=1.67.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.28&r2=1.28.2.1

--- conga/clustermon.spec.in.in	2006/12/13 19:14:53	1.25
+++ conga/clustermon.spec.in.in	2007/02/05 12:12:33	1.25.2.1
@@ -195,16 +195,13 @@
 
 %changelog
 
-* Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
-
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+* Sun Feb 04 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-3
+- Fixed bz205522 (Web based Cluster Administration Interface)
+- Related: bz205522
 
+* Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
 - Improved bz218941: Conga/luci - cannot add node to cluster via luci web app
 
-
-
 * Fri Nov 17 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-1
 - version bump
 
--- conga/conga.spec.in.in	2007/02/02 18:22:41	1.67.2.1
+++ conga/conga.spec.in.in	2007/02/05 12:12:33	1.67.2.2
@@ -285,14 +285,14 @@
 ###  changelog ###
 
 
-%changelog
 
-* Fri Dec 22 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
+%changelog
 
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+* Sun Feb 04 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-3
+- Fixed bz205522 (Web based Cluster Administration Interface)
+- Resolves: bz205522
 
+* Fri Dec 22 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
 - luci storage: fix bytes->TB conversion
 - Fixed bz201394 (luci doesn't verify ricci's SSL cert against trusted list)
 - Fixed bz217387 (luci - HTML error shows up in display (minor))
--- conga/make/version.in	2006/11/17 20:39:42	1.28
+++ conga/make/version.in	2007/02/05 12:12:33	1.28.2.1
@@ -1,5 +1,5 @@
 VERSION=0.9.1
-RELEASE=2_UNRELEASED
+RELEASE=3
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-02-05 20:08 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-02-05 20:08 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2007-02-05 20:08:29

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci           : load_site.py pack.py 
	luci/utils     : luci_admin luci_cleanup luci_manage 
	ricci/docs     : cluster_api.html modules.html rpm_api.html 
	                 service_api.html 

Log message:
	docs update

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.25&r2=1.26
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.67&r2=1.68
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/load_site.py.diff?cvsroot=cluster&r1=1.15&r2=1.16
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/pack.py.diff?cvsroot=cluster&r1=1.5&r2=1.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/utils/luci_admin.diff?cvsroot=cluster&r1=1.52&r2=1.53
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/utils/luci_cleanup.diff?cvsroot=cluster&r1=1.5&r2=1.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/utils/luci_manage.diff?cvsroot=cluster&r1=1.2&r2=1.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/docs/cluster_api.html.diff?cvsroot=cluster&r1=1.4&r2=1.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/docs/modules.html.diff?cvsroot=cluster&r1=1.4&r2=1.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/docs/rpm_api.html.diff?cvsroot=cluster&r1=1.2&r2=1.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/docs/service_api.html.diff?cvsroot=cluster&r1=1.2&r2=1.3

--- conga/clustermon.spec.in.in	2006/12/13 19:14:53	1.25
+++ conga/clustermon.spec.in.in	2007/02/05 20:08:28	1.26
@@ -1,7 +1,7 @@
 ###############################################################################
 ###############################################################################
 ##
-##  Copyright (C) 2006 Red Hat, Inc.  All rights reserved.
+##  Copyright (C) 2006-2007 Red Hat, Inc.  All rights reserved.
 ##
 ##  This copyrighted material is made available to anyone wishing to use,
 ##  modify, copy, or redistribute it subject to the terms and conditions
--- conga/conga.spec.in.in	2007/01/23 22:32:29	1.67
+++ conga/conga.spec.in.in	2007/02/05 20:08:28	1.68
@@ -1,7 +1,7 @@
 ###############################################################################
 ###############################################################################
 ##
-##  Copyright (C) 2006 Red Hat, Inc.  All rights reserved.
+##  Copyright (C) 2006-2007 Red Hat, Inc.  All rights reserved.
 ##
 ##  This copyrighted material is made available to anyone wishing to use,
 ##  modify, copy, or redistribute it subject to the terms and conditions
--- conga/luci/load_site.py	2006/11/02 00:46:49	1.15
+++ conga/luci/load_site.py	2007/02/05 20:08:28	1.16
@@ -3,6 +3,7 @@
 ##############################################################################
 #
 # Copyright (c) 2001 Zope Corporation and Contributors. All Rights Reserved.
+# Copyright (C) 2006-2007 Red Hat, Inc.
 #
 # This software is subject to the provisions of the Zope Public License,
 # Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
--- conga/luci/pack.py	2006/11/02 00:46:49	1.5
+++ conga/luci/pack.py	2007/02/05 20:08:28	1.6
@@ -1,5 +1,7 @@
 #!/usr/bin/python
 
+# Copyright (C) 2006-2007 Red Hat, Inc.
+
 import os, sys, string
 
 sys.path.extend((
--- conga/luci/utils/luci_admin	2007/01/18 03:02:38	1.52
+++ conga/luci/utils/luci_admin	2007/02/05 20:08:28	1.53
@@ -1,5 +1,7 @@
 #!/usr/bin/python
 
+# Copyright (C) 2006-2007 Red Hat, Inc.
+
 import sys, os, stat, select, string, pwd
 from sys import stderr, argv
 import types
--- conga/luci/utils/luci_cleanup	2007/02/02 19:29:35	1.5
+++ conga/luci/utils/luci_cleanup	2007/02/05 20:08:28	1.6
@@ -1,5 +1,7 @@
 #!/usr/bin/python
 
+# Copyright (C) 2006-2007 Red Hat, Inc.
+
 import sys, os, pwd
 import types
 
--- conga/luci/utils/luci_manage	2007/02/02 19:29:35	1.2
+++ conga/luci/utils/luci_manage	2007/02/05 20:08:28	1.3
@@ -1,5 +1,7 @@
 #!/usr/bin/python
 
+# Copyright (C) 2006-2007 Red Hat, Inc.
+
 import sys, os, pwd
 import types
 
--- conga/ricci/docs/cluster_api.html	2006/10/05 17:38:01	1.4
+++ conga/ricci/docs/cluster_api.html	2007/02/05 20:08:28	1.5
@@ -8,7 +8,7 @@
 	<META NAME="CHANGED" CONTENT="20060620;15340700">
 </HEAD>
 <BODY LANG="en-US" DIR="LTR">
-<P>Cluster module manages Redhat Cluster Suite. 
+<P>Cluster module manages Red Hat Cluster Suite. 
 </P>
 <P>Module name: "cluster" 
 </P>
--- conga/ricci/docs/modules.html	2006/06/05 19:54:40	1.4
+++ conga/ricci/docs/modules.html	2007/02/05 20:08:28	1.5
@@ -14,7 +14,7 @@
 <P>Management Modules:</P>
 <UL>
 	<LI><P><A HREF="cluster_api.html">Cluster Module</A> ??? Manages
-	Redhat Cluster Suite</P>
+	Red Hat Cluster Suite</P>
 	<LI><P><A HREF="rpm_api.html">Rpm Module</A> ??? Manages rpm
 	packages. Allows retrieval of currently installed rpms, querying
 	repositories, and installation/upgrade of rpms using repositories
@@ -35,4 +35,4 @@
 <P><BR><BR>
 </P>
 </BODY>
-</HTML>
\ No newline at end of file
+</HTML>
--- conga/ricci/docs/rpm_api.html	2006/10/12 19:13:11	1.2
+++ conga/ricci/docs/rpm_api.html	2007/02/05 20:08:28	1.3
@@ -36,10 +36,10 @@
 installed; and upgraded, if already installed. 
 </P>
 <P>There are a couple of predefined rpm sets: <BR>- "Cluster Base"
-- base infrastructure of Redhat Cluster Suite (currently ccs, cman,
+- base infrastructure of Red Hat Cluster Suite (currently ccs, cman,
 dlm, fence, and respective kernel-... rpms) <BR>- "Cluster Base -
-Gulm" - base infrastructure of Redhat Cluster Suite using Gulm lock
-manager (currently ccs, gulm, fence and respective kernel-... rpms)
+Gulm" - base infrastructure of Red Hat Cluster Suite using GULM lock
+manager (currently ccs, gulm, and respective kernel-... rpms)
 <BR>- "Cluster Service Manager" - (currently rgmanager, magma,
 magma-plugins) <BR>- "Clustered Storage" - shared storage
 (currently GFS, lvm2-cluster and respective kernel-... rpms) <BR>-
--- conga/ricci/docs/service_api.html	2007/02/05 19:52:44	1.2
+++ conga/ricci/docs/service_api.html	2007/02/05 20:08:28	1.3
@@ -30,9 +30,9 @@
 running=&quot;true/false&quot;/&gt;<BR>"enabled" - enabled on
 boot; "running" - currently running.</P>
 <P>There are a couple of predefined service sets: <BR>- "Cluster
-Base" - base infrastructure of Redhat Cluster Suite (currently
+Base" - base infrastructure of Red Hat Cluster Suite (currently
 ccsd, cman, fenced) <BR>- "Cluster Base - Gulm" - base
-infrastructure of Redhat Cluster Suite using Gulm lock manager
+infrastructure of Red Hat Cluster Suite using GULM lock manager
 (currently ccsd, lock_gulmd) <BR>- "Cluster Service
 Manager" - (currently rgmanager) <BR>- "Clustered Storage" -
 shared storage (currently clvmd, gfs)<BR>- "Linux Virtual Server"



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-02-05 22:01 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2007-02-05 22:01 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	kupcevic at sourceware.org	2007-02-05 22:01:23

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	0.9.1-4 release

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.25.2.1&r2=1.25.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.67.2.2&r2=1.67.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.28.2.1&r2=1.28.2.2

--- conga/clustermon.spec.in.in	2007/02/05 12:12:33	1.25.2.1
+++ conga/clustermon.spec.in.in	2007/02/05 22:01:23	1.25.2.2
@@ -1,7 +1,7 @@
 ###############################################################################
 ###############################################################################
 ##
-##  Copyright (C) 2006 Red Hat, Inc.  All rights reserved.
+##  Copyright (C) 2006-2007 Red Hat, Inc.  All rights reserved.
 ##
 ##  This copyrighted material is made available to anyone wishing to use,
 ##  modify, copy, or redistribute it subject to the terms and conditions
@@ -195,10 +195,13 @@
 
 %changelog
 
-* Sun Feb 04 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-3
+* Mon Feb 05 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-4
 - Fixed bz205522 (Web based Cluster Administration Interface)
 - Related: bz205522
 
+* Sun Feb 04 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-3
+- Version bump
+
 * Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
 - Improved bz218941: Conga/luci - cannot add node to cluster via luci web app
 
--- conga/conga.spec.in.in	2007/02/05 12:12:33	1.67.2.2
+++ conga/conga.spec.in.in	2007/02/05 22:01:23	1.67.2.3
@@ -1,7 +1,7 @@
 ###############################################################################
 ###############################################################################
 ##
-##  Copyright (C) 2006 Red Hat, Inc.  All rights reserved.
+##  Copyright (C) 2006-2007 Red Hat, Inc.  All rights reserved.
 ##
 ##  This copyrighted material is made available to anyone wishing to use,
 ##  modify, copy, or redistribute it subject to the terms and conditions
@@ -288,6 +288,9 @@
 
 %changelog
 
+* Mon Feb 05 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-4
+- Related: bz205522
+
 * Sun Feb 04 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-3
 - Fixed bz205522 (Web based Cluster Administration Interface)
 - Resolves: bz205522
--- conga/make/version.in	2007/02/05 12:12:33	1.28.2.1
+++ conga/make/version.in	2007/02/05 22:01:23	1.28.2.2
@@ -1,5 +1,5 @@
 VERSION=0.9.1
-RELEASE=3
+RELEASE=4
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-02-07  1:36 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2007-02-07  1:36 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	kupcevic at sourceware.org	2007-02-07 01:36:14

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	Bring (some) order into release versions:
	RHEL4 -> ver 0.9.1-xx (already)
	RHEL5 -> ver 0.9.2-xx (from next build)
	HEAD  -> ver 0.9.3-xx (this commit)
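
[Editorial note, not part of the archived commit: the branch-to-version
mapping above is carried entirely by make/version.in, which each CVS
branch sets independently. A minimal sanity-check sketch follows; the
mapping table is taken from the log message, while the script itself
and its name are assumptions.]

# check_branch_version.py -- hypothetical helper, not shipped with conga.
# Compares VERSION from make/version.in against the branch scheme above.
import re
import sys

BRANCH_VERSIONS = {'RHEL4': '0.9.1', 'RHEL5': '0.9.2', 'HEAD': '0.9.3'}

def read_version(path='make/version.in'):
    for line in open(path):
        m = re.match(r'VERSION=(\S+)', line)
        if m:
            return m.group(1)
    raise ValueError('no VERSION line in %s' % path)

branch = sys.argv[1]
expected = BRANCH_VERSIONS.get(branch)
version = read_version()
if version != expected:
    sys.exit('%s carries %s, expected %s' % (branch, version, expected))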

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.26&r2=1.27
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.68&r2=1.69
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&r1=1.28&r2=1.29

--- conga/clustermon.spec.in.in	2007/02/05 20:08:28	1.26
+++ conga/clustermon.spec.in.in	2007/02/07 01:36:14	1.27
@@ -195,7 +195,8 @@
 
 %changelog
 
-* Tue Dec 12 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
+
+* Tue Feb 06 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.3-1
 
 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
@@ -204,7 +205,6 @@
 - Improved bz218941: Conga/luci - cannot add node to cluster via luci web app
 
 
-
 * Fri Nov 17 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-1
 - version bump
 
--- conga/conga.spec.in.in	2007/02/05 20:08:28	1.68
+++ conga/conga.spec.in.in	2007/02/07 01:36:14	1.69
@@ -284,7 +284,7 @@
 
 %changelog
 
-* Fri Dec 22 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
+* Tue Feb 06 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.3-1
 
 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
--- conga/make/version.in	2006/11/17 20:39:42	1.28
+++ conga/make/version.in	2007/02/07 01:36:14	1.29
@@ -1,5 +1,5 @@
-VERSION=0.9.1
-RELEASE=2_UNRELEASED
+VERSION=0.9.3
+RELEASE=1_UNRELEASED
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-03-20 20:52 kupcevic
  0 siblings, 0 replies; 46+ messages in thread
From: kupcevic @ 2007-03-20 20:52 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	kupcevic at sourceware.org	2007-03-20 20:52:53

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	- Changelog updates
	- version 0.9.1-6

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.25.2.2&r2=1.25.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.67.2.5&r2=1.67.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.28.2.2&r2=1.28.2.3

--- conga/clustermon.spec.in.in	2007/02/05 22:01:23	1.25.2.2
+++ conga/clustermon.spec.in.in	2007/03/20 20:52:52	1.25.2.3
@@ -195,6 +195,11 @@
 
 %changelog
 
+* Tue Mar 20 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-6
+- Do not fail on i18n machines
+- Fixed bz225747 (Create/delete cluster - then access disk on node = Generic error on host: cluster tools: cman_tool errored)
+- Resolves: bz225747
+
 * Mon Feb 05 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-4
 - Fixed bz205522 (Web based Cluster Administration Interface)
 - Related: bz205522
--- conga/conga.spec.in.in	2007/03/20 17:38:15	1.67.2.5
+++ conga/conga.spec.in.in	2007/03/20 20:52:52	1.67.2.6
@@ -286,12 +286,15 @@
 
 
 
+
 %changelog
+
 * Tue Mar 20 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-6
+- Do not fail on i18n machines
 - Fixed bz225747 (Create/delete cluster - then access disk on node = Generic error on host: cluster tools: cman_tool errored)
-- Related: bz205522
+- Resolves: bz225747
 
-* Fri Feb 16 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-5
+* Tue Mar 20 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-5
 - Fixed bz228637 (security alert - passwords sent back from server as input value)
 - Fixed bz225558 (conga does not add fence_xvmd to cluster.conf)
 - Fixed bz215014 (luci failover domain forms are missing/empty)
@@ -303,6 +306,9 @@
 - Fixed bz225164 (Conga allows creation/rename of clusters with name greater than 15 characters)
 - Fixed bz227723 (Entering bad password when creating a new cluster = UnboundLocalError: local variable 'e' referenced before assignment)
 - Fixed bz223162 (Error trying to create a new fence device for a cluster node)
+ - Resolves: bz228637, bz225558, bz215014, bz225588, bz225206
+ - Resolves: bz213083, bz218964, bz222051, bz225164, bz227723
+ - Resolves: bz223162
 
 * Mon Feb 05 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-4
 - Related: bz205522
--- conga/make/version.in	2007/02/05 22:01:23	1.28.2.2
+++ conga/make/version.in	2007/03/20 20:52:53	1.28.2.3
@@ -1,5 +1,5 @@
 VERSION=0.9.1
-RELEASE=4
+RELEASE=6
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-04-11 19:23 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-04-11 19:23 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-04-11 20:23:52

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	- changelog update
	- version bump

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.13&r2=1.18.2.14
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.31&r2=1.45.2.32
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.14&r2=1.21.2.15

--- conga/clustermon.spec.in.in	2007/03/30 15:12:52	1.18.2.13
+++ conga/clustermon.spec.in.in	2007/04/11 19:23:51	1.18.2.14
@@ -194,11 +194,15 @@
 
 %changelog
 
-* Wed Mar 28 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.2-4
+* Wed Apr 11 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.2-5
 - Do not fail on i18n machines
 - Fixed bz225747 (Create/delete cluster - then access disk on node = Generic error on host: cluster tools: cman_tool errored)
 - Fixed bz230454 (Unable to configure a virtual service)
-- Resolves: bz225747, bz230454
+- Resolves: bz225747
+- Related: bz229027, bz230447, bz230452, bz230454, bz230457,
+- Related: bz230469, bz228637, bz233326, bz225747, bz225206,
+- Related: bz230461, bz225164, bz227758
+
 
 * Tue Jan 23 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-30
 - Version bump
--- conga/conga.spec.in.in	2007/04/10 19:51:25	1.45.2.31
+++ conga/conga.spec.in.in	2007/04/11 19:23:51	1.45.2.32
@@ -282,7 +282,7 @@
 
 %changelog
 
-* Wed Mar 28 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.9.2-4
+* Wed Apr 11 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.2-5
 - Do not fail on i18n machines
 - Fixed bz225747 (Create/delete cluster - then access disk on node = Generic error on host: cluster tools: cman_tool errored)
 - Fixed bz233326 (CVE-2007-0240 Conga includes version of Zope that is vulnerable to a XSS attack)
@@ -298,8 +298,8 @@
 - Fixed bz225164 (Conga allows creation/rename of clusters with name greater than 15 characters)
 - Fixed bz227758 (Entering bad password when creating a new cluster = UnboundLocalError: local variable 'e' referenced before assignment)
 - Resolves: bz229027, bz230447, bz230452, bz230454, bz230457,
-- Resolves: bz230469, bz228637, bz233326, bz225747, bz225206,
-- Resolves: bz230461, bz225164, bz227758
+- Resolves: bz230469, bz228637, bz233326, bz225206, bz227758,
+- Resolves: bz230461, bz225164
 
 * Tue Jan 23 2007 Stanko Kupcevic <kupcevic@redhat.com> 0.8-30
 - Fixed bz212445 (release blocker: prevent management page access)
--- conga/make/version.in	2007/03/30 15:12:52	1.21.2.14
+++ conga/make/version.in	2007/04/11 19:23:51	1.21.2.15
@@ -1,5 +1,5 @@
 VERSION=0.9.2
-RELEASE=4
+RELEASE=5
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-04-11 20:15 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-04-11 20:15 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-04-11 21:15:31

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	version bump

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.15&r2=1.18.2.16
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.33&r2=1.45.2.34
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.15&r2=1.21.2.16

--- conga/clustermon.spec.in.in	2007/04/11 20:12:43	1.18.2.15
+++ conga/clustermon.spec.in.in	2007/04/11 20:15:31	1.18.2.16
@@ -194,6 +194,12 @@
 
 %changelog
 
+* Wed Apr 11 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.2-7
+
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+
 * Wed Apr 11 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.2-5
 - Version bump
 - Related: bz229027, bz230447, bz230452, bz230454, bz230457,
--- conga/conga.spec.in.in	2007/04/11 20:12:43	1.45.2.33
+++ conga/conga.spec.in.in	2007/04/11 20:15:31	1.45.2.34
@@ -282,6 +282,12 @@
 
 %changelog
 
+* Wed Apr 11 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.2-7
+
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+
 * Wed Apr 11 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.2-5
 - Fixed bz225206 (Cluster cannot be deleted (from 'Manage Systems') - but no error results)
 - Fixed bz225164 (Conga allows creation/rename of clusters with name greater than 15 characters)
--- conga/make/version.in	2007/04/11 19:23:51	1.21.2.15
+++ conga/make/version.in	2007/04/11 20:15:31	1.21.2.16
@@ -1,5 +1,5 @@
 VERSION=0.9.2
-RELEASE=5
+RELEASE=7
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beggining, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-05-01 15:57 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-05-01 15:57 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2007-05-01 16:57:33

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	Do not build for ppc64 (workaround for 236827)

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.25.2.4&r2=1.25.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.67.2.9&r2=1.67.2.10
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.28.2.4&r2=1.28.2.5

--- conga/clustermon.spec.in.in	2007/04/25 18:08:42	1.25.2.4
+++ conga/clustermon.spec.in.in	2007/05/01 15:57:33	1.25.2.5
@@ -29,6 +29,7 @@
 Source0: %{name}-%{version}.tar.gz
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
+ExcludeArch: ppc64
 BuildRequires: glibc-devel gcc-c++ libxml2-devel
 BuildRequires: openssl-devel dbus-devel pam-devel pkgconfig
 BuildRequires: net-snmp-devel tog-pegasus-devel
@@ -195,6 +196,10 @@
 
 
 %changelog
+* Mon Apr 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.1-8
+- Do not build for ppc64
+- Related: bz236827
+
 * Wed Apr 25 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.1-7
 - Fixed bz237838 (modcluster and ricci scriptlets fail during install)
 - Resolves: bz237838
--- conga/conga.spec.in.in	2007/04/30 18:34:42	1.67.2.9
+++ conga/conga.spec.in.in	2007/05/01 15:57:33	1.67.2.10
@@ -36,7 +36,7 @@
 %endif
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
-
+ExcludeArch: ppc64
 BuildRequires: python-devel >= 2.3.4
 BuildRequires: glibc-devel gcc-c++ libxml2-devel sed
 #BuildRequires: pam-devel
@@ -288,6 +288,9 @@
 
 
 %changelog
+* Tue Apr 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.1-8
+- Do not build for ppc64
+- Resolves: bz236827
 
 * Wed Apr 25 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.1-7
 - Fixed bz234196 (CVE-2007-0240 Conga includes version of Zope that is vulnerable to a XSS attack)
--- conga/make/version.in	2007/03/27 17:14:45	1.28.2.4
+++ conga/make/version.in	2007/05/01 15:57:33	1.28.2.5
@@ -1,5 +1,5 @@
 VERSION=0.9.1
-RELEASE=7
+RELEASE=8
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-06-27  7:43 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-06-27  7:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-06-27 07:43:35

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/site/luci/Extensions: conga_constants.py 
	luci/site/luci/var: Data.fs 
	make           : version.in 

Log message:
	Update the DB, and some other housekeeping

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.17&r2=1.18.2.18
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.42&r2=1.45.2.43
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.19.2.11&r2=1.19.2.12
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/var/Data.fs.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.15.2.17&r2=1.15.2.18
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.17&r2=1.21.2.18

--- conga/clustermon.spec.in.in	2007/04/30 18:47:32	1.18.2.17
+++ conga/clustermon.spec.in.in	2007/06/27 07:43:17	1.18.2.18
@@ -194,12 +194,8 @@
 
 %changelog
 
-* Mon Apr 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.2-7
-
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-
+* Wed Jun 27 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-1
+- Performance improvements.
 
 * Wed Apr 11 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.2-5
 - Version bump
--- conga/conga.spec.in.in	2007/06/26 21:41:48	1.45.2.42
+++ conga/conga.spec.in.in	2007/06/27 07:43:17	1.45.2.43
@@ -284,10 +284,7 @@
 
 
 %changelog
-* Mon Jun 18 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-1
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+* Wed Jun 27 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-1
 - Fixed bz238655 (conga does not set the "nodename" attribute for manual fencing)
 - Fixed bz221899 (Node log displayed in partially random order)
 - Fixed bz225782 (Need more luci service information on startup - no info written to log about failed start cause)
--- conga/luci/site/luci/Extensions/conga_constants.py	2007/06/19 15:54:10	1.19.2.11
+++ conga/luci/site/luci/Extensions/conga_constants.py	2007/06/27 07:43:17	1.19.2.12
@@ -150,6 +150,6 @@
 # Debugging parameters. Set LUCI_DEBUG_MODE to True and LUCI_DEBUG_VERBOSITY
 # to >= 2 to get full debugging output in syslog (LOG_DAEMON/LOG_DEBUG).
 
-LUCI_DEBUG_MODE			= True
+LUCI_DEBUG_MODE			= False
 LUCI_DEBUG_NET			= False
-LUCI_DEBUG_VERBOSITY	= 3
+LUCI_DEBUG_VERBOSITY	= 0
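
[Editorial note: the comment in the hunk above says these two flags gate
debugging output to syslog at LOG_DAEMON/LOG_DEBUG, and this commit turns
them off for release. A sketch of a consumer honoring them follows; the
helper name and call convention are assumptions, not luci code.]

# luci_debug_sketch.py -- hypothetical consumer of the constants above.
import syslog
from conga_constants import LUCI_DEBUG_MODE, LUCI_DEBUG_VERBOSITY

def debug(msg, verbosity=2):
    # Emit only when debugging is enabled and verbose enough, per the
    # comment in conga_constants.py.
    if LUCI_DEBUG_MODE and LUCI_DEBUG_VERBOSITY >= verbosity:
        syslog.openlog('luci', 0, syslog.LOG_DAEMON)
        syslog.syslog(syslog.LOG_DEBUG, msg)
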
Binary files /cvs/cluster/conga/luci/site/luci/var/Data.fs	2007/06/19 19:50:47	1.15.2.17 and /cvs/cluster/conga/luci/site/luci/var/Data.fs	2007/06/27 07:43:17	1.15.2.18 differ
rcsdiff: /cvs/cluster/conga/luci/site/luci/var/Data.fs: diff failed
--- conga/make/version.in	2007/06/18 18:39:49	1.21.2.17
+++ conga/make/version.in	2007/06/27 07:43:35	1.21.2.18
@@ -1,5 +1,5 @@
 VERSION=0.10.0
-RELEASE=2_UNRELEASED
+RELEASE=1
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-08-08 21:24 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-08-08 21:24 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-08-08 21:24:13

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	Packaging-related housekeeping for bz230451 fix

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.21&r2=1.18.2.22
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.49&r2=1.45.2.50
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.20&r2=1.21.2.21

--- conga/clustermon.spec.in.in	2007/07/30 06:13:31	1.18.2.21
+++ conga/clustermon.spec.in.in	2007/08/08 21:24:12	1.18.2.22
@@ -193,6 +193,9 @@
 
 
 %changelog
+* Wed Aug 08 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-4
+- Fixed bz230451 (fence_xvm.key file is not automatically created. Should have at least a default)
+- Related bz230451
 
 * Mon Jul 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-3
 - Fixed bz249351 (conga reports that ricci agent is unresponsive even though it's running)
--- conga/conga.spec.in.in	2007/07/30 06:13:31	1.45.2.49
+++ conga/conga.spec.in.in	2007/08/08 21:24:12	1.45.2.50
@@ -286,6 +286,10 @@
 
 ###  changelog ###
 %changelog
+* Wed Aug 08 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-4
+- Fixed bz230451 (fence_xvm.key file is not automatically created. Should have at least a default)
+- Resolves bz230451
+
 * Mon Jul 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-3
 - Fixed bz245947 (luci/Conga cluster configuration tool not initializing cluster node members)
 - Fixed bz249641 (conga is unable to do storage operations if there is an lvm snapshot present)
--- conga/make/version.in	2007/07/31 18:18:07	1.21.2.20
+++ conga/make/version.in	2007/08/08 21:24:12	1.21.2.21
@@ -1,5 +1,5 @@
 VERSION=0.10.0
-RELEASE=3
+RELEASE=4_UNRELEASED
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-08-09 22:02 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-08-09 22:02 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2007-08-09 22:02:22

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	                 download_files 
	make           : version.in 
Added files:
	.              : Plone-2.5.3-final_CMFPlone.patch 
Removed files:
	.              : Plone-2.5.2-1_CMFPlone.patch 

Log message:
	Update spec files, and bump version numbers

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/Plone-2.5.3-final_CMFPlone.patch.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=NONE&r2=1.2.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.25.2.5&r2=1.25.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.67.2.11&r2=1.67.2.12
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/download_files.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.5.2.2&r2=1.5.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/Plone-2.5.2-1_CMFPlone.patch.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.28.2.5&r2=1.28.2.6

/cvs/cluster/conga/Plone-2.5.3-final_CMFPlone.patch,v  -->  standard output
revision 1.2.2.1
--- conga/Plone-2.5.3-final_CMFPlone.patch
+++ -	2007-08-09 22:02:22.562577000 +0000
@@ -0,0 +1,111 @@
+diff -ur Plone-2.5.3-final.orig/CMFPlone/exportimport/configure.zcml Plone-2.5.3-final/CMFPlone/exportimport/configure.zcml
+--- Plone-2.5.3-final.orig/CMFPlone/exportimport/configure.zcml	2007-05-16 06:35:51.000000000 -0400
++++ Plone-2.5.3-final/CMFPlone/exportimport/configure.zcml	2007-06-12 15:31:05.000000000 -0400
+@@ -32,12 +32,6 @@
+      />
+ 
+   <adapter
+-     factory="Products.CMFCore.exportimport.content.StructureFolderWalkingAdapter"
+-     provides="Products.GenericSetup.interfaces.IFilesystemImporter"
+-     for="Products.ATContentTypes.interface.IATContentType"
+-     />
+-
+-  <adapter
+      factory=".propertiestool.SimpleItemWithPropertiesXMLAdapter"
+      provides="Products.GenericSetup.interfaces.IBody"
+      for="Products.CMFCore.interfaces.IMemberDataTool
+@@ -51,19 +45,6 @@
+           Products.GenericSetup.interfaces.ISetupEnviron"
+      />
+ 
+-  <!-- Mark ATCT objects as IDAVAware so CMFSetup can export/import them -->
+-  <five:implements
+-     class="Products.ATContentTypes.content.document.ATDocument"
+-     interface="Products.GenericSetup.interfaces.IDAVAware"
+-     />
+-
+-  <!-- XXX: Temporarily disable ATTopic exporting until we have an
+-       actual exporter or Marshaller -->
+-  <five:implements
+-     class="Products.ATContentTypes.content.topic.ATTopic"
+-     interface="Products.CMFPlone.exportimport.content.IDisabledExport"
+-     />
+-
+   <adapter
+      factory=".content.NullExporterAdapter"
+      provides="Products.GenericSetup.interfaces.IFilesystemExporter"
+diff -ur Plone-2.5.3-final.orig/CMFPlone/MembershipTool.py Plone-2.5.3-final/CMFPlone/MembershipTool.py
+--- Plone-2.5.3-final.orig/CMFPlone/MembershipTool.py	2007-05-16 06:35:51.000000000 -0400
++++ Plone-2.5.3-final/CMFPlone/MembershipTool.py	2007-06-12 15:31:05.000000000 -0400
+@@ -1,4 +1,3 @@
+-import PIL
+ from cStringIO import StringIO
+ from DateTime import DateTime
+ from Products.CMFCore.utils import getToolByName, _checkPermission
+@@ -589,6 +588,7 @@
+             if portrait_data == '':
+                 continue
+             try:
++                import PIL
+                 img = PIL.Image.open(StringIO(portrait_data))
+             except ConflictError:
+                 pass
+diff -ur Plone-2.5.3-final.orig/CMFPlone/setup/dependencies.py Plone-2.5.3-final/CMFPlone/setup/dependencies.py
+--- Plone-2.5.3-final.orig/CMFPlone/setup/dependencies.py	2007-05-16 06:35:51.000000000 -0400
++++ Plone-2.5.3-final/CMFPlone/setup/dependencies.py	2007-06-12 15:31:05.000000000 -0400
+@@ -107,7 +107,8 @@
+ except ImportError:
+     log(("PIL not found. Plone needs PIL 1.1.5 or newer. "
+          "Please download it from http://www.pythonware.com/products/pil/ or "
+-         "http://effbot.org/downloads/#Imaging"))
++         "http://effbot.org/downloads/#Imaging"),
++        severity=logging.INFO, optional=1)
+ 
+ try:
+     from elementtree import ElementTree
+diff -ur Plone-2.5.3-final.orig/CMFPlone/utils.py Plone-2.5.3-final/CMFPlone/utils.py
+--- Plone-2.5.3-final.orig/CMFPlone/utils.py	2007-05-16 06:35:52.000000000 -0400
++++ Plone-2.5.3-final/CMFPlone/utils.py	2007-06-12 15:31:05.000000000 -0400
+@@ -3,8 +3,6 @@
+ from os.path import join, abspath, split
+ from cStringIO import StringIO
+ 
+-from PIL import Image
+-
+ import zope.interface
+ from zope.interface import implementedBy
+ from zope.component import getMultiAdapter
+@@ -41,15 +39,6 @@
+ DANGEROUS_CHARS_REGEX = re.compile(r"[?&/:\\#]+")
+ EXTRA_DASHES_REGEX = re.compile(r"(^\-+)|(\-+$)")
+ 
+-# Settings for member image resize quality
+-PIL_SCALING_ALGO = Image.ANTIALIAS
+-PIL_QUALITY = 88
+-MEMBER_IMAGE_SCALE = (75, 100)
+-IMAGE_SCALE_PARAMS = {'scale': MEMBER_IMAGE_SCALE,
+-                      'quality': PIL_QUALITY,
+-                      'algorithm': PIL_SCALING_ALGO,
+-                      'default_format': 'PNG'}
+-
+ _marker = []
+ 
+ class BrowserView(BaseView):
+@@ -632,6 +621,17 @@
+     return security
+ 
+ def scale_image(image_file, max_size=None, default_format=None):
++    from PIL import Image
++
++    # Settings for member image resize quality
++    PIL_SCALING_ALGO = Image.ANTIALIAS
++    PIL_QUALITY = 88
++    MEMBER_IMAGE_SCALE = (75, 100)
++    IMAGE_SCALE_PARAMS = {'scale': MEMBER_IMAGE_SCALE,
++                          'quality': PIL_QUALITY,
++                          'algorithm': PIL_SCALING_ALGO,
++                          'default_format': 'PNG'}
++
+     """Scales an image down to@most max_size preserving aspect ratio
+     from an input file
+ 
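
[Editorial note: the CMFPlone hunks above all apply one technique, moving
PIL imports from module scope into the functions that use them, so Plone
imports cleanly on hosts without PIL and fails only when image scaling is
actually requested. A minimal sketch of the pattern with illustrative
names; only the deferred-import idea comes from the patch.]

# lazy_pil_sketch.py -- the deferred-import pattern used by the patch above.
def scale_image(image_file, max_size=(75, 100)):
    # A missing PIL raises ImportError here, at first use, rather than
    # at module import time.
    from PIL import Image
    img = Image.open(image_file)
    img.thumbnail(max_size, Image.ANTIALIAS)
    return img
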
--- conga/clustermon.spec.in.in	2007/05/01 15:57:33	1.25.2.5
+++ conga/clustermon.spec.in.in	2007/08/09 22:02:19	1.25.2.6
@@ -35,7 +35,7 @@
 BuildRequires: net-snmp-devel tog-pegasus-devel
 
 %description
-This package contains Red Hat Enterprise Linux Cluster Suite 
+This package contains Red Hat Enterprise Linux Cluster Suite
 SNMP/CIM module/agent/provider.
 
 
@@ -43,7 +43,7 @@
 %setup -q
 
 %build
-%configure      --arch=%{_arch} \
+%configure		--arch=%{_arch} \
 		--docdir=%{_docdir} \
 		--pegasus_providers_dir=%{PEGASUS_PROVIDERS_DIR} \
 		--include_zope_and_plone=no
@@ -89,25 +89,26 @@
 
 
 %post -n modcluster
+/sbin/chkconfig --add modclusterd
 DBUS_PID=`cat /var/run/messagebus.pid 2> /dev/null`
 /bin/kill -s SIGHUP $DBUS_PID > /dev/null 2>&1
-/sbin/service oddjobd reload > /dev/null 2>&1
-/sbin/chkconfig --add modclusterd
+# It's ok if this fails (it will fail when oddjob is not running).
+/sbin/service oddjobd reload > /dev/null 2>&1 || true
 
 %preun -n modcluster
 if [ "$1" == "0" ]; then
-   /sbin/service modclusterd stop > /dev/null 2>&1
-   /sbin/chkconfig --del modclusterd
+	/sbin/service modclusterd stop > /dev/null 2>&1
+	/sbin/chkconfig --del modclusterd
 fi
 
 %postun -n modcluster
 if [ "$1" == "0" ]; then
-   DBUS_PID=`cat /var/run/messagebus.pid 2> /dev/null`
-   /bin/kill -s SIGHUP $DBUS_PID > /dev/null 2>&1
-   /sbin/service oddjobd reload > /dev/null 2>&1
+	DBUS_PID=`cat /var/run/messagebus.pid 2> /dev/null`
+	/bin/kill -s SIGHUP $DBUS_PID > /dev/null 2>&1
+	/sbin/service oddjobd reload > /dev/null 2>&1
 fi
 if [ "$1" == "1" ]; then
-   /sbin/service modclusterd condrestart > /dev/null 2>&1
+	/sbin/service modclusterd condrestart > /dev/null 2>&1
 fi
 
 
@@ -121,7 +122,7 @@
 Summary: Red Hat Enterprise Linux Cluster Suite - SNMP agent
 
 Requires: modcluster = %{version}-%{release}
-Requires: net-snmp 
+Requires: net-snmp
 Requires: oddjob openssl
 Requires(post): initscripts
 Requires(postun): initscripts
@@ -138,13 +139,12 @@
 			%{_docdir}/cluster-snmp-%{version}/
 
 %post -n cluster-snmp
-/sbin/service snmpd condrestart > /dev/null 2>&1
-exit 0
+/sbin/service snmpd condrestart > /dev/null 2>&1 || true
 
 %postun -n cluster-snmp
 # don't restart snmpd twice on upgrades
 if [ "$1" == "0" ]; then
-   /sbin/service snmpd condrestart > /dev/null 2>&1
+	/sbin/service snmpd condrestart > /dev/null 2>&1
 fi
 
 
@@ -159,14 +159,14 @@
 Summary: Red Hat Enterprise Linux Cluster Suite - CIM provider
 
 Requires: modcluster = %{version}-%{release}
-Requires: tog-pegasus 
+Requires: tog-pegasus
 Requires: oddjob openssl
 Requires(post): initscripts
 Requires(postun): initscripts
 Conflicts: clumon-cim
 
 %description -n cluster-cim
-CIM provider for Red Hat Enterprise Linux Cluster Suite. 
+CIM provider for Red Hat Enterprise Linux Cluster Suite.
 
 %files -n cluster-cim
 %defattr(-,root,root)
@@ -174,14 +174,13 @@
 			%{_docdir}/cluster-cim-%{version}/
 
 %post -n cluster-cim
-/sbin/service tog-pegasus condrestart > /dev/null 2>&1
 # pegasus might not be running, don't fail %post
-exit 0
+/sbin/service tog-pegasus condrestart > /dev/null 2>&1 || true
 
 %postun -n cluster-cim
 # don't restart pegasus twice on upgrades
 if [ "$1" == "0" ]; then
-   /sbin/service tog-pegasus condrestart > /dev/null 2>&1
+	/sbin/service tog-pegasus condrestart > /dev/null 2>&1
 fi
 # pegasus might not be running, don't fail %postun
 exit 0
@@ -196,6 +195,9 @@
 
 
 %changelog
+* Thu Aug 09 2007 Ryan McCabe <rmccabe@redhat.com> 0.11.0-1
+- Merge in fixes from the RHEL5 code base.
+
 * Mon Apr 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.1-8
 - Do not build for ppc64
 - Related: bz236827
@@ -259,5 +261,5 @@
 - Don't auto-start modclusterd after installation, do it manually
 
 * Wed Aug 09 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11
-- Spinoff: separate clustermon.srpm (modcluster, cluster-snmp and 
+- Spinoff: separate clustermon.srpm (modcluster, cluster-snmp and
    cluster-cim) from conga.srpm
--- conga/conga.spec.in.in	2007/05/02 02:39:53	1.67.2.11
+++ conga/conga.spec.in.in	2007/08/09 22:02:19	1.67.2.12
@@ -11,7 +11,7 @@
 ###############################################################################
 
 
-%define include_zope_and_plone     @@INCLUDE_ZOPE_AND_PLONE@@
+%define include_zope_and_plone	@@INCLUDE_ZOPE_AND_PLONE@@
 
 
 
@@ -32,7 +32,7 @@
 Source1: @@ZOPE_ARCHIVE_TAR@@
 Source3: @@ZOPE_FIVE_ARCHIVE_TAR@@
 Source2: @@PLONE_ARCHIVE_TAR@@
-Patch2: Plone-2.5.2-1_CMFPlone.patch
+Patch2: Plone-2.5.3-final_CMFPlone.patch
 %endif
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
@@ -45,9 +45,9 @@
 
 
 %description
-Conga is a project developing management system for remote stations. 
-It consists of luci, https frontend, and ricci, secure daemon that dispatches 
-incoming messages to underlying management modules. 
+Conga is a project developing management system for remote stations.
+It consists of luci, https frontend, and ricci, secure daemon that dispatches
+incoming messages to underlying management modules.
 
 
 %prep
@@ -69,7 +69,7 @@
 %endif
 
 %build
-%configure      --arch=%{_arch} \
+%configure		--arch=%{_arch} \
 		--docdir=%{_docdir} \
 		--include_zope_and_plone=%{include_zope_and_plone}
 make %{?_smp_mflags} conga
@@ -106,7 +106,7 @@
 Requires: zope
 Requires: plone >= 2.5
 %endif
-Requires: grep openssl mailcap stunnel 
+Requires: grep openssl mailcap stunnel
 Requires: sed util-linux
 
 Requires(pre): grep shadow-utils
@@ -115,19 +115,22 @@
 
 
 %description -n luci
-Conga is a project developing management system for remote stations. 
-It consists of luci, https frontend, and ricci, secure daemon that 
-dispatches incoming messages to underlying management modules. 
+Conga is a project developing management system for remote stations.
+It consists of luci, https frontend, and ricci, secure daemon that
+dispatches incoming messages to underlying management modules.
 
 This package contains Luci website.
 
 
 %files -n luci
+%verify(not size md5 mtime) /var/lib/luci/var/Data.fs
 %defattr(-,root,root)
 %config(noreplace)		%{_sysconfdir}/sysconfig/luci
 				%{_sysconfdir}/rc.d/init.d/luci
 				%{_sbindir}/luci_admin
 				%{_docdir}/luci-%{version}/
+%defattr(0755,root,root)
+				%{_libdir}/luci/
 %defattr(-,luci,luci)
 				%{_localstatedir}/lib/luci
 				%{_libdir}/luci/ssl
@@ -136,45 +139,49 @@
 %endif
 
 %pre -n luci
-if ! /bin/grep ^luci\:x /etc/group 2>&1 >/dev/null; then
-   /usr/sbin/groupadd -r -f luci >/dev/null 2>&1
+if ! /bin/grep ^luci\:x /etc/group >&/dev/null; then
+	/usr/sbin/groupadd -r -f luci >&/dev/null
 fi
-if ! /bin/grep ^luci\:x /etc/passwd 2>&1 >/dev/null; then
-   /usr/sbin/useradd -r -M -s /sbin/nologin -d /var/lib/luci -g luci luci >/dev/null 2>&1
+if ! /bin/grep ^luci\:x /etc/passwd >&/dev/null; then
+	/usr/sbin/useradd -r -M -s /sbin/nologin -d /var/lib/luci -g luci luci >&/dev/null
 fi
 
 %post -n luci
 /sbin/chkconfig --add luci
-/sbin/service luci status >/dev/null 2>&1
+/sbin/service luci status >&/dev/null
 LUCI_RUNNING=$?
 if [ "$LUCI_RUNNING" == "0" ]; then
-   /sbin/service luci stop >/dev/null 2>&1
+	/sbin/service luci stop >&/dev/null
 fi
-if [ -e /var/lib/luci/var/luci_backup.xml ]; then
-   # restore luci database
-   /usr/sbin/luci_admin restore >/dev/null 2>&1
+if [ -f /var/lib/luci/var/luci_backup.xml ]; then
+	# restore luci database
+	/usr/sbin/luci_admin restore >&/dev/null
 fi
+
 # set initial admin password (if not already set) to random value
-if ! /bin/grep True /var/lib/luci/.default_password_has_been_reset 2>&1 >/dev/null; then
-   /usr/sbin/luci_admin password --random >/dev/null 2>&1
+grep True /var/lib/luci/.default_password_has_been_reset >&/dev/null
+if [ $? -ne 0 ]; then
+	/usr/sbin/luci_admin password --random >&/dev/null &&
+		rm -f /var/lib/luci/var/Data.fs.index /var/lib/luci/var/Data.fs.tmp /var/lib/luci/var/Data.fs.old
+	find %{_libdir}/luci/zope/var -print0 2>/dev/null | xargs -0 chown luci:
 fi
 if [ "$LUCI_RUNNING" == "0" ]; then
-   /sbin/service luci start >/dev/null 2>&1
+	/sbin/service luci start >&/dev/null
 fi
 
 %preun -n luci
 if [ "$1" == "0" ]; then
-   /sbin/service luci stop >/dev/null 2>&1
-   /sbin/chkconfig --del luci
+	/sbin/service luci stop >&/dev/null
+	/sbin/chkconfig --del luci
 fi
-/sbin/service luci status >/dev/null 2>&1
+/sbin/service luci status >&/dev/null
 LUCI_RUNNING=$?
 if [ "$LUCI_RUNNING" == "0" ]; then
-   /sbin/service luci stop >/dev/null 2>&1
+	/sbin/service luci stop >&/dev/null
 fi
-/usr/sbin/luci_admin backup >/dev/null 2>&1
+/usr/sbin/luci_admin backup >&/dev/null
 if [ "$LUCI_RUNNING" == "0" ]; then
-   /sbin/service luci start >/dev/null 2>&1
+	/sbin/service luci start >&/dev/null
 fi
 
 
@@ -191,7 +198,7 @@
 Requires: initscripts
 Requires: oddjob dbus openssl pam cyrus-sasl >= 2.1
 Requires: sed util-linux
-Requires: modcluster >= 0.8
+Requires: modcluster >= 0.11.0
 
 # modreboot
 
@@ -212,12 +219,12 @@
 Requires(postun): initscripts util-linux
 
 %description -n ricci
-Conga is a project developing management system for remote stations. 
-It consists of luci, https frontend, and ricci, secure daemon that dispatches 
-incoming messages to underlying management modules. 
+Conga is a project developing management system for remote stations.
+It consists of luci, https frontend, and ricci, secure daemon that dispatches
+incoming messages to underlying management modules.
 
-This package contains listening daemon (dispatcher), as well as 
-reboot, rpm, storage, service and log management modules. 
+This package contains listening daemon (dispatcher), as well as
+reboot, rpm, storage, service and log management modules.
 
 
 %files -n ricci
@@ -249,33 +256,33 @@
 			%{_libexecdir}/ricci-modlog
 
 %pre -n ricci
-if ! /bin/grep ^ricci\:x /etc/group 2>&1 >/dev/null; then
-   /usr/sbin/groupadd -r -f ricci >/dev/null 2>&1
+if ! /bin/grep ^ricci\:x /etc/group >&/dev/null; then
+	/usr/sbin/groupadd -r -f ricci >&/dev/null
 fi
-if ! /bin/grep ^ricci\:x /etc/passwd 2>&1 >/dev/null; then
-   /usr/sbin/useradd -r -M -s /sbin/nologin -d /var/lib/ricci -g ricci ricci >/dev/null 2>&1
+if ! /bin/grep ^ricci\:x /etc/passwd >&/dev/null; then
+	/usr/sbin/useradd -r -M -s /sbin/nologin -d /var/lib/ricci -g ricci ricci >&/dev/null
 fi
 
 %post -n ricci
-DBUS_PID=`cat /var/run/messagebus.pid 2> /dev/null`
-/bin/kill -s SIGHUP $DBUS_PID > /dev/null 2>&1
-/sbin/service oddjobd reload >/dev/null 2>&1
+DBUS_PID=`cat /var/run/messagebus.pid 2>/dev/null`
+/bin/kill -s SIGHUP $DBUS_PID >&/dev/null
+/sbin/service oddjobd reload >&/dev/null
 /sbin/chkconfig --add ricci
 
 %preun -n ricci
 if [ "$1" == "0" ]; then
-   /sbin/service ricci stop > /dev/null 2>&1
-   /sbin/chkconfig --del ricci
+	/sbin/service ricci stop >&/dev/null
+	/sbin/chkconfig --del ricci
 fi
 
 %postun -n ricci
 if [ "$1" == "0" ]; then
-   DBUS_PID=`cat /var/run/messagebus.pid 2> /dev/null`
-   /bin/kill -s SIGHUP $DBUS_PID > /dev/null 2>&1
-   /sbin/service oddjobd reload > /dev/null 2>&1
+	DBUS_PID=`cat /var/run/messagebus.pid 2>/dev/null`
+	/bin/kill -s SIGHUP $DBUS_PID >&/dev/null
+	/sbin/service oddjobd reload >&/dev/null
 fi
 if [ "$1" == "1" ]; then
-   /sbin/service ricci condrestart > /dev/null 2>&1
+	/sbin/service ricci condrestart >&/dev/null
 fi
 
 
@@ -288,6 +295,9 @@
 
 
 %changelog
+* Thu Aug 09 2007 Ryan McCabe <rmccabe@redhat.com> 0.11.0-1
+- Merge in fixes from the RHEL5 code base.
+
 * Mon Apr 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.9.1-9
 - Fix bz238656 (conga does not set the "nodename" attribute for manual fencing)
 
@@ -362,11 +372,11 @@
 - Fixed deleting cluster
 - Fixed deleting node
 - Fixed redirection for all async->busy wait calls
-- Storage module: properly probe cluster quorum if LVM locking 
+- Storage module: properly probe cluster quorum if LVM locking
   is marked as clustered
 
 * Wed Nov 01 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-23
-- 213504: luci does not correctly handle cluster.conf with 
+- 213504: luci does not correctly handle cluster.conf with
   nodes lacking FQDN
 
 * Tue Oct 31 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-22
@@ -414,19 +424,19 @@
 * Fri Aug 18 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-12
 - Don't auto-start ricci after installation, do it manually
 - Under certain circumstances, default luci password would not get reset
-- Many Luci improvements   
+- Many Luci improvements
 
 * Wed Aug 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11.7
 - Move ricci-modrpm, ricci-modlog, ricci-modstorage, ricci-modservice
    from /usr/sbin to /usr/libexec
 
 * Wed Aug 09 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-11
-- Spin off clustermon.srpm (modcluster, cluster-snmp and 
+- Spin off clustermon.srpm (modcluster, cluster-snmp and
    cluster-cim) from conga.srpm
 - Luci: tighten down security
 
 * Thu Aug 03 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-10
-- Luci: fix login issues, add cluster resources, styling... 
+- Luci: fix login issues, add cluster resources, styling...
 
 * Wed Jul 26 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-9
 - Update Luci to Plone 2.5
@@ -439,7 +449,7 @@
 - More compliant specfile, minor fixes
 
 * Tue Jun 27 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-6
-- Luci persists users/clusters/systems/permissions across upgrades 
+- Luci persists users/clusters/systems/permissions across upgrades
 
 * Fri Jun 16 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.8-4
 - Moved storage, service, log and rpm modules into main ricci.rpm
--- conga/download_files	2007/03/27 02:07:39	1.5.2.2
+++ conga/download_files	2007/08/09 22:02:19	1.5.2.3
@@ -2,10 +2,10 @@
 # URLs is a space delimited list of urls to download from
 
 
-ZOPE_ARCHIVE=Zope-2.8.9-final
-ZOPE_ARCHIVE_TAR=Zope-2.8.9-final.tgz
-ZOPE_MD5SUM=afe67f446ed602fa7ae0137e05d095cb
-ZOPE_URLs="http://www.zope.org/Products/Zope/2.8.9/Zope-2.8.9-final.tgz"
+ZOPE_ARCHIVE=Zope-2.8.9.1-final
+ZOPE_ARCHIVE_TAR=Zope-2.8.9.1-final.tgz
+ZOPE_MD5SUM=091e96f14c9a8aadcad3f6da74cc38c1
+ZOPE_URLs="http://www.zope.org/Products/Zope/2.8.9.1/Zope-2.8.9.1-final.tgz"
 
 
 ZOPE_FIVE_ARCHIVE=Five
@@ -14,10 +14,10 @@
 ZOPE_FIVE_URLs="http://codespeak.net/z3/five/release/Five-1.2.6.tgz"
 
 
-PLONE_ARCHIVE=Plone-2.5.2-1
-PLONE_ARCHIVE_TAR=Plone-2.5.2-1.tar.gz
-PLONE_MD5SUM=b4891a3f11a0eacb13b234d530ba9af1
-PLONE_URLs="http://plone.googlecode.com/files/Plone-2.5.2-1.tar.gz \
-	http://superb-west.dl.sourceforge.net/sourceforge/plone/Plone-2.5.2-1.tar.gz \
-	    http://superb-east.dl.sourceforge.net/sourceforge/plone/Plone-2.5.2-1.tar.gz \
-	    http://easynews.dl.sourceforge.net/sourceforge/plone/Plone-2.5.2-1.tar.gz"
+PLONE_ARCHIVE=Plone-2.5.3-final
+PLONE_ARCHIVE_TAR=Plone-2.5.3-final.tar.gz
+PLONE_MD5SUM=36117b0757982d66d445b6c6b9df0e25
+PLONE_URLs="http://plone.googlecode.com/files/Plone-2.5.3-final.tar.gz \
+	http://superb-west.dl.sourceforge.net/sourceforge/plone/Plone-2.5.3-final.tar.gz \
+	http://superb-east.dl.sourceforge.net/sourceforge/plone/Plone-2.5.3-final.tar.gz \
+	http://easynews.dl.sourceforge.net/sourceforge/plone/Plone-2.5.3-final.tar.gz"
--- conga/make/version.in	2007/05/01 15:57:33	1.28.2.5
+++ conga/make/version.in	2007/08/09 22:02:21	1.28.2.6
@@ -1,5 +1,5 @@
-VERSION=0.9.1
-RELEASE=8
+VERSION=0.11.0
+RELEASE=1
 # Remove "_UNRELEASED" at release time.
 # Put release num at the beginning, 
 # so that after it gets released, it has 



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-08-13 19:06 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-08-13 19:06 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-08-13 19:06:44

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/site/luci/var: Data.fs 
	luci/utils     : luci_admin 

Log message:
	- Update the luci zope database file
	- Update the changelog
	- Fix some nits in the luci_admin script that were hit by users in the field

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.22&r2=1.18.2.23
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.52&r2=1.45.2.53
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/var/Data.fs.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.15.2.21&r2=1.15.2.22
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/utils/luci_admin.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.50.2.4&r2=1.50.2.5

--- conga/clustermon.spec.in.in	2007/08/08 21:24:12	1.18.2.22
+++ conga/clustermon.spec.in.in	2007/08/13 19:06:01	1.18.2.23
@@ -195,7 +195,7 @@
 %changelog
 * Wed Aug 08 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-4
 - Fixed bz230451 (fence_xvm.key file is not automatically created. Should have at least a default)
-- Related bz230451
+- Resolves: bz230451
 
 * Mon Jul 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-3
 - Fixed bz249351 (conga reports that ricci agent is unresponsive even though it's running)
--- conga/conga.spec.in.in	2007/08/11 04:16:19	1.45.2.52
+++ conga/conga.spec.in.in	2007/08/13 19:06:01	1.45.2.53
@@ -310,9 +310,12 @@
 
 ###  changelog ###
 %changelog
-* Wed Aug 08 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-4
+* Mon Aug 13 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-4
 - Fixed bz230451 (fence_xvm.key file is not automatically created. Should have at least a default)
-- Resolves bz230451
+- Fixed bz249097 (allow a space as a valid password char)
+- Fixed bz250834 (ZeroDivisionError when attempting to click an empty lvm volume group)
+- Resolves: bz249097
+- Related: bz230451
 
 * Mon Jul 30 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-3
 - Fixed bz245947 (luci/Conga cluster configuration tool not initializing cluster node members)
Binary files /cvs/cluster/conga/luci/site/luci/var/Data.fs	2007/08/08 22:42:56	1.15.2.21 and /cvs/cluster/conga/luci/site/luci/var/Data.fs	2007/08/13 19:06:02	1.15.2.22 differ
rcsdiff: /cvs/cluster/conga/luci/site/luci/var/Data.fs: diff failed
--- conga/luci/utils/luci_admin	2007/08/10 18:32:54	1.50.2.4
+++ conga/luci/utils/luci_admin	2007/08/13 19:06:44	1.50.2.5
@@ -2,13 +2,13 @@
 
 # Copyright (C) 2006-2007 Red Hat, Inc.
 
-import sys, os, stat, select, string, pwd
-from sys import stderr, argv
+import sys, os, pwd
+from select import select
+from stat import S_ISREG
 import types
 import xml
 import xml.dom
-from xml.dom import minidom
-	
+
 sys.path.extend((
 	'/usr/lib/luci/zope/lib/python',
 	'/usr/lib/luci/zope/lib/python/Products',
@@ -25,14 +25,14 @@
 ))
 
 from Products import __path__
-for i in ['/usr/lib/luci/zope/lib/python/Products',
+for pdir in ['/usr/lib/luci/zope/lib/python/Products',
 	  '/usr/lib64/luci/zope/lib/python/Products',
 	  '/usr/lib64/luci/zope/lib64/python/Products',
 	  '/usr/lib64/zope/lib/python/Products',
 	  '/usr/lib64/zope/lib64/python/Products',
 	  '/usr/lib/zope/lib/python/Products']:
-	if os.path.isdir(i):
-		__path__.append(i)
+	if os.path.isdir(pdir):
+		__path__.append(pdir)
 
 LUCI_INIT_DEBUG = 0
 
@@ -59,6 +59,12 @@
 SSL_HTTPS_PUBKEY_PATH  = LUCI_CERT_DIR + SSL_HTTPS_PUBKEY_NAME
 SSL_KEYCONFIG_PATH     = LUCI_CERT_DIR + SSL_KEYCONFIG_NAME
 
+# only root should run this
+if os.getuid() != 0:
+	sys.stderr.write('Only the \'root\' user can run %s\n' % sys.argv[0])
+	sys.stderr.write('Try again with root privileges.\n')
+	sys.exit(2)
+
 ssl_key_data = [
 	{ 'id'  : SSL_PRIVKEY_PATH,
 	  'name': SSL_PRIVKEY_NAME,
@@ -81,12 +87,14 @@
 	  'type': 'config',
 	  'mode': 0644 }
 ]
+
 for name in os.listdir(LUCI_PEERS_DIR):
-	path = LUCI_PEERS_DIR + name
-	if stat.S_ISREG(os.stat(path).st_mode):
-		ssl_key_data.append({'id'   : path, 
-				     'name' : path.lstrip(LUCI_CERT_DIR), 
-				     'type' : 'public', 
+	cert_path = LUCI_PEERS_DIR + name
+	if S_ISREG(os.stat(cert_path).st_mode):
+		ssl_key_data.append({
+				     'id'   : cert_path,
+				     'name' : cert_path.lstrip(LUCI_CERT_DIR),
+				     'type' : 'public',
 				     'mode' : 0644})
 
 #null = file(os.devnull, 'rwb+', 0)   - available on python 2.4 and above!!!
@@ -109,21 +117,42 @@
 			raise
 		return luci
 	except:
-		msg = 'Cannot find the \"' + LUCI_USER + '\" user.\n'
+		msg = 'Cannot find the "%s" user.\n' % LUCI_USER
 		sys.stderr.write(msg)
-		raise msg
-	
+		raise Exception, msg
+
 
 def set_default_passwd_reset_flag():
 	# set flag marking admin password has been set
-	uid, gid = get_luci_uid_gid()
-	open(LUCI_ADMIN_SET_PATH, 'w').write('True')
+
+	try:
+		uid, gid = get_luci_uid_gid()
+	except:
+		sys.stderr.write('Unable to find the luci user\'s UID\n')
+		return False
+
+	try:
+		open(LUCI_ADMIN_SET_PATH, 'w').write('True')
+	except IOError, e:
+		if e[0] != 2:
+			sys.stderr.write('Unable to open "%s" for writing: %s\n' \
+				% (LUCI_ADMIN_SET_PATH, e[1]))
+			return False
+	except Exception, e:
+		sys.stderr.write('Unable to open "%s" for writing: %s\n' \
+			% (LUCI_ADMIN_SET_PATH, str(e)))
+		return False
+
 	os.chown(LUCI_ADMIN_SET_PATH, uid, gid)
 	os.chmod(LUCI_ADMIN_SET_PATH, 0640)
 	return True
 
 def get_default_passwd_reset_flag():
-	return open(LUCI_ADMIN_SET_PATH, 'r').read(16).strip() == 'True'
+	try:
+		return open(LUCI_ADMIN_SET_PATH, 'r').read(16).strip() == 'True'
+	except:
+		return False
+	return False
 
 
 def read_passwd(prompt, confirm_prompt):
@@ -138,7 +167,7 @@
 			continue
 		s2 = getpass(confirm_prompt)
 		if s1 != s2:
-			print 'Passwords mismatch, try again'
+			print 'Password mismatch, try again'
 			continue
 		return s1
 
@@ -146,41 +175,37 @@
 
 def restore_luci_db_fsattr():
 	uid, gid = -1, -1
+
 	try:
 		uid, gid = get_luci_uid_gid()
 	except:
 		return -1
-	
+
 	try:
 		os.chown(LUCI_DB_PATH, uid, gid)
 		os.chmod(LUCI_DB_PATH, 0600)
-		for i in [ '.tmp', '.old', '.index', '.lock' ]:
+
+		for fext in [ '.tmp', '.old', '.index', '.lock' ]:
 			try:
-				os.chown(LUCI_DB_PATH + i, uid, gid)
-				os.chmod(LUCI_DB_PATH + i, 0600)
-			except: pass
-	except:
-		sys.stderr.write('Unable to change ownership of the Luci database back to user \"' + LUCI_USER + '\"\n')
+				os.chown('%s%s' % (LUCI_DB_PATH, fext), uid, gid)
+				os.chmod('%s%s' % (LUCI_DB_PATH, fext), 0600)
+			except:
+				pass
+	except Exception, e:
+		sys.stderr.write('Unable to change ownership of the Luci database back to user "%s": %s\n' % (LUCI_USER, str(e)))
 		return -1
 
 def set_zope_passwd(user, passwd):
 	sys.stderr = null
-	import ZODB
 	from ZODB.FileStorage import FileStorage
 	from ZODB.DB import DB
-	import OFS
 	from OFS.Application import AppInitializer
-	import OFS.Folder
 	import AccessControl
 	import AccessControl.User
 	from AccessControl.AuthEncoding import SSHADigestScheme
 	from AccessControl.SecurityManagement import newSecurityManager
 	import transaction
-	import Products.CMFCore
-	import Products.CMFCore.MemberDataTool
 	import App.ImageFile
-	import Products.PluggableAuthService.plugins.ZODBUserManager
-	import BTrees.OOBTree
 	# Zope wants to open a www/ok.gif and images/error.gif
 	# when you initialize the application object. This keeps
 	# the AppInitializer(app).initialize() call below from failing.
@@ -196,10 +221,10 @@
 			sys.stderr.write('It appears that Luci is running. Please stop Luci before attempting to reset passwords.\n')
 			return -1
 		else:
-			sys.stderr.write('Unable to open the Luci database \"' + dbfn + '\":' + str(e) + '\n')
+			sys.stderr.write('Unable to open the Luci database \"' + LUCI_DB_PATH + '\":' + str(e) + '\n')
 			return -1
 	except Exception, e:
-		sys.stderr.write('Unable to open the Luci database \"' + dbfn + '\":' + str(e) + '\n')
+		sys.stderr.write('Unable to open the Luci database \"' + LUCI_DB_PATH + '\":' + str(e) + '\n')
 		return -1
 
 	try:
@@ -238,10 +263,10 @@
 
 	if restore_luci_db_fsattr():
 		return -1
-	
+
 	if user == 'admin' and ret == 0:
 		set_default_passwd_reset_flag()
-	
+
 	return ret
 
 
@@ -254,6 +279,7 @@
 	if not certList or len(certList) < 1:
 		sys.stderr.write('Your backup file contains no certificate data. Please check that your backup file is not corrupt.\n')
 		return -1
+
 	uid, gid = -1, -1
 	try:
 		uid, gid = get_luci_uid_gid()
@@ -300,22 +326,14 @@
 
 def luci_restore(argv):
 	sys.stderr = null
-	import ZODB
 	from ZODB.FileStorage import FileStorage
 	from ZODB.DB import DB
-	import OFS
 	from OFS.Application import AppInitializer
-	import OFS.Folder
 	import AccessControl
 	import AccessControl.User
-	from AccessControl.AuthEncoding import SSHADigestScheme
 	from AccessControl.SecurityManagement import newSecurityManager
 	import transaction
-	import Products.CMFCore
-	import Products.CMFCore.MemberDataTool
 	import App.ImageFile
-	import Products.PluggableAuthService.plugins.ZODBUserManager
-	import BTrees.OOBTree
 	from DateTime import DateTime
 	App.ImageFile.__init__ = lambda x, y: None
 	sys.stderr = orig_stderr
@@ -497,7 +515,7 @@
 		try:
 			title = str(s.getAttribute('title'))
 		except:
-			title = '__luci__:system'
+			title = ''
 
 		x.manage_addFolder(id, title)
 		try:
@@ -505,7 +523,8 @@
 			if not new_system:
 				raise
 			new_system.manage_acquiredPermissions([])
-			new_system.manage_role('View', ['Access contents information','View'])
+			new_system.manage_role('View',
+				['Access contents information', 'View'])
 		except:
 			transaction.abort()
 			sys.stderr.write('An error occurred while restoring storage system \"' + id + '\"\n')
@@ -556,7 +575,7 @@
 
 		title = c.getAttribute('title')
 		if not title:
-			title = '__luci__:cluster'
+			title = ''
 		else:
 			title = str(title)
 
@@ -567,7 +586,8 @@
 			if not new_cluster:
 				raise
 			new_cluster.manage_acquiredPermissions([])
-			new_cluster.manage_role('View', ['Access contents information','View'])
+			new_cluster.manage_role('View',
+				['Access contents information', 'View'])
 		except:
 			transaction.abort()
 			sys.stderr.write('An error occurred while restoring the cluster \"' + id + '\"\n')
@@ -606,7 +626,7 @@
 				newsys = str(newsys)
 				stitle = i.getAttribute('title')
 				if not stitle:
-					stitle = '__luci__:csystem:' + id
+					stitle = ''
 				else:
 					stitle = str(stitle)
 
@@ -616,7 +636,8 @@
 					if not newcs:
 						raise
 					newcs.manage_acquiredPermissions([])
-					newcs.manage_role('View', ['Access contents information','View'])
+					newcs.manage_role('View',
+						['Access contents information', 'View'])
 				except:
 					transaction.abort()
 					sys.stderr.write('An error occurred while restoring the storage system \"' + newsys + '\" for cluster \"' + id + '\"\n')
@@ -655,24 +676,24 @@
 	return 0
 
 # This function's ability to work is dependent
-# upon the structure of @dict
-def dataToXML(doc, dict, tltag):
+# upon the structure of @obj_dict
+def dataToXML(doc, obj_dict, tltag):
 	node = doc.createElement(tltag)
-	for i in dict:
-		if isinstance(dict[i], types.DictType):
+	for i in obj_dict:
+		if isinstance(obj_dict[i], types.DictType):
 			if i[-4:] == 'List':
 				tagname = i
 			else:
 				tagname = tltag[:-4]
-			temp = dataToXML(doc, dict[i], tagname)
+			temp = dataToXML(doc, obj_dict[i], tagname)
 			node.appendChild(temp)
-		elif isinstance(dict[i], types.StringType) or isinstance(dict[i], types.IntType):
-			node.setAttribute(i, str(dict[i]))
-		elif isinstance(dict[i], types.ListType):
-			if len(dict[i]) < 1:
+		elif isinstance(obj_dict[i], types.StringType) or isinstance(obj_dict[i], types.IntType):
+			node.setAttribute(i, str(obj_dict[i]))
+		elif isinstance(obj_dict[i], types.ListType):
+			if len(obj_dict[i]) < 1:
 				continue
 			temp = doc.createElement(i)
-			for x in dict[i]:
+			for x in obj_dict[i]:
 				t = doc.createElement('ref')
 				t.setAttribute('name', x)
 				temp.appendChild(t.cloneNode(True))
@@ -681,23 +702,15 @@
 
 def luci_backup(argv):
 	sys.stderr = null
-	import ZODB
 	from ZODB.FileStorage import FileStorage
 	from ZODB.DB import DB
-	import OFS
 	from OFS.Application import AppInitializer
-	import OFS.Folder
 	import AccessControl
 	import AccessControl.User
-	from AccessControl.AuthEncoding import SSHADigestScheme
 	from AccessControl.SecurityManagement import newSecurityManager
 	import transaction
-	import Products.CMFCore
-	import Products.CMFCore.MemberDataTool
 	from CMFPlone.utils import getToolByName
 	import App.ImageFile
-	import Products.PluggableAuthService.plugins.ZODBUserManager
-	import BTrees.OOBTree
 	App.ImageFile.__init__ = lambda x, y: None
 	sys.stderr = orig_stderr
 
@@ -706,11 +719,6 @@
 	else:
 		dbfn = LUCI_DB_PATH
 
-	if len(argv) > 1:
-		backupfn = argv[1]
-	else:
-		backupfn = LUCI_BACKUP_PATH
-
 	try:
 		fs = FileStorage(dbfn)
 		db = DB(fs)
@@ -800,7 +808,7 @@
 				continue
 	except:
 		pass
-		
+
 	try:
 		storagedir = app.luci.systems.storage
 		clusterdir = app.luci.systems.cluster
@@ -822,7 +830,7 @@
 					systems[i[0]]['permList'] = map(lambda x: x[0], filter(lambda x: len(x) > 1 and 'View' in x[1], roles.items()))
 			else:
 				systems[i[0]]['permList'] = {}
-			
+
 	if clusterdir and len(clusterdir):
 		for i in clusterdir.objectItems():
 			cluster_name = i[0]
@@ -854,7 +862,7 @@
 	db.close()
 	fs.close()
 
-	backup = {
+	backup_data = {
 		'userList': users,
 		'systemList': systems,
 		'clusterList': clusters
@@ -863,7 +871,7 @@
 	doc = xml.dom.minidom.Document()
 	luciData = doc.createElement('luci')
 	doc.appendChild(luciData)
-	dataNode = dataToXML(doc, backup, 'backupData')
+	dataNode = dataToXML(doc, backup_data, 'backupData')
 
 	certList = doc.createElement('certificateList')
 	for i in ssl_key_data:
@@ -898,15 +906,16 @@
 
 def _execWithCaptureErrorStatus(command, argv, searchPath = 0, root = '/', stdin = 0, catchfd = 1, catcherrfd = 2, closefd = -1):
     if not os.access (root + command, os.X_OK):
-        raise RuntimeError, command + " can not be run"
+        raise RuntimeError, '%s is not executable' % command
 
     (read, write) = os.pipe()
-    (read_err,write_err) = os.pipe()
+    (read_err, write_err) = os.pipe()
 
     childpid = os.fork()
     if (not childpid):
         # child
-        if (root and root != '/'): os.chroot (root)
+        if (root and root != '/'):
+			os.chroot (root)
         if isinstance(catchfd, tuple):
             for fd in catchfd:
                 os.dup2(write, fd)
@@ -943,7 +952,7 @@
     rc_err = ""
     in_list = [read, read_err]
     while len(in_list) != 0:
-        i,o,e = select.select(in_list, [], [], 0.1)
+        i, o, e = select(in_list, [], [], 0.1)
         for fd in i:
             if fd == read:
                 s = os.read(read, 1000)
@@ -992,17 +1001,17 @@
     command = '/bin/rm'
     args = [command, '-f', SSL_PRIVKEY_PATH, SSL_PUBKEY_PATH]
     _execWithCaptureErrorStatus(command, args)
-    
+
     # /usr/bin/openssl genrsa -out /var/lib/luci/var/certs/privkey.pem 2048 > /dev/null 2>&1
     command = '/usr/bin/openssl'
     args = [command, 'genrsa', '-out', SSL_PRIVKEY_PATH, '2048']
     _execWithCaptureErrorStatus(command, args)
-    
+
     # /usr/bin/openssl req -new -x509 -key /var/lib/luci/var/certs/privkey.pem -out /var/lib/luci/var/certs/cacert.pem -days 1825 -config /var/lib/luci/var/certs/cacert.config
     command = '/usr/bin/openssl'
     args = [command, 'req', '-new', '-x509', '-key', SSL_PRIVKEY_PATH, '-out', SSL_PUBKEY_PATH, '-days', '1825', '-config', SSL_KEYCONFIG_PATH]
     _execWithCaptureErrorStatus(command, args)
-    
+
     # take ownership and restrict access
     try:
 	    uid, gid = get_luci_uid_gid()
@@ -1015,7 +1024,7 @@
 	    args = [command, '-f', SSL_PRIVKEY_PATH, SSL_PUBKEY_PATH]
 	    _execWithCaptureErrorStatus(command, args)
 	    return False
-    
+
     return True
 
 
@@ -1037,37 +1046,37 @@
 		sys.stderr.write('If you want to reset admin password, execute\n')
 		sys.stderr.write('\t' + argv[0] + ' password\n')
 		sys.exit(1)
-	
+
 	print 'Initializing the Luci server\n'
-	
+
 	print '\nCreating the \'admin\' user\n'
-	password = read_passwd('Enter password: ', 'Confirm password: ')
+	new_password = read_passwd('Enter password: ', 'Confirm password: ')
 	print '\nPlease wait...'
-	if not set_zope_passwd('admin', password):
+	if not set_zope_passwd('admin', new_password):
 		restore_luci_db_fsattr()
 		print 'The admin password has been successfully set.'
 	else:
 		sys.stderr.write('Unable to set the admin user\'s password.\n')
 		sys.exit(1)
-	
+
 	print 'Generating SSL certificates...'
 	if generate_ssl_certs() == False:
 		sys.stderr.write('failed. exiting ...\n')
 		sys.exit(1)
-	
+
 	print 'Luci server has been successfully initialized'
 	restart_message()
-	
+
 	return
 
 
 def password(argv):
-	password = None
+	passwd = None
 	if '--random' in argv:
 		print 'Resetting the admin user\'s password to some random value\n'
 		try:
 			rand = open('/dev/urandom', 'r')
-			password = rand.read(16)
+			passwd = rand.read(16)
 			rand.close()
 		except:
 			sys.stderr.write('Unable to read from /dev/urandom\n')
@@ -1078,12 +1087,12 @@
 			sys.stderr.write('To initialize it, execute\n')
 			sys.stderr.write('\t' + argv[0] + ' init\n')
 			sys.exit(1)
-		
+
 		print 'Resetting the admin user\'s password\n'
-		password = read_passwd('Enter new password: ', 'Confirm password: ')
-		
+		passwd = read_passwd('Enter new password: ', 'Confirm password: ')
+
 	print '\nPlease wait...'
-	if not set_zope_passwd('admin', password):
+	if not set_zope_passwd('admin', passwd):
 		print 'The admin password has been successfully reset.'
 	else:
 		sys.stderr.write('Unable to set the admin user\'s password.\n')
@@ -1118,7 +1127,7 @@
 		# The LUCI_BACKUP_DIR must not be world-writable
 		# as the code below is obviously not safe against
 		# races.
-		stat = os.stat(LUCI_BACKUP_PATH)
+		os.stat(LUCI_BACKUP_PATH)
 		trynum = 1
 		basename = '/luci_backup-'
 
@@ -1128,7 +1137,7 @@
 				try:
 					os.rename(LUCI_BACKUP_PATH, oldbackup)
 				except:
-					sys.stderr.stderr('Unable to rename the existing backup file.\n')
+					sys.stderr.write('Unable to rename the existing backup file.\n')
 					sys.stderr.write('The Luci backup failed.\n')
 				break
 			trynum += 1
@@ -1162,8 +1171,10 @@
 def restore(argv):
 	print 'Restoring the Luci server...'
 
-	try: os.umask(077)
-	except: pass
+	try:
+		os.umask(077)
+	except:
+		pass
 
 	if luci_restore(argv[2:]):
 		ret = False
@@ -1197,7 +1208,7 @@
 def test_luci_installation():
    # perform basic checks
    # TODO: do more tests
-   
+
    # check if luci user and group are present on the system
    try:
 	   get_luci_uid_gid()
@@ -1206,7 +1217,7 @@
 	   sys.stderr.write('Missing luci\'s system account and group')
 	   sys.stderr.write('Recommended action: reinstall luci\n\n')
 	   sys.exit(3)
-   
+
    return True
 
 
@@ -1214,16 +1225,9 @@
     if len(argv) < 2:
         luci_help(argv)
         sys.exit(1)
-    
-    # only root should run this
-    if os.getuid() != 0:
-        sys.stderr.write('Only \'root\' can run ' + argv[0] + '\n')
-        sys.stderr.write('Try again with root privileges.\n')
-        sys.exit(2)
 
-    # test if luci installation is OK
     test_luci_installation()
-    
+
     if 'init' in argv:
         init(argv)
     elif 'backup' in argv:



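A note on the pattern: the rewritten set_default_passwd_reset_flag() above is the general shape for dropping a state file owned by an unprivileged service account. A minimal Python 2 sketch of that shape, assuming a 'luci' account and an illustrative flag path (the committed LUCI_ADMIN_SET_PATH constant is defined earlier in luci_admin and is not shown in this diff; the simplified errno handling here is also an assumption):

import os
import pwd
import sys

# Illustrative path only; the real constant lives near the top of luci_admin.
ADMIN_SET_FLAG = '/var/lib/luci/var/admin_password_set'

def write_reset_flag():
	# Resolve the unprivileged owner first and fail early if it is missing.
	try:
		ent = pwd.getpwnam('luci')
	except KeyError:
		sys.stderr.write('Unable to find the luci user\'s UID\n')
		return False

	# Report write failures instead of letting them escape as tracebacks.
	try:
		open(ADMIN_SET_FLAG, 'w').write('True')
	except IOError, e:
		sys.stderr.write('Unable to open "%s" for writing: %s\n'
			% (ADMIN_SET_FLAG, str(e)))
		return False

	# Hand the flag to the service account; readable by owner and group only.
	os.chown(ADMIN_SET_FLAG, ent.pw_uid, ent.pw_gid)
	os.chmod(ADMIN_SET_FLAG, 0640)
	return True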
^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2007-08-20 16:23 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2007-08-20 16:23 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2007-08-20 16:23:28

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	ricci/modules/cluster: Clusvcadm.cpp 

Log message:
	fix bz253341: failure to start cluster service which had been modified for correction

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.23&r2=1.18.2.24
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.56&r2=1.45.2.57
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/Clusvcadm.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.7.2.5&r2=1.7.2.6

--- conga/clustermon.spec.in.in	2007/08/13 19:06:01	1.18.2.23
+++ conga/clustermon.spec.in.in	2007/08/20 16:23:27	1.18.2.24
@@ -193,6 +193,10 @@
 
 
 %changelog
+* Mon Aug 20 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-5
+- Fixed bz253341 (failure to start cluster service which had been modified for correction)
+- Resolves: bz253341
+
 * Wed Aug 08 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-4
 - Fixed bz230451 (fence_xvm.key file is not automatically created. Should have a least a default)
 - Resolves: bz230451
--- conga/conga.spec.in.in	2007/08/17 20:26:31	1.45.2.56
+++ conga/conga.spec.in.in	2007/08/20 16:23:27	1.45.2.57
@@ -310,8 +310,10 @@
 
 ###  changelog ###
 %changelog
-* Fri Aug 17 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-5
-* Fixed bz249291 (delete node task fails to do all items listed in the help document)
+* Mon Aug 20 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-5
+- Fixed bz249291 (delete node task fails to do all items listed in the help document)
+- Fixed bz253341 (failure to start cluster service which had been modified for correction)
+- Related: bz253341
 - Resolves: bz249291
 
 * Mon Aug 13 2007 Ryan McCabe <rmccabe@redhat.com> 0.10.0-4
--- conga/ricci/modules/cluster/Clusvcadm.cpp	2007/03/12 03:45:57	1.7.2.5
+++ conga/ricci/modules/cluster/Clusvcadm.cpp	2007/08/20 16:23:28	1.7.2.6
@@ -1,5 +1,5 @@
 /*
-  Copyright Red Hat, Inc. 2005
+  Copyright Red Hat, Inc. 2005-2007
 
   This program is free software; you can redistribute it and/or modify it
   under the terms of the GNU General Public License as published by the
@@ -89,7 +89,7 @@
     if (*iter == nodename)
       node_found = true;
   if (!node_found && nodename.size())
-    throw String("node unable to run services");
+    throw String("Node " + nodename + " is unable to run cluster services. Check whether the rgmanager service is running");
   
   // start
   for (list<ServiceStatus>::const_iterator iter = services.begin();
@@ -100,10 +100,20 @@
 
       if (iter->status == ServiceStatus::RG_STATE_MIGRATE)
          throw String(servicename + " is in the process of being migrated");
-		
-      if (iter->status == ServiceStatus::RG_STATE_STOPPED ||
+
+      /*
+      ** Failed services must be disabled before they can be
+      ** started again.
+      */
+      if (iter->status == ServiceStatus::RG_STATE_FAILED) {
+        try {
+          Clusvcadm::stop(servicename);
+        } catch ( ... ) {
+          throw String("Unable to disable failed service " + servicename + " before starting it");
+        }
+        flag = "-e";
+      } else if (iter->status == ServiceStatus::RG_STATE_STOPPED ||
 	  iter->status == ServiceStatus::RG_STATE_STOPPING ||
-	  iter->status == ServiceStatus::RG_STATE_FAILED ||
 	  iter->status == ServiceStatus::RG_STATE_ERROR ||
 	  iter->status == ServiceStatus::RG_STATE_DISABLED)
 	flag = "-e";
@@ -127,12 +137,12 @@
 	if (utils::execute(CLUSVCADM_TOOL_PATH, args, out, err, status, false))
 	  throw command_not_found_error_msg(CLUSVCADM_TOOL_PATH);
 	if (status != 0)
-	  throw String("clusvcadm failed");
+	  throw String("clusvcadm failed to start " + servicename);
       }
       return;
     }
   
-  throw String("no such service");
+  throw String(servicename + ": no such cluster service");
 }
 
 void 
@@ -150,7 +160,7 @@
     if (*iter == nodename)
       node_found = true;
   if (!node_found && nodename.size())
-    throw String("node unable to run services");
+    throw String("Node " + nodename + " is unable to run cluster services. Check whether the rgmanager service is running");
   
   // start
   for (list<ServiceStatus>::const_iterator iter = services.begin();
@@ -162,9 +172,16 @@
       String flag;
       if (iter->status == ServiceStatus::RG_STATE_MIGRATE)
          throw String(servicename + " is already in the process of being migrated");
-      if (iter->status == ServiceStatus::RG_STATE_STOPPED ||
+
+      if (iter->status == ServiceStatus::RG_STATE_FAILED) {
+        try {
+          Clusvcadm::stop(servicename);
+        } catch ( ... ) {
+          throw String("Unable to disable failed service " + servicename + " before starting it");
+        }
+        flag = "-e";
+      } else if (iter->status == ServiceStatus::RG_STATE_STOPPED ||
 	  iter->status == ServiceStatus::RG_STATE_STOPPING ||
-	  iter->status == ServiceStatus::RG_STATE_FAILED ||
 	  iter->status == ServiceStatus::RG_STATE_ERROR ||
 	  iter->status == ServiceStatus::RG_STATE_DISABLED)
 	flag = "-e";
@@ -185,13 +202,13 @@
 	if (utils::execute(CLUSVCADM_TOOL_PATH, args, out, err, status, false))
 	  throw command_not_found_error_msg(CLUSVCADM_TOOL_PATH);
 	if (status != 0)
-	  throw String("clusvcadm failed");
+	  throw String("clusvcadm failed to migrate " + servicename);
       }
       return;
     }
   }
   
-  throw String("no such virtual service");
+  throw String(servicename + ": no such virtual machine service");
 }
 
 void 
@@ -206,6 +223,7 @@
        iter++)
     if (iter->name == servicename) {
       if (iter->status == ServiceStatus::RG_STATE_STARTING ||
+          iter->status == ServiceStatus::RG_STATE_FAILED   ||
 	  iter->status == ServiceStatus::RG_STATE_STARTED) {
 	String out, err;
 	int status;
@@ -218,12 +236,12 @@
 	if (utils::execute(CLUSVCADM_TOOL_PATH, args, out, err, status, false))
 	  throw command_not_found_error_msg(CLUSVCADM_TOOL_PATH);
 	if (status != 0)
-	  throw String("clusvcadm failed");
+	  throw String("clusvcadm failed to stop " + servicename);
       }
       return;
     }
   
-  throw String("no such service");
+  throw String(servicename + ": no such cluster service");
 }
 
 void 
@@ -243,9 +261,15 @@
       if (iter->status == ServiceStatus::RG_STATE_STARTING)
          throw String(servicename + " is in the process of being started");
 
-      if (iter->status == ServiceStatus::RG_STATE_STOPPED ||
+      if (iter->status == ServiceStatus::RG_STATE_FAILED) {
+        try {
+          Clusvcadm::stop(servicename);
+        } catch ( ... ) {
+          throw String("Unable to disable failed service " + servicename + " before starting it");
+        }
+        flag = "-e";
+      } else if (iter->status == ServiceStatus::RG_STATE_STOPPED ||
 	  iter->status == ServiceStatus::RG_STATE_STOPPING ||
-	  iter->status == ServiceStatus::RG_STATE_FAILED ||
 	  iter->status == ServiceStatus::RG_STATE_ERROR ||
 	  iter->status == ServiceStatus::RG_STATE_DISABLED)
 	flag = "-e";
@@ -264,12 +288,12 @@
 	if (utils::execute(CLUSVCADM_TOOL_PATH, args, out, err, status, false))
 	  throw command_not_found_error_msg(CLUSVCADM_TOOL_PATH);
 	if (status != 0)
-	  throw String("clusvcadm failed");
+	  throw String("clusvcadm failed to restart cluster service " + servicename);
       }
       return;
     }
   
-  throw String("no such service");
+  throw String(servicename + ": no such cluster service");
 }
 
 



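The rgmanager detail behind bz253341: clusvcadm refuses to enable a service that is in the FAILED state until it has been disabled, which is exactly the disable-then-enable step the Clusvcadm.cpp hunks above add. A rough Python 2 equivalent of the flow, for illustration only (-d and -e are clusvcadm's real disable/enable switches; the wrapper functions and install path are assumptions):

import subprocess

CLUSVCADM = '/usr/sbin/clusvcadm'  # assumed install path

def clusvcadm(flag, service):
	# Returns clusvcadm's exit status; 0 means success.
	return subprocess.call([CLUSVCADM, flag, service])

def start_service(service, state):
	if state == 'failed':
		# Failed services must be disabled before they can be started again.
		if clusvcadm('-d', service) != 0:
			raise RuntimeError('Unable to disable failed service %s before starting it'
				% service)
	if clusvcadm('-e', service) != 0:
		raise RuntimeError('clusvcadm failed to start %s' % service)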
^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-01-29 22:02 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-01-29 22:02 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2008-01-29 22:02:13

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 
	ricci/modules/cluster: ClusterStatus.cpp 
	ricci/modules/log: LogParser.cpp 
	ricci/modules/rpm: PackageHandler.cpp 
	ricci/modules/service: ServiceManager.cpp 

Log message:
	Fix 430737

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.30&r2=1.18.2.31
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.65&r2=1.45.2.66
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.30&r2=1.21.2.31
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/ClusterStatus.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.15.2.4&r2=1.15.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/log/LogParser.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.6.2.4&r2=1.6.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/rpm/PackageHandler.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.9.2.6&r2=1.9.2.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/ServiceManager.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.5.2.4&r2=1.5.2.5

--- conga/clustermon.spec.in.in	2008/01/23 04:56:16	1.18.2.30
+++ conga/clustermon.spec.in.in	2008/01/29 22:02:12	1.18.2.31
@@ -193,6 +193,10 @@
 
 
 %changelog
+* Mon Jan 28 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-2
+- Run the cmirror init script when starting cluster nodes that use shared storage.
+- Related: bz430737
+
 * Mon Jan 21 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-1
 - Fixed bz317541 (Conga displays quorum status incorrectly when qdisk is used)
 
--- conga/conga.spec.in.in	2008/01/25 17:18:37	1.45.2.65
+++ conga/conga.spec.in.in	2008/01/29 22:02:12	1.45.2.66
@@ -292,6 +292,9 @@
 
 ###  changelog ###
 %changelog
+* Mon Jan 28 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-2
+- Fix bz430737 (Conga should install the 'cmirror' package when clustered storage is requested)
+
 * Fri Jan 25 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-1
 - Fix a bug that prevented the fix for bz230462 from working
 
--- conga/make/version.in	2008/01/25 17:23:14	1.21.2.30
+++ conga/make/version.in	2008/01/29 22:02:12	1.21.2.31
@@ -1,2 +1,2 @@
 VERSION=0.12.0
-RELEASE=1
+RELEASE=2
--- conga/ricci/modules/cluster/ClusterStatus.cpp	2008/01/17 17:38:37	1.15.2.4
+++ conga/ricci/modules/cluster/ClusterStatus.cpp	2008/01/29 22:02:12	1.15.2.5
@@ -263,6 +263,7 @@
 
 		if (use_qdisk)
 			run_initd("qdiskd", true, false);
+		run_initd("cmirror", true, false);
 		run_initd("clvmd", true, false);
 		run_initd("gfs", true, false);
 		run_initd("gfs2", true, false);
@@ -275,6 +276,7 @@
 				run_chkconfig("qdiskd", true);
 			else
 				run_chkconfig("qdiskd", false);
+			run_chkconfig("cmirror", true);
 			run_chkconfig("clvmd", true);
 			run_chkconfig("gfs", true);
 			run_chkconfig("gfs2", true);
--- conga/ricci/modules/log/LogParser.cpp	2008/01/17 17:38:38	1.6.2.4
+++ conga/ricci/modules/log/LogParser.cpp	2008/01/29 22:02:13	1.6.2.5
@@ -47,6 +47,7 @@
 	"lock_gulmd_LTPX",
 	"cman",
 	"cman_tool",
+	"cmirror",
 	"ccs",
 	"ccs_tool",
 	"ccsd",
@@ -82,6 +83,7 @@
 	"lvm",
 	"clvm",
 	"clvmd",
+	"cmirror",
 	"end_request",
 	"buffer",
 	"scsi",
--- conga/ricci/modules/rpm/PackageHandler.cpp	2008/01/25 17:19:05	1.9.2.6
+++ conga/ricci/modules/rpm/PackageHandler.cpp	2008/01/29 22:02:13	1.9.2.7
@@ -507,10 +507,14 @@
 		set.packages.push_back("gfs2-utils");
 		if (RHEL5) {
 			set.packages.push_back("gfs-utils");
-			if (kernel.find("xen") == kernel.npos)
+			set.packages.push_back("cmirror");
+			if (kernel.find("xen") == kernel.npos) {
 				set.packages.push_back("kmod-gfs");
-			else
+				set.packages.push_back("kmod-gfs2");
+			} else {
 				set.packages.push_back("kmod-gfs-xen");
+				set.packages.push_back("kmod-gfs2-xen");
+			}
 		}
 	}
 
--- conga/ricci/modules/service/ServiceManager.cpp	2008/01/25 17:19:05	1.5.2.4
+++ conga/ricci/modules/service/ServiceManager.cpp	2008/01/29 22:02:13	1.5.2.5
@@ -522,6 +522,8 @@
 	} else if (RHEL5 || FC6) {
 		descr = "Shared Storage: clvmd, gfs, gfs2";
 		servs.push_back("clvmd");
+		if (RHEL5)
+			servs.push_back("cmirror");
 		servs.push_back("gfs");
 		servs.push_back("gfs2");
 	}



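Two of the decision points in this change restate compactly. A Python sketch, with the package and service names taken from the hunks above and the function shapes invented for illustration:

def shared_storage_services(is_rhel5):
	# Mirrors ServiceManager.cpp: cmirror is only added on RHEL5.
	servs = ['clvmd']
	if is_rhel5:
		servs.append('cmirror')
	servs.extend(['gfs', 'gfs2'])
	return servs

def gfs_kernel_modules(kernel_release):
	# Mirrors PackageHandler.cpp: xen kernels need the -xen kmod builds.
	if 'xen' in kernel_release:
		return ['kmod-gfs-xen', 'kmod-gfs2-xen']
	return ['kmod-gfs', 'kmod-gfs2']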
^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-02-12 17:40 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-02-12 17:40 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2008-02-12 17:40:39

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 
	ricci/modules/cluster: ClusterStatus.cpp 
	ricci/modules/log: LogParser.cpp 
	ricci/modules/rpm: PackageHandler.cpp 
	ricci/modules/service: ServiceManager.cpp 

Log message:
	Fix 432533 and 432534

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.31&r2=1.18.2.32
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.68&r2=1.45.2.69
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.33&r2=1.21.2.34
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/ClusterStatus.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.15.2.5&r2=1.15.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/log/LogParser.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.6.2.5&r2=1.6.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/rpm/PackageHandler.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.9.2.7&r2=1.9.2.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/ServiceManager.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.5.2.5&r2=1.5.2.6

--- conga/clustermon.spec.in.in	2008/01/29 22:02:12	1.18.2.31
+++ conga/clustermon.spec.in.in	2008/02/12 17:40:38	1.18.2.32
@@ -193,6 +193,9 @@
 
 
 %changelog
+* Tue Feb 12 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-5
+- Fix bz432534 (clustermon should not attempt to start or stop the cmirror service)
+
 * Mon Jan 28 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-2
 - Run the cmirror init script when starting cluster nodes that use shared storage.
 - Related: bz430737
--- conga/conga.spec.in.in	2008/02/08 21:56:33	1.45.2.68
+++ conga/conga.spec.in.in	2008/02/12 17:40:38	1.45.2.69
@@ -292,6 +292,9 @@
 
 ###  changelog ###
 %changelog
+* Tue Feb 12 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-5
+- Fix bz432533 (Do not attempt to install the cmirror package when shared storage is requested)
+
 * Fri Feb 08 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-4
 - Fix bz429151 ([RFE] Luci: increase min password length to 6 characters)
 - Fix bz429152 ([RFE] add inactivity timeout)
--- conga/make/version.in	2008/02/08 21:59:48	1.21.2.33
+++ conga/make/version.in	2008/02/12 17:40:38	1.21.2.34
@@ -1,2 +1,2 @@
 VERSION=0.12.0
-RELEASE=4
+RELEASE=5
--- conga/ricci/modules/cluster/ClusterStatus.cpp	2008/01/29 22:02:12	1.15.2.5
+++ conga/ricci/modules/cluster/ClusterStatus.cpp	2008/02/12 17:40:38	1.15.2.6
@@ -263,7 +263,6 @@
 
 		if (use_qdisk)
 			run_initd("qdiskd", true, false);
-		run_initd("cmirror", true, false);
 		run_initd("clvmd", true, false);
 		run_initd("gfs", true, false);
 		run_initd("gfs2", true, false);
@@ -276,7 +275,6 @@
 				run_chkconfig("qdiskd", true);
 			else
 				run_chkconfig("qdiskd", false);
-			run_chkconfig("cmirror", true);
 			run_chkconfig("clvmd", true);
 			run_chkconfig("gfs", true);
 			run_chkconfig("gfs2", true);
--- conga/ricci/modules/log/LogParser.cpp	2008/01/29 22:02:13	1.6.2.5
+++ conga/ricci/modules/log/LogParser.cpp	2008/02/12 17:40:38	1.6.2.6
@@ -47,7 +47,6 @@
 	"lock_gulmd_LTPX",
 	"cman",
 	"cman_tool",
-	"cmirror",
 	"ccs",
 	"ccs_tool",
 	"ccsd",
@@ -83,7 +82,6 @@
 	"lvm",
 	"clvm",
 	"clvmd",
-	"cmirror",
 	"end_request",
 	"buffer",
 	"scsi",
--- conga/ricci/modules/rpm/PackageHandler.cpp	2008/01/29 22:02:13	1.9.2.7
+++ conga/ricci/modules/rpm/PackageHandler.cpp	2008/02/12 17:40:39	1.9.2.8
@@ -507,7 +507,6 @@
 		set.packages.push_back("gfs2-utils");
 		if (RHEL5) {
 			set.packages.push_back("gfs-utils");
-			set.packages.push_back("cmirror");
 			if (kernel.find("xen") == kernel.npos) {
 				set.packages.push_back("kmod-gfs");
 				set.packages.push_back("kmod-gfs2");
--- conga/ricci/modules/service/ServiceManager.cpp	2008/01/29 22:02:13	1.5.2.5
+++ conga/ricci/modules/service/ServiceManager.cpp	2008/02/12 17:40:39	1.5.2.6
@@ -522,8 +522,6 @@
 	} else if (RHEL5 || FC6) {
 		descr = "Shared Storage: clvmd, gfs, gfs2";
 		servs.push_back("clvmd");
-		if (RHEL5)
-			servs.push_back("cmirror");
 		servs.push_back("gfs");
 		servs.push_back("gfs2");
 	}



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-04-07 20:11 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-04-07 20:11 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2008-04-07 20:11:44

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	- Update changelogs
	- Bump version

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.25.2.10&r2=1.25.2.11
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.67.2.21&r2=1.67.2.22
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.28.2.9&r2=1.28.2.10

--- conga/clustermon.spec.in.in	2008/01/02 22:45:31	1.25.2.10
+++ conga/clustermon.spec.in.in	2008/04/07 20:11:44	1.25.2.11
@@ -195,6 +195,10 @@
 
 
 %changelog
+* Tue Mar 25 2008 Ryan McCabe <rmccabe@redhat.com> 0.11.1-1
+- Fix bz441376 (modclusterd segfaults during startup)
+- Backport fixes from RHEL5
+
 * Wed Jan 02 2008 Ryan McCabe <rmccabe@redhat.com> 0.11.0-4
 - Fix bz426189 (Conga attempts to free a null pointer)
 - Resolves: bz426189
--- conga/conga.spec.in.in	2008/03/25 17:12:18	1.67.2.21
+++ conga/conga.spec.in.in	2008/04/07 20:11:44	1.67.2.22
@@ -319,6 +319,11 @@
 
 %changelog
 * Tue Mar 25 2008 Ryan McCabe <rmccabe@redhat.com> 0.11.1-1
+- Fix bz349561 (Add Conga support for Oracle Resource Agent)
+- Fix bz369131 (RFE: should only have to enter a system's password once)
+- Fix bz333181 (Add option to not fail-back service)
+- Fix bz295781 (RFE: add ability to set self_fence attribute for clusterfs resources)
+- Fix bz253904 (Quorum disk page: Interval + tko should be together)
 - Backport fixes from RHEL5
 
 * Fri Oct 19 2007 Ryan McCabe <rmccabe@redhat.com> 0.11.0-3
--- conga/make/version.in	2008/01/02 22:45:33	1.28.2.9
+++ conga/make/version.in	2008/04/07 20:11:44	1.28.2.10
@@ -1,6 +1,2 @@
-VERSION=0.11.0
-RELEASE=4
-# Remove "_UNRELEASED" at release time.
-# Put release num at the beggining, 
-# so that after it gets released, it has 
-# seniority over UNRELEASED one
+VERSION=0.11.1
+RELEASE=1



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-04-11  6:48 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-04-11  6:48 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2008-04-11 06:48:11

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/cluster   : fence-macros 
	luci/site/luci/Extensions: StorageReport.py 

Log message:
	Fix bz441574 ("nodename" field for fence_scsi disabled when adding a new fence device/instance)
	
	Sync up spec files' %pre and %post sections with RHEL5.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.25.2.11&r2=1.25.2.12
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.67.2.22&r2=1.67.2.23
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/fence-macros.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.2.4.1&r2=1.2.4.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/StorageReport.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.22.2.4&r2=1.22.2.5

--- conga/clustermon.spec.in.in	2008/04/07 20:11:44	1.25.2.11
+++ conga/clustermon.spec.in.in	2008/04/11 06:48:09	1.25.2.12
@@ -1,20 +1,18 @@
-###############################################################################
-###############################################################################
-##
-##  Copyright (C) 2006-2007 Red Hat, Inc.  All rights reserved.
-##
-##  This copyrighted material is made available to anyone wishing to use,
-##  modify, copy, or redistribute it subject to the terms and conditions
-##  of the GNU General Public License v.2.
-##
-###############################################################################
+##############################################################################
+#
+# Copyright (C) 2006-2008 Red Hat, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License version 2.
+#
 ###############################################################################
 
 
 %define PEGASUS_PROVIDERS_DIR %{_libdir}/Pegasus/providers
 
 
-############  SRPM  ###################
+############ SRPM ###################
 
 
 Name: clustermon
@@ -60,7 +58,7 @@
 
 
 
-###  cluster module  ###
+### cluster module ###
 
 
 %package -n modcluster
@@ -91,31 +89,32 @@
 
 %post -n modcluster
 /sbin/chkconfig --add modclusterd
-DBUS_PID=`cat /var/run/messagebus.pid 2> /dev/null`
-/bin/kill -s SIGHUP $DBUS_PID > /dev/null 2>&1
+DBUS_PID=`cat /var/run/messagebus.pid 2>/dev/null`
+/bin/kill -s SIGHUP $DBUS_PID >&/dev/null
 # It's ok if this fails (it will fail when oddjob is not running).
-/sbin/service oddjobd reload > /dev/null 2>&1 || true
+/sbin/service oddjobd reload >&/dev/null
+exit 0
 
 %preun -n modcluster
 if [ "$1" == "0" ]; then
-	/sbin/service modclusterd stop > /dev/null 2>&1
+	/sbin/service modclusterd stop >&/dev/null
 	/sbin/chkconfig --del modclusterd
 fi
+exit 0
 
 %postun -n modcluster
 if [ "$1" == "0" ]; then
 	DBUS_PID=`cat /var/run/messagebus.pid 2> /dev/null`
-	/bin/kill -s SIGHUP $DBUS_PID > /dev/null 2>&1
-	/sbin/service oddjobd reload > /dev/null 2>&1
+	/bin/kill -s SIGHUP $DBUS_PID >&/dev/null
+	/sbin/service oddjobd reload >&/dev/null
 fi
 if [ "$1" == "1" ]; then
-	/sbin/service modclusterd condrestart > /dev/null 2>&1
+	/sbin/service modclusterd condrestart >&/dev/null
 fi
+exit 0
 
 
-
-
-###  cluster-snmp  ###
+### cluster-snmp ###
 
 
 %package -n cluster-snmp
@@ -140,20 +139,19 @@
 			%{_docdir}/cluster-snmp-%{version}/
 
 %post -n cluster-snmp
-/sbin/service snmpd condrestart > /dev/null 2>&1 || true
+/sbin/service snmpd condrestart >&/dev/null
+exit 0
 
 %postun -n cluster-snmp
 # don't restart snmpd twice on upgrades
 if [ "$1" == "0" ]; then
-	/sbin/service snmpd condrestart > /dev/null 2>&1
+	/sbin/service snmpd condrestart >&/dev/null
 fi
+exit 0
 
 
 
-
-
-###  cluster-cim  ###
-
+### cluster-cim ###
 
 %package -n cluster-cim
 Group: System Environment/Base
@@ -176,12 +174,13 @@
 
 %post -n cluster-cim
 # pegasus might not be running, don't fail %post
-/sbin/service tog-pegasus condrestart > /dev/null 2>&1 || true
+/sbin/service tog-pegasus condrestart >&/dev/null
+exit 0
 
 %postun -n cluster-cim
 # don't restart pegasus twice on upgrades
 if [ "$1" == "0" ]; then
-	/sbin/service tog-pegasus condrestart > /dev/null 2>&1
+	/sbin/service tog-pegasus condrestart >&/dev/null
 fi
 # pegasus might not be running, don't fail %postun
 exit 0
--- conga/conga.spec.in.in	2008/04/07 20:11:44	1.67.2.22
+++ conga/conga.spec.in.in	2008/04/11 06:48:10	1.67.2.23
@@ -1,22 +1,18 @@
 ###############################################################################
-###############################################################################
-##
-##  Copyright (C) 2006-2007 Red Hat, Inc.  All rights reserved.
-##
-##  This copyrighted material is made available to anyone wishing to use,
-##  modify, copy, or redistribute it subject to the terms and conditions
-##  of the GNU General Public License v.2.
-##
-###############################################################################
+#
+# Copyright (C) 2006-2008 Red Hat, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License version 2.
+#
 ###############################################################################
 
 
 %define include_zope_and_plone	@@INCLUDE_ZOPE_AND_PLONE@@
 
 
-
-############  SRPM  ###################
-
+############ SRPM ###################
 
 Name: conga
 Version: @@VERS@@
@@ -42,13 +38,11 @@
 BuildRequires: cyrus-sasl-devel >= 2.1
 BuildRequires: openssl-devel dbus-devel pkgconfig file
 
-
 %description
 Conga is a project developing a management system for remote stations.
 It consists of luci, an https frontend, and ricci, a secure daemon that
 dispatches incoming messages to underlying management modules.
 
-
 %prep
 %setup -q
 %if "%{include_zope_and_plone}" == "yes"
@@ -81,13 +75,7 @@
 rm -rf $RPM_BUILD_ROOT
 
 
-
-
-
-
-
-#######  luci  #######
-
+####### luci #######
 
 %package -n luci
 
@@ -112,14 +100,12 @@
 Requires(post): chkconfig initscripts
 Requires(preun): chkconfig initscripts
 
-
 %description -n luci
 Conga is a project developing a management system for remote stations.
 It consists of luci, an https frontend, and ricci, a secure daemon that
 dispatches incoming messages to underlying management modules.
 
-This package contains Luci website.
-
+This package contains the luci server.
 
 %files -n luci
 %verify(not size md5 mtime) /var/lib/luci/var/Data.fs
@@ -138,12 +124,12 @@
 %endif
 
 %pre -n luci
-grep ^luci\:x /etc/group >&/dev/null
-if [ $? -ne 0 ]; then 
+groupmod luci >&/dev/null
+if [ $? -eq 6 ]; then 
 	/usr/sbin/groupadd -r -f luci >&/dev/null
 fi
 
-grep ^luci\:x /etc/passwd >&/dev/null
+id luci >&/dev/null
 if [ $? -ne 0 ]; then
 	/usr/sbin/useradd -r -M -s /sbin/nologin -d /var/lib/luci -g luci luci >&/dev/null
 fi
@@ -159,7 +145,7 @@
 	fi
 	/usr/sbin/luci_admin backup >&/dev/null
 	if [ $LUCI_RUNNING -eq 0 ]; then
-		/sbin/service luci start >& /dev/null
+		/sbin/service luci start >&/dev/null
 	fi
 fi
 exit 0
@@ -206,10 +192,8 @@
 exit 0
 
 
-
 ### ricci daemon & basic modules ###
 
-
 %package -n ricci
 
 Group: System Environment/Base
@@ -232,7 +216,6 @@
 
 # modlog
 
-
 Requires(pre): grep shadow-utils
 Requires(post): chkconfig initscripts util-linux
 Requires(preun): chkconfig initscripts
@@ -246,7 +229,6 @@
 This package contains listening daemon (dispatcher), as well as
 reboot, rpm, storage, service and log management modules.
 
-
 %files -n ricci
 %defattr(-,root,root)
 # ricci
@@ -318,7 +300,8 @@
 
 
 %changelog
-* Tue Mar 25 2008 Ryan McCabe <rmccabe@redhat.com> 0.11.1-1
+* Tue Mar 25 2008 Ryan McCabe <rmccabe@redhat.com> 0.11.1-2
+- Fix bz441574 ("nodename" field for fence_scsi disabled when adding a new fence device/instance)
 - Fix bz349561 (Add Conga support for Oracle Resource Agent)
 - Fix bz369131 (RFE: should only have to enter a system's password once)
 - Fix bz333181 (Add option to not fail-back service)
--- conga/luci/cluster/fence-macros	2008/03/25 01:27:10	1.2.4.1
+++ conga/luci/cluster/fence-macros	2008/04/11 06:48:10	1.2.4.2
@@ -1849,7 +1849,7 @@
 			<tr>
 				<td>Node name</td>
 				<td>
-					<input type="text" name="node" disabled="disabled"
+					<input type="text" name="node"
 						tal:attributes="value request/node | nothing" />
 				</td>
 			</tr>
--- conga/luci/site/luci/Extensions/StorageReport.py	2008/03/25 01:27:12	1.22.2.4
+++ conga/luci/site/luci/Extensions/StorageReport.py	2008/04/11 06:48:11	1.22.2.5
@@ -1,4 +1,4 @@
-# Copyright (C) 2006-2007 Red Hat, Inc.
+# Copyright (C) 2006-2008 Red Hat, Inc.
 #
 # This program is free software; you can redistribute
 # it and/or modify it under the terms of version 2 of the



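The %pre rework above drops the grep probes against /etc/passwd and /etc/group in favor of the real account tools: groupmod exits with status 6 when the named group does not exist, and id fails when the user does not. The same idempotent ensure-account logic in Python 2, as a sketch only (the command paths and flags are the standard shadow-utils ones; the wrapper is illustrative):

import subprocess

def ensure_luci_account():
	# groupadd -f succeeds even if the group already exists; -r = system group.
	subprocess.call(['/usr/sbin/groupadd', '-r', '-f', 'luci'])
	# Create the user only when `id` cannot resolve it.
	if subprocess.call(['/usr/bin/id', 'luci']) != 0:
		subprocess.call(['/usr/sbin/useradd', '-r', '-M', '-s', '/sbin/nologin',
				'-d', '/var/lib/luci', '-g', 'luci', 'luci'])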
^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-04-11  6:54 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-04-11  6:54 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2008-04-11 06:54:44

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 

Log message:
	Update changelogs and bump the version

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.36&r2=1.18.2.37
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.76&r2=1.45.2.77
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.36&r2=1.21.2.37

--- conga/clustermon.spec.in.in	2008/03/31 17:23:45	1.18.2.36
+++ conga/clustermon.spec.in.in	2008/04/11 06:54:43	1.18.2.37
@@ -212,6 +212,9 @@
 
 
 %changelog
+* Thu Apr 10 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-8
+- Fix bz441947 (cluster-snmp dlopen failure)
+
 * Thu Mar 27 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-7
 - Fix bz439186
 
--- conga/conga.spec.in.in	2008/04/11 06:50:32	1.45.2.76
+++ conga/conga.spec.in.in	2008/04/11 06:54:43	1.45.2.77
@@ -291,7 +291,7 @@
 
 ###  changelog ###
 %changelog
-* Thu Apr 10 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-7
+* Thu Apr 10 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-8
 - Fix bz441573 ("nodename" field for fence_scsi disabled when adding a new fence device/instance)
 
 * Wed Feb 27 2008 Ryan McCabe <rmccabe@redhat.com> 0.12.0-6
--- conga/make/version.in	2008/03/28 01:15:16	1.21.2.36
+++ conga/make/version.in	2008/04/11 06:54:43	1.21.2.37
@@ -1,2 +1,2 @@
 VERSION=0.12.0
-RELEASE=7
+RELEASE=8



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-04-18  3:31 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-04-18  3:31 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL4
Changes by:	rmccabe at sourceware.org	2008-04-18 03:31:46

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/cluster   : fence-macros validate_fence.js 
	luci/site/luci/Extensions: FenceHandler.py 

Log message:
	Fix bz442933 (Luci inserts "hostname" instead of "ipaddr" for rsa II fencing device)

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.25.2.13&r2=1.25.2.14
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.67.2.24&r2=1.67.2.25
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/fence-macros.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.2.4.3&r2=1.2.4.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/validate_fence.js.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.3.2.5&r2=1.3.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceHandler.py.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.17.2.7&r2=1.17.2.8

--- conga/clustermon.spec.in.in	2008/04/14 17:17:29	1.25.2.13
+++ conga/clustermon.spec.in.in	2008/04/18 03:31:46	1.25.2.14
@@ -194,6 +194,9 @@
 
 
 %changelog
+* Tue Apr 15 2008 Ryan McCabe <rmccabe@redhat.com> 0.11.1-3
+- Fix bz441966 (cluster-snmp dlopen failure)
+ 
 * Tue Mar 25 2008 Ryan McCabe <rmccabe@redhat.com> 0.11.1-2
 - Fix bz441376 (modclusterd segfaults during startup)
 - Backport fixes from RHEL5
--- conga/conga.spec.in.in	2008/04/14 15:55:08	1.67.2.24
+++ conga/conga.spec.in.in	2008/04/18 03:31:46	1.67.2.25
@@ -301,6 +301,7 @@
 
 %changelog
 * Tue Mar 25 2008 Ryan McCabe <rmccabe@redhat.com> 0.11.1-2
+- Fix bz442933 (Luci inserts "hostname" instead of "ipaddr" for rsa II fencing device)
 - Fix bz442369 (conga should install 'sg3_utils' and start service 'scsi_reserve' when scsi fencing is used)
 - Fix bz441574 ("nodename" field for fence_scsi disabled when adding a new fence device/instance)
 - Fix bz349561 (Add Conga support for Oracle Resource Agent)
--- conga/luci/cluster/fence-macros	2008/04/11 22:43:17	1.2.4.3
+++ conga/luci/cluster/fence-macros	2008/04/18 03:31:46	1.2.4.4
@@ -188,6 +188,10 @@
 		<tal:block metal:use-macro="here/fence-macros/macros/fence-form-rsa" />
 	</tal:block>
 
+	<tal:block tal:condition="python: cur_fence_type == 'fence_rsb'">
+		<tal:block metal:use-macro="here/fence-macros/macros/fence-form-rsb" />
+	</tal:block>
+
 	<tal:block tal:condition="python: cur_fence_type == 'fence_brocade'">
 		<tal:block metal:use-macro="here/fence-macros/macros/fence-form-brocade" />
 	</tal:block>
@@ -329,6 +333,7 @@
 	<option name="fence_egenera" value="fence_egenera">Egenera SAN Controller</option>
 	<option name="fence_ilo" value="fence_ilo">HP iLO</option>
 	<option name="fence_rsa" value="fence_rsa">IBM RSA II</option>
+	<option name="fence_rsb" value="fence_rsb">Fujitsu Siemens RSB</option>
 	<option name="fence_bladecenter" value="fence_bladecenter">IBM Blade Center</option>
 	<option name="fence_bullpap" value="fence_bullpap">Bull PAP</option>
 	<option name="fence_rps10" value="fence_rps10">RPS10 Serial Switch</option>
@@ -701,8 +706,8 @@
 			<tr>
 				<td>Hostname</td>
 				<td>
-					<input name="hostname" type="text"
-						tal:attributes="value cur_fencedev/hostname | nothing" />
+					<input name="ipaddr" type="text"
+						tal:attributes="value cur_fencedev/ipaddr | nothing" />
 				</td>
 			</tr>
 			<tr>
@@ -742,6 +747,66 @@
 	</div>
 </div>
 
+<div metal:define-macro="fence-form-rsb"
+	tal:attributes="id cur_fencedev/name | nothing">
+
+	<div id="fence_rsb" class="fencedev">
+		<table>
+			<tr>
+				<td><strong class="cluster">Fence Type</strong></td>
+				<td>Fujitsu Siemens RemoteView Service Board (RSB)</td>
+			</tr>
+			<tr>
+				<td>Name</td>
+				<td>
+					<input name="name" type="text"
+						tal:attributes="value cur_fencedev/name | nothing" />
+				</td>
+			</tr>
+			<tr>
+				<td>Hostname</td>
+				<td>
+					<input name="ipaddr" type="text"
+						tal:attributes="value cur_fencedev/ipaddr | nothing" />
+				</td>
+			</tr>
+			<tr>
+				<td>Login</td>
+				<td>
+					<input name="login" type="text"
+						tal:attributes="value cur_fencedev/login | nothing" />
+				</td>
+			</tr>
+			<tr>
+				<td>Password</td>
+				<td>
+					<input name="passwd" type="password" autocomplete="off"
+						tal:attributes="value nothing" />
+				</td>
+			</tr>
+			<tr>
+				<td>
+					<span title="Full path to a script to generate fence password">Password Script (optional)</span>
+				</td>
+				<td>
+					<input type="text" name="passwd_script"
+						tal:attributes="
+							disabled cur_fencedev/isShared | nothing;
+							value cur_fencedev/passwd_script | nothing" />
+				</td>
+			</tr>
+		</table>
+
+		<tal:block tal:condition="exists: cur_fencedev">
+			<input type="hidden" name="existing_device" value="1" />
+			<input type="hidden" name="orig_name"
+				tal:attributes="value cur_fencedev/name | nothing" />
+		</tal:block>
+
+		<input type="hidden" name="fence_type" value="fence_rsb" />
+	</div>
+</div>
+
 <div metal:define-macro="fence-form-brocade"
 	tal:attributes="id cur_fencedev/name | nothing">
 
@@ -1376,6 +1441,7 @@
 	<tal:block metal:use-macro="here/fence-macros/macros/fence-form-ilo" />
 	<tal:block metal:use-macro="here/fence-macros/macros/fence-form-drac" />
 	<tal:block metal:use-macro="here/fence-macros/macros/fence-form-rsa" />
+	<tal:block metal:use-macro="here/fence-macros/macros/fence-form-rsb" />
 	<tal:block metal:use-macro="here/fence-macros/macros/fence-form-brocade" />
 	<tal:block metal:use-macro="here/fence-macros/macros/fence-form-sanbox2" />
 	<tal:block metal:use-macro="here/fence-macros/macros/fence-form-vixel" />
--- conga/luci/cluster/validate_fence.js	2008/03/25 01:27:10	1.3.2.5
+++ conga/luci/cluster/validate_fence.js	2008/04/18 03:31:46	1.3.2.6
@@ -34,7 +34,8 @@
 fence_validator['manual'] = [];
 fence_validator['mcdata'] = [ 'ipaddr', 'login', 'passwd', 'passwd_script' ];
 fence_validator['rps10'] = [ 'device', 'port'];
-fence_validator['rsa'] = [ 'hostname', 'login', 'passwd', 'passwd_script' ];
+fence_validator['rsa'] = [ 'ipaddr', 'login', 'passwd', 'passwd_script' ];
+fence_validator['rsb'] = [ 'ipaddr', 'login', 'passwd', 'passwd_script' ];
 fence_validator['sanbox2'] = [ 'ipaddr', 'login', 'passwd', 'passwd_script' ];
 fence_validator['scsi'] = [];
 fence_validator['unknown'] = [];
--- conga/luci/site/luci/Extensions/FenceHandler.py	2008/03/25 01:27:11	1.17.2.7
+++ conga/luci/site/luci/Extensions/FenceHandler.py	2008/04/18 03:31:46	1.17.2.8
@@ -543,10 +543,10 @@
 	errors = list()
 
 	try:
-		hostname = form['hostname'].strip()
+		hostname = form['ipaddr'].strip()
 		if not hostname:
 			raise Exception, 'blank'
-		fencedev.addAttribute('hostname', hostname)
+		fencedev.addAttribute('ipaddr', hostname)
 	except Exception, e:
 		errors.append(FD_PROVIDE_HOSTNAME)
 
@@ -822,6 +822,7 @@
 	'fence_ipmilan':		val_ipmilan_fd,
 	'fence_drac':			val_drac_fd,
 	'fence_rsa':			val_rsa_fd,
+	'fence_rsb':			val_rsa_fd, # same params as rsa
 	'fence_rps10':			val_rps10_fd,
 	'fence_manual':			val_noop_fd
 }
@@ -1082,6 +1083,7 @@
 	'fence_ipmilan':		val_noop_fi,
 	'fence_drac':			val_noop_fi,
 	'fence_rsa':			val_noop_fi,
+	'fence_rsb':			val_noop_fi,
 	'fence_rps10':			val_noop_fi
 }
 



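The FenceHandler.py hunks show why adding fence_rsb is cheap: fence device validation is driven by a dispatch table, and fence_rsb takes the same parameters as fence_rsa, so the new type simply maps to the existing routine. A sketch of the table shape (the names other than val_rsa_fd and the fence_* keys are invented for illustration):

def val_rsa_fd(form, fencedev):
	# Checks ipaddr/login/passwd/passwd_script as in the diff above.
	errors = []
	# ... field validation elided ...
	return errors

FD_VALIDATORS = {
	'fence_rsa': val_rsa_fd,
	'fence_rsb': val_rsa_fd,  # same params as rsa
}

def validate_fencedev(fence_type, form, fencedev):
	try:
		return FD_VALIDATORS[fence_type](form, fencedev)
	except KeyError:
		return ['unknown fence device type: %s' % fence_type]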
^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-05-12 15:13 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-05-12 15:13 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2008-05-12 15:13:33

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/site/luci/Extensions: ricci_communicator.py 
	make           : version.in 
	ricci/common   : File.cpp Variable.cpp daemon_init.c utils.cpp 
	ricci/include  : shred_allocator.h 
	ricci/init.d   : ricci 
	ricci/modules/cluster/clumon: Makefile 
	ricci/modules/cluster/clumon/init.d: modclusterd 
	ricci/modules/cluster/clumon/src/common: Cluster.cpp 
	ricci/modules/storage: ContentFS.cpp ExtendedFS.cpp LV.cpp 
	                       Mapper.cpp MountHandler.cpp PTSource.cpp 
	                       PV.cpp Partition.cpp PartitionTable.cpp 
	                       parted_wrapper.cpp 

Log message:
	Fixes from HEAD

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.37&r2=1.18.2.38
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.84&r2=1.45.2.85
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_communicator.py.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.9.2.14&r2=1.9.2.15
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.38&r2=1.21.2.39
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/common/File.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1.2.4&r2=1.1.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/common/Variable.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.8.2.1&r2=1.8.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/common/daemon_init.c.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1.2.1&r2=1.1.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/common/utils.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.6.2.2&r2=1.6.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/include/shred_allocator.h.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.2.2.1&r2=1.2.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/init.d/ricci.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.8.2.4&r2=1.8.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/clumon/Makefile.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.4.2.2&r2=1.4.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/clumon/init.d/modclusterd.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.2.2.2&r2=1.2.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/clumon/src/common/Cluster.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.6.2.2&r2=1.6.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/ContentFS.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.5.2.1&r2=1.5.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/ExtendedFS.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.7.2.2&r2=1.7.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/LV.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.6.2.4&r2=1.6.2.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/Mapper.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3.2.1&r2=1.3.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/MountHandler.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.5.2.2&r2=1.5.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/PTSource.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.2.2.1&r2=1.2.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/PV.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.4.2.3&r2=1.4.2.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/Partition.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3.2.1&r2=1.3.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/PartitionTable.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.5.2.1&r2=1.5.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/parted_wrapper.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.8.2.4&r2=1.8.2.5

--- conga/clustermon.spec.in.in	2008/04/11 06:54:43	1.18.2.37
+++ conga/clustermon.spec.in.in	2008/05/12 15:13:31	1.18.2.38
@@ -18,7 +18,7 @@
 Name: clustermon
 Version: @@VERS@@
 Release: @@REL@@%{?dist}
-License: GPL
+License: GPLv2
 URL: http://sources.redhat.com/cluster/conga
 
 Group: System Environment/Base
@@ -67,12 +67,11 @@
 make %{?_smp_mflags} clustermon
 
 %install
-rm -rf $RPM_BUILD_ROOT
-make DESTDIR=$RPM_BUILD_ROOT install_clustermon
+rm -rf %{buildroot}
+make DESTDIR=%{buildroot} install_clustermon
 
 %clean
-rm -rf $RPM_BUILD_ROOT
-
+rm -rf %{buildroot}
 
 
 
@@ -191,7 +190,7 @@
 			%{_docdir}/cluster-cim-%{version}/
 
 %post -n cluster-cim
-# pegasus might not be running, don't fail %post
+# pegasus might not be running, don't fail
 /sbin/service tog-pegasus condrestart >&/dev/null
 exit 0
 
@@ -200,7 +199,7 @@
 if [ "$1" == "0" ]; then
 	/sbin/service tog-pegasus condrestart >&/dev/null
 fi
-# pegasus might not be running, don't fail %postun
+# pegasus might not be running, don't fail
 exit 0
 
 
--- conga/conga.spec.in.in	2008/04/28 03:54:18	1.45.2.84
+++ conga/conga.spec.in.in	2008/05/12 15:13:32	1.45.2.85
@@ -62,11 +62,11 @@
 make %{?_smp_mflags} conga
 
 %install
-rm -rf $RPM_BUILD_ROOT
-make DESTDIR=$RPM_BUILD_ROOT install_conga
+rm -rf %{buildroot}
+make DESTDIR=%{buildroot} install_conga
 
 %clean
-rm -rf $RPM_BUILD_ROOT
+rm -rf %{buildroot}
 
 
 ####### luci #######
--- conga/luci/site/luci/Extensions/ricci_communicator.py	2008/01/23 04:44:32	1.9.2.14
+++ conga/luci/site/luci/Extensions/ricci_communicator.py	2008/05/12 15:13:32	1.9.2.15
@@ -169,7 +169,7 @@
 				luci_log.debug_net('RC:UNAUTH0: unauthenticate %s for %s:%d' \
 					% (ret, self.__hostname, self.__port))
 			if ret != '0':
-				raise Exception, 'Invalid response'
+				raise Exception, 'Invalid response: %s' % ret
 
 			try:
 				self.ss.untrust()
@@ -494,7 +494,7 @@
 					% batch_xml.toxml())
 			except:
 				pass
-		raise RicciError, 'Not an XML batch node'
+		raise RicciError, 'Not an XML batch node: %s' % batch_xml.nodeName
 
 	total = 0
 	last = 0
@@ -544,7 +544,7 @@
 		if LUCI_DEBUG_NET is True:
 			luci_log.debug_net_priv('RC:EMS0: Expecting "batch" got "%s"' \
 				% batch_xml.toxml())
-		raise RicciError, 'Invalid XML node; expecting a batch node'
+		raise RicciError, 'Invalid XML node; expecting a batch node: %s' % batch_xml.nodeName
 
 	c = 0
 	for node in batch_xml.childNodes:
--- conga/make/version.in	2008/04/28 03:54:18	1.21.2.38
+++ conga/make/version.in	2008/05/12 15:13:32	1.21.2.39
@@ -1,2 +1,2 @@
-VERSION=0.12.0
-RELEASE=9
+VERSION=0.14.0
+RELEASE=1
--- conga/ricci/common/File.cpp	2008/01/17 17:38:36	1.1.2.4
+++ conga/ricci/common/File.cpp	2008/05/12 15:13:32	1.1.2.5
@@ -139,17 +139,17 @@
 	MutexLocker l(*_mutex);
 
 	long len = size();
-	const auto_ptr<char> buff(new char[len]);
+	char buff[len];
 	try {
 		((fstream *) _pimpl->fs)->seekg(0, ios::beg);
 		check_failed();
-		((fstream *) _pimpl->fs)->read(buff.get(), len);
+		((fstream *) _pimpl->fs)->read(buff, len);
 		check_failed();
-		String ret(buff.get(), len);
-		::shred(buff.get(), len);
+		String ret(buff, len);
+		::shred(buff, len);
 		return ret;
 	} catch ( ... ) {
-		::shred(buff.get(), len);
+		::shred(buff, len);
 		throw;
 	}
 }
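
A note on the buffer change above: the replaced line was broken in a subtle
way, since auto_ptr releases its pointer with delete rather than delete[],
so pairing it with new char[len] is undefined behavior. The variable-length
array that replaces it fixes that, but it lives on the stack and can overflow
for large files. A minimal heap-backed sketch, assuming the ::shred()
overwriting helper used in the code above (its exact signature is assumed
here):

    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    extern void shred(char *buff, size_t len); // conga helper; signature assumed

    std::string read_shredded(std::fstream &fs, long len)
    {
        if (len <= 0)
            return std::string();
        std::vector<char> buff(len);   // freed automatically on all paths
        try {
            fs.seekg(0, std::ios::beg);
            fs.read(&buff[0], len);
            std::string ret(&buff[0], len);
            ::shred(&buff[0], len);    // wipe the copy before the buffer dies
            return ret;
        } catch (...) {
            ::shred(&buff[0], len);
            throw;
        }
    }
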
--- conga/ricci/common/Variable.cpp	2008/01/17 17:38:36	1.8.2.1
+++ conga/ricci/common/Variable.cpp	2008/05/12 15:13:32	1.8.2.2
@@ -26,6 +26,8 @@
 
 #include <stdio.h>
 
+#include <vector>
+#include <algorithm>
 using namespace std;
 
 // ##### class Variable #####
--- conga/ricci/common/daemon_init.c	2008/01/17 17:38:36	1.1.2.1
+++ conga/ricci/common/daemon_init.c	2008/05/12 15:13:32	1.1.2.2
@@ -20,7 +20,7 @@
 /** @file
  * daemon_init function, does sanity checks and calls daemon().
  *
- * $Id: daemon_init.c,v 1.1.2.1 2008/01/17 17:38:36 rmccabe Exp $
+ * $Id: daemon_init.c,v 1.1.2.2 2008/05/12 15:13:32 rmccabe Exp $
  *
  * Author: Jeff Moyer <moyer@mclinux.com>
  */
@@ -232,5 +232,4 @@
 
 	update_pidfile(prog);
 	nice(-1);
-	//mlockall(MCL_CURRENT | MCL_FUTURE);
 }
--- conga/ricci/common/utils.cpp	2008/01/17 17:38:36	1.6.2.2
+++ conga/ricci/common/utils.cpp	2008/05/12 15:13:32	1.6.2.3
@@ -23,8 +23,13 @@
 #include "utils.h"
 #include "executils.h"
 
-#include <openssl/md5.h>
+#include <unistd.h>
+#include <stdio.h>
 #include <stdlib.h>
+#include <math.h>
+#include <errno.h>
+#include <limits.h>
+#include <openssl/md5.h>
 
 //#include <iostream>
 
@@ -201,24 +206,44 @@
 long long
 utils::to_long(const String& str)
 {
-	return atoll(str.c_str());
+	char *p = NULL;
+	long long ret;
+	ret = strtoll(strip(str).c_str(), &p, 10);
+	if (p != NULL && *p != '\0')
+		throw String("Not a number: ") + str;
+	if (ret == LLONG_MIN && errno == ERANGE)
+		throw String("Numeric underflow: ") + str;
+	if (ret == LLONG_MAX && errno == ERANGE)
+		throw String("Numeric overflow: ") + str;
+	return ret;
 }
 
 float
 utils::to_float(const String& str)
 {
-	float num = 0;
+	char *p = NULL;
+	float ret;
 
-	sscanf(strip(str).c_str(), "%f", &num);
-	return num;
+	ret = strtof(strip(str).c_str(), &p);
+	if (p != NULL && *p != '\0')
+		throw String("Invalid floating point number: ") + str;
+	if (ret == 0 && errno == ERANGE)
+		throw String("Floating point underflow: ") + str;
+	if ((ret == HUGE_VALF || ret == -HUGE_VALF) && errno == ERANGE)
+		throw String("Floating point overflow: ") + str;
+
+	return ret;
 }
 
 String
 utils::to_string(int value)
 {
 	char tmp[64];
+	int ret;
 
-	sprintf(tmp, "%d", value);
+	ret = snprintf(tmp, sizeof(tmp), "%d", value);
+	if (ret < 0 || (size_t) ret >= sizeof(tmp))
+		throw String("Invalid integer");
 	return tmp;
 }
 
@@ -226,8 +251,11 @@
 utils::to_string(long value)
 {
 	char tmp[64];
+	int ret;
 
-	sprintf(tmp, "%ld", value);
+	ret = snprintf(tmp, sizeof(tmp), "%ld", value);
+	if (ret < 0 || (size_t) ret >= sizeof(tmp))
+		throw String("Invalid long integer");
 	return tmp;
 }
 
@@ -235,8 +263,11 @@
 utils::to_string(long long value)
 {
 	char tmp[64];
+	int ret;
 
-	sprintf(tmp, "%lld", value);
+	ret = snprintf(tmp, sizeof(tmp), "%lld", value);
+	if (ret < 0 || (size_t) ret >= sizeof(tmp))
+		throw String("Invalid long long integer");
 	return tmp;
 }
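
One caveat with the strtoll/strtof checks introduced in this hunk: those
functions set errno only on failure, so errno must be cleared before each
call or a stale ERANGE from an earlier call can raise a false overflow
error. The snprintf bounds checks above are sound as written. A minimal
sketch of the full strtoll idiom (parse_ll is a hypothetical name, not
code from this patch):

    #include <errno.h>
    #include <stdlib.h>
    #include <stdexcept>
    #include <string>

    long long parse_ll(const std::string &s)
    {
        char *end = NULL;
        errno = 0; // strtoll leaves errno untouched on success
        long long ret = strtoll(s.c_str(), &end, 10);
        if (end == s.c_str() || *end != '\0')
            throw std::runtime_error("not a number: " + s);
        if (errno == ERANGE)
            throw std::runtime_error("out of range: " + s);
        return ret;
    }
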
 
--- conga/ricci/include/shred_allocator.h	2008/01/17 17:38:37	1.2.2.1
+++ conga/ricci/include/shred_allocator.h	2008/05/12 15:13:32	1.2.2.2
@@ -33,6 +33,12 @@
 #ifndef __CONGA_SHRED_ALLOCATOR_H
 #define __CONGA_SHRED_ALLOCATOR_H
 
+extern "C" {
+	#include <unistd.h>
+	#include <stdlib.h>
+	#include <string.h>
+}
+
 #include <new>
 
 template<typename _Tp>
--- conga/ricci/init.d/ricci	2008/01/17 17:38:37	1.8.2.4
+++ conga/ricci/init.d/ricci	2008/05/12 15:13:32	1.8.2.5
@@ -53,103 +53,159 @@
 {
 	rm -f "$SSL_PUBKEY" "$SSL_PRIVKEY"
 	echo -n "generating SSL certificates...  "
-	/usr/bin/openssl genrsa -out "$SSL_PRIVKEY" 2048 >&/dev/null
-	/usr/bin/openssl req -new -x509 -key "$SSL_PRIVKEY" -out "$SSL_PUBKEY" -days 1825 -config /var/lib/ricci/certs/cacert.config
-	/bin/chown $RUNASUSER:$RUNASUSER "$SSL_PRIVKEY" "$SSL_PUBKEY"
+	/usr/bin/openssl genrsa -out "$SSL_PRIVKEY" 2048 >&/dev/null &&
+	/usr/bin/openssl req -new -x509 -key "$SSL_PRIVKEY" -out "$SSL_PUBKEY" -days 1825 -config /var/lib/ricci/certs/cacert.config &&
+	/bin/chown $RUNASUSER:$RUNASUSER "$SSL_PRIVKEY" "$SSL_PUBKEY" &&
+	/bin/chmod 600 "$SSL_PRIVKEY" &&
 	/bin/chmod 644 "$SSL_PUBKEY"
-	/bin/chmod 600 "$SSL_PRIVKEY"
 	ret=$?
 	echo "done"
 	return $ret
 }
 
+check_ricci_lockfiles() {
+	ricci_status >& /dev/null
+	ret=$?
+	if [ "$ret" -eq 1 ] || [ "$ret" -eq 2 ]; then
+		# stale pid and/or lockfile
+		rm -f -- "$PIDFILE" "$LOCKFILE"
+	fi
+}
+
+ricci_status() {
+	if [ -f "$PIDFILE" ]; then
+		status -p "$PIDFILE" "$RICCID"
+		ricci_up=$?
+	else
+		status "$RICCID"
+		ricci_up=$?
+	fi
+	return $ricci_up
+}
+
+ricci_stop() {
+	ricci_status >& /dev/null
+	ret=$?
+
+	if [ "$ret" -ne 0 ]; then
+		# already stopped - no error
+		check_ricci_lockfiles
+		return 0
+	fi
+
+	killproc "$RICCID" SIGTERM
+
+	ricci_status >& /dev/null
+	ret=$?
+
+	max_wait=10
+	cur_wait=0
+	while [ "$ret" -eq 0 ] && [ $cur_wait -lt $max_wait ]; do
+		sleep 1
+		cur_wait=`expr $cur_wait + 1`
+		ricci_status >& /dev/null
+		ret=$?
+	done
+
+	ricci_status >& /dev/null
+	ret=$?
+
+	if [ "$ret" -ne 0 ]; then
+		rm -f -- "$PIDFILE" "$LOCKFILE"
+		return 0
+	fi
+
+	return 1
+}
 
 case $1 in
 	start)
 		service messagebus status >&/dev/null
-		if [ $? -ne 0 ]; then
+		if [ "$?" -ne 0 ]; then
 			service messagebus start
 			service messagebus status >&/dev/null
-			if [ $? -ne 0 ]; then
-				echo "not starting ricci..."
-				/usr/bin/logger -t $RICCID "startup failed"
+			if [ "$?" -ne 0 ]; then
+				/usr/bin/logger -t "$RICCID" -- "messagebus startup failed"
+				failure "not starting $RICCID"
 				exit 1
 			fi
 		fi
+
 		service oddjobd status >&/dev/null
-		if [ $? -ne 0 ]; then
+		if [ "$?" -ne 0 ]; then
 			service oddjobd start
 			service oddjobd status >&/dev/null
-			if [ $? -ne 0 ]; then
-				echo "not starting ricci..."
-				/usr/bin/logger -t $RICCID "startup failed"
+			if [ "$?" -ne 0 ]; then
+				/usr/bin/logger -t "$RICCID" -- "oddjob startup failed"
+				failure "not starting $RICCID"
 				exit 1
 			fi
 		fi
 
 		service saslauthd start >&/dev/null
+
 		ssl_certs_ok
-		if [ "1$?" != "10" ] ; then
+		if [ "$?" -ne 0 ] ; then
 			generate_ssl_certs
 		fi
 
-		NewUID=`grep "^$RUNASUSER:" /etc/passwd | sed -e 's/^[^:]*:[^:]*://' -e 's/:.*//'`
+		check_ricci_lockfiles
+		NewUID=`grep "^$RUNASUSER:" /etc/passwd | cut -d: -f3`
 		echo -n $"Starting $ID: "
-		daemon $RICCID -u $NewUID
-		rtrn=$?
+		daemon "$RICCID" -u "$NewUID"
 		echo
+		ret=$?
 
-		if [ $rtrn -eq 0 ]; then
-			touch "$LOCKFILE"
-			/usr/bin/logger -t $RICCID "startup succeeded"
+		if [ "$ret" -eq 0 ]; then
+			touch -- "$LOCKFILE"
+			/usr/bin/logger -t "$RICCID" -- "startup succeeded"
 		else
-			/usr/bin/logger -t $RICCID "startup failed"
+			/usr/bin/logger -t "$RICCID" -- "startup failed"
 		fi
 	;;
 
 	restart)
 		$0 stop
 		$0 start
-		rtrn=$?
+		ret=$?
 	;;
 
 	status)
-		status $RICCID
-		rtrn=$?
+		ricci_status
+		ret=$?
 	;;
 
 	stop)
 		echo -n "Shutting down $ID: "
-		killproc $RICCID SIGTERM
-		rtrn=$?
-		if [ $rtrn -eq 0 ]; then
-			sleep 8
-			rm -f $PIDFILE
-			rm -f $LOCKFILE
-			/usr/bin/logger -t $RICCID "shutdown succeeded"
-			rtrn=0
+		ricci_stop
+		ret=$?
+		if [ "$ret" -eq 0 ]; then
+			/usr/bin/logger -t "$RICCID" -- "shutdown succeeded"
 		else
-			/usr/bin/logger -t $RICCID "shutdown failed"
-			rtrn=1
+			/usr/bin/logger -t "$RICCID" -- "shutdown failed"
 		fi
 		echo
 	;;
 
 	condrestart)
-		if [ -f ${PIDFILE} ] ; then
+		if [ -f "$PIDFILE" ]; then
 			$0 restart
-			rtrn=$?
+			ret=$?
 		fi
 	;;
 
+	try-restart)
+		ret=3
+	;;
+
 	reload)
-		rtrn=3
+		ret=3
 	;;
 
 	*)
 		echo "Usage: $0 {start|stop|status|restart|condrestart|reload}"
-		rtrn=3
+		ret=3
 	;;
 esac
 
-exit $rtrn
+exit $ret
--- conga/ricci/modules/cluster/clumon/Makefile	2008/01/17 17:38:37	1.4.2.2
+++ conga/ricci/modules/cluster/clumon/Makefile	2008/05/12 15:13:33	1.4.2.3
@@ -11,6 +11,7 @@
 top_srcdir=../../..
 UNINSTALL=${top_srcdir}/scripts/uninstall.pl
 
+include ${top_srcdir}/make/version.in
 include ${top_srcdir}/make/defines.mk
 
 all:
--- conga/ricci/modules/cluster/clumon/init.d/modclusterd	2008/01/17 17:38:37	1.2.2.2
+++ conga/ricci/modules/cluster/clumon/init.d/modclusterd	2008/05/12 15:13:33	1.2.2.3
@@ -37,66 +37,122 @@
 # If we're not configured, then don't start anything.
 #
 [ "${NETWORKING}" = "yes" ] || exit 1
-#[ -f "$CFG_FILE" ] || exit 0
 
+modclusterd_status() {
+	if [ -f "$PIDFILE" ]; then
+		status -p "$PIDFILE" "$MODCLUSTERD"
+		ret=$?
+	else
+		status "$MODCLUSTERD"
+		ret=$?
+	fi
+	return $ret
+}
+
+check_modclusterd_lockfiles() {
+	modclusterd_status >& /dev/null
+	ret=$?
+	if [ "$ret" -eq 1 ] || [ "$ret" -eq 2 ]; then
+		# stale pid and/or lockfile
+		rm -f -- "$PIDFILE" "$LOCKFILE"
+	fi
+	return 0
+}
+
+modclusterd_stop() {
+	modclusterd_status >& /dev/null
+	ret=$?
+
+	if [ "$ret" -ne 0 ]; then
+		# already stopped - no error
+		check_modclusterd_lockfiles
+		return 0
+	fi
+
+	killproc "$MODCLUSTERD" SIGTERM
+
+	modclusterd_status >& /dev/null
+	ret=$?
+
+	max_wait=10
+	cur_wait=0
+	while [ "$ret" -eq 0 ] && [ $cur_wait -lt $max_wait ]; do
+		sleep 1
+		cur_wait=`expr $cur_wait + 1`
+		modclusterd_status >& /dev/null
+		ret=$?
+	done
+
+	modclusterd_status >& /dev/null
+	ret=$?
+
+	if [ "$ret" -ne 0 ]; then
+		rm -f -- "$PIDFILE" "$LOCKFILE"
+		return 0
+	fi
+
+	return 1
+}
 
 case $1 in
 	start)
 		echo -n $"Starting $ID: "
-		daemon $MODCLUSTERD
-		rtrn=$?
-		if [ $rtrn -eq 0 ]; then
-			touch $LOCKFILE
-			/usr/bin/logger -t $MODCLUSTERD "startup succeeded"
+		check_modclusterd_lockfiles
+		daemon "$MODCLUSTERD"
+		ret=$?
+		if [ "$ret" -eq 0 ]; then
+			touch -- "$LOCKFILE"
+			/usr/bin/logger -t "$MODCLUSTERD" -- "startup succeeded"
 		else
-			/usr/bin/logger -t $MODCLUSTERD "startup failed"
-			rtrn=1
+			/usr/bin/logger -t "$MODCLUSTERD" -- "startup failed"
+			ret=1
 		fi
 		echo
 	;;
 
 	restart)
 		$0 stop
-		sleep 8
 		$0 start
-		rtrn=$?
+		ret=$?
 	;;
 
 	status)
-		status $MODCLUSTERD
-		rtrn=$?
+		modclusterd_status
+		ret=$?
 	;;
 
 	stop)
 		echo -n "Shutting down $ID: "
-		killproc $MODCLUSTERD SIGTERM
-		rtrn=$?
-		if [ $rtrn -eq 0 ]; then
-			rm -f $PIDFILE
-			rm -f $LOCKFILE
-			/usr/bin/logger -t $MODCLUSTERD "shutdown succeeded"
+		modclusterd_stop
+		ret=$?
+		if [ "$ret" -eq 0 ]; then
+			/usr/bin/logger -t "$MODCLUSTERD" -- "shutdown succeeded"
 		else
-			/usr/bin/logger -t $MODCLUSTERD "shutdown failed"
-			rtrn=1
+			/usr/bin/logger -t "$MODCLUSTERD" -- "shutdown failed"
+			ret=1
 		fi
 		echo
 	;;
 
 	condrestart)
-		if [ -f ${PIDFILE} ] ; then
+		if [ -f "$PIDFILE" ]; then
 			$0 restart
-			rtrn=$?
+			ret=$?
 		fi
 	;;
 
+	try-restart)
+		ret=3
+	;;
+
 	reload)
-		rtrn=3
+		ret=3
 	;;
 
 	*)
 		echo $"Usage: $0 {start|stop|reload|restart|status}"
-		rtrn=3
+		ret=3
 	;;
 esac
 
-exit $rtrn
+exit $ret
--- conga/ricci/modules/cluster/clumon/src/common/Cluster.cpp	2008/01/17 17:38:38	1.6.2.2
+++ conga/ricci/modules/cluster/clumon/src/common/Cluster.cpp	2008/05/12 15:13:33	1.6.2.3
@@ -25,7 +25,8 @@
 #include <stdio.h>
 
 extern "C" {
-#	include <libcman.h>
+	#include <limits.h>
+	#include <libcman.h>
 }
 
 using namespace std;
--- conga/ricci/modules/storage/ContentFS.cpp	2008/01/17 17:38:39	1.5.2.1
+++ conga/ricci/modules/storage/ContentFS.cpp	2008/05/12 15:13:33	1.5.2.2
@@ -28,6 +28,7 @@
 #include "BDFactory.h"
 #include "defines.h"
 #include "utils.h"
+#include <algorithm>
 
 
 using namespace std;
--- conga/ricci/modules/storage/ExtendedFS.cpp	2008/01/17 17:38:39	1.7.2.2
+++ conga/ricci/modules/storage/ExtendedFS.cpp	2008/05/12 15:13:33	1.7.2.3
@@ -27,8 +27,9 @@
 #include "UMountError.h"
 #include "FileMagic.h"
 
-
+#include <algorithm>
 #include <iostream>
+
 using namespace std;
 
 
--- conga/ricci/modules/storage/LV.cpp	2008/01/17 17:38:39	1.6.2.4
+++ conga/ricci/modules/storage/LV.cpp	2008/05/12 15:13:33	1.6.2.5
@@ -124,7 +124,7 @@
     long long min  = utils::to_long(xml.get_attr("min"));
     long long max  = utils::to_long(xml.get_attr("max"));
     long long step = utils::to_long(xml.get_attr("step"));
-    long long min_size = (long long ) (size * (usage / 100.0));
+    long long min_size = (long long) (size * usage) / 100;
     if (min_size > min)
       min = min_size;
     else if (min_size > max)
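
Replacing size * (usage / 100.0) with pure integer math above avoids float
rounding, at the price of a possible overflow in size * usage for very
large volumes. A hedged sketch of an overflow-safer split (pct_of is a
hypothetical helper, not code from this patch; it assumes non-negative
inputs):

    // floor(size * usage / 100) without forming size * usage directly:
    // split size into hundreds and remainder, so the only product that
    // can grow is remainder * usage, bounded by 99 * usage.
    long long pct_of(long long size, long long usage)
    {
        return (size / 100) * usage + (size % 100) * usage / 100;
    }
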
--- conga/ricci/modules/storage/Mapper.cpp	2008/01/17 17:38:39	1.3.2.1
+++ conga/ricci/modules/storage/Mapper.cpp	2008/05/12 15:13:33	1.3.2.2
@@ -26,6 +26,7 @@
 #include "BDFactory.h"
 #include "MapperFactory.h"
 #include "MidAir.h"
+#include <algorithm>
 
 
 using namespace std;
--- conga/ricci/modules/storage/MountHandler.cpp	2008/01/17 17:38:39	1.5.2.2
+++ conga/ricci/modules/storage/MountHandler.cpp	2008/05/12 15:13:33	1.5.2.3
@@ -37,8 +37,9 @@
 #include <sys/file.h>
 #include <errno.h>
 
-
+#include <algorithm>
 #include <iostream>
+
 using namespace std;
 
 
--- conga/ricci/modules/storage/PTSource.cpp	2008/01/17 17:38:39	1.2.2.1
+++ conga/ricci/modules/storage/PTSource.cpp	2008/05/12 15:13:33	1.2.2.2
@@ -25,6 +25,7 @@
 #include "PartitionTable.h"
 #include "parted_wrapper.h"
 
+#include <algorithm>
 
 using namespace std;
 
--- conga/ricci/modules/storage/PV.cpp	2008/01/17 17:38:39	1.4.2.3
+++ conga/ricci/modules/storage/PV.cpp	2008/05/12 15:13:33	1.4.2.4
@@ -26,6 +26,7 @@
 #include "MapperFactory.h"
 #include "utils.h"
 #include "LVMClusterLockingError.h"
+#include <algorithm>
 
 
 using namespace std;
--- conga/ricci/modules/storage/Partition.cpp	2008/01/17 17:38:39	1.3.2.1
+++ conga/ricci/modules/storage/Partition.cpp	2008/05/12 15:13:33	1.3.2.2
@@ -30,6 +30,7 @@
 #include "ContentExtendedPartition.h"
 #include "PV.h"
 
+#include <algorithm>
 
 using namespace std;
 
--- conga/ricci/modules/storage/PartitionTable.cpp	2008/01/17 17:38:39	1.5.2.1
+++ conga/ricci/modules/storage/PartitionTable.cpp	2008/05/12 15:13:33	1.5.2.2
@@ -28,7 +28,7 @@
 #include "MapperFactory.h"
 #include "utils.h"
 
-
+#include <algorithm>
 #include <iostream>
 
 
--- conga/ricci/modules/storage/parted_wrapper.cpp	2008/01/17 17:38:39	1.8.2.4
+++ conga/ricci/modules/storage/parted_wrapper.cpp	2008/05/12 15:13:33	1.8.2.5
@@ -24,8 +24,9 @@
 #include "parted_wrapper.h"
 #include "utils.h"
 
-
+#include <algorithm>
 #include <iostream>
+
 using namespace std;
 
 
@@ -667,11 +668,13 @@
   String s = utils::to_lower(utils::strip(size_str));
   long long multiplier;
   // parted defines 1KB as 1000 bytes.
-  if (s.find("b") == s.npos)
+  if (s.find("b") == s.npos) {
     multiplier = 1000 * 1000;  // by old parted behavior, size is in MB
-  else {
+    return (long long) utils::to_long(s) * multiplier;
+  } else {
     if (s.size() < 3)
       throw String("parted size has an invalid value: ") + s;
+    /* This path should never be hit on RHEL5 and later. */
     multiplier = 1;
     if (s[s.size()-2] == 'k')
       multiplier = 1000;
@@ -681,7 +684,6 @@
       multiplier = 1000 * 1000 * 1000;
     else if (s[s.size()-2] == 't')
       multiplier = (long long) 1000 * 1000 * 1000 * 1000;
+    return (long long) utils::to_float(s) * multiplier;
   }
-
-  return (long long) utils::to_float(s) * multiplier;
 }



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-07-28 17:49 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-07-28 17:49 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	RHEL5
Changes by:	rmccabe at sourceware.org	2008-07-28 17:49:45

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	make           : version.in 
	ricci/modules/virt: Makefile Virt.cpp 
	ricci/test_suite/cluster: vm_list.xml 

Log message:
	Build fixes

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.38&r2=1.18.2.39
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.45.2.98&r2=1.45.2.99
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/make/version.in.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.41&r2=1.21.2.42
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/virt/Makefile.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1.2.1&r2=1.1.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/virt/Virt.cpp.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1.2.1&r2=1.1.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/test_suite/cluster/vm_list.xml.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1.2.1&r2=1.1.2.2

--- conga/clustermon.spec.in.in	2008/05/12 15:13:31	1.18.2.38
+++ conga/clustermon.spec.in.in	2008/07/28 17:49:44	1.18.2.39
@@ -27,22 +27,11 @@
 Source0: %{name}-%{version}.tar.gz
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
-%define virt_support 0
-
-%ifarch i386 x86_64 ia64
-%define virt_support 0
-%endif
-
 BuildRequires: cman-devel
 BuildRequires: glibc-devel gcc-c++ libxml2-devel
 BuildRequires: openssl-devel dbus-devel pam-devel pkgconfig
 BuildRequires: net-snmp-devel tog-pegasus-devel
 
-%if %{virt_support}
-BuildRequires: libvirt-devel
-Requires: libvirt
-%endif
-
 %description
 This package contains Red Hat Enterprise Linux Cluster Suite
 SNMP/CIM module/agent/provider.
@@ -52,17 +41,10 @@
 %setup -q
 
 %build
-%if %{virt_support}
-%configure		--arch=%{_arch} \
-		--docdir=%{_docdir} \
-		--pegasus_providers_dir=%{PEGASUS_PROVIDERS_DIR} \
-		--include_zope_and_plone=no --VIRT_SUPPORT=0
-%else
 %configure		--arch=%{_arch} \
 		--docdir=%{_docdir} \
 		--pegasus_providers_dir=%{PEGASUS_PROVIDERS_DIR} \
-		--include_zope_and_plone=no --VIRT_SUPPORT=0
-%endif
+		--include_zope_and_plone=no
 
 make %{?_smp_mflags} clustermon
 
--- conga/conga.spec.in.in	2008/07/23 19:55:40	1.45.2.98
+++ conga/conga.spec.in.in	2008/07/28 17:49:44	1.45.2.99
@@ -31,6 +31,12 @@
 %endif
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
+%define virt_support 0
+
+%ifarch i386 x86_64 ia64
+%define virt_support 1
+%endif
+
 BuildRequires: python-devel >= 2.4.1
 BuildRequires: glibc-devel gcc-c++ libxml2-devel sed
 BuildRequires: cyrus-sasl-devel >= 2.1
@@ -56,9 +62,16 @@
 %endif
 
 %build
+%if %{virt_support}
 %configure		--arch=%{_arch} \
 		--docdir=%{_docdir} \
-		--include_zope_and_plone=%{include_zope_and_plone}
+		--include_zope_and_plone=%{include_zope_and_plone} --VIRT_SUPPORT=1
+%else
+%configure		--arch=%{_arch} \
+		--docdir=%{_docdir} \
+		--include_zope_and_plone=%{include_zope_and_plone} --VIRT_SUPPORT=0
+%endif
+
 make %{?_smp_mflags} conga
 
 %install
@@ -198,10 +211,16 @@
 Requires: initscripts
 Requires: oddjob dbus openssl pam cyrus-sasl >= 2.1
 Requires: sed util-linux
-Requires: modcluster >= 0.10.0
+Requires: modcluster >= 0.12.0
 
 # modreboot
 
+# modvirt
+
+%if %{virt_support}
+BuildRequires: libvirt-devel
+%endif
+
 # modrpm
 
 # modstorage
@@ -253,13 +272,14 @@
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modlog.systembus.conf
 			%{_libexecdir}/ricci-modlog
 
+# modvirt
+%config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modvirt.oddjob.conf
+%config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modvirt.systembus.conf
+			%{_libexecdir}/ricci-modvirt
+
 %pre -n ricci
-if ! /bin/grep ^ricci\:x /etc/group >&/dev/null; then
-	/usr/sbin/groupadd -r -f ricci >&/dev/null
-fi
-if ! /bin/grep ^ricci\:x /etc/passwd >&/dev/null; then
-	/usr/sbin/useradd -r -M -s /sbin/nologin -d /var/lib/ricci -g ricci ricci >&/dev/null
-fi
+getent group ricci >/dev/null || groupadd -r ricci
+getent passwd ricci >/dev/null || useradd -r -M -g ricci -d /var/lib/ricci -s /sbin/nologin -c "ricci daemon user" ricci
 exit 0
 
 %post -n ricci
--- conga/make/version.in	2008/07/23 19:55:40	1.21.2.41
+++ conga/make/version.in	2008/07/28 17:49:45	1.21.2.42
@@ -1,2 +1,2 @@
 VERSION=0.12.1
-RELEASE=0
+RELEASE=1
--- conga/ricci/modules/virt/Makefile	2008/07/14 16:00:12	1.1.2.1
+++ conga/ricci/modules/virt/Makefile	2008/07/28 17:49:45	1.1.2.2
@@ -14,7 +14,7 @@
 include ${top_srcdir}/make/defines.mk
 
 
-TARGET = modvirt
+TARGET = ricci-modvirt
 
 OBJECTS = main.o \
 	VirtModule.o \
@@ -22,8 +22,11 @@
 
 PARANOID=0
 INCLUDE += -I${top_srcdir}/common/
-CXXFLAGS += -DPARANOIA=$(PARANOID)
-LDFLAGS += -lvirt
+CXXFLAGS += -DPARANOIA=$(PARANOID) -DVIRT_SUPPORT=$(VIRT_SUPPORT)
+
+ifeq ($(VIRT_SUPPORT), 1)
+	LDFLAGS += -lvirt
+endif
 
 ifeq ($(PARANOID), 1)
 	LDFLAGS += ${top_srcdir}/common/paranoid/*.o
@@ -39,9 +42,9 @@
 	$(INSTALL_DIR) ${libexecdir}
 	$(INSTALL_BIN) ${TARGET} ${libexecdir}
 	$(INSTALL_DIR) ${sysconfdir}/oddjobd.conf.d
-	$(INSTALL_FILE) d-bus/modvirt.oddjob.conf ${sysconfdir}/oddjobd.conf.d
+	$(INSTALL_FILE) d-bus/ricci-modvirt.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR) ${sysconfdir}/dbus-1/system.d
-	$(INSTALL_FILE) d-bus/modvirt.systembus.conf ${sysconfdir}/dbus-1/system.d
+	$(INSTALL_FILE) d-bus/ricci-modvirt.systembus.conf ${sysconfdir}/dbus-1/system.d
 
 uninstall:
 
--- conga/ricci/modules/virt/Virt.cpp	2008/07/14 16:00:12	1.1.2.1
+++ conga/ricci/modules/virt/Virt.cpp	2008/07/28 17:49:45	1.1.2.2
@@ -23,7 +23,9 @@
 	#include <sys/stat.h>
 	#include <string.h>
 	#include <errno.h>
+#if VIRT_SUPPORT == 1
 	#include <libvirt/libvirt.h>
+#endif
 
 	#include "sys_util.h"
 	#include "base64.h"
@@ -53,6 +55,8 @@
 	return false;
 }
 
+#if VIRT_SUPPORT == 1
+
 map<String, String> Virt::get_vm_list(const String &hvURI) {
 	std::map<String, String> vm_list;
 	int i;
@@ -130,3 +134,12 @@
 	virConnectClose(con);
 	return vm_list;
 }
+
+#else
+
+map<String, String> Virt::get_vm_list(const String &hvURI) {
+	std::map<String, String> vm_list;
+	return vm_list;
+}
+
+#endif
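
The #if VIRT_SUPPORT split above keeps Virt::get_vm_list defined on every
architecture, so callers never need guards of their own; the build flag
alone selects the libvirt-backed body or the empty stub. The same pattern
in miniature (HAVE_BACKEND and list_items are placeholder names, not from
this patch):

    #include <map>
    #include <string>

    #ifndef HAVE_BACKEND
    #define HAVE_BACKEND 0
    #endif

    #if HAVE_BACKEND == 1
    std::map<std::string, std::string> list_items()
    {
        std::map<std::string, std::string> items;
        items["demo"] = "running"; // stand-in for a real backend query
        return items;
    }
    #else
    std::map<std::string, std::string> list_items()
    {
        return std::map<std::string, std::string>(); // empty stub
    }
    #endif
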
--- conga/ricci/test_suite/cluster/vm_list.xml	2008/03/19 14:45:34	1.1.2.1
+++ conga/ricci/test_suite/cluster/vm_list.xml	2008/07/28 17:49:45	1.1.2.2
@@ -2,7 +2,7 @@
 <ricci version="1.0" function="process_batch" async="false">
 <batch>
 
-<module name="cluster">
+<module name="virt">
 <request sequence="1254" API_version="1.0">
 <function_call name="list_vm" />
 </request>



^ permalink raw reply	[flat|nested] 46+ messages in thread
* [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. ...
@ 2008-07-29 19:47 rmccabe
  0 siblings, 0 replies; 46+ messages in thread
From: rmccabe @ 2008-07-29 19:47 UTC (permalink / raw)
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	rmccabe at sourceware.org	2008-07-29 19:47:07

Modified files:
	.              : clustermon.spec.in.in conga.spec.in.in 
	luci/cluster   : cluster_config-macros cluster_svc-macros 
	                 fence-macros validate_config_qdisk.js 
	luci/plone-custom: conga.js 
	luci/site/luci/Extensions: HelperFunctions.py LuciValidation.py 
	                           LuciZopeClusterPortal.py 
	                           StorageReport.py cluster_adapters.py 
	                           conga_constants.py 
	luci/storage   : form-macros 
	ricci/modules/cluster/clumon: REDHAT-CLUSTER-MIB 
	ricci/modules/rpm: PackageHandler.cpp 
	ricci/modules/service: ServiceManager.cpp 
	ricci/modules/storage: LVM.cpp 
	ricci/modules/virt: Makefile Virt.cpp 
	ricci/ricci    : DBusController.cpp 
	ricci/ricci/d-bus: ricci.oddjob.conf 
	ricci/test_suite/cluster: vm_list.xml 

Log message:
	Forward port fixes from RHEL5

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/clustermon.spec.in.in.diff?cvsroot=cluster&r1=1.45&r2=1.46
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.99&r2=1.100
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/cluster_config-macros.diff?cvsroot=cluster&r1=1.4&r2=1.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/cluster_svc-macros.diff?cvsroot=cluster&r1=1.7&r2=1.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/fence-macros.diff?cvsroot=cluster&r1=1.3&r2=1.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/validate_config_qdisk.js.diff?cvsroot=cluster&r1=1.12&r2=1.13
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/plone-custom/conga.js.diff?cvsroot=cluster&r1=1.14&r2=1.15
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/HelperFunctions.py.diff?cvsroot=cluster&r1=1.15&r2=1.16
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciValidation.py.diff?cvsroot=cluster&r1=1.10&r2=1.11
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciZopeClusterPortal.py.diff?cvsroot=cluster&r1=1.3&r2=1.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/StorageReport.py.diff?cvsroot=cluster&r1=1.30&r2=1.31
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.283&r2=1.284
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&r1=1.50&r2=1.51
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/storage/form-macros.diff?cvsroot=cluster&r1=1.31&r2=1.32
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/cluster/clumon/REDHAT-CLUSTER-MIB.diff?cvsroot=cluster&r1=1.2&r2=1.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/rpm/PackageHandler.cpp.diff?cvsroot=cluster&r1=1.24&r2=1.25
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/service/ServiceManager.cpp.diff?cvsroot=cluster&r1=1.20&r2=1.21
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/storage/LVM.cpp.diff?cvsroot=cluster&r1=1.13&r2=1.14
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/virt/Makefile.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/modules/virt/Virt.cpp.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/ricci/DBusController.cpp.diff?cvsroot=cluster&r1=1.18&r2=1.19
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/ricci/d-bus/ricci.oddjob.conf.diff?cvsroot=cluster&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/ricci/test_suite/cluster/vm_list.xml.diff?cvsroot=cluster&r1=1.1&r2=1.2

--- conga/clustermon.spec.in.in	2008/06/06 16:41:52	1.45
+++ conga/clustermon.spec.in.in	2008/07/29 19:46:59	1.46
@@ -27,7 +27,7 @@
 Source0: %{name}-%{version}.tar.gz
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
-BuildRequires: cman-devel libvirt-devel
+BuildRequires: cman-devel
 BuildRequires: glibc-devel gcc-c++ libxml2-devel
 BuildRequires: openssl-devel dbus-devel pam-devel pkgconfig
 BuildRequires: net-snmp-devel tog-pegasus-devel
@@ -45,6 +45,7 @@
 		--docdir=%{_docdir} \
 		--pegasus_providers_dir=%{PEGASUS_PROVIDERS_DIR} \
 		--include_zope_and_plone=no
+
 make %{?_smp_mflags} clustermon
 
 %install
@@ -56,7 +57,6 @@
 
 
 
-
 ### cluster module ###
 
 
--- conga/conga.spec.in.in	2008/06/13 18:38:49	1.99
+++ conga/conga.spec.in.in	2008/07/29 19:47:00	1.100
@@ -31,9 +31,14 @@
 %endif
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
+%define virt_support 0
+
+%ifarch i386 x86_64 ia64
+%define virt_support 1
+%endif
+
 BuildRequires: python-devel >= 2.4.1
 BuildRequires: glibc-devel gcc-c++ libxml2-devel sed
-BuildRequires: cman-devel
 BuildRequires: cyrus-sasl-devel >= 2.1
 BuildRequires: openssl-devel dbus-devel pkgconfig file
 
@@ -57,9 +62,16 @@
 %endif
 
 %build
+%if %{virt_support}
 %configure		--arch=%{_arch} \
 		--docdir=%{_docdir} \
-		--include_zope_and_plone=%{include_zope_and_plone}
+		--include_zope_and_plone=%{include_zope_and_plone} --VIRT_SUPPORT=1
+%else
+%configure		--arch=%{_arch} \
+		--docdir=%{_docdir} \
+		--include_zope_and_plone=%{include_zope_and_plone} --VIRT_SUPPORT=0
+%endif
+
 make %{?_smp_mflags} conga
 
 %install
@@ -199,10 +211,16 @@
 Requires: initscripts
 Requires: oddjob dbus openssl pam cyrus-sasl >= 2.1
 Requires: sed util-linux
-Requires: modcluster >= 0.10.0
+Requires: modcluster >= 0.12.0
 
 # modreboot
 
+# modvirt
+
+%if %{virt_support}
+BuildRequires: libvirt-devel
+%endif
+
 # modrpm
 
 # modstorage
@@ -254,13 +272,14 @@
 %config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modlog.systembus.conf
 			%{_libexecdir}/ricci-modlog
 
+# modvirt
+%config(noreplace)	%{_sysconfdir}/oddjobd.conf.d/ricci-modvirt.oddjob.conf
+%config(noreplace)	%{_sysconfdir}/dbus-1/system.d/ricci-modvirt.systembus.conf
+			%{_libexecdir}/ricci-modvirt
+
 %pre -n ricci
-if ! /bin/grep ^ricci\:x /etc/group >&/dev/null; then
-	/usr/sbin/groupadd -r -f ricci >&/dev/null
-fi
-if ! /bin/grep ^ricci\:x /etc/passwd >&/dev/null; then
-	/usr/sbin/useradd -r -M -s /sbin/nologin -d /var/lib/ricci -g ricci ricci >&/dev/null
-fi
+getent group ricci >/dev/null || groupadd -r ricci
+getent passwd ricci >/dev/null || useradd -r -M -g ricci -d /var/lib/ricci -s /sbin/nologin -c "ricci daemon user" ricci
 exit 0
 
 %post -n ricci
--- conga/luci/cluster/cluster_config-macros	2008/02/08 21:47:55	1.4
+++ conga/luci/cluster/cluster_config-macros	2008/07/29 19:47:02	1.5
@@ -715,21 +715,39 @@
 				</td>
 			</tr>
 
-			<tr class="systemsTable">
-				<td class="systemsTable">Device</td>
-				<td class="systemsTable">
-					<input type="text" name="device"
-						tal:attributes="value clusterinfo/device | nothing" />
-				</td>
-			</tr>
-
-			<tr class="systemsTable">
-				<td class="systemsTable">Label</td>
-				<td class="systemsTable">
-					<input type="text" name="label"
-						tal:attributes="value clusterinfo/label | nothing" />
-				</td>
-			</tr>
+			
+			<tr class="systemsTable"><td colspan="2">
+				<table class="systemsTable">
+					<tr class="systemsTable">
+						<td class="systemsTable">
+							<input type="radio" name="qdisk_dev_label"
+								onclick="disable_text_field(this.form.label, this.form.device)">Label
+						</td>
+						<td class="systemsTable">
+							<input type="text" name="label" id="qdisk_label"
+								onfocus="disable_text_field(this.form.label, this.form.device);this.form.qdisk_dev_label[0].checked='checked';"
+								tal:attributes="
+									disabled python:(clusterinfo.get('label') or not clusterinfo.get('device')) and '' or 'disabled';
									checked python:(clusterinfo.get('label') or not clusterinfo.get('device')) and 'checked' or '';
+									value clusterinfo/label | nothing" />
+						</td>
+					</tr>
+					<tr class="systemsTable">
+						<td class="systemsTable">
+							<input type="radio" name="qdisk_dev_label"
+								onclick="disable_text_field(this.form.device, this.form.label)">Device (deprecated)
+						</td>
+						<td class="systemsTable">
+							<input type="text" name="device" id="qdisk_device"
+								onfocus="disable_text_field(this.form.device, this.form.label);this.form.qdisk_dev_label[1].checked='checked';"
+								tal:attributes="
+									disabled python:clusterinfo.get('device') and '' or 'disabled';
+									checked python:clusterinfo.get('device') and 'checked' or '';
+									value clusterinfo/device | nothing" />
+						</td>
+					</tr>
+				</table>
+			</td></tr>
 		</table>
 		</div>
 
--- conga/luci/cluster/cluster_svc-macros	2008/03/06 21:27:16	1.7
+++ conga/luci/cluster/cluster_svc-macros	2008/07/29 19:47:02	1.8
@@ -46,7 +46,7 @@
 						class python: 'cluster service ' + (running and 'running' or 'stopped')"
 						tal:content="svc/name" />
 					<tal:block tal:condition="exists:svc/is_vm">
-						(virtual service)
+						(virtual machine service)
 					</tal:block>
 				</td>
 
@@ -156,46 +156,8 @@
 				<p class="reshdr">Create a Virtual Machine Service</p>
 			</td></tr>
 		<tfoot class="systemsTable">
-			<tr class="systemsTable">
-				<td>Automatically start this service</td>
-				<td>
-					<input type="checkbox" name="autostart" checked="checked">
-				</td>
-			</tr>
-			<tr class="systemsTable">
-				<td>Run exclusive</td>
-				<td>
-					<input type="checkbox" name="exclusive">
-				</td>
-			</tr>
-			<tr class="systemsTable">
-				<td>Failover Domain</td>
-				<td>
-					<select name="domain">
-						<option value="" selected="selected">None</option>
-						<tal:block tal:repeat="f python:here.get_fdom_names(modelb)">
-							<option tal:content="f"
-								tal:attributes="value f" />
-						</tal:block>
-					</select>
-				</td>
-			</tr>
-
-			<tr class="systemsTable">
-				<td>Recovery policy</td>
-				<td>
-					<select name="recovery">
-						<option value="">Select a recovery policy</option>
-						<option name="relocate" value="relocate">Relocate</option>
-						<option name="restart" value="restart">Restart</option>
-						<option name="disable" value="disable">Disable</option>
-					</select>
-				</td>
-			</tr>
-
 			<tr class="systemsTable"
 				tal:condition="exists:clusterinfo/vm_migration_choice">
-
 				<td>Migration type</td>
 				<td>
 					<select name="migration_type">
@@ -205,6 +167,8 @@
 				</td>
 			</tr>
 						
+			<tal:block metal:use-macro="here/cluster_svc-macros/macros/failover-prefs-macro" />
+
 			<tr class="systemsTable"><td colspan="2">
 				<div class="hbSubmit">
 					<input type="submit" value="Create Virtual Machine Service" />
@@ -228,7 +192,7 @@
 
 <div metal:define-macro="vmconfig-form">
 <form method="post" action=""
-	tal:define="vminfo python:here.getVMInfo(modelb, request)">
+	tal:define="sinfo python:here.getVMInfo(modelb, request)">
 
 	<input type="hidden" name="clustername"
 		tal:attributes="value request/clustername | nothing" />
@@ -237,14 +201,14 @@
 		tal:attributes="value request/pagetype | nothing" />
 
 	<input type="hidden" name="oldname"
-		tal:attributes="value vminfo/name | nothing" />
+		tal:attributes="value sinfo/name | nothing" />
 
 	<div class="service_comp_list">
 	<table class="systemsTable">
 		<thead class="systemsTable">
 			<tr class="systemsTable">
 				<td class="systemsTable">
-					<p class="reshdr">Properties for <tal:block tal:replace="vminfo/name | string:virtual machine service"/></p>
+					<p class="reshdr">Properties for <tal:block tal:replace="sinfo/name | string:virtual machine service"/></p>
 				</td>
 			</tr>
 
@@ -313,66 +277,21 @@
 		</thead>
 
 		<tfoot class="systemsTable">
-			<tr class="systemsTable">
-				<td>Automatically start this service</td>
-				<td>
-					<input type="checkbox" name="autostart"
-						tal:attributes="checked python: ('autostart' in vminfo and vminfo['autostart'] != '0') and 'checked' or ''" />
-				</td>
-			</tr>
-			<tr class="systemsTable">
-				<td>Run exclusive</td>
-				<td>
-					<input type="checkbox" name="exclusive"
-						tal:attributes="checked python: ('exclusive' in vminfo and vminfo['exclusive'] != '0') and 'checked' or ''" />
-				</td>
-			</tr>
-			<tr class="systemsTable">
-				<td>Failover Domain</td>
-				<td>
-					<select name="domain">
-						<option value="" tal:content="string:None"
-							tal:attributes="selected python: (not 'domain' in vminfo or not vminfo['domain']) and 'selected' or ''" />
-						<tal:block tal:repeat="f python:here.get_fdom_names(modelb)">
-							<option tal:content="f"
-								tal:attributes="
-									value f;
-									selected python: ('domain' in vminfo and vminfo['domain'] == f) and 'selected' or ''" />
-						</tal:block>
-					</select>
-				</td>
-			</tr>
-			<tr class="systemsTable">
-				<td>Recovery policy</td>
-				<td>
-					<select name="recovery">
-						<option value="">Select a recovery policy</option>
-						<option name="relocate" value="relocate"
-							tal:content="string:Relocate"
-							tal:attributes="selected python: ('recovery' in vminfo and vminfo['recovery'] == 'relocate') and 'selected' or ''" />
-						<option name="restart" value="restart"
-							tal:content="string:Restart"
-							tal:attributes="selected python: ('recovery' in vminfo and vminfo['recovery'] == 'restart') and 'selected' or ''" />
-						<option name="disable" value="disable"
-							tal:content="string:Disable"
-							tal:attributes="selected python: ('recovery' in vminfo and vminfo['recovery'] == 'disable') and 'selected' or ''" />
-					</select>
-				</td>
-			</tr>
-
 			<tr class="systemsTable"
-				tal:condition="exists:vminfo/vm_migration_choice">
+				tal:condition="exists:sinfo/vm_migration_choice">
 
 				<td>Migration type</td>
 				<td>
 					<select name="migration_type">
 						<option value="live" tal:content="string:Live"
-							tal:attributes="selected python:('migrate' not in vminfo or vminfo['migrate'] != 'pause') and 'selected' or ''" />
+							tal:attributes="selected python:(sinfo.get('migrate') != 'pause') and 'selected' or ''" />
 						<option value="pause" tal:content="string:Pause"
-							tal:attributes="selected python:('migrate' in vminfo and vminfo['migrate'] == 'pause') and 'selected' or ''" />
+							tal:attributes="selected python:(sinfo.get('migrate') == 'pause') and 'selected' or ''" />
 					</select>
 				</td>
 			</tr>
+
+			<tal:block metal:use-macro="here/cluster_svc-macros/macros/failover-prefs-macro" />
 						
 			<tr class="systemsTable"><td colspan="2">
 				<div class="hbSubmit">
@@ -386,14 +305,14 @@
 				<td><span class="cluster_help" title="e.g., guest1 if the VM config file is@/etc/xen/guest1">Virtual machine name</span></td>
 				<td>
 					<input type="text" name="vmname"
-						tal:attributes="value vminfo/name | nothing" />
+						tal:attributes="value sinfo/name | nothing" />
 				</td>
 			</tr>
 			<tr class="systemsTable">
 				<td><span class="cluster_help" title="e.g., /etc/xen/">Path to VM configuration files</span></td>
 				<td>
 					<input type="text" name="vmpath"
-						tal:attributes="value vminfo/path | nothing" />
+						tal:attributes="value sinfo/path | nothing" />
 				</td>
 			</tr>
 		</tbody>
@@ -427,43 +346,7 @@
 						<input type="text" length="20" name="service_name" value="" />
 					</td>
 				</tr>
-				<tr class="systemsTable">
-					<td class="systemsTable">
-						Automatically start this service
-					</td>
-					<td class="systemsTable">
-						<input type="checkbox" name="autostart" checked="checked" />
-					</td>
-				</tr>
-				<tr class="systemsTable">
-					<td class="systemsTable">Run exclusive</td>
-					<td class="systemsTable">
-						<input type="checkbox" name="exclusive">
-					</td>
-				</tr>
-				<tr class="systemsTable">
-					<td class="systemsTable">Failover Domain</td>
-					<td class="systemsTable">
-						<select name="domain">
-							<option value="" selected="selected">None</option>
-							<tal:block tal:repeat="f sinfo/fdoms">
-								<option tal:content="f"
-									tal:attributes="value f" />
-							</tal:block>
-						</select>
-					</td>
-				</tr>
-				<tr class="systemsTable">
-					<td class="systemsTable">Recovery policy</td>
-					<td class="systemsTable">
-						<select name="recovery">
-							<option value="">Select a recovery policy</option>
-							<option name="relocate" value="relocate">Relocate</option>
-							<option name="restart" value="restart">Restart</option>
-							<option name="disable" value="disable">Disable</option>
-						</select>
-					</td>
-				</tr>
+				<tal:block metal:use-macro="here/cluster_svc-macros/macros/failover-prefs-macro" />
 			</table>
 		</form>
 	</div>
@@ -522,7 +405,7 @@
 
 <div metal:define-macro="servicemigrate">
 	<script type="text/javascript">
-		set_page_title('Luci ??? cluster ??? services ??? Migrate a virtual service');
+		set_page_title('Luci ??? cluster ??? services ??? Migrate a virtual machine service');
 	</script>
 
 	<tal:block tal:define="
@@ -625,6 +508,76 @@
 	</tal:block>
 </div>
 
+<div metal:define-macro="failover-prefs-macro" tal:omit-tag="">
+	<tr>
+		<td>Automatically start this service</td>
+		<td>
+			<input type="checkbox" name="autostart"
+				tal:attributes="checked python:(sinfo and sinfo.get('autostart') and sinfo['autostart'].lower() != 'false') and 'checked'" />
+		</td>
+	</tr>
+
+	<tr>
+		<td>Run exclusive</td>
+		<td>
+			<input type="checkbox" name="exclusive"
+				tal:attributes="checked python:(sinfo and sinfo.get('exclusive')and sinfo.get('exclusive').lower() != 'false') and 'checked'" />
+		</td>
+	</tr>
+
+	<tr>
+		<td>Failover Domain</td>
+		<td>
+			<select name="domain">
+				<option value=""
+					tal:attributes="selected python:(not sinfo or not sinfo.get('domain')) and 'selected' or ''">None</option>
+				<tal:block tal:condition="exists:sinfo/fdoms">
+				<tal:block tal:repeat="f sinfo/fdoms">
+					<option tal:content="f" tal:attributes="
+						value f;
+						selected python:(sinfo and sinfo.get('domain') == f) and 'selected' or ''" />
+				</tal:block>
+				</tal:block>
+			</select>
+		</td>
+	</tr>
+
+	<tr class="systemsTable">
+		<td>Recovery policy</td>
+		<td>
+			<select name="recovery">
+				<option value="">Select a recovery policy</option>
+				<option name="relocate" value="relocate"
+					tal:content="string:Relocate"
+					tal:attributes="selected python:(sinfo and sinfo.get('recovery') == 'relocate') and 'selected' or ''" />
+				<option name="restart" value="restart"
+					tal:content="string:Restart"
+					tal:attributes="selected python:(sinfo and sinfo.get('recovery') == 'restart') and 'selected' or ''" />
+				<option name="disable" value="disable"
+					tal:content="string:Disable"
+					tal:attributes="selected python:(sinfo and sinfo.get('recovery') == 'disable') and 'selected' or ''" />
+			</select>
+		</td>
+	</tr>
+
+	<tr class="systemsTable">
+		<td class="systemsTable">
+			Maximum number of restart failures before relocating
+		</td>
+		<td class="systemsTable">
+			<input type="text" size="3" name="max_restarts"
+				tal:attributes="value sinfo/max_restarts|string:0" />
+		</td>
+	</tr>
+	<tr class="systemsTable">
+		<td class="systemsTable">Length of time in seconds after which to forget a restart</td>
+		<td class="systemsTable">
+			<input type="text" size="3" name="restart_expire_time"
+				tal:attributes="value sinfo/restart_expire_time|string:0" />
+		</td>
+	</tr>
+</div>
+
 <div metal:define-macro="service-config-head-macro" tal:omit-tag="">
 	<script type="text/javascript"
 		src="/luci/homebase/homebase_common.js">
@@ -778,51 +731,9 @@
 	<div class="service_comp_list">
 		<form name="service_name_form">
 			<table class="rescfg">
-				<tr>
-					<td>Automatically start this service</td>
-					<td><input type="checkbox" name="autostart"
-							tal:attributes="checked python: ('autostart' in sinfo and sinfo['autostart'].lower() != 'false') and 'checked'" />
-					</td>
-				</tr>
-				<tr>
-					<td>Run exclusive</td>
-					<td><input type="checkbox" name="exclusive"
-							tal:attributes="checked python: ('exclusive' in sinfo and sinfo['exclusive'].lower() != 'false') and 'checked'" />
-					</td>
-				</tr>
-				<tr>
-					<td>Failover Domain</td>
-					<td>
-						<select name="domain">
-							<option value=""
-								tal:attributes="selected python: (not 'domain' in sinfo or not sinfo['domain']) and 'selected' or ''">None</option>
-							<tal:block tal:repeat="f sinfo/fdoms">
-								<option tal:content="f"
-									tal:attributes="
-										value f;
-										selected python: ('domain' in sinfo and sinfo['domain'] == f) and 'selected' or ''" />
-							</tal:block>
-						</select>
-					</td>
-				</tr>
-				<tr class="systemsTable">
-					<td>Recovery policy</td>
-					<td>
-						<select name="recovery">
-							<option value="">Select a recovery policy</option>
-							<option name="relocate" value="relocate"
-								tal:content="string:Relocate"
-								tal:attributes="selected python: ('recovery' in sinfo and sinfo['recovery'] == 'relocate') and 'selected' or ''" />
-							<option name="restart" value="restart"
-								tal:content="string:Restart"
-								tal:attributes="selected python: ('recovery' in sinfo and sinfo['recovery'] == 'restart') and 'selected' or ''" />
-							<option name="disable" value="disable"
-								tal:content="string:Disable"
-								tal:attributes="selected python: ('recovery' in sinfo and sinfo['recovery'] == 'disable') and 'selected' or ''" />
-						</select>
-					</td>
-				</tr>
+				<tal:block metal:use-macro="here/cluster_svc-macros/macros/failover-prefs-macro" />
 			</table>
+
 			<input type="hidden" name="service_name"
 				tal:attributes="value sinfo/name | string:1" />
 		</form>
--- conga/luci/cluster/fence-macros	2008/04/23 17:33:37	1.3
+++ conga/luci/cluster/fence-macros	2008/07/29 19:47:02	1.4
@@ -684,6 +684,13 @@
 				</td>
 			</tr>
 			<tr>
+				<td>Module Name</td>
+				<td>
+					<input name="modulename" type="text"
+						tal:attributes="value cur_fencedev/modulename | nothing" />
+				</td>
+			</tr>
+			<tr>
 				<td>
 					<span title="Full path to a script to generate fence password">Password Script (optional)</span>
 				</td>
--- conga/luci/cluster/validate_config_qdisk.js	2008/01/02 20:52:22	1.12
+++ conga/luci/cluster/validate_config_qdisk.js	2008/07/29 19:47:02	1.13
@@ -206,6 +206,9 @@
 		var no_label = !form.label || str_is_blank(form.label.value);
 		if (no_dev && no_label)
 			errors.push('You must give either a label or a device.');
+		if (!no_dev && !no_label) {
+			errors.push('You may not specify both a device and a label.');
+		}
 
 		var hnum = document.getElementById('num_heuristics');
 		if (hnum) {
--- conga/luci/plone-custom/conga.js	2008/06/10 14:50:53	1.14
+++ conga/luci/plone-custom/conga.js	2008/07/29 19:47:02	1.15
@@ -248,6 +248,12 @@
 	elem.parentNode.removeChild(elem);
 }
 
+function disable_text_field(enable_obj, disable_obj) {
+	disable_obj.value = "";
+	disable_obj.disabled = "disabled";
+	enable_obj.disabled = "";
+}
+
 function swap_tabs(new_label, cur_tab, new_tab) {
 	if (cur_tab == new_tab) {
 		return (cur_tab);
--- conga/luci/site/luci/Extensions/HelperFunctions.py	2008/06/06 16:41:52	1.15
+++ conga/luci/site/luci/Extensions/HelperFunctions.py	2008/07/29 19:47:03	1.16
@@ -9,7 +9,7 @@
 import threading
 
 def resolveOSType(os_str):
-	if not os_str or os_str.find('Tikanga') != (-1) or os_str.find('Zod') != (-1) or os_str.find('Moonshine') != (-1) or os_str.find('Werewolf') != (-1) or os.str_find('Sulphur') != (-1):
+	if not os_str or os_str.find('Tikanga') != (-1) or os_str.find('Zod') != (-1) or os_str.find('Moonshine') != (-1) or os_str.find('Werewolf') != (-1) or os_str.find('Sulphur') != (-1):
 		return 'rhel5'
 	else:
 		return 'rhel4'
--- conga/luci/site/luci/Extensions/LuciValidation.py	2008/05/12 18:03:39	1.10
+++ conga/luci/site/luci/Extensions/LuciValidation.py	2008/07/29 19:47:03	1.11
@@ -268,7 +268,7 @@
 
 def validate_clusvc_add(model, request):
 	errors = list()
-	fvar = GetReqVars(request, [ 'form_xml', 'domain', 'recovery', 'svc_name', 'action' ])
+	fvar = GetReqVars(request, [ 'form_xml', 'domain', 'recovery', 'svc_name', 'action', 'max_restarts', 'restart_expire_time' ])
 
 	form_xml = fvar['form_xml']
 	if form_xml is None:
@@ -370,6 +370,26 @@
 	if recovery is not None and recovery != 'restart' and recovery != 'relocate' and recovery != 'disable':
 		errors.append('You entered an invalid recovery option: "%s" Valid options are "restart" "relocate" and "disable."')
 
+	max_restarts = None
+	restart_expire_time = None
+	if recovery == 'restart':
+		if fvar['max_restarts']:
+			try:
+				max_restarts = int(fvar['max_restarts'])
+				if max_restarts < 0:
+					raise ValueError, 'must be greater than or equal to 0'
+			except Exception, e:
+				errors.append('Maximum restarts must be a number greater than or equal to 0')
+				max_restarts = None
+		restart_expire_time = None
+		if fvar['restart_expire_time']:
+			try:
+				restart_expire_time = int(fvar['restart_expire_time'])
+				if restart_expire_time < 0:
+					raise ValueError, 'must be greater than or equal to 0'
+			except Exception, e:
+				errors.append('Restart expire time must be a number greater than or equal to 0')
+				restart_expire_time = None
+
 	service_name = fvar['svc_name']
 	if service_name is None:
 		if LUCI_DEBUG_MODE is True:
@@ -440,6 +460,11 @@
 		new_service.addAttribute('domain', fdom)
 	if recovery:
 		new_service.addAttribute('recovery', recovery)
+	if max_restarts is not None:
+		new_service.addAttribute('max_restarts', str(max_restarts))
+	if restart_expire_time is not None:
+		new_service.addAttribute('restart_expire_time', str(restart_expire_time))
+
 	new_service.addAttribute('exclusive', str(exclusive))
 	if autostart is not None:
 		new_service.attr_hash['autostart'] = autostart
@@ -725,6 +750,8 @@
 
 	if not device and not label:
 		errors.append('No Device or Label value was given')
+	if device and label:
+		errors.append('You may not specify both device and label')
 
 	num_heuristics = 0
 	try:
@@ -1058,7 +1085,7 @@
 def validate_vmsvc_form(model, request):
 	errors = list()
 
-	fvar = GetReqVars(request, [ 'vmname', 'oldname', 'vmpath', 'recovery', 'domain', 'migration_type'])
+	fvar = GetReqVars(request, [ 'vmname', 'oldname', 'vmpath', 'recovery', 'domain', 'migration_type', 'max_restarts', 'restart_expire_time'])
 
 	vm_name = fvar['vmname']
 	if vm_name is None:
@@ -1087,6 +1114,32 @@
 	recovery = fvar['recovery']
 	if recovery is not None and recovery != 'restart' and recovery != 'relocate' and recovery != 'disable':
 		errors.append('You entered an invalid recovery option "%s" for VM service "%s". Valid options are "restart" "relocate" and "disable"' % (recovery, vm_name))
+	max_restarts = None
+	restart_expire_time = None
+	if recovery == 'restart':
+		if fvar['max_restarts']:
+			try:
+				max_restarts = int(fvar['max_restarts'])
+				if max_restarts < 0:
+					raise ValueError, 'must be greater than or equal to 0'
+			except Exception, e:
+				errors.append('Maximum restarts must be a number greater than or equal to 0')
+				max_restarts = None
+		restart_expire_time = None
+		if fvar['restart_expire_time']:
+			try:
+				restart_expire_time = int(fvar['restart_expire_time'])
+				if restart_expire_time < 0:
+					raise ValueError, 'must be greater than or equal to 0'
+			except Exception, e:
+				errors.append('Restart expire time must be a number greater than or equal to 0')
+				restart_expire_time = None
+
+	service_name = fvar['vmname']
+	if service_name is None:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('vSA5: no service name')
+		errors.append('No service name was given')
+
 
 	migration_type = fvar['migration_type']
 	if migration_type is not None and migration_type != 'live' and migration_type != 'pause':
--- conga/luci/site/luci/Extensions/LuciZopeClusterPortal.py	2008/01/02 21:00:31	1.3
+++ conga/luci/site/luci/Extensions/LuciZopeClusterPortal.py	2008/07/29 19:47:03	1.4
@@ -251,10 +251,10 @@
 
 	if model.getIsVirtualized() is True:
 		vmadd = {}
-		vmadd['Title'] = 'Add a Virtual Service'
+		vmadd['Title'] = 'Add a Virtual Machine Service'
 		vmadd['cfg_type'] = 'vmadd'
 		vmadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, VM_ADD, cluname)
-		vmadd['Description'] = 'Add a Virtual Service to this cluster'
+		vmadd['Description'] = 'Add a Virtual Machine Service to this cluster'
 		if pagetype == VM_ADD:
 			vmadd['currentItem'] = True
 		else:
@@ -305,7 +305,7 @@
 		svc['Title'] = name
 		svc['cfg_type'] = 'vm'
 		svc['absolute_url'] = '%s?pagetype=%s&servicename=%s&clustername=%s' % (url, VM_CONFIG, name, cluname)
-		svc['Description'] = 'Configure this Virtual Service'
+		svc['Description'] = 'Configure this Virtual Machine Service'
 		if pagetype == VM_CONFIG:
 			try:
 				xname = request['servicename']
--- conga/luci/site/luci/Extensions/StorageReport.py	2008/04/23 17:33:37	1.30
+++ conga/luci/site/luci/Extensions/StorageReport.py	2008/07/29 19:47:03	1.31
@@ -1621,9 +1621,10 @@
 		mutable = var.getAttribute('mutable') == 'true'
 		var_type = var.getAttribute('type')
 		value = var.getAttribute('value')
+		def_value = value
 
 		d_units = ''
-		if name in ('size', 'extent_size', 'block_size', 'size_free', 'partition_begin' ):
+		if name in ('size', 'extent_size', 'block_size', 'journal_size', 'size_free', 'partition_begin' ):
 			d_units = 'bytes'
 		if 'percent' in name:
 			d_units = '%'
@@ -1685,7 +1686,7 @@
 			d_value = str(value)
 
 		hidden = False
-		if var_type == 'hidden' or name in ( 'partition_begin', 'snapshot' ):
+		if var_type == 'hidden' or name in ( 'partition_begin', 'snapshot' ) or name[0:11] == '__snap_size':
 			hidden = True
 
 		if name == 'removable':
@@ -1697,6 +1698,7 @@
 				'name': name,
 				'pretty_name': get_pretty_prop_name(name),
 				'type': d_type,
+				'default_val': def_value,
 				'value': d_value,
 				'units': d_units,
 				'validation': validation_data,
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2008/04/23 17:33:37	1.283
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2008/07/29 19:47:03	1.284
@@ -545,7 +545,9 @@
 	next_node_id = 1
 
 	try:
-		for x in system_list:
+		skeys = system_list.keys()
+		skeys.sort()
+		for x in skeys:
 			i = system_list[x]
 
 			try:
@@ -1167,7 +1169,6 @@
 				msg_list.append('Fix the error and try again:\n')
 			else:
 				msg_list.append('PASSED\n')
-				model.setModified(True)
 				msg_list.append('DONE\n')
 				msg_list.append('Propagating the new cluster.conf')
 
--- conga/luci/site/luci/Extensions/conga_constants.py	2008/03/05 23:08:58	1.50
+++ conga/luci/site/luci/Extensions/conga_constants.py	2008/07/29 19:47:04	1.51
@@ -155,6 +155,6 @@
 # Debugging parameters. Set LUCI_DEBUG_MODE to True and LUCI_DEBUG_VERBOSITY
 # to >= 2 to get full debugging output in syslog (LOG_DAEMON/LOG_DEBUG).
 
-LUCI_DEBUG_MODE			= True
-LUCI_DEBUG_NET			= True
-LUCI_DEBUG_VERBOSITY	= 5
+LUCI_DEBUG_MODE			= False
+LUCI_DEBUG_NET			= False
+LUCI_DEBUG_VERBOSITY	= 0
--- conga/luci/storage/form-macros	2008/01/02 20:52:23	1.31
+++ conga/luci/storage/form-macros	2008/07/29 19:47:04	1.32
@@ -87,6 +87,7 @@
 						</select>
 					</td>
 				</tr>
+				<tal:comment tal:replace="nothing">
 				<tr>
 					<td>
 						Display Devices by
@@ -99,6 +100,7 @@
 						</select>
 					</td>
 				</tr>
+				</tal:comment>
 			</table>
 		</form>
 	</fieldset>
@@ -865,21 +867,28 @@
 
 							<tal:block tal:condition="python:prop_type == 'select'">
 								<select tal:define="prop_options prop/value" tal:attributes="name p">
-									<tal:block
-										tal:condition="python: prop_units != 'bytes'"
-										tal:repeat="prop_opt prop_options">
-										<option tal:attributes="value prop_opt" />
-										<span tal:replace="prop_opt" />
+									<tal:block tal:condition="python: prop_units != 'bytes'">
+										<tal:block tal:repeat="prop_opt prop_options">
+											<option tal:attributes="value prop_opt" />
+											<span tal:replace="prop_opt" />
+										</tal:block>
 									</tal:block>
-									<tal:block
-										tal:condition="python: prop_units == 'bytes'"
-										tal:repeat="prop_opt prop_options">
-										<option tal:attributes="value prop_opt" />
-										<span
+
+									<tal:block tal:condition="python: prop_units == 'bytes'">
+										<tal:block
 											tal:define="
-												dummy python: here.bytes_to_value_units(prop_opt);
-												value python: str(dummy[0]) + ' ' + str(dummy[1])"
-											tal:replace="value" />
+													dummy2 python:map(lambda x: int(x), prop_options);
+													dummy3 python:dummy2.sort()"
+											tal:repeat="prop_opt dummy2">
+												<option tal:attributes="
+															value prop_opt;
+															selected python:prop.get('default_val') == str(prop_opt) and 'selected' or ''" />
+												<span
+													tal:define="
+														dummy python: here.bytes_to_value_units(prop_opt);
+														value python: str(dummy[0]) + ' ' + str(dummy[1])"
+													tal:replace="value" />
+										</tal:block>
 									</tal:block>
 								</select>
 							</tal:block>
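
The form-macros rework above does two things for byte-valued selects: it sorts the option list numerically (the old template emitted options in whatever string order they arrived) and it pre-selects the option matching the new 'default_val' property. The same logic in Python, for readability; bytes_to_value_units is the helper name the template really calls, but its behavior here is assumed:

  def bytes_to_value_units(n):
      # Assumed behavior: reduce a byte count to a (value, units) pair.
      for units in ('bytes', 'KB', 'MB', 'GB', 'TB'):
          if n < 1024 or units == 'TB':
              return n, units
          n //= 1024

  options = ['134217728', '33554432', '67108864']
  default_val = '67108864'

  for opt in sorted(int(o) for o in options):
      sel = ' selected' if str(opt) == default_val else ''
      val, units = bytes_to_value_units(opt)
      print('<option value="%d"%s>%d %s</option>' % (opt, sel, val, units))

This renders the options as 32 MB, 64 MB (selected), 128 MB — numeric order with the current value highlighted.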
--- conga/ricci/modules/cluster/clumon/REDHAT-CLUSTER-MIB	2007/09/11 02:45:27	1.2
+++ conga/ricci/modules/cluster/clumon/REDHAT-CLUSTER-MIB	2008/07/29 19:47:05	1.3
@@ -13,7 +13,7 @@
 		   	    1801 Varsity Drive
 			    Raleigh, North Carolina 27606
 			    USA
-
+		  
 		  email:    customerservice at redhat.com
                  "
     DESCRIPTION  "Red Hat Cluster Suite MIB
@@ -28,7 +28,7 @@
 rhcMIBInfo		OBJECT IDENTIFIER ::= { RedHatCluster 1 }
 rhcCluster		OBJECT IDENTIFIER ::= { RedHatCluster 2 }
 rhcTables		OBJECT IDENTIFIER ::= { RedHatCluster 3 }
-
+	
 
 
 
@@ -86,7 +86,7 @@
 	MAX-ACCESS  read-only
 	STATUS      current
 	DESCRIPTION
-	"Minimum number of votes required for quorum.
+	"Minimum number of votes required for quorum. 
 	If cluster is not quorate, all services are stopped."
 	::= { rhcCluster 4 }
 
@@ -234,7 +234,7 @@
 	::= { rhcTables 1 }
 
 rhcNodeEntry OBJECT-TYPE
-	SYNTAX      RchNodeEntry
+	SYNTAX      RhcNodeEntry
 	MAX-ACCESS  not-accessible
 	STATUS      current
 	DESCRIPTION
@@ -311,7 +311,7 @@
 	::= { rhcTables 2 }
 
 rhcServiceEntry OBJECT-TYPE
-	SYNTAX      RchServiceEntry
+	SYNTAX      RhcServiceEntry
 	MAX-ACCESS  not-accessible
 	STATUS      current
 	DESCRIPTION
--- conga/ricci/modules/rpm/PackageHandler.cpp	2008/06/06 16:41:53	1.24
+++ conga/ricci/modules/rpm/PackageHandler.cpp	2008/07/29 19:47:05	1.25
@@ -511,9 +511,12 @@
 			if (kernel.find("xen") == kernel.npos) {
 				set.packages.push_back("kmod-gfs");
 				set.packages.push_back("kmod-gfs2");
+				set.packages.push_back("cmirror");
+				set.packages.push_back("kmod-cmirror");
 			} else {
 				set.packages.push_back("kmod-gfs-xen");
 				set.packages.push_back("kmod-gfs2-xen");
+				set.packages.push_back("kmod-cmirror-xen");
 			}
 		}
 	}
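
The PackageHandler.cpp hunk adds the cluster-mirror packages to the storage package set, picking the kmod flavor that matches the running kernel: plain kernels get cmirror plus kmod-cmirror, Xen kernels get kmod-cmirror-xen. A rough Python rendering of the selection:

  def storage_kmods(kernel_release):
      # Mirrors the C++ branch above: the xen check is a substring test
      # on the running kernel's release string.
      if 'xen' not in kernel_release:
          return ['kmod-gfs', 'kmod-gfs2', 'cmirror', 'kmod-cmirror']
      return ['kmod-gfs-xen', 'kmod-gfs2-xen', 'kmod-cmirror-xen']

  print(storage_kmods('2.6.18-92.el5'))     # plain kernel
  print(storage_kmods('2.6.18-92.el5xen'))  # xen flavor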
--- conga/ricci/modules/service/ServiceManager.cpp	2008/06/06 16:41:54	1.20
+++ conga/ricci/modules/service/ServiceManager.cpp	2008/07/29 19:47:06	1.21
@@ -521,11 +521,12 @@
 		servs.push_back("gfs");
 		servs.push_back("scsi_reserve");
 	} else if (RHEL5 || FC6) {
-		descr = "Shared Storage: clvmd, gfs, gfs2";
+		descr = "Shared Storage: clvmd, cmirror, gfs, gfs2";
 		servs.push_back("clvmd");
 		servs.push_back("gfs");
 		servs.push_back("gfs2");
 		servs.push_back("scsi_reserve");
+		servs.push_back("cmirror");
 	}
 	s = ServiceSet(name, descr);
 	if (populate_set(s, servs))
--- conga/ricci/modules/storage/LVM.cpp	2007/09/11 02:45:28	1.13
+++ conga/ricci/modules/storage/LVM.cpp	2008/07/29 19:47:06	1.14
@@ -252,7 +252,7 @@
   String attrs = words[LVS_ATTR_IDX];
   props.set(Variable("attrs", attrs));
 
-  props.set(Variable("mirrored", attrs[0] == 'm'));
+  props.set(Variable("mirrored", attrs[0] == 'm' || attrs[0] == 'M'));
 
   // clustered
   String vg_attrs = words[LVS_VG_ATTR_IDX];
@@ -602,17 +602,37 @@
 void
 LVM::lvremove(const String& path)
 {
-  vector<String> args;
-  args.push_back("lvchange");
-  args.push_back("-an");
-  args.push_back(path);
-
-  String out, err;
-  int status;
-  if (utils::execute(LVM_BIN_PATH, args, out, err, status, false))
-    throw command_not_found_error_msg(LVM_BIN_PATH);
-  if (status != 0)
-    throw String("Unable to deactivate LV (might be in use by other cluster nodes)");
+	vector<String> args;
+	args.push_back("lvchange");
+	args.push_back("-an");
+	args.push_back(path);
+
+	String out, err;
+	int status;
+
+	if (utils::execute(LVM_BIN_PATH, args, out, err, status, false))
+		throw command_not_found_error_msg(LVM_BIN_PATH);
+
+	if (status != 0) {
+		bool ignore_err = false;
+
+		try {
+			Props props;
+			std::list<counting_auto_ptr<BD> > sources;
+			std::list<counting_auto_ptr<BD> > targets;
+			probe_vg(path, props, sources, targets);
+			if (props.get("snapshot").get_bool() ||
+				props.get("mirror").get_bool())
+			{
+				ignore_err = true;
+			}
+		} catch (...) {
+			ignore_err = false;
+		}
+
+		if (!ignore_err)
+			throw String("Unable to deactivate LV (might be in use by other cluster nodes)");
+	}
 
   try {
     args.clear();
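
The LVM.cpp change makes lvremove()'s deactivation step tolerant: "lvchange -an" can legitimately fail for snapshot and mirror LVs, so the error is swallowed for those and still raised for everything else. The control flow, sketched in Python — probe_lv_props stands in for the C++ probe_vg() call and is hypothetical:

  import subprocess

  def probe_lv_props(path):
      # Hypothetical stand-in for probe_vg(); a real version would parse
      # lvs/vgs output for the LV's snapshot and mirror flags.
      raise NotImplementedError

  def deactivate_lv(path):
      status = subprocess.call(['lvm', 'lvchange', '-an', path])
      if status != 0:
          ignore_err = False
          try:
              props = probe_lv_props(path)
              if props.get('snapshot') or props.get('mirror'):
                  ignore_err = True
          except Exception:
              ignore_err = False
          if not ignore_err:
              raise RuntimeError('Unable to deactivate LV '
                                 '(might be in use by other cluster nodes)')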
--- conga/ricci/modules/virt/Makefile	2008/07/10 20:25:58	1.1
+++ conga/ricci/modules/virt/Makefile	2008/07/29 19:47:06	1.2
@@ -14,7 +14,7 @@
 include ${top_srcdir}/make/defines.mk
 
 
-TARGET = modvirt
+TARGET = ricci-modvirt
 
 OBJECTS = main.o \
 	VirtModule.o \
@@ -22,8 +22,11 @@
 
 PARANOID=0
 INCLUDE += -I${top_srcdir}/common/
-CXXFLAGS += -DPARANOIA=$(PARANOID)
-LDFLAGS += -lvirt
+CXXFLAGS += -DPARANOIA=$(PARANOID) -DVIRT_SUPPORT=$(VIRT_SUPPORT)
+
+ifeq ($(VIRT_SUPPORT), 1)
+	LDFLAGS += -lvirt
+endif
 
 ifeq ($(PARANOID), 1)
 	LDFLAGS += ${top_srcdir}/common/paranoid/*.o
@@ -39,9 +42,9 @@
 	$(INSTALL_DIR) ${libexecdir}
 	$(INSTALL_BIN) ${TARGET} ${libexecdir}
 	$(INSTALL_DIR) ${sysconfdir}/oddjobd.conf.d
-	$(INSTALL_FILE) d-bus/modvirt.oddjob.conf ${sysconfdir}/oddjobd.conf.d
+	$(INSTALL_FILE) d-bus/ricci-modvirt.oddjob.conf ${sysconfdir}/oddjobd.conf.d
 	$(INSTALL_DIR) ${sysconfdir}/dbus-1/system.d
-	$(INSTALL_FILE) d-bus/modvirt.systembus.conf ${sysconfdir}/dbus-1/system.d
+	$(INSTALL_FILE) d-bus/ricci-modvirt.systembus.conf ${sysconfdir}/dbus-1/system.d
 
 uninstall:
 
--- conga/ricci/modules/virt/Virt.cpp	2008/07/10 20:25:58	1.1
+++ conga/ricci/modules/virt/Virt.cpp	2008/07/29 19:47:06	1.2
@@ -23,7 +23,9 @@
 	#include <sys/stat.h>
 	#include <string.h>
 	#include <errno.h>
+#if VIRT_SUPPORT == 1
 	#include <libvirt/libvirt.h>
+#endif
 
 	#include "sys_util.h"
 	#include "base64.h"
@@ -53,6 +55,8 @@
 	return false;
 }
 
+#if VIRT_SUPPORT == 1
+
 map<String, String> Virt::get_vm_list(const String &hvURI) {
 	std::map<String, String> vm_list;
 	int i;
@@ -130,3 +134,13 @@
 	virConnectClose(con);
 	return vm_list;
 }
+
+#else
+
+map<String, String> Virt::get_vm_list(const String &hvURI) {
+	std::map<String, String> vm_list;
+	throw String("Unsupported on this architecture.");
+	return vm_list;
+}
+
+#endif
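
Virt.cpp now builds in two modes: with VIRT_SUPPORT=1 it links against libvirt and enumerates domains, otherwise get_vm_list() becomes a stub that throws at call time. The C++ uses a compile-time flag; a rough Python analogue of the same degrade-to-stub pattern, with the returned payload simplified:

  try:
      import libvirt                    # optional dependency
      HAVE_LIBVIRT = True
  except ImportError:
      HAVE_LIBVIRT = False

  def get_vm_list(hv_uri):
      # Fail loudly at call time, not at import time, when the backing
      # library is missing -- matching the stubbed C++ build.
      if not HAVE_LIBVIRT:
          raise RuntimeError('Unsupported on this architecture.')
      conn = libvirt.open(hv_uri)
      try:
          # Map domain names to a placeholder value; the real module
          # returns per-VM data.
          return dict((name, '') for name in conn.listDefinedDomains())
      finally:
          conn.close()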
--- conga/ricci/ricci/DBusController.cpp	2008/01/02 20:47:38	1.18
+++ conga/ricci/ricci/DBusController.cpp	2008/07/29 19:47:06	1.19
@@ -41,12 +41,12 @@
 DBusController::DBusController()
 {
 	// TODO: dynamically determine,
-	// currently, rpm requires storage and cluster modules
 	_mod_map["storage"]		= "modstorage_rw";
 	_mod_map["cluster"]		= "modcluster_rw";
 	_mod_map["rpm"]			= "modrpm_rw";
 	_mod_map["log"]			= "modlog_rw";
 	_mod_map["service"]		= "modservice_rw";
+	_mod_map["virt"]		= "modvirt_rw";
 	_mod_map["reboot"]		= "reboot";
 
 	MutexLocker lock(_dbus_mutex);
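
DBusController registers the new virt module exactly like the existing ones: a static map from ricci module name to the D-Bus method oddjob exposes for it, consulted when dispatching a request. A minimal sketch of that lookup:

  # Module-name -> D-Bus method map, as populated in the constructor above.
  MOD_MAP = {
      'storage': 'modstorage_rw',
      'cluster': 'modcluster_rw',
      'rpm':     'modrpm_rw',
      'log':     'modlog_rw',
      'service': 'modservice_rw',
      'virt':    'modvirt_rw',
      'reboot':  'reboot',
  }

  def method_for(module):
      # An unknown module name is a request error, not a crash.
      try:
          return MOD_MAP[module]
      except KeyError:
          raise ValueError('no such module: %s' % module)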
--- conga/ricci/ricci/d-bus/ricci.oddjob.conf	2006/06/15 03:08:37	1.1
+++ conga/ricci/ricci/d-bus/ricci.oddjob.conf	2008/07/29 19:47:06	1.2
@@ -18,6 +18,9 @@
 			<method name="modservice_rw">
 				<allow user="ricci"/>
 			</method>
+			<method name="modvirt_rw">
+				<allow user="ricci"/>
+			</method>
                         <method name="reboot">
 				<helper exec="/sbin/reboot"
 					arguments="0"
--- conga/ricci/test_suite/cluster/vm_list.xml	2008/03/14 19:58:12	1.1
+++ conga/ricci/test_suite/cluster/vm_list.xml	2008/07/29 19:47:07	1.2
@@ -2,7 +2,7 @@
 <ricci version="1.0" function="process_batch" async="false">
 <batch>
 
-<module name="cluster">
+<module name="virt">
 <request sequence="1254" API_version="1.0">
 <function_call name="list_vm" />
 </request>




end of thread, other threads:[~2008-07-29 19:47 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-08-09 21:13 [Cluster-devel] conga ./clustermon.spec.in.in ./conga.spec.in. kupcevic
  -- strict thread matches above, loose matches on Subject: below --
2006-08-15  4:15 kupcevic
2006-08-16  6:34 kupcevic
2006-08-22 20:12 kupcevic
2006-08-22 23:01 kupcevic
2006-09-26  5:21 kupcevic
2006-10-04 16:32 kupcevic
2006-10-16 15:56 kupcevic
2006-10-16 21:01 kupcevic
2006-10-25 16:35 kupcevic
2006-10-25 18:47 rmccabe
2006-10-31 20:34 kupcevic
2006-11-01 20:43 rmccabe
2006-11-01 23:11 kupcevic
2006-11-02  0:46 rmccabe
2006-11-16 19:35 kupcevic
2006-11-17  0:59 kupcevic
2006-11-17 20:46 kupcevic
2006-12-12 13:53 kupcevic
2006-12-13 19:21 kupcevic
2007-01-17 14:32 kupcevic
2007-01-17 14:57 kupcevic
2007-01-17 16:36 kupcevic
2007-01-23 22:34 kupcevic
2007-02-05 12:12 kupcevic
2007-02-05 20:08 rmccabe
2007-02-05 22:01 kupcevic
2007-02-07  1:36 kupcevic
2007-03-20 20:52 kupcevic
2007-04-11 19:23 rmccabe
2007-04-11 20:15 rmccabe
2007-05-01 15:57 rmccabe
2007-06-27  7:43 rmccabe
2007-08-08 21:24 rmccabe
2007-08-09 22:02 rmccabe
2007-08-13 19:06 rmccabe
2007-08-20 16:23 rmccabe
2008-01-29 22:02 rmccabe
2008-02-12 17:40 rmccabe
2008-04-07 20:11 rmccabe
2008-04-11  6:48 rmccabe
2008-04-11  6:54 rmccabe
2008-04-18  3:31 rmccabe
2008-05-12 15:13 rmccabe
2008-07-28 17:49 rmccabe
2008-07-29 19:47 rmccabe
