From: rmccabe@sourceware.org <rmccabe@sourceware.org>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] conga/luci cluster/form-macros cluster/index_h ...
Date: 16 Oct 2006 04:26:21 -0000
Message-ID: <20061016042621.28384.qmail@sourceware.org>
CVSROOT: /cvs/cluster
Module name: conga
Changes by: rmccabe at sourceware.org 2006-10-16 04:26:19
Modified files:
luci/cluster : form-macros index_html resource-form-macros
luci/site/luci/Extensions: ricci_communicator.py
homebase_adapters.py
conga_constants.py ricci_bridge.py
cluster_adapters.py Variable.py
PropsObject.py
Log message:
all sorts of fixes and cleanups..
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.84&r2=1.85
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/index_html.diff?cvsroot=cluster&r1=1.18&r2=1.19
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/resource-form-macros.diff?cvsroot=cluster&r1=1.20&r2=1.21
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_communicator.py.diff?cvsroot=cluster&r1=1.7&r2=1.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/homebase_adapters.py.diff?cvsroot=cluster&r1=1.31&r2=1.32
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&r1=1.16&r2=1.17
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&r1=1.27&r2=1.28
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.110&r2=1.111
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Variable.py.diff?cvsroot=cluster&r1=1.2&r2=1.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/PropsObject.py.diff?cvsroot=cluster&r1=1.1&r2=1.2
--- conga/luci/cluster/form-macros 2006/10/13 21:25:14 1.84
+++ conga/luci/cluster/form-macros 2006/10/16 04:26:19 1.85
@@ -58,21 +58,24 @@
<script type="text/javascript">
set_page_title('Luci — cluster — cluster list');
</script>
+
<div id="cluster_list">
<div class="cluster" tal:repeat="clu clusystems">
- <tal:block tal:define="global ragent python: here.getRicciAgent(clu)" />
- <span tal:condition="python: ragent == ''">
- <strong class="errmsgs">An error occurred when trying to contact any of the nodes in the <span tal:replace="python: clu[0]"/> cluster.</strong>
- <hr/>
- </span>
+ <tal:block tal:define="
+ global ragent python: here.getRicciAgent(clu[0])" />
- <span tal:condition="python: ragent != ''">
- <tal:block
- tal:define="global stat python: here.getClusterStatus(ragent);
- global cstatus python: here.getClustersInfo(stat,request);
+ <div tal:condition="python: not ragent">
+ <strong class="errmsgs">An error occurred when trying to contact any of the nodes in the <span tal:replace="python: clu[0]"/> cluster.</strong>
+ <hr/>
+ </div>
+
+ <tal:block tal:condition="python: ragent">
+ <tal:block tal:define="
+ global stat python: here.getClusterStatus(ragent);
+ global cstatus python: here.getClustersInfo(stat, request);
global cluster_status python: 'cluster ' + (('running' in cstatus and cstatus['running'] == 'true') and 'running' or 'stopped');"
- />
+ />
<table class="cluster" width="100%">
<tr class="cluster info_top">
@@ -151,7 +154,7 @@
</tr>
</table>
<hr>
- </span>
+ </tal:block>
</div>
</div>
</div>
@@ -348,10 +351,8 @@
<script type="text/javascript">
set_page_title('Luci — cluster — Configure cluster properties');
</script>
- <tal:comment tal:replace="nothing">
- <span tal:define="global ricci_agent python: here.getRicciAgentForCluster(request)"/>
- </tal:comment>
+ <span tal:define="global ricci_agent python: here.getRicciAgentForCluster(request)" />
<tal:block
tal:define="global clusterinfo python: here.getClusterInfo(modelb, request)" />
@@ -1129,9 +1130,9 @@
global ricci_agent python: here.getRicciAgentForCluster(request);
global nodestatus python: here.getClusterStatus(ricci_agent);
global nodeinfo python: here.getNodeInfo(modelb, nodestatus, request);
- global fenceinfo python: here.getFenceInfo(modelb, request);
global status_class python: 'node_' + (nodeinfo['nodestate'] == '0' and 'active' or (nodeinfo['nodestate'] == '1' and 'inactive' or 'unknown'));
- global cluster_node_status_str python: (nodeinfo['nodestate'] == '0' and 'Cluster member' or (nodeinfo['nodestate'] == '1' and 'Currently not a cluster participant' or 'This node is not responding'));"
+ global cluster_node_status_str python: (nodeinfo['nodestate'] == '0' and 'Cluster member' or (nodeinfo['nodestate'] == '1' and 'Currently not a cluster participant' or 'This node is not responding'));
+ global fenceinfo python: here.getFenceInfo(modelb, request)"
/>
<table class="cluster node" width="100%">
@@ -1317,10 +1318,11 @@
<script type="text/javascript">
set_page_title('Luci — cluster — nodes');
</script>
+
<div id="node_list" tal:define="
global ricci_agent python: here.getRicciAgentForCluster(request);
global status python: here.getClusterStatus(ricci_agent);
- global nds python: here.getNodesInfo(modelb,status,request)">
+ global nds python: here.getNodesInfo(modelb, status, request)">
<div tal:repeat="nd nds">
<tal:block
@@ -1532,6 +1534,7 @@
<script type="text/javascript">
set_page_title('Luci — cluster — services');
</script>
+
<tal:block tal:omit-tag=""
tal:define="
global ricci_agent python: here.getRicciAgentForCluster(request);
@@ -1657,8 +1660,11 @@
<script type="text/javascript">
set_page_title('Luci — cluster — services — Start a service');
</script>
- <span tal:define="global ricci_agent python: here.getRicciAgentForCluster(request)"/>
- <span tal:define="result python: here.serviceStart(ricci_agent, request)"/>
+
+ <tal:block tal:define="
+ global ricci_agent python: here.getRicciAgentForCluster(request);
+ result python: here.serviceStart(ricci_agent, request)" />
+
<!-- <span metal:use-macro="here/form-macros/macros/serviceconfig-form"/> -->
</div>
@@ -1667,8 +1673,11 @@
<script type="text/javascript">
set_page_title('Luci — cluster — services — Restart a service');
</script>
- <span tal:define="global ricci_agent python: here.getRicciAgentForCluster(request)"/>
- <span tal:define="result python: here.serviceRestart(ricci_agent, request)"/>
+
+ <tal:block tal:define="
+ global ricci_agent python: here.getRicciAgentForCluster(request);
+ result python: here.serviceRestart(ricci_agent, request)" />
+
<!-- <span metal:use-macro="here/form-macros/macros/serviceconfig-form"/> -->
</div>
@@ -1676,8 +1685,11 @@
<script type="text/javascript">
set_page_title('Luci — cluster — services — Stop a service');
</script>
- <span tal:define="global ricci_agent python: here.getRicciAgentForCluster(request)"/>
- <span tal:define="result python: here.serviceStop(ricci_agent,request)"/>
+
+ <span tal:define="
+ global ricci_agent python: here.getRicciAgentForCluster(request);
+ result python: here.serviceStop(ricci_agent, request)" />
+
<!-- <span metal:use-macro="here/form-macros/macros/serviceconfig-form"/> -->
</div>
@@ -1720,7 +1732,7 @@
global ricci_agent python: here.getRicciAgentForCluster(request);
global global_resources python: here.getResourcesInfo(modelb, request);
global sstat python: here.getClusterStatus(ricci_agent);
- global sinfo python: here.getServiceInfo(sstat, modelb,request);
+ global sinfo python: here.getServiceInfo(sstat, modelb, request);
global running sinfo/running | nothing;" />
<tal:block tal:replace="structure python: '<script type='+chr(0x22)+'text/javascript'+chr(0x22)+'>'" />
@@ -1880,11 +1892,11 @@
<script type="text/javascript">
set_page_title('Luci — cluster — failover domains');
</script>
- <tal:block
- tal:define="
- global ragent python: here.getRicciAgentForCluster(request);
- global sta python: here.getClusterStatus(ragent);
- global fdominfo python: here.getFdomsInfo(modelb, request, sta);" />
+
+ <tal:block tal:define="
+ global ragent python: here.getRicciAgentForCluster(request);
+ global sta python: here.getClusterStatus(ragent);
+ global fdominfo python: here.getFdomsInfo(modelb, request, sta);" />
<div class="cluster fdom" tal:repeat="fdom fdominfo">
<div class="cluster fdom fdomname">
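[Editor's note: the form-macros changes above replace empty-string sentinel tests such as `tal:condition="python: ragent == ''"` with truthiness tests (`not ragent` / `ragent`). A minimal sketch of why the truthiness form is more robust, assuming getRicciAgent() may return None (or another falsy value) on failure rather than always '' — the helper name below is illustrative, not the real API:]

```python
# Stand-in for here.getRicciAgent(); returns None when no cluster
# node can be contacted (assumption for illustration).
def get_ricci_agent(cluster):
    return None

agent = get_ricci_agent('mycluster')

# Old style: only catches the empty-string sentinel, misses None.
old_error = (agent == '')

# New style: catches '', None, and any other falsy failure value.
new_error = not agent
```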
--- conga/luci/cluster/index_html 2006/10/09 16:16:11 1.18
+++ conga/luci/cluster/index_html 2006/10/16 04:26:19 1.19
@@ -34,9 +34,11 @@
<span tal:condition="not: hascluster">
<meta googaa="ooo"/>
</span>
+
<span tal:condition="hascluster">
<span tal:define="ri_agent python:here.getRicciAgentForCluster(request)">
- <span tal:define="resmap python:here.getClusterOS(ri_agent,request);
+
+ <span tal:define="resmap python:here.getClusterOS(ri_agent);
global isVirtualized resmap/isVirtualized;
global os_version resmap/os;"/>
</span>
@@ -52,6 +54,7 @@
<meta http-equiv="refresh" content="" tal:attributes="content isBusy/refreshurl"/>
</span>
</span>
+
<tal:comment replace="nothing"> A slot where you can insert elements in the header from a template </tal:comment>
</metal:headslot>
@@ -156,10 +159,14 @@
prefer layouts that don't use tables.
</tal:comment>
<!-- <div tal:define="global hascluster request/clustername |nothing"/> -->
- <span tal:condition="hascluster">
- <span tal:define="global ricci_agent python:here.getRicciAgentForCluster(request)"/>
- <div tal:omit-tag="" tal:define="global modelb python:here.getmodelbuilder(ricci_agent)" />
- </span>
+
+ <tal:block tal:condition="hascluster">
+ <tal:block tal:define="global ricci_agent python: here.getRicciAgentForCluster(request)" />
+ <tal:block tal:condition="ricci_agent"
+ tal:define="
+ global modelb python:here.getmodelbuilder(ricci_agent)" />
+ </tal:block>
+
<table id="portal-columns">
<tbody>
<tr>
--- conga/luci/cluster/resource-form-macros 2006/10/09 16:16:11 1.20
+++ conga/luci/cluster/resource-form-macros 2006/10/16 04:26:19 1.21
@@ -93,9 +93,8 @@
<h2>Resources Remove Form</h2>
<tal:block tal:define="
- ragent python: here.getRicciAgentForCluster(request);
- msg python: here.delResource(request, ragent)">
- <div tal:condition="msg" tal:content="msg" />
+ msg python: here.delResource(here.getRicciAgentForCluster(request), request)">
+ <span class="error" tal:condition="msg" tal:content="msg" />
</tal:block>
</div>
@@ -243,7 +242,7 @@
<h2>Resource <span tal:replace="python: ('edit' in request and request['edit']) and 'Edited' or 'Added'" /></h2>
<div tal:content="
- python: here.addResource(request, here.getRicciAgentForCluster(request))" />
+ python: here.addResource(here.getRicciAgentForCluster(request), request)" />
</div>
<div metal:define-macro="resourceconfig-form">
--- conga/luci/site/luci/Extensions/ricci_communicator.py 2006/10/10 21:07:18 1.7
+++ conga/luci/site/luci/Extensions/ricci_communicator.py 2006/10/16 04:26:19 1.8
@@ -10,8 +10,7 @@
from HelperFunctions import access_to_host_allowed
-CERTS_DIR_PATH='/var/lib/luci/var/certs/'
-
+CERTS_DIR_PATH = '/var/lib/luci/var/certs/'
class RicciCommunicator:
def __init__(self, hostname, port=11111):
@@ -38,14 +37,12 @@
self.__reported_hostname = hello.firstChild.getAttribute('hostname')
self.__os = hello.firstChild.getAttribute('os')
self.__dom0 = hello.firstChild.getAttribute('xen_host') == 'true'
-
+
pass
def hostname(self):
return self.__hostname
-
-
def authed(self):
return self.__authed
def system_name(self):
@@ -126,9 +123,22 @@
batch_node = node.cloneNode(True)
if batch_node == None:
raise 'missing <batch/> in ricci\'s response'
+
return batch_node
-
+ def batch_run(self, batch_str, async=True):
+ try:
+ batch_xml_str = '<?xml version="1.0" ?><batch>' + batch_str + '</batch>'
+ batch_xml = minidom.parseString(batch_xml_str).firstChild
+ except:
+ return None
+
+ try:
+ ricci_xml = self.process_batch(batch_xml, async)
+ except:
+ return None
+ return ricci_xml
+
def batch_report(self, batch_id):
if not self.authed():
raise 'not authenticated'
@@ -339,5 +349,5 @@
elif status == '5':
return -103, 'module removed from schedule'
- raise 'no ' + str(module_num) + 'th module in the batch, or malformed response'
+ raise Exception, str('no ' + str(module_num) + 'th module in the batch, or malformed response')
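[Editor's note: the new RicciCommunicator.batch_run() method above wraps a raw batch fragment in a <batch/> document, parses it, and hands the DOM node to process_batch(). A standalone sketch of just the wrap-and-parse step, under modern Python — the function name is illustrative; in the patch this logic lives inside batch_run():]

```python
from xml.dom import minidom

def wrap_batch(batch_str):
    # Mirrors batch_run()'s first step: wrap the fragment in a
    # <batch/> root, parse it, and return the root DOM node.
    # Returns None when the fragment is not well-formed XML,
    # matching batch_run()'s error behavior.
    batch_xml_str = '<?xml version="1.0" ?><batch>' + batch_str + '</batch>'
    try:
        return minidom.parseString(batch_xml_str).firstChild
    except Exception:
        return None

node = wrap_batch('<module name="cluster"/>')
```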
--- conga/luci/site/luci/Extensions/homebase_adapters.py 2006/10/13 17:12:41 1.31
+++ conga/luci/site/luci/Extensions/homebase_adapters.py 2006/10/16 04:26:19 1.32
@@ -8,6 +8,7 @@
import cgi
from ricci_defines import *
+from ricci_bridge import getClusterConf
from ricci_communicator import RicciCommunicator
from ricci_communicator import CERTS_DIR_PATH
from clusterOS import resolveOSType
@@ -1237,42 +1238,6 @@
def havePermEditPerms(self):
return isAdmin(self)
-def getClusterConf(rc):
- doc = xml.dom.minidom.Document()
- batch = doc.createElement('batch')
- module = doc.createElement('module')
- module.setAttribute('name', 'cluster')
- request = doc.createElement('request')
- request.setAttribute('API_version', '1.0')
- call = doc.createElement('function_call')
- call.setAttribute('name', 'get_cluster.conf')
- request.appendChild(call)
- module.appendChild(request)
- batch.appendChild(module)
-
- # temporary workaround for ricci bug
- system_info = rc.system_name()
- rc = RicciCommunicator(system_info)
- # end workaround
-
- try:
- ret = rc.process_batch(batch)
- except Exception, e:
- return str(e)
-
- if not ret:
- return None
-
- cur = ret
- while len(cur.childNodes) > 0:
- for i in cur.childNodes:
- if i.nodeType == xml.dom.Node.ELEMENT_NODE:
- if i.nodeName == 'var' and i.getAttribute('name') == 'cluster.conf':
- return i.childNodes[1].cloneNode(True)
- else:
- cur = i
- return None
-
def getClusterConfNodes(clusterConfDom):
cur = clusterConfDom
clusterNodes = list()
@@ -1303,3 +1268,35 @@
if storageList:
ret[1] = storageList
return ret
+
+def getClusterNode(self, nodename, clustername):
+ try:
+ cluster_node = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + str(clustername) + '/' + str(nodename))
+ return cluster_node
+ except:
+ return None
+
+def getStorageNode(self, nodename):
+ try:
+ storage_node = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/' + '/' + str(nodename))
+ return storage_node
+ except:
+ return None
+
+def setNodeFlag(self, node, flag_mask):
+ try:
+ flags = node.getProperty('flags')
+ node.manage_changeProperties({ 'flags': flags | flag_mask })
+ except:
+ try:
+ node.manage_addProperty('flags', flag_mask, 'int')
+ except:
+ pass
+
+def delNodeFlag(self, node, flag_mask):
+ try:
+ flags = node.getProperty('flags')
+ if flags & flag_mask != 0:
+ node.manage_changeProperties({ 'flags': flags & ~flag_mask })
+ except:
+ pass
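[Editor's note: the setNodeFlag()/delNodeFlag() helpers added above store per-node state as a bitmask in a Zope 'flags' property, using the CLUSTER_NODE_* masks introduced in conga_constants.py below. A self-contained sketch of the flag arithmetic, with the property machinery omitted:]

```python
# Masks as added in conga_constants.py (each flag is a distinct bit).
CLUSTER_NODE_NEED_AUTH = 0x01
CLUSTER_NODE_NOT_MEMBER = 0x02
CLUSTER_NODE_ADDED = 0x04

flags = 0
flags |= CLUSTER_NODE_NEED_AUTH            # set a flag (setNodeFlag)
flags |= CLUSTER_NODE_ADDED                # flags can be combined
needs_auth = bool(flags & CLUSTER_NODE_NEED_AUTH)  # test a flag
flags &= ~CLUSTER_NODE_NEED_AUTH           # clear a flag (delNodeFlag)
```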
--- conga/luci/site/luci/Extensions/conga_constants.py 2006/10/12 20:48:48 1.16
+++ conga/luci/site/luci/Extensions/conga_constants.py 2006/10/16 04:26:19 1.17
@@ -94,6 +94,7 @@
REDIRECT_MSG=" You will be redirected in 5 seconds. Please fasten your safety restraints."
+# Homebase-specific constants
HOMEBASE_ADD_USER="1"
HOMEBASE_ADD_SYSTEM="2"
HOMEBASE_PERMS="3"
@@ -102,4 +103,9 @@
HOMEBASE_ADD_CLUSTER="6"
HOMEBASE_ADD_CLUSTER_INITIAL="7"
+# Cluster node exception attribute flags
+CLUSTER_NODE_NEED_AUTH = 0x01
+CLUSTER_NODE_NOT_MEMBER = 0x02
+CLUSTER_NODE_ADDED = 0x04
+
PLONE_ROOT='luci'
--- conga/luci/site/luci/Extensions/ricci_bridge.py 2006/10/13 17:04:11 1.27
+++ conga/luci/site/luci/Extensions/ricci_bridge.py 2006/10/16 04:26:19 1.28
@@ -1,689 +1,76 @@
-from time import *
-import os
-import sys
-from socket import *
import xml
-import xml.dom
from xml.dom import minidom
-from conga_constants import *
-from RicciReceiveError import RicciReceiveError
+from ricci_communicator import RicciCommunicator
+def checkBatch(rc, batch_id):
+ try:
+ batch = rc.batch_report(batch_id)
+ if batch is None:
+ return True
+ except:
+ return False
+
+ try:
+ batchid = batch.getAttribute('batch_id')
+ result = batch.getAttribute('status')
+ except:
+ return False
-class ricci_bridge:
- def __init__(self, hostname, port=11111):
- self.__hostname = hostname
- self.__port = port
-
-
- def process(self, xml_out):
- CLUSTER_STR='<?xml version="1.0" ?><ricci async="false" function="process_batch" version="1.0"><batch><module name="cluster"><request API_version="1.0"><function_call name="get_cluster.conf"/></request></module></batch></ricci>'
-
- docc = None
- try:
- doc = self.makeConnection(CLUSTER_STR)
- except RicciReceiveError, r:
- return None
-
- #if doc == None:
- # print "Sorry, doc is None"
- if doc != None:
- bt_node = None
- for node in doc.firstChild.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'batch':
- bt_node = node
- if bt_node == None:
- #print "bt_node == None"
- doc = None
- else:
- #print doc.toxml()
- mod_node = None
- for node in bt_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'module':
- mod_node = node
- if mod_node == None:
- #print "mod_node == None"
- doc = None
- else:
- resp_node = None
- for node in mod_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- resp_node = node
- if resp_node == None:
- #print "resp_node == None"
- doc = None
- else:
- fr_node = None
- for node in resp_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- fr_node = node
- if fr_node == None:
- #print "fr_node == None"
- doc = None
- else:
- varnode = None
- for node in fr_node.childNodes:
- if node.nodeName == 'var':
- varnode = node
- break
- if varnode == None:
- #print "varnode == None"
- doc = None
- else:
- cl_node = None
- for node in varnode.childNodes:
- if node.nodeName == 'cluster':
- cl_node = node
- break
- if cl_node == None:
- #print "cl_node == None"
- doc = None
- else:
- docc = minidom.Document()
- docc.appendChild(cl_node)
-
- return docc
-
- def __sendall(self, str, ssl_sock):
- #print str
- s = str
- while len(s) != 0:
- pos = ssl_sock.write(s)
- s = s[pos:]
- return
-
-
- def __receive(self, ssl_sock):
- doc = None
- xml_in = ''
- try:
- while True:
- buff = ssl_sock.read(1024)
- if buff == '':
- break
- xml_in += buff
- try:
- doc = minidom.parseString(xml_in)
- break
- except:
- pass
- except:
- pass
- try:
- #print 'try parse xml'
- doc = minidom.parseString(xml_in)
- #print 'xml is good'
- except:
- pass
- return doc
-
- def getClusterStatus(self):
- CLUSTER_STR='<?xml version="1.0" ?><ricci async="false" function="process_batch" version="1.0"><batch><module name="cluster"><request API_version="1.0"><function_call name="status"/></request></module></batch></ricci>'
- # socket
- sock = socket(AF_INET, SOCK_STREAM)
- try:
- sock.connect((self.__hostname, self.__port))
- except:
- sock.close()
- return ''
-
- ss = 0
- try:
- ss = ssl(sock, PATH_TO_PRIVKEY, PATH_TO_CACERT)
- except sslerror, e:
- if ss:
- del ss
- sock.close()
- return ''
-
- # receive ricci header
- hello = self.__receive(ss)
- if hello != None:
- pass
- #print hello.toxml()
-
- try:
- self.__sendall(CLUSTER_STR, ss)
- doc = self.__receive(ss)
- except sslerror, e:
- doc = None
-
- del ss
- sock.close()
-
- if doc == None:
- return ''
- #print "Sorry, doc is None"
-
- payload = self.extractPayload(doc)
- return payload
-
- def startService(self,servicename, preferrednode = None):
- if preferrednode != None:
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/><var mutable="false" name="nodename" type="string" value=\"' + preferrednode + '\" /></function_call></request></module></batch></ricci>'
- else:
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
-
- batch_number, result = self.batchAttemptResult(payload)
- return (batch_number, result)
-
- def restartService(self,servicename):
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="restart_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
-
- batch_number, result = self.batchAttemptResult(payload)
- return (batch_number, result)
-
-
- def stopService(self,servicename):
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="stop_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
-
- batch_number, result = self.batchAttemptResult(payload)
- return (batch_number, result)
-
- def getDaemonStates(self, dlist):
- CLUSTER_STR='<?xml version="1.0" ?><ricci async="false" function="process_batch" version="1.0"><batch><module name="service"><request API_version="1.0"><function_call name="query"><var mutable="false" name="search" type="list_xml">'
-
- for item in dlist:
- CLUSTER_STR = CLUSTER_STR + '<service name=\"' + item + '\"/>'
-
- CLUSTER_STR = CLUSTER_STR + '</var></function_call></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(CLUSTER_STR)
- except RicciReceiveError, r:
- return None
-
- result = self.extractDaemonInfo(payload)
-
- return result
-
- def makeConnection(self,query_str):
- # socket
- sock = socket(AF_INET, SOCK_STREAM)
- try:
- sock.connect((self.__hostname, self.__port))
- except:
- sock.close()
- return None
-
- ss = 0
- try:
- ss = ssl(sock, PATH_TO_PRIVKEY, PATH_TO_CACERT)
- hello = self.__receive(ss)
- #print >> sys.stderr, hello.toxml()
- self.__sendall(query_str, ss)
- # receive response
- payload = self.__receive(ss)
- except sslerror, e:
- payload = None
-
- if ss:
- del ss
- sock.close()
-
- if payload == None:
- raise RicciReceiveError('FATAL',"Unable to receive ricci data for %s" % self.__hostname)
- return payload
-
-
- def extractPayload(self, doc):
- docc = None
- bt_node = None
- for node in doc.firstChild.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'batch':
- bt_node = node
- if bt_node == None:
- doc = None
- else:
- #print doc.toxml()
- mod_node = None
- for node in bt_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'module':
- mod_node = node
- if mod_node == None:
- doc = None
- else:
- resp_node = None
- for node in mod_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- resp_node = node
- if resp_node == None:
- doc = None
- else:
- fr_node = None
- for node in resp_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- fr_node = node
- if fr_node == None:
- doc = None
- else:
- varnode = None
- for node in fr_node.childNodes:
- if node.nodeName == 'var':
- varnode = node
- break
- if varnode == None:
- doc = None
- else:
- cl_node = None
- for node in varnode.childNodes:
- if node.nodeName == 'cluster':
- cl_node = node
- break
- if cl_node == None:
- doc = None
- else:
- docc = minidom.Document()
- docc.appendChild(cl_node)
- return docc
-
-
- def getBatchResult(self, doc):
- docc = None
- bt_node = None
- for node in doc.firstChild.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'batch':
- bt_node = node
- if bt_node == None:
- doc = None
- else:
- #print doc.toxml()
- mod_node = None
- for node in bt_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'module':
- mod_node = node
- if mod_node == None:
- doc = None
- else:
- resp_node = None
- for node in mod_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- resp_node = node
- if resp_node == None:
- doc = None
- else:
- fr_node = None
- for node in resp_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- fr_node = node
- if fr_node == None:
- doc = None
- else:
- varnode = None
- for node in fr_node.childNodes:
- if node.nodeName == 'var':
- varnode = node
- break
- if varnode == None:
- doc = None
- else:
- cl_node = None
- for node in varnode.childNodes:
- if node.nodeName == 'cluster':
- cl_node = node
- break
- if cl_node == None:
- doc = None
- else:
- docc = minidom.Document()
- docc.appendChild(cl_node)
- return docc
-
- def extractClusterConf(self, doc):
- docc = None
- bt_node = None
- for node in doc.firstChild.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'batch':
- bt_node = node
- if bt_node == None:
- #print "bt_node == None"
- doc = None
- else:
- #print doc.toxml()
- mod_node = None
- for node in bt_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'module':
- mod_node = node
- if mod_node == None:
- #print "mod_node == None"
- doc = None
- else:
- resp_node = None
- for node in mod_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- resp_node = node
- if resp_node == None:
- #print "resp_node == None"
- doc = None
- else:
- fr_node = None
- for node in resp_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- fr_node = node
- if fr_node == None:
- #print "fr_node == None"
- doc = None
- else:
- varnode = None
- for node in fr_node.childNodes:
- if node.nodeName == 'var':
- varnode = node
- break
- if varnode == None:
- #print "varnode == None"
- doc = None
- else:
- cl_node = None
- for node in varnode.childNodes:
- if node.nodeName == 'cluster':
- cl_node = node
- break
- if cl_node == None:
- #print "cl_node == None"
- doc = None
- else:
- docc = minidom.Document()
- docc.appendChild(cl_node)
-
- return docc
-
- def extractDaemonInfo(self, doc):
- resultlist = list()
- docc = None
- bt_node = None
- for node in doc.firstChild.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'batch':
- bt_node = node
- if bt_node == None:
- #print "bt_node == None"
- doc = None
- else:
- #print doc.toxml()
- mod_node = None
- for node in bt_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'module':
- mod_node = node
- if mod_node == None:
- #print "mod_node == None"
- doc = None
- else:
- resp_node = None
- for node in mod_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- resp_node = node
- if resp_node == None:
- #print "resp_node == None"
- doc = None
- else:
- fr_node = None
- for node in resp_node.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- fr_node = node
- if fr_node == None:
- #print "fr_node == None"
- doc = None
- else:
- varnode = None
- for node in fr_node.childNodes:
- if node.nodeName == 'var':
- varnode = node
- break
- if varnode == None:
- #print "varnode == None"
- doc = None
- else:
- svc_node = None
- for node in varnode.childNodes:
- if node.nodeName == 'service':
- svchash = {}
- svchash['name'] = node.getAttribute('name')
- svchash['enabled'] = node.getAttribute('enabled')
- svchash['running'] = node.getAttribute('running')
- resultlist.append(svchash)
-
- return resultlist
-
- def batchAttemptResult(self, doc):
- docc = None
- rc_node = None
- for node in doc.firstChild.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'batch':
- #get batch number and status code
- batch_number = node.getAttribute('batch_id')
- result = node.getAttribute('status')
- return (batch_number, result)
- else:
- #print "RETURNING NONE!!!"
- return (None, None )
-
-
-
- def getRicciResponse(self):
- sock = socket(AF_INET, SOCK_STREAM)
- sock.settimeout(2.0)
- try:
- sock.connect((self.__hostname, self.__port))
- except:
- sock.close()
- return False
-
- ss = 0
- try:
- ss = ssl(sock, PATH_TO_PRIVKEY, PATH_TO_CACERT)
- except sslerror, e:
- if ss:
- del ss
- sock.close()
- return False
- sock.settimeout(600.0) # 10 minutes
- # TODO: data transfer timeout should be much less,
- # leave until all calls are async ricci calls
-
- # receive ricci header
- try:
- hello = self.__receive(ss)
- except sslerror, e:
- hello = None
-
- del ss
- sock.close()
-
- if hello != None:
- return True
- else:
- return False
-
- def checkBatch(self, batch_id):
- QUERY_STR = '<?xml version="1.0" ?><ricci version="1.0" function="batch_report" batch_id="' + batch_id + '"/>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
- #return true if finished or not present
- success = payload.firstChild.getAttribute('success')
- if success != "0":
- return True #I think this is ok...if id cannot be found
- for node in payload.firstChild.childNodes:
- if node.nodeType == xml.dom.Node.ELEMENT_NODE:
- if node.nodeName == 'batch':
- #get batch number and status code
- batch_number = node.getAttribute('batch_id')
- result = node.getAttribute('status')
- if result == "0":
- return True
- else:
- return False
- else:
- return False
-
- return False
-
- def setClusterConf(self, clusterconf, propagate=True):
- if propagate == True:
- propg = "true"
- else:
- propg = "false"
-
- conf = str(clusterconf).replace('<?xml version="1.0"?>', '')
- conf = conf.replace('<?xml version="1.0" ?>', '')
- conf = conf.replace('<? xml version="1.0"?>', '')
- conf = conf.replace('<? xml version="1.0" ?>', '')
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="cluster"><request API_version="1.0"><function_call name="set_cluster.conf"><var type="boolean" name="propagate" mutable="false" value="' + propg + '"/><var type="xml" mutable="false" name="cluster.conf">' + conf + '</var></function_call></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
-
- batch_number, result = self.batchAttemptResult(payload)
- return (batch_number, result)
-
- def nodeLeaveCluster(self, cluster_shutdown=False):
- cshutdown = "false"
- if cluster_shutdown == True:
- cshutdown = "true"
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="cluster"><request sequence="111" API_version="1.0"><function_call name="stop_node"><var mutable="false" name="cluster_shutdown" type="boolean" value="' + cshutdown + '"/></function_call></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
- batch_number, result = self.batchAttemptResult(payload)
-
- return (batch_number, result)
-
- def nodeJoinCluster(self, cluster_startup=False):
- cstartup = "false"
- if cluster_startup == True:
- cstartup = "true"
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="cluster"><request sequence="111" API_version="1.0"><function_call name="start_node"><var mutable="false" name="cluster_startup" type="boolean" value="' + cstartup + '"/></function_call></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
- batch_number, result = self.batchAttemptResult(payload)
-
- return (batch_number, result)
-
- def nodeReboot(self):
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="reboot"><request sequence="111" API_version="1.0"><function_call name="reboot_now"/></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
- batch_number, result = self.batchAttemptResult(payload)
-
- return (batch_number, result)
-
- def nodeFence(self, nodename):
- QUERY_STR='<?xml version="1.0" ?><ricci async="true" function="process_batch" version="1.0"><batch><module name="cluster"><request sequence="111" API_version="1.0"><function_call name="fence_node"><var mutable="false" name="nodename" type="string" value="' + nodename + '"/></function_call></request></module></batch></ricci>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return None
-
- batch_number, result = self.batchAttemptResult(payload)
-
- return (batch_number, result)
-
- def getNodeLogs(self):
- QUERY_STR = '<?xml version="1.0" ?><ricci async="false" function="process_batch" version="1.0"><batch><module name="log"><request sequence="1254" API_version="1.0"><function_call name="get"><var mutable="false" name="age" type="int" value="360000"/><var mutable="false" name="tags" type="list_str"><listentry value="cluster"/></var></function_call></request></module>'
-
- try:
- payload = self.makeConnection(QUERY_STR)
- except RicciReceiveError, r:
- return "log not accessible"
+ if result == '0':
+ return True
- #parse out log entry
- return payload
+ return False
def addClusterNodeBatch(cluster_name,
- install_base,
- install_services,
- install_shared_storage,
- install_LVS,
- upgrade_rpms):
+ install_base,
+ install_services,
+ install_shared_storage,
+ install_LVS,
+ upgrade_rpms):
+
batch = '<?xml version="1.0" ?>'
batch += '<batch>'
-
-
batch += '<module name="rpm">'
batch += '<request API_version="1.0">'
batch += '<function_call name="install">'
batch += '<var name="upgrade" type="boolean" value="'
- if upgrade_rpms:
- batch += 'true'
- else:
- batch += 'false'
- batch += '"/>'
+ if upgrade_rpms:
+ batch += 'true'
+ else:
+ batch += 'false'
+ batch += '"/>'
+
batch += '<var name="sets" type="list_xml">'
- if install_base or install_services or install_shared_storage:
- batch += '<set name="Cluster Base"/>'
- if install_services:
- batch += '<set name="Cluster Service Manager"/>'
+
+ if install_base or install_services or install_shared_storage:
+ batch += '<set name="Cluster Base"/>'
+ if install_services:
+ batch += '<set name="Cluster Service Manager"/>'
if install_shared_storage:
- batch += '<set name="Clustered Storage"/>'
+ batch += '<set name="Clustered Storage"/>'
if install_LVS:
- batch += '<set name="Linux Virtual Server"/>'
+ batch += '<set name="Linux Virtual Server"/>'
+
batch += '</var>'
batch += '</function_call>'
batch += '</request>'
batch += '</module>'
-
-
+
need_reboot = install_base or install_services or install_shared_storage or install_LVS
- if need_reboot:
- batch += '<module name="reboot">'
- batch += '<request API_version="1.0">'
- batch += '<function_call name="reboot_now"/>'
- batch += '</request>'
- batch += '</module>'
- else:
- # need placeholder instead of reboot
- batch += '<module name="rpm">'
- batch += '<request API_version="1.0">'
- batch += '<function_call name="install"/>'
- batch += '</request>'
- batch += '</module>'
-
-
+ if need_reboot:
+ batch += '<module name="reboot">'
+ batch += '<request API_version="1.0">'
+ batch += '<function_call name="reboot_now"/>'
+ batch += '</request>'
+ batch += '</module>'
+ else:
+ # need placeholder instead of reboot
+ batch += '<module name="rpm">'
+ batch += '<request API_version="1.0">'
+ batch += '<function_call name="install"/>'
+ batch += '</request>'
+ batch += '</module>'
+
batch += '<module name="cluster">'
batch += '<request API_version="1.0">'
batch += '<function_call name="set_cluster.conf">'
@@ -700,121 +87,389 @@
batch += '</function_call>'
batch += '</request>'
batch += '</module>'
-
-
+
batch += '<module name="cluster">'
batch += '<request API_version="1.0">'
batch += '<function_call name="start_node"/>'
batch += '</request>'
batch += '</module>'
-
-
batch += '</batch>'
return minidom.parseString(batch).firstChild
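Aside: because the batch above is built by plain string concatenation, it can be sanity-checked with minidom alone before handing it to ricci. A minimal, self-contained sketch (the two-module batch below is a hypothetical reduction of what addClusterNodeBatch emits, not the full document):

```python
from xml.dom import minidom

# Hypothetical reduction of the batch addClusterNodeBatch() builds:
# one rpm install request followed by the reboot request.
batch = '<?xml version="1.0" ?>'
batch += '<batch>'
batch += '<module name="rpm">'
batch += '<request API_version="1.0">'
batch += '<function_call name="install">'
batch += '<var name="upgrade" type="boolean" value="false"/>'
batch += '<var name="sets" type="list_xml">'
batch += '<set name="Cluster Base"/>'
batch += '</var>'
batch += '</function_call>'
batch += '</request>'
batch += '</module>'
batch += '<module name="reboot">'
batch += '<request API_version="1.0">'
batch += '<function_call name="reboot_now"/>'
batch += '</request>'
batch += '</module>'
batch += '</batch>'

doc = minidom.parseString(batch)
module_names = [m.getAttribute('name')
                for m in doc.getElementsByTagName('module')]
```

If parseString() raises ExpatError here, a quoting mistake slipped into one of the concatenated fragments, which is exactly the failure mode this construction style is prone to.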
-def createClusterBatch(os_str,
- cluster_name,
- cluster_alias,
- nodeList,
- install_base,
- install_services,
- install_shared_storage,
- install_LVS,
- upgrade_rpms):
- batch = '<?xml version="1.0" ?>'
- batch += '<batch>'
-
- if os_str == 'rhel5':
- cluster_version = '5'
- elif os_str == 'rhel4':
- cluster_version = '4'
- else:
- cluster_version = 'unknown'
-
- batch += '<module name="rpm">'
- batch += '<request API_version="1.0">'
- batch += '<function_call name="install">'
- batch += '<var name="upgrade" type="boolean" value="'
- if upgrade_rpms:
- batch += 'true'
- else:
- batch += 'false'
- batch += '"/>'
- batch += '<var name="sets" type="list_xml">'
- if install_base or install_services or install_shared_storage:
- batch += '<set name="Cluster Base"/>'
- if install_services:
- batch += '<set name="Cluster Service Manager"/>'
- if install_shared_storage:
- batch += '<set name="Clustered Storage"/>'
- if install_LVS:
- batch += '<set name="Linux Virtual Server"/>'
- batch += '</var>'
- batch += '</function_call>'
- batch += '</request>'
- batch += '</module>'
-
-
- need_reboot = install_base or install_services or install_shared_storage or install_LVS
- if need_reboot:
- batch += '<module name="reboot">'
- batch += '<request API_version="1.0">'
- batch += '<function_call name="reboot_now"/>'
- batch += '</request>'
- batch += '</module>'
- else:
- # need placeholder instead of reboot
- batch += '<module name="rpm">'
- batch += '<request API_version="1.0">'
- batch += '<function_call name="install"/>'
- batch += '</request>'
- batch += '</module>'
-
-
- batch += '<module name="cluster">'
- batch += '<request API_version="1.0">'
- batch += '<function_call name="set_cluster.conf">'
- batch += '<var mutable="false" name="propagate" type="boolean" value="false"/>'
- batch += '<var mutable="false" name="cluster.conf" type="xml">'
- batch += '<cluster config_version="1" name="' + cluster_name + '" alias="' + cluster_alias + '">'
- batch += '<fence_daemon post_fail_delay="0" post_join_delay="3"/>'
-
- batch += '<clusternodes>'
- x = 1
- for i in nodeList:
- if os_str == "rhel4":
- batch += '<clusternode name="' + i + '" votes="1" />'
- else:
- batch += '<clusternode name="' + i + '" votes="1" nodeid="' + str(x) + '" />'
- x = x + 1
-
- batch += '</clusternodes>'
-
- if len(nodeList) == 2:
- batch += '<cman expected_votes="1" two_node="1"/>'
- else:
- batch += '<cman/>'
+def createClusterBatch( os_str,
+ cluster_name,
+ cluster_alias,
+ nodeList,
+ install_base,
+ install_services,
+ install_shared_storage,
+ install_LVS,
+ upgrade_rpms):
+
+ if os_str == 'rhel5':
+ cluster_version = '5'
+ elif os_str == 'rhel4':
+ cluster_version = '4'
+ else:
+ cluster_version = 'unknown'
+
+ batch = '<?xml version="1.0" ?>'
+ batch += '<batch>'
+ batch += '<module name="rpm">'
+ batch += '<request API_version="1.0">'
+ batch += '<function_call name="install">'
+ batch += '<var name="upgrade" type="boolean" value="'
+ if upgrade_rpms:
+ batch += 'true'
+ else:
+ batch += 'false'
+ batch += '"/>'
+ batch += '<var name="sets" type="list_xml">'
+
+ if install_base or install_services or install_shared_storage:
+ batch += '<set name="Cluster Base"/>'
+ if install_services:
+ batch += '<set name="Cluster Service Manager"/>'
+ if install_shared_storage:
+ batch += '<set name="Clustered Storage"/>'
+ if install_LVS:
+ batch += '<set name="Linux Virtual Server"/>'
+ batch += '</var>'
+ batch += '</function_call>'
+ batch += '</request>'
+ batch += '</module>'
+
+ need_reboot = install_base or install_services or install_shared_storage or install_LVS
+ if need_reboot:
+ batch += '<module name="reboot">'
+ batch += '<request API_version="1.0">'
+ batch += '<function_call name="reboot_now"/>'
+ batch += '</request>'
+ batch += '</module>'
+ else:
+ # need placeholder instead of reboot
+ batch += '<module name="rpm">'
+ batch += '<request API_version="1.0">'
+ batch += '<function_call name="install"/>'
+ batch += '</request>'
+ batch += '</module>'
+
+ batch += '<module name="cluster">'
+ batch += '<request API_version="1.0">'
+ batch += '<function_call name="set_cluster.conf">'
+ batch += '<var mutable="false" name="propagate" type="boolean" value="false"/>'
+ batch += '<var mutable="false" name="cluster.conf" type="xml">'
+ batch += '<cluster config_version="1" name="' + cluster_name + '" alias="' + cluster_alias + '">'
+ batch += '<fence_daemon post_fail_delay="0" post_join_delay="3"/>'
+
+ batch += '<clusternodes>'
+
+ x = 1
+ for i in nodeList:
+ if os_str == "rhel4":
+ batch += '<clusternode name="' + i + '" votes="1" />'
+ else:
+ batch += '<clusternode name="' + i + '" votes="1" nodeid="' + str(x) + '" />'
+ x = x + 1
+
+ batch += '</clusternodes>'
+
+ if len(nodeList) == 2:
+ batch += '<cman expected_votes="1" two_node="1"/>'
+ else:
+ batch += '<cman/>'
- batch += '<fencedevices/>'
- batch += '<rm/>'
- batch += '</cluster>'
- batch += '</var>'
- batch += '</function_call>'
- batch += '</request>'
- batch += '</module>'
-
-
- batch += '<module name="cluster">'
- batch += '<request API_version="1.0">'
- batch += '<function_call name="start_node">'
- batch += '<var mutable="false" name="cluster_startup" type="boolean" value="true"/>'
- batch += '</function_call>'
- batch += '</request>'
- batch += '</module>'
-
-
- batch += '</batch>'
-
- return minidom.parseString(batch).firstChild
+ batch += '<fencedevices/>'
+ batch += '<rm/>'
+ batch += '</cluster>'
+ batch += '</var>'
+ batch += '</function_call>'
+ batch += '</request>'
+ batch += '</module>'
+
+ batch += '<module name="cluster">'
+ batch += '<request API_version="1.0">'
+ batch += '<function_call name="start_node">'
+ batch += '<var mutable="false" name="cluster_startup" type="boolean" value="true"/>'
+ batch += '</function_call>'
+ batch += '</request>'
+ batch += '</module>'
+ batch += '</batch>'
+
+ return minidom.parseString(batch).firstChild
+def batchAttemptResult(doc):
+    for node in doc.firstChild.childNodes:
+        if node.nodeType == xml.dom.Node.ELEMENT_NODE and node.nodeName == 'batch':
+            #get batch number and status code
+            batch_number = node.getAttribute('batch_id')
+            result = node.getAttribute('status')
+            return (batch_number, result)
+
+    #no batch element was present in the response
+    return (None, None)
+
+def getPayload(bt_node):
+ if not bt_node:
+ return None
+
+ mod_node = None
+ for node in bt_node.childNodes:
+ if node.nodeType == xml.dom.Node.ELEMENT_NODE and node.nodeName == 'module':
+ mod_node = node
+ if not mod_node:
+ return None
+
+ resp_node = None
+ for node in mod_node.childNodes:
+ if node.nodeType == xml.dom.Node.ELEMENT_NODE:
+ resp_node = node
+ if not resp_node:
+ return None
+
+ fr_node = None
+ for node in resp_node.childNodes:
+ if node.nodeType == xml.dom.Node.ELEMENT_NODE:
+ fr_node = node
+ if not fr_node:
+ return None
+
+ varnode = None
+ for node in fr_node.childNodes:
+ if node.nodeName == 'var':
+ varnode = node
+ break
+ if not varnode:
+ return None
+
+ cl_node = None
+ for node in varnode.childNodes:
+ if node.nodeName == 'cluster':
+ cl_node = node
+ break
+ if not cl_node:
+ return None
+
+ doc = minidom.Document()
+ doc.appendChild(cl_node)
+ return doc
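For orientation, the envelope getPayload() descends through is batch -> module -> response -> function_response -> var -> cluster. A sketch of the same descent against a hypothetical reply (element and attribute names here are illustrative, mirroring the patch rather than documenting the ricci protocol):

```python
from xml.dom import minidom

# Hypothetical ricci reply of the shape getPayload() walks.
reply = (
    '<batch batch_id="42" status="0">'
    '<module name="cluster" status="0">'
    '<response API_version="1.0">'
    '<function_response function_name="get_cluster.conf">'
    '<var name="cluster.conf" type="xml">'
    '<cluster name="demo" config_version="1"/>'
    '</var>'
    '</function_response>'
    '</response>'
    '</module>'
    '</batch>'
)

node = minidom.parseString(reply).firstChild
# Descend one element level per step -- module, response,
# function_response, var -- skipping any text nodes.
for _ in range(4):
    node = [n for n in node.childNodes
            if n.nodeType == n.ELEMENT_NODE][0]
cluster = [n for n in node.childNodes if n.nodeName == 'cluster'][0]
```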
+
+def setClusterConf(rc, clusterconf, propagate=True):
+ if propagate == True:
+ propg = 'true'
+ else:
+ propg = 'false'
+
+ conf = str(clusterconf).replace('<?xml version="1.0"?>', '')
+ conf = conf.replace('<?xml version="1.0" ?>', '')
+ conf = conf.replace('<? xml version="1.0"?>', '')
+ conf = conf.replace('<? xml version="1.0" ?>', '')
+
+ batch_str = '<module name="cluster"><request API_version="1.0"><function_call name="set_cluster.conf"><var type="boolean" name="propagate" mutable="false" value="' + propg + '"/><var type="xml" mutable="false" name="cluster.conf">' + conf + '</var></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return (None, None)
+ return batchAttemptResult(doc)
+
+def getNodeLogs(rc):
+ errstr = 'log not accessible'
+
+ batch_str = '<module name="log"><request sequence="1254" API_version="1.0"><function_call name="get"><var mutable="false" name="age" type="int" value="360000"/><var mutable="false" name="tags" type="list_str"><listentry value="cluster"/></var></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str, async=False)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return errstr
+ return doc.firstChild
+
+def nodeReboot(rc):
+ batch_str = '<module name="reboot"><request sequence="111" API_version="1.0"><function_call name="reboot_now"/></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return (None, None)
+ return batchAttemptResult(doc)
+
+def nodeLeaveCluster(rc, cluster_shutdown=False, purge=True):
+ cshutdown = 'false'
+ if cluster_shutdown == True:
+ cshutdown = 'true'
+
+ purge_conf = 'true'
+ if purge == False:
+ purge_conf = 'false'
+
+ batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="stop_node"><var mutable="false" name="cluster_shutdown" type="boolean" value="' + cshutdown + '"/><var mutable="false" name="purge_conf" type="boolean" value="' + purge_conf + '"/></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return (None, None)
+ return batchAttemptResult(doc)
+
+def nodeFence(rc, nodename):
+ batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="fence_node"><var mutable="false" name="nodename" type="string" value="' + nodename + '"/></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return (None, None)
+ return batchAttemptResult(doc)
+
+def nodeJoinCluster(rc, cluster_startup=False):
+ cstartup = 'false'
+ if cluster_startup == True:
+ cstartup = 'true'
+
+ batch_str = '<module name="cluster"><request sequence="111" API_version="1.0"><function_call name="start_node"><var mutable="false" name="cluster_startup" type="boolean" value="' + cstartup + '"/></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return (None, None)
+ return batchAttemptResult(doc)
+
+def startService(rc, servicename, preferrednode=None):
+ if preferrednode != None:
+ batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/><var mutable="false" name="nodename" type="string" value=\"' + preferrednode + '\" /></function_call></request></module>'
+ else:
+ batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="start_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return (None, None)
+ return batchAttemptResult(doc)
+
+def restartService(rc, servicename):
+ batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="restart_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return (None, None)
+ return batchAttemptResult(doc)
+
+def stopService(rc, servicename):
+ batch_str = '<module name="cluster"><request sequence="1254" API_version="1.0"><function_call name="stop_service"><var mutable="false" name="servicename" type="string" value=\"' + servicename + '\"/></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str)
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return (None, None)
+ return batchAttemptResult(doc)
+
+def getDaemonStates(rc, dlist):
+ batch_str = '<module name="service"><request API_version="1.0"><function_call name="query"><var mutable="false" name="search" type="list_xml">'
+
+ for item in dlist:
+ batch_str += '<service name=\"' + item + '\"/>'
+
+ batch_str += '</var></function_call></request></module>'
+
+ ricci_xml = rc.batch_run(batch_str, async=False)
+ if not ricci_xml:
+ return None
+ result = extractDaemonInfo(ricci_xml)
+ return result
+
+def extractDaemonInfo(bt_node):
+ if not bt_node:
+ return None
+
+ mod_node = None
+ for node in bt_node.childNodes:
+ if node.nodeType == xml.dom.Node.ELEMENT_NODE:
+ if node.nodeName == 'module':
+ mod_node = node
+ if not mod_node:
+ return None
+
+ resp_node = None
+ for node in mod_node.childNodes:
+ if node.nodeType == xml.dom.Node.ELEMENT_NODE:
+ resp_node = node
+ if not resp_node:
+ return None
+
+ fr_node = None
+ for node in resp_node.childNodes:
+ if node.nodeType == xml.dom.Node.ELEMENT_NODE:
+ fr_node = node
+ if not fr_node:
+ return None
+
+ varnode = None
+ for node in fr_node.childNodes:
+ if node.nodeName == 'var':
+ varnode = node
+ break
+ if not varnode:
+ return None
+
+ resultlist = list()
+ svc_node = None
+ for node in varnode.childNodes:
+ if node.nodeName == 'service':
+ svchash = {}
+ try:
+ name = node.getAttribute('name')
+ if not name:
+ raise
+ except:
+ name = '[unknown]'
+ svchash['name'] = name
+
+ try:
+ svc_enabled = node.getAttribute('enabled')
+ except:
+ svc_enabled = '[unknown]'
+ svchash['enabled'] = svc_enabled
+
+ try:
+ running = node.getAttribute('running')
+ except:
+ running = '[unknown]'
+ svchash['running'] = running
+ resultlist.append(svchash)
+
+ return resultlist
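The flattening that extractDaemonInfo() performs on the innermost var element can be sketched in isolation (the two service entries below are made up for illustration):

```python
from xml.dom import minidom

# Hypothetical "query" reply fragment of the shape extractDaemonInfo()
# flattens into a list of {'name', 'enabled', 'running'} hashes.
var_xml = (
    '<var name="result" type="list_xml">'
    '<service name="ccsd" enabled="true" running="false"/>'
    '<service name="cman" enabled="true" running="true"/>'
    '</var>'
)
varnode = minidom.parseString(var_xml).firstChild

resultlist = []
for node in varnode.childNodes:
    if node.nodeName == 'service':
        # getAttribute() returns '' for a missing attribute,
        # so fall back the way the patch does.
        resultlist.append({
            'name': node.getAttribute('name') or '[unknown]',
            'enabled': node.getAttribute('enabled'),
            'running': node.getAttribute('running'),
        })
```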
+
+def getClusterConf(rc):
+ doc = xml.dom.minidom.Document()
+ batch = doc.createElement('batch')
+ module = doc.createElement('module')
+ module.setAttribute('name', 'cluster')
+ request = doc.createElement('request')
+ request.setAttribute('API_version', '1.0')
+ call = doc.createElement('function_call')
+ call.setAttribute('name', 'get_cluster.conf')
+ request.appendChild(call)
+ module.appendChild(request)
+ batch.appendChild(module)
+
+ try:
+ ret = rc.process_batch(batch)
+ except Exception, e:
+ return None
+
+ if not ret:
+ return None
+
+ cur = ret
+ while len(cur.childNodes) > 0:
+ for i in cur.childNodes:
+ if i.nodeType == xml.dom.Node.ELEMENT_NODE:
+ if i.nodeName == 'var' and i.getAttribute('name') == 'cluster.conf':
+ return i.childNodes[1].cloneNode(True)
+ else:
+ cur = i
+ return None
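Unlike its siblings, getClusterConf() builds its request with DOM calls instead of string concatenation; serializing the same construction shows the wire format it produces (a stdlib-only sketch -- rc.process_batch itself is not exercised here):

```python
import xml.dom.minidom

# Rebuild the get_cluster.conf request the way getClusterConf() does,
# then serialize it to inspect the resulting XML.
doc = xml.dom.minidom.Document()
batch = doc.createElement('batch')
module = doc.createElement('module')
module.setAttribute('name', 'cluster')
request = doc.createElement('request')
request.setAttribute('API_version', '1.0')
call = doc.createElement('function_call')
call.setAttribute('name', 'get_cluster.conf')
request.appendChild(call)
module.appendChild(request)
batch.appendChild(module)

wire = batch.toxml()
```

The DOM approach trades verbosity for guaranteed well-formedness: attribute values are escaped by minidom, so no quoting bug can slip in the way it can with the concatenated batch strings elsewhere in this file.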
--- conga/luci/site/luci/Extensions/cluster_adapters.py 2006/10/13 22:56:28 1.110
+++ conga/luci/site/luci/Extensions/cluster_adapters.py 2006/10/16 04:26:19 1.111
@@ -6,6 +6,7 @@
from conga_constants import *
from ricci_bridge import *
from ricci_communicator import *
+from string import lower
import time
import Products.ManagedSystem
from Products.Archetypes.utils import make_uuid
@@ -20,7 +21,7 @@
from clusterOS import resolveOSType
from GeneralError import GeneralError
from UnknownClusterError import UnknownClusterError
-from homebase_adapters import nodeUnauth, nodeAuth, manageCluster, createClusterSystems, havePermCreateCluster
+from homebase_adapters import nodeUnauth, nodeAuth, manageCluster, createClusterSystems, havePermCreateCluster, setNodeFlag, delNodeFlag
#Policy for showing the cluster chooser menu:
#1) If there are no clusters in the ManagedClusterSystems
@@ -103,6 +104,7 @@
return [errors, cluster_properties]
+
def validateCreateCluster(self, request):
errors = list()
messages = list()
@@ -117,7 +119,7 @@
sessionData = None
if not 'clusterName' in request.form or not request.form['clusterName']:
- return (False, {'errors': [ 'No cluster name was specified.' ] })
+ return (False, {'errors': [ 'No cluster name was specified.' ]})
clusterName = request.form['clusterName']
try:
@@ -177,14 +179,15 @@
if cluster_properties['isComplete'] == True:
batchNode = createClusterBatch(cluster_os,
- clusterName,
- clusterName,
- map(lambda x: x['ricci_host'], nodeList),
- True,
- True,
- enable_storage,
- False,
- rhn_dl)
+ clusterName,
+ clusterName,
+ map(lambda x: x['ricci_host'], nodeList),
+ True,
+ True,
+ enable_storage,
+ False,
+ rhn_dl)
+
if not batchNode:
nodeUnauth(nodeList)
cluster_properties['isComplete'] = False
@@ -210,7 +213,6 @@
cluster_properties['isComplete'] = False
errors.append('An error occurred while attempting to add cluster node \"' + i['ricci_host'] + '\"')
return (False, {'errors': errors, 'requestResults':cluster_properties })
-
buildClusterCreateFlags(self, batch_id_map, clusterName)
messages.append('Creation of cluster \"' + clusterName + '\" has begun')
@@ -1203,248 +1205,338 @@
return clist
def cluster_permission_check(cluster):
- #Does this take too long?
- sm = AccessControl.getSecurityManager()
- user = sm.getUser()
- if user.has_permission("View",cluster):
- return True
+ #Does this take too long?
+ try:
+ sm = AccessControl.getSecurityManager()
+ user = sm.getUser()
+ if user.has_permission('View', cluster):
+ return True
+ except:
+ pass
+ return False
- return False
+def getRicciAgent(self, clustername):
+ #Check cluster permission here; return None if the user lacks access
+ path = CLUSTER_FOLDER_PATH + clustername
-def getRicciAgentForCluster(self, req):
- clustername = req['clustername']
- #Check cluster permission here! return none if false
- path = CLUSTER_FOLDER_PATH + clustername
- clusterfolder = self.restrictedTraverse(path)
- if clusterfolder != None:
- nodes = clusterfolder.objectItems('Folder')
- for node in nodes:
- rb = ricci_bridge(node[1].getId())
- if rb.getRicciResponse() == True:
- return node[1].getId()
- return None
- else:
- return None
+ try:
+ clusterfolder = self.restrictedTraverse(path)
+ if not clusterfolder:
+ raise
+ nodes = clusterfolder.objectItems('Folder')
+ if len(nodes) < 1:
+ return None
+ except:
+ return None
-def getRicciAgent(self, clustername):
- #Check cluster permission here! return none if false
- path = CLUSTER_FOLDER_PATH + clustername[0]
- clusterfolder = self.restrictedTraverse(path)
- if clusterfolder != None:
- nodes = clusterfolder.objectItems('Folder')
- for node in nodes:
- rb = ricci_bridge(node[1].getId())
- if rb.getRicciResponse() == True:
- return node[1].getId()
- return ""
- else:
- return ""
+ cluname = lower(clustername)
+ for node in nodes:
+ try:
+ hostname = node[1].getId()
+ except:
+ try:
+ hostname = node[0]
+ except:
+ continue
-def getClusterStatus(self, ricci_name):
- rb = ricci_bridge(ricci_name)
- doc = rb.getClusterStatus()
- results = list()
+ try:
+ rc = RicciCommunicator(hostname)
+ if not rc:
+ raise
+ except:
+ #raise Exception, ('unable to communicate with the ricci agent on %s', hostname)
+ continue
- if not doc or not doc.firstChild:
- return {}
+ try:
+ clu_info = rc.cluster_info()
+ if cluname != lower(clu_info[0]) and cluname != lower(clu_info[1]):
+ # node reports it's in a different cluster
+ raise
+ except:
+ continue
- vals = {}
- vals['type'] = "cluster"
- try:
- vals['alias'] = doc.firstChild.getAttribute('alias')
- except AttributeError, e:
- vals['alias'] = doc.firstChild.getAttribute('name')
- vals['votes'] = doc.firstChild.getAttribute('votes')
- vals['name'] = doc.firstChild.getAttribute('name')
- vals['minQuorum'] = doc.firstChild.getAttribute('minQuorum')
- vals['quorate'] = doc.firstChild.getAttribute('quorate')
- results.append(vals)
- for node in doc.firstChild.childNodes:
- if node.nodeName == 'node':
- vals = {}
- vals['type'] = "node"
- vals['clustered'] = node.getAttribute('clustered')
- vals['name'] = node.getAttribute('name')
- vals['online'] = node.getAttribute('online')
- vals['uptime'] = node.getAttribute('uptime')
- vals['votes'] = node.getAttribute('votes')
- results.append(vals)
- elif node.nodeName == 'service':
- vals = {}
- vals['type'] = 'service'
- vals['name'] = node.getAttribute('name')
- vals['nodename'] = node.getAttribute('nodename')
- vals['running'] = node.getAttribute('running')
- vals['failed'] = node.getAttribute('failed')
- vals['autostart'] = node.getAttribute('autostart')
- results.append(vals)
+ if rc.authed():
+ return rc
+ setNodeFlag(node[1], CLUSTER_NODE_NEED_AUTH)
+ return None
- return results
+def getRicciAgentForCluster(self, req):
+ try:
+ clustername = req['clustername']
+ except KeyError, e:
+ try:
+ clustername = req.form['clusterName']
+ if not clustername:
+ raise
+ except:
+ return None
+ return getRicciAgent(self, clustername)
+
+def getClusterStatus(self, rc):
+ clustatus_batch = '<?xml version="1.0" ?><batch><module name="cluster"><request API_version="1.0"><function_call name="status"/></request></module></batch>'
+ try:
+ clustatuscmd_xml = minidom.parseString(clustatus_batch).firstChild
+ except:
+ return {}
+
+ try:
+ ricci_xml = rc.process_batch(clustatuscmd_xml, async=False)
+ except:
+ return {}
+
+ doc = getPayload(ricci_xml)
+ if not doc or not doc.firstChild:
+ return {}
+ results = list()
+
+ vals = {}
+ vals['type'] = "cluster"
+
+ try:
+ vals['alias'] = doc.firstChild.getAttribute('alias')
+ except AttributeError, e:
+ vals['alias'] = doc.firstChild.getAttribute('name')
+
+ vals['votes'] = doc.firstChild.getAttribute('votes')
+ vals['name'] = doc.firstChild.getAttribute('name')
+ vals['minQuorum'] = doc.firstChild.getAttribute('minQuorum')
+ vals['quorate'] = doc.firstChild.getAttribute('quorate')
+ results.append(vals)
+
+ for node in doc.firstChild.childNodes:
+ if node.nodeName == 'node':
+ vals = {}
+ vals['type'] = "node"
+ vals['clustered'] = node.getAttribute('clustered')
+ vals['name'] = node.getAttribute('name')
+ vals['online'] = node.getAttribute('online')
+ vals['uptime'] = node.getAttribute('uptime')
+ vals['votes'] = node.getAttribute('votes')
+ results.append(vals)
+ elif node.nodeName == 'service':
+ vals = {}
+ vals['type'] = 'service'
+ vals['name'] = node.getAttribute('name')
+ vals['nodename'] = node.getAttribute('nodename')
+ vals['running'] = node.getAttribute('running')
+ vals['failed'] = node.getAttribute('failed')
+ vals['autostart'] = node.getAttribute('autostart')
+ results.append(vals)
+ return results
def getServicesInfo(self, status, modelb, req):
- map = {}
- maplist = list()
- baseurl = req['URL']
- cluname = req['clustername']
- for item in status:
- if item['type'] == "service":
- itemmap = {}
- itemmap['name'] = item['name']
- if item['running'] == "true":
- itemmap['running'] = "true"
- itemmap['nodename'] = item['nodename']
- itemmap['autostart'] = item['autostart']
- itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE
- svc = modelb.retrieveServiceByName(item['name'])
- dom = svc.getAttribute("domain")
- if dom != None:
- itemmap['faildom'] = dom
- else:
- itemmap['faildom'] = "No Failover Domain"
+ map = {}
+ maplist = list()
+
+ try:
+ baseurl = req['URL']
+ if not baseurl:
+ raise KeyError, 'is blank'
+ except KeyError, e:
+ baseurl = '.'
- maplist.append(itemmap)
+ try:
+ cluname = req['clustername']
+ if not cluname:
+ raise KeyError, 'is blank'
+ except KeyError, e:
+ try:
+ cluname = req.form['clusterName']
+ if not cluname:
+ raise
+ except:
+ cluname = '[error retrieving cluster name]'
- map['services'] = maplist
+ for item in status:
+ if item['type'] == "service":
+ itemmap = {}
+ itemmap['name'] = item['name']
+ if item['running'] == "true":
+ itemmap['running'] = "true"
+ itemmap['nodename'] = item['nodename']
+ itemmap['autostart'] = item['autostart']
+ itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE
+
+ svc = modelb.retrieveServiceByName(item['name'])
+ dom = svc.getAttribute("domain")
+ if dom != None:
+ itemmap['faildom'] = dom
+ else:
+ itemmap['faildom'] = "No Failover Domain"
+ maplist.append(itemmap)
- return map
+ map['services'] = maplist
+ return map
-def getServiceInfo(self,status,modelb,req):
- #set up struct for service config page
- baseurl = req['URL']
- cluname = req['clustername']
- hmap = {}
- root_uuid = 'toplevel'
+def getServiceInfo(self, status, modelb, req):
+ #set up struct for service config page
+ hmap = {}
+ root_uuid = 'toplevel'
- hmap['root_uuid'] = root_uuid
- hmap['uuid_list'] = map(lambda x: make_uuid('resource'), xrange(30))
+ try:
+ baseurl = req['URL']
+ if not baseurl:
+ raise KeyError, 'is blank'
+ except KeyError, e:
+ baseurl = '.'
- try:
- servicename = req['servicename']
- except KeyError, e:
- hmap['resource_list'] = {}
- return hmap
+ try:
+ cluname = req['clustername']
+ if not cluname:
+ raise KeyError, 'is blank'
+ except KeyError, e:
+ try:
+ cluname = req.form['clusterName']
+ if not cluname:
+ raise
+ except:
+ cluname = '[error retrieving cluster name]'
- for item in status:
- if item['type'] == "service":
- if item['name'] == servicename:
- hmap['name'] = servicename
- starturls = list()
- if item['running'] == "true":
- hmap['running'] = "true"
- #In this case, determine where it can run...
- innermap = {}
- nodename = item['nodename']
- innermap['current'] = "This service is currently running on %s" % nodename
- innermap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_STOP
- innermap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_RESTART
- nodes = modelb.getNodes()
- for node in nodes:
- starturl = {}
- if node.getName() != nodename:
- starturl['nodename'] = node.getName()
- starturl['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START + "&nodename=" + node.getName()
- starturls.append(starturl)
- innermap['links'] = starturls
- else:
- #Do not set ['running'] in this case...ZPT will detect it is missing
- #In this case, determine where it can run...
- innermap = {}
- innermap['current'] = "This service is currently stopped"
- innermap['enableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START
- nodes = modelb.getNodes()
- starturls = list()
- for node in nodes:
- starturl = {}
- starturl['nodename'] = node.getName()
- starturl['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START + "&nodename=" + node.getName()
- starturls.append(starturl)
- innermap['links'] = starturls
- hmap['innermap'] = innermap
-
- #Now build hashes for resources under service.
- #first get service by name from model
- svc = modelb.getService(servicename)
- resource_list = list()
- if svc != None:
- indent_ctr = 0
- children = svc.getChildren()
- for child in children:
- recurse_resources(root_uuid, child, resource_list, indent_ctr)
+ hmap['root_uuid'] = root_uuid
+ # uuids for the service page needed when new resources are created
+ hmap['uuid_list'] = map(lambda x: make_uuid('resource'), xrange(30))
+
+ try:
+ servicename = req['servicename']
+ except KeyError, e:
+ hmap['resource_list'] = {}
+ return hmap
+
+ for item in status:
+ if item['type'] == "service":
+ if item['name'] == servicename:
+ hmap['name'] = servicename
+ starturls = list()
+ if item['running'] == "true":
+ hmap['running'] = "true"
+ #In this case, determine where it can run...
+ innermap = {}
+ nodename = item['nodename']
+ innermap['current'] = "This service is currently running on %s" % nodename
+ innermap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_STOP
+ innermap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_RESTART
+ nodes = modelb.getNodes()
+ for node in nodes:
+ starturl = {}
+ if node.getName() != nodename:
+ starturl['nodename'] = node.getName()
+ starturl['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START + "&nodename=" + node.getName()
+ starturls.append(starturl)
+ innermap['links'] = starturls
+ else:
+ #Do not set ['running'] in this case...ZPT will detect it is missing
+ #In this case, determine where it can run...
+ innermap = {}
+ innermap['current'] = "This service is currently stopped"
+ innermap['enableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START
+ nodes = modelb.getNodes()
+ starturls = list()
+ for node in nodes:
+ starturl = {}
+ starturl['nodename'] = node.getName()
+ starturl['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START + "&nodename=" + node.getName()
+ starturls.append(starturl)
+ innermap['links'] = starturls
+ hmap['innermap'] = innermap
+
+ #Now build hashes for resources under service.
+ #first get service by name from model
+ svc = modelb.getService(servicename)
+ resource_list = list()
+ if svc != None:
+ indent_ctr = 0
+ children = svc.getChildren()
+ for child in children:
+ recurse_resources(root_uuid, child, resource_list, indent_ctr)
- hmap['resource_list'] = resource_list
- return hmap
+ hmap['resource_list'] = resource_list
+ return hmap
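The service-page code above builds its action URLs by hand-concatenating the `clustername`, `servicename`, `pagetype`, and optional `nodename` query arguments. A hedged sketch of that pattern as a small helper; the `SERVICE_START` value here is a placeholder, not the real constant from conga_constants.py:

```python
# Hypothetical sketch of the query-string construction used above.
# SERVICE_START is a stand-in value; the real constants live in
# conga_constants.py.
SERVICE_START = "22"

def make_action_url(baseurl, cluname, servicename, pagetype, nodename=None):
    # Mirror the hand-built concatenation: baseurl?clustername=...&...
    url = "%s?clustername=%s&servicename=%s&pagetype=%s" % (
        baseurl, cluname, servicename, pagetype)
    if nodename is not None:
        url += "&nodename=%s" % nodename
    return url
```

Factoring the concatenation out would also give one place to URL-escape the arguments later.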
def recurse_resources(parent_uuid, child, resource_list, indent_ctr, parent=None):
- #First, add the incoming child as a resource
- #Next, check for children of it
- #Call yourself on every children
- #then return
- rc_map = {}
- if parent != None:
- rc_map['parent'] = parent
- rc_map['name'] = child.getName()
- if child.isRefObject() == True:
- rc_map['ref_object'] = True
- rc_map['type'] = child.getObj().getResourceType()
- else:
- rc_map['type'] = child.getResourceType()
-
- rc_map['indent_ctr'] = indent_ctr
-
- #Note: Final version needs all resource attrs
- rc_map['attrs'] = child.getAttributes()
- rc_map['uuid'] = make_uuid('resource')
- rc_map['parent_uuid'] = parent_uuid
-
- resource_list.append(rc_map)
- kids = child.getChildren()
- child_depth = 0
- new_indent_ctr = indent_ctr + 1
- for kid in kids:
- cdepth = recurse_resources(rc_map['uuid'], kid, resource_list, new_indent_ctr, child)
- child_depth = max(cdepth, child_depth)
+ #First, add the incoming child as a resource
+ #Next, check for children of it
+ #Call yourself on each child
+ #then return
+ rc_map = {}
+ if parent != None:
+ rc_map['parent'] = parent
+ rc_map['name'] = child.getName()
+ if child.isRefObject() == True:
+ rc_map['ref_object'] = True
+ rc_map['type'] = child.getObj().getResourceType()
+ else:
+ rc_map['type'] = child.getResourceType()
- rc_map['max_depth'] = child_depth
- return child_depth + 1
+ rc_map['indent_ctr'] = indent_ctr
-def serviceStart(self, ricci_agent, req):
- rb = ricci_bridge(ricci_agent)
- svcname = req['servicename']
- try:
- nodename = req['nodename']
- except KeyError, e:
- nodename = None
- batch_number,result = rb.startService(svcname,nodename)
+ #Note: Final version needs all resource attrs
+ rc_map['attrs'] = child.getAttributes()
+ rc_map['uuid'] = make_uuid('resource')
+ rc_map['parent_uuid'] = parent_uuid
+ resource_list.append(rc_map)
+ kids = child.getChildren()
+ child_depth = 0
+ new_indent_ctr = indent_ctr + 1
+ for kid in kids:
+ cdepth = recurse_resources(rc_map['uuid'], kid, resource_list, new_indent_ctr, child)
+ child_depth = max(cdepth, child_depth)
- #Now we need to create a DB flag for this system.
- cluname = req['clustername']
+ rc_map['max_depth'] = child_depth
+ return child_depth + 1
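`recurse_resources` flattens the resource tree into a list while tracking an indent counter, and returns the subtree height (`child_depth + 1`) so each entry can record how deep its children go. The same bookkeeping on a minimal stand-in tree (the `Node` class here is hypothetical, not the model's resource class):

```python
class Node(object):
    """Hypothetical stand-in for a model resource node."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def flatten(node, out, indent=0):
    # Record this node with its indentation, then recurse into children,
    # mirroring recurse_resources above.
    entry = {'name': node.name, 'indent_ctr': indent}
    out.append(entry)
    depth = 0
    for kid in node.children:
        depth = max(depth, flatten(kid, out, indent + 1))
    entry['max_depth'] = depth  # deepest subtree below this node
    return depth + 1            # height including this node
```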
- path = CLUSTER_FOLDER_PATH + cluname
- clusterfolder = self.restrictedTraverse(path)
- batch_id = str(batch_number)
- objname = ricci_agent + "____flag"
- clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
- #Now we need to annotate the new DB object
- objpath = path + "/" + objname
- flag = self.restrictedTraverse(objpath)
- #flag[BATCH_ID] = batch_id
- #flag[TASKTYPE] = SERVICE_START
- #flag[FLAG_DESC] = "Starting service " + svcname
- flag.manage_addProperty(BATCH_ID,batch_id, "string")
- flag.manage_addProperty(TASKTYPE,SERVICE_START, "string")
- flag.manage_addProperty(FLAG_DESC,"Starting service \'" + svcname + "\'", "string")
+def serviceStart(self, rc, req):
+ try:
+ svcname = req['servicename']
+ except KeyError, e:
+ try:
+ svcname = req.form['servicename']
+ except:
+ return None
- response = req.RESPONSE
- response.redirect(req['HTTP_REFERER'] + "&busyfirst=true")
+ try:
+ nodename = req['nodename']
+ except KeyError, e:
+ try:
+ nodename = req.form['nodename']
+ except:
+ return None
+ try:
+ cluname = req['clustername']
+ except KeyError, e:
+ try:
+ cluname = req.form['clusterName']
+ except:
+ return None
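The repeated try/except ladders above (look a variable up on the request, then fall back to its form, sometimes under a second spelling such as `clusterName`) could be factored into one helper. This is only a sketch, not code from the patch, and the plain dict imitating a Zope REQUEST is an assumption:

```python
def get_req_var(req, *names):
    """Look each name up on the request, then on its form; return None
    when nothing matches (mirrors the try/except fallback ladders
    above). `req` is a plain dict imitating a Zope REQUEST here."""
    form = req.get('form', {})
    for name in names:
        if name in req:
            return req[name]
        if name in form:
            return form[name]
    return None
```

Using it, `get_req_var(req, 'clustername', 'clusterName')` covers both spellings in one call.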
-def serviceRestart(self, ricci_agent, req):
- rb = ricci_bridge(ricci_agent)
- svcname = req['servicename']
- batch_number, result = rb.restartService(svcname)
+ ricci_agent = rc.hostname()
+
+ batch_number, result = startService(rc, svcname, nodename)
+ #Now we need to create a DB flag for this system.
+
+ path = CLUSTER_FOLDER_PATH + cluname
+ clusterfolder = self.restrictedTraverse(path)
+ batch_id = str(batch_number)
+ objname = ricci_agent + "____flag"
+ clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+ #Now we need to annotate the new DB object
+ objpath = path + "/" + objname
+ flag = self.restrictedTraverse(objpath)
+ #flag[BATCH_ID] = batch_id
+ #flag[TASKTYPE] = SERVICE_START
+ #flag[FLAG_DESC] = "Starting service " + svcname
+ flag.manage_addProperty(BATCH_ID,batch_id, "string")
+ flag.manage_addProperty(TASKTYPE,SERVICE_START, "string")
+ flag.manage_addProperty(FLAG_DESC,"Starting service \'" + svcname + "\'", "string")
+ response = req.RESPONSE
+ response.redirect(req['HTTP_REFERER'] + "&busyfirst=true")
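Every task handler above repeats the same flag dance: create a `____flag` object in the cluster folder, then annotate it with `BATCH_ID`, `TASKTYPE`, and `FLAG_DESC`. A hedged sketch of that bookkeeping, with a dict standing in for the Zope ManagedSystem object and placeholder property names:

```python
# Stand-in property names; the real values come from conga_constants.py.
BATCH_ID, TASKTYPE, FLAG_DESC = 'batch_id', 'task_type', 'flag_desc'

def make_flag(agent, batch_number, tasktype, desc):
    # The real code stores these with manage_addProperty on a
    # ManagedSystem object; a dict stands in for that here.
    flag = {'id': agent + '____flag'}
    flag[BATCH_ID] = str(batch_number)
    flag[TASKTYPE] = tasktype
    flag[FLAG_DESC] = desc
    return flag
```

Centralizing this would shrink serviceStart/serviceRestart/serviceStop and the node task handlers considerably.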
+def serviceRestart(self, rc, req):
+ svcname = req['servicename']
+ batch_number, result = restartService(rc, svcname)
+ ricci_agent = rc.hostname()
#Now we need to create a DB flag for this system.
cluname = req['clustername']
@@ -1466,14 +1558,15 @@
response = req.RESPONSE
response.redirect(req['HTTP_REFERER'] + "&busyfirst=true")
-def serviceStop(self, ricci_agent, req):
- rb = ricci_bridge(ricci_agent)
+def serviceStop(self, rc, req):
svcname = req['servicename']
- batch_number, result = rb.stopService(svcname)
+ batch_number, result = stopService(rc, svcname)
#Now we need to create a DB flag for this system.
cluname = req['clustername']
+ ricci_agent = rc.hostname()
+
path = CLUSTER_FOLDER_PATH + cluname
clusterfolder = self.restrictedTraverse(path)
batch_id = str(batch_number)
@@ -1787,175 +1880,241 @@
return map
def nodeTaskProcess(self, model, request):
- clustername = request['clustername']
- nodename = request['nodename']
- task = request['task']
- nodename_resolved = resolve_nodename(self, clustername, nodename)
- if nodename_resolved == None:
- return None
-
- if task == NODE_LEAVE_CLUSTER:
- rb = ricci_bridge(nodename_resolved)
- batch_number, result = rb.nodeLeaveCluster()
-
- path = CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved
- nodefolder = self.restrictedTraverse(path)
- batch_id = str(batch_number)
- objname = nodename_resolved + "____flag"
- if noNodeFlagsPresent(self, nodefolder, objname, nodename_resolved) == False:
- raise UnknownClusterError("Fatal", "An unfinished task flag exists for node %s" % nodename)
-
- nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
- #Now we need to annotate the new DB object
- objpath = path + "/" + objname
- flag = self.restrictedTraverse(objpath)
- flag.manage_addProperty(BATCH_ID,batch_id, "string")
- flag.manage_addProperty(TASKTYPE,NODE_LEAVE_CLUSTER, "string")
- flag.manage_addProperty(FLAG_DESC,"Node \'" + nodename + "\' leaving cluster", "string")
+ try:
+ clustername = request['clustername']
+ except KeyError, e:
+ try:
+ clustername = request.form['clusterName']
+ except:
+ return None
- response = request.RESPONSE
- #Is this correct? Should we re-direct to the cluster page?
- response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+ try:
+ nodename = request['nodename']
+ except KeyError, e:
+ try:
+ nodename = request.form['nodename']
+ except:
+ return None
- elif task == NODE_JOIN_CLUSTER:
- rb = ricci_bridge(nodename_resolved)
- batch_number, result = rb.nodeJoinCluster()
-
- path = CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved
- nodefolder = self.restrictedTraverse(path)
- batch_id = str(batch_number)
- objname = nodename_resolved + "____flag"
- nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
- #Now we need to annotate the new DB object
- objpath = path + "/" + objname
- flag = self.restrictedTraverse(objpath)
- flag.manage_addProperty(BATCH_ID,batch_id, "string")
- flag.manage_addProperty(TASKTYPE,NODE_JOIN_CLUSTER, "string")
- flag.manage_addProperty(FLAG_DESC,"Node \'" + nodename + "\' joining cluster", "string")
+ try:
+ task = request['task']
+ if not task:
+ raise KeyError, 'task'
+ except KeyError, e:
+ try:
+ task = request.form['task']
+ except:
+ return None
- response = request.RESPONSE
- #Once again, is this correct? Should we re-direct to the cluster page?
- response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+ nodename_resolved = resolve_nodename(self, clustername, nodename)
+ if not nodename_resolved or not nodename or not task or not clustername:
+ return None
+ if task != NODE_FENCE:
+ # Fencing is the only task for which we don't
+ # want to talk to the node on which the action is
+ # to be performed.
+ try:
+ rc = RicciCommunicator(nodename_resolved)
+ # XXX - check the cluster
+ if not rc.authed():
+ # set the flag
+ rc = None
- elif task == NODE_REBOOT:
- rb = ricci_bridge(nodename_resolved)
- batch_number, result = rb.nodeReboot()
-
- path = CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved
- nodefolder = self.restrictedTraverse(path)
- batch_id = str(batch_number)
- objname = nodename_resolved + "____flag"
- nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
- #Now we need to annotate the new DB object
- objpath = path + "/" + objname
- flag = self.restrictedTraverse(objpath)
- flag.manage_addProperty(BATCH_ID,batch_id, "string")
- flag.manage_addProperty(TASKTYPE,NODE_REBOOT, "string")
- flag.manage_addProperty(FLAG_DESC,"Node \'" + nodename + "\' is being rebooted", "string")
+ if not rc:
+ raise
+ except:
+ return None
- response = request.RESPONSE
- #Once again, is this correct? Should we re-direct to the cluster page?
- response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+ if task == NODE_LEAVE_CLUSTER:
+ batch_number, result = nodeLeaveCluster(rc)
+
+ path = CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved
+ nodefolder = self.restrictedTraverse(path)
+ batch_id = str(batch_number)
+ objname = nodename_resolved + "____flag"
+ if noNodeFlagsPresent(self, nodefolder, objname, nodename_resolved) == False:
+ raise UnknownClusterError("Fatal", "An unfinished task flag exists for node %s" % nodename)
+
+ nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+ #Now we need to annotate the new DB object
+ objpath = path + "/" + objname
+ flag = self.restrictedTraverse(objpath)
+ flag.manage_addProperty(BATCH_ID,batch_id, "string")
+ flag.manage_addProperty(TASKTYPE,NODE_LEAVE_CLUSTER, "string")
+ flag.manage_addProperty(FLAG_DESC,"Node \'" + nodename + "\' leaving cluster", "string")
+
+ response = request.RESPONSE
+ #Is this correct? Should we re-direct to the cluster page?
+ response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+ elif task == NODE_JOIN_CLUSTER:
+ batch_number, result = nodeJoinCluster(rc)
+ path = CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved
+ nodefolder = self.restrictedTraverse(path)
+ batch_id = str(batch_number)
+ objname = nodename_resolved + "____flag"
+ nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+ #Now we need to annotate the new DB object
+ objpath = path + "/" + objname
+ flag = self.restrictedTraverse(objpath)
+ flag.manage_addProperty(BATCH_ID,batch_id, "string")
+ flag.manage_addProperty(TASKTYPE,NODE_JOIN_CLUSTER, "string")
+ flag.manage_addProperty(FLAG_DESC,"Node \'" + nodename + "\' joining cluster", "string")
+
+ response = request.RESPONSE
+ #Once again, is this correct? Should we re-direct to the cluster page?
+ response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+ elif task == NODE_REBOOT:
+ batch_number, result = nodeReboot(rc)
+ path = CLUSTER_FOLDER_PATH + clustername + "/" + nodename_resolved
+ nodefolder = self.restrictedTraverse(path)
+ batch_id = str(batch_number)
+ objname = nodename_resolved + "____flag"
+ nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+ #Now we need to annotate the new DB object
+ objpath = path + "/" + objname
+ flag = self.restrictedTraverse(objpath)
+ flag.manage_addProperty(BATCH_ID,batch_id, "string")
+ flag.manage_addProperty(TASKTYPE,NODE_REBOOT, "string")
+ flag.manage_addProperty(FLAG_DESC,"Node \'" + nodename + "\' is being rebooted", "string")
+
+ response = request.RESPONSE
+ #Once again, is this correct? Should we re-direct to the cluster page?
+ response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+ elif task == NODE_FENCE:
+ #here, we DON'T want to open connection to node to be fenced.
+ path = CLUSTER_FOLDER_PATH + clustername
+ try:
+ clusterfolder = self.restrictedTraverse(path)
+ if not clusterfolder:
+ raise
+ except:
+ return None
+ nodes = clusterfolder.objectItems('Folder')
+ found_one = False
+ for node in nodes:
+ if node[1].getId().find(nodename) != (-1):
+ continue
- elif task == NODE_FENCE:
- #here, we DON'T want to open connection to node to be fenced.
- path = CLUSTER_FOLDER_PATH + clustername
- clusterfolder = self.restrictedTraverse(path)
- if clusterfolder != None:
- nodes = clusterfolder.objectItems('Folder')
- found_one = False
- for node in nodes:
- if node[1].getID().find(nodename) != (-1):
- continue
- rb = ricci_bridge(node[1].getId())
- if rb.getRicciResponse() == True:
- found_one = True
- break
- if found_one == False:
- return None
- else:
- return None
+ try:
+ rc = RicciCommunicator(node[1].getId())
+ if not rc.authed():
+ # set the node flag
+ rc = None
+ if not rc:
+ raise
+ found_one = True
+ break
+ except:
+ continue
+ if not found_one:
+ return None
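Both the fence and delete branches scan the cluster folder for some *other* node whose ricci agent is reachable and authenticated, since the target node itself must not be asked to act. A minimal sketch of that selection; the `authed` predicate is a hypothetical stand-in for constructing a `RicciCommunicator` and checking `authed()`:

```python
def pick_other_agent(node_ids, target, authed):
    """Return the first node id that is not the target node and whose
    agent passes the authed() check, else None (mirrors the found_one
    loops above)."""
    for node_id in node_ids:
        if node_id.find(target) != -1:
            continue  # skip the node the action is aimed at
        try:
            if authed(node_id):
                return node_id
        except Exception:
            continue  # unreachable agent; keep looking
    return None
```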
- batch_number, result = rb.nodeFence(nodename)
+ batch_number, result = nodeFence(rc, nodename)
+ path = path + "/" + nodename_resolved
+ nodefolder = self.restrictedTraverse(path)
+ batch_id = str(batch_number)
+ objname = nodename_resolved + "____flag"
+ nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+ #Now we need to annotate the new DB object
+ objpath = path + "/" + objname
+ flag = self.restrictedTraverse(objpath)
+ flag.manage_addProperty(BATCH_ID,batch_id, "string")
+ flag.manage_addProperty(TASKTYPE,NODE_FENCE, "string")
+ flag.manage_addProperty(FLAG_DESC,"Node \'" + nodename + "\' is being fenced", "string")
+
+ response = request.RESPONSE
+ #Once again, is this correct? Should we re-direct to the cluster page?
+ response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+ elif task == NODE_DELETE:
+ #We need to get a node name other than the node
+ #to be deleted, then delete the node from the cluster.conf
+ #and propagate it. We will need two ricci agents for this task.
- path = path + "/" + nodename_resolved
- nodefolder = self.restrictedTraverse(path)
- batch_id = str(batch_number)
- objname = nodename_resolved + "____flag"
- nodefolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
- #Now we need to annotate the new DB object
- objpath = path + "/" + objname
- flag = self.restrictedTraverse(objpath)
- flag.manage_addProperty(BATCH_ID,batch_id, "string")
- flag.manage_addProperty(TASKTYPE,NODE_FENCE, "string")
- flag.manage_addProperty(FLAG_DESC,"Node \'" + nodename + "\' is being fenced", "string")
+ # Make sure we can find a second node before we hose anything.
+ path = CLUSTER_FOLDER_PATH + clustername
+ try:
+ clusterfolder = self.restrictedTraverse(path)
+ if not clusterfolder:
+ raise
+ except:
+ return None
- response = request.RESPONSE
- #Once again, is this correct? Should we re-direct to the cluster page?
- response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername)
+ nodes = clusterfolder.objectItems('Folder')
+ found_one = False
+ for node in nodes:
+ if node[1].getId().find(nodename) != (-1):
+ continue
+ #here we make certain the node is up...
+ # XXX- we should also make certain this host is still
+ # in the cluster we believe it is.
+ try:
+ rc2 = RicciCommunicator(node[1].getId())
+ if not rc2.authed():
+ # set the flag
+ rc2 = None
+ if not rc2:
+ raise
+ found_one = True
+ break
+ except:
+ continue
- elif task == NODE_DELETE:
- #We need to get a node name other than the node
- #to be deleted, then delete the node from the cluster.conf
- #and propogate it. We will need two ricci agents for this task.
-
- #First, delete cluster.conf from node to be deleted.
-
- #next, have node leave cluster.
- rb = ricci_bridge(nodename_resolved)
- batch_number, result = rb.nodeLeaveCluster()
-
- #It is not worth flagging this node in DB, as we are going
- #to delete it anyway. Now, we need to delete node from model
- #and send out new cluster.conf
-
- model.deleteNode(nodename)
- str_buf = ""
- model.exportModelAsString(str_buf)
-
- #here, we DON'T want to open connection to node to be fenced.
- path = CLUSTER_FOLDER_PATH + clustername
- clusterfolder = self.restrictedTraverse(path)
- if clusterfolder != None:
- nodes = clusterfolder.objectItems('Folder')
- found_one = False
- for node in nodes:
- if node[1].getID().find(nodename) != (-1):
- continue
- #here we make certain the node is up...
- rbridge = ricci_bridge(node[1].getId())
- if rbridge.getRicciResponse() == True:
- found_one = True
- break
- if found_one == False:
- return None
- else:
- return None
+ if not found_one:
+ return None
- batch_number, result = rbridge.setClusterConf(str(str_buf))
+ #First, delete cluster.conf from node to be deleted.
+ #next, have node leave cluster.
+ batch_number, result = nodeLeaveCluster(rc)
+
+ #It is not worth flagging this node in DB, as we are going
+ #to delete it anyway. Now, we need to delete node from model
+ #and send out new cluster.conf
+ delete_target = None
+ try:
+ nodelist = model.getClusterNodesPtr().getChildren()
+ for n in nodelist:
+ if n.getName() == nodename:
+ delete_target = n
+ break
+ except:
+ return None
- #Now we need to delete the node from the DB
- path = CLUSTER_FOLDER_PATH + clustername
- del_path = path + "/" + nodename_resolved
- delnode = self.restrictedTraverse(del_path)
- clusterfolder = self.restrictedTraverse(path)
- clusterfolder.manage_delObjects(delnode[0])
+ if delete_target is None:
+ return None
- batch_id = str(batch_number)
- objname = nodename_resolved + "____flag"
- clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
- #Now we need to annotate the new DB object
- objpath = path + "/" + objname
- flag = self.restrictedTraverse(objpath)
- flag.manage_addProperty(BATCH_ID,batch_id, "string")
- flag.manage_addProperty(TASKTYPE,NODE_DELETE, "string")
- flag.manage_addProperty(FLAG_DESC,"Deleting node \'" + nodename + "\'", "string")
- response = request.RESPONSE
- response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
+ model.deleteNode(delete_target)
+ str_buf = model.exportModelAsString()
+
+ # propagate the new cluster.conf via the second node
+ batch_number, result = setClusterConf(rc2, str(str_buf))
+
+ #Now we need to delete the node from the DB
+ path = str(CLUSTER_FOLDER_PATH + clustername)
+ del_path = str(path + "/" + nodename_resolved)
+ try:
+ delnode = self.restrictedTraverse(del_path)
+ clusterfolder = self.restrictedTraverse(path)
+ clusterfolder.manage_delObjects(delnode[0])
+ except:
+ # XXX - we need to handle this
+ pass
+
+ batch_id = str(batch_number)
+ objname = str(nodename_resolved + "____flag")
+ clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+ #Now we need to annotate the new DB object
+ objpath = str(path + "/" + objname)
+ flag = self.restrictedTraverse(objpath)
+ flag.manage_addProperty(BATCH_ID,batch_id, "string")
+ flag.manage_addProperty(TASKTYPE,NODE_DELETE, "string")
+ flag.manage_addProperty(FLAG_DESC,"Deleting node \'" + nodename + "\'", "string")
+ response = request.RESPONSE
+ response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
def getNodeInfo(self, model, status, request):
infohash = {}
@@ -2030,13 +2189,13 @@
infohash['d_states'] = None
if nodestate == NODE_ACTIVE or nodestate == NODE_INACTIVE:
#call service module on node and find out which daemons are running
- rb = ricci_bridge(nodename)
+ rc = RicciCommunicator(nodename)
dlist = list()
dlist.append("ccsd")
dlist.append("cman")
dlist.append("fenced")
dlist.append("rgmanager")
- states = rb.getDaemonStates(dlist)
+ states = getDaemonStates(rc, dlist)
infohash['d_states'] = states
infohash['logurl'] = baseurl + "?pagetype=" + NODE_LOGS + "&nodename=" + nodename + "&clustername=" + clustername
@@ -2157,6 +2316,8 @@
return map
for i in xrange(2):
+ if not i in levels:
+ continue
fence_struct = {}
if levels[i] != None:
level = levels[i]
@@ -2177,6 +2338,8 @@
for kid in kids:
name = kid.getName()
found_fd = False
+ if not i in map:
+ continue
for entry in map[i]:
if entry['name'] == name:
fence_struct = entry
@@ -2213,18 +2376,36 @@
return map
+def getLogsForNode(self, request):
+ try:
+ nodename = request['nodename']
+ except KeyError, e:
+ try:
+ nodename = request.form['nodename']
+ except:
+ return "Unable to determine the node name needed to retrieve logging information"
+ try:
+ clustername = request['clustername']
+ except KeyError, e:
+ try:
+ clustername = request.form['clusterName']
+ except:
+ return "Unable to resolve node name %s to retrieve logging information" % nodename
-def getLogsForNode(self, request):
- nodename = request['nodename']
- clustername = request['clustername']
- try:
- nodename_resolved = resolve_nodename(self, clustername, nodename)
- except:
- return "Unable to resolve node name %s to retrieve logging information" % nodename
+ try:
+ nodename_resolved = resolve_nodename(self, clustername, nodename)
+ except:
+ return "Unable to resolve node name %s to retrieve logging information" % nodename
- rb = ricci_bridge(nodename_resolved)
- return rb.getNodeLogs()
+ try:
+ rc = RicciCommunicator(nodename_resolved)
+ if not rc:
+ raise
+ except:
+ return "Unable to resolve node name %s to retrieve logging information" % nodename_resolved
+
+ return getNodeLogs(rc)
def isClusterBusy(self, req):
items = None
@@ -2233,14 +2414,34 @@
redirect_message = False
nodereports = list()
map['nodereports'] = nodereports
- cluname = req['clustername']
+
+ try:
+ cluname = req['clustername']
+ except KeyError, e:
+ try:
+ cluname = req.form['clustername']
+ except:
+ try:
+ cluname = req.form['clusterName']
+ except:
+ return map
+
path = CLUSTER_FOLDER_PATH + cluname
- clusterfolder = self.restrictedTraverse(path)
- items = clusterfolder.objectItems('ManagedSystem')
- if len(items) == 0:
- return map #This returns an empty map, and should indicate not busy
- else:
- map['busy'] = "true"
+ try:
+ clusterfolder = self.restrictedTraverse(str(path))
+ if not clusterfolder:
+ raise
+ except:
+ return map
+
+ try:
+ items = clusterfolder.objectItems('ManagedSystem')
+ if len(items) == 0:
+ return map #This returns an empty map, and should indicate not busy
+ except:
+ return map
+
+ map['busy'] = "true"
#Ok, here is what is going on...if there is an item,
#we need to call the ricci_bridge and get a batch report.
#This report will tell us one of three things:
@@ -2352,8 +2553,8 @@
node_report = {}
node_report['isnodecreation'] = False
ricci = item[0].split("____") #This removes the 'flag' suffix
- rb = ricci_bridge(ricci[0])
- finished = rb.checkBatch(item[1].getProperty(BATCH_ID))
+ rc = RicciCommunicator(ricci[0])
+ finished = checkBatch(rc, item[1].getProperty(BATCH_ID))
if finished == True:
node_report['desc'] = item[1].getProperty(FLAG_DESC) + REDIRECT_MSG
nodereports.append(node_report)
@@ -2381,38 +2582,17 @@
map['refreshurl'] = '5; url=\".\"'
return map
-def getClusterOS(self, ragent, request):
- try:
- clustername = request['clustername']
- except KeyError, e:
- try:
- clustername = request.form['clustername']
- except:
- return {}
- except:
- return {}
-
- try:
- ricci_agent = resolve_nodename(self, clustername, ragent)
- except:
- map = {}
- map['os'] = ""
- map['isVirtualized'] = False
- return map
-
- try:
- rc = RicciCommunicator(ricci_agent)
- except:
- map = {}
- map['os'] = ""
- map['isVirtualized'] = False
- return map
-
- map = {}
- os_str = resolveOSType(rc.os())
- map['os'] = os_str
- map['isVirtualized'] = rc.dom0()
- return map
+def getClusterOS(self, rc):
+ map = {}
+ try:
+ os_str = resolveOSType(rc.os())
+ map['os'] = os_str
+ map['isVirtualized'] = rc.dom0()
+ except:
+ # default to rhel5 if something crazy happened.
+ map['os'] = 'rhel5'
+ map['isVirtualized'] = False
+ return map
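The reworked `getClusterOS` falls back to sane defaults instead of failing when the agent cannot report its OS. Sketched below with a stub communicator; `FakeRC` and the `resolve_os_type` mapping are assumptions for illustration, not luci's real `resolveOSType`:

```python
def resolve_os_type(os_string):
    # Hypothetical mapping; the real resolveOSType lives elsewhere in luci.
    return 'rhel5' if 'Tikanga' in os_string else 'rhel4'

def get_cluster_os(rc):
    m = {}
    try:
        m['os'] = resolve_os_type(rc.os())
        m['isVirtualized'] = rc.dom0()
    except Exception:
        # default to rhel5 if something crazy happened
        m['os'] = 'rhel5'
        m['isVirtualized'] = False
    return m

class FakeRC(object):
    """Stub for RicciCommunicator, for illustration only."""
    def os(self):
        return 'Red Hat Enterprise Linux Server release 5 (Tikanga)'
    def dom0(self):
        return False
```

Passing any object without `os()`/`dom0()` exercises the fallback path.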
def getResourcesInfo(modelb, request):
resList = list()
@@ -2477,41 +2657,73 @@
except:
return {}
-def delResource(self, request, ragent):
- modelb = request.SESSION.get('model')
- resPtr = modelb.getResourcesPtr()
- resources = resPtr.getChildren()
- name = request['resourcename']
- for res in resources:
- if res.getName() == name:
- resPtr.removeChild(res)
- break
+def delResource(self, rc, request):
+ errstr = 'An error occurred while attempting to set the cluster.conf'
- modelstr = ""
- conf = modelb.exportModelAsString()
- rb = ricci_bridge(ragent)
- #try:
- if True:
- batch_number, result = rb.setClusterConf(str(conf))
- #except:
- else:
- return "Some error occured in setClusterConf\n"
+ try:
+ modelb = request.SESSION.get('model')
+ except:
+ return errstr
- clustername = request['clustername']
- path = CLUSTER_FOLDER_PATH + clustername
- clusterfolder = self.restrictedTraverse(path)
- batch_id = str(batch_number)
- objname = ragent + "____flag"
- clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
- #Now we need to annotate the new DB object
- objpath = path + "/" + objname
- flag = self.restrictedTraverse(objpath)
- flag.manage_addProperty(BATCH_ID,batch_id, "string")
- flag.manage_addProperty(TASKTYPE,RESOURCE_REMOVE, "string")
- flag.manage_addProperty(FLAG_DESC,"Removing Resource \'" + request['resourcename'] + "\'", "string")
+ try:
+ name = request['resourcename']
+ except KeyError, e:
+ return errstr + ': ' + str(e)
+
+ try:
+ clustername = request['clustername']
+ except KeyError, e:
+ try:
+ clustername = request.form['clustername']
+ except:
+ return errstr + ': could not determine the cluster name.'
+
+ try:
+ ragent = rc.hostname()
+ if not ragent:
+ raise
+ except:
+ return errstr
+
+ resPtr = modelb.getResourcesPtr()
+ resources = resPtr.getChildren()
+
+ found = 0
+ for res in resources:
+ if res.getName() == name:
+ resPtr.removeChild(res)
+ found = 1
+ break
+
+ if not found:
+ return errstr + ': the specified resource was not found.'
+
+ try:
+ conf = modelb.exportModelAsString()
+ if not conf:
+ raise
+ except:
+ return errstr
+
+ batch_number, result = setClusterConf(rc, str(conf))
+ if batch_number is None or result is None:
+ return errstr
+
+ modelstr = ""
+ path = CLUSTER_FOLDER_PATH + str(clustername)
+ clusterfolder = self.restrictedTraverse(path)
+ batch_id = str(batch_number)
+ objname = str(ragent) + '____flag'
+ clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
+ #Now we need to annotate the new DB object
+ objpath = str(path + '/' + objname)
+ flag = self.restrictedTraverse(objpath)
+ flag.manage_addProperty(BATCH_ID, batch_id, "string")
+ flag.manage_addProperty(TASKTYPE, RESOURCE_REMOVE, "string")
+ flag.manage_addProperty(FLAG_DESC, "Removing Resource \'" + request['resourcename'] + "\'", "string")
- response = request.RESPONSE
- response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
+ response = request.RESPONSE
+ response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
def addIp(request):
modelb = request.SESSION.get('model')
@@ -2973,16 +3185,24 @@
messages = list()
for i in missing_list:
cluster_node.delObjects([i])
+ ## or alternatively
+ #new_node = cluster_node.restrictedTraverse(i)
+ #setNodeFlag(self, new_node, CLUSTER_NODE_NOT_MEMBER)
messages.append('Node \"' + i + '\" is no longer a member of cluster \"' + clusterName + '.\" It has been deleted from the management interface for this cluster.')
+ new_flags = CLUSTER_NODE_NEED_AUTH | CLUSTER_NODE_ADDED
for i in new_list:
- cluster_node.manage_addFolder(i, '__luci__:csystem:' + clusterName)
- cluster_node.manage_addProperty('exceptions', 'auth', 'string')
- messages.append('A new node, \"' + i + ',\" is now a member of cluster \"' + clusterName + '.\". It has added to the management interface for this cluster, but you must authenticate to it in order for it to be fully functional.')
+ try:
+ cluster_node.manage_addFolder(i, '__luci__:csystem:' + clusterName)
+ new_node = cluster_node.restrictedTraverse(i)
+ setNodeFlag(self, new_node, new_flags)
+ messages.append('A new node, \"' + i + ',\" is now a member of cluster \"' + clusterName + '.\" It has been added to the management interface for this cluster, but you must authenticate to it in order for it to be fully functional.')
+ except:
+ messages.append('A new node, \"' + i + ',\" is now a member of cluster \"' + clusterName + ',\" but has not been added to the management interface for this cluster because of an error creating the database entry.')
return messages
-def addResource(self, request, ragent):
+def addResource(self, rc, request):
if not request.form:
return (False, {'errors': ['No form was submitted.']})
@@ -3000,47 +3220,57 @@
if request.form['type'] != 'ip':
return (False, {'errors': ['No resource name was given.']})
+ try:
+ clustername = request['clustername']
+ except KeyError, e:
+ try:
+ clustername = request.form['clustername']
+ except:
+ return 'Unable to determine the current cluster\'s name'
+
res = resourceAddHandler[type](request)
modelb = request.SESSION.get('model')
modelstr = ""
conf = modelb.exportModelAsString()
- rb = ricci_bridge(ragent)
- #try:
- if True:
- batch_number, result = rb.setClusterConf(str(conf))
- #except:
- else:
+
+ try:
+ ragent = rc.hostname()
+ if not ragent:
+ raise
+ batch_number, result = setClusterConf(rc, str(conf))
+ if batch_number is None or result is None:
+ raise
+ except:
return "Some error occurred in setClusterConf\n"
- clustername = request['clustername']
- path = CLUSTER_FOLDER_PATH + clustername
+ path = str(CLUSTER_FOLDER_PATH + clustername)
clusterfolder = self.restrictedTraverse(path)
batch_id = str(batch_number)
- objname = ragent + "____flag"
+ objname = str(ragent + '____flag')
clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
#Now we need to annotate the new DB object
- objpath = path + "/" + objname
+ objpath = str(path + '/' + objname)
flag = self.restrictedTraverse(objpath)
- flag.manage_addProperty(BATCH_ID,batch_id, "string")
- flag.manage_addProperty(TASKTYPE,RESOURCE_ADD, "string")
+ flag.manage_addProperty(BATCH_ID, batch_id, "string")
+ flag.manage_addProperty(TASKTYPE, RESOURCE_ADD, "string")
+
if type != 'ip':
- flag.manage_addProperty(FLAG_DESC,"Creating New Resource \'" + request.form['resourceName'] + "\'", "string")
+ flag.manage_addProperty(FLAG_DESC, "Creating New Resource \'" + request.form['resourceName'] + "\'", "string")
else:
- flag.manage_addProperty(FLAG_DESC,"Creating New Resource \'" + res.attr_hash['address'] + "\'", "string")
+ flag.manage_addProperty(FLAG_DESC, "Creating New Resource \'" + res.attr_hash['address'] + "\'", "string")
+
response = request.RESPONSE
response.redirect(request['HTTP_REFERER'] + "&busyfirst=true")
def getResourceForEdit(modelb, name):
-	resPtr = modelb.getResourcesPtr()
-	resources = resPtr.getChildren()
-
-	for res in resources:
-		if res.getName() == name:
-			resPtr.removeChild(res)
-			break
-
-	return res
+	resPtr = modelb.getResourcesPtr()
+	resources = resPtr.getChildren()
+	for res in resources:
+		if res.getName() == name:
+			resPtr.removeChild(res)
+			return res
+	raise KeyError, name
def appendModel(request, model):
	try:
@@ -3049,26 +3279,35 @@
		pass
def resolve_nodename(self, clustername, nodename):
-	path = CLUSTER_FOLDER_PATH + clustername
-	clusterfolder = self.restrictedTraverse(path)
-	objs = clusterfolder.objectItems('Folder')
-	for obj in objs:
-		if obj[0].find(nodename) != (-1):
-			return obj[0]
-
-	raise
+	path = CLUSTER_FOLDER_PATH + clustername
+	clusterfolder = self.restrictedTraverse(path)
+	objs = clusterfolder.objectItems('Folder')
+	for obj in objs:
+		if obj[0].find(nodename) != (-1):
+			return obj[0]
+	raise
def noNodeFlagsPresent(self, nodefolder, flagname, hostname):
-	items = nodefolder.objectItems()
-	for item in items:
-		if item[0] == flagname: #a flag already exists...
-			#try and delete it
-			rb = ricci_bridge(hostname)
-			finished = rb.checkBatch(item[1].getProperty(BATCH_ID))
-			if finished == True:
-				nodefolder.manage_delObjects(item[0])
-				return True
-			else:
-				return False #Not finished, so cannot remove flag
-	return True
+	items = nodefolder.objectItems('ManagedSystem')
+	for item in items:
+		if item[0] != flagname:
+			continue
+
+		#a flag already exists... try to delete it
+		rc = RicciCommunicator(hostname)
+		finished = checkBatch(rc, item[1].getProperty(BATCH_ID))
+		if finished == True:
+			try:
+				nodefolder.manage_delObjects(item[0])
+			except:
+				return False
+			return True
+		else:
+			#Not finished, so cannot remove flag
+			return False
+	return True
+
+def getModelBuilder(rc):
+	cluster_conf_node = getClusterConf(rc)
+	return ModelBuilder(0, None, None, cluster_conf_node)
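The getResourceForEdit change in the form-macros hunk above replaces the old break-then-return (which could return an unbound or wrong loop variable when no resource matched) with an early return and an explicit KeyError. A minimal standalone sketch of that remove-and-return pattern, using hypothetical Resource/ResourceList stand-ins for the real ModelBuilder types:

```python
# Sketch of the remove-and-return pattern from getResourceForEdit.
# Resource and ResourceList are hypothetical stand-ins for the
# ModelBuilder resource types referenced in the patch.

class Resource(object):
    def __init__(self, name):
        self._name = name

    def getName(self):
        return self._name

class ResourceList(object):
    def __init__(self, resources):
        self._children = list(resources)

    def getChildren(self):
        # Return a copy so callers can remove children while iterating.
        return list(self._children)

    def removeChild(self, res):
        self._children.remove(res)

def getResourceForEdit(res_list, name):
    # Detach the named resource so it can be edited and re-added;
    # raise KeyError when nothing matches, instead of falling through
    # and returning whatever the loop variable last held.
    for res in res_list.getChildren():
        if res.getName() == name:
            res_list.removeChild(res)
            return res
    raise KeyError(name)
```

The KeyError makes a missing resource an explicit failure at the call site rather than a silently wrong return value.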
--- conga/luci/site/luci/Extensions/Variable.py 2006/10/15 22:34:54 1.2
+++ conga/luci/site/luci/Extensions/Variable.py 2006/10/16 04:26:19 1.3
@@ -85,7 +85,6 @@
		self.__name = str(name)
		self.__mods = mods
		self.set_value(value)
-		return
	def get_name(self):
		return self.__name
--- conga/luci/site/luci/Extensions/PropsObject.py 2006/05/30 20:17:21 1.1
+++ conga/luci/site/luci/Extensions/PropsObject.py 2006/10/16 04:26:19 1.2
@@ -10,7 +10,6 @@
	def __init__(self):
		self.__vars = {}
-		return
	def add_prop(self, variable):
		self.__vars[variable.get_name()] = variable
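The noNodeFlagsPresent rewrite in the form-macros hunk inverts the match test to reduce nesting and guards manage_delObjects with a try/except: a stale flag object may be removed only once its ricci batch has finished. A minimal sketch of that control flow, with the Zope folder and ricci batch check abstracted into hypothetical callables:

```python
# Sketch of the noNodeFlagsPresent control flow. items stands in for
# the Zope folder's objectItems() result; batch_finished and delete
# are hypothetical stand-ins for checkBatch() and manage_delObjects().

def no_node_flags_present(items, flagname, batch_finished, delete):
    for name, obj in items:
        if name != flagname:
            continue

        # A flag already exists; it may be removed only when the
        # batch it tracks has completed on the remote node.
        if batch_finished(obj):
            try:
                delete(name)
            except Exception:
                return False  # deletion failed, flag still present
            return True
        return False  # batch still running, cannot remove the flag
    return True  # no matching flag present
```

Returning False while the batch is unfinished is what prevents a second, conflicting operation from being queued against the same node.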