From: rmccabe@sourceware.org <rmccabe@sourceware.org>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] conga ./conga.spec.in.in luci/cluster/form-mac ...
Date: 8 Dec 2006 20:47:38 -0000
Message-ID: <20061208204738.8972.qmail@sourceware.org>
CVSROOT: /cvs/cluster
Module name: conga
Changes by: rmccabe at sourceware.org 2006-12-08 20:47:37
Modified files:
. : conga.spec.in.in
luci/cluster : form-macros
luci/homebase : validate_cluster_add.js
luci/site/luci/Extensions: cluster_adapters.py
Log message:
- more fence fixes
- fix for most of the "add node fails" bug
- fix for 218964
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/conga.spec.in.in.diff?cvsroot=cluster&r1=1.57&r2=1.58
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&r1=1.125&r2=1.126
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/validate_cluster_add.js.diff?cvsroot=cluster&r1=1.4&r2=1.5
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&r1=1.176&r2=1.177
--- conga/conga.spec.in.in 2006/12/06 23:03:35 1.57
+++ conga/conga.spec.in.in 2006/12/08 20:47:37 1.58
@@ -284,7 +284,7 @@
%changelog
-* Wed Dec 06 2006 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
+* Wed Dec 06 2006 Stanko Kupcevic <kupcevic@redhat.com> 0.9.1-2
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXX UPDATE NOT RELEASED YET XXXXXXXXXXXXXXXXXXX
--- conga/luci/cluster/form-macros 2006/12/06 22:44:18 1.125
+++ conga/luci/cluster/form-macros 2006/12/08 20:47:37 1.126
@@ -1325,7 +1325,7 @@
<td>Hostname</td>
<td>
<input name="hostname" type="text"
- tal:attributes="value cur_fencedev/hostname | nothing" />
+ tal:attributes="value cur_fencedev/hostname | cur_fencedev/ipaddr | nothing" />
</td>
</tr>
<tr>
@@ -2299,9 +2299,7 @@
global nodestatus python: here.getClusterStatus(request, ricci_agent);
global nodeinfo python: here.getNodeInfo(modelb, nodestatus, request);
global status_class python: 'node_' + (nodeinfo['nodestate'] == '0' and 'active' or (nodeinfo['nodestate'] == '1' and 'inactive' or 'unknown'));
- global cluster_node_status_str python: (nodeinfo['nodestate'] == '0' and 'Cluster member' or (nodeinfo['nodestate'] == '1' and 'Currently not a cluster participant' or 'This node is not responding'));
- global fenceinfo python: here.getFenceInfo(modelb, request);
- global fencedevinfo python: here.getFencesInfo(modelb, request)"
+ global cluster_node_status_str python: (nodeinfo['nodestate'] == '0' and 'Cluster member' or (nodeinfo['nodestate'] == '1' and 'Currently not a cluster participant' or 'This node is not responding'))"
/>
<table class="cluster node" width="100%">
@@ -2443,6 +2441,17 @@
<tal:block metal:use-macro="here/form-macros/macros/fence-form-list" />
</div>
+ <tal:block tal:define="
+ global fenceinfo python: here.getFenceInfo(modelb, request);
+ global fencedevinfo python: here.getFencesInfo(modelb, request)" />
+
+ <div>
+ fenceinfo:
+ <span tal:replace="fenceinfo" /><br/>
+ fencedevinfo:
+ <span tal:replace="fencedevinfo" />
+ </div>
+
<div class="invisible" id="shared_fence_devices">
<tal:block tal:repeat="cur_fencedev fencedevinfo/fencedevs">
<tal:block metal:use-macro="here/form-macros/macros/shared-fence-device-list" />
@@ -2494,33 +2503,57 @@
</tr>
<tr class="cluster node info_top fence">
- <td class="cluster node fence_main fence"><div class="fence_container">
- <div id="fence_list_level1">
- <tal:comment tal:replace="nothing">
- XXX - fill in any existing fence devices for this node
- and update the counter number for this level
- </tal:comment>
+ <td class="cluster node fence_main fence">
+ <div class="fence_container">
+ <div id="fence_list_level1" tal:define="global cur_fence_num python: 0">
+ <tal:block tal:condition="exists: fenceinfo/level1">
+ <tal:block tal:repeat="cur_fencedev fenceinfo/level1">
+ <tal:block tal:define="
+ cur_fence_type cur_fencedev/agent | nothing;
+ cur_fence_level python: 1;">
+ <div tal:attributes="id python: 'fence1_' + str(cur_fence_num)">
+ <tal:block
+ metal:use-macro="here/form-macros/macros/fencedev-cond-ladder" />
+ </div>
+ </tal:block>
+ <tal:block tal:define="global cur_fence_num python: cur_fence_num + 1" />
+ </tal:block>
+ </tal:block>
+ <tal:block
+ tal:replace="structure python: '<script type='+chr(0x22)+'text/javascript'+chr(0x22)+'>num_fences_level[0] = ' + str(cur_fence_num) + ';</script>'" />
</div>
<div class="fence_control">
<input type="button" value="Add a fence to this level"
onclick="add_node_fence_device(1);" />
</div>
- </div></td>
+ </div>
+ </td>
- <td class="cluster node fence_main fence"><div class="fence_container">
- <div id="fence_list_level2">
- <tal:comment tal:replace="nothing">
- XXX - fill in any existing fence devices for this node
- and update the counter number for this level
- </tal:comment>
+ <td class="cluster node fence_main fence">
+ <div class="fence_container">
+ <div id="fence_list_level2" tal:define="global cur_fence_num python: 0">
+ <tal:block tal:condition="exists: fenceinfo/level2">
+ <tal:block tal:repeat="cur_fencedev fenceinfo/level2">
+ <tal:block tal:define="cur_fence_type cur_fencedev/agent | nothing">
+ <div tal:attributes="id python: 'fence2_' + str(cur_fence_num)">
+ <tal:block
+ metal:use-macro="here/form-macros/macros/fencedev-cond-ladder" />
+ </div>
+ </tal:block>
+ <tal:block tal:define="global cur_fence_num python: cur_fence_num + 1" />
+ </tal:block>
+ </tal:block>
+ <tal:block
+ tal:replace="structure python: '<script type='+chr(0x22)+'text/javascript'+chr(0x22)+'>num_fences_level[1] = ' + str(cur_fence_num) + ';</script>'" />
</div>
<div class="fence_control">
<input type="button" value="Add a fence to this level"
onclick="add_node_fence_device(2)" />
</div>
- </div></td>
+ </div>
+ </td>
</tr>
</tbody>
</table>
@@ -2671,6 +2704,7 @@
<form name="adminform" action="" method="post">
<input name="numStorage" type="hidden" value="1" />
<input name="pagetype" type="hidden" value="15" />
+ <input name="addnode" type="hidden" value="1" />
<input type="hidden" name="clusterName"
tal:attributes="
value request/form/clusterName | request/clustername | nothing"
@@ -3430,20 +3464,7 @@
</tal:block>
</div>
-<div metal:define-macro="fencedev-form">
- <h2>Fence Device Form</h2>
-
- <div class="cluster fencedev">
- <tal:block tal:define="
- global cur_fencename request/fencename | nothing;
- global cur_cluster request/clustername | nothing;
- global cur_fence_type python: 'fence_apc'"/>
-
- <span tal:condition="cur_fencename">
- <span tal:define="global cur_fencedev python:here.getFence(modelb,request);
- global cur_fence_type cur_fencedev/agent"/>
- </span>
-
+<div metal:define-macro="fencedev-cond-ladder">
<tal:block tal:condition="python: cur_fence_type == 'fence_apc'">
<tal:block metal:use-macro="here/form-macros/macros/fence-form-apc" />
</tal:block>
@@ -3515,10 +3536,26 @@
<tal:block tal:condition="python: cur_fence_type == 'fence_manual'">
<tal:block metal:use-macro="here/form-macros/macros/fence-form-manual" />
</tal:block>
+</div>
- <div class="fence_submit">
- <input class="hbInput" type="button" value="Submit" name="Submit" />
- </div>
+
+<div metal:define-macro="fencedev-form">
+ <h2>Fence Device Form</h2>
+
+ <div class="cluster fencedev">
+ <tal:block tal:define="
+ global cur_fencename request/fencename | nothing;
+ global cur_cluster request/clustername | nothing;
+ global cur_fence_type python: 'fence_apc'"/>
+
+ <span tal:condition="cur_fencename">
+ <span tal:define="
+ global cur_fencedev python:here.getFence(modelb,request);
+ global cur_fence_type cur_fencedev/agent" />
+ </span>
+
+ <tal:block
+ metal:use-macro="here/form-macros/macros/fencedev-cond-ladder" />
</div>
</div>
--- conga/luci/homebase/validate_cluster_add.js 2006/09/27 22:49:09 1.4
+++ conga/luci/homebase/validate_cluster_add.js 2006/12/08 20:47:37 1.5
@@ -29,7 +29,13 @@
if (error_dialog(errors))
return (-1);
- if (confirm('Add the cluster \"' + clusterName + '\" to the Luci management interface?'))
+ var confirm_str = '';
+ if (form.addnode)
+ confirm_str = 'Add node' + (added_storage.length > 1 ? 's' : '') + ' to the \"' + clusterName + '\" cluster?';
+ else
+ confirm_str = 'Add the cluster \"' + clusterName + '\" to the Luci management interface?';
+
+ if (confirm(confirm_str))
form.submit();
return (0);
--- conga/luci/site/luci/Extensions/cluster_adapters.py 2006/12/06 22:44:18 1.176
+++ conga/luci/site/luci/Extensions/cluster_adapters.py 2006/12/08 20:47:37 1.177
@@ -280,6 +280,7 @@
if 'clusterName' in request.form:
clusterName = str(request.form['clusterName'])
else:
+ luci_log.debug_verbose('vACN00: no cluster name was given')
return (False, {'errors': [ 'Cluster name is missing'], 'requestResults': requestResults })
rhn_dl = 1
@@ -301,8 +302,9 @@
try:
numStorage = int(request.form['numStorage'])
if numStorage < 1:
- raise
- except:
+ raise Exception, 'no nodes were added'
+ except Exception, e:
+ luci_log.debug_verbose('vACN0: %s: %s' % (clusterName, str(e)))
errors.append('You must specify at least one node to add to the cluster')
return (False, {'errors': [ errors ], 'requestResults': requestResults })
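The hunk above replaces a bare `raise` with a descriptive exception that gets logged before the user-facing error is appended. A minimal sketch of that pattern (Python 3 syntax for illustration; the `validate_num_storage` helper, its form dict, and the stdlib `logging` stand-in for `luci_log` are assumptions, not the actual Luci API):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("luci")

def validate_num_storage(form, cluster_name):
    """Mirror of the numStorage check: return (ok, errors).

    Hypothetical stand-in for the real validator in cluster_adapters.py.
    """
    errors = []
    try:
        num_storage = int(form["numStorage"])
        if num_storage < 1:
            # Descriptive message instead of a bare `raise`, so the
            # debug log records *why* validation failed.
            raise ValueError("no nodes were added")
    except Exception as e:
        # Log the root cause, then report a friendly error to the user.
        log.debug("vACN0: %s: %s", cluster_name, e)
        errors.append("You must specify at least one node to add to the cluster")
        return (False, errors)
    return (True, errors)
```

A missing key, a non-numeric value, and a count below one all funnel into the same user-facing error, while the log line distinguishes them.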
@@ -313,34 +315,56 @@
try:
nodeList = cluster_properties['nodeList']
if len(nodeList) < 1:
- raise
- except:
+ raise Exception, 'no cluster nodes'
+ except Exception, e:
+ luci_log.debug_verbose('vACN1: %s: %s' % (clusterName, str(e)))
errors.append('You must specify at least one valid node to add to the cluster')
+ clusterObj = None
try:
clusterObj = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName)
cluster_os = clusterObj.manage_getProperty('cluster_os')
if not cluster_os:
- luci_log.debug('The cluster OS property is missing for cluster ' + clusterName)
- raise Exception, 'no cluster OS was found.'
+ raise Exception, 'no cluster OS was found in DB for %s' % clusterName
+ except Exception, e:
+ luci_log.debug_verbose('vACN2: %s: %s' % (clusterName, str(e)))
try:
- if len(filter(lambda x: x['os'] != cluster_os, nodeList)) > 0:
- raise Exception, 'different operating systems were detected.'
- except:
+ rc = getRicciAgent(self, clusterName)
+ if not rc:
+ raise Exception, 'cannot find a ricci agent for %s' % clusterName
+ cluster_os = getClusterOS(self, rc)['os']
+ if clusterObj is None:
+ try:
+ clusterObj = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName)
+ except:
+ pass
+
+ try:
+ clusterObj.manage_addProperty('cluster_os', cluster_os, 'string')
+ except:
+ pass
+ except Exception, e:
+ luci_log.debug_verbose('vACN3: %s: %s' % (clusterName, str(e)))
nodeUnauth(nodeList)
+ cluster_os = None
cluster_properties['isComplete'] = False
- errors.append('Cluster nodes must be running compatible operating systems.')
- except:
+ errors.append('Unable to determine the cluster OS for the ' + clusterName + ' cluster.')
+
+ try:
+ if cluster_os is None:
+ raise Exception, 'no cluster OS found for %s' % clusterName
+ if len(filter(lambda x: x['os'] != cluster_os, nodeList)) > 0:
+ raise Exception, 'different operating systems were detected.'
+ except Exception, e:
+ luci_log.debug_verbose('vACN4: %s: %s' % (clusterName, str(e)))
nodeUnauth(nodeList)
cluster_properties['isComplete'] = False
- errors.append('Unable to determine the cluster OS for the ' + clusterName + ' cluster.')
+ errors.append('Cluster nodes must be running compatible operating systems.')
if not cluster_properties['isComplete']:
return (False, {'errors': errors, 'requestResults': cluster_properties})
- i = 0
- while i < len(nodeList):
- clunode = nodeList[i]
+ for clunode in nodeList:
try:
batchNode = addClusterNodeBatch(clunode['os'],
clusterName,
@@ -350,9 +374,11 @@
False,
rhn_dl)
if not batchNode:
- raise
- del nodeList[i]
- except:
+ raise Exception, 'batchnode is None'
+ clunode['batchnode'] = batchNode
+ except Exception, e:
+ luci_log.debug_verbose('vACN5: node add for %s failed: %s' \
+ % (clunode['host'], str(e)))
clunode['errors'] = True
nodeUnauth(nodeList)
cluster_properties['isComplete'] = False
@@ -363,37 +389,42 @@
error = createClusterSystems(self, clusterName, nodeList)
if error:
+ luci_log.debug_verbose('vACN5a: %s: %s' % (clusterName, str(error)))
nodeUnauth(nodeList)
cluster_properties['isComplete'] = False
errors.append(error)
return (False, {'errors': errors, 'requestResults': cluster_properties})
batch_id_map = {}
- for i in nodeList:
- clunode = nodeList[i]
+ for clunode in nodeList:
success = True
try:
rc = RicciCommunicator(clunode['host'])
+ if not rc:
+ raise Exception, 'rc is None'
except Exception, e:
- luci_log.info('Unable to connect to the ricci daemon on host %s: %s'% (clunode['host'], str(e)))
+ nodeUnauth([clunode['host']])
success = False
+ luci_log.info('vACN6: Unable to connect to the ricci daemon on host %s: %s' % (clunode['host'], str(e)))
if success:
try:
- resultNode = rc.process_batch(batchNode, async=True)
+ resultNode = rc.process_batch(clunode['batchnode'], async=True)
batch_id_map[clunode['host']] = resultNode.getAttribute('batch_id')
- except:
+ except Exception, e:
+ nodeUnauth([clunode['host']])
success = False
+ luci_log.info('vACN7: %s: %s' % (clunode['host'], str(e)))
if not success:
- nodeUnauth(nodeList)
cluster_properties['isComplete'] = False
errors.append('An error occurred while attempting to add cluster node \"' + clunode['host'] + '\"')
- return (False, {'errors': errors, 'requestResults': cluster_properties})
- messages.append('Cluster join initiated for host \"' + clunode['host'] + '\"')
buildClusterCreateFlags(self, batch_id_map, clusterName)
+ if len(errors) > 0:
+ return (False, {'errors': errors, 'requestResults': cluster_properties})
+
response = request.RESPONSE
response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
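Taken together, the cluster_adapters.py changes move the node-add loop from "abort and de-authenticate everything on the first failure" to "record a per-node error, de-authenticate only the failed node, and keep going", returning the accumulated errors only after every node has been attempted. A minimal sketch of that accumulate-and-continue loop (the `connect` and `process` callables are hypothetical stand-ins for `RicciCommunicator()` and `rc.process_batch()`, not the real API):

```python
def add_nodes(nodes, connect, process):
    """Attempt every node; collect errors instead of stopping at the first.

    `connect` and `process` may each raise; a failure marks only that
    node and the loop continues.  Returns (batch_id_map, errors).
    """
    batch_id_map = {}
    errors = []
    for host in nodes:
        try:
            rc = connect(host)
            if rc is None:
                # Match the patch's explicit check instead of letting a
                # None slip through to process().
                raise RuntimeError("rc is None")
            batch_id_map[host] = process(rc)
        except Exception as e:
            # Only this node is reported failed; remaining nodes still run.
            errors.append('An error occurred while attempting to add '
                          'cluster node "%s": %s' % (host, e))
    return (batch_id_map, errors)
```

With this shape, one unreachable ricci daemon no longer aborts the join of every other node in the request, which is the substance of the "add node fails" fix.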