* [Cluster-devel] [RFC][PATCH] Unique resource name handling for rgmanager.
From: Simone Gotti @ 2007-04-27 15:57 UTC (permalink / raw)
To: cluster-devel.redhat.com
Hi all,
Starting from the clarification quoted from Lon in bug 229650:
"instances of shared resources are expected to be able to operate
independently. That is, if one instance fails, it does not imply that
they all have failed. If it does, something is broken in the resource
agent and/or rgmanager. If it isn't possible to make the resource
instances completely independent of one-another, then the resource agent
should not define maxinstances > 1."
and thinking about a future ability of rgmanager to manage a single
resource inside a resource group in detail, I noticed some things:
*) restree.c:_res_op walks the whole resource tree (including resources
owned by another rg_thread) and starts working on the resource passed in
the "first" parameter.
This isn't a problem right now, because _res_op is always called with
the service name as "first".
But with rg_test, for example, you can test the stop/start/etc. of a
single resource inside a resource group. In that case, if I run
"rg_test test /etc/cluster/cluster.conf stop clusterfs gfs01"
and gfs01 has multiple instances on the same machine, it will stop all
the resources of type gfs01, with no way to stop only the one I want.
*) Since the upper layer of the resource tree can now also contain a
resource that is not of type service, I can have multiple instances of
it in the upper layer.
So, trying to leave the behavior of _res_op unchanged (I changed its
parameters anyway...), I thought (with the help of Lon on IRC):
1) Make it possible to start/stop just one instance of a
multiple-instance resource => call it with a unique name.
2) Implement this way of addressing a resource inside rgmanager. This
means working directly on the resource_node_t (and the resource it
points to) and not on a generic resource_t that can be found multiple
times in the tree; see the sketch right after this list.
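To show what I mean by working on the resource_node_t, here is a rough
sketch. The struct layout and helper names are invented for
illustration (they are not the real restree.c definitions); the point
is just that "type:primary[instance]" resolves to exactly one node:

#include <stddef.h>
#include <string.h>

struct res_node {
	const char *type;	/* e.g. "clusterfs" */
	const char *primary;	/* primary attribute value, e.g. "gfs01" */
	int instance;		/* per-type instance number */
	struct res_node *child;
	struct res_node *sibling;
};

/* Depth-first search for the single node matching type:primary[inst]. */
static struct res_node *
find_node(struct res_node *n, const char *type, const char *primary,
	  int inst)
{
	struct res_node *hit;

	for (; n; n = n->sibling) {
		if (strcmp(n->type, type) == 0 &&
		    strcmp(n->primary, primary) == 0 &&
		    n->instance == inst)
			return n;
		hit = find_node(n->child, type, primary, inst);
		if (hit)
			return hit;
	}
	return NULL;
}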
I wrote a basic example patch, which still needs a lot of work, just to
share it with you so we can all discuss its implementation together.
The resource's unique-name generation can be changed later without
touching the rest of the code that uses it; for now it's in this format:
$TYPE:$PRIMARYATTRVALUE[$INSTANCENUMBER]
where $INSTANCENUMBER is omitted if the resource type doesn't support
multiple instances or if only one instance of the resource exists
within the cluster.
For example, two instances of clusterfs:gfs01 will be called
clusterfs:gfs01[0] and clusterfs:gfs01[1].
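As a sketch, generating the name could look something like this (the
total_instances count is my assumption about how we'd decide whether
the suffix is needed):

#include <stdio.h>

/* Build "type:primary" or "type:primary[n]", depending on whether the
 * resource has more than one instance in the cluster. */
static void
res_unique_name(char *buf, size_t len, const char *type,
		const char *primary, int instance, int total_instances)
{
	if (total_instances > 1)
		snprintf(buf, len, "%s:%s[%d]", type, primary, instance);
	else
		snprintf(buf, len, "%s:%s", type, primary);
}

/* res_unique_name(buf, sizeof(buf), "clusterfs", "gfs01", 0, 2)
 * yields "clusterfs:gfs01[0]". */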
This kind of naming isn't stable across configuration changes: if I
change cluster.conf, the $INSTANCENUMBER can change. But I'll still be
able to find out a resource's new name, for example with rg_test.
Sorry for the long mail, but I hope everything is clear.
Thanks!
Bye!
--
Simone Gotti
-------------- next part --------------
A non-text attachment was scrubbed...
Name: rgmanager-cvsHEAD-res_operations-basicexample01.patch
Type: text/x-patch
Size: 20576 bytes
Desc: not available
URL: <http://listman.redhat.com/archives/cluster-devel/attachments/20070427/af53aabe/attachment.bin>
* [Cluster-devel] [RFC][PATCH] Unique resource name handling for rgmanager.
From: Lon Hohberger @ 2007-04-27 17:51 UTC (permalink / raw)
To: cluster-devel.redhat.com
So far, it's pretty good - the only thing I worry about is
group-reordering. For example, if we do:
<resources>
    <clusterfs name="foo" .../>
</resources>

<service name="1">
    <clusterfs ref="foo"/>
</service>

<service name="2">
    <clusterfs ref="foo"/> <!-- ref = 2 -->
</service>
...and we delete service 1 and add service 3 below service 2, we'll end
up with a different reference number for service 2:
<service name="2">
    <clusterfs ref="foo"/> <!-- ref = 1 now -->
</service>

<service name="3">
    <clusterfs ref="foo"/>
</service>
If we were using the ref number to track instances of the clusterfs
resource on disk, then the ref number '1' would disappear out from under
service:2 when we delete service:1.
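To make the failure concrete, here's a toy standalone illustration (not
rgmanager code; I'm numbering from 0 like the clusterfs:gfs01[0]
example, even though the comments above count from 1):

#include <stdio.h>

/* Toy model: the n-th <clusterfs ref="foo"/> seen while walking the
 * config gets refno n. */
static void number_refs(const char *services[], int count)
{
	int refno;

	for (refno = 0; refno < count; refno++)
		printf("  %s -> clusterfs:foo[%d]\n", services[refno], refno);
}

int main(void)
{
	const char *before[] = { "service:1", "service:2" };
	const char *after[]  = { "service:2", "service:3" };

	puts("before (service:2 holds refno 1):");
	number_refs(before, 2);
	puts("after deleting service:1 (service:2 silently moves to refno 0):");
	number_refs(after, 2);
	return 0;
}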
The patch looks right - except the way we decide 'refno'. That scares
me :)
To expand on what you started, there's an easy way to fix the
reordering problem: add a 'refno' attribute in cluster.conf for ref=
tags.
That is, the reorder case above can be eliminated if we simply require a
specific index attribute when doing ref=. For example:
<service name="2">
    <clusterfs ref="foo" refno="2" />
</service>

<service name="3">
    <clusterfs ref="foo" refno="3" />
</service>
Of course, this complicates cluster.conf parsing from rgmanager's
perspective, because now it has to keep track of indices (what if
someone forgot to add a refno?) - while a bit expensive, it shouldn't
be terribly difficult to do.
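Something like this bookkeeping is what I have in mind - purely
illustrative, not a proposed parser:

#define MAX_REFS 64

/* Track which refnos are taken for one shared resource. */
struct ref_tracker {
	unsigned char used[MAX_REFS];	/* used[n] != 0 => refno n taken */
};

/*
 * Return the refno to assign.  If cluster.conf supplied one
 * (explicit_refno >= 0), reject duplicates; if the admin forgot it
 * (explicit_refno < 0), hand out the lowest free number.
 */
static int claim_refno(struct ref_tracker *t, int explicit_refno)
{
	int n;

	if (explicit_refno >= 0) {
		if (explicit_refno >= MAX_REFS || t->used[explicit_refno])
			return -1;	/* duplicate or out of range */
		t->used[explicit_refno] = 1;
		return explicit_refno;
	}
	for (n = 0; n < MAX_REFS; n++) {
		if (!t->used[n]) {
			t->used[n] = 1;
			return n;
		}
	}
	return -1;			/* no free slot left */
}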
As an alternative, in order to avoid configuration changes as well as
adding new configuration options, we could very easily calculate hash or
CRC values based on configuration parameters and parent node attributes
(with some sort of conflict resolution if there's overlap, I guess) -
this would get us a position-independent per-type identifier. That is:
service:2[6fc1] (crc-16)
service:2[fe13c0d3] (crc-32)
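For instance (plain bitwise CRC-32 below; which attributes get fed in
is just my guess at what would make the identifier stable):

#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC-32 (reflected, poly 0xEDB88320), chainable across calls. */
static uint32_t crc32_str(uint32_t crc, const char *s)
{
	int i;

	crc = ~crc;
	while (*s) {
		crc ^= (uint8_t)*s++;
		for (i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
	}
	return ~crc;
}

int main(void)
{
	uint32_t id = 0;

	/* Feed in whatever identifies the instance independently of its
	 * position in cluster.conf: parent, type, primary attribute. */
	id = crc32_str(id, "service:2");
	id = crc32_str(id, "clusterfs");
	id = crc32_str(id, "foo");

	printf("service:2[%08x]\n", (unsigned)id);
	return 0;
}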
--
Lon Hohberger - Software Engineer - Red Hat, Inc.