From mboxrd@z Thu Jan 1 00:00:00 1970
From: Simone Gotti
Date: Mon, 30 Apr 2007 22:14:15 +0200
Subject: [Cluster-devel] [RFC]Some toughts of future enhancements for rgmanager.
Message-ID: <1177964055.3738.45.camel@localhost>
List-Id:
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Hi all,

I'd like to share with you some thoughts on future developments of
rgmanager that I have in mind, before trying to code something that
isn't right or can't be approved for various reasons. These are only
thoughts, shaped also by my work on other cluster stacks, so they may
be quite silly. Feel free to say what you really think about them ;)

========================================================================
What I like:

*) The two concepts of service (resource group) and resource and their
clear separation (even if in rgmanager a first-level resource is itself
the service), which makes a service a container of N resources.

========================================================================
What I'd like to implement:

*) The ability to freely manage (start/stop/disable) a single resource
inside a service, with clusvcadm or another program. This would also
bring enhanced management (e.g. a resource that may be online only
between 9am and 6pm), and the ability to restart only the resource that
needs to be restarted when one of them fails.

Right now it is difficult (at least for me) to implement these features
with the current representation and resource management: I don't know
how to code stopping only one resource while keeping the implied and
forced dependencies, or the time-based example above.

========================================================================
How this can be done:

*) A different (and, to me, more logical) way to express the
dependencies and other constraints between the resources in a service.
Using the work that is already going on for service dependencies,
implement something similar for the resources inside a service. This
should also be simpler, as it would only need a require=yes|[no] flag
(a rough sketch follows at the end of this mail).

[*) Propagate resource status around the cluster, as is already done
for services.]

In this way I can also restart only the single failed resource (and its
dependents) instead of all the resources in the service.

========================================================================
This will force these changes:

*) Only two levels (first level: services; second level: the resources
in that service).

*) Multiple-instance resources must be forced to have a unique name, so
that dependencies can be clearly defined and managed.

*) Implied dependencies in the OCF scripts are not used anymore.

*) Obviously, this new behavior needs to be explicitly activated, to
maintain backward compatibility.

========================================================================
Example of possible configuration and behavior:
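
A minimal sketch of the flat, two-level layout described above,
assuming a hypothetical depends=/require= attribute pair and made-up
resource names (this only illustrates the idea, it is not proposed
final syntax):

    <service name="db_service">
        <ip name="db_ip" address="192.168.1.10"/>
        <fs name="db_fs" device="/dev/vg0/db" mountpoint="/var/lib/mysql"
            depends="db_ip" require="yes"/>
        <script name="db" file="/etc/init.d/mysqld"
            depends="db_fs" require="yes"/>
    </service>

Every resource sits directly under the service, has a unique name, and
declares its dependencies explicitly. require="yes" would make the
dependency mandatory (if db_fs fails, db is stopped/restarted with it),
while require="no" would make it best-effort. Per-resource management
could then become something like a hypothetical
"clusvcadm -d db_service:db_fs", stopping db_fs and its dependents
while leaving db_ip online.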