From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Teigland
Date: Wed, 10 Feb 2016 15:16:30 -0600
Subject: [Cluster-devel] DLM Shutdown
In-Reply-To: 
References: <20160210173846.GA18813@redhat.com> <20160210201824.GA20841@redhat.com>
Message-ID: <20160210211630.GA21379@redhat.com>
List-Id: 
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

On Wed, Feb 10, 2016 at 09:38:58PM +0100, Andreas Gruenbacher wrote:
> On Wed, Feb 10, 2016 at 9:18 PM, David Teigland wrote:
> > On Wed, Feb 10, 2016 at 08:48:12PM +0100, Andreas Gruenbacher wrote:
> >> When a shutdown is requested, shouldn't dlm_controld really release
> >> lockspaces in a similar way as well?
> >
> > You could probably do that if you check that the lockspace is managing no
> > local locks (which would be a pain).  If locks exist you'd not want to do
> > that without a force option at least.
>
> Is that not what dlm_release_lockspace() with force set to false is
> supposed to do? It's weird that the operation may have to be repeated
> several times before the lockspace finally goes away; that could be
> improved with an additional flag to DLM_USER_REMOVE_LOCKSPACE.

OK, yes, but we've wandered into the weeds here.  dlm_controld isn't
involved in lockspace lifetimes; that's the application/libdlm side.
The question is what behavior the program creating/removing the lockspace
needs (and whether the program is for more than just testing.)