From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 7 Dec 2009 14:25:11 +0000
From: "Daniel P. Berrange"
Subject: Re: [Qemu-devel] [PATCH] Disk image shared and exclusive locks.
Message-ID: <20091207142511.GP24530@redhat.com>
References: <20091204165301.GA4167@amd.home.annexia.org> <20091207103908.GI2271@arachsys.com> <4B1D03E0.5080006@codemonkey.ws>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4B1D03E0.5080006@codemonkey.ws>
Reply-To: "Daniel P. Berrange"
List-Id: qemu-devel.nongnu.org
To: Anthony Liguori
Cc: Chris Webb, "Richard W.M. Jones", qemu-devel@nongnu.org

On Mon, Dec 07, 2009 at 07:32:16AM -0600, Anthony Liguori wrote:
> Chris Webb wrote:
> > Hi. There's a connected discussion on the sheepdog list about locking,
> > and I have a patch there which could complement this one quite well.
> >
> > Sheepdog is a distributed, replicated block store being developed
> > (primarily) for Qemu. Images have a mandatory exclusive locking
> > requirement, enforced by the cluster manager. Without this, the
> > replication scheme breaks down and you can end up with inconsistent
> > copies of the block image.
> >
> > The initial release of sheepdog took these locks in the block driver
> > bdrv_open() and bdrv_close() hooks. They also added a bdrv_closeall()
> > and ensured it was called in all the usual qemu exit paths to avoid
> > stray locks. (The rarer case of crashing hosts or crashing qemus will
> > have to be handled externally, and is 'to do'.)
> >
> > The problem was that this prevented live migration, because both ends
> > wanted to open the image at once, even though only one would be using
> > it at a time.
>
> Yeah, this is a bigger problem I think. Technically speaking, when
> using NFS as the backing filesystem, we really should not open the
> destination end before we close the source end to keep the caches fully
> coherent.
>
> I've resisted this because I'm concerned that if we delay the opening of
> the file on the destination, it could fail. That's a very late failure
> and that makes me uncomfortable as just a workaround for NFS.

The only other alternative would be for the destination to open the disks,
but not immediately acquire the locks. In the final stage of migration,
have the source release its lock and signal to the destination that it can
now acquire the lock. The assumption is that lock acquisition is far less
likely to fail than the open(), so we focus on making sure we can properly
handle open() failure.

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-        http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|