Message-ID: <4BD0972E.9080709@codemonkey.ws>
Date: Thu, 22 Apr 2010 13:36:30 -0500
From: Anthony Liguori
Subject: Re: [Qemu-devel] Locking block devices for concurrent access?
References: <4BA1323F.6080501@msgid.tls.msk.ru>
In-Reply-To: <4BA1323F.6080501@msgid.tls.msk.ru>
List-Id: qemu-devel.nongnu.org
To: Michael Tokarev
Cc: qemu-devel, "Richard W.M. Jones"

On 03/17/2010 02:49 PM, Michael Tokarev wrote:
> I remember a quite long discussion about this issue a while back.
> Unfortunately, (a) I can't find it now, and (b) as far as I remember,
> no definitive solution was presented at that time. So I thought it's
> OK to ask again to get a more conclusive answer...
>
> The original problem is that qemu currently makes no attempt to
> prevent concurrent access to the same "virtual disk" by multiple
> qemu instances; it will also happily pass a filesystem mounted on
> the host through to a guest.
>
> I understand pretty well that there are valid use cases for multiple
> qemu guests having the same block device (file, whatever) open at the
> same time, even in read-write mode (though it is still not quite safe
> for formats with structure, such as qcow). There are cluster
> filesystems out there which work on shared storage devices.
>
> But the thing is that in almost all "usual" cases a non-cluster
> filesystem will be used in the guests, and it'd be _very_ useful for
> qemu to at least try to warn the user that the given device is
> already in use. It is quite easy to trash a guest filesystem by
> "mounting" the same "device" in two different guests at the same
> time (or in the host and in a guest simultaneously, for that matter).
> I've run into this myself more than once, and other people have hit
> the same trap.
>
> I also understand that there are qcow[2] base images which need to
> be opened in a different locking mode, since they're usually
> read-only; and even there, it'd be a good idea to ensure that the
> base image is not already open in RW mode, or that it WILL not be
> opened RW while we're basing on it. Or something like that, anyway.
>
> In the discussion I mentioned (the one I can't find), there was a
> proposal to add an argument like "share-mode" or "lock" to the
> -drive foo=bar,xyz=asdf parameter list, with values from the set
> "none", "shared", "exclusive". But what I can't remember is what
> the conclusion was...
>
> Can we please have some summary of where it all sits nowadays?
I think we got to the point where there was general agreement on the
usefulness of lock=read|write, but there was still some contention
over this whole notion of lock=exclusive|shared.

I believe Richard Jones was driving the original patch, and his use
case was libguestfs, which really wants lock=exclusive (sort of). But
IMHO, it's very confusing compared to lock=read|write.

If someone did a lock=read|write patch, I think it would be applied
without much fuss. For lock=exclusive|shared, I think we would need
to think a bit more about the use cases.

Regards,

Anthony Liguori

> Thank you!
>
> /mjt
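A minimal sketch of how lock=read|write could map onto POSIX advisory
locks, assuming the -drive lock= syntax discussed in the thread (the
option name, DriveLockMode, and drive_lock_image below are
illustrative only, not QEMU's actual implementation): lock=read takes
a shared lock, lock=write an exclusive one, so a second instance
started with e.g. -drive file=disk.img,lock=write would fail if the
image is already in use.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    typedef enum {
        DRIVE_LOCK_NONE,   /* no locking: today's behaviour */
        DRIVE_LOCK_READ,   /* shared lock: many readers, no writer */
        DRIVE_LOCK_WRITE,  /* exclusive lock: single writer */
    } DriveLockMode;

    /* Try to take an advisory lock covering the whole image file.
     * Returns 0 on success, -1 if another process already holds a
     * conflicting lock.  Hypothetical helper, not a QEMU function. */
    static int drive_lock_image(int fd, DriveLockMode mode)
    {
        struct flock fl;

        if (mode == DRIVE_LOCK_NONE) {
            return 0;
        }

        memset(&fl, 0, sizeof(fl));
        fl.l_type = (mode == DRIVE_LOCK_READ) ? F_RDLCK : F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;              /* 0 = lock the whole file */

        if (fcntl(fd, F_SETLK, &fl) == -1) {
            return -1;             /* EACCES/EAGAIN: already locked */
        }
        return 0;
    }

    int main(int argc, char **argv)
    {
        int fd;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <image>\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (drive_lock_image(fd, DRIVE_LOCK_WRITE) < 0) {
            fprintf(stderr, "%s: image already in use\n", argv[1]);
            return 1;
        }
        /* ... run with the image; the lock is dropped when the fd
         * is closed or the process exits ... */
        close(fd);
        return 0;
    }

Under this scheme any number of guests could hold a shared lock on a
qcow[2] backing file while an attempt to lock it for write fails,
which covers the base-image case Michael describes. The locks are
advisory, though: they only protect against other cooperating qemu
instances, not against the host mounting the filesystem directly.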