From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <479A4636.3050307@codemonkey.ws>
Date: Fri, 25 Jan 2008 14:27:34 -0600
From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH][RFC] To mount qemu disk image on the host
References: <1201264245.4114.42.camel@frecb07144> <4799FDBE.6030502@codemonkey.ws> <1201276153.4114.57.camel@frecb07144> <479A3E1B.2020500@amd.com>
In-Reply-To: <479A3E1B.2020500@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org
Cc: Laurent.Vivier@bull.net

Andre Przywara wrote:
> Laurent Vivier wrote:
>
>> What I'm wondering is how loop and device mapper can work?
>
> I briefly evaluated the loop device idea, but came to the conclusion
> that this is not so easy to implement (and would require qcow code in
> the kernel). I see only little chance for this to go upstream in
> Linux, and maintaining it out-of-tree is actually a bad idea.
I was recently poking around in the loop device and discovered that it
has a pluggable xfer ops structure to allow for encrypted loop devices.
My initial analysis was that by simply adding a couple of operations to
that structure (such as map_sector and get_size), you could very easily
write a kernel module that registered a set of xfer ops implementing
QCOW support. Of course, this would all be kernel code. The best
solution would be a proper userspace block device. I think it's a
pretty reasonable stop-gap though (and one that wouldn't be very
difficult to get merged upstream).

> If you think about deferring the qcow code to userland, you will
> sooner or later run into the same deadlock problems as the current
> solution (after all, this is what nbd does...)
>
> I have implemented a clean device-mapper solution; the big drawback
> is that it is read-only. It's a simple tool which converts the qcow
> map into a format suitable for dmsetup, to which the output can be
> piped directly. I will clean up the code and send it to the list ASAP.

You can only do something read-only with device mapper. dm-userspace
was an effort to work around that with a userspace daemon, but it
didn't move upstream as quickly as we would have liked.

Regards,

Anthony Liguori

> Read/write support is not that easy, but maybe someone can comment on
> this idea:
> Create a sparse file on the host which is as large as the number of
> all still-unallocated blocks. Assign these blocks via device mapper
> in addition to the already-allocated ones. When unmounting the dm
> device, look for blocks which have been changed, and allocate and
> write them into the qcow file. One could also use the bmap ioctl to
> scan for non-sparse blocks.
> This is a bit complicated, but should work cleanly (especially for
> the quick-fsck or file-editing case). If you find it worthwhile, I
> could try to implement it.
>
> Regards,
> Andre.
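The read-only converter Andre describes can be sketched roughly like
this (a purely illustrative Python sketch, not his actual tool: the toy
cluster map, the 4 KiB cluster size, and the image path are made-up
assumptions; a real converter would parse the qcow L1/L2 tables). It
maps allocated guest clusters to "linear" targets into the image file
and unallocated ones to the "zero" target, emitting one dmsetup table
line per cluster:

```python
# Hypothetical sketch: turn a qcow-style cluster map into a
# device-mapper table suitable for piping into `dmsetup create`.
# dm tables are expressed in 512-byte sectors.

SECTOR = 512

def qcow_map_to_dm_table(cluster_map, cluster_size, backing_file,
                         total_clusters):
    """Return 'start length target args' lines for a read-only mapping.

    cluster_map: dict of guest cluster index -> byte offset in the
    qcow file where that cluster's data lives (toy stand-in for the
    real L1/L2 lookup).
    """
    sectors_per_cluster = cluster_size // SECTOR
    lines = []
    for cluster in range(total_clusters):
        start = cluster * sectors_per_cluster
        if cluster in cluster_map:
            # Allocated cluster: map linearly into the image file.
            offset_sector = cluster_map[cluster] // SECTOR
            lines.append(f"{start} {sectors_per_cluster} linear "
                         f"{backing_file} {offset_sector}")
        else:
            # Unallocated cluster: reads return zeros, matching the
            # sparse semantics of unallocated qcow clusters.
            lines.append(f"{start} {sectors_per_cluster} zero")
    return lines

if __name__ == "__main__":
    # Toy map: clusters 0 and 2 allocated at byte offsets 64K and 128K.
    table = qcow_map_to_dm_table({0: 65536, 2: 131072}, 4096,
                                 "/tmp/disk.qcow", 3)
    print("\n".join(table))
```

The "zero" target silently discards writes, which is one more reason
this kind of mapping is only safe to use read-only.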