Message-ID: <48348795.3020806@codemonkey.ws>
Date: Wed, 21 May 2008 15:35:33 -0500
From: Anthony Liguori
Subject: Re: [Qemu-devel] Re: [PATCH][v2] Align file accesses with cache=off (O_DIRECT)
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
References: <20080521153454.GB20527@shareable.org> <48344793.2020902@codemonkey.ws> <20080521162406.GA21501@shareable.org> <48345258.9040004@qumranet.com> <20080521170129.GF22488@duo.random> <48345949.4050903@qumranet.com> <20080521174754.GG22488@duo.random> <483461B0.20709@codemonkey.ws> <20080521180852.GI22488@duo.random> <48346937.80408@codemonkey.ws> <20080521201335.GK22488@duo.random>
In-Reply-To: <20080521201335.GK22488@duo.random>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: Andrea Arcangeli
Cc: Laurent Vivier, Dave Hansen, qemu-devel@nongnu.org, Blue Swirl, Paul Brook

Andrea Arcangeli wrote:
>> As has been pointed out, this is probably not ideal since it would cause
>> heavy vma fragmentation. We may be able to simulate this using the slots
>> API, although slots are quite similar to vmas in that we optimize for a
>> small number of them.
>
> I'm quite sure remap_file_pages can be extended to work on
> MAP_PRIVATE. But I don't see a big benefit in sharing the ram
> between host and guest when having it in the guest is enough; this
> only works for reads anyway, and it can only share ram among
> different guests with -snapshot.

Or if multiple guests are using the same backing file (imagine each
guest has its own qcow file backed to a common one). Or if we had a
more advanced storage system that did something like content
addressable storage.

Regards,

Anthony Liguori

> So while it sounds like a clever trick, I doubt it's a worthwhile
> optimization; it has downsides, and the worst is that I don't see how
> we could extend this logic to work for writes, because the pagecache
> of the guest can't be written to disk before the dma is explicitly
> started in the guest.
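
For context on the call Andrea mentions, below is a minimal sketch of how
remap_file_pages() is used today: it only accepts MAP_SHARED file mappings,
and his suggestion is that the kernel could be extended to allow the same on
MAP_PRIVATE. This is not code from the patch under discussion; the file name
"disk.img" and the page offsets are made up purely for illustration.

/* Sketch: rearrange which file pages back a shared mapping without
 * creating extra vmas.  remap_file_pages() currently requires the
 * mapping to be MAP_SHARED. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    size_t len = 4 * pagesize;

    int fd = open("disk.img", O_RDONLY);   /* hypothetical backing file */
    if (fd < 0) { perror("open"); exit(1); }

    /* remap_file_pages() only works on a shared file mapping. */
    void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(1); }

    /* Point the first page of the window at file page 2 instead of
     * file page 0, in place; prot must be passed as 0. */
    if (remap_file_pages(p, pagesize, 0, 2, 0) < 0) {
        perror("remap_file_pages");
        exit(1);
    }

    munmap(p, len);
    close(fd);
    return 0;
}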