Message-ID: <4F4CC5AD.4020005@dlh.net>
Date: Tue, 28 Feb 2012 13:16:45 +0100
From: Peter Lieven
Subject: Re: [Qemu-devel] linux guests and ksm performance
To: Stefan Hajnoczi
Cc: Gleb Natapov, kvm@vger.kernel.org, qemu-devel@nongnu.org

On 28.02.2012 13:05, Stefan Hajnoczi wrote:
> On Tue, Feb 28, 2012 at 11:46 AM, Peter Lieven wrote:
>> On 24.02.2012 08:23, Stefan Hajnoczi wrote:
>>> On Fri, Feb 24, 2012 at 6:53 AM, Stefan Hajnoczi wrote:
>>>> On Fri, Feb 24, 2012 at 6:41 AM, Stefan Hajnoczi wrote:
>>>>> On Thu, Feb 23, 2012 at 7:08 PM, peter.lieven@gmail.com wrote:
>>>>>> Stefan Hajnoczi schrieb:
>>>>>>
>>>>>>> On Thu, Feb 23, 2012 at 3:40 PM, Peter Lieven wrote:
>>>>>>>> However, in a virtual machine I have not observed the above
>>>>>>>> slowdown to that extent, while the benefit of zero-after-free in
>>>>>>>> a virtualisation environment is obvious:
>>>>>>>>
>>>>>>>> 1) zero pages can easily be merged by ksm or other techniques.
>>>>>>>> 2) zero (dup) pages are a lot faster to transfer in case of
>>>>>>>> migration.
>>>>>>>
>>>>>>> The other approach is a memory page "discard" mechanism - which
>>>>>>> obviously requires more code changes than zeroing freed pages.
>>>>>>>
>>>>>>> The advantage is that we don't take the brute-force and
>>>>>>> CPU-intensive approach of zeroing pages. It would be like a
>>>>>>> fine-grained ballooning feature.
>>>>>>>
>>>>>> I don't think that it is CPU intensive. All user pages are zeroed
>>>>>> anyway; doing it at free time instead of at allocation time
>>>>>> shouldn't be a big difference in terms of CPU power.
>>>>> It's easy to find a scenario where eagerly zeroing pages is wasteful.
>>>>> Imagine a process that uses all of physical memory. Once it
>>>>> terminates, the system is going to run processes that only use a
>>>>> small set of pages. It's pointless zeroing all those pages if we're
>>>>> not going to use them anymore.
>>>> Perhaps the middle path is to zero pages but do it after a grace
>>>> timeout. I wonder if this helps eliminate the 2-3% slowdown you
>>>> noticed when compiling.
>>> Gah, it's too early in the morning. I don't think this timer actually
>>> makes sense.
>>
>> Do you think it then makes sense to make a patchset/proposal to notify
>> a guest kernel about the presence of ksm in the host and switch to
>> zero-after-free?
> I think your idea is interesting - whether or not people are happy
> with it will depend on the performance impact. It seems reasonable to
> me.

Could you support/help me in implementing and publishing this approach?

Peter
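
On point 1) above, a minimal userspace sketch (not QEMU code; it only
assumes a Linux host with CONFIG_KSM and KSM switched on via
/sys/kernel/mm/ksm/run) of why zero-filled pages are easy for KSM to
merge: any region registered with madvise(MADV_MERGEABLE), as QEMU does
for guest RAM, exposes identical pages to the KSM scanner, and pages a
guest has zeroed on free are all identical.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 256UL * 1024 * 1024;   /* 256 MB of anonymous memory */
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(buf, 0, len);                /* touch every page; all now identical */

    /* Register the region with KSM.  QEMU does the same for guest RAM,
     * which is why a guest that zeroes pages on free produces many
     * mergeable (zero) pages on the host. */
    if (madvise(buf, len, MADV_MERGEABLE) != 0) {
        perror("madvise(MADV_MERGEABLE)");
        return 1;
    }

    puts("sleeping; watch /sys/kernel/mm/ksm/pages_sharing on the host");
    pause();
    return 0;
}

With KSM scanning, pages_sharing climbs towards roughly len/PAGE_SIZE for
this region; without the madvise call the scanner ignores it entirely.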
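
On point 2), a simplified sketch of the duplicate-page test a migration
sender can apply (illustrative only, not the actual QEMU migration code):
a page whose bytes are all identical, which includes every zero page, can
be transmitted as a flag plus a single pattern byte instead of a full
4 KB payload.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Return true if every byte of the page equals its first byte.  Zero
 * pages produced by a zero-after-free guest always satisfy this test. */
static bool page_is_dup(const uint8_t *page, uint8_t *pattern)
{
    *pattern = page[0];
    for (size_t i = 1; i < PAGE_SIZE; i++) {
        if (page[i] != *pattern) {
            return false;   /* mixed content: the full page must be sent */
        }
    }
    return true;            /* send only a "dup page" flag plus *pattern */
}

For a guest whose free memory has been zeroed, most of RAM passes this
test, so each such page costs a few bytes on the wire instead of 4 KB -
which is the migration speedup described above.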
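
For the "discard" alternative, the host-side primitive already exists;
the sketch below is illustrative only (the notification channel from the
guest, and the guest_ram_base/offset bookkeeping, are the parts that
would need new code) and shows the madvise(MADV_DONTNEED) call that
ballooning-style discarding boils down to.

#include <stdio.h>
#include <sys/mman.h>

#define PAGE_SIZE 4096

/* Drop the backing memory of a range of guest RAM that the guest has
 * reported as unused.  Later guest accesses see zero-filled pages and
 * the host regains the physical memory immediately - no scanning and no
 * page-by-page zeroing on the guest side. */
static int discard_guest_range(void *guest_ram_base, size_t offset, size_t npages)
{
    void *addr = (char *)guest_ram_base + offset;

    if (madvise(addr, npages * PAGE_SIZE, MADV_DONTNEED) != 0) {
        perror("madvise(MADV_DONTNEED)");
        return -1;
    }
    return 0;
}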
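
Finally, a purely hypothetical sketch of what the guest side of the
proposed "host advertises KSM, guest switches to zero-after-free"
handshake might look like.  Neither the KVM_FEATURE_HOST_KSM bit nor a
zero_free_pages allocator knob exists today; both names are made up here
only to illustrate the shape of the patchset being discussed.

#include <linux/init.h>
#include <linux/printk.h>
#include <asm/kvm_para.h>

#define KVM_FEATURE_HOST_KSM  42        /* hypothetical CPUID feature bit */

extern bool zero_free_pages;            /* hypothetical page allocator knob */

static int __init detect_host_ksm(void)
{
    if (kvm_para_available() &&
        kvm_para_has_feature(KVM_FEATURE_HOST_KSM)) {
        /* The host says KSM is active: zeroing pages when they are freed
         * makes them mergeable on the host and cheap to migrate. */
        zero_free_pages = true;
        pr_info("host advertises KSM, enabling zero-after-free\n");
    }
    return 0;
}
early_initcall(detect_host_ksm);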