From: Anthony Liguori
Date: Mon, 22 Jun 2009 11:25:27 -0500
Subject: Re: [Qemu-devel] Re: [Qemu-commits] [COMMIT 3086844] Instead of writing a zero page, madvise it away
Message-ID: <4A3FB077.4040607@codemonkey.ws>
In-Reply-To: <4A3FAD69.60507@redhat.com>
References: <200906221549.n5MFn3Qd015389@d03av02.boulder.ibm.com> <4A3FAD69.60507@redhat.com>
To: Avi Kivity
Cc: Anthony Liguori, qemu-devel

Avi Kivity wrote:
> On 06/22/2009 06:51 PM, Anthony Liguori wrote:
>> From: Anthony Liguori
>>
>> Otherwise, after migration, we end up with a much larger RSS size
>> than we ought to have.
>
> We have the same issue on the migration source node. I don't see a
> simple way to solve it, though.

I don't follow. In this case, the issue is:

1) Start a guest with 1024MB of RAM, balloon down to 128MB. RSS size
   is now ~128MB.
2) Live migrate to a different node.
3) RSS on the destination node jumps to ~1GB.
4) Weep at all your lost memory.

Xen had a similar issue. This ends up biting people who overcommit
their VMs via ballooning: they live migrate, and badness ensues. At
least for us, the failure mode is swapping, but madvise also avoids
the issue by never consuming that memory to begin with.

Regards,

Anthony Liguori
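
For illustration, a minimal standalone sketch of the technique (this is
not the actual QEMU code from commit 3086844; apply_incoming_page and
page_is_zero are made-up stand-ins for the real migration path and
QEMU's zero-page check): on the incoming side, a page that arrives as
all zeroes is madvise()d away instead of being written, so the
destination never faults in backing memory for it.

#define _GNU_SOURCE /* for MAP_ANONYMOUS on older toolchains */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Simplified stand-in for the real zero-page detection. */
static int page_is_zero(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] != 0) {
            return 0;
        }
    }
    return 1;
}

/*
 * Apply one incoming migration page to guest RAM at `host`.
 * Assumes guest RAM is an anonymous mmap()ed region and that
 * `host` is page-aligned.
 */
static void apply_incoming_page(unsigned char *host,
                                const unsigned char *buf,
                                size_t page_size)
{
    if (page_is_zero(buf, page_size)) {
        /*
         * MADV_DONTNEED on anonymous memory drops the backing page;
         * a later read faults in a fresh zero page on demand, so the
         * destination's RSS does not grow for ballooned-away pages.
         */
        madvise(host, page_size, MADV_DONTNEED);
    } else {
        memcpy(host, buf, page_size);
    }
}

int main(void)
{
    const size_t psz = 4096;
    static unsigned char zero_page[4096]; /* all zeroes */

    unsigned char *ram = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) {
        return 1;
    }

    apply_incoming_page(ram, zero_page, psz);
    printf("first byte after madvise: %d\n", ram[0]);

    munmap(ram, psz);
    return 0;
}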