Date: Tue, 19 Mar 2013 10:19:39 +0200
From: "Michael S. Tsirkin"
To: "Michael R. Hines"
Cc: aliguori@us.ibm.com, qemu-devel@nongnu.org, owasserm@redhat.com,
    abali@us.ibm.com, mrhines@us.ibm.com, gokul@us.ibm.com, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 03/10] more verbose documentation of the RDMA transport
Message-ID: <20130319081939.GC11259@redhat.com>
In-Reply-To: <5147A209.80202@linux.vnet.ibm.com>
References: <1363576743-6146-1-git-send-email-mrhines@linux.vnet.ibm.com>
 <1363576743-6146-4-git-send-email-mrhines@linux.vnet.ibm.com>
 <20130318104013.GE5267@redhat.com>
 <5147780C.1080800@linux.vnet.ibm.com>
 <20130318212646.GB20406@redhat.com>
 <5147A209.80202@linux.vnet.ibm.com>

On Mon, Mar 18, 2013 at 07:23:53PM -0400, Michael R. Hines wrote:
> On 03/18/2013 05:26 PM, Michael S. Tsirkin wrote:
> >
> > Probably, but I haven't mentioned ballooning at all.
> >
> > memory overcommit != ballooning
>
> Sure. Setting ballooning aside for the moment, let's just consider
> regular (unused) virtual memory.
>
> In that case, what's wrong with the destination mapping and pinning
> all the memory if it is not being ballooned?
>
> If the guest touches all the memory during normal operation before
> migration begins (which would be the common case), then overcommit
> is irrelevant, no?

We have ways (e.g. cgroups) to limit what a VM can do. If it tries to
use more RAM than we allow, it will swap, still making progress, just
more slowly. OTOH it looks like pinning more memory than the cgroup
limit allows just gets stuck forever (probably a bug; it should fail
instead, but that would not help your protocol either, since it needs
everything pinned at all times).

There are also per-task resource limits (e.g. RLIMIT_MEMLOCK). If you
exceed them, registration will fail, so that is no good either.

I just don't see why you do registration in chunks on the source but
not on the destination.

-- 
MST
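
[A minimal C sketch of the limits discussed above, assuming the standard
libibverbs call ibv_reg_mr() and POSIX rlimits. The chunk size and the
helper name are illustrative only and are not taken from the patch
series; the point is that pinning is bounded by RLIMIT_MEMLOCK (and by
cgroup limits where they apply), so registering all of guest RAM at once
can fail outright, while chunk-by-chunk registration at least fails
incrementally and lets the caller back off.]

/*
 * Try to pin guest RAM with the RDMA stack in fixed-size chunks.
 * Registration is charged against RLIMIT_MEMLOCK; once the limit is
 * hit, ibv_reg_mr() fails (typically EPERM or ENOMEM) and we stop.
 */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/resource.h>
#include <infiniband/verbs.h>

#define CHUNK_SIZE (1UL << 20)   /* 1 MB chunks, illustrative value */

/* Pin [ram, ram + ram_size) chunk by chunk; stop at the first failure.
 * 'mrs' must have room for ram_size / CHUNK_SIZE + 1 entries. */
static size_t register_ram_chunked(struct ibv_pd *pd, void *ram,
                                   size_t ram_size, struct ibv_mr **mrs)
{
    struct rlimit rl;
    size_t off, nr = 0;

    /* Warn up front if pinning everything cannot possibly succeed. */
    if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0 &&
        rl.rlim_cur != RLIM_INFINITY && rl.rlim_cur < ram_size) {
        fprintf(stderr,
                "warning: RLIMIT_MEMLOCK (%llu) < RAM size (%zu); "
                "pinning all of RAM will fail\n",
                (unsigned long long)rl.rlim_cur, ram_size);
    }

    for (off = 0; off < ram_size; off += CHUNK_SIZE) {
        size_t len = ram_size - off < CHUNK_SIZE ? ram_size - off
                                                 : CHUNK_SIZE;

        mrs[nr] = ibv_reg_mr(pd, (char *)ram + off, len,
                             IBV_ACCESS_LOCAL_WRITE |
                             IBV_ACCESS_REMOTE_WRITE);
        if (!mrs[nr]) {
            fprintf(stderr, "ibv_reg_mr failed at offset %zu: %s\n",
                    off, strerror(errno));
            break;
        }
        nr++;
    }
    return nr;   /* number of chunks actually pinned */
}

[With chunked registration the source can stay under the limit by
deregistering chunks it no longer needs; pinning all of RAM up front on
the destination has no such escape hatch, which is the asymmetry the
message above is questioning.]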