From: Andre Przywara
Subject: Re: [vNUMA v2][PATCH 2/8] public interface
Date: Tue, 3 Aug 2010 23:35:02 +0200
Message-ID: <4C588B86.7080201@amd.com>
To: Keir Fraser
Cc: "xen-devel@lists.xensource.com", Dulloor
List-Id: xen-devel@lists.xenproject.org

Keir Fraser wrote:
> On 03/08/2010 16:43, "Dulloor" wrote:
>
>>> I would expect the guest would see nodes 0 to nr_vnodes-1, and the
>>> mnode_id could go away.
>> mnode_id maps the vnode to a particular physical node. This will be
>> used by the balloon driver in the VMs when the structure is passed as
>> a NUMA enlightenment to PVs and PV-on-HVMs.
>> I have a patch ready for that (once we are done with this series).
>
> So what happens when the guest is migrated to another system with
> different physical node ids? Is that never to be supported? I'm not
> sure why you wouldn't hide the vnode-to-mnode translation in the
> hypervisor.

And what if the node assignment changes at the guest's runtime to
satisfy load balancing? I think we should have the opportunity to
change the assignment, although this could be costly when it involves
copying guest memory to another physical location. A major
virtualization product ;-) solves this with a "hot pages first, the
rest in the background" algorithm. I see that this is definitely a
future extension, but we shouldn't block that path at this early
stage.

Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12
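[Editor's illustration] The suggestion above — hide the vnode-to-mnode translation in the hypervisor so that migration or runtime load balancing only touches one table — can be sketched roughly as follows. All names here are hypothetical, not the actual Xen interface:

```c
#include <assert.h>

/* Hypothetical per-domain table kept inside the hypervisor.
 * The guest only ever sees vnode ids 0..nr_vnodes-1. */
#define MAX_VNODES 8

struct vnuma_map {
    unsigned int nr_vnodes;
    unsigned int vnode_to_mnode[MAX_VNODES]; /* vnode -> physical node */
};

/* Resolve a guest-visible vnode to the current physical node. */
static unsigned int vnode_to_mnode(const struct vnuma_map *m,
                                   unsigned int vnode)
{
    return (vnode < m->nr_vnodes) ? m->vnode_to_mnode[vnode] : 0;
}

/* On migration or load balancing, only this table is updated; the
 * guest's view (vnodes 0..nr_vnodes-1) remains stable, so no
 * re-enlightenment of the guest is needed. */
static void remap_vnode(struct vnuma_map *m, unsigned int vnode,
                        unsigned int mnode)
{
    if (vnode < m->nr_vnodes)
        m->vnode_to_mnode[vnode] = mnode;
}
```

The point of the indirection is that a guest migrated to a host with different physical node ids keeps its stable vnode numbering, and a runtime rebalance reduces to a table update plus the (potentially costly) memory copy.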