From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: [PATCH 0/3] KVM-userspace: add NUMA support for guests
Date: Sun, 30 Nov 2008 18:38:14 +0200
Message-ID: <4932C176.7020102@redhat.com>
References: <492F1DD9.8030901@amd.com> <87fxlcxo62.fsf@basil.nowhere.org> <49318D57.6040601@redhat.com> <20081129201032.GT6703@one.firstfloor.org> <4931A77F.6050505@redhat.com> <20081130154145.GU6703@one.firstfloor.org> <4932B37A.3070305@redhat.com> <20081130160539.GV6703@one.firstfloor.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Andre Przywara , kvm@vger.kernel.org
To: Andi Kleen
Return-path: 
Received: from mx2.redhat.com ([66.187.237.31]:36046 "EHLO mx2.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751370AbYK3QiU (ORCPT ); Sun, 30 Nov 2008 11:38:20 -0500
In-Reply-To: <20081130160539.GV6703@one.firstfloor.org>
Sender: kvm-owner@vger.kernel.org
List-ID: 

Andi Kleen wrote:
>> Please explain.  When would you call getcpu() and what would you do at
>> that time?
>
> When the guest allocates on the node of its current CPU, get memory
> from the node pool getcpu() tells you it is running on.  More tricky
> is handling the guest explicitly accessing another node for NUMA
> policy purposes, but in this case you can access the cached getcpu
> information of the other vcpus.

The guest allocates when it touches a page for the first time.  This
means very little, since all of memory may be touched during guest
bootup or shortly afterwards.  Even if not, it is still a one-time
operation, and any choices we make based on it will last the lifetime
of the guest.

> This is roughly equivalent to getting a fresh new demand-fault page,
> but doesn't require unmap/free/remap.

Lost again, sorry.

> The tricky bit is probably figuring out what is a fresh new page for
> the guest.  That might need some paravirtual help.

The guest typically recycles its own pages (the exception is
ballooning).
Also, it doesn't make sense to manage this on a per-page basis, as the
guest won't do that.  We need to mimic real hardware.

The static case is simple.  We allocate memory from a few host nodes
(for small guests, only one) and establish a guest_node -> host_node
mapping.  vcpus on guest node X are constrained to the host node given
by this mapping.

The dynamic case is really complicated.  We can allow vcpus to wander
to other cpus on cpu overcommit, but we need to pull them back soonish,
or alternatively migrate the entire node, taking into account the cost
of the migration, cpu availability on the target node, and memory
availability on the target node.  Since the cost is so huge, this needs
to be done on a very coarse scale.  I don't see this happening in the
kernel anytime soon.

-- 
error compiling committee.c: too many arguments to function