From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andi Kleen
Subject: Re: [PATCH 0/3] KVM-userspace: add NUMA support for guests
Date: Sun, 30 Nov 2008 16:41:45 +0100
Message-ID: <20081130154145.GU6703@one.firstfloor.org>
References: <492F1DD9.8030901@amd.com> <87fxlcxo62.fsf@basil.nowhere.org> <49318D57.6040601@redhat.com> <20081129201032.GT6703@one.firstfloor.org> <4931A77F.6050505@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Andi Kleen, Andre Przywara, kvm@vger.kernel.org
To: Avi Kivity
Content-Disposition: inline
In-Reply-To: <4931A77F.6050505@redhat.com>

> I don't think the first one works without the second. Calling getcpu()
> on startup is meaningless since the initial placement doesn't take the

Who said anything about startup? The idea behind getcpu() is to call it
every time you allocate something.

> >Anyways it's not ideal either, but in my mind would be all preferable
> >to default CPU pinning.
>
> I agree we need something dynamic, and that we need to tie cpu affinity
> and memory affinity together.
>
> This could happen completely in the kernel (not an easy task), or by

There were experimental patches for tying memory migration to cpu
migration some time ago from Lee S.

> having a second-level scheduler in userspace polling for cpu usage and
> rebalancing processes across numa nodes. Given that with virtualization
> you have a few long lived processes, this does not seem too difficult.

I think I would prefer to fix that in the kernel. User space will never
have the full picture.

-Andi

-- 
ak@linux.intel.com