From: Avi Kivity
Subject: Re: [PATCH 0/3] KVM-userspace: add NUMA support for guests
Date: Sat, 29 Nov 2008 20:43:35 +0200
Message-ID: <49318D57.6040601@redhat.com>
References: <492F1DD9.8030901@amd.com> <87fxlcxo62.fsf@basil.nowhere.org>
In-Reply-To: <87fxlcxo62.fsf@basil.nowhere.org>
To: Andi Kleen
Cc: Andre Przywara, kvm@vger.kernel.org

Andi Kleen wrote:
> It depends -- it's not necessarily an improvement. E.g. if the pinning
> leads to some CPUs sitting idle while others are oversubscribed, you
> typically lose more than you win. In my experience, pinning by default
> is generally a bad idea.
>
> Alternative, more flexible strategies:
>
> - Do a mapping from CPU to node at runtime by using getcpu()
> - Migrate to complete nodes using migrate_pages when qemu detects
>   node migration on the host.
>

Wouldn't that cause lots of migrations? Migrating a 1GB guest can take a
huge amount of CPU time (tens or even hundreds of milliseconds?) compared
with very high-frequency activity like the scheduler.

-- 
I have a truly marvellous patch that fixes the bug which this signature is too narrow to contain.
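For concreteness, a minimal sketch of the runtime CPU-to-node mapping Andi suggests. This assumes the Linux sysfs topology layout (`/sys/devices/system/node/nodeN/cpuM`) and glibc's `sched_getcpu()` wrapper around the getcpu(2) syscall; the function names are illustrative, not from the patch series:

```python
import ctypes
import glob
import os

def cpu_to_node_map():
    """Map CPU id -> NUMA node id by scanning the Linux sysfs topology.

    Returns an empty dict on systems that do not expose NUMA nodes.
    """
    mapping = {}
    for node_dir in glob.glob("/sys/devices/system/node/node[0-9]*"):
        node = int(os.path.basename(node_dir)[len("node"):])
        # Each node directory contains a cpuN entry per CPU on that node.
        for cpu_dir in glob.glob(os.path.join(node_dir, "cpu[0-9]*")):
            mapping[int(os.path.basename(cpu_dir)[len("cpu"):])] = node
    return mapping

def current_node():
    """NUMA node the calling thread is running on *right now*.

    The scheduler may move the thread at any moment, so the answer is
    only a hint -- which is exactly why this is queried at runtime
    rather than baked in by pinning.
    """
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    cpu = libc.sched_getcpu()  # glibc wrapper around getcpu(2)
    return cpu_to_node_map().get(cpu)
```

A guest could consult `current_node()` periodically and only treat a *stable* change as a reason to act, rather than reacting to every scheduler move.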
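Avi's cost estimate can be sanity-checked with back-of-envelope arithmetic. The ~10 GiB/s copy bandwidth below is an assumption for illustration (not a figure from the thread); real numbers depend on the memory controller and interconnect:

```python
# Back-of-envelope: time to copy a guest's RAM to another node.
guest_ram_bytes = 1 << 30             # 1 GiB guest, as in Avi's example
copy_bw_bytes_per_s = 10 * (1 << 30)  # assumed ~10 GiB/s sustained copy bandwidth
migration_ms = guest_ram_bytes * 1000 / copy_bw_bytes_per_s
print(f"full migration ~{migration_ms:.0f} ms")  # ~100 ms, vs. millisecond-scale scheduler activity
```

At these assumed rates a single whole-guest migration costs on the order of 100 ms, i.e. orders of magnitude more than one scheduling decision, which is why triggering it on every host-side move would be expensive.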