From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from psmtp.com (na3sys010amx113.postini.com [74.125.245.113])
	by kanga.kvack.org (Postfix) with SMTP id A368B6B005A
	for ; Sat, 14 Jul 2012 12:21:46 -0400 (EDT)
Message-ID: <50019C5E.8020508@redhat.com>
Date: Sat, 14 Jul 2012 12:20:46 -0400
From: Rik van Riel
MIME-Version: 1.0
Subject: Re: [RFC][PATCH 14/26] sched, numa: Numa balancer
References: <20120316144028.036474157@chello.nl>
	<20120316144241.012558280@chello.nl>
	<4FFF4987.4050205@redhat.com> <5000347E.1050301@hp.com>
In-Reply-To: <5000347E.1050301@hp.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
List-ID:
To: Don Morris
Cc: Peter Zijlstra, Linus Torvalds, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Paul Turner, Suresh Siddha, Mike Galbraith,
	"Paul E. McKenney", Lai Jiangshan, Dan Smith, Bharata B Rao,
	Lee Schermerhorn, Andrea Arcangeli, Johannes Weiner,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org

On 07/13/2012 10:45 AM, Don Morris wrote:
>> IIRC the test consisted of a 16GB NUMA system with two 8GB nodes.
>> It was running 3 KVM guests: two guests of 3GB memory each, and
>> one guest of 6GB.
>
> How many cpus per guest (host threads) and how many physical/logical
> cpus per node on the host? Any comparisons with a situation where
> the memory would fit within nodes but the scheduling load would
> be too high?

IIRC this particular test was constructed so that guests A and B
fit in one NUMA node, with guest C in the other NUMA node.

With schednuma, guest A ended up on one NUMA node, guest B on the
other, and guest C was spread between both nodes.

Only migrating when there is plenty of free space available means
you can end up not doing the right thing when running a few large
workloads on the system.

-- 
All rights reversed
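
P.S. To make that last point concrete, here is a toy sketch in C (purely
illustrative, not schednuma code: the place() helper, the round-robin
starting nodes and the arrival order are made up; only the node and
guest sizes come from the test above). Once the load balancer has put
the two 3GB guests on different nodes, a rule that only migrates onto a
node that already has enough free memory can never consolidate the 6GB
guest, because creating that free memory would require moving one of
the small guests first:

#include <stdio.h>

#define NODES     2
#define NODE_SIZE 8			/* GB per NUMA node */

static int node_free[NODES] = { NODE_SIZE, NODE_SIZE };

/* Try to fit a guest entirely on one node; otherwise it gets spread. */
static void place(const char *name, int size, int preferred)
{
	int i;

	for (i = 0; i < NODES; i++) {
		int n = (preferred + i) % NODES;

		if (node_free[n] >= size) {
			node_free[n] -= size;
			printf("guest %s (%dGB) -> node %d\n", name, size, n);
			return;
		}
	}
	printf("guest %s (%dGB) -> spread across both nodes\n", name, size);
}

int main(void)
{
	/* Load balancing lands the two small guests on different nodes... */
	place("A", 3, 0);	/* node 0 left with 5GB free */
	place("B", 3, 1);	/* node 1 left with 5GB free */

	/*
	 * ...so neither node has 6GB free for the big guest, and a
	 * free-space-only migration rule never moves A or B out of
	 * the way to create that space.
	 */
	place("C", 6, 0);
	return 0;
}

The "right thing" here would be to move B next to A first (node 0 still
has 5GB free), which frees up a whole node for C; that is exactly the
migration a free-space check on C alone never triggers.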