From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932136AbbFWBSs (ORCPT );
	Mon, 22 Jun 2015 21:18:48 -0400
Received: from mx1.redhat.com ([209.132.183.28]:36460 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751787AbbFWBSj (ORCPT );
	Mon, 22 Jun 2015 21:18:39 -0400
Message-ID: <5588B3E9.2000906@redhat.com>
Date: Mon, 22 Jun 2015 21:18:33 -0400
From: Rik van Riel
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.4.0
MIME-Version: 1.0
To: Srikar Dronamraju
CC: Ingo Molnar, Peter Zijlstra, linux-kernel@vger.kernel.org, Mel Gorman
Subject: Re: [PATCH v2 2/4] sched:Consider imbalance_pct when comparing loads in numa_has_capacity
References: <1434455762-30857-1-git-send-email-srikar@linux.vnet.ibm.com>
 <1434455762-30857-3-git-send-email-srikar@linux.vnet.ibm.com>
 <55803511.1060601@redhat.com>
 <20150622162958.GB32412@linux.vnet.ibm.com>
In-Reply-To: <20150622162958.GB32412@linux.vnet.ibm.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/22/2015 12:29 PM, Srikar Dronamraju wrote:
> * Rik van Riel [2015-06-16 10:39:13]:
>
>> On 06/16/2015 07:56 AM, Srikar Dronamraju wrote:
>>> This is consistent with all other load balancing instances where we
>>> absorb unfairness up to env->imbalance_pct. Absorbing unfairness up to
>>> env->imbalance_pct allows us to pull and retain tasks on their
>>> preferred nodes.
>>>
>>> Signed-off-by: Srikar Dronamraju
>>
>> How does this work with other workloads, eg.
>> single instance SPECjbb2005, or two SPECjbb2005
>> instances on a four node system?
>>
>> Is the load still balanced evenly between nodes
>> with this patch?
>>
>
> Yes, I have looked at mpstat logs while running SPECjbb2005 with 1 JVM
> per system, 2 JVMs per system and 4 JVMs per system, and observed that
> the load spreading was similar with and without this patch.
>
> I have also used htop to visualize 0.5X cpu stress workloads (i.e. 48
> threads on a 96 cpu system) and seen that the spread is similar before
> and after the patch.
>
> Please let me know if there are any better ways to observe the
> spread. In a slightly or less loaded system, the chance of migrating
> threads to their home node by way of calling migrate_task_to and
> migrate_swap might be curtailed without this patch, i.e. 2 processes
> each having N/2 threads may converge more slowly without this change.

Awesome.  Feel free to put my Acked-by: on this patch.

Acked-by: Rik van Riel

-- 
All rights reversed
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at  http://www.tux.org/lkml/