From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754679AbbFSRwj (ORCPT <rfc822;linux-kernel@vger.kernel.org>);
	Fri, 19 Jun 2015 13:52:39 -0400
Received: from mx1.redhat.com ([209.132.183.28]:56156 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753436AbbFSRwd (ORCPT <rfc822;linux-kernel@vger.kernel.org>);
	Fri, 19 Jun 2015 13:52:33 -0400
Message-ID: <558456DB.3040108@redhat.com>
Date: Fri, 19 Jun 2015 13:52:27 -0400
From: Rik van Riel <riel@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.4.0
MIME-Version: 1.0
To: Srikar Dronamraju
CC: linux-kernel@vger.kernel.org, peterz@infradead.org, mingo@kernel.org,
	mgorman@suse.de
Subject: Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting
References: <20150616155450.62ec234b@cuia.usersys.redhat.com>
	<20150618155547.GA16576@linux.vnet.ibm.com> <5582EC99.8040005@redhat.com>
	<20150618164140.GB16576@linux.vnet.ibm.com> <5582F944.6080204@redhat.com>
	<20150619171633.GC16576@linux.vnet.ibm.com>
In-Reply-To: <20150619171633.GC16576@linux.vnet.ibm.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/19/2015 01:16 PM, Srikar Dronamraju wrote:
>>
>> OK, so we are looking at two multi-threaded processes
>> on a 4 node system, and waiting for them to converge?
>>
>> It may make sense to add my patch in with your patch
>> 1/4 from last week, as well as the correct part of
>> your patch 4/4, and see how they all work together.
>>
>
> Tested specjbb and autonumabenchmark on 4 kernels.
>
> Plain 4.1.0-rc7-tip (i)
> tip + only Rik's patch (ii)
> tip + Rik's ++ (iii)
> tip + Srikar's ++ (iv)
>
> 5 iterations of Specjbb on 4 node, 24 core powerpc machine.
> Ran 1 instance per system.

Would you happen to have 2 instance and 4 instance SPECjbb numbers, too?
The single instance numbers seem to be within the margin of error, but
I would expect multi-instance numbers to show more dramatic changes,
due to changes in how workloads converge...

Those behave very differently from single instance, especially with
the "always set the preferred_nid, even if we moved the task to a node
we do NOT prefer" patch...

It would be good to understand the behaviour of these patches under
more circumstances.

-- 
All rights reversed