From: Rik van Riel
To: Srikar Dronamraju
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, mingo@kernel.org, mgorman@suse.de
Date: Mon, 22 Jun 2015 18:28:55 -0400
Subject: Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting
Message-ID: <55888C27.7010100@redhat.com>
In-Reply-To: <20150622161322.GA32412@linux.vnet.ibm.com>

On 06/22/2015 12:13 PM, Srikar Dronamraju wrote:
>> + * migrating the task to where it really belongs.
>> + * The exception is a task that belongs to a large numa_group, which
>> + * spans multiple NUMA nodes. If that task migrates into one of the
>> + * workload's active nodes, remember that node as the task's
>> + * numa_preferred_nid, so the workload can settle down.
>> 	 */
>> 	if (p->numa_group) {
>> 		if (env.best_cpu == -1)
>> @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
>> 			nid = env.dst_nid;
>> 
>> 		if (node_isset(nid, p->numa_group->active_nodes))
>> -			sched_setnuma(p, env.dst_nid);
>> +			sched_setnuma(p, nid);
>> 	}
>> 
>> 	/* No better CPU than the current one was found. */
>> 
> 
> When I refer to the modified Rik's patch, I mean removing the
> node_isset() check before the sched_setnuma().
> With that change, we somewhat reduce the numa02 and 1 JVM per System
> regression while getting numbers as good as Rik's patch with 2 JVM and
> 4 JVM per System.
> 
> The idea behind removing the node_isset check is:
> node_isset is mostly used to track memory movement to nodes where CPUs
> are running, and not vice versa. This is as per the comment in
> update_numa_active_node_mask. There could be a situation where a task's
> memory is all in a node, and the node has the capacity to accommodate
> it, but no tasks associated with the task have run enough on that node.
> In such a case, we shouldn't be ruling out migrating the task to that
> node.

That is a good point. However, if overriding the preferred_nid that
task_numa_placement identified is a good idea in task_numa_migrate,
would it also be a good idea for tasks that are NOT part of a numa
group?

What are the consequences of never setting preferred_nid from
task_numa_migrate? (We would try to migrate the task to a better node
more frequently.)

What are the consequences of always setting preferred_nid from
task_numa_migrate? (We would only try migrating the task once, and it
could get stuck in a sub-optimal location.)

The patch seems to work, but I do not understand why, and I would like
to hear your ideas on why you think it works. I am really not looking
forward to maintaining code that nobody understands...

-- 
All rights reversed