From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>,
Andrea Arcangeli <aarcange@redhat.com>,
Ingo Molnar <mingo@kernel.org>
Cc: Rik van Riel <riel@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Hugh Dickins <hughd@google.com>,
Thomas Gleixner <tglx@linutronix.de>,
Paul Turner <pjt@google.com>,
Lee Schermerhorn <Lee.Schermerhorn@hp.com>,
Alex Shi <lkml.alex@gmail.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 46/46] Simple CPU follow
Date: Wed, 21 Nov 2012 10:21:52 +0000 [thread overview]
Message-ID: <1353493312-8069-47-git-send-email-mgorman@suse.de> (raw)
In-Reply-To: <1353493312-8069-1-git-send-email-mgorman@suse.de>
Rather than taking remote runqueue locks to compare fault statistics
against whatever task happens to be running on each candidate CPU,
simply weight each node by the sum of this task's per-task and per-mm
NUMA fault statistics and set the task's home node to the node with
the highest combined weight.
---
kernel/sched/fair.c | 112 +++++++--------------------------------------------
1 file changed, 15 insertions(+), 97 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5cc5b60..fd53f17 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -877,118 +877,36 @@ static inline unsigned long balancenuma_mm_weight(struct task_struct *p,
*/
static void task_numa_find_placement(struct task_struct *p)
{
- struct cpumask *allowed = tsk_cpus_allowed(p);
- int this_cpu = smp_processor_id();
int this_nid = numa_node_id();
long p_task_weight, p_mm_weight;
- long weight_diff_max = 0;
- struct task_struct *selected_task = NULL;
+ long max_weight = 0;
int selected_nid = -1;
int nid;
p_task_weight = balancenuma_task_weight(p, this_nid);
p_mm_weight = balancenuma_mm_weight(p, this_nid);
- /* Examine a task on every other node */
+ /* Check if this task should run on another node */
for_each_online_node(nid) {
- int cpu;
- for_each_cpu_and(cpu, cpumask_of_node(nid), allowed) {
- struct rq *rq;
- struct mm_struct *other_mm;
- struct task_struct *other_task;
- long this_weight, other_weight, p_weight;
- long other_diff, this_diff;
-
- if (!cpu_online(cpu))
- continue;
-
- /* Idle CPU, consider running this task on that node */
- if (idle_cpu(cpu)) {
- this_weight = balancenuma_task_weight(p, nid);
- other_weight = 0;
- other_task = NULL;
- p_weight = p_task_weight;
- goto compare_other;
- }
-
- /* Racy check if a task is running on the other rq */
- rq = cpu_rq(cpu);
- other_mm = rq->curr->mm;
- if (!other_mm || !other_mm->mm_balancenuma)
- continue;
-
- /* Effectively pin the other task to get fault stats */
- raw_spin_lock_irq(&rq->lock);
- other_task = rq->curr;
- other_mm = other_task->mm;
-
- /* Ensure the other task has usable stats */
- if (!other_task->task_balancenuma ||
- !other_task->task_balancenuma->task_numa_fault_tot ||
- !other_mm ||
- !other_mm->mm_balancenuma ||
- !other_mm->mm_balancenuma->mm_numa_fault_tot) {
- raw_spin_unlock_irq(&rq->lock);
- continue;
- }
-
- /*
- * Read the fault statistics. If the remote task is a
- * thread in the process then use the task statistics.
- * Otherwise use the per-mm statistics.
- */
- if (other_mm == p->mm) {
- this_weight = balancenuma_task_weight(p, nid);
- other_weight = balancenuma_task_weight(other_task, nid);
- p_weight = p_task_weight;
- } else {
- this_weight = balancenuma_mm_weight(p, nid);
- other_weight = balancenuma_mm_weight(other_task, nid);
- p_weight = p_mm_weight;
- }
-
- raw_spin_unlock_irq(&rq->lock);
-
-compare_other:
- /*
- * other_diff: How much does the current task perfer to
- * run on the remote node thn the task that is
- * currently running there?
- */
- other_diff = this_weight - other_weight;
+ unsigned long nid_weight;
- /*
- * this_diff: How much does the currrent task prefer to
- * run on the remote NUMA node compared to the current
- * node?
- */
- this_diff = this_weight - p_weight;
-
- /*
- * Would nid reduce the overall cross-node NUMA faults?
- */
- if (other_diff > 0 && this_diff > 0) {
- long weight_diff = other_diff + this_diff;
-
- /* Remember the best candidate. */
- if (weight_diff > weight_diff_max) {
- weight_diff_max = weight_diff;
- selected_nid = nid;
- selected_task = other_task;
- }
- }
+ /*
+ * Weight each candidate node by the sum of this task's
+ * per-task and per-mm NUMA fault statistics and prefer
+ * the node with the highest combined weight.
+ */
+ nid_weight = balancenuma_task_weight(p, nid) +
+ balancenuma_mm_weight(p, nid);
- /*
- * Examine just one task per node. Examing all tasks
- * disrupts the system excessively
- */
- break;
+ /* Remember the best candidate. */
+ if (nid_weight > max_weight) {
+ max_weight = nid_weight;
+ selected_nid = nid;
}
}
- if (selected_nid != -1 && selected_nid != this_nid) {
+ if (selected_nid != -1 && selected_nid != this_nid)
sched_setnode(p, selected_nid);
- }
}
static void task_numa_placement(struct task_struct *p)
--
1.7.9.2