* [RFC PATCH 0/4] pseudo-interleaving NUMA placement
From: riel @ 2013-11-26 22:03 UTC (permalink / raw)
  To: linux-mm; +Cc: linux-kernel, mgorman, chegu_vinod, peterz

This patch set attempts to implement a pseudo-interleaving
policy for workloads that do not fit in one NUMA node.

For each NUMA group, we track the NUMA nodes on which the
workload is actively running, and try to concentrate the
memory on those NUMA nodes.
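
To make that concrete, below is a minimal userspace sketch of the
idea behind patches 2-4. This is not the kernel code; all names
(numa_group, faults_from, update_active_nodes, NR_NODES,
ACTIVE_FRACTION) are made up for illustration. A node counts as
active when it triggers a large enough share of the group's NUMA
hinting faults, relative to the busiest node:

#include <stdio.h>

/* Illustrative constants, not the kernel's. */
#define NR_NODES	8
/*
 * A node is active if it triggers more than 1/3 of the
 * faults seen by the busiest node in the group.
 */
#define ACTIVE_FRACTION	3

struct numa_group {
	/* NUMA hinting faults triggered from each node */
	unsigned long faults_from[NR_NODES];
	/* bitmask of nodes the workload actively runs on */
	unsigned long active_nodes;
};

static void update_active_nodes(struct numa_group *ng)
{
	unsigned long max_faults = 0;
	int nid;

	for (nid = 0; nid < NR_NODES; nid++)
		if (ng->faults_from[nid] > max_faults)
			max_faults = ng->faults_from[nid];

	ng->active_nodes = 0;
	for (nid = 0; nid < NR_NODES; nid++)
		if (ng->faults_from[nid] * ACTIVE_FRACTION > max_faults)
			ng->active_nodes |= 1UL << nid;
}

int main(void)
{
	/* Faults from nodes 0-3; nodes 0 and 1 dominate. */
	struct numa_group ng = {
		.faults_from = { 900, 800, 40, 0 },
	};

	update_active_nodes(&ng);
	printf("active_nodes: 0x%lx\n", ng.active_nodes); /* 0x3 */
	return 0;
}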

Unfortunately, the scheduler appears to move tasks around
quite a bit, so nodes get dropped from the "active nodes"
mask and re-added a little later, which causes excessive
memory migration.
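
That churn is costly because the migration decision in patch 4/4
keys off the mask: a page on a node that has just fallen out of
active_nodes suddenly looks misplaced and gets migrated, only for
the node to become active again shortly after. Another userspace
sketch, with the same caveat that all names are illustrative:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a numa_group's active node mask. */
static unsigned long active_nodes = 0x3;	/* nodes 0 and 1 */

/* Migrate a faulting page only if its node is not active. */
static bool page_misplaced(int page_nid)
{
	return !(active_nodes & (1UL << page_nid));
}

int main(void)
{
	printf("node 1 misplaced? %d\n", page_misplaced(1)); /* 0 */

	/*
	 * The scheduler moves tasks; node 1 drops out of the mask,
	 * so every page on node 1 now looks misplaced...
	 */
	active_nodes = 0x1;
	printf("node 1 misplaced? %d\n", page_misplaced(1)); /* 1 */

	/* ...then node 1 is re-added, after pages already moved. */
	active_nodes = 0x3;
	printf("node 1 misplaced? %d\n", page_misplaced(1)); /* 0 */
	return 0;
}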

I am not sure how to solve that. Hopefully somebody will
have an idea :)

Thread overview: 5 messages
2013-11-26 22:03 [RFC PATCH 0/4] pseudo-interleaving NUMA placement riel
2013-11-26 22:03 ` [RFC PATCH 1/4] remove p->numa_migrate_deferred riel
2013-11-26 22:03 ` [RFC PATCH 2/4] track from which nodes NUMA faults are triggered riel
2013-11-26 22:03 ` [RFC PATCH 3/4] build per numa_group active node mask from faults_from statistics riel
2013-11-26 22:03 ` [RFC PATCH 4/4] use active_nodes nodemask to decide on numa migrations riel
