linux-mm.kvack.org archive mirror
* [PATCH 02/48] mm: Use raw_cpu ops for determining current NUMA node
       [not found] <20140214201841.826179349@linux.com>
@ 2014-02-14 20:18 ` Christoph Lameter
  2014-02-14 20:18 ` [PATCH 06/48] mm: Replace __get_cpu_var uses with this_cpu_ptr Christoph Lameter
  1 sibling, 0 replies; 2+ messages in thread
From: Christoph Lameter @ 2014-02-14 20:18 UTC (permalink / raw)
  To: Tejun Heo
  Cc: akpm, rostedt, linux-kernel, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, linux-mm, Alex Shi

[-- Attachment #1: preempt_fix_numa_node --]
[-- Type: text/plain, Size: 5151 bytes --]

[Patch depends on another patch in this series that introduces raw_cpu_ops]

With the preempt checking logic for __this_cpu operations we will get
false positives from locations in the code that use numa_node_id().

Before the __this_cpu ops were introduced there were no preemption
checks present either; raw_smp_processor_id() was used. See
http://www.spinics.net/lists/linux-numa/msg00641.html

Therefore we need to use raw_cpu_read here to avoid the false positives.

Note that this issue has been discussed in prior years. If the process
migrates to another node after retrieving the current NUMA node, that is
acceptable, since most uses of numa_node_id() etc. are for optimization
and not for correctness.

There were suggestions to implement a raw_numa_node_id() so that the
preempt checks could be applied to numa_node_id() as well. But I think we
had better defer that to another patch, since it would mean auditing how
numa_node_id() is used throughout the kernel, which would increase the
scope of this patchset significantly. After all, preemption was never
checked before when numa_node_id() was used.
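
To make the benign-race argument concrete, here is a minimal sketch
(alloc_local_hint() is a hypothetical helper, not part of this patch):
the node id read with preemption enabled is only a placement hint, so a
migration between the read and the allocation costs locality, not
correctness.

/*
 * Illustrative sketch only.  numa_node_id() may run with preemption
 * enabled: if the task migrates to another node right after the read,
 * nid is merely a stale placement hint and the allocation still
 * succeeds, just possibly on a remote node.
 */
static struct page *alloc_local_hint(gfp_t gfp, unsigned int order)
{
	int nid = numa_node_id();	/* hint only; may be stale */

	return alloc_pages_node(nid, gfp, order);
}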

Some sample traces:

__this_cpu_read operation in preemptible [00000000] code: login/1456
caller is __this_cpu_preempt_check+0x2b/0x2d
CPU: 0 PID: 1456 Comm: login Not tainted 3.12.0-rc4-cl-00062-g2fe80d3-dirty #185
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
 000000000000013c ffff88001f31ba58 ffffffff8147cf5e ffff88001f31bfd8
 ffff88001f31ba88 ffffffff8127eea9 0000000000000000 ffff88001f3975c0
 00000000f7707000 ffff88001f3975c0 ffff88001f31bac0 ffffffff8127eeef
Call Trace:
 [<ffffffff8147cf5e>] dump_stack+0x4e/0x82
 [<ffffffff8127eea9>] check_preemption_disabled+0xc5/0xe0
 [<ffffffff8127eeef>] __this_cpu_preempt_check+0x2b/0x2d
 [<ffffffff81030ff5>] ? show_stack+0x3b/0x3d
 [<ffffffff810ebee3>] get_task_policy+0x1d/0x49
 [<ffffffff810ed705>] get_vma_policy+0x14/0x76
 [<ffffffff810ed8ff>] alloc_pages_vma+0x35/0xff
 [<ffffffff810dad97>] handle_mm_fault+0x290/0x73b
 [<ffffffff810503da>] __do_page_fault+0x3fe/0x44d
 [<ffffffff8109b360>] ? trace_hardirqs_on_caller+0x142/0x19e
 [<ffffffff8109b3c9>] ? trace_hardirqs_on+0xd/0xf
 [<ffffffff81278bed>] ? trace_hardirqs_off_thunk+0x3a/0x3c
 [<ffffffff810be97f>] ? find_get_pages_contig+0x18e/0x18e
 [<ffffffff810be97f>] ? find_get_pages_contig+0x18e/0x18e
 [<ffffffff81050451>] do_page_fault+0x9/0xc
 [<ffffffff81483602>] page_fault+0x22/0x30
 [<ffffffff810be97f>] ? find_get_pages_contig+0x18e/0x18e
 [<ffffffff810be97f>] ? find_get_pages_contig+0x18e/0x18e
 [<ffffffff810be4c3>] ? file_read_actor+0x3a/0x15a
 [<ffffffff810be97f>] ? find_get_pages_contig+0x18e/0x18e
 [<ffffffff810bffab>] generic_file_aio_read+0x38e/0x624
 [<ffffffff810f6d69>] do_sync_read+0x54/0x73
 [<ffffffff810f7890>] vfs_read+0x9d/0x12a
 [<ffffffff810f7a59>] SyS_read+0x47/0x7e
 [<ffffffff81484f21>] cstar_dispatch+0x7/0x23


caller is __this_cpu_preempt_check+0x2b/0x2d
CPU: 0 PID: 1456 Comm: login Not tainted 3.12.0-rc4-cl-00062-g2fe80d3-dirty #185
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
 00000000000000e8 ffff88001f31bbf8 ffffffff8147cf5e ffff88001f31bfd8
 ffff88001f31bc28 ffffffff8127eea9 ffffffff823c5c40 00000000000213da
 0000000000000000 0000000000000000 ffff88001f31bc60 ffffffff8127eeef
Call Trace:
 [<ffffffff8147cf5e>] dump_stack+0x4e/0x82
 [<ffffffff8127eea9>] check_preemption_disabled+0xc5/0xe0
 [<ffffffff8127eeef>] __this_cpu_preempt_check+0x2b/0x2d
 [<ffffffff810e006e>] ? install_special_mapping+0x11/0xe4
 [<ffffffff810ec8a8>] alloc_pages_current+0x8f/0xbc
 [<ffffffff810bec6b>] __page_cache_alloc+0xb/0xd
 [<ffffffff810c7e90>] __do_page_cache_readahead+0xf4/0x219
 [<ffffffff810c7e0e>] ? __do_page_cache_readahead+0x72/0x219
 [<ffffffff810c827c>] ra_submit+0x1c/0x20
 [<ffffffff810c850c>] ondemand_readahead+0x28c/0x2b4
 [<ffffffff810c85e9>] page_cache_sync_readahead+0x38/0x3a
 [<ffffffff810bfe7e>] generic_file_aio_read+0x261/0x624
 [<ffffffff810f6d69>] do_sync_read+0x54/0x73
 [<ffffffff810f7890>] vfs_read+0x9d/0x12a
 [<ffffffff810f7a59>] SyS_read+0x47/0x7e
 [<ffffffff81484f21>] cstar_dispatch+0x7/0x23

Cc: linux-mm@kvack.org
Cc: Alex Shi <alex.shi@intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/include/linux/topology.h
===================================================================
--- linux.orig/include/linux/topology.h	2013-12-02 16:07:51.304591590 -0600
+++ linux/include/linux/topology.h	2013-12-02 16:07:51.304591590 -0600
@@ -188,7 +188,7 @@ DECLARE_PER_CPU(int, numa_node);
 /* Returns the number of the current Node. */
 static inline int numa_node_id(void)
 {
-	return __this_cpu_read(numa_node);
+	return raw_cpu_read(numa_node);
 }
 #endif
 
@@ -245,7 +245,7 @@ static inline void set_numa_mem(int node
 /* Returns the number of the nearest Node with memory */
 static inline int numa_mem_id(void)
 {
-	return __this_cpu_read(_numa_mem_);
+	return raw_cpu_read(_numa_mem_);
 }
 #endif
 


* [PATCH 06/48] mm: Replace __get_cpu_var uses with this_cpu_ptr
       [not found] <20140214201841.826179349@linux.com>
  2014-02-14 20:18 ` [PATCH 02/48] mm: Use raw_cpu ops for determining current NUMA node Christoph Lameter
@ 2014-02-14 20:18 ` Christoph Lameter
  1 sibling, 0 replies; 2+ messages in thread
From: Christoph Lameter @ 2014-02-14 20:18 UTC (permalink / raw)
  To: Tejun Heo
  Cc: akpm, rostedt, linux-kernel, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, akpm, linux-mm

[-- Attachment #1: this_mm --]
[-- Type: text/plain, Size: 6450 bytes --]

Replace places where __get_cpu_var() is used for an address calculation
with this_cpu_ptr().
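
For reference, a minimal sketch of the conversion (my_count is a
hypothetical per-cpu variable, not one touched by this patch): both
spellings compute the address of this CPU's instance, and the patch
merely switches these address calculations to the this_cpu_ptr() form.

/*
 * Illustrative sketch only.  Both expressions below evaluate to the
 * address of this CPU's copy of my_count; callers are expected to
 * keep preemption disabled around the access.
 */
static DEFINE_PER_CPU(unsigned long, my_count);

static void bump_my_count(void)
{
	unsigned long *p;

	p = &__get_cpu_var(my_count);	/* old idiom, being removed */
	p = this_cpu_ptr(&my_count);	/* equivalent replacement */
	(*p)++;
}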

Cc: akpm@linux-foundation.org
Cc: linux-mm@kvack.org
Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/lib/radix-tree.c
===================================================================
--- linux.orig/lib/radix-tree.c	2014-02-03 13:41:17.832645984 -0600
+++ linux/lib/radix-tree.c	2014-02-03 13:41:17.822646194 -0600
@@ -221,7 +221,7 @@
 		 * succeed in getting a node here (and never reach
 		 * kmem_cache_alloc)
 		 */
-		rtp = &__get_cpu_var(radix_tree_preloads);
+		rtp = this_cpu_ptr(&radix_tree_preloads);
 		if (rtp->nr) {
 			ret = rtp->nodes[rtp->nr - 1];
 			rtp->nodes[rtp->nr - 1] = NULL;
@@ -277,14 +277,14 @@
 	int ret = -ENOMEM;
 
 	preempt_disable();
-	rtp = &__get_cpu_var(radix_tree_preloads);
+	rtp = this_cpu_ptr(&radix_tree_preloads);
 	while (rtp->nr < ARRAY_SIZE(rtp->nodes)) {
 		preempt_enable();
 		node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
 		if (node == NULL)
 			goto out;
 		preempt_disable();
-		rtp = &__get_cpu_var(radix_tree_preloads);
+		rtp = this_cpu_ptr(&radix_tree_preloads);
 		if (rtp->nr < ARRAY_SIZE(rtp->nodes))
 			rtp->nodes[rtp->nr++] = node;
 		else
Index: linux/mm/memcontrol.c
===================================================================
--- linux.orig/mm/memcontrol.c	2014-02-03 13:41:17.832645984 -0600
+++ linux/mm/memcontrol.c	2014-02-03 13:41:17.822646194 -0600
@@ -2475,7 +2475,7 @@
  */
 static void drain_local_stock(struct work_struct *dummy)
 {
-	struct memcg_stock_pcp *stock = &__get_cpu_var(memcg_stock);
+	struct memcg_stock_pcp *stock = this_cpu_ptr(&memcg_stock);
 	drain_stock(stock);
 	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
 }
Index: linux/mm/memory-failure.c
===================================================================
--- linux.orig/mm/memory-failure.c	2014-02-03 13:41:17.832645984 -0600
+++ linux/mm/memory-failure.c	2014-02-03 13:41:17.822646194 -0600
@@ -1297,7 +1297,7 @@
 	unsigned long proc_flags;
 	int gotten;
 
-	mf_cpu = &__get_cpu_var(memory_failure_cpu);
+	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
 	for (;;) {
 		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
 		gotten = kfifo_get(&mf_cpu->fifo, &entry);
Index: linux/mm/page-writeback.c
===================================================================
--- linux.orig/mm/page-writeback.c	2014-02-03 13:41:17.832645984 -0600
+++ linux/mm/page-writeback.c	2014-02-03 13:41:17.822646194 -0600
@@ -1623,7 +1623,7 @@
 	 * 1000+ tasks, all of them start dirtying pages at exactly the same
 	 * time, hence all honoured too large initial task->nr_dirtied_pause.
 	 */
-	p =  &__get_cpu_var(bdp_ratelimits);
+	p =  this_cpu_ptr(&bdp_ratelimits);
 	if (unlikely(current->nr_dirtied >= ratelimit))
 		*p = 0;
 	else if (unlikely(*p >= ratelimit_pages)) {
@@ -1635,7 +1635,7 @@
 	 * short-lived tasks (eg. gcc invocations in a kernel build) escaping
 	 * the dirty throttling and livelock other long-run dirtiers.
 	 */
-	p = &__get_cpu_var(dirty_throttle_leaks);
+	p = this_cpu_ptr(&dirty_throttle_leaks);
 	if (*p > 0 && current->nr_dirtied < ratelimit) {
 		unsigned long nr_pages_dirtied;
 		nr_pages_dirtied = min(*p, ratelimit - current->nr_dirtied);
Index: linux/mm/swap.c
===================================================================
--- linux.orig/mm/swap.c	2014-02-03 13:41:17.832645984 -0600
+++ linux/mm/swap.c	2014-02-03 13:41:17.822646194 -0600
@@ -441,7 +441,7 @@
 
 		page_cache_get(page);
 		local_irq_save(flags);
-		pvec = &__get_cpu_var(lru_rotate_pvecs);
+		pvec = this_cpu_ptr(&lru_rotate_pvecs);
 		if (!pagevec_add(pvec, page))
 			pagevec_move_tail(pvec);
 		local_irq_restore(flags);
Index: linux/mm/vmalloc.c
===================================================================
--- linux.orig/mm/vmalloc.c	2014-02-03 13:41:17.832645984 -0600
+++ linux/mm/vmalloc.c	2014-02-03 13:41:17.822646194 -0600
@@ -1488,7 +1488,7 @@
 	if (!addr)
 		return;
 	if (unlikely(in_interrupt())) {
-		struct vfree_deferred *p = &__get_cpu_var(vfree_deferred);
+		struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
 		if (llist_add((struct llist_node *)addr, &p->list))
 			schedule_work(&p->wq);
 	} else
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-02-03 13:41:17.832645984 -0600
+++ linux/mm/slub.c	2014-02-03 13:41:17.822646194 -0600
@@ -2190,7 +2190,7 @@
 
 	page = new_slab(s, flags, node);
 	if (page) {
-		c = __this_cpu_ptr(s->cpu_slab);
+		c = raw_cpu_ptr(s->cpu_slab);
 		if (c->page)
 			flush_slab(s, c);
 
@@ -2410,7 +2410,7 @@
 	 * and the retrieval of the tid.
 	 */
 	preempt_disable();
-	c = __this_cpu_ptr(s->cpu_slab);
+	c = this_cpu_ptr(s->cpu_slab);
 
 	/*
 	 * The transaction ids are globally unique per cpu and per operation on
@@ -2666,7 +2666,7 @@
 	 * during the cmpxchg then the free will succeed.
 	 */
 	preempt_disable();
-	c = __this_cpu_ptr(s->cpu_slab);
+	c = this_cpu_ptr(s->cpu_slab);
 
 	tid = c->tid;
 	preempt_enable();
Index: linux/mm/vmstat.c
===================================================================
--- linux.orig/mm/vmstat.c	2014-02-03 13:41:17.832645984 -0600
+++ linux/mm/vmstat.c	2014-02-03 13:41:17.822646194 -0600
@@ -489,7 +489,7 @@
 			continue;
 
 		if (__this_cpu_read(p->pcp.count))
-			drain_zone_pages(zone, __this_cpu_ptr(&p->pcp));
+			drain_zone_pages(zone, this_cpu_ptr(&p->pcp));
 #endif
 	}
 	fold_diff(global_diff);
@@ -1218,7 +1218,7 @@
 static void vmstat_update(struct work_struct *w)
 {
 	refresh_cpu_vm_stats();
-	schedule_delayed_work(&__get_cpu_var(vmstat_work),
+	schedule_delayed_work(this_cpu_ptr(&vmstat_work),
 		round_jiffies_relative(sysctl_stat_interval));
 }
 
Index: linux/mm/zsmalloc.c
===================================================================
--- linux.orig/mm/zsmalloc.c	2014-01-31 09:15:37.674121110 -0600
+++ linux/mm/zsmalloc.c	2014-02-03 13:42:11.281526141 -0600
@@ -1071,7 +1071,7 @@
 	class = &pool->size_class[class_idx];
 	off = obj_idx_to_offset(page, obj_idx, class->size);
 
-	area = &__get_cpu_var(zs_map_area);
+	area = this_cpu_ptr(&zs_map_area);
 	if (off + class->size <= PAGE_SIZE)
 		kunmap_atomic(area->vm_addr);
 	else {

